
PNG was always set up for extension. In particular, it has a clever way of marking ancillary data sections as unimportant, so a decoder knows whether it can safely skip a chunk it doesn't understand or must treat the file as unreadable.
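The mechanism is encoded in the case of the chunk-type letters: a lowercase first letter marks the chunk as ancillary (skippable), an uppercase one as critical. A minimal sketch of a chunk walker that checks this bit:

```python
import struct
import zlib

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def iter_chunks(data: bytes):
    """Yield (chunk_type, chunk_data) pairs from a PNG byte string."""
    assert data.startswith(PNG_SIGNATURE)
    pos = len(PNG_SIGNATURE)
    while pos < len(data):
        length, = struct.unpack(">I", data[pos:pos + 4])
        ctype = data[pos + 4:pos + 8]
        yield ctype, data[pos + 8:pos + 8 + length]
        pos += 12 + length  # 4 (length) + 4 (type) + data + 4 (CRC)

def is_ancillary(chunk_type: bytes) -> bool:
    # Bit 5 of the first type byte is the lowercase bit: set means
    # ancillary (safe to skip), clear means critical (must understand).
    return bool(chunk_type[0] & 0x20)
```

So `is_ancillary(b"tEXt")` is true while `is_ancillary(b"IDAT")` is false, which is exactly the decision a decoder needs when it hits an unknown chunk type.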

I suspect the big thing these days would be to support brotli and zstd.



A problem that often comes up with extensible formats is that whoever comes along and implements them assumes exactly the test cases they came up with, which often means "just the files I have for this project" or "just the output of the one encoder I use".

So there will be formats that allow chunks to appear in any order, and those minimum-viable readers will all break when they encounter a file from a different source, because they hardcoded a read order. The user then ends up needing a "fixer" tool that reorders the file to work around the bad decoder.
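The failure mode can be sketched in Python (the chunk lists here are hypothetical `(type, data)` pairs, not a real decoder): the brittle reader indexes into a fixed position, while the robust one dispatches on chunk type and skips ancillary chunks it doesn't recognize.

```python
# Brittle: assumes the exact chunk order its own encoder emits.
def read_brittle(chunks):
    assert chunks[0][0] == b"IHDR"
    assert chunks[1][0] == b"IDAT"  # breaks if e.g. tEXt comes first
    return chunks[0][1], chunks[1][1]

# Robust: dispatch on chunk type, tolerate any legal ordering,
# and skip ancillary chunks we don't recognize.
def read_robust(chunks):
    header, idat = None, b""
    for ctype, data in chunks:
        if ctype == b"IHDR":
            header = data
        elif ctype == b"IDAT":
            idat += data            # IDAT may be split across chunks
        elif ctype == b"IEND":
            break
        elif ctype[0] & 0x20:       # lowercase first letter: ancillary
            continue                # safe to ignore
        else:
            raise ValueError(f"unknown critical chunk {ctype!r}")
    return header, idat
```

Feed both a file where a `tEXt` chunk precedes the first `IDAT` and only the robust reader survives; that is exactly the "file from a different source" case.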

There were tons of old binary formats that were like this. It can happen with text, but it's less likely to, because a larger proportion of textual formats build over a container like XML or JSON to offload the text parsing, and then they end up with some pre-structured data.


> There were tons of old binary formats that were like this. It can happen with text, but it's less likely to, because a larger proportion of textual formats build over a container like XML or JSON to offload the text parsing, and then they end up with some pre-structured data.

Note that PNG also "build[s] over a container", since it's a descendant of IFF.


Many formats have stuff like that (like cover art in MP3 ID3 tags), but usually they're used for, well, ancillary purposes.

It's dangerous to use this to change the actual primary output of the file (the image), especially in a way that users and editors can't easily detect.


I would say that, at least in the context of extra data that extends the bit depth for HDR, the data could be considered ancillary?

We've been rendering images in SDR forever, and most people don't have HDR-capable hardware or software yet, so I don't see how rendering the image without the HDR data could be considered broken.


This assumes the image is presented in isolation.

I’ve seen countless issues where you place a PNG logo on top of a CSS background:#123456 and expect the colors to match, so the logo blends seamlessly into the page.

On your machine it does, and everything looks beautiful. On the customer’s machine with Internet Explorer the colors don’t match, and the logo has an ugly square around it.


The difference in experience between seeing a black or white background instead of a transparent one, on the one hand, and missing HDR on the other, is pretty big.

95%+ of humans won't even notice HDR being missing. Everyone with eyes will notice a black or white square.


Nobody said anything about black or white. Try googling for PNG color problems and you’ll find thousands of questions, across all kinds of tools and browsers. The CSS color and the PNG color need to match exactly; even a slight difference looks off when the two sit next to each other. The risk of CSS and PNG rendering the same hex code differently increases when you put semi-supported HDR extensions in the mix.

For this particular use case, yes, transparency is more suitable than trying to match.


You could add an "ignorable" zstd-compressed IDAT variant, but that wouldn't give you backwards-compat in any useful way - the zlib-compressed IDAT still has to be present, and unless you fill it full of zeroes or a placeholder image, the overall file is going to be larger.
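The size cost is easy to demonstrate. This sketch uses a made-up ancillary chunk name (`zsDA`) and zlib as a stand-in for the zstd stream, since the point is structural: the old-style IDAT must stay, so the alternative payload is pure overhead for legacy decoders.

```python
import struct
import zlib

def chunk(ctype: bytes, data: bytes) -> bytes:
    """Serialize one PNG chunk: length, type, data, CRC over type+data."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

raw = bytes(range(256)) * 64              # stand-in pixel data
idat = chunk(b"IDAT", zlib.compress(raw, 9))

# Hypothetical ancillary chunk ("zsDA" is a made-up name) carrying a
# second copy of the image in the alternative codec. Old decoders skip
# it, but the zlib IDAT must remain, so the file can only grow.
alt = chunk(b"zsDA", zlib.compress(raw, 9))  # pretend this is zstd

baseline = len(idat)
extended = len(idat) + len(alt)
assert extended > baseline
```

The only way to claw the size back is to make the legacy IDAT a tiny placeholder image, which is exactly the "not backwards-compatible in any useful way" trade-off described above.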



