They don’t have to. It’s backwards compatible. You can ignore it and we can keep on happily using it.
Fuck Google, fuck WebP.
Why is webp bad? Besides google forcing it, apparently.
How is JPEG XL backwards compatible?
It’s “compatible” in that it can represent old JPEG/JFIF data more efficiently and in less space, and the transformation to JPEG XL and back to JPEG/JFIF is lossless (in that you don’t lose any /more/ quality, you can get the same bits back out) and quick enough to be doable on demand. You could, for example, re-encode all your old photos on your CDN as JPEG XL without loss of quality but save a bunch of disk space and bandwidth when serving to modern browsers, and translate dynamically back to the old format for older browsers, all with no loss of quality.
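For the curious, the round trip looks roughly like this with the libjxl command-line tools; a minimal sketch, assuming cjxl/djxl are installed, and photo.jpg is just a stand-in filename:

```python
import subprocess
from pathlib import Path

def roundtrip_check(jpeg_path: str) -> bool:
    """Transcode a JPEG to JXL and back, then verify the bytes match.

    Assumes the libjxl `cjxl` and `djxl` tools are on PATH; recent
    cjxl versions store JPEG reconstruction data by default (the
    --lossless_jpeg=1 behaviour), which is what makes the exact
    byte-for-byte round trip possible.
    """
    src = Path(jpeg_path)
    jxl = src.with_suffix(".jxl")
    back = src.with_suffix(".roundtrip.jpg")

    # JPEG -> JXL: repacks the existing data instead of re-encoding.
    subprocess.run(["cjxl", str(src), str(jxl)], check=True)
    # JXL -> JPEG: rebuilds the original file from reconstruction data.
    subprocess.run(["djxl", str(jxl), str(back)], check=True)

    return src.read_bytes() == back.read_bytes()

if __name__ == "__main__":
    print(roundtrip_check("photo.jpg"))  # hypothetical input file
```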
So what you’re saying is: both formats can encode image data
No, I’m saying that JPEG XL can perfectly represent old JPEG/JFIF data, so on the server side you can store all your image data once and more efficiently, and still support old clients without any lossy cascade or the CPU load of having to re-encode. That is what is meant by it offering backwards compatibility.
What they’re saying is that a web server can create a traditional jpeg file from a jpeg xl to send to a client as needed. So you’re saving backend storage space… sometimes. Until widespread adoption by browsers, you’re still creating and transmitting a traditional jpeg file. And now you’ve increased the server space needed because you’re having to create and store two copies of the file in two different formats.
Developers are already doing this with webp and everyone hates webp (if your browser doesn’t support webp, the backend sends you the jpeg copy). I don’t see any advantage here except some hand-waving “but in the future”, just like has been done for most new formats trying to win adoption.
“What they’re saying is that a web server can create a traditional jpeg file from a jpeg xl to send to a client as needed.”

Other way around: you can convert a “web safe” JPEG file into a JXL one (and back again), but you can’t turn any random JXL file into a JPEG file. The lossless round trip only works for JXL files that were created from a JPEG in the first place, since those carry the reconstruction data; a native JXL image can use features plain JPEG can’t represent.
But yeah, something like Lemmy could recompress uploaded JPEG images as JXL on the server, serving them as JXL to updated clients and converting back to JPEG as needed, saving server storage and bandwidth with no quality loss.
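Something like this on the serving side; a rough sketch, assuming djxl is on the server and every stored .jxl was transcoded from a JPEG (the function name and paths are made up for illustration):

```python
import subprocess
from pathlib import Path

def serve_image(accept_header: str, jxl_path: Path) -> tuple[Path, str]:
    """Pick what to send for one stored .jxl original.

    Updated clients advertise image/jxl in their Accept header and get
    the file as-is; everyone else gets the original JPEG, rebuilt on
    demand from the reconstruction data djxl finds in the .jxl file.
    """
    if "image/jxl" in accept_header:
        return jxl_path, "image/jxl"

    # Old client: reconstruct the original JPEG on the fly.
    jpg_path = jxl_path.with_suffix(".jpg")
    subprocess.run(["djxl", str(jxl_path), str(jpg_path)], check=True)
    return jpg_path, "image/jpeg"
```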
The difference (claimed by the comment above) is in the words “lossless” and “you can get the same bits back out”.
So you can convert back and forth without the photocopy-of-a-photocopy problem.
And you don’t have to store the second copy of the file, except for caching of frequently fetched files, which I’m sure will just be an nginx rule.
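The caching could be as simple as converting on first request and keeping the result around; a sketch (the cache directory is made up, and eviction is left to whatever sits in front, e.g. nginx):

```python
import subprocess
from pathlib import Path

CACHE_DIR = Path("/var/cache/jxl2jpg")  # hypothetical location

def cached_jpeg(jxl_path: Path) -> Path:
    """Reconstruct the JPEG once, reuse it for later requests.

    The first request pays the djxl cost; frequently fetched files are
    then served straight from the cache, so the second copy only ever
    exists for images that old clients actually ask for.
    """
    CACHE_DIR.mkdir(parents=True, exist_ok=True)
    cached = CACHE_DIR / (jxl_path.stem + ".jpg")
    if not cached.exists():
        subprocess.run(["djxl", str(jxl_path), str(cached)], check=True)
    return cached
```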
And let’s not forget HEIF and JPEG-2000.
How is it backwards compatible? Everything I’ve read so far says the opposite: that it requires recoding the image into the new format, and keeping around or generating an old copy of the image in the current jpeg format for older software.
Are you saying a browser or app that currently only supports Jpeg can open and render a Jpeg-XL image?
Edit: Yeah. It’s not backward compatible. And system admins are already doing the “make two copies of an image” thing with webp and the current jpg format.
The re-encoding requires less computation than with other formats: the lossless JPEG-to-JXL transcode keeps the JPEG’s existing DCT coefficients and just re-packs them with better entropy coding, so nothing gets fully decoded and re-encoded.