• rjek@feddit.uk · 14 hours ago

    It’s “compatible” in that it can represent old JPEG/JFIF data more efficiently and in less space, and the transformation to JPEG XL and back to JPEG/JFIF is lossless (you don’t lose any /more/ quality; you get the same bits back out) and quick enough to do on demand. You could, for example, re-encode all your old photos on your CDN as JPEG XL without loss of quality but save a bunch of disc space and bandwidth when serving to modern browsers, and translate dynamically back to the old format for older browsers, all with no loss of quality.
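
    If you want to try the round trip yourself, here’s a minimal sketch using the cjxl/djxl command-line tools that ship with libjxl (the file names are just placeholders): cjxl recompresses a JPEG losslessly by default, and djxl hands you back the original JPEG bytes.

    ```python
    # Minimal sketch: lossless JPEG -> JPEG XL -> JPEG round trip.
    # Assumes libjxl's cjxl/djxl are on PATH; "photo.jpg" is a placeholder.
    import hashlib
    import os
    import subprocess

    def sha256(path):
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    # JPEG -> JXL: by default cjxl keeps the data needed to rebuild the exact JPEG
    subprocess.run(["cjxl", "photo.jpg", "photo.jxl"], check=True)

    # JXL -> JPEG: djxl reconstructs the original JPEG bitstream
    subprocess.run(["djxl", "photo.jxl", "roundtrip.jpg"], check=True)

    print("original :", os.path.getsize("photo.jpg"), "bytes")
    print("as JXL   :", os.path.getsize("photo.jxl"), "bytes")
    print("bit-exact:", sha256("photo.jpg") == sha256("roundtrip.jpg"))
    ```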

      • rjek@feddit.uk · 3 hours ago

        No, I’m saying that JPEG XL can perfectly represent old JPEG/JFIF data, so on the server side you can store all your image data once and more efficiently, and still support old clients without any lossy cascade or the CPU load of having to re-encode. That is what is meant by it offering backwards compatibility.

      • reddig33@lemmy.world · 7 hours ago (edited)

        What they’re saying is that a web server can create a traditional jpeg file from a jpeg xl to send to a client as needed. So you’re saving backend storage space… sometimes. Until widespread adoption by browsers, you’re still creating and transmitting a traditional jpeg file. And now you’ve increased the server space needed because you’re having to create and store two copies of the file in two different formats.

        Developers are already doing this with webp and everyone hates webp (if your browser doesn’t support webp, the backend sends you the jpeg copy). I don’t see any advantage here except some hand-waving “but in the future”, just as has been done for most new formats trying to win adoption.
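
        The negotiation itself is the same Accept-header dance as webp today. A rough sketch of what that looks like, assuming a JXL-capable browser advertises image/jxl the way webp-capable ones advertise image/webp (the images/ layout is made up):

        ```python
        # Rough sketch: pick JXL or JPEG based on the client's Accept header.
        # Assumes one .jxl and one .jpg copy per image under images/ (hypothetical).
        from wsgiref.simple_server import make_server

        def app(environ, start_response):
            name = environ["PATH_INFO"].strip("/")          # e.g. "cat"
            if "image/jxl" in environ.get("HTTP_ACCEPT", ""):
                path, ctype = f"images/{name}.jxl", "image/jxl"
            else:
                path, ctype = f"images/{name}.jpg", "image/jpeg"
            with open(path, "rb") as f:
                body = f.read()
            # Vary: Accept so caches keep separate copies per client capability
            start_response("200 OK", [("Content-Type", ctype),
                                      ("Content-Length", str(len(body))),
                                      ("Vary", "Accept")])
            return [body]

        make_server("", 8000, app).serve_forever()
        ```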

        • The_Decryptor@aussie.zone · 5 hours ago

          What they’re saying is that a web server can create a traditional jpeg file from a jpeg xl to send to a client as needed.

          Other way around: you can convert a “web safe” JPEG file into a JXL one (and back again), but you can’t losslessly turn an arbitrary JXL file into a JPEG file.

          But yeah, something like Lemmy could recompress uploaded JPEG images as JXL on the server, serving them as JXL to updated clients and converting back to JPEG as needed, saving server storage and bandwidth with no quality loss.
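
          The upload path could be as small as this sketch (cjxl is libjxl’s encoder; the store/ path is made up): recompress once at upload time, keep only the .jxl, and the original JPEG can always be regenerated bit-exactly later.

          ```python
          # Sketch: on upload, losslessly recompress a JPEG to JXL and keep only the JXL.
          # Assumes libjxl's cjxl is installed; store/ is a hypothetical directory.
          import os
          import subprocess

          def store_upload(jpeg_path: str, image_id: str) -> str:
              jxl_path = f"store/{image_id}.jxl"
              # Default behaviour for JPEG input: lossless recompression, keeping
              # the data needed to reconstruct the original JPEG later.
              subprocess.run(["cjxl", jpeg_path, jxl_path], check=True)
              os.remove(jpeg_path)  # the original can be regenerated bit-exactly
              return jxl_path
          ```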

        • Logi@lemmy.world · 6 hours ago

          The difference (claimed by the comment above) is in the words

          without loss of quality

          So you can convert back and forth without the photocopy-of-a-photocopy problem.

          And you don’t have to store a second copy of the file, except for caching frequently fetched ones, which I’m sure will just be an nginx rule.
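
          That on-demand conversion plus cache could be as simple as this at the application level (a sketch in Python with made-up paths; nginx proxy caching would do the same job sitting in front of it):

          ```python
          # Sketch: rebuild the JPEG from the stored JXL on the first request from a
          # legacy client, then reuse the cached copy. Assumes libjxl's djxl on PATH;
          # store/ and cache/ are hypothetical directories.
          import os
          import subprocess

          def jpeg_for(image_id: str) -> str:
              jpg_path = f"cache/{image_id}.jpg"
              if not os.path.exists(jpg_path):
                  # Lossless reconstruction of the original JPEG bitstream
                  subprocess.run(["djxl", f"store/{image_id}.jxl", jpg_path], check=True)
              return jpg_path
          ```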