• Natanael@slrpnk.net · 12 hours ago (edited)

    Nobody needs lossless over Bluetooth

    Edit: plenty of downvotes from people who have never run ABX tests comparing high-quality lossy against lossless

    At high-bitrate lossy you literally can’t distinguish it. There’s math to prove it:

    https://en.wikipedia.org/wiki/Nyquist–Shannon_sampling_theorem

    At 44.1 kHz / 16 bit, encoded at 192 kbps or higher with a good encoder, your ear literally can’t physically discern the difference
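
    The sampling theorem can even be demonstrated numerically. A quick sketch of my own (not from the comment): sample a tone below the Nyquist limit, then reconstruct the waveform between the sample points with Whittaker–Shannon (sinc) interpolation.

```python
import math

fs = 44_000        # sampling rate (Hz)
f = 15_000         # test tone, safely below the Nyquist limit fs/2 = 22 kHz
n_samples = 400

# Take discrete samples of the tone
samples = [math.sin(2 * math.pi * f * n / fs) for n in range(n_samples)]

def reconstruct(t):
    """Whittaker-Shannon (sinc) interpolation at continuous time t (seconds)."""
    total = 0.0
    for n, x in enumerate(samples):
        arg = fs * t - n
        total += x * (1.0 if arg == 0 else math.sin(math.pi * arg) / (math.pi * arg))
    return total

# Evaluate at an instant between sample points; with a finite window the
# reconstruction matches the original tone up to a small truncation error
t = 123.4 / fs
err = abs(reconstruct(t) - math.sin(2 * math.pi * f * t))
print(f"reconstruction error: {err:.2e}")
```

    The residual error here comes only from truncating the infinite sinc sum; with an infinite window the reconstruction is exact for any signal band-limited below fs/2.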

  • Natanael@slrpnk.net · 12 hours ago (edited)

        Why use lossless for that when transparent lossy compression already does that with so much less bandwidth?

        Opus is indistinguishable from lossless at 192 kbps. Lossless needs roughly 800–1400 kbps. That’s a 4–7× savings at the exact same quality.
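
        Sanity-checking the arithmetic above (my own sketch; 800–1400 kbps is the rough lossless range cited in the comment):

```python
opus_kbps = 192
lossless_kbps = (800, 1400)  # rough lossless (e.g. FLAC) bitrate range

for rate in lossless_kbps:
    print(f"{rate} kbps lossless is {rate / opus_kbps:.1f}x the Opus bitrate")
# 800/192 ≈ 4.2x, 1400/192 ≈ 7.3x
```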

        Your wireless antenna often draws more energy in proportion to bandwidth use than the decoder chip does, so using high quality lossy even gives you better battery life, on top of also being more tolerant to radio noise (easier to add error correction) and having better latency (less time needed to send each audio packet). And you can even get better range with equivalent radio chips due to needing less bandwidth!

        You only need lossless for editing or as a source for transcoding; there’s no need for it when just listening to media

    • gaylord_fartmaster@lemmy.world · 12 hours ago

          This has strong “nobody needs a monitor over 120Hz because the human eye can’t see it” logic. Transparency is completely subjective and people have different perceptions and sensitivities to audio and video compression artifacts. The quality of the hardware playing it back is also going to make a difference, and different setups are going to have a different ceiling for what can be heard.

          The vast majority of people are genuinely going to hear zero difference between even 320 kbps and a FLAC, but that doesn’t mean there actually is zero difference; you’re still losing audio data. Even going from a 24-bit to a 16-bit FLAC can have a perceptible difference.
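
          On the bit-depth point: each extra bit adds about 6 dB of dynamic range (20·log10(2) ≈ 6.02 dB per bit), which is where the 16-bit vs 24-bit difference comes from. A quick sketch of my own:

```python
import math

def dynamic_range_db(bits):
    # Each bit doubles the number of quantization levels,
    # adding 20*log10(2) ≈ 6.02 dB of dynamic range
    return 20 * math.log10(2 ** bits)

print(f"16-bit: {dynamic_range_db(16):.1f} dB")  # ≈ 96.3 dB
print(f"24-bit: {dynamic_range_db(24):.1f} dB")  # ≈ 144.5 dB
```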

      • Natanael@slrpnk.net · 7 hours ago (edited)

            The Nyquist-Shannon sampling theorem isn’t subjective, it’s physics.

            Your example isn’t great because it’s about misconceptions about the eye, not about physical limits. The physical limits for transparency are real and absolute, not subjective. The eye can perceive quick flashes of objects that take less than a thousandth of a second. The reason we rarely go above 120 Hz for monitors (other than cost) is that differences in continuous motion can barely be perceived, so it’s rarely worth it.

            We know where the upper limits of perception are. The difference typically lies in the encoder/decoder or the physical setup, not in the information a good codec can embed at that bitrate.