• magiccupcake@lemmy.world · +2/-2 · 7 months ago

    This is a truly terrible article.

    Like, why not test these things? This just sounds like AI-generated garbage.

    That being said, 8GB is an abysmally low amount of RAM in 2024. I had a mid-range Surface in 2014 with that much RAM. And the upcharge for more is quite ridiculous too.

    I know it’s PC RAM, but I bought 64GB of DDR4-3600 for about $130. How on earth is Apple charging $200 for 8?!

    • Shadywack@lemmy.world · +9 · 7 months ago

      Looks like you didn’t read the article either.

      Overall, I’m using 12.5GB of memory and the only application I have open is Chrome. Oh, and did I mention I’m typing this on a 16GB MacBook Air? I used to have an 8GB Apple silicon Air and to be frank it was a nightmare, constantly running out of memory just browsing the web.

      Earlier it’s mentioned that they have 15 tabs open. I don’t like a lot of things they do in “gaming journalism” but on this article they’re spot on. Apple is full of shit in saying 8GB is enough by today’s standards. 8GB is a fuckin joke, and you can’t add any RAM later.

      • ABCDE@lemmy.world · +2/-3 · 7 months ago

        That doesn’t make sense. I have the 8GB M2 and don’t have any issues with 20+ tabs, video calling, torrents, Luminar, Little Snitch, etc open right now.

        • Shadywack@lemmy.world · +6 · 7 months ago

          15 tabs of Safari, which is arguably the better browser in some respects thanks to its efficiency and its available privacy configuration options. What if you prefer Chrome or Firefox?

          I will argue in Apple’s defense that their stack includes very effective libraries that intrinsically make applications on macOS better in many regards, but 8GB is still 8GB, and an SoC isn’t upgradeable. The competition has far cheaper 16GB options, and Apple is back to looking like complete assholes again.

            • Adam@doomscroll.n8e.dev · +2 · 7 months ago

              The fact you got downvoted for someone else’s assumption (that was upvoted) makes me chuckle. There’s some serious Apple hating going on here*.

              *sometimes deserved. Not really in this case.

        • disguy_ovahea@lemmy.world · +0/-11 · 7 months ago

          That’s because PC people try to equate specs across dissimilar architectures, against an OS that is not written explicitly to utilize that architecture. They haven’t read enough about it, or experienced it in practice, to have an informed opinion. We can get downvoted together on our “sub-standard hardware” that works wonderfully. lol

          • pivot_root@lemmy.world · +8 · 7 months ago

            The only memory-utilization-related advantage gained by sharing memory between the CPU and GPU is zero-copy operations between the CPU and GPU. The occasional texture upload and framebuffer access is nowhere near enough to make 8 GiB the functional equivalent of 16 GiB.

            If you want to see something “written explicitly to utilize [a unified memory] architecture,” look no further than the Nintendo Switch. The operating system and applications are designed specifically for the hardware, and even first-party titles are choked by the hardware’s memory capacity and bandwidth.

            • disguy_ovahea@lemmy.world · +2/-6 · 7 months ago

              The Tegra is similar in being an SoC; however, it doesn’t have nearly as many dedicated independent processing cores designed around specialized workloads.

              The M1 Pro has a 10-core CPU with 8 performance cores and 2 efficiency cores, a 16-core GPU, a 16-core Neural Engine, and 200GB/s of memory bandwidth feeding all of it.

              • pivot_root@lemmy.world · +6 · 7 months ago

                The M1 through M3 are still miles ahead of the Tegra; I don’t disagree. My point was that software designed specifically for a platform can’t make up for the platform’s shortcomings. The SoC itself is excellently designed to meet needs well into the future, but the 8 GiB of total system memory on the base model is unfortunately a shortcoming.

                Apple’s use of memory compression stops it from being too noticeable, but it’s going to become a problem as application memory requirements grow (like with the Switch).
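
                The principle behind that compression is easy to demonstrate: typical application pages are highly compressible. Here's a sketch using Python's zlib on a fake "page" of repetitive data (macOS actually uses a WKdm/LZ4-class compressor, not zlib, but the idea is the same):

```python
import zlib

# A 16 KiB "page" of repetitive data, standing in for typical app memory
# (zeroed buffers and repeated structures compress extremely well).
page = (b"\x00" * 48 + b"record_header_v1") * 256  # 64 B * 256 = 16 KiB

compressed = zlib.compress(page)
print(f"{len(page)} -> {len(compressed)} bytes")
```

                Pages that shrink like this can sit compressed in RAM instead of being swapped to disk, which is why an 8GB Mac degrades more gracefully than the raw number suggests. It only postpones the problem, though; incompressible working sets still need real memory.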

                • disguy_ovahea@lemmy.world · +2/-5 · 7 months ago

                  Sure, but no one is saying 8GB is good enough for everyone. It’s a base model. Grandma can use it to check her Facebook and do online banking. It’s good for plenty of basic users. I have an M1 Mini with 8GB that I use as a home server. It works great, but I need my M2 MBP with 16GB UM to use FCP, PS, and Logic Pro. With that, I can master 4K HDR in FCP from an unmastered source in Logic Pro without high memory pressure, let alone swap. There’s no way I’d have the same performance from a PC with 16GB of RAM in Adobe Premiere and Pro Tools. I’ve been there before.

                  8GB really is a suitable low-end configuration, and most Mac users would agree. I’m not surprised a magazine dedicated to PC gaming hardware thinks otherwise.

      • magiccupcake@lemmy.world · +1/-2 · edited · 7 months ago

        Oh no, I read the article. I just don’t consider that testing.

        It’s not really apt to measure RAM usage in a browser on one computer and extrapolate that to another; there’s a lot of complicated RAM and cache management happening in the background.

        Testing would involve getting an 8GB Mac and running common tasks to see if you can measure poorer performance, be it lag, stutters, or frame drops.

        • Shadywack@lemmy.world · +4 · 7 months ago

          You do have a point, but I think the intent of the article is to convey the common understanding that Apple is leaning on sales tactics to convince people of a thing that anyone with technical acumen sees through immediately. Regardless of how efficient Mach/Darwin is, it’s still apples to apples (pun intended) to understand how quickly 8GB fills up in 2024. For those who need a fully quantitative performance measurement between 8 and 16GB, with enough applications loaded to display the thrashing that starts happening, they’re not really the audience. THAT audience is busy reading about gardening tips, lifestyle, and celebrity gossip.

    • Adam@doomscroll.n8e.dev · +1/-1 · 7 months ago

      Written by someone who apparently has no understanding of virtual memory. Chrome may claim 500MB per tab, but I’ll eat my hat if the majority of that isn’t shared between tabs or paged out.

      If I’m misunderstanding, then how the fuck is Chrome, with its 35+ open tabs, functioning on my 16GB M1 machine (with a full other application load, including IDEs and Docker with 8GB allocated)?

      • magiccupcake@lemmy.world · +1 · 7 months ago

        I have plenty of understanding of what virtual memory is. For one, paging out to disk is orders of magnitude slower than accessing physical RAM.
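
        For a rough sense of scale, here’s the back-of-the-envelope math (these are illustrative, order-of-magnitude latency figures, not measurements from any particular machine):

```python
# Order-of-magnitude latencies (assumed typical values, not measured):
dram_access_ns = 100          # ~100 ns to service a DRAM access
ssd_page_fault_ns = 100_000   # ~100 us to fault a page in from a fast NVMe SSD

slowdown = ssd_page_fault_ns / dram_access_ns
print(f"paging in from disk is ~{slowdown:.0f}x slower than RAM")  # ~1000x
```

        Roughly three orders of magnitude per fault, which is why heavy swapping shows up as stutter even on fast SSDs.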

        My point still stands: 8GB is fine if all you do is light web browsing and writing documents, which is basically nothing. But at that point you don’t need a 2024 MacBook anything; you could use an older M1 MacBook and be perfectly happy.

        All web browsers will use as much RAM as possible; that doesn’t mean they need it.

        Even you don’t have a device with 8GB of memory. Just because it’s usable doesn’t mean it’s optimal, or that it’s not a ripoff to charge $200 for another 8GB.

    • fuckwit_mcbumcrumble@lemmy.dbzer0.com · +0/-6 · 7 months ago

      Your 64 gigs of RAM probably use 10x the power and take up significantly more space than the single memory package on the M1-M3. And yet they still have less bandwidth than the M1, and on top of that the M1 utilizes it more efficiently than a “normal” desktop or laptop can, since there’s one pool of memory for both RAM and VRAM.

      https://en.wikipedia.org/wiki/Apple_M1 (the M1 SoC has 66.67GB/s of memory bandwidth)

      ChatGPT guesstimates roughly 57GB/s for dual-channel DDR4-3600.
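
      That guesstimate checks out from first principles. A quick sanity check (assuming the standard 64-bit DDR4 channel width):

```python
# Theoretical peak = transfer rate (MT/s) x bytes per transfer x channels.
# A DDR4 channel is 64 bits (8 bytes) wide.
transfers_per_sec = 3600e6   # DDR4-3600: 3.6 giga-transfers per second
bytes_per_transfer = 8       # 64-bit channel
channels = 2                 # dual channel

bandwidth_gbs = transfers_per_sec * bytes_per_transfer * channels / 1e9
print(f"{bandwidth_gbs:.1f} GB/s")  # prints 57.6 GB/s
```

      So dual-channel DDR4-3600 tops out at 57.6GB/s theoretical, a bit under the M1’s 66.67GB/s.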

      $1000 for 8 gigs of RAM in the Air is whatever. $1200 for 8 gigs of RAM in the Pro was not great. But $1600 for 8 gigs of RAM in the new M3 MBP is really awful.

      • pivot_root@lemmy.world · +7 · 7 months ago

        the M1 utilizes it more efficiently than a “normal” desktop or laptop can, since there’s one pool of memory for both RAM and VRAM.

        That’s not how it works, unfortunately.

        A UMA (unified memory architecture) enables zero-copy texture uploads and frame buffer access, but that’s not likely to constitute notable memory savings outside games or GPU-accelerated photo editing. Most of the memory is going to be consumed by applications running on the CPU anyway, and that’s not something that can be improved by sharing memory between the CPU and GPU.
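
        The distinction is easy to model. Here’s a toy sketch of the difference (illustrative only; real drivers and allocators are far more involved):

```python
# Toy model of a texture "upload". With a discrete GPU, the data is copied
# into a separate VRAM pool; under UMA the GPU gets a view of the same
# buffer, so no second allocation is needed.
texture = bytearray(4 * 1024 * 1024)   # a 4 MiB texture in "system RAM"

vram_copy = bytes(texture)             # discrete GPU: a second 4 MiB copy
uma_view = memoryview(texture)         # UMA: zero-copy view, same storage

assert uma_view.obj is texture         # no duplicate buffer under UMA
print(len(vram_copy), len(uma_view))   # prints 4194304 4194304
```

        The savings is exactly one texture-sized copy per shared buffer; it does nothing for the memory that CPU-side application code allocates, which is the bulk of it.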

        And yet [your 64 gigs of ram] still has less bandwidth than the M1

        It’s by necessity that the M1 has higher memory bandwidth. UMA comes with the drawback of the GPU and CPU having to share that memory, and there’s only so much bandwidth to go around. GPU cores are bandwidth hungry, which is mitigated by either using a pile of L2 cache or by giving the system better memory bandwidth.

      • azuth · +4 · 7 months ago

        Memory bandwidth is useless if you run out of memory and need to swap.

        The GPU not having its own pool of memory is really going to help, too.

        Pigs fly in Apple land.