• Coasting0942@reddthat.com
    6 months ago

    Exqueese me? How does AI impact electrical use? Cause last I heard we’re supposed to be cutting back on energy usage.

    • Thrashy@lemmy.world
      6 months ago

      This is a reference to upscaling algorithms informed by machine learning a la Nvidia’s DLSS – seems like AMD is finally going to add the inference hardware to their GPUs that will let them close that technological gap with the competition. I’m guessing it won’t come until RDNA5, though.

    • QuadratureSurfer@lemmy.world
      6 months ago

If you’re trying to compare “AI” and electrical use, you need to compare, for every use case, how we traditionally do things vs how any sort of “AI” does it. Even then we need to ask ourselves if there’s a better way to do it, or if it’s worth the increase in productivity.

      For example, a rain sensor on your car.
      Now, you could set up an AI/ML model with a camera and computer vision to detect when to turn on your windshield wipers.
      But why do that when you could use a little sensor that shoots a small laser at the window and activates the wipers when it detects a difference in the energy that’s normally reflected back?
      The dedicated sensor with a low power laser will use far less energy and be way more efficient for this use case.

      On the other hand, I could spend time/electricity replaying a video over and over again trying to translate what someone said from one language to another, or I could use Whisper (another ML model) to transcribe and translate what was said in a matter of seconds. In this case, Whisper uses less electricity.

      In the context of this article we’re talking about DLSS where Nvidia has trained a few different ML models for upscaling, optical flow (predicting where the pixels/objects are moving to next), and frame generation (being able to predict what the in-between frames will look like to boost your FPS).

      This can potentially save energy because it puts less of a load on the GPU: most of the rendering work is done at a lower resolution before upscaling it at the end. But honestly, I haven’t seen anyone compare the energy use differences on this yet… and either way you’re already using a lot of electricity just by gaming.
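
      The pixel math behind that saving is easy to sketch. A rough back-of-envelope in Python (the resolutions are illustrative examples, and real GPU load depends on the game and settings, so treat this as a sketch, not a measurement):

```python
# Shading cost scales roughly with pixel count (a simplification).
def pixels(width, height):
    return width * height

native_4k = pixels(3840, 2160)   # rendering everything at 4K
internal = pixels(2560, 1440)    # a typical "Quality" internal resolution

print(f"Internal render is {internal / native_4k:.0%} of the native pixel count")
# -> Internal render is 44% of the native pixel count
```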

    • kerrigan778@lemmy.world
      6 months ago

      In this context it is being used to reduce rendering load and therefore be less intensive on computer resources.

        • halendos@lemmy.world
          6 months ago

          Not really, AMD’s FSR upscaling can increase visual quality/fidelity while using less power than rendering at full resolution. This can be easily seen in Steam Deck’s battery life improvement when enabling it. Scaling this to millions of devices can indeed reduce energy usage.

          When you read about “AI power consumption”, it’s mostly about training the models, not so much the usage after they’re trained.

            • SpacetimeMachine@lemmy.world
              6 months ago

              FSR in this case doesn’t need to be trained any further. The model is already trained, so now it can be released to run on MILLIONS of devices and reduce their load. And then you knock railroads, which are one of the most efficient forms of land transportation we have. Just full of bad takes here.

            • doggle@lemmy.dbzer0.com
              6 months ago

              Training an AI is intensive, but using them after the fact is relatively cheap. Cheaper than traditional rendering to reach the same level of detail. The upfront cost of training is offset by the savings on every video card running the tech from then on. Kinda like how railroads are expensive to build but much cheaper to operate after the fact.
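
              As a sketch of that delayed-gratification math (the numbers below are purely hypothetical, just to show the shape of the amortization, not real measurements):

```python
# Hypothetical, illustrative numbers -- not real measurements.
training_cost_kwh = 1_000_000      # one-time energy bill to train the model
saving_per_gpu_kwh = 10            # energy saved over one GPU's lifetime

# How many deployed GPUs until the training cost is paid back:
break_even_gpus = training_cost_kwh / saving_per_gpu_kwh
print(break_even_gpus)             # 100000.0 -- every GPU after that is net savings
```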

              It’s pretty simple. If you can’t understand delayed gratification, then you’re right: school did fail you.

              P.S.: the railroad comparison really breaks down when you consider that railroads are cheaper to build than the highways trucks use, and that we don’t, in fact, need to truck in the resources anyway. We’ve been building railroads for longer than trucks have existed, after all.

              • meseek #2982@lemmy.ca
                6 months ago

                Thanks for the totally made up figures. I’m glad we agree that training itself is quite costly. No data on how much energy AI will save vs rendering (as we don’t know how much rendering we can avoid; there has to be a cap), so you can’t really keep riding that horse.

                You’re right tho, the rail analogy sucks. Not for the reasons you list tho, but rather because they will never stop training AI. Unless you feel AI will stop learning and needing to evolve.

            • kerrigan778@lemmy.world
              6 months ago

              No, I’m saying you are fundamentally misunderstanding which technology they’re talking about and assuming every type of AI is the same. In this article she is talking about graphics AI running on the local system as part of the graphics pipeline. It is less performance-intensive, and therefore less power-intensive. There is no “vast AI network” behind AMD’s presumptive work on a competitor to DLSS/frame generation.

    • Brokkr@lemmy.world
      6 months ago

      This is kind of the opposite of that idea though. This is saying that not everything put on the screen needs to be computed from the game engine. Some of the content on the screen can be inferred from a predictive model. What remains to be seen is if that requires less computing power from the GPU.

    • QuadratureSurfer@lemmy.world
      6 months ago

      Yes, but with DLSS we’re adding ML models to the mix where each one has been trained on different aspects:

      Interpolating between frames
      For instance, normally you might get 30FPS, but between the frames the ML model has an idea of what everything should look like (based off of what it has been trained on), so it can insert additional frames to boost your framerate up to 60FPS or more.
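
      The naive version of that interpolation is just blending neighbouring frames. DLSS frame generation uses trained models and motion vectors rather than a plain average, but a toy sketch shows the basic idea:

```python
def blend(frame_a, frame_b, t=0.5):
    """Linearly interpolate two frames (flat lists of pixel intensities)."""
    return [(1 - t) * a + t * b for a, b in zip(frame_a, frame_b)]

# Two toy 3-pixel "frames"; the blend is the guessed in-between frame.
print(blend([0, 100, 200], [100, 100, 0]))   # [50.0, 100.0, 100.0]
```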

      Upscaling (making the picture larger) - the GPU and other hardware can do their work at a smaller resolution, which makes their job easier, while the ML model here has been trained to make the image larger while filling in the correct pixels so that everything still looks good.
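
      For contrast, the dumbest possible upscaler just duplicates pixels (nearest neighbour); the ML model’s whole job is to do better than this:

```python
def upscale_2x(image):
    """Double the width and height of a 2D list of pixels by duplication."""
    out = []
    for row in image:
        wide = [p for p in row for _ in range(2)]  # duplicate each column
        out.append(wide)
        out.append(list(wide))                     # duplicate each row
    return out

print(upscale_2x([[1, 2],
                  [3, 4]]))
# [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```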

      Optical Flow -
      This ML model has been trained on motion, tracking which objects/pixels are headed where so that frame generation can make better predictions.
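
      A toy illustration of what a flow field is (real optical-flow models estimate these offsets from the frames themselves; here the flow is simply given, just to show how it predicts the next frame):

```python
def warp(row, flow, fill=0):
    """Predict the next frame by moving each pixel by its flow offset."""
    out = [fill] * len(row)
    for i, (pixel, dx) in enumerate(zip(row, flow)):
        j = i + dx
        if 0 <= j < len(out):
            out[j] = pixel
    return out

frame = [1, 2, 3, 4]
flow = [1, 1, 1, 1]        # everything moves one pixel to the right
print(warp(frame, flow))   # [0, 1, 2, 3] -- the 4 slid off the edge
```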

      Not only that, but Nvidia can ship us updated ML models trained on specific game titles through their driver updates.

      While each of these could be accomplished with older techniques, I think the results we’re already seeing speak for themselves.

      Edit: added some sources below and fixed up optical flow description.

      https://www.digitaltrends.com/computing/everything-you-need-to-know-about-nvidias-rtx-dlss-technology/
      https://www.youtube.com/watch?v=pSiczcJgY1s

        • azuth
          6 months ago

          No, rendering at a smaller resolution and upscaling is not the same concept as only rendering what will end up in frame.

    • baconisaveg@lemmy.ca
      6 months ago

      It has, yes; however, the techniques Carmack used in Doom’s engine probably don’t have much of an impact on something like Cyberpunk 2077.

      • snooggums@midwest.social
        6 months ago

        The exact techniques, maybe not. But the fundamental approach of only rendering what you see has been continued since then.

        • baconisaveg@lemmy.ca
          6 months ago

          Right, so what is the point in bringing it up?

          “Sony just released a new 150 megapixel mirrorless digital camera!”

          “Cameras have been a thing since the 1800’s…”