• No-Roll-3759@alien.topB

    12600K owner here, and I'm so frustrated. big.LITTLE has never delivered on the behavior they promised, and now I'm being locked out of the fix. Forcing me over to Windows 11 was not a fix, it was just aggravation.

    I early-adopted the new arch because I really wanted to use an Optane accelerator. Intel quietly software-locked 12th gen out of Optane support, so when I built my system I spent an hour poring through the BIOS trying to figure out how to get it running, wondering why Intel's web instructions weren't working for me.

    Overall it's been a pretty bad experience, and one Intel curated for me. Based on my 12600K experience, I'll be very reluctant to adopt Intel proprietary technologies in the future.

    • ConsciousWallaby3@alien.topB

      I also went for a 12400 over more expensive options at the time, because not only was it good value, I also wasn't interested in the experience of being an early adopter of mixed core types on Wintel.

  • nohpex@alien.topB

    Has anyone else seen those videos where people change the frequency (I believe*) of how often Windows fires an interrupt request to check the power state of the system, in order to reduce overall system latency?

    For whatever reason, Windows checks this every 15ms, but people are changing it to the maximum setting of 5,000ms, which reduces latency for the CPU considerably… apparently fiddling with this setting is particularly bad for AMD’s X3D chips.

    What are the pros and cons to this? Has any reputable journalist looked into this?

    • veotrade@alien.topB

      It works. Set to 5000ms, which is the max value.

      It’s garbage that end users need to do any tweaking at all.

      A good number of tweaks are unproven and famously just bog down the system even more.

      As a casual user myself, I wouldn’t even know if changing one setting, let alone dozens of settings, makes a difference. I’m not qualified to test, so on some of these “fixes” I just blindly follow the advice of the tutorial.

      But disabling E-cores and changing the interval from 15ms to 5,000ms have both helped me.

      I also subscribe to the LatencyMon optimizations, like setting interrupt affinity masks for my GPU, Ethernet, and USB host controller.
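
      For the curious, the setting being described appears to be "Processor performance time check interval" under Windows' processor power management. A minimal sketch of applying it follows; the GUIDs are the ones those tutorials usually reference, so verify them on your own system with powercfg /q before trusting this. Python here is just a thin wrapper around powercfg:

      ```python
      import subprocess

      # Assumed GUIDs: the processor power management subgroup and the
      # "Processor performance time check interval" setting (default 15 ms, max 5000 ms).
      SUB_PROCESSOR = "54533251-82be-4824-96c1-47b60b740d00"
      TIME_CHECK = "4d2b0152-7d5c-498b-88e2-34345392a2c5"

      def set_time_check_interval(ms: int) -> None:
          """Apply the interval to the active power plan, for both AC and battery."""
          for mode in ("/setacvalueindex", "/setdcvalueindex"):
              subprocess.run(["powercfg", mode, "scheme_current",
                              SUB_PROCESSOR, TIME_CHECK, str(ms)], check=True)
          subprocess.run(["powercfg", "/setactive", "scheme_current"], check=True)

      set_time_check_interval(5000)  # the max value mentioned above
      ```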

  • XenonJFt@alien.topB

    The insanity is that after 3 generations, the Windows kernel still can't prioritize P-core/E-core usage between games and background desktop tasks; parking the E-cores still gives better results. The AMD cache tradeoff was kinda acceptable in the 7950X3D vs. 7800X3D debate, because games can't utilize that many cores anyway.

    And then there are all the BIOS and mobo hoops you have to jump through to be compatible, for just 2 titles.

    Intel has mostly abandoned ship on gaming competitiveness. The clock speeds and high TDP at least have their use in productivity workloads.

  • battler624@alien.topB

    I have no idea how it works, but it's probably moving everything that isn't the game itself away from the P-cores and keeping the game restricted to the P-cores.

    • Knjaz136@alien.topB

      The question is why Windows doesn't have that option:
      instead of core affinity, just restricting cores to a manually defined task and forbidding everything else.
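
      You can crudely approximate that per-process today; it's essentially what Process Lasso automates. Here's a rough sketch with psutil, where the core indices and the game's process name are assumptions (check your own CPU topology first):

      ```python
      import psutil

      # Assumed topology for a 12600K-like chip: 6 P-cores with HT = logical CPUs 0-11,
      # 4 E-cores = logical CPUs 12-15. Adjust to your own layout.
      P_CORES = list(range(0, 12))
      E_CORES = list(range(12, 16))

      GAME = "game.exe"  # hypothetical process name

      for proc in psutil.process_iter(["name"]):
          try:
              if proc.info["name"] == GAME:
                  proc.cpu_affinity(P_CORES)  # keep the game on P-cores only
              else:
                  proc.cpu_affinity(E_CORES)  # shove everything else onto E-cores
          except (psutil.AccessDenied, psutil.NoSuchProcess):
              pass  # protected/system processes can't be touched
      ```

      Forcing literally everything else onto the E-cores is heavy-handed, though, which is presumably part of why Windows doesn't expose it as a built-in option.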

      • F9-0021@alien.topB

        Because Microsoft, the biggest software company in history, cannot make good software.

  • Due_Teaching_6974@alien.topB

    Intel's E-cores doing what they're supposed to in 2 games, 2 years after their debut, and only on their newest CPU lineup. Peak Intel engineering right here.

    • msolace@alien.topB

      Which just shows the scheduler is wrong, something people who cared to put the effort in already fixed manually with Process Lasso. The only missing piece is random kernel threads jumping onto P-cores. AMD's scheduler isn't perfect either, and both companies are going big/little, so there's plenty of room to keep improving.

      • CascadiaKaz@alien.topB

        correction: it shows that Intel Thread Director is wrong, and that the scheduler shouldn’t trust it.

        • SkillYourself@alien.topB

          Thread Director doesn't do any directing; it's a set of new registers the OS scheduler is supposed to read for feedback on how well a thread is running on a core. If APO can do it right, it means the scheduler is wrong.

          15.6 HARDWARE FEEDBACK INTERFACE AND INTEL® THREAD DIRECTOR

          Intel processors that enumerate CPUID.06H.0H:EAX.HW_FEEDBACK[bit 19] as 1 support Hardware Feedback Interface (HFI). Hardware provides guidance to the Operating System (OS) scheduler to perform optimal workload scheduling through a hardware feedback interface structure in memory.
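
          To see that this is just enumerable CPU state, here's a minimal sketch of reading that CPUID leaf on Linux through the cpuid driver (assumes modprobe cpuid and root; bit 19 is per the SDM section above, and bit 23 is, as I understand it, the Thread Director enumeration):

          ```python
          import os
          import struct

          # CPUID leaf 0x06 via the Linux cpuid driver: the file offset selects
          # the leaf, and a 16-byte read returns EAX, EBX, ECX, EDX.
          fd = os.open("/dev/cpu/0/cpuid", os.O_RDONLY)
          try:
              eax, _ebx, _ecx, _edx = struct.unpack("<4I", os.pread(fd, 16, 0x06))
          finally:
              os.close(fd)

          print("HFI (CPUID.06H:EAX bit 19):            ", bool(eax & (1 << 19)))
          print("Thread Director (CPUID.06H:EAX bit 23):", bool(eax & (1 << 23)))
          ```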

          • CascadiaKaz@alien.topB

            *facepalm* Are you daft?

            how the scheduler gets information from ITD doesn’t change what ITD does.

    • AgeOk2348@alien.topB

      And they refuse to let people buy CPUs without them; can't let AMD win every benchmark that the vast majority of gamers will never use.

    • splerdu@alien.topB

      I mean, unless you're Apple and have full top-to-bottom control of your hardware and software stack, it takes some time for software to catch up with the hardware.

      Took a while for games to use MMX, SSE, and AVX. Stuff that uses AVX-512 can probably be counted on one hand.

      Good ray-traced games are becoming mainstream just now, two whole generations after the GeForce 20 series.

      I do begrudge Intel for holding this back from 12th and 13th gen users though.

      • p3ngwin@alien.topB

        Took a while for games to use MMX

        Even Intel's 1st iteration of MMX was a kludge, as it used the floating-point unit, so you could use either FP or MMX, but not both simultaneously. o.O

        Took a while for that to be separated so you could gain the benefits of both together.

        Intel also added 57 new instructions specifically designed to manipulate and process video, audio, and graphical data more efficiently.

        These instructions are oriented to the highly parallel and often repetitive sequences often found in multimedia operations.

        Highly parallel refers to the fact that the same processing is done on many different data points, such as when modifying a graphic image.

        The main drawbacks to MMX were that it only worked on integer values and used the floating-point unit for processing, meaning that time was lost when a shift to floating-point operations was necessary.

        These drawbacks were corrected in the additions to MMX from Intel and AMD.

        https://www.informit.com/articles/article.aspx?p=130978&seqNum=7

    • F9-0021@alien.topB

      More like peak Microsoft engineering, since this is something that was always supposed to be done by the operating system. Microsoft is so awful Intel had to do it themselves.

    • siazdghw@alien.topB

      It’s the exact opposite of what you’re saying.

      Intel's E-cores + Thread Director work perfectly fine 98% of the time, but there are edge cases where the Windows scheduler can't get it right, even with the hints from Thread Director, and that's where APO comes in: to manually force the correct scheduling.

      Also, let's not pretend that AMD isn't suffering scheduling issues themselves. The 7950X3D and 7900X3D are shunned because they have WORSE scheduling in games, as they rely on the Windows scheduler to just try and figure things out itself, and that doesn't usually work with 2 CCDs where one has a higher frequency and the other more cache.

      • shopchin@alien.topB

        More importantly, do you think the fix will come for 12th/13th gen Intel? You seem to know what you are talking about.

  • GenZia@alien.topB

    From what I’m seeing, even with APO enabled, only 4 E-Cores are actually doing anything. The rest of the cluster is parked, doing absolutely nothing.

    Actually, that's not quite true. They're still consuming power, however minuscule it may be!

    And that’s one of the many reasons I don’t understand why Intel is stuffing so many E-Cores into their CPUs. Their practicality in real-world scenarios is mostly academic from the perspective of most users.

    A quad-core or - at most - an octa-core cluster of E-Cores should be more than enough for handling ‘mundane’ background activity while the P-Cores are busy doing all the heavy-lifting.

    Frankly, I just can't help but feel like the purpose of this plethora of little cores is to artificially boost scores in multi-core synthetic benchmarks! After all, there are only a handful of ‘consumer-grade’ programs which are parallel enough to actually make use of a CPU with 32 threads.

    Anyhow, fingers crossed for Intel’s mythical ‘Royal Core.’ A tile-based CPU architecture sans hyper-threading sounds pretty interesting… at least on paper.

    • soggybiscuit93@alien.topB

      More E-cores aren't for “mundane background tasks”; they're there to maximize MT performance in a given die space.

      It's why the 8+16 14900K competes with the 7950X in MT applications, but would clearly lose if it were the alternative 12+0 configuration.

      Most people, myself included, would struggle to really utilize 32 threads. But the 7950X and 14900K exist for those that can or may be able to.
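
      A back-of-the-envelope version of the die-space argument, using made-up but commonly cited ballpark figures (an E-core at roughly 1/4 the area and 1/2 the MT throughput of a P-core; not Intel's official numbers):

      ```python
      # Hypothetical normalized figures, purely illustrative.
      P_AREA, P_PERF = 1.00, 1.00  # P-core: baseline area and MT throughput
      E_AREA, E_PERF = 0.25, 0.50  # E-core: ~1/4 the area, ~1/2 the throughput

      for p, e in [(8, 16), (12, 0)]:  # 14900K-style 8+16 vs. a hypothetical 12+0
          area = p * P_AREA + e * E_AREA
          perf = p * P_PERF + e * E_PERF
          print(f"{p}P+{e}E: area={area:.0f}, MT throughput={perf:.0f}")

      # 8P+16E:  area=12, MT throughput=16
      # 12P+0E:  area=12, MT throughput=12 -> same silicon, ~33% more MT for the hybrid
      ```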

      • GenZia@alien.topB

        They’re to maximize MT performance in a given die space.

        And I never said otherwise.

        I explicitly mentioned that more E-Cores can boost scores in multi-threaded synthetic benchmarks and - in turn - any parallel workload.

    • VankenziiIV@alien.topB

      You think E-cores are only for synthetics? What if I showed you that 6P+6E or 6P+8E can defeat 8P in real-world applications?

      • GenZia@alien.topB

        Well, applications are definitely getting optimized for 8C/16T as of late, so it wouldn't be all that surprising.

        Hyper-threaded threads (hyper-threads?) can’t match an actual core by design, after all.

        However, I'm merely questioning the addition of 8+ E-Cores in Intel's high-end SKUs. I believe I explicitly mentioned that I can see the potential of integrating 4 to 8 E-Cores into a CPU.

        • VankenziiIV@alien.topB

          What if I showed you Intel's 12th gen 6P+6E defeating AMD's 8P in real-world applications 2 years ago?

          • GenZia@alien.topB

            A quad-core or - at most - an octa-core cluster of E-Cores should be more than enough for handling ‘mundane’ background activity while the P-Cores are busy doing all the heavy-lifting.

        • carpcrucible@alien.topB

          It’s perfectly reasonable for high-end SKUs.

          You either have single-threaded workloads or games that might use 6-8 threads at most. Or you have “embarrassingly parallel” workloads like rendering or all sorts of scientific computing that will use as many cores as you have.

          If you literally only game on your PC, then I guess just disable the E-cores.
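
          To illustrate the "embarrassingly parallel" case: every extra core, P or E, adds throughput because the work units are fully independent. The workload function below is made up:

          ```python
          import math
          from multiprocessing import Pool

          def render_tile(seed: int) -> float:
              """Stand-in for an independent work unit (a render tile, a sim cell, ...)."""
              return sum(math.sin(seed * i) for i in range(100_000))

          if __name__ == "__main__":
              # No shared state between work units, so the pool scales with
              # however many logical CPUs (P- or E-core) the OS exposes.
              with Pool() as pool:  # defaults to os.cpu_count() workers
                  results = pool.map(render_tile, range(256))
              print(f"processed {len(results)} independent tiles")
          ```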

    • liesancredit@alien.topB

      The 10900K was the last well-designed Intel CPU. Just 10 straight-up powerful cores. That's how a CPU should be.

      • dudemanguy301@alien.topB

        Ah yes, who could forget the absolute TRIUMPH of the same tired architecture recycled for the 4th time in a row, on the same tired process recycled for the 5th time in a row.

  • advester@alien.topB

    The most interesting thing is that APO dropped the power from 190W to 160W while increasing the performance.

  • Berengal@alien.topB

    To me this looks like it's too early to draw any definite conclusions about APO. I get that it's tempting to conclude that they only support 14th gen CPUs as some sort of planned obsolescence scheme, but the fact that it also only works in two games really weakens that idea and makes the early-release explanation fit much better. So don't judge them on the current state of APO; they may provide support for older gens in the future. But also don't give them credit for it or factor it into the value of the product until APO becomes useful in practice, not just as a tech demo. This discussion is rather pointless at the moment. The technical details of how it works are much more interesting to discuss.

    • kasakka1@alien.topB

      If Intel, in its response to HUB, says “We have no plans to support previous generations for APO”, how else are you supposed to interpret it?

      Ok, plans may change, but it’s very possible Intel will simply keep this locked on 14th gen just to be able to sell them.

      For me as a 4K gamer, it doesn’t seem like APO brings anything to the table, but it’s still disappointing to see software feature gatekeeping without a technical requirement behind it.

      • siazdghw@alien.topB

        If Intel in the response to HUB says “We have no plans to support previous generations for APO”, how else are you supposed to interpret it?

        When a reviewer or journalist reaches out to companies, they usually get a response from someone who has no technical knowledge or insight into future products or changes, unless the inquiry is very serious, in which case it gets forwarded internally.

        I'm not saying this won't possibly stay exclusive to 14th gen and beyond, but this response is almost certainly from someone who has zero knowledge of how APO works, what the team working on APO is doing, whether it will come to older generations, or what games they are currently testing.

        • MdxBhmt@alien.topB

          When a reviewer or journalist reaches out to companies, they usually get a response from someone that has no technical knowledge or insight on future products or changes, unless the inquiry is very serious and then it gets forwarded internally.

          And that’s on them, not on journalists or consumers. It’s their job to have messaging in line with the technical side of the business.

          Hell, if a PR team is putting out such explicitly stated messaging without consulting engineering, it's frankly a dysfunctional corporate PR operation inventing stuff on the spot. We should take them at their word and act accordingly. Let them eat the damn negative PR from an anti-consumer response. They could have phrased it differently if they wanted some margin of interpretation.

  • zakats@alien.topB

    Ah, right, that’s why I wouldn’t have bought Intel. My fault for forgetting.

  • Knjaz136@alien.topB

    Isn't this basically a thread scheduler fix that makes E-cores do what they're actually supposed to do?

    And they are reserving this fix for 14th gen only for, seemingly, no reason? With a good chance that they had this fix for a while, but management decided to reserve it for 14th gen?

    This is what I’m reading from their reply to HUB.

    • reddanit@alien.topB

      Well, it does look like it’s just a scheduler fix at the very surface level. On the other hand it does seem to need some firmware support and presumably there is some reason why it only supports 2 games. So maybe it is something more complicated?

  • ktaktb@alien.topB

    Damn.

    Slimy as hell. A really bad move here. Hopefully every gamer channel provides similar coverage, and a legion of 12th and 13th gen owners becomes aware of this and really pushes back (as the gentleman in the video also hopes).

    I know reviews mentioned some wonky stuff going on with E-core and P-core scheduling on 12th gen when I purchased a 12600K and a 12700K for two machines for my home.

    I’m feeling foolish for approaching this in good faith and assuming that Intel/Microsoft/game developers would continue to iterate on the issues and make software-based optimizations readily available.

    If I had realized, I would own AMD systems right now.

    It's a very poor decision on their part to roll out APO in this way. If I were compelled to upgrade from 12th gen for more performance, this APO mess guarantees that I'd move my platforms to AMD.

    • siazdghw@alien.topB

      The example you chose is a terrible one. For those that don't know, that behavior is INTENDED by the HandBrake developers and has been a known thing since Alder Lake's launch. The developers never wanted HandBrake to use 100% of your system, so it's flagged as a low-priority process and you can still use your PC without it lagging out while encoding. The scheduler sees that and frees up the P-cores when you put another window in focus, so you can use your system without lag while the E-cores encode in the background.

      If you go through the GitHub issues you'll see the developers tell people they can override this manually, but the current implementation is exactly how it's supposed to work.

      You can't blame Thread Director or Windows scheduling for this specific case with HandBrake, as it's what the developers intended.
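
      For reference, that "low priority" is just the process priority class, which anyone can inspect or override themselves. A sketch with psutil on Windows; the worker process name is an assumption, so check Task Manager for the real one:

      ```python
      import psutil

      TARGET = "HandBrake.Worker.exe"  # assumed process name; verify in Task Manager

      for proc in psutil.process_iter(["name"]):
          if proc.info["name"] == TARGET:
              print("current priority class:", proc.nice())
              # Override the developers' low-priority default so the scheduler
              # no longer vacates the P-cores when another window takes focus.
              proc.nice(psutil.NORMAL_PRIORITY_CLASS)  # Windows-only constant
      ```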

  • imaginary_num6er@alien.topB

    “I asked them: is there a technical reason why 12th and 13th gen parts aren't supported, and if not, will they be included in the future? Their response to that question was as follows: Intel has no plans to support prior generations of products with Application Optimization. That's a really garbage response, to be perfectly blunt about it.”

    Yeah, let's have people rush to upgrade to 14th gen when upgrading already had questionable value. This APO feature will die in obscurity: Intel will realize 14th gen is not being adopted, and unless they want a repeat of XeSS, they will cut their losses and decide not to invest resources into a feature that barely anyone uses.

    • Jesburger@alien.topB

      Not being adopted? Dell, HP, and Lenovo will slowly stop selling 13th gen and move on to 14th gen, like they do every year. Businesses will buy the computers with the biggest generation number, as they always do. Gamers on Reddit aren't the huge market you may think they are for these companies.

    • soggybiscuit93@alien.topB

      Not if the game library keeps increasing and APO is supported on all future Intel CPUs.
      It really seems to be a software optimization to better leverage E-cores in gaming to improve performance. I don't see how that feature is going to die, as Intel seems committed to hybrid for the foreseeable future.

    • Put_It_All_On_Blck@alien.topB

      unless they want a repeat of XeSS, they will cut their losses and decide not to invest resources into a feature that barely anyone uses.

      XeSS is in close to 100 games now, and more people are using XeSS than even own Arc GPUs, as it has better quality than FSR and works on AMD and Nvidia GPUs too. Also, Intel has already marketed Meteor Lake + XeSS, and they are expecting around 100 million people to buy MTL in 2024.

      If anything, XeSS has been the most successful part of Intel's consumer GPU push.

      • AgeOk2348@alien.topB

        as it has better quality than FSR

        *Depending on the game. Spider-Man and Hogwarts Legacy, for instance, have much worse ghosting with XeSS than with FSR, so it's kinda useless for those.

  • aj0413@alien.topB

    This feature is a lot like DLSS 1: it was cool to follow, and for some of us early adopters to trial, but not something anyone should be basing any serious discussion/evaluation on.

    • from someone who upgraded every RTX gen specifically for DLSS

  • DktheDarkKnight@alien.topB

    So APO is just Intel fixing the E-core issues? Whoa. I thought Intel had stumbled onto something special when they mentioned per-application optimization.