Get 1 answer in 10 microseconds: use a CPU.
Get 1000 answers in 1000 microseconds: use a GPU.
Yeah, I prevent fast charge by charging from a USB port on my PC. Typically I plug in at 30-40% and stop at 70%. I do have a crack though… in the screen protector, which I will get around to replacing eventually, I swear.
I agree with you 95%, but a case can’t stop the battery from eating itself after 500 cycles or 4 years, so that does need to be replaceable (at a workbench, with proper tools, by someone with a modicum of care and patience).
(In fact, to some degree cases make it worse, by holding heat in during charging.)
There is a lot more to e-waste than just repairability. There are the recycled materials in the initial phone. The quality of the components. The sturdiness of the phone. Do people trade in their phones so they can be recycled? Is there even a trade-in program for this phone? What percentage of the phone is recycled after use?
This doesn’t matter. E-waste is a crock of shit. All of the phones you will ever use over your lifetime will fit in your coffin with you, there’s nothing seriously poisonous in there or it wouldn’t be safe to carry a phone around in sweaty pockets, and the recoverable raw material value is approximately 0% of the manufacturing cost of a phone.
Apple’s “recycling” program is half virtue signal, half sneaky way of keeping devices off the used market. Which, by the way, is the only way real value is ever recovered from old phones. Recycle is the last R for a reason.
How many years does the phone get updates?
This, on the other hand, is very important. The real reason disposable and unreliable phones are bad is that getting a new phone sucks. Search costs suck, transaction costs suck, the “features” that the new phone comes with inevitably suck, and migrating data to a new device sucks. Which is at least partly intentional. Observe one scumbag Android developer cheering about the prospect of users no longer being in control of their own data.
But do be aware that by building your NAS / homelab around x99 instead of OEM desktop Skylake, you are spending quite a lot of electricity to buy experience with high-port-count platforms.
Cache is not a different thing than single thread performance. Cache is part of single thread performance.
* only on Intel, which has the L3 made out of slices attached to each P-core or E-core cluster (x4).
AMD segregates its L3 at the CCX level, so every part made from the same die set has the same L3. There’s a bit of a complication with the 12- and 16-core parts, because if all the threads are working on the same data the L3 is effectively 1-CCD-sized, but if they’re working on different data (like with make -j, VMs, or some batch jobs), you get the benefit of both CCDs’ worth of L3.
Average doesn’t matter. If the game I play uses parallelism well, I don’t care about the ones that don’t.
The difference in boost clock is only ~2%, so anything more than that is either due to core count or less (soft) thermal throttling from spreading the heat across more die area. And since they tested with a 360mm AIO, it’s probably not soft throttling.
100 MHz is only 1.9%, and the L2 cache is private per core. Both the 7600X and the 7800X have 1 MiB of L2 cache per core.
Blender seems like it should be pretty close to embarrassingly parallel. I wonder how much of the <100% scaling is due to clock speed, and how much is due to memory bandwidth limitation? 4 memory channels for 64 cores is twice as tight as even the 7950X.
Eyeballing the graphs, it looks like ~4 GHz vs ~4.6 GHz average, which predicts a speedup of 4000×64 / (4600×32) = 1.739. Assuming a memory-bound performance loss of x, we can solve 4000×64×(1−x) / (4600×32) = 1.64 for x = 5.7%.
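The back-of-envelope math above can be checked in a few lines. The clock averages (~4.0 GHz and ~4.6 GHz) are eyeballed from the graphs, and 1.64 is taken as the measured speedup, so treat all three numbers as assumptions:

```python
# Eyeballed inputs: ~4.0 GHz average on the 64-core part,
# ~4.6 GHz on the 32-core part, 1.64x measured speedup.
clk_64, cores_64 = 4000, 64   # MHz, core count
clk_32, cores_32 = 4600, 32
measured_speedup = 1.64

# Clock-and-core-count-only prediction:
ideal_speedup = (clk_64 * cores_64) / (clk_32 * cores_32)

# Solve clk_64*cores_64*(1-x) / (clk_32*cores_32) = measured_speedup for x:
x = 1 - measured_speedup / ideal_speedup

print(f"ideal speedup: {ideal_speedup:.3f}")   # 1.739
print(f"memory-bound loss x: {x:.1%}")         # 5.7%
```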
Any numbskull can figure out how to do it, given the assigned task of doing it, and there doesn’t seem to be any unique value in the various ways of doing it showcased here. (Personally, I’d prefer a very short cable to allow for mechanical tolerances in fan mounting positions and long-term reliable wiping electrical contact.)
I hope the court focuses on the questions: 1) is the idea that it’s something you’d want to do itself patentable, and 2) is the patent written in a way that covers it?
Static electricity, you say?
But /r/buildapc told me that ESD is basically a myth and modern hardware is immune!
/s
Lots of good technical information in this one. I wish this sub’s reflexive distaste for Linus Sebastian wouldn’t bury it. Way better than the goddamn Verge.
OLED display is thinner which allows the battery to be physically larger. There are also chemistry changes.
Valve did not switch to hall-effect analog sticks… because they weren’t satisfied with their reliability??? AFAIK, reliability is the raison d’être of hall-effect analog sticks. I notice I am confused.
Machine screws into metal screw bosses instead of self-tappers, so it can be disassembled and reassembled less carefully.
Battery says limited charging voltage 8.9 Vdc if I’m reading the sticker right, which is 4.45 V/cell. Yowza! If anybody can link any papers about cycle life of state-of-the-art chemistries at that voltage please do. It sounds super hot to my 4.2 V sensibilities.
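The per-cell figure is just the pack voltage split across the series cells. This assumes a 2S pack (two cells in series), which is implied by the 8.9 V sticker value:

```python
# Per-cell charge voltage from the pack-level sticker figure.
# Assumption: 2S pack (two Li-ion cells in series).
pack_voltage = 8.9      # V, "limited charging voltage" from the sticker
cells_in_series = 2

per_cell = pack_voltage / cells_in_series
print(per_cell)  # 4.45 V/cell, vs. the traditional 4.2 V limit
```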
High-end model has a transparent shell. Alas, not Retro™ Purple.
Current monolithic Ryzen and Core i can’t power down to that extent.
Why not? I get that the LP E-cores are optimized for lower voltage/frequency than generalist cores historically have been, but monolithic chips can use power gating and multiple voltage domains too.
Video playback, for example, can take place on the LP E-cores, so expect long battery life in that scenario.
But what about video playback in a web browser? With 30+ background tabs?
If Meteor Lake manages to avoid regressing real-world battery life, I will be pleasantly surprised.
Ultra-high density chips is exactly what you want to support the largest possible memory amounts and speeds with minimal ranks. With 32 Gib chips, you could build 32 GiB single-rank UDIMMs, or 64 GiB dual-rank.
That means up to 128 GiB of RAM in mini-ITX!
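A quick sanity check of that capacity math, assuming the standard layout of eight x8 chips per rank on a 64-bit non-ECC UDIMM and two DIMM slots on mini-ITX:

```python
# DRAM capacity arithmetic for 32 Gib (gigabit) chips.
# Assumption: x8 chips on a 64-bit non-ECC UDIMM -> 8 chips per rank.
chip_gib = 32          # Gib per DRAM chip
chips_per_rank = 8     # 64-bit bus / x8 chips

rank_gib = chip_gib * chips_per_rank        # 256 Gib per rank
single_rank_GiB = rank_gib // 8             # gigabits -> gigabytes
dual_rank_GiB = 2 * single_rank_GiB

# Mini-ITX boards have two DIMM slots:
max_ram_GiB = 2 * dual_rank_GiB
print(single_rank_GiB, dual_rank_GiB, max_ram_GiB)  # 32 64 128
```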