Looking at all these charts, it's not hard to come to the conclusion that if you're cost-constrained, and the only stressful thing you're doing is gaming, the best strategy is to buy a 7600X and put the rest of your budget toward your GPU. But if you also need to do something that can use a lot of threads (compiling, encoding, etc.), the 7950X3D is better, by a small amount, in most games.
Or the 5800X3D. AMD made it too good. I think 3D V-Cache will not be worthwhile until the 8000- or 9000-series Ryzen, when they get the clock speed throttling resolved.
3D V-Cache will absolutely be worth it in Ryzen 7000. Just wait for the 7800X3D, which has a homogeneous core design. I had suspicions that AMD wouldn't figure out how to properly optimize the dual-CCD V-Cache designs, and that seems to have been correct. Hopefully AMD gets this fixed for Zen 5, because V-Cache is an amazing technology when it works.
I fully agree with you there; I shared the same suspicion, and so far it seems to have come true. It's another good reason to keep my good and trusted, lovely Intel 13900KS overclocked to 6.4 GHz (yes, that is for all P-cores simultaneously, and of course the E-(core)waste is turned off), and to skip this only half-hearted implementation of AMD's 3D V-Cache technology. Don't ask: I am running a high-flow industrial liquid cooling system that keeps temperatures precisely 2 degrees above the respective dew points. Yes, it is far larger than a full tower and does not rely on those commonly found small, crappy radiators and Noctua fans, as it is used to cool parts of my laboratory equipment; I've simply added an additional loop (with individual target temperature control) for my processor and graphics card, a trivial matter by the way.
IMHO the 7950X3D and 7900X3D have been intentionally crippled for various product-placement and business-strategy reasons, and I absolutely abhor such practices and condemn them in the strongest way possible. AMD could and should have equipped the 7950X3D with its 3D V-Cache technology on both chiplets instead of turning it into an undead, zombie-like hybrid that is neither fish nor fowl. I really despise such artificial hindrances to what could have been a fully functional and well-rounded product. In fact, I would gladly have paid $1,000 for a hypothetical "7950X2*3D" if I had gotten the chance to buy something like it, with more cache on all chiplets; what a shame and a waste, first and foremost. Anyway, since my employer foots the bill, I've already placed my order (with my employer; I don't know when it gets fulfilled by Intel) for a 4-socket system containing four Intel Sapphire Rapids 8490H processors (with all accelerators enabled) that will hopefully help me get over this enormous disappointment AMD has caused me here. If AMD had delivered a solid product in the form of a 7950X2*3D I might have opted for an Epyc-based system, but since they decided to artificially cripple what could have become a very promising product, it is bye-bye Epyc 9654 instead, and rightly so! Thankfully, per-system core density with multiple sockets is higher with Intel Platinum processors anyway.
I concur: 1x X3D CCD seems the way to go for AMD-optimized titles only and on price/performance. However, Epyc LC, 8490H 4-way, or 9654 2P for containerized/virtualized/partitioned workloads are entirely different realms. Have fun with the Intel SR applied-science project. mb
Dannyzreviews actually found that e-cores on average increase performance in games. It's something like a -5% to +10% performance delta, and roughly 3% faster on average, if I recall correctly. That video came out on YouTube fairly recently, so give it a watch.
I noticed the same thing while overclocking. Disabling e-cores to get higher-clocked p-cores actually hurt overall FPS in Battlefield 2042, a highly CPU-bound game. Unfortunately, e-cores do not overclock well, and they are more power-hungry than you'd think, which limits p-core overclocking potential. Alder Lake is especially dicey with the silicon lottery, I hear. I somewhat lucked out, but there are tons of people with identical-stepping 12700Ks running a full 0.100V higher than mine; the microcode or something in the chip seems to dictate a stock voltage plus offset. Mine never exceeds 1.300V stock, while many are at 1.35V+ on the same motherboard, BIOS, and settings.
Think about the reason why that might happen: you have background tasks, and for some games that use 6-8 cores (there are a few of them around), background programs DO get in the way. So, 8 cores is good for games, but having extra cores will take care of that background stuff. This is why I went with a Ryzen 9 7900X: I have the extra four cores for anything in the background that might get in the way of games or whatever else I may be doing.
But the 7900X has the same 6+6 configuration as the 3D CPU. So those extra cores are not on the same CCD. Any game using more than 6 cores will have to use both CCDs at the same time.
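For anyone who wants to check which logical CPUs share which L3 (i.e., which CCD they sit on), the topology is visible from software. A minimal Linux-only sketch, assuming the usual sysfs layout where cache `index3` is the L3 level:

```python
# Group logical CPUs by the L3 slice they share; on dual-CCD Ryzen parts,
# each group corresponds to one CCD.
from collections import defaultdict
from pathlib import Path

groups = defaultdict(list)
for cpu in Path("/sys/devices/system/cpu").glob("cpu[0-9]*"):
    shared = cpu / "cache" / "index3" / "shared_cpu_list"
    if shared.exists():
        groups[shared.read_text().strip()].append(cpu.name)

for l3_domain, members in sorted(groups.items()):
    print(f"L3 shared by CPUs {l3_domain}: {len(members)} logical CPUs")
```

On a 7900X this should print two groups of 12 logical CPUs each (6 cores plus their SMT siblings per CCD).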
If they gave us the very best of their tech at an acceptable price, then consumers would not upgrade every short interval of time, they would not consume as much as desired, and profits would dwindle. The spice must flow.
I don't see that as AMD intentionally crippling the 7950X3D.
AMD has to balance price vs. market demand and diminishing returns. For end users, the main benefit from the extra cache is mostly just gaming, and dumping in more cache doesn't necessarily mean games will run faster. Then we have to look at thermals as well.
Also, even though you are willing to pay $1000, it doesn't mean everyone will. AMD doesn't make decisions based on what a few individuals want.
No matter how you look at this, improperly optimized or not, it's impressive AMD pulled this off with reliable performance gains. The optimizations are all in software/drivers at this point, which is a strange thing to say about a CPU that hasn't had any 'architectural' modification, unless one considers L3 cache part of the 'architecture.'
But I think we can all agree this is going to be nuts when they stack V-Cache on both CCDs, presumably using a cheaper manufacturing process to possibly keep costs identical or lower.
The trick now is going to be getting devs to optimize for large L3 caches. That shouldn't be hard given AMD's presence in the console market, possibly pitching this as a next-gen console design win, but at the same time they have a fraction of Intel's marketshare, and traditionally devs have slept with Intel while AMD is on the sofa.
AMD knows exactly how to add X3D to all CCDs, just like the existing 7773X and the upcoming 9004X series. The problem is, however, that those server chips are NOT designed for gaming, and don't have crazy high clock rates. For gaming, you need high clock on at least some cores, and adding X3D severely limits clock speed. This is the balance they try to strike.
A 50W TDP penalty is what you pay for adding X3D to one CCD, and if you add X3D to both, you pay a 100W TDP penalty. That makes the total power budget a mere 70W, which is probably not good for single-core speed.
35W per CCD is kinda low, but if you put 12 of those together you still get a total TDP north of 400W, which is how Genoa-X gets its performance. On a desktop platform, for gaming? That could be really bad.
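For what it's worth, the arithmetic above checks out against the published specs if you take the 7950X's 170 W TDP as the baseline; the 50 W-per-CCD penalty is the commenter's assumption, not an AMD figure:

```python
base_tdp = 170               # Ryzen 9 7950X rated TDP, watts
penalty_per_x3d_ccd = 50     # assumed cost of stacking V-Cache on a CCD

one_ccd = base_tdp - 1 * penalty_per_x3d_ccd  # 120 W: matches the 7950X3D's rating
two_ccd = base_tdp - 2 * penalty_per_x3d_ccd  # 70 W: the "mere 70W" above
per_ccd = two_ccd / 2                         # 35 W per CCD
genoa_x = 12 * per_ccd                        # 420 W across 12 CCDs: "north of 400W"
print(one_ccd, two_ccd, per_ccd, genoa_x)
```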
That being said, 35W-rated laptop Ryzens can still sustain 3.x GHz, so that is not as low as it sounds. Heck, my laptop runs at less than 2.5 GHz sustained, and I have never had a problem running any productivity tools, though clearly I do not game. The lower clock hurts server applications very little, so the "all X3D" approach works for that market.
Getting a higher TDP while having X3D is kinda difficult, and I don't think they are getting there any time soon. The problem is the thickness of that extra stacked chip, and that thickness translates to thermal resistance. Moreover, the thermal gradient in the X3D die causes it to expand more near the CCD and less near the IHS, giving it a trapezoidal deformation. This applies a tremendous amount of stress to the hard material (on the order of hundreds of MPa to a few GPa), causing both the CCD and the X3D die to fail prematurely.
So, mother nature doesn't seem to like the concept of X3D when single core performance is important, and AMD must figure out a way to solve this.
Buying a mid-range CPU and putting the savings into the GPU has been the best short-term bang for the buck in a gaming system for years. The main risk, if you're someone who keeps the base system and swaps in faster GPUs every other year or so, is that the mid-range CPU might not age well.
A fairly recent example: while fine at the time, Intel's 4-core/4-thread i5 processors ended up becoming CPU bottlenecks several years before the 4-core/8-thread i7s from the same generation did.
Still running a system from 2015: an Intel HEDT 5960X (8C/16T) OC'd to 4 GHz, 32GB of DDR4-2166 memory, an Asus Rampage V Extreme X99 mobo, and a Thermaltake 1200W PSU, all 7 years old. I was running dual 980 Tis in SLI when I put it together, then switched to a 2080 Ti 4 years ago. Last month the 2080 Ti died, so now I'm sporting a brand-new Gigabyte Aero OC 4080, running pretty well atm (Win 10 Pro with all the latest patches and feature updates). I did recently mothball my original Intel 750 1.2TB NVMe SSD card (it was very close to max endurance) and put in a 2TB Kingston Fury NVMe SSD (running in 3.0 mode, since the mobo only has a single 3.0 M.2 slot).
Yes, but that only applies to Intel. AMD systems are built with the future in mind.
For instance, if you hopped on AM4 with a low budget and built around an R3 1400 and threw the rest at a GTX 1060, you got a good deal, better than an R5 1600 and GTX 1050. And you could upgrade both the CPU and GPU without having to change other components like the case, motherboard, cooler, PSU, etc.
For gaming, there aren't many gains to be had from upgrading the CPU frequently, unlike the GPU. For that reason, it doesn't make sense to upgrade to Zen+ or to regular Zen 3. So I'd envision going from the R3 1400 to the R5 3600, then to the R7 5800X3D. Meanwhile, the GPU upgrades can go to a GTX 1660 Super or RTX 2060 Super, then an RTX 3060, and lastly an RTX 4060 Ti. That's the more value-oriented way of gaming and upgrading from 2017 to 2021, a 5-year period akin to a console generation.
ROFL! You start with about $420-450 worth of a system and end up paying THOUSANDS of dollars on video card upgrades over 6-7 years. All that during the crypto craze, the scalpers' paradise, and the pandemic nuttery.
What are you even on about? It's perfectly common for someone to start with a midrange system and make upgrades down the road. Sometimes it's a shortage, or pricing, or budgetary restrictions, or even lack of knowledge.
The initial PC may have only been an entry-level USD $600 build. The CPU upgrades would've cost $200 and $400, with the old ones possibly bringing $50 and $100 back (net $450). The GPU upgrades would've cost $200, $400, $350, and $500, and the old cards would've sold for $50, $100, $150, and $250 (net $900).
Which means a savvy shopper would've spent around $1,350 on upgrades in that 5+ year period, with the total coming in at about USD $2,000. That's pretty good value for money.
Someone stupid, as you implied, would blow that budget on a single upgrade, because they're buying from scalpers and not being responsible with their budget. During the past 3 years it was very difficult to get decent parts cheaply, but it wasn't impossible.
Having only six cores is going to be limiting in a number of games going forward, so I'd say an eight-core CPU is a safer bet for those who only game; otherwise, be prepared to replace your CPU with an eight-core or better CPU in the next few years.
That ability to replace the CPU with a newer generation is going to remain an advantage for AMD, since with Intel you can count on any motherboard only being good for 1-2 generations.
I just want to say that if you are really cost-constrained, you might not even want to get the Ryzen 7000 series... The older 5000 series is still selling very well.
The performance gap between the 7000 and 5000 series is evident only if you have a graphics card capable enough to produce that difference. If you are using a card like the 3060, the difference becomes extremely small, since the GPU is the bottleneck.
For gamers on a budget, it's better to scale back on the CPU and put the difference toward a faster GPU. E.g., a 5600 + 3070 costs around the same as a 7600X + 3060 (which might be more expensive after factoring in board and RAM), yet the 5600 + 3070 is clearly faster in gaming.
AMD has been on such a roll lately, but this seems to be quite an underperformer. It's slower and more expensive than the 7950X in anything application-related and trades blows in games. I don't think a few extra frames at 1080p in some titles is going to matter when frame rates are already 150+. It seems like a lot of intellectual and manufacturing effort for very little return. I had thought Zen 4's faster memory subsystem and larger cache would make the extra V-Cache less of a difference than it was with Zen 3, and this review seems to corroborate that opinion.
You are regularly GPU-bound at 1440p, you freaking dingus... 🤦 But the R9 7950X3D will be able to utilize more of a future high-end GPU upgrade in a 1440p or 4K gaming rig than a slower gaming chip would. Why? Because at 1440p, and ESPECIALLY 4K, IT HAS SIGNIFICANT LEFTOVER CPU HEADROOM! (This might be a hard concept for your peanut brain to grasp.)
If all you care about is high-resolution gaming performance with current GPUs, just get an i3-12100F and STFU, while recognizing you'll likely need to replace it when faster GPUs come out.
CPUs are benchmarked for gaming at 1080p for a reason, Mr. Tech Illiterate. 🤦 It's to remove GPU bottlenecking as much as possible so that you're ACTUALLY TESTING THE CPU rather than your graphics card!
Look at the power consumption reduction. This is a real cost. Waste heat from the power supply, extra HVAC loads, more fan speed / noise. These factors are real and have a monetary value as well.
The X3D has significantly more hardware running in a lower power budget. As a result, max boost notwithstanding, I suspect it's generally running at lower clock speeds than the conventional part.
From an engineering standpoint, I'd be interested in seeing what the comparison looks like with both forced to run at identical clocks.
For more even real-world results, can the X3D part have its power limits/clocks increased to match where the non-3D part operates? I know the 5800X3D had minimal overclockability; the rumor mill for the current generation has gone back and forth repeatedly, so I'm unsure what the current/final status is.
This discussion comes up now and then. But right now our stance remains unchanged: when AMD is willing to warranty memory overclocking, we'll start using it as a base setting.
Otherwise, it's disingenuous to test a CPU in a state that's outside its normal operating parameters, and which would result in an RMA being rejected if anything happened to it.
Intel's RMA survey actually asks if you used XMP, as grounds to deny your RMA; they count it as overclocking. AMD, on the other hand, doesn't even ask about memory profiles. I've had to do an RMA for both an i7-7700 and an R7 2700.
Yes, the scheduler clearly isn't smart enough to assign threads to the optimal CCD with any consistency. AMD's "best of both worlds" design ends up being the worst of both worlds about half the time.
They should have just put the 3D V-Cache on both CCDs and avoided this whole mess; anyone putting this (instead of a 7950X) in a workstation/HEDT obviously is running some kind of cache-limited workload and would prefer extra L3 on all cores to this design even if the scheduler worked perfectly.
Fully agree with the content and all arguments made in the above comment. Very good points! I too would have preferred a fully functional product instead of such an, IMHO, half-assed approach, which after all is said and done excels at basically nothing compared to the other good options already out there from Intel and AMD. Shame on you, AMD, for what you have done here! IMHO it's just disgusting. I can't believe I've waited months for something like this; never again, urghhh...
I've waited years to upgrade my i7-860, PCIe 2.0 system. I'm still waiting, now for the Ryzen 7900X3D to pop out. I'm pretty sure some other bad things will happen until then that will prevent me from "upgrading." Hopefully a nuke.
Yes, I am sure as well that some other not-so-good things will happen to prevent me from "upgrading," not least our seemingly never-ending financial and inflationary conditions. The 7950X3D only excelling in gaming, and not in content creation, is disappointing to say the least. Many people actually make money with content creation to pay for their casual gaming luxuries. At an MSRP of USD $699, the 7950X3D is an expensive trip for just gaming, especially since the great majority of gamers are not exactly living at the higher end of the food chain, nor are they the primary breadwinner for their family. What was AMD thinking? The only good news is that AMD usually starts dropping its intro prices on all of its products within a few months. So for me, sitting pretty and free in Mom's basement, there may be hope after all; for now I'll keep on playing 'Wolfenstein' in all of its glory, and a train ride to Berlin!
3D V-Cache and 5.7+ GHz across all cores would have been ideal, but it seems apparent that was not possible to achieve in this generation. Given the choice was cache or speed, I think AMD did the best they could.
If you play a lot of simulation games, particularly games like Factorio, Timberborn, InfraSpace, and Transport Fever 2, and to a lesser degree Civ 6 and Total War, those games will benefit substantially when the game asks the CPU to do all the pathfinding calculations.
For those games, it's easy to have 120fps at the start of the game, but as soon as the game has thousands of units asking for pathfinding at the same time, you will quickly find yourself with 100% CPU usage, 10% GPU usage, and <15fps.
These games benefit greatly from that massive cache on X3D.
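A crude way to see why these workloads are so cache-bound is random pointer-chasing through a working set that either fits in L3 or doesn't. A rough sketch; Python is slow in absolute terms, but the relative gap between the two runs is the point:

```python
import random
import time
from array import array

def chase(n_slots, steps=2_000_000):
    # Follow a random permutation: every access is a dependent, cache-hostile
    # load, much like pathfinding over a large, irregular graph.
    perm = array("q", range(n_slots))
    random.shuffle(perm)
    i, t0 = 0, time.perf_counter()
    for _ in range(steps):
        i = perm[i]
    return time.perf_counter() - t0

# ~2 MB working set (cache-resident) vs ~256 MB (spills past even 96 MB of L3)
print(chase(1 << 18), chase(1 << 25))
```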
7950X3D falls behind 7950X in simulation tests, while 5800X3D outperforms 7950X. That is strange. Could you please clarify which CCD is used in these tests?
I'm looking into it. I'm testing in all three modes in our compute tests to see where AMD's PPM and 3D V-Cache driver gets it right and where it gets it wrong (if at all).
AMD does not have an intelligent scheduler built into their CPUs to handle a heterogeneous design the way Intel does. Instead, they have a driver that is basically just: if a game is detected, then only schedule on the V-Cache CCD. This driver is imperfect even for games, as seen with Factorio, and clearly not optimized for other workloads. The Windows 11 scheduler will typically put threads on the higher-frequency CCD unless the driver tells it otherwise. So when the driver doesn't activate, threads get scheduled on the higher-frequency CCD, and performance ends up worse than the 7950X because the X3D has a lower power limit and all-core turbo.
Yeah, I know the scheduler is basically "if game then cache, else frequency." But Dwarf Fortress and Factorio are also games, just not 3D games. So it seems hard to say which CCD will be used.
Yeah, that's right about the 'scheduler.' It hinges on Xbox Game Bar feeding information to the drivers: (A) if it's a game, park one CCD and use the V-Cache CCD, or (B) if it's not a game, act as normal.
In relation to Factorio, our version in our suite runs without the UI, which is possibly why it wasn't flagged. As this is more of a CPU benchmark in our suite than a game, that is reflected in how the data is gathered.
That's why I did a side test with Factorio on the CCD with V-Cache, which, as expected, boosted performance by nearly double.
I'm in the process of doing additional testing, so bear with me.
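For readers who want to reproduce that side test, pinning the process by hand sidesteps the Game Bar detection entirely. A sketch for Windows using psutil, assuming logical CPUs 0-15 are the V-Cache CCD (the layout on the 7950X3D, but worth verifying on your own system) and a hypothetical install path:

```python
import subprocess

import psutil  # pip install psutil

# Launch the game, then restrict it to the V-Cache CCD before it spawns workers.
proc = subprocess.Popen([r"C:\Games\Factorio\bin\x64\factorio.exe"])  # hypothetical path
psutil.Process(proc.pid).cpu_affinity(list(range(16)))  # CPUs 0-15: CCD0 + SMT siblings
```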
Wow, AnandTech: pulled out the slowest DDR5 money can buy for this review, didn't pair Ryzen 7000 with "sweet spot" DDR5-6000 RAM, and put 5 pages of productivity against 4 pages of gaming benchmarks for a "gaming" CPU. Just WOW. The other site known for biased Intel reviews did a flip-flop as well; what is the world coming to?
Isn't the point of testing missed if the testing is done in a wildly suboptimal fashion that does not reflect a real-world configuration? On one hand you have overpowered cooling, PSU, motherboard, and GPU, but on the other, the real bottleneck for some core designs, RAM, is artificially kept slow. Either you run a top-of-the-line enthusiast spec or some basic discount-shop config. The excuse "we've always been doing that" feels hollow; if you've been doing it wrong, you should correct it, not make it a point of virtue. You might as well have tested everything with a 300W PSU and a basic air cooler and made the same claim. In the same vein, such "CPU testing" doesn't show what people expect to see: how the CPU will behave in a real-life config.
Definitely looks like AMD is using this as a dry run for their rumored Zen 4c architecture. From the rumors I've seen, they were able to pack about two Zen 4c cores into the space of a standard Zen 4 core by repacking the logic and removing half the cache. But that means that to use them as super E-cores, they need a scheduler that can assign work by cache needs, not just by frequency like Intel's does.
By launching the dual-CCD CPUs first, they're able to see and work through those issues now, on a low-risk platform, before they go fully heterogeneous. Very shrewd on AMD's part.
Fully agree with you there! Interesting and thought provoking comment, thanks for sharing your thoughts on this matter! I too believe that some of the practices of AMD are often rather shrewd in their nature...
Not even close to what is going on. The 5800X3D was clearly the final chip for socket AM4, and yes, it was primarily focused on gaming, since that is where the 3D-stacked cache tends to benefit users most.
For the Zen 4 versions, you will see the 7800X3D, which has one CCD, then the 7900X3D and 7950X3D. For the dual-CCD chips, because games don't use more than eight cores, AMD went with the best mix: apply the stacked cache to one CCD and leave the other CCD without it, so it can be clocked higher. Again, games are pretty much the only workloads that show a benefit from the extra cache, so why put the stacked cache on both CCDs?
Zen 4c is going to be for servers that want more cores but don't need as much cache. When core count matters more than the very best per-core performance, Zen 4c makes more sense. There is nothing about stacked cache that applies to Zen 4c.
Intel's E-cores are pretty much garbage, and less functional (they didn't include AVX-512 even when the performance cores did), but more cores DO benefit certain types of workloads. Note that Intel's laptop chips use this almost to deceive consumers: dual-performance-core i7s with efficiency cores tacked on will still be worse than any i7 should be. Intel used to sell a lot of low-priced i7 chips with only 2 cores/4 threads, so now it's dual performance cores with some efficiency cores to be not quite as bad as those dual-core i7 chips used to be.
Now, the 7800X3D will be a single-CCD part, so its one CCD will have the 3D-stacked cache. AMD is staggering things because the volume of sales for the dual-CCD X3D chips is expected to be fairly low, while the 7800X3D will probably sell in very high volume; it's best not to have all of the stacked cache modules go straight to the 7800X3D, where the highest demand will be.
Thanks. For me, your testing suggests pretty strongly that I am best served by simply keeping my current 7800X3D rig — i.e., keep the current motherboard, 7800X3D, RAM, and SSDs but spend the money upgrading from an RTX 3080 to a 4080 or 4090. I WOULD like to see a test of MS Flight Simulator at 1440 and 4K before a final decision, but those should be forthcoming from the MFS forums soon enough.
Do you happen to work for AMD or one of its system partners? If not, you most likely intended to write 5800X3D, right? After all, the 7800X3D hasn't been released yet, at least not to my knowledge at the time of writing.
Sorry, regarding the above post, I currently have a 5800X3D rig, NOT a 7800X3D, which, obviously, does not currently exist. Too bad this website doesn't allow editing of comments.
The 5800X3D is still a VERY GOOD processor for gaming. And given it's around $330 right now and can be paired with DDR4 memory/AM4 motherboards, for the price, it'll be hard to beat for gamers on a budget.
Of course, a discrete graphics card is required, but the more spent on a GPU, the better the gaming performance will be.
You said in the comments you're still doing testing. Will a new article be posted with your updated results, or will this article be modified? These Factorio results are shockingly disappointing. Is there no way to force an application to start on the CCD with the 3D cache?
I certainly am, and tests are running while I do my evening things. As the tidbit on the last page highlighted, Factorio performance SHOULD be higher, but AMD's PPM/3D V-Cache Performance Optimizer drivers didn't pick up the fact that Factorio was running. This is primarily because our benchmark runs without a UI, which is possibly why Xbox Game Bar didn't pick it up.
On that note, I'll be doing more testing with the 7950X3D set to Cache and to Frequency to see if AMD's drivers got it wrong on more of our test suite than just Factorio, and possibly Dwarf Fortress.
As for whether it will be an article update or a fresh page, I would say the update is more likely, but this will be decided when I have the data and can see if it makes THAT much of an impact on performance.
We shall see, but first, I need to collect more data.
So far, all this has done is to sell me on the Ryzen 5 7600X. Maybe I'll wait for that 7800X3D before buying parts to upgrade from my 10th gen Intel CPU.
The driver that parks CPU cores to get Windows to use the correct cores when gaming seems like such a hacky workaround. I think AMD should have stuck with homogeneous cores, or gone the Intel route with a big-little configuration that the OS has an easier time handling.
The problem with big-little is: what is a big core? Some workloads need higher frequency, others more cache, and some a bit of both. I don't understand why DirectX and Vulkan API calls couldn't be automatically routed to the V-Cache CCD and everything else to the frequency CCD. AMD clearly needed to spend more time optimizing scheduling for their processors; Intel spent years building a new hardware scheduler and working directly with Microsoft to maximize performance.
How do you think an "automatically routed" system would actually work? The operating system would need to handle that. It is actually up to Microsoft to "optimize the scheduler," though programs can try to help. And you know Intel had to work with Microsoft, and still screwed it up with Alder Lake at launch, where various things, including some game copy protection, failed horribly.
It seems like they could just start all threads on the frequency CCD, and if cache misses exceed a threshold, move them to the cache CCD. I'm sure it's not that simple, but surely cache-miss metrics are more reliable than the Windows Game Bar...? After all, some games do better with frequency than cache.
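Sketching that policy shows the appeal; the hard part is getting a trustworthy per-thread miss metric cheaply. Everything below is hypothetical: `l3_misses_per_kilo_instr` and `move_to_ccd` are made-up stand-ins for PMU reads and affinity moves, not any real AMD or Windows API:

```python
MISS_THRESHOLD = 20.0  # L3 misses per kilo-instruction; made-up tuning constant
SAMPLE_MS = 100        # sampling window per scheduler tick

def schedule_tick(threads):
    # Start everything on the frequency CCD; promote only demonstrably
    # cache-starved threads, with hysteresis so they don't ping-pong.
    for t in threads:
        mpki = l3_misses_per_kilo_instr(t, SAMPLE_MS)  # hypothetical PMU read
        if t.ccd == "frequency" and mpki > MISS_THRESHOLD:
            move_to_ccd(t, "cache")        # hypothetical affinity move
        elif t.ccd == "cache" and mpki < MISS_THRESHOLD / 2:
            move_to_ccd(t, "frequency")
```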
"Like the Ryzen 7 5800X3D, the soon-to-be-released Ryzen 7 7800X3D shares a turbo frequency of 5 GHz"
I wish my 5800X3D could turbo to 5 GHz; even the 5800X will officially only turbo to 4.9 GHz, while my 5800X3D reports a 4.55 GHz PBO limit and never clocks above 4.45 GHz. Only my 5950X is supposed to reach 5.05 GHz, and I've never seen it actually go there.
In all of these cases HWinfo and Ryzen Master disagree on nearly every measurement.
I am afraid I muddied the waters too much with all the extra data...
Gavin, the Ryzen 7 5800X3D simply does not turbo to 5 GHz; it's 4.5 GHz per your own table in the 5800X3D review. HWiNFO reports a PBO max of 4550 MHz, but the highest clocks ever reported stick at 4450 MHz, which is also the highest recorded effective clock on the very best cores.
tl;dr
Yes, and because I'd not dare leave your question unanswered: all my Ryzens are throttling at 90°C, or at 110 Watts for the single-CCD chips and 140 Watts for the dual-CCD chips, whichever comes first. That is exactly as per design and the PPT settings for Ryzen 5000, I believe.
I use big top-down CPU air coolers in combination with lots of large front-to-back fans; pretty old school, and designed more for the longevity of all components than maximum cooling power, as gaming isn't the primary mission. All my case fans and CPU coolers have been Noctua since 2006, unless they were out of stock. My 5800X3D has a single-fan 140mm NH-C14S, while the 5950X has a be quiet! Dark Rock TF2 rated for 230 Watts, which is obviously exaggerated. Top blowers are becoming hard to get, but I love giving all those onboard components and them DIMMs a bit of a breeze, and the sensors say it works.
I'm not about to play with water in my computers, I got enough things to worry about, and the electricity bill is one of them: I try to aim for optimal efficiency near top performance.
Mainboards are a Gigabyte Aorus Ultra for the 5950X (a 5800X originally), an X570S UD for my son's 5800X3D, and an X570S Aorus Pro AX for my own 5800X3D. These are all what I guess you'd call "value optimized" boards, not an overclocker's choice, because they are designed as workstations with DDR4-3200 ECC RAM that might see an occasional bit of gaming (except for my son's rig, which uses non-ECC optimal speed DDR4-4000 instead).
For overclocking I don't go further than enabling PBO in the BIOS, because I've seen Linux crash with stuff like the curve optimizer, and most of my machines run Linux as the primary OS.
Again, way too much data I guess, for a single understandable mistake you may want to fix.
The 5000 series shouldn't thermally throttle; 7000 does more of that. If possible, get a thermocouple on those heatsinks and verify you're getting the heat out of the CPU and into the sink. It almost sounds like a mounting or paste problem.
For reference, I run a 5950x (under water). It hangs around 235W / 5GHz all day long and never exceeds 60C.
When I learned about the asymmetric CCDs I was first put off, and then actually a bit ecstatic, because it seemed a brilliant move!
Because from what I understand, the clock penalty for the V-Cache should not be constant: as cores are loaded, clocks need to come down anyway to fit the TDP.
So if you have a 16-core compute load with a steady state around, say, 4 GHz, there is a good chance both CCDs will actually clock very similarly, simply because each core is tied to single-digit Watts. In other words, in that max-clock-per-active-core-count graph, the V-Cache CCD just snips off the top clocks where cores would run hottest at >10 Watts each, while the lower clocks required by the extra active cores ease the heat-dissipation disadvantage of the extra billions of SRAM transistors and erase the V-Cache-specific clock penalty as everything is forced down into the CMOS knee. It really gives you, as you say, the ability to choose the perfect up-to-8-core CPU for any workload variant, a clock beast or a cache beast, while the difference between the 3D and non-3D 7950X variants at 9-16 cores of load should become negligible. And 16-core workloads tend to be long-duration batches, where the only thing that really flows is coffee, and any permutation of 3D vs. non-3D CCDs is less likely to behave very differently.
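Rough numbers behind the "single-digit Watts per core" claim, using the two chips' stock package power limits and an assumed ~25 W for the IO die and fabric (my assumption, not a measured figure):

```python
limits = {"7950X": 230, "7950X3D": 162}  # stock PPT package power limits, watts
io_overhead = 25                         # assumed IO-die + fabric draw, not measured

for name, ppt in limits.items():
    # roughly 12.8 W/core for the 7950X, 8.6 W/core for the X3D at 16-core load
    print(name, round((ppt - io_overhead) / 16, 1), "W per core")
```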
Yet I don't see that play out in your synthetic benchmark results, so either my theory is all wrong, or there is something amiss with the drivers/software. I'm pretty sure Andrei would have loved to have a go and do a really deep analysis of that behavior.
There are, of course, some borderline cases where the extra V-Cache will make a giant difference; I've heard some chip-design simulations quoted. But even there, any other design might just step outside what the 96MB 3D variant can offer, or remain inside the 32MB the normal dies manage, so I really doubt the numbers will ever point toward a dual V-Cache CCD desktop variant.
My big machines are all AMD CPUs these days, still Nvidia for the GPUs because of CUDA. And there AMD has remained a constant pain, with a frustrating and terrible restriction on all of their driver software: it refuses to run on Windows Server.
I need to run that on my jack-of-all-trades machine (5800X3D currently, Xeon E3 before), which is a core 24x7 box running ECC RAM and RAID storage, yet relatively low idle power and noise.
With the 5800X3D and the X570 mainboard I was able to get nearly all drivers installed manually anyway, but the power plan won't install on Server editions.
And there is no Xbox stuff or "Game Mode" on Windows Server either, which to my chagrin makes it impossible to run Microsoft's Flight Simulator (after 190GB of downloading); all the other Steam, Epic, Origin, and Uplay titles do run, including with VR...
The need to properly manage the allocation of processes to the core types is going to bite AMD, I'm afraid, because most users won't be nearly as brilliant or patient as Lisa Su & friends.
I'd probably want to go with Process Lasso for controlling that on Windows, and use numactl on Linux, if I were to buy one of these chips anytime soon.
Which I probably won't, because the performance gain over my current Zen 3 machines isn't really that big, while 128GB of ECC DDR5 and a matching mainboard are eye-wateringly expensive yet gain me less than the next-gen CUDA card would for my machine-learning work.
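On Linux, the equivalent of Process Lasso is just an affinity mask; numactl, taskset, or a couple of lines of Python all do the job. A sketch, again assuming logical CPUs 0-15 map to the V-Cache CCD:

```python
import os
import subprocess

game = subprocess.Popen(["./factorio"])    # hypothetical game binary
os.sched_setaffinity(game.pid, range(16))  # Linux-only: pin to CCD0 + SMT siblings
```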
Wish the charts would include the odd older CPU, as I reckon people nowadays are leaving it longer and longer to upgrade CPUs because GPUs mean so much more when it comes to gaming performance.
I'll wait for benchmarks with the cache preference set in the BIOS. I'm not a fan of these additional layers like Microsoft's Game Bar; some people disable it, and it looks like it doesn't help with Factorio anyway. If setting the BIOS to prefer cache fixes most of the problems, I'm fine; not everyone uses Windows.
Your V-Ray benchmark scores are all over the place. A current 7950X scores on average 29k vsamples. How are you getting these values, and how can the reader know the method is proper, given you're sitting on double the real values?
Gavin, once again: your V-Ray benchmark scores do *not* reflect what can be seen here (https://benchmark.chaos.com/v5/vray?search=7950x&a... ), being off by TWICE the amounts. How are you reaching these values? Do I have to ask you officially via email, or can you reply directly? I am *clearly* an interested party under anonymity. If you read and delete the comment, you are well able to reply instead.
This correlates with the results/data I have from my testing in this review. Also note that you can't compare results between different versions of the benchmark.
I'll run the latest version when I get a moment tomorrow and let you know. In the meantime, feel free to email me if you wish to continue the discussion there; it's easier to keep track of via email than by trawling through comments.
Thanks!
P.S.: The only time I'll delete a comment I've seen is if it's spam.
Also, this single-sentence paragraph (below) appears on both page 4 and 5. Maybe it doesn't belong on page 4.
"In the encrypt/decrypt scenario, how data is transferred and by what mechanism is pertinent to on-the-fly encryption of sensitive data - a process by which more modern devices are leaning to for software security."
AMD's core efficiency is out of this world. The 16-core 7950X3D performs at the same power level as the 6-core 7600X. This is unheard of and unimaginable!
3D V-Cache has its limitations, and AMD couldn't improve on them in gen 2 with any clock boost or such. It has the same downsides: a reduced base clock, a reduced max clock, and a locked multiplier.
This processor is only for "gamers," and it does not make sense as the 7950X3D at all: you lose the TDP/power window, and you lose in everything that scales with cores + clocks.
Also, Ryzen 7000 / Zen 4 is ultra-optimized by default; it runs super low voltage, 1.2V max, at insanely high 5.x GHz clocks versus Zen 3. TSMC N5 is a massive gain, along with Zen 4's optimization. But the X3D runs at high voltage. This is the opposite of Zen 3 X3D, where the binned 5800X3D ran at 1.3V while stock was 1.4V. Now the roles are flipped, meaning Zen 4 is already at its maximum potential.
Ultimately, the choice for any PC DIYer is to get Zen 4 over RPL, because Intel's LGA1700 socket is an engineering failure; you should not have to resort to modding a $700 mobo with a contact frame. Period. Zen 4 has a limit on chipset (PCH) link speed with X670, but apart from that, no downsides. I'd pick the 7950X over any other Zen 4 processor and plan on a Zen 5 upgrade. The RPL refresh is not going to change the socket, of course, and isn't going to make massive changes either: at best DLVR, TDP optimization, and tweaks to base and boost clock speeds. Intel 7 is also at its max, plus it's an EOL design. Look at ADL vs. RPL: they literally gave the E-cores garbage and added cache for a "marginal" boost in games, with E-cores to accelerate MT workloads. Pathetic. And now the RPL refresh is literally another single-digit BS gain. Look at Zen 3 vs. Zen 4: an ultimate lead.
That said, I'm only looking at the 7800X3D because it has a higher TDP and almost the same clock rate. Still a capped multiplier, though; AMD processors are not great for tinkerers, since you cannot control the clock rate and cannot run fixed clocks either. At best you get curve optimization and DRAM tuning. So if you are into that, stick with Intel.
Forgot to mention: why is AMD doing this much BS with Windows-level drivers? I honestly expected AMD to engineer an in-CPU solution like Intel's Thread Director. Relying on Windows BS is all but nonsense, especially when Windows 10 is a mess with WaaS and Win11 is a disaster with all the shenanigans and downgrades to the Win32 shell.
LTSC is only worthwhile on Windows 10, and forcing Xbox Game Bar is pathetic; another red flag. Also, in case anyone doesn't know, AMD Zen 4 works on Windows 7, but with these changes to the chipset driver I think it might be a headache not worth having. The WinRaid forum has details on how to install 7 onto new hardware; I did it with 10th-gen Intel Comet Lake. That was the last good Windows OS, though Windows 10 LTSC 1809 / 21H2 are now the better versions for gaming and other modern software workloads. 11 is a failure and not worth the time.
That's not a loss. CPUs are supposed to be advanced enough, with modern turbo and other modes, that such anachronistic tinkering isn't needed.
It reminds me of the death of the manual shifter. DSGs have better fuel economy, the one major thing left in the manual advocate's list of talking points.
Overclocking is still useful for one thing: people who make money via the overclocking industry.
I didn't like AMD's clock behavior on Zen 3, so I did not pick it; add to that the buggy IO die they had. Zen 4 is very fast and scales with temperature, so I can choose Zen 4 over RPL (as RPL has the E-core garbage and the LGA socket engineering flaw). But tinkering is fun to me; controlling 100% of your CPU is interesting, and with Intel you get a high clock rate that sticks for all workloads.
Initially I preferred Intel overclocking and was frustrated with the poor (or negative) gains on Zen 3. It took me a while to get my process down for curve-optimizer overclocking, but once I did, I had great fun overclocking Zen 3. It is by far the most elegant overclocking system, and the favored core(s) are scheduled well by Windows. Contrast that with Intel's latest, which I have not found as satisfying to tinker with, and whose P/E-core scheduling isn't perfect.
I haven't put hands on a 7000X3D yet, but I'm not looking forward to another heterogeneous core design. I will hold out hope that AMD can hit high clocks with the extra cache in the 8000 series. My feeling is that the 7nm cache slice doesn't clock as well as the 5nm CCD and requires more voltage, which holds back performance so AMD can save a couple of bucks.
Yes, V-Cache in consumer parts targets gamers, but it's silly to think people who care about games are only gamers. The first thing to accept is that a $700 CPU already isn't for a lot of people; nor was an $800 5950X. You can be a gamer with a wide range of computer uses that make the core density make sense. What do you do when you actually have a use for a CPU like the 5950X, yet play games where the 5800X3D basically demolishes it in a meaningful way? MSFS, Star Citizen, and DCS, particularly in VR, benefit tremendously from the cache; a 5950X can be a bottleneck for even a 6800 XT in those cases. You'd probably want a V-Cache part without giving up your threads. The 7950X3D exists for those users.
I find AnandTech very good, but honestly this review feels like THE INTENT was to handicap the V-Cache chips. They used the worst RAM they could find and didn't even use the recommended speed. Kinda disappointing.
We test at JEDEC settings as a matter of principle and consistency. If I used DDR5-6000 on the X3D (the sweet spot according to AMD) but then used DDR5-7000 on Intel's 13th Gen, the results wouldn't be comparable.
Using JEDEC settings allows us to consistently measure data via the manufacturer's specifications. Using XMP/EXPO is overclocking, and we've, for as long as I can remember, used JEDEC, and we will continue to do so.
The 3D chips appear to be far less memory-sensitive than the standard Zen 4 offerings. The handicapping generally appears to come down to a suboptimal method of detecting whether a "game" is running; I'm wondering whether cache usage or even application profiles would've made for a better metric. That way your CS:GOs and other high-FPS games would get shunted off to the higher-frequency cores where they belong.
AMD saw that people are buying Intel chips with those silly efficiency cores, which are useless in gaming, so they did the same: the 7950X3D has 8 parked cores in games. In effect, Intel's higher frequency = AMD's V-Cache. You can either process more information faster, or keep it close by in that V-Cache buffer for faster access; the results are the same, but the latter sucks up less energy. I bet the 7800X3D is going to perform the same in games as the 7950X3D, which is the same as the 13900K. The nice thing is the price of that chip.
This feels like there are gaping holes left to be examined, specifically around the scheduler/drivers and Windows 11.
If you do these tests using manual BIOS settings, what changes?
This article is not helping me make a buying decision when it is all I have to compare current CPU offerings, and there appear to be issues with the CPU software.
I'm still wondering how well this CPU would perform under a heavy load of both a game and a non-game running at the same time; the most typical example of the latter being OBS using x264 at the medium or slow preset.
Would the threads be assigned correctly? Or would it be a mess, with both CCDs unparked?
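Until the driver handles that split itself, the manual workaround is disjoint affinities: game on the cache CCD, encoder on the frequency CCD. A sketch with psutil, assuming OBS's usual obs64.exe process name and a hypothetical game executable:

```python
import psutil  # pip install psutil

CACHE_CCD = list(range(0, 16))   # assumption: logical CPUs 0-15 = V-Cache CCD
FREQ_CCD = list(range(16, 32))   # assumption: CPUs 16-31 = frequency CCD

for p in psutil.process_iter(["name"]):
    if p.info["name"] == "game.exe":     # hypothetical game binary
        p.cpu_affinity(CACHE_CCD)        # keep the game's hot data in the big L3
    elif p.info["name"] == "obs64.exe":  # OBS Studio's 64-bit Windows process
        p.cpu_affinity(FREQ_CCD)         # x264 scales with clocks more than cache
```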
Flunk - Monday, February 27, 2023 - link
Looking at all these charts, it's not hard to come to the conclusion that if you're cost-constrained, and the only stressful thing you're doing is gaming that the best strategy is to buy a 7600x and put all of your budget towards your GPU. But if you need to do something that can use a lot of threads (compiling, encoding, etc), the 7950x3d is better, by a small amount, in most games.Hifihedgehog - Monday, February 27, 2023 - link
Or the 5800X3D. AMD made it too good. I think 3D Cache will not be worthwhile until 8000 or 9000 series Ryzen when they get the clockspeed throttling resolved.Otritus - Monday, February 27, 2023 - link
3D V-Cache will absolutely be worth in Ryzen 7000. Just wait for the 7800X3D that has a homogenous core design. I had suspicions that AMD wouldn’t figure out how to properly optimize for the dual-CCD V-Cache designs, and that seems to have been correct. Hopefully, AMD gets this fixed for Zen 5 because V-Cache is an amazing technology when it works.AvidGamer - Monday, February 27, 2023 - link
I fully agree with you there, I too shared the same suspicion and so far it seems to have come true.Another good reason to keep my good and trusted, lovely Intel 13900 KS overclocked to 6.4 GHZ (yes that is for all P cores simultaneously and of course the E-(core)waste is turned off, don't ask, I am running a high flow industrial liquid cooling system that keeps temperatures precisely 2 degrees above the respective dew points; yes it is way larger than a full tower by far and does not rely on those commonly found small crappy radiators and noctua fans as it is used to cool parts of my laboratory equipment, thus I've just added an additional loop (with individual target temperature control) for my processor and graphics card, a trivial matter by the way) and to skip this only half hearted implementation of the AMD 3D-Vcache technology.
IMHO the 7950X3D and 7900X3D have been intentionally crippled for various product placement and business strategy related reasons and I absolutely abhor such practices and condemn them in the strongest way possible. AMD could and should have equipped the 7950X3D with its 3D Vertical Cache technology on both chiplets instead of turning it into an undead zombie like hybrid of neither fish nor fowl. I really despise such artificial hindrances to what could have been a fully functional and well rounded product. In fact I would gladly have paid 1000$ for a hypothetical 7950X2*3D if I would have gotten the chance to buy something like it with more cache on all chiplets, what a shame and a waste first and foremost. Anyway, since my employer foots the bill I've already placed my order (with my employer, I don't know when it gets fulfilled by Intel) for a 4 socket system containing 4 Intel Sapphire Rapids 8490H processors (with all accelerators enabled) that will hopefully help me to get over this enormous disappointment AMD has caused me here. Yeah, if AMD had delivered a solid product in the form of an 7950X2*3D I might have opted for an Epyc based system, but since they decided to basically artificially cripple what could have become a very promising product it is bye bye Epyc 9654 instead and rightly so! Thankfully core density on a per system basis with multiple sockets is higher with intel platinum processors anyway.
Bruzzone - Monday, February 27, 2023 - link
I concur 1 x XCCD seems the way to go for AMD optimized titles only and on price performance however Epyc LC or 8490H 4-way or 9654 2P for containerized / virtualized / partitioned are entirely different realms. Have fun with the Intel SR applied science project. mbOtritus - Monday, February 27, 2023 - link
Dannyzreviews actually found that e-cores on average increases performance in games. It’s like -5 to 10% performance delta and something like 3% faster on average if I recall correctly. That video came out on YouTube fairly recently so give it a watch.Samus - Tuesday, February 28, 2023 - link
I noticed the same thing while overclocking. Disabling e-cores to get higher clocked p-cores actually hurt overall FPS in Battlefield 2042, a highly CPU-bound game. Unfortunately e-cores do not overclock well, and they are more power hungry than you'd think which limits p-core overclocking potential. Alder Lake is especially dicey with the silicon lottery I hear. I somewhat lucked out but there are tons of people who have identical stepping 12700K's running a full 0.100v higher than mine - the microcode or something in the chip seems to dictate a stock + offset! Mine never exceeds 1.300v stock while many are 1.35+ on the same motherboard, BIOS and settings.Targon - Tuesday, February 28, 2023 - link
Think about the reason why that might happen, you have background tasks, and for some games that use 6-8 cores(there are a few of them around), background programs DO get in the way. So, 8 cores is good for games, but having extra cores will take care of that background stuff. This is why I went with a Ryzen 9 7900X, so I have the extra four cores for anything in the background that might get in the way of games or whatever else I may be doing.octra - Friday, March 10, 2023 - link
But the 7900X has the same 6+6 configuration as the 3D CPU. So those extra cores are not on the same CCD. Any game using more than 6 cores will have use both CCDs at the same time.Dizoja86 - Monday, February 27, 2023 - link
Wow, AvidGamer. I've read some pretentious posts on Anandtech in my time, but that was really something else. Almost worthy of its own copypasta.Kuhar - Tuesday, February 28, 2023 - link
@Dizoja86 nice comment :) I was waiting for one like it!mikato - Tuesday, February 28, 2023 - link
LOL, fully agree. Is that for real?Gastec - Wednesday, March 1, 2023 - link
If they would give us the very best of their tech at an acceptable price then the consumers would not upgrade every X short interval of time, they would not consume as much as desired, and profits would dwindle. The spice must flow.escksu - Wednesday, March 1, 2023 - link
I don't see that and intentionally cripple the 7950x3d.And has to balance out price Vs market demand and finishing returns. For end users, the main benefit from the extra cache is mostly just gaming. Dumping more cache doesn't necessarily means games will run faster.. then we have to look at thermals as well.
Also, even though you are willing to pay $1000, it doesn't mean everyone will. AMD doesn't make decisions based on what a few individuals want.
Samus - Tuesday, February 28, 2023 - link
No matter how you look at this, improperly optimized or not, it's impressive AMD pulled this off with reliable performance gains. The optimizations are all in software\drivers at this point, which is strange to say for a CPU that hasn't had any 'architectural' modification unless one were to consider L3 cache part of the 'architecture.'But I think we all agree this is going to be nuts when they stack V-Cache on both CCD's, presumably using a cheaper manufacturing process to possible keep costs identical or lower.
The trick now is going to be getting devs to optimize for large L3 caches. That shouldn't be hard due to AMD's presence in the console market, possibly pitching this as a next-gen console design win, but at the same time they have a fraction of the marketshare Intel has and traditionally devs have slept with Intel while AMD is on the sofa.
Blueskull - Thursday, March 23, 2023 - link
AMD knows exactly how to add X3D to all CCDs, just like the existing 7773X and the upcoming 9004X series. The problem is, however, that those server chips are NOT designed for gaming, and don't have crazy high clock rates. For gaming, you need high clock on at least some cores, and adding X3D severely limits clock speed. This is the balance they try to strike.The 50W less TDP penalty is what you pay for adding X3D to one CCD, and if you add X3D to both, you will have to pay 100W of TDP penalty. This makes the total power budget a mere 70W, and this is probably not good for single core speed.
Though 35W/CCD is kinda low, but if you put 12 of those, it still has a total TDP in north of 400W, what's how Genoa-X gets its performance. On a desktop platform for gaming? This could be really bad.
That being said, 35W-rated laptop Ryzens can still sustain 3.xGHz, so that is not low by any standard. Heck, my laptop runs at less than 2.5GHz sustained, and I never had a problem running any productivity tools, though clearly I do not game. This lower clock impacts server applications by very little, so the "all X3D" method works for this market.
Getting higher TDP while having X3D is kinda difficult, and I don't think they are getting there any soon. The problem is the thickness of that extra stacked chip, and that thickness translated to thermal resistance. Moreover, the thermal gradient on the X3D chip causes it to expand more near the CCD and less near the IHS, giving it a trapezoidal deformation. This applies tremendous amount of stress on the hard material (in the level of hundreds of MPa to a few GPa), causing both the CCD and X3D to fail prematurely.
So, mother nature doesn't seem to like the concept of X3D when single core performance is important, and AMD must figure out a way to solve this.
DanNeely - Monday, February 27, 2023 - link
Buying a mid-range CPU and putting the savings into the GPU has been the most short term bang for the buck in a gaming system for years. The main risk if you're someone who keeps the base system and swaps in faster GPUs every other year or so is that the mid-range CPU might not age well.A fairly recent example is that while fine at the time, intel's 4 core 4 thread i5 processors ended up becoming CPU bottlenecks several years before the 4/8 core i7s from their generation did.
CaptRiker - Tuesday, February 28, 2023 - link
still running a system from 2015intel hedt 5960x (8c/16t) oc'd to 4ghz
32gig pc-2166 memory
asus rampage V extreme X99 mobo
thermalake 1200watt ps
all 7 yrs old.. was running dual 980 ti's in sli when I put it together
then switched out to a 2080 ti 4 years ago.. then last month 2080 ti died so now
I'm sporting a brand new gigabyte aero oc 4080
running pretty well atm (win 10 pro w/all latest patches and feature updates)
I did recently mothball my original Intel 750 1.2tb nvme ssd card (was very close to max endurance). put in a 2tb Kingston fury nvme ssd (running in 3.0 mode since mobo only has single 3.0 m.2 slot)
Gastec - Wednesday, March 22, 2023 - link
And why do we need to know all that? That you've bough expensive video cards, that you are "sporting"?Kangal - Tuesday, February 28, 2023 - link
Yes, but that only applies to Intel.AMD systems are built with the future in-mind.
For instance, if you hopped on AM4 with a low budget and only built an r3-1400 and threw the rest at a GTX 1060 you got a good deal. Better than an r5-1600 and GTX 1050. And you can upgrade both the CPU and GPU without having to change other components like the Case, Motherboard, Cooler, PSU, etc etc.
For gaming, there's not much gains to be had from upgrading the CPU frequently, unlike with GPU. For that reason, it doesn't make sense to upgrade to Zen+ and regular Zen3. So I'd envision from the r3-1400 to the r5-3600 then to the r7-5800x3D. Meanwhile GPU upgrade can goto GTX 1660-Super, or RTX 2060-Super, then RTX 3060, lastly RTX 4060Ti. That's the more value oriented way of Gaming and upgrading from 2017 to 2021, a 5-Year period akin to a console generation.
Gastec - Wednesday, March 1, 2023 - link
ROFL! You start with about $420-450 worth of a system and end up paying THOUSANDS of dollars on video cards upgrades over 6-7 years. All that during the crypto craze, the scalpers paradise and the pandemic nuttery.Kangal - Friday, March 3, 2023 - link
What are you even on about?It's perfectly common for someone to start with a midrange system, and make upgrades down the path. Sometimes it's a shortage, or pricing, or budgetary restrictions, or even lack of knowledge.
The initial PC may have only been an entry level USD $600 build. Each CPU upgrade would've costed $200 and $400, and the old ones possibly bringing $50 and $100 back (total $450). Each GPU upgrade would've costed $200, $400, $350, and $500. The old cards would've sold for $50, $100, $150, $250 back (total $900).
Which means a savvy shopper would've spent around $1,300 on upgrades in that +5 Year period. Total coming in at USD $2,000. Which is pretty good value for money.
Someone stupid, as you implied, would blow that budget on a single upgrade. Because they're buying from scalpers and not being responsible with their budget. During the past 3 years, it was very difficult to get decent parts AND cheaply, but it wasn't impossible.
flydian - Monday, February 27, 2023 - link
Agreedshinsobeam - Tuesday, February 28, 2023 - link
The 7600x and i13600k are just such good value this generation it's hard to recommend anything else.Targon - Tuesday, February 28, 2023 - link
Having only six cores is going to be limiting in a number of games going forward, so I'd say going to an eight core CPU is a safer bet for those who only game, or be prepared to replace your CPU with an eight core or better CPU in the next few years.That ability to replace the CPU with a newer generation is going to remain an advantage for AMD, since with Intel, you can count on any motherboard only being good for 1-2 generations.
Gastec - Wednesday, March 1, 2023 - link
Give us MOAR CORZ!escksu - Wednesday, March 1, 2023 - link
I just want to say that you are really cost-contrained, you might not even want to get Ryzen 7000 series... The older 5000 series is still selling very well...The performance gap between 7000 and 5000 series are evident only if you have a graphics card capable enough to produce that difference. If you are using cards like 3060, the difference becomes extremely tiny since GPU is the bottleneck.
For gamers on a budget, it's better to scale back on the CPU and put the difference towards a faster GPU. E.g., a 5600 + 3070 costs around the same as a 7600X + 3060 (the latter might be more expensive still after factoring in board and RAM). However, the 5600 + 3070 is clearly faster in gaming.
Hulk - Monday, February 27, 2023 - link
AMD has been on such a roll lately. This seems to be quite an underperformer. It's slower and more expensive than the 7950X in anything application-related, and trades blows in games. I don't think a few extra frames at 1080p in some titles, when frame rates are already 150+, are going to matter. Seems like a lot of intellectual and manufacturing effort for very little return. I had thought the faster memory subsystem and larger cache of Zen 4 would make the V-Cache less of a difference-maker than it was with Zen 3, and this review seems to corroborate that opinion.
Dante Verizon - Monday, February 27, 2023 - link
In many games where the high refresh rate matters, the difference is double digits, as high as 62%, so what the hell are you talking about? lol
Hifihedgehog - Monday, February 27, 2023 - link
Hulk smash... -ed his brain.
Hulk - Monday, February 27, 2023 - link
10% average at 1440p: https://www.tomshardware.com/reviews/amd-ryzen-9-7...
That's over the 7950X. 5% over the 13900K on average. And then slower in all applications.
Nice cherry picking.
But hey, if you need to get from 178fps to 205 that bad then go for it and lose speed everywhere else where you'll actually notice it. lol
Cooe - Wednesday, March 1, 2023 - link
You are regularly GPU bound at 1440p, you freaking dingus... 🤦 But the R9 7950X3D will be able to utilize more of a future high-end GPU upgrade in a 1440p or 4K gaming rig than a slower gaming chip would. Why? Because at 1440p, and ESPECIALLY 4K, IT HAS SIGNIFICANT LEFTOVER CPU HEADROOM! (This might be a hard concept for your peanut brain to grasp.) If all you care about is high-resolution gaming performance with current GPUs, just get an i3-12100F and STFU, while recognizing you'll likely need to replace it when faster GPUs come out.
CPUs are benchmarked for gaming at 1080p for a reason, Mr. Tech Illiterate. 🤦 It's to remove GPU bottlenecking as much as possible so that you're ACTUALLY TESTING THE CPU rather than your graphics card!
Gastec - Wednesday, March 1, 2023 - link
A $5000 high-end GPU, mark my words!
Gastec - Wednesday, March 1, 2023 - link
1080p (p from pitiful) FTW, until the end of time! What's that doomsday clock sayin' nowadays, 2 nanoseconds to midnight, or what?
haukionkannel - Monday, February 27, 2023 - link
Just like 5800X vs 5800X3D! So very impressive in gaming and less so in productivity.
LonnieG - Monday, February 27, 2023 - link
Look at the power consumption reduction. This is a real cost. Waste heat from the power supply, extra HVAC loads, more fan speed/noise. These factors are real and have a monetary value as well.
DanNeely - Monday, February 27, 2023 - link
The X3D has significantly more hardware running in a lower power budget. As a result, max boost notwithstanding, I suspect that it's generally running at lower clock speeds than the conventional part. From an engineering standpoint, I'd be interested in seeing what the comparison looks like with them forced to run at identical clocks.
For more even real-world results, can the X3D part have its power level/clocks increased to match where the non-3D part operates? I know the 5800X3D had minimal overclockability; the rumor mill for the current generation has gone back and forth repeatedly, and I'm unsure what the current/final status is.
Byte - Monday, February 27, 2023 - link
The limit here is def the 3D cache and its 89°C Tjmax.
Targon - Tuesday, February 28, 2023 - link
The X3D parts have a lower TDP rating, so yes, they run at a lower clock speed.
Dante Verizon - Monday, February 27, 2023 - link
You should change the setting to prioritize cache on the problematic results, so we have both sets of data...
Gavin Bonshor - Monday, February 27, 2023 - link
I'm currently in the middle of testing data in all three modes (Auto, Cache, and Frequency mode).
Dante Verizon - Monday, February 27, 2023 - link
Radeon RX 6950 XT - There will be no significant differences using this GPU.
Makaveli - Monday, February 27, 2023 - link
The GPU is fine; the bigger issue for me is those terrible memory speeds being used. Nobody buys JEDEC memory for their builds unless you are an OEM.
Ryan Smith - Monday, February 27, 2023 - link
This discussion comes up now and then. But right now our stance remains unchanged: when AMD is willing to warranty memory overclocking, we'll start using it as a base setting. Otherwise, it's disingenuous to test a CPU in a state that's outside its normal operating parameters, and which would result in an RMA being rejected if anything happened to it.
Otritus - Monday, February 27, 2023 - link
I think you should put this note inside the review and future reviews because it is a valid reason for why it is being benchmarked as such.
lopri - Wednesday, March 1, 2023 - link
Agreed 100%
Oxford Guy - Wednesday, March 1, 2023 - link
Has AMD or Intel ever denied a warranty claim because someone used an XMP profile? I'd love to see an article about that!
blkspade - Wednesday, March 22, 2023 - link
Intel's RMA survey actually asks if you used XMP, as grounds to deny your RMA. They count it as overclocking. AMD, on the other hand, doesn't even ask about memory profiles. I've had to do an RMA for both an i7-7700 and an R7 2700.
nandnandnand - Monday, February 27, 2023 - link
Looks like a messy faceplant with the scheduler issues.
RHamel - Monday, February 27, 2023 - link
Yes, the scheduler clearly isn't smart enough to assign threads to the optimal CCD with any consistency. AMD's "best of both worlds" design ends up being the worst of both worlds about half the time. They should have just put the 3D V-Cache on both CCDs and avoided this whole mess; anyone putting this (instead of a 7950X) in a workstation/HEDT obviously is running some kind of cache-limited workload and would prefer extra L3 on all cores to this design, even if the scheduler worked perfectly.
AvidGamer - Monday, February 27, 2023 - link
Fully agree with the content and all arguments made in the above comment. Very good points! I too would have preferred a fully functional product instead of such an, IMHO, half-assed approach which, after all is said and done, excels at basically nothing compared to all the other already existing good options from Intel and AMD. Shame on you, AMD, for what you have done there! IMHO that's just disgusting! I can't believe I've waited months for something like this, never again, urghhh...
Gastec - Wednesday, March 1, 2023 - link
I've waited years to upgrade my i7-860, PCIe 2.0 system. I'm still waiting, now for the Ryzen 7900X3D to pop out. I'm pretty sure some other bad things will happen until then that will prevent me from "upgrading". Hopefully a nuke.
Tom Sunday - Monday, March 20, 2023 - link
Yes, I am sure as well that some other not-so-good things will happen, preventing me from "upgrading." Notwithstanding our seemingly never-ending financial and inflationary conditions. The 7950X3D only excelling in gaming but not in content creation is disappointing, to say the least. Many actually make money with content creation to pay for their casual gaming luxuries. For the 7950X3D, an MSRP of USD $699 is an expensive trip for just gaming. Especially since the great majority of gamers are not exactly living at the higher end of the food chain, nor are they the primary breadwinner for their family. What was AMD thinking? The only good news is that AMD usually starts dropping their intro prices on all of their products within a few months. So for me, now sitting pretty and free in Mom's basement, there may be hope after all; for now I'll keep playing 'Wolfenstein' in all of its glory, and a train ride to Berlin!
Jp7188 - Wednesday, March 8, 2023 - link
3D V-Cache and 5.7+ GHz across all cores would have been ideal, but it seems apparent that it was not possible to achieve in this generation. Given the choice was cache or speed, I think AMD did the best they could.
Dribble - Thursday, March 2, 2023 - link
These CPUs won't be popular enough to get it fixed; chances are it will always be a mess.
Papaspud - Monday, February 27, 2023 - link
Looks to me like any good processor from the last 1-2 years will do just fine. The gains are getting smaller, generation-wise.
meacupla - Tuesday, February 28, 2023 - link
Yes and no. If you play a lot of simulation games, particularly games like Factorio, Timberborn, InfraSpace, Transport Fever 2, etc., and to a lesser degree Civ6 and Total War, those games will benefit substantially when the game asks the CPU to do all the pathfinding calculations.
For those games, it's easy to have 120fps at the start of the game, but as soon as the game has thousands of units asking for pathfinding at the same time, you will quickly find yourself with 100% CPU usage, 10% GPU usage, and <15fps.
These games benefit greatly from that massive cache on X3D.
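It's easy to see why: late-game pathfinding is mostly dependent, random memory hops. A toy pointer-chase sketch of mine (purely illustrative, nothing from any real game engine) shows the cliff once the working set outgrows the L3; the table sizes are picked so the middle one fits a 96MB V-cache but not a 32MB L3:

```python
# Toy pointer-chase: time dependent random hops through tables of
# different sizes. Illustrative only; real games bury the same pattern
# inside their pathfinding data structures.
import random
import time
from array import array

def chase(n_slots: int, hops: int = 2_000_000) -> float:
    order = list(range(n_slots))
    random.shuffle(order)                    # one big random cycle
    nxt = array("i", bytes(4 * n_slots))     # compact 4-byte slots
    for i in range(n_slots):
        nxt[order[i]] = order[(i + 1) % n_slots]
    idx, start = 0, time.perf_counter()
    for _ in range(hops):
        idx = nxt[idx]                       # each hop waits on the last
    return time.perf_counter() - start

# 8 MB, 64 MB, and 256 MB tables; the biggest takes a while to build.
for n in (1 << 21, 1 << 24, 1 << 26):
    print(f"{4 * n / 2**20:6.0f} MB table: {chase(n):.2f} s for 2M hops")
```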
Gastec - Wednesday, March 1, 2023 - link
Or in other words: wait until someone else tests the CPU with the video games you want to play before buying it. Wait, wait, wait. Work, work, work.
cruiseliu - Monday, February 27, 2023 - link
The 7950X3D falls behind the 7950X in simulation tests, while the 5800X3D outperforms the 7950X. That is strange. Could you please clarify which CCD is used in these tests?
Gavin Bonshor - Monday, February 27, 2023 - link
I'm looking into it. I'm testing in all three modes in our compute tests to see where AMD's PPM and 3D V-Cache driver gets it right and where it gets it wrong (if at all).
cruiseliu - Monday, February 27, 2023 - link
Thanks, looking forward to seeing updates.
Otritus - Monday, February 27, 2023 - link
AMD does not have an intelligent scheduler built into their CPUs to handle the heterogeneous design like Intel does. As a result, they have a driver that is basically just: if a game is detected, then only schedule on the V-Cache CCD. This driver is imperfect as-is for games, as seen with Factorio, and clearly not optimized for other workloads. The Windows 11 scheduler will typically place threads on the higher-frequency CCD unless the driver tells it not to. As a result, when the driver isn't activating, threads get scheduled on the higher-frequency CCD, and performance is worse than the 7950X because the X3D has a lower power limit and all-core turbo.
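In rough pseudo-Python, the driver's whole decision appears to boil down to something like this (my sketch of the observed behavior, with made-up names and core numbering; it is not AMD's actual code):

```python
# Sketch of how AMD's PPM/3D V-Cache driver appears to behave.
# The names and the CPU numbering are made up for illustration.

VCACHE_CCD = list(range(0, 16))      # logical CPUs on the V-cache CCD
FREQUENCY_CCD = list(range(16, 32))  # logical CPUs on the higher-clocked CCD

def preferred_cores(game_bar_says_game: bool) -> list[int]:
    if game_bar_says_game:
        # Park the frequency CCD; keep the game on the V-cache cores.
        return VCACHE_CCD
    # Otherwise leave things to Windows, which favors the
    # highest-boosting (non-V-cache) cores.
    return FREQUENCY_CCD
```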
cruiseliu - Monday, February 27, 2023 - link
Yeah, I know the scheduler is basically "if game then cache else frequency". But Dwarf Fortress and Factorio are also games, though not 3D games. So it seems hard to say which CCD will be used.
Gavin Bonshor - Monday, February 27, 2023 - link
Yeah, that's right about the 'scheduler.' It hinges on Xbox Game Bar feeding the information to the drivers: either A) it's a game, so park one CCD and enable the V-Cache CCD, or B) it's not a game, so act as normal. In relation to Factorio, I know the version in our suite runs without the UI, which is possibly why it wasn't flagged. As this is more of a CPU benchmark in our suite than a game, that is reflected in how the data was gathered.
That's why I did a side test with Factorio on the CCD with V-Cache, which, as expected, boosted performance by nearly double.
I'm in the process of doing additional testing, so bear with me.
Bruzzone - Monday, February 27, 2023 - link
7X 3D 1P + 1P offload engine seems underwhelming on initial assessment. mb
nightbird321 - Monday, February 27, 2023 - link
Wow, AnandTech. Pulled out the slowest DDR5 money can buy for this review and didn't pair the Ryzen 7000 with "sweet spot" DDR5-6000 RAM, and put 5 pages of productivity to 4 pages of gaming benchmarks for a "Gaming" CPU. Just WOW. The other site known for biased Intel reviews did a flip-flop as well; what is the world coming to?
Gavin Bonshor - Monday, February 27, 2023 - link
We test at JEDEC settings; that's just how we do it. Check ANY of our CPU review content going back multiple years.
eloyard - Friday, March 3, 2023 - link
Isn't the point of testing missed if the testing is done in a wildly suboptimal fashion that does not reflect a real-world configuration? On one hand you have overpowered cooling, PSU, motherboard and GPU, but on the other, the real bottleneck for some core designs, RAM, is artificially kept slow. Either you run a top-of-the-line enthusiast spec or some basic discount-shop config. The excuse "we've always been doing that" feels hollow. If you've been doing it wrong, you should correct it, not try to make it a point of virtue. You might as well have tested everything with a 300W PSU and a basic air cooler and made the same claim, and in the same vein such "CPU testing" wouldn't show what people expect to see: how the CPU will behave in a real-life config.
cruiseliu - Monday, February 27, 2023 - link
Theoretically, EXPO and XMP are overclocking. Though most of us treat them as standards nowadays...
Ryan Smith - Monday, February 27, 2023 - link
More specifically, they will invalidate your CPU warranty. That is our single largest issue with XMP/EXPO right now.
Oxford Guy - Wednesday, March 1, 2023 - link
Has Intel or AMD ever invalidated a warranty for turning on XMP? Please post an article about this to answer that question.
cheshirster - Monday, February 27, 2023 - link
Always has been ) They are risking their "access to the body" by showing AMD on top.
Gastec - Wednesday, March 1, 2023 - link
Well, the world is not feeding trolls as much as it used to, because of disruptions in the "serials" supply blockchains.
HarryVoyager - Monday, February 27, 2023 - link
Definitely looks like AMD is using this as a dry run for their rumored Zen 4c architecture. From the rumors I've seen, they were able to pack about two Zen 4c cores into the space of a standard Zen 4 core by repackaging the logic and removing half the cache. But that means that to use them as super E-cores, they need a scheduler that can schedule by cache, not by frequency like Intel's does. By launching the dual-CCD CPUs first, they're able to see and work through those issues now, on a low-risk platform, before they go fully heterogeneous. Very shrewd on AMD's part.
nandnandnand - Monday, February 27, 2023 - link
Shrewd, but only if it works in the end. And the buyers of these high-end gaming parts are the beta testers.
AvidGamer - Monday, February 27, 2023 - link
That says a lot about the perception and respect AMD has for its ordinary, loyal customers, using them as beta testers. What a deplorable custom!
nandnandnand - Monday, February 27, 2023 - link
It's Alder Lake all over again, but with an apparently dumb method of optimizing for specific programs. We will have to live with it though. big.LITTLE in some form is here to stay over at Intel, and we've seen rumors of Zen 5 + Zen 4c on desktop.
Gastec - Wednesday, March 1, 2023 - link
It's only $700 bucks, come on! Pocket change for successful influencers!
AvidGamer - Monday, February 27, 2023 - link
Fully agree with you there! Interesting and thought-provoking comment, thanks for sharing your thoughts on this matter! I too believe that some of AMD's practices are often rather shrewd in nature...
Targon - Tuesday, February 28, 2023 - link
Not even close to what is going on. The 5800X3D was clearly the final chip for socket AM4, and yes, it was primarily focused on gaming, since that is where the 3D stacked cache tends to benefit users more. For the Zen 4 versions, you will see the 7800X3D, which has one CCD, then the 7900X3D and 7950X3D. For the dual-CCD chips, because GAMES don't use more than eight cores, AMD did the best mix: apply the stacked cache to one CCD and leave the other CCD without, just so it can be clocked higher. Again, games are pretty much the only programs that show a benefit from the extra cache, so why put the stacked cache on both CCDs?
Zen 4c is going to be for servers that will use more cores but don't need as much cache to operate. When more cores are more important than the very best performance per core, Zen 4c makes more sense. There is nothing about stacked cache that applies to Zen 4c.
Intel E-cores are pretty much garbage, and less functional (they didn't include AVX-512 even when the performance cores did), but more cores DOES benefit certain types of workloads. Note that Intel laptop chips are using this almost to deceive consumers: dual-performance-core i7s with efficiency cores added to make them not as bad will still be worse than any i7 should be. Intel used to sell a lot of low-priced i7 chips with only 2 cores/4 threads, so now it's dual performance cores with some efficiency cores, to be not as bad as those dual-core i7 chips used to be.
Now, the 7800X3D will be only a single CCD part, so the ONE CCD will have the 3D stacked cache on it. AMD is doing this because the volume of sales for the dual-CCD X3D chips is expected to be fairly low, while the 7800X3D will probably have a very high volume of sales. It's best not to encourage all of the stacked cache modules to go to the 7800X3D where the highest demand will be.
demian_thorne - Wednesday, March 1, 2023 - link
It is typical under articles and threads like this to hear and see all kinds of extreme opinions based on everyone's personal pet peeves... I think your comment was the most sensible of them all! Very well done!
milleron - Monday, February 27, 2023 - link
Thanks. For me, your testing suggests pretty strongly that I am best served by simply keeping my current 7800X3D rig — i.e., keep the current motherboard, 7800X3D, RAM, and SSDs but spend the money upgrading from an RTX 3080 to a 4080 or 4090. I WOULD like to see a test of MS Flight Simulator at 1440 and 4K before a final decision, but those should be forthcoming from the MFS forums soon enough.
AvidGamer - Monday, February 27, 2023 - link
Do you happen to work for AMD or one of its system partners? If not, you most likely intended to write 5800X3D, right? After all, the 7800X3D hasn't been released yet, at least not to my knowledge at the time of writing.
milleron - Monday, February 27, 2023 - link
Sorry, regarding the above post, I currently have a 5800X3D rig, NOT a 7800X3D, which, obviously, does not currently exist. Too bad this website doesn't allow editing of comments.
Gavin Bonshor - Monday, February 27, 2023 - link
The 5800X3D is still a VERY GOOD processor for gaming. And given it's around $330 right now and can be paired with DDR4 memory/AM4 motherboards, for the price, it'll be hard to beat for gamers on a budget. Of course, a discrete graphics card is required, but the more spent on the GPU, the better the gaming performance will be.
rb86 - Monday, February 27, 2023 - link
You said in the comments you're still doing testing. Will a new article be posted with your updated results, or will this article be modified? These Factorio results are shockingly disappointing. Is there no way to force an application to start on the CCD with the 3D cache on it?
Gavin Bonshor - Monday, February 27, 2023 - link
I certainly am, and they are running while I do my evening things. As the tidbit on the last page highlighted, Factorio performance SHOULD be higher, but AMD's PPM/3D V-Cache Performance Optimizer drivers didn't pick up the fact Factorio was running. This is primarily because our benchmark runs without a UI, which is possibly why Xbox Game Bar didn't pick it up. On that note, I'll be doing more testing with the 7950X3D set to Cache and Frequency to see if AMD's drivers got it wrong on more of our test suite than just Factorio and possibly Dwarf Fortress.
As for whether it will be an article update or a fresh page, I would say the update is more likely, but this will be explored when I have the data, and if it makes THAT much of an impact on performance.
We shall see, but first, I need to collect more data.
flydian - Monday, February 27, 2023 - link
So far, all this has done is to sell me on the Ryzen 5 7600X. Maybe I'll wait for that 7800X3D before buying parts to upgrade from my 10th gen Intel CPU.
meacupla - Monday, February 27, 2023 - link
The driver to park CPU cores to get Windows to use the correct cores when gaming seems like such a hack workaround. I think AMD should have stuck to homogeneous cores, or gone the Intel route and had a big-small configuration that the OS has an easier time of handling.
Otritus - Monday, February 27, 2023 - link
The problem with big-small is: what is a big core? Some workloads need higher frequency, others higher cache, and some a bit of both. I don't understand why DirectX and Vulkan API calls couldn't have been automatically routed to the V-cache CCD, and everything else to the frequency CCD. AMD clearly needed to spend more time optimizing the scheduler for their processors. Intel spent years building a new hardware scheduler and directly working with Microsoft to maximize performance.
Targon - Tuesday, February 28, 2023 - link
How do you think an "automatically routed" system actually works? The operating system would need to handle that. It is actually up to Microsoft to "optimize the scheduler", though programs can try to help. You know Intel had to work with Microsoft, and they still screwed it up with Alder Lake at launch, where various things, including some game copy protection, failed horribly.
Jp7188 - Wednesday, March 8, 2023 - link
It seems like they could just start all threads on the frequency CCD and, if cache misses exceed a threshold, move them to the cache CCD. I'm sure it's not that simple, but surely cache miss metrics are more reliable than the Windows Game Bar..? After all, some games do better with frequency than cache.
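Conceptually, something like this toy heuristic (the threshold and the counter-reading function are hypothetical stand-ins for real PMU sampling, and a real version would also pay a migration/cache-warmup cost):

```python
# Toy sketch of cache-miss-driven CCD placement. MISS_RATE_THRESHOLD
# and read_l3_miss_rate() are made up; real code would sample PMU events.

MISS_RATE_THRESHOLD = 0.05  # hypothetical: L3 misses per instruction

def place_thread(thread, read_l3_miss_rate) -> str:
    """Start on the frequency CCD; migrate if the thread is cache-hungry."""
    thread.affinity = "frequency_ccd"
    if read_l3_miss_rate(thread) > MISS_RATE_THRESHOLD:
        # A high miss rate suggests the working set overflows a 32 MB L3,
        # so the 96 MB V-cache CCD is likely the better home.
        thread.affinity = "vcache_ccd"
    return thread.affinity
```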
Hectandan - Monday, February 27, 2023 - link
Nah, I really hope AMD doesn't go with big-small. The penalty of a wrong assignment is too big.
abufrejoval - Monday, February 27, 2023 - link
You may want to fix this sentence: "Like the Ryzen 7 5800X3D, the soon-to-be-released Ryzen 7 7800X3D shares a turbo frequency of 5 GHz"
I wish my 5800X3D could turbo to 5 GHz; even the 5800X will officially only turbo to 4.9 GHz, while my 5800X3D reports a 4.55 GHz PBO limit and never clocks above 4.45 GHz. Only my 5950X is supposed to reach 5.05 GHz, and I've never seen it actually go there.
In all of these cases, HWiNFO and Ryzen Master disagree on nearly every measurement.
Gavin Bonshor - Tuesday, February 28, 2023 - link
What motherboard and cooler are you using? Are you thermally throttling?
abufrejoval - Tuesday, February 28, 2023 - link
I am afraid I muddied the waters too much with all the extra data... Gavin, the Ryzen 7 5800X3D simply does not turbo to 5 GHz; it's 4.5 GHz per your own table in the 5800X3D review. HWiNFO reports a PBO max of 4550 MHz, but the highest ever reported clocks stick at 4450 MHz, which is also the highest ever recorded effective clock on the very best cores.
tl;dr
Yes, because I'd not dare not answer your question: all my Ryzens are throttling at 90°C, or at 110 Watts for the single-CCD chips and 140 Watts for the dual-CCD chips, whichever comes first, and that is exactly as per the design and PPT settings for Ryzen 5000, I believe.
I use big top-down CPU air coolers in combination with lots of large front-to-back fans, pretty old school and designed more for the longevity of all components than maximum cooling power, as gaming isn't the primary mission. All my case fans and CPU coolers have been Noctua since 2006, unless they were out of stock. My 5800X3D has a single-fan 140mm NH-C14S, while the 5950X has a be quiet! Dark Rock TF2 rated for 230 Watts, which is obviously exaggerated. Top blowers are becoming hard to get, but I love giving all those onboard components and them DIMMs a bit of a breeze; the sensors say it works.
I'm not about to play with water in my computers, I've got enough things to worry about, and the electricity bill is one of them: I try to aim for optimal efficiency near top performance.
Mainboards are a Gigabyte Aorus Ultra for the 5950X (a 5800X originally), an X570S UD for my son's 5800X3D, and an X570S Aorus Pro AX for my own 5800X3D. These are all what I guess you'd call "value optimized" boards, not an overclocker's choice, because they are designed as workstations with DDR4-3200 ECC RAM that might see an occasional bit of gaming (except for my son's rig, which uses non-ECC optimal speed DDR4-4000 instead).
For overclocking I don't go further than enabling PBO in the BIOS, because I've seen Linux crash with stuff like the curve optimizer, and most of my machines run Linux as their primary OS.
Again, way too much data I guess, for a single understandable mistake you may want to fix.
Jp7188 - Wednesday, March 8, 2023 - link
5000 shouldn't thermally throttle; 7000 does more of that. If possible, get a thermocouple on those heatsinks and verify you're getting the heat out of the CPU and into the sink. It almost sounds like a mounting or paste problem. For reference, I run a 5950X (under water). It hangs around 235W / 5GHz all day long and never exceeds 60C.
ahenriquedsj - Monday, February 27, 2023 - link
Well done, AMD!
abufrejoval - Monday, February 27, 2023 - link
When I learned about the asymmetric CCDs I was first put off, and then actually a bit ecstatic, because it seemed a brilliant move! From what I understand, the clock penalty for the V-cache should not be constant, as cores are loaded and clocks need to go down anyway to fit the TDP.
So if you have a 16-core compute load with a steady state around, say, 4 GHz, there is a good chance both CCDs will actually clock very similarly, simply because they are tied to single-digit Watts per core. In other words, in that graph of max clock per active core count, the V-cache CCD will just snip off the top clocks where cores would go to max heat and >10 Watts each, while the slower clocks made necessary by the extra active cores may ease the heat dissipation disadvantage of the extra billions of transistors used for SRAM cells and lose the V-cache-specific clock penalty as they are forced into the CMOS knee. It really gives you, as you say, the ability to choose the perfect up-to-8-core CPU for any workload variant, a clock beast or a cache beast, while the difference between the 3D and non-3D 7950X variants at 9-16 cores of load should become negligible. And 16-core workloads tend to be long-duration batches, where the only thing that really flows is coffee, and any permutation of 3D vs. non-3D CCDs is less likely to behave very differently.
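Back-of-the-envelope, using the spec-sheet 120 W TDP / 162 W PPT and guessing ~20 W for the IOD and SoC (my rough numbers, not measured): (162 − 20) / 16 ≈ 9 W per core under an all-core load, comfortably below the >10 W per core that lightly loaded cores burn at max boost, which is exactly where the V-cache voltage and thermal limits bite.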
Yet I don't see that play out in your synthetic benchmark results, so either my theory is all wrong, or there is something amiss with the drivers/software. Pretty sure Andrei would have loved to have a go and do a really deep analysis on that behavior.
There are, of course, some borderline cases where the extra V-cache will make a giant difference: I've heard some chip design simulations quoted. But even there, any other design might just step outside the 96MB the 3D variants can offer, or remain inside the 32MB the normal dies manage, so I really doubt the numbers will ever point towards a dual V-cache CCD desktop variant.
My big machines are all AMD CPUs these days, still Nvidia for the GPUs because of CUDA. And here AMD has maintained a constant pain point with a frustrating and terrible restriction in all of their driver software: it refuses to run on Windows Server.
I need to run that on my jack-of-all-trades machine (5800X3D currently, Xeon E3 before), which is a core 24x7 box running ECC RAM and RAID storage, yet relatively low idle power and noise.
With the 5800X3D and the X570 mainboard I was able to get nearly all drivers to install manually anyway, but the power plan won't fit on server editions.
And there is no Xbox stuff or "Game Mode" on Windows Server either, which to my chagrin makes it impossible to run Microsoft's Flight Simulator (after 190GB of downloading): all other Steam, Epic, Origin and Uplay titles do run, including with VR...
The need to properly manage the allocation of processes to the core types is going to bite AMD, I'm afraid, because most users won't be nearly as brilliant or patient as Lisa Su & friends.
I'd probably want to go with Lasso for controlling that and use numactl on Linux, if I were to buy one of these chips anytime soon.
Which I probably won't, because the performance gain over my current Zen 3 machines isn't really that big, while 128GB of ECC DDR5 and a matching mainboard are eye-wateringly expensive, yet gain less than a next-gen CUDA card would for my machine learning stuff.
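PS: For the Lasso/numactl wrangling I mentioned above, something like this would be my starting point on either OS (a rough psutil sketch; the assumption that logical CPUs 0-15 sit on the V-cache CCD is mine, so verify your topology with lstopo or Ryzen Master first):

```python
# Manually pin a running game to the V-cache CCD instead of trusting
# the Game Bar heuristic. Assumes CCD0 carries the V-cache and SMT is
# on (logical CPUs 0-15); check your own system's mapping first.
import psutil

VCACHE_CPUS = list(range(0, 16))

def pin_to_vcache(name_fragment: str) -> None:
    for proc in psutil.process_iter(["name"]):
        name = proc.info["name"] or ""
        if name_fragment.lower() in name.lower():
            proc.cpu_affinity(VCACHE_CPUS)  # same idea as taskset/numactl
            print(f"pinned {name} (pid {proc.pid}) to the V-cache CCD")

pin_to_vcache("factorio")
```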
Tunnah - Tuesday, February 28, 2023 - link
Wish the charts would include the odd older CPU, as I reckon people nowadays are leaving it longer and longer to upgrade CPUs, because GPUs mean so much more when it comes to gaming performance.
mikato - Tuesday, February 28, 2023 - link
I noticed that Ryzen 3000 is mentioned several times, but not included in any of the charts. It would've been nice to see one of those in there.
kayak - Tuesday, February 28, 2023 - link
I'll wait for benchmarks with the cache preference set in the BIOS. I'm no fan of these additional layers like the Microsoft Game Bar; some people disable it, and it looks like it doesn't help for Factorio anyway. If BIOS settings set to cache preference can fix most of the problems, I'm fine; not everyone uses Windows.
iRacer - Tuesday, February 28, 2023 - link
Your V-Ray benchmark scores are all over the place. A current 7950X scores on average 29k vsamples.
How are you getting these values, and how can the reader know the method is proper, given you're sitting on double the real values?
iRacer - Tuesday, February 28, 2023 - link
Gavin, once again: your V-Ray benchmark scores do *not* reflect what can be seen here (https://benchmark.chaos.com/v5/vray?search=7950x&a... by TWICE the amounts. How are you reaching these values?
Do I have to ask you officially via email, or can you reply directly?
I am *clearly* an interested party under anonymity.
If you read and delete the comment, you are well able to reply instead.
iRacer - Tuesday, February 28, 2023 - link
Never mind, nothing was deleted, but the "link" above the post doesn't work and leads to a 404.
iRacer - Tuesday, February 28, 2023 - link
*if used right after posting.
Gavin Bonshor - Thursday, March 2, 2023 - link
Hi iRacer. I'm certainly not ignoring you. I'm pretty sure I replied to a similar comment in another review.
You are more than welcome to reach out to me via email for a discussion, but we are using V-Ray version 4.10.06
As per our 5950X review/testing by Dr. Ian Cutress here: https://www.anandtech.com/show/16214/amd-zen-3-ryz...
This correlates with the result/data I have in my testing in this review. You also can't compare results between different versions of the benchmark.
I'll run the latest version when I get a moment tomorrow and let you know. In the meantime, feel free to email me if you wish to continue the discussion there. It's easier to keep track of via email than it is to trawl through comments.
Thanks!
P.S.: The only time I'll delete a comment I've seen is if it's spam.
mikato - Tuesday, February 28, 2023 - link
"the Ryzne 9 7950X3D doesn't quite hit"Also, this single-sentence paragraph (below) appears on both page 4 and 5. Maybe it doesn't belong on page 4.
"In the encrypt/decrypt scenario, how data is transferred and by what mechanism is pertinent to on-the-fly encryption of sensitive data - a process by which more modern devices are leaning to for software security."
Thank you for your great coverage.
Ket_MANIAC - Wednesday, March 1, 2023 - link
AMD's core efficiency is out of this world. The 16-core 7950X3D performs at the same power level as the 6-core 7600X. This is unheard of and unimaginable!
Silver5urfer - Wednesday, March 1, 2023 - link
Late to comment, but I already mentioned this: 3D V-Cache has its limitations. AMD cannot improve on this in Gen 2 with any clock boost or such. It has the same downsides: the base clock reduction, the max clock reduction, and the locked multiplier.
This processor is only for those "Gamers". And it does not make sense on the 7950X3D at all: a loss of TDP power window, a loss in everything that scales with cores + clocks.
Also, Ryzen 7000 / Zen 4 is ultra-optimized by default; it has super low voltage, 1.2V max at insanely high 5.x GHz clock rates vs Zen 3. TSMC 5N is a massive gain, and so is the Zen 4 optimization. Now the X3D runs at high voltage. This is the opposite of Zen 3 X3D, as the 5800X3D ran at 1.3V binned while stock was 1.4V. Now the roles are flipped, meaning Zen 4 is at its maximum potential.
Ultimately the choice for any PC DIYer is to get Zen 4 over RPL, because Intel's LGA1700 socket is an engineering failure. You should not have to resort to modding with a contact frame on a $700 mobo. Period. Zen 4 has a limit on PCH chipset speed with the X670; apart from that, no downsides. I'd pick the 7950X over any other Zen 4 processor, with a Zen 5 upgrade in mind. The RPL refresh is not going to change the socket, of course, and is not going to bring massive changes either: at best DLVR, optimization of TDP, and optimization of base and boost clock speeds. Intel 7 is also at its max, plus it's an EOL design. Look at ADL vs RPL: they literally just added more of the garbage E-cores and more cache for a "marginal" boost in games, with the E-cores accelerating MT workloads. Pathetic. And now the RPL refresh is literally another BS single-digit gain. Look at Zen 3 vs Zen 4. Ultimate lead.
That said, I'm only looking at the 7800X3D because it has a higher TDP and almost the same clock rate. Still a capped multiplier; anyway, AMD processors are not that good for tinkerers, since you cannot control the clock rate and cannot have fixed clocks either. At best, curve optimization and DRAM tuning. So if you are into that, stick with Intel.
Silver5urfer - Wednesday, March 1, 2023 - link
Forgot to mention: why is AMD doing this much BS with Windows-level drivers? I honestly expected AMD to engineer an in-CPU solution like Intel's Thread Director. Relying on Windows BS is all but nonsense. Esp. Windows 10 is a mess with WaaS, and Win11 is a disaster with all the shenanigans and downgrades to the Win32 shell. LTSC is only worth it on Windows 10, and forcing Xbox Game Bar mode is pathetic. Another red flag. Also, in case anyone does not know, AMD Zen 4 works on Windows 7, but with these changes to the chipset driver I think it might be a headache not worth it. The WinRaid forum has details on how to install 7 onto your new HW. I did it with Intel Comet Lake 10th gen. The last good Windows OS; however, Windows 10 LTSC 1809 / 21H2 are now the better versions for gaming and other modern software workloads. 11 is a failure and not worth the time.
Oxford Guy - Wednesday, March 1, 2023 - link
'AMD processors are not good for tinkerers'
That's not a loss. CPUs are supposed to be advanced enough, with modern turbo and other modes, that anachronistic tinkering isn't needed.
It reminds me of the death of the manual shifter. DSGs have better fuel economy, the one major thing left in the manual advocate's list of talking points.
Overclocking is still useful for one thing: people who make money via the overclocking industry.
Silver5urfer - Thursday, March 2, 2023 - link
I don't like AMD's clock behavior for Zen 3, so I did not pick it; add to that the buggy IO die they had. Zen 4 is very fast and scales with temperature, so I can choose Zen 4 over RPL (as RPL has the E-core garbage and the LGA socket engineering flaw). But tinkering is fun to me; controlling 100% of your CPU is interesting, and with Intel you get a high clock rate that sticks for all workloads. GPUs are boring because of that.
Jp7188 - Thursday, March 9, 2023 - link
Initially I preferred Intel overclocking and was frustrated with the poor (or negative) gains on Zen 3. It took me a while to get my process down for curve optimizer overclocking, but once I did, I had great fun overclocking Zen 3. It is by far the most elegant overclocking system, and the favored core(s) are scheduled well by Windows. Contrast that with Intel's latest, which I have not found as satisfying to tinker with, and where P/E core scheduling isn't perfect.
I haven't put hands on the 7000X3D, but I'm not looking forward to another heterogeneous core design. I will hold out hope that AMD can hit high clocks with the extra cache in the 8000 series. My feeling is that the 7nm cache slice doesn't clock as well as the 5nm CCD and requires more voltage, which holds back performance so that AMD can save a couple of bucks.
blkspade - Wednesday, March 22, 2023 - link
Yes, V-cache in consumer parts targets gamers. It's silly to think people that care about games are only gamers. The first thing to accept is that a $700 CPU already isn't for a lot of people. Nor was an $800 5950X. You can be a gamer with a wide range of computer uses that makes the density make sense. What do you do when you actually have a use for a CPU like a 5950X, and play games where the 5800X3D basically demolishes it in a meaningful way? MSFS, Star Citizen, and DCS, particularly in VR, benefit tremendously from the cache. A 5950X can be a bottleneck to even a 6800XT in those cases. You'd probably want a V-cache part without giving up your threads. The 7950X3D exists for those users.
Jdogdarkness - Wednesday, March 1, 2023 - link
I find AnandTech very good, but honestly this review feels like THE INTENT was to handicap the V-cache chips. They used the worst RAM they could find. Didn't even use the recommended speed. Kinda disappointing.
Gavin Bonshor - Thursday, March 2, 2023 - link
We test at JEDEC settings as a matter of principle and consistency. If I used DDR5-6000 on the X3D (the sweet spot according to AMD) but then used DDR5-7000 on Intel's 13th Gen, the results wouldn't be comparable. Using JEDEC settings allows us to consistently measure data via the manufacturer's specifications. Using XMP/EXPO is overclocking, and we've, for as long as I can remember, used JEDEC, and we will continue to do so.
silverblue - Friday, March 3, 2023 - link
The 3D chips appear to be far less memory-sensitive than the standard Zen 4 offerings. The handicapping generally appears to be down to a suboptimal method of detecting whether a "game" is running; I'm wondering whether cache usage or even application profiles would've made for a better metric. That way, your CS:GOs and other high-FPS games would get shunted off to the higher-frequency cores where they belong.
MrPhilo - Friday, March 3, 2023 - link
Questionable RAM speed was used for both Intel and AMD, especially AMD.
GruiaL - Sunday, March 5, 2023 - link
AMD saw that people are buying Intel chips with those silly efficiency cores, which are useless in gaming, so they did the same. The 7950X3D has 8 parked cores in games. This means that Intel's higher frequency = AMD's V-cache. You can either process more information faster, or you can keep it close by in that V-cache buffer for faster access. The results are the same.
Difference is, the latter sucks up less energy.
I bet the 7800X3D is going to perform the same in games as the 7950X3D, which is the same as the 13900K. The nice thing about that chip is the cost.
achinhorn - Monday, March 6, 2023 - link
This feels like there are gaping holes left to be examined, specifically with the scheduler/drivers and Windows 11. If you do these tests using manual BIOS settings, what changes?
This article is not helping me make a buying decision when I have only this article to compare current CPU offerings, and there appear to be issues with the CPU software.
soltys - Monday, March 6, 2023 - link
I'm still wondering how well this CPU would perform under a heavy load of both a game and a non-game running at the same time - the most typical of the latter being e.g. OBS using x264 at the medium or slow presets. Would the threads be assigned correctly? Or would that be a mess, as both CCDs would be unparked?
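Until someone benchmarks it, the load distribution is at least easy to watch yourself with a crude per-CCD monitor along these lines (my sketch; that logical CPUs 0-15 are the V-cache CCD and 16-31 the frequency CCD is an assumption you'd need to verify):

```python
# Crude per-CCD load monitor: run while gaming + encoding with OBS to
# see whether the game stays on one CCD and x264 spills onto the other.
# The CPU-to-CCD mapping is assumed; verify it on your own system.
import psutil

VCACHE = range(0, 16)   # assumed V-cache CCD
FREQ = range(16, 32)    # assumed frequency CCD

while True:
    per_cpu = psutil.cpu_percent(interval=1.0, percpu=True)
    vcache_load = sum(per_cpu[i] for i in VCACHE) / len(VCACHE)
    freq_load = sum(per_cpu[i] for i in FREQ) / len(FREQ)
    print(f"V-cache CCD: {vcache_load:5.1f}%  frequency CCD: {freq_load:5.1f}%")
```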