23 Comments
peevee - Tuesday, February 13, 2018
Without Intel, it is dead in the water for now in the mainstream. Better memory interfaces are necessary, of course; the current memory interfaces look straight out of the 80s.
But for device interfaces... PCIe 4 is around the corner, probably in the very next Intel and AMD architectures.
Pork@III - Tuesday, February 13, 2018
PCI-SIG... they have been asleep for far too long while we make do with the impoverished relic that is PCIe 3.0.
rahvin - Tuesday, February 13, 2018
Though Intel's absence will slow down adoption, Intel has been dragged kicking and screaming into standards before.
beginner99 - Wednesday, February 14, 2018
"PCIe 4 is around the corner, probably in the very next Intel and AMD architectures."Issue is that PCIe 5 is just 1-2 years behind it and current indication is that we will go directly to PCIe 5. Only thing that would benefit from PCIe 4 is the CPU-chipset connection. On the other hand intel/amd could just offer 8 or 16 lanes to the chipset and that issue would be gone as well.
GPUs themselves aren't even limited at 8xPCIe 3.0 at least not for gaming or other consumer tasks. Thing is since at least 5 years hardware is mostly good enough for the average user. The market will separate much stronger than previously into consumer and professional/server parts. The later likes powerful gpus and accelerators that need fast links to the CPU, fast and/or persistent memory and so forth. All this bleeding edge stuff has 0 benefits for the consumer.
Pork@III - Wednesday, February 14, 2018
Forget gamers; the GPU is not the only device connected via PCIe.
peevee - Tuesday, February 20, 2018
Exactly. M.2 SSDs have been pushing 4-lane speeds for some time now, and it's not like anybody is going to give them more.
willis936 - Wednesday, February 14, 2018
"0 benefits for the consumer"ye I really like my $500 SSD upgrade in my laptop being limited by an interconnect.
Santoval - Saturday, February 17, 2018
"All this bleeding edge stuff has 0 benefits for the consumer."
Have you ever heard of M.2 NVMe SSDs, particularly when you use 3 or 4 of them in RAID? What about networking, or dual/triple GPUs for gaming, rendering or video editing work? Or using 8 to 10 GPUs for mining, via adapters that provide them with just x4 or x2 PCIe 3.0 links each?
All of the above are starved for PCIe 3.0 lanes, and with PCIe 4.0 you can use half the lanes for the same I/O speed, the same number of lanes to double your I/O speed, or any combination between these two.
You can go even further with PCIe 5.0 (a quarter of the lanes for the same speed, etc.), but it is still unclear when that will be commercially available. Fewer lanes mean a simpler motherboard design (fewer traces) and potentially simpler CPU PCIe controllers. If we used PCIe 4.0 or even 5.0 *today*, strictly with 1/2 and 1/4 the number of lanes respectively, nobody would call them "bleeding edge", since the speed they provided would be the same. So what is "bleeding edge" is largely a matter of perspective, and is meaningless without context.
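If you want to sanity-check that lane math, here is a minimal back-of-the-envelope sketch. It assumes only the nominal signalling rates (8/16/32 GT/s for PCIe 3.0/4.0/5.0) and 128b/130b line encoding, and ignores packet/protocol overhead, so real-world numbers are a bit lower:

/* Back-of-the-envelope PCIe throughput per lane. Nominal signalling
 * rates only; real links lose a little more to packet overhead. */
#include <stdio.h>

int main(void) {
    const char  *gen[] = {"3.0", "4.0", "5.0"};
    const double gts[] = {8.0, 16.0, 32.0};   /* giga-transfers per second */

    for (int i = 0; i < 3; i++) {
        /* 128b/130b encoding: 128 payload bits for every 130 line bits */
        double gbytes = gts[i] * (128.0 / 130.0) / 8.0;  /* GB/s per lane */
        printf("PCIe %s: %.2f GB/s/lane | x2 %5.2f | x4 %5.2f | x16 %6.2f\n",
               gen[i], gbytes, 2 * gbytes, 4 * gbytes, 16 * gbytes);
    }
    /* The output shows a 4.0 x2 link (~3.9 GB/s) matching a 3.0 x4 link,
     * and a 5.0 x1 link matching it again: same speed, half or a quarter
     * of the traces. */
    return 0;
}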
peevee - Tuesday, February 20, 2018
"current indication is that we will go directly to PCIe 5"
Which indication is that? And why? PCIe 4 is standardized; they can produce stuff now.
Intel needs PCIe 4 ASAP, given how few PCIe lanes their mainstream chips support...
mode_13h - Wednesday, February 21, 2018
AMD needs PCIe 4 ASAP in order to answer NVLink. The Infinity Fabric underpinning their multi-die CPU setups would also benefit.
tuxRoller - Wednesday, February 14, 2018
You... didn't read the article.
CheapSushi - Tuesday, February 13, 2018
I'm guessing Intel doesn't like this because Gen-Z was aiming to be an "open and royalty-free" standard. Plus, Gen-Z's scope is broad: it could be used not just in place of PCIe, but also where Omni-Path, QPI, DMI, etc. sit today, and they don't like that. I suppose it's in Intel's interest to continue market segmentation / product stratification through bus types, etc., when possible.
Dragonstongue - Wednesday, February 14, 2018
And then there is Nvidia: they seem to want completely non-open ways. They do not want to cede control of anything; they want to be the only vendor, or to control the standard. Intel is very much the same way.
They want to be a cookie factory: push out the product and nothing more, not make the product better (unless they absolutely have to).
As for the first person's comment about this being dead in the water without Intel... Intel may still be the number 1 producer of computer chips, but they are far from the only one selling mega volumes of them annually. AMD is selling a crap ton of chips (Ryzen comes to mind, plus Threadripper, their mobile parts, etc.).
The point is, many other makers are oftentimes WAY more willing to work on open standards instead of bottling everything up for their own benefit (Intel, Nvidia and Apple are birds of a feather, wanting proprietary BS for no real good reason in many cases).
Work together to make it better: maybe things can be tweaked to lower cost, increase performance, reduce power, etc. There is no single perfect company (that is impossible), so consortiums such as this can be a great thing if they have the support and guidance to "make it happen".
Yojimbo - Wednesday, February 14, 2018
Why should NVIDIA spend the money and manpower, or donate IP, to join the consortium (whatever the requirements are) when at this point they don't seem to get much out of it? They don't make host devices (CPUs), memory, or deal with networking, so they are entirely on the periphery of the standard. If it becomes a viable standard, I'd imagine they'd support it just like they support PCI Express. Without Intel using it, however, it helps them very little.
Yojimbo - Wednesday, February 14, 2018
BTW, if this becomes a standard that Intel supports, I can't see how NVIDIA wouldn't jump at the opportunity to use it. NVIDIA can't currently get a high-speed connection to main memory on Intel's platform.
rahvin - Thursday, February 15, 2018
Nvidia most certainly does do everything you listed that they supposedly don't. Either you're ignorant of their actual products or you're trolling.
mode_13h - Saturday, February 17, 2018
Oh, puh-leez. They are openly hostile to standards. They are stuck at OpenCL 1.2, and don't support any version of it on Tegra. There's no technical justification for that.
They also made NVLink, rather than embracing any open alternative or even just using the draft PCIe 4.
They are also passively hostile towards the open-source community, in that all the libraries needed to use their products effectively are proprietary, and they give very little assistance (currently none) to the team writing open-source Linux drivers for their hardware.
I'm not an Nvidia hater (I have some of their products and we buy more at my job), but it's only fair to call out their bad behavior.
ET - Wednesday, February 14, 2018
I'd love to see a more detailed analysis of the standard. I'm not sure exactly what the standard allows, but the possibilities seem tantalising: having several types of RAM in the system (such as DDR4 and GDDR6), adding support for a new type of RAM via an expansion card, giving the GPU and CPU pretty much direct access to each other's RAM, ...
mode_13h - Wednesday, February 14, 2018
You can already do that (CPU & GPU accessing each other's RAM; see the sketch at the end of this comment). Try to step outside of that PC-centric mindset. This standard is really about the cloud.
IMO, what's most interesting about Gen-Z is that it's like a hybrid between a networking standard and an internal peripheral bus. This sentence says it all:
"Gen-Z had a multi-vendor demo of four servers sharing access to two pools of memory through a Gen-Z switch."
Enabling machines to share storage is a pretty big deal. Having banks of remote storage is a pretty big deal.
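To make that first point concrete, here is a minimal sketch of a GPU reading and writing ordinary CPU RAM directly over PCIe today, using CUDA's mapped ("zero-copy") host memory. It assumes an Nvidia GPU that supports host-memory mapping, and it omits error checking for brevity:

// Zero-copy sketch: the GPU dereferences pointers into host (CPU) RAM
// over PCIe; no cudaMemcpy is ever issued.
#include <cstdio>
#include <cuda_runtime.h>

// Kernel runs on the GPU but reads/writes buffers that live in CPU RAM.
__global__ void scale(const float* in, float* out, int n, float k) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = k * in[i];
}

int main() {
    const int n = 1 << 20;
    float *h_in, *h_out;   // host-side pointers (CPU RAM)
    float *d_in, *d_out;   // device-side aliases of the same memory

    cudaSetDeviceFlags(cudaDeviceMapHost);  // allow mapping host allocations

    // Page-locked host memory, mapped into the GPU's address space.
    cudaHostAlloc((void**)&h_in,  n * sizeof(float), cudaHostAllocMapped);
    cudaHostAlloc((void**)&h_out, n * sizeof(float), cudaHostAllocMapped);
    for (int i = 0; i < n; i++) h_in[i] = 1.0f;

    cudaHostGetDevicePointer((void**)&d_in,  h_in,  0);
    cudaHostGetDevicePointer((void**)&d_out, h_out, 0);

    scale<<<(n + 255) / 256, 256>>>(d_in, d_out, n, 2.0f);
    cudaDeviceSynchronize();  // every element crossed the PCIe link

    printf("out[0] = %f\n", h_out[0]);  // 2.0, computed by the GPU in CPU RAM
    cudaFreeHost(h_in);
    cudaFreeHost(h_out);
    return 0;
}

It works, but every access crosses the PCIe link, which is why nobody mistakes this for a local memory bus.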
mode_13h - Wednesday, February 14, 2018
Oh, and because of that, I don't see it really competing with PCIe 4/5. In the near term, those will probably rule the inside of machines, where latency and bandwidth are of prime importance, while Gen-Z becomes more of a rack-scale interconnect standard.
Then, Intel can go take their Omni-Path and suck it.
ET - Thursday, February 15, 2018
Stepping outside the PC-centric mindset is precisely what I don't want. I want to see what this can bring to consumer PCs, and how PCs can develop. I still see AnandTech predominantly as an enthusiast site, and I want to get that angle.
And yes, the CPU & GPU may be able to access each other's RAM, but not in an effective manner. PCIe is very slow as a RAM interface.
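The bandwidth gap alone makes the point, before even counting latency. A rough sketch, assuming dual-channel DDR4-3200 against a full PCIe 3.0 x16 link, nominal peak numbers only:

/* Rough comparison: local DRAM bandwidth vs. PCIe as a "RAM interface".
 * Assumes dual-channel DDR4-3200; nominal peaks, no protocol overhead. */
#include <stdio.h>

int main(void) {
    /* DDR4-3200: 3200 MT/s * 8 bytes per channel * 2 channels */
    double ddr4 = 3200e6 * 8 * 2 / 1e9;                   /* ~51.2 GB/s */
    /* PCIe 3.0 x16: 8 GT/s * (128/130 encoding) / 8 bits * 16 lanes */
    double pcie3_x16 = 8.0 * (128.0 / 130.0) / 8.0 * 16;  /* ~15.8 GB/s */

    printf("Dual-channel DDR4-3200: %.1f GB/s\n", ddr4);
    printf("PCIe 3.0 x16:           %.1f GB/s (%.1fx slower, before latency)\n",
           pcie3_x16, ddr4 / pcie3_x16);
    return 0;
}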
mode_13h - Saturday, February 17, 2018
Just because you are trying to see it from a PC perspective doesn't mean this tech will ever make it to your PC. There's a lot of tech which falls into that category, a trend that is only going to continue as the PC market shrinks while the cloud market grows.
If you've been reading this site for very long, you'll know they cover plenty of cloud & mobile stories, not just consumer PCs.
GreenReaper - Friday, September 7, 2018
If only they'd picked a unique name. Don't they know that Generation Z is already a thing?