Wardrop - Friday, October 30, 2015 - link
Cool, may make for good multi-purpose NAS boards.
ddriver - Sunday, November 1, 2015 - link
It is overkill for a NAS IMO, multipurpose or not.
aebiv - Sunday, November 1, 2015 - link
Depends on your type of NAS. This is ideal for a FreeNAS setup.
Shadow7037932 - Monday, November 2, 2015 - link
Depends on the setup. If you're running ZFS this could work quite well.
twnznz - Friday, October 30, 2015 - link
Awesome. I can see targets for these already. These uATX boards with SFP cages would make perfect wirespeed 10GbE routers with a bit of DPDK sauce. Or an excellent SFF Ceph node.
iwod - Friday, October 30, 2015 - link
SSDs are shrinking in physical size on servers as well. When are we going to see that for DRAM? It seems those are wasting quite a lot of space.
iwod - Friday, October 30, 2015 - link
And I wonder why the server Atom supports NBASE-T 2.5Gbps but not the Xeon-D.
ganeshts - Friday, October 30, 2015 - link
Well, 10GBASE-T switches are becoming cheaper and more power efficient. NBASE-T is awesome as an intermediate step, but it doesn't seem to be getting enough market traction - I don't think you can buy an NBASE-T switch on the market right now. And, I fear that by the time those NBASE-T switches and gear become common, 10GBASE-T will have become quite affordable for consumers.
Blackfell - Sunday, November 1, 2015 - link
NBase-T hardware is out there; Cisco is currently shipping NBase-T hardware (in the form of the Catalyst 3850 model WS-C3850-24XU). Sure, the price point is...well...punishing, but it's out there.

Now, as to why NBase-T exists, it's less the switch side and more the client side. On the switch side, because NBase-T fills in the gaps between 1 and 10Gbps, I expect future 10GbE-capable switches to all support 2.5 and 5Gbps speeds, as it doesn't involve any major engineering challenges to do so. And, as you note, 10GBase-T hardware is rapidly coming down in price.

On the client side, there is a need for intermediate steps. The primary driver for NBase-T is wireless APs - 802.11ac wave 2 hardware needs more than 1Gbps, but 10GbE is substantially overkill. Additionally, since enterprise APs are PoE powered, there's a maximum power budget of 30W for an AP (802.3at standard), so putting a 10Gbps-capable Ethernet controller into that AP would end up pushing it beyond what a PoE+ switch could deliver. However, a 2.5 or 5Gbps controller could fit into the power budget and allow the AP to provide maximum throughput.
nils_ - Thursday, November 5, 2015 - link
It's also important to note that you may not be able to use 10GBase-T with existing cabling, but 2.5/5 might work.
Billy Tallis - Saturday, October 31, 2015 - link
SO-DIMMs exist, though they can sometimes be a bit crowded for the high-capacity registered modules that get used with servers. You won't see anything significantly smaller than that, because it would force the memory bus to be narrower. DIMM slots have to be able to provide a lot more bandwidth than any SSD, and they have to do it with a simple enough connection that latency isn't adversely affected by high-level packet oriented protocols. That means they need a lot of carefully laid out wires.
Gigaplex - Saturday, October 31, 2015 - link
HBM should be interesting on the Zen server APUs.
Samus - Sunday, November 1, 2015 - link
They've been talking about it, and trying various solutions from RDRAM to FBDIMMs, for years to bring a high-speed serial interface to DDR memory.

The problem is that traditional memory controller interfaces are parallel, which is slow, so you need a lot of connections to keep the IO high. A few wide serial interfaces (or 8-wide like what RDRAM used) would have similar throughput with fewer pins, but it is complex for a lot of reasons. The traces all have to be close to the same length because at the speeds these serial interfaces work at, the difference in the physical distance (locations) of the DIMMs actually matters for timing the signal. This makes motherboards additionally complex to trace out. There also has to be a termination, although some technologies allow for self-termination via fused-termination detection, which is what a SAS multiplexer does - but granted, DIMM slots are a lot different to engineer than a multi-connection cable. Lastly, there is the price.
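To put a rough number on the trace-matching point above - a back-of-envelope sketch, not from the article; the ~170 ps/inch propagation delay for FR-4 inner layers and the 10% skew budget are assumptions:

    # Rough skew-budget sketch for a DDR-style parallel bus (illustrative numbers only).
    # Assumption: signals on FR-4 inner layers propagate at roughly 170 ps per inch.
    PS_PER_INCH = 170.0

    def length_tolerance_inches(transfer_rate_mtps: float, skew_fraction: float = 0.1) -> float:
        """Trace-length mismatch that uses up skew_fraction of one unit interval."""
        unit_interval_ps = 1e6 / transfer_rate_mtps   # one UI in ps (DDR4-2400 -> ~417 ps)
        return unit_interval_ps * skew_fraction / PS_PER_INCH

    # Example: DDR4-2400, allowing 10% of a unit interval for routing skew
    print(f"{length_tolerance_inches(2400):.2f} inches")   # ~0.25 in of allowed mismatch

At those rates the whole data group has to be routed to within a fraction of an inch of equal length, which is where the layout complexity comes from.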
In the end, this really comes down to JEDEC, and because of the fear of RAMBUS trolling memory makers, I think they have artificially distanced themselves from serial interfaces. Should it be a surprise trolling is holding back technology?
BurntMyBacon - Tuesday, November 3, 2015 - link
@Samus: "The problem is traditional memory controller interfaces are parallel, which is slow, so you need a lot of connections to keep the IO high."Getting cause and effect backwards. It's not that you need a lot of IO because it is slow, it is you have to slow it down to maintain synchronization with such a large number of connections. The distinction is small in practice, but critical.
@Samus: "The traces all have to be close to the same distance because at the speeds these serial interfaces work at, the difference between the physical distance (locations) of the DIMMs actually matters for timing the signal."
This is more of an issue for parallel interfaces than serial interfaces. The move to PCIe from PCI reduced board complexity significantly. Rather than rely on all bits arriving at the same time (as in a parallel interface), each link is used as an independent path. You can imagine there is some overhead associated with keeping data in order.
Keep in mind that to replace a 128-bit memory bus with a single line, you would need to run 128 times faster. Parallel buses are slower, but not that slow. If you use multiple links (PCIe), then you incur more latency, as you now have to make sure packets are reordered. Further, serial protocols incur additional overhead as speed increases just to make sure the sending and receiving ends are properly synchronized. PCIe 1.1 uses a relatively simple encoding scheme that sends 10 bits for every 8 bits of data. The extra bits are lost bandwidth for the purpose of making sure the endpoints are synchronized. Another potential issue is that the power use, and consequently the heat, of a link eventually starts to rise faster than the speed of the connection.
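For concreteness, a quick sketch of the 8b/10b overhead mentioned above; the 2.5 GT/s raw rate is the standard PCIe 1.x per-lane figure, and the function itself is just illustrative:

    # 8b/10b line coding sends 10 bits on the wire for every 8 bits of data,
    # so only 80% of the raw signalling rate carries payload.
    def effective_gbps(raw_gtps: float, data_bits: int = 8, coded_bits: int = 10) -> float:
        return raw_gtps * data_bits / coded_bits

    lane = effective_gbps(2.5)                 # PCIe 1.x lane: 2.5 GT/s raw -> 2.0 Gbit/s of data
    print(lane, "Gbit/s =", lane / 8, "GB/s")  # 2.0 Gbit/s = 0.25 GB/s per lane, per direction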
There are situations (plenty of them) where this extra overhead and latency are less of an issue for overall throughput than the slowdown parallel interfaces have to incur to make sure the bits arrive at the same time. There is also the fact that it is impractical to make a parallel interface full duplex, while it is quite common in serial links. The longer the run, the worse the turnaround time for half-duplex connections. These longer runs, where it is harder to keep lines equal length and turnaround times are poor (e.g. PCIe), are typically best kept serial. There is also a practical physical limit to how wide you can go on an interface, though HBM just redefined what that limit is for some use cases.
@Samus: "In the end, this really comes down to JEDEC, and because of the fear of RAMBUS trolling memory makers, I think they have artificially distanced themselves from serial interfaces. Should it be a surprise trolling is holding back technology?"
I think JEDEC deserves a little more credit than you give them here. That said, there is no denying the effects RAMBUS has had on the industry. Your assessment of the effects of patent trolls on innovation is spot on. Patents were originally intended to be used sparingly and for the specific purpose of protecting the inventor's investment, by allowing them to sell their product without fear of others cloning it without the research investment. Too many companies today exist with a stranglehold on some technology, but no product to sell with said technology.
julianb - Saturday, October 31, 2015 - link
Can anyone please, please tell me how the Xeon D-1540 would compare to my current 4790K in a Cinebench multi-threaded test? I checked the original review link but there was no Cinebench test there. I realize these CPUs have different target markets in mind, but still... I do lots of 3D rendering and would like to buy 2-3 of these Xeon D-1540 motherboards as render nodes.
Am I right to think that the Xeon D's 8C x 2GHz is as powerful as the 4790K's 4C x 4GHz?
Thank you very much.
QinX - Saturday, October 31, 2015 - link
If you look at the original review for the Xeon-D: http://www.anandtech.com/show/9185/intel-xeon-d-re...
You can see it gets around 29k points vs 35k for the 2650L v3.
In CPU Bench a 2650L v3 gets 36.6K points and a 4790K gets 33.5K.
So I believe the 4790K performs around 10% better if my math is somewhat right.
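For what it's worth, here is that scaling worked through using only the numbers quoted above (a rough sanity check, not a re-measurement):

    # Scale the E5-2650L v3 CPU Bench score by the D-1540 / 2650L v3 ratio from the review,
    # then compare the estimated D-1540 score against the 4790K. All figures as quoted above.
    d1540_review, e2650l_review = 29_000, 35_000   # Xeon D review scores
    e2650l_bench, i4790k_bench = 36_600, 33_500    # CPU Bench scores

    d1540_estimate = e2650l_bench * d1540_review / e2650l_review   # ~30,300 points
    print(f"estimated D-1540: {d1540_estimate:,.0f}")
    print(f"4790K advantage:  {i4790k_bench / d1540_estimate - 1:.1%}")   # roughly +10%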
QinX - Saturday, October 31, 2015 - link
Of course the 4790K has a much higher TDP, so the Xeon-D has that going for it.
julianb - Saturday, October 31, 2015 - link
Thank you very much for that comparison, QinX. I hope these boards won't cost too much in that case.
MrSpadge - Sunday, November 1, 2015 - link
As far as I know those Xeon D are very expensive. Consider socket 2011-3 6-core CPUs as an alternative as well. Power efficiency is OK if you eco-tune them, although obviously not as good as for Xeon D.
TomWomack - Monday, November 2, 2015 - link
I have a Supermicro Xeon D 1540 board; it is not all _that_ power efficient - it takes 75W at the plug when running floating-point-intensive jobs on all eight cores. I agree that for most uses a 6-core Haswell-E is a better way to go - similar or lower price, twice the memory bandwidth.
julianb - Monday, November 2, 2015 - link
Thanks for the input to you too, guys. Kind of disappointing then - I was hoping these wouldn't cost too much, not to mention the 45W envelope. If it goes up to 75W though, that is not so much different from the 88W of my 4790K...
oh well....
Jaybus - Monday, November 2, 2015 - link
The Xeon-D doesn't go up to 75W. That was 75W at the plug, meaning the entire machine draws 75W of line power under heavy load.
TomWomack - Wednesday, November 4, 2015 - link
I happen also to have an i7-4790K system, which draws about 120W at the plug under heavy load.

Of course these figures are neither here nor there - the cost-optimised 4790K will cost its purchase price in electricity in five years, the more efficient and more expensive Xeon D will cost its purchase price in electricity in something like fourteen years.
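Those payback estimates obviously depend on electricity price and purchase cost; here is the arithmetic as a sketch - the $0.15/kWh rate and the two purchase prices are placeholder assumptions, not figures from the thread:

    # Years of 24/7 operation until the electricity bill equals the purchase price.
    # Assumptions (placeholders): $0.15/kWh and rough system prices - adjust to taste.
    KWH_PRICE = 0.15

    def years_to_match_purchase_price(watts: float, price_usd: float) -> float:
        kwh_per_year = watts / 1000 * 24 * 365
        return price_usd / (kwh_per_year * KWH_PRICE)

    print(years_to_match_purchase_price(120, 700))    # 4790K box at 120 W -> roughly 4-5 years
    print(years_to_match_purchase_price(75, 1500))    # Xeon D box at 75 W -> roughly 15 years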
evancox10 - Saturday, October 31, 2015 - link
I'm confused about the 10 GbE Base-T ports. Aren't these provided on-die by the Xeon-D? Why then would the controller be different for one chip vs another? Or did ASRock put down a separate controller? That seems unnecessarily expensive given the on-die controllers in Xeon-D.
ganeshts - Sunday, November 1, 2015 - link
OK, I can explain this for you after doing some research on the board components as well as Xeon-D:

First of all, the 10G support that is on-die in Xeon-D is 2x10G KR (copper backplane). It needs a PHY for translation to SFP+ or 10GBASE-T.
In the mITX boards, the X557-AT2 is the PHY part that provides a 10GBASE-T interface and it is connected to the 10G KR ports on the Xeon-D SiP. [ http://www.intel.com/content/www/us/en/embedded/pr... ]
Now, the uATX boards are more interesting. All the uATX boards actually integrate the Cortina Systems CS4227 [ http://www.inphi.com/products/cs4227-cs4223-cs4343... - Cortina was acquired by Inphi in mid-2014 ]. This is a PHY connected to the 10G KR ports of the Xeon-D SiP. The CS4227 chip is present even in the board which doesn't have the SFP+ ports.
In order to get 10GBASE-T ports on the uATX boards, there is no option but to use a separate 10GBASE-T controller like the X540 or X550 and connect it to the PCIe 3.0 lanes from the Xeon-D.
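As I read that explanation, the 10G datapaths on these boards break down roughly as follows (a summary sketch of the comment above, not verified against board schematics):

    # Xeon-D 10G connectivity on the ASRock Rack boards, per the explanation above.
    XEON_D_10G_PATHS = {
        "mITX, 10GBASE-T": "Xeon-D 10G KR -> Intel X557-AT2 PHY -> 10GBASE-T ports",
        "uATX, SFP+":      "Xeon-D 10G KR -> Cortina/Inphi CS4227 PHY -> SFP+ cages",
        "uATX, 10GBASE-T": "Xeon-D PCIe 3.0 lanes -> separate X540/X550 NIC -> 10GBASE-T ports",
    }

    for config, path in XEON_D_10G_PATHS.items():
        print(f"{config:16} {path}")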
evancox10 - Sunday, November 1, 2015 - link
Ok, thanks for the clarification!
cygnus1 - Monday, November 9, 2015 - link
So that means that the uATX boards with copper 10G actually have a separate NIC on PCIe instead of using the Xeon-D on-die 10G interfaces? That definitely seems less cost effective.
WatcherCK - Sunday, November 1, 2015 - link
Liking the computing power and density that ASRock are offering with their Xeon D miniboards. Two questions jump to mind, however. One, as someone else raised, is what the final price will be (am I looking at $1000+ NZ for the D1540D4U-2L, plus more for 10G?).

The other question is about the reliability of the ASRock Rack boards - can people offer their 5c about their experiences with how reliable Rack boards are? The Newegg comments for the C2750D4L vary quite a bit in the experiences people have had with that particular board...
nils_ - Thursday, November 5, 2015 - link
$1k is probably spot on.
Ninhalem - Monday, November 2, 2015 - link
Do the memory slots on the mITX boards support ECC?
kwrzesien - Monday, November 2, 2015 - link
Yes, they are all listed as RDIMMs in the specs, which (should/always?) imply ECC. So no UDIMMs here, unless that is optional. UDIMMs of course come in two flavors: ECC and not.
mctylr - Monday, November 2, 2015 - link
While most RDIMMs (registered DIMMs: https://en.wikipedia.org/wiki/Registered_memory) are ECC, RDIMM does not imply ECC; it simply means the memory is buffered.
Ninhalem - Wednesday, November 4, 2015 - link
I checked with ASRock directly on their Facebook page and confirmed those slots do support ECC memory.
mdw9604 - Monday, November 2, 2015 - link
How much do these cost?
creed3020 - Wednesday, November 4, 2015 - link
I would like a D1520D4I please! That looks like an awesome platform to build a true next-gen home server which would make my current NAS cry in shame.
jb510 - Saturday, January 2, 2016 - link
The Supermicro Xeon D boards have been out for a little while now... anyone have any idea when these ASRock boards will actually reach the retail market?