78 Comments

  • hojnikb - Tuesday, December 16, 2014 - link

    Wow, I have never seen a motherboard that simple :)
  • CajunArson - Tuesday, December 16, 2014 - link

    OK, you devote another huge block of text to the typical x86 complexity myth*, followed by: oh, but the ARM chips are superior because they have special-purpose processors that overcome their complete lack of performance (both raw and performance per watt).

    Uhm... WTF?? I need a proprietary, poorly documented add-on processor to make my software work well now? How is that a "standard"? How exactly does requiring a proprietary add-on processor that's not part of any standard, plus boatloads of software cruft, qualify as a "reduced instruction set architecture"?

    I might as well take the AVX instruction set for modern x86 (which is leagues ahead of anything ARM has available) and say that x86 is now a "RISC" architecture because the AVX part of x86 is just as clean as or cleaner than anything ARM has. I'll just conveniently forget about the rest of x86, just like the ARM guys conveniently forget about all the non-standard "application accelerators" that are required to actually make their chips compete with last year's Atoms.

    * Maybe in a micro-controller setting where you are using a PIC or Arduino the x86 decoding is a real issue, but in a server? Please. Considering the only hard numbers you have show a 2013-model Atom beating a 2015-model ARM server processor, you'll have to try harder.
  • hlmcompany - Tuesday, December 16, 2014 - link

    The article describes ARM chips as becoming more competitive, but still lagging behind...not that they're superior.
  • Kevin G - Tuesday, December 16, 2014 - link

    The coprocessor idea stems from mainframe philosophy. Historically, things like IO requests and encryption were always handled by coprocessors in this market.

    The reason coprocessors faded away outside of the mainframe market is that it was generally cheaper to do a software implementation. Now with power consumption being more critical than ever, coprocessors are seen as a means to lower overall platform power while increasing performance.

    Philosophically, there is nothing that would prevent the x86 line from doing the same, and for the exact same reasons. In fact, with PCIe based storage and NVMe on the horizon in servers, I can see Intel incorporating a coprocessor to do parity calculations for RAID 5/6 in their SoCs.
  • kepstin - Tuesday, December 16, 2014 - link

    Intel has already added some instructions in AVX and AVX2 that vastly improve the performance of software RAID 5 and 6; the Haswell chip in my laptop has the Linux software RAID implementation claiming 24 GiB/s for RAID 5 with AVX and 23 GiB/s for RAID 6 with AVX2 (per core).
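    As a rough illustration of what those instructions accelerate: RAID 5's P parity is just a byte-wise XOR across the data disks, i.e. a loop like the minimal C sketch below (an illustration, not the actual Linux md code), which is exactly the kind of loop SSE/AVX kernels chew through 16 or 32 bytes at a time. RAID 6 adds a second "Q" syndrome computed over GF(2^8), which is the part that benefits from AVX2.

        #include <stddef.h>
        #include <stdint.h>

        /* Minimal sketch of RAID 5 P-parity: the parity byte is the XOR of the
         * corresponding byte on every data disk. Hand-tuned SSE/AVX routines
         * (and auto-vectorizing compilers) process this loop many bytes per
         * iteration instead of one. */
        static void raid5_parity(uint8_t *parity, uint8_t *const data[],
                                 size_t ndisks, size_t stripe_len)
        {
            for (size_t i = 0; i < stripe_len; i++) {
                uint8_t p = 0;
                for (size_t d = 0; d < ndisks; d++)
                    p ^= data[d][i];        /* XOR across all data disks */
                parity[i] = p;
            }
        }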
  • MrSpadge - Tuesday, December 16, 2014 - link

    Of course additional power draw for more complex instruction decoding matters in servers: today they are driven by power efficiency! The transistors may not matter as much, but in a multi-core environment they add up. Using the quoted statement from AMD of "only 10% more transistors" means one could place 11 RISC cores in the same area, for the same cost, as 10 otherwise identical x86 cores. Johan said it perfectly with "the ISA is not a game changer, but it matters".

    And you completely misunderstood him regarding the accelerators. Intel is producing "CPUs for everyone" and hence provides only a few accelerators or special instructions. In the ARM ecosystem it's obvious that vendors are searching for niches and are willing to provide custom solutions for them, hence the chance is far higher that they provide some accelerator which might be game-changing for certain applications.

    This doesn't mean the architecture has to rely entirely on them, neither does it mean they have to be undocumented. The accelerators do not even have to be faster than software solutions, as long as they're easy enough to work with and provide significant power savings. Intel is doing just that with special-purpose hardware in their own GPUs.

    And don't act as if much would have changed in the Atom space ever since 22 nm Silvermont cores appeared. It doesn't matter if it's from 2013 or 2015 - it's all just the same core.
  • OreoCookie - Tuesday, December 16, 2014 - link

    What's with all the unnecessary piss and vinegar?

    All CPU vendors rely increasingly on specialized silicon; newer Intel CPUs feature special crypto instructions (AES-NI) and Quick Sync, for instance (a minimal sketch of how software detects one of these accelerators follows at the end of this comment). Adding special-purpose hardware to augment the system (in the past usually done for performance reasons) is quite old, just think of hardware RAID cards and video »accelerators« (which were not yet called GPUs). The reason that Intel doesn't add more and more of these is that they build general-purpose CPUs which are not optimized for a specific workload (the article gives a few examples). In other environments (servers, mobile) the workload is much more clearly defined, and you can indeed take advantage of accelerators.

    The biggest advantage of ARM CPUs is flexibility -- the ARM ecosystem is built on the idea of tailoring silicon to your demands. This is also a substantial reason why Intel's efforts in the mobile market have been lackluster. Recently, Synology announced a new professional NAS (the DS2015xs) which is ARM-based rather than Intel-based. Despite its slower CPU cores, the throughput of this thing is massive -- in part because it sports two (!) 10 Gbit Ethernet ports out of the box. Vendors are looking for niches where ARM-based servers could gain a foothold, so they are trying a lot of things to see what sticks.
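    To make the AES-NI point concrete, here is a minimal sketch (assuming GCC or Clang on x86; an illustration, not any particular library's code) of the CPUID check software typically performs before switching to the hardware AES path. AES-NI support is reported in CPUID leaf 1, ECX bit 25:

        #include <cpuid.h>   /* GCC/Clang helper for the CPUID instruction */
        #include <stdio.h>

        int main(void)
        {
            unsigned int eax, ebx, ecx, edx;

            /* CPUID leaf 1: ECX bit 25 indicates AES-NI support. */
            if (__get_cpuid(1, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 25)))
                puts("AES-NI present: AES rounds run on dedicated silicon");
            else
                puts("No AES-NI: falling back to a software AES implementation");
            return 0;
        }

    Crypto libraries typically do an equivalent check at startup and pick the accelerated code path automatically, which is why such instructions stay transparent to most applications.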
  • goop666666 - Saturday, December 20, 2014 - link

    LOL! Most of the comments here, like this one, seem to be written by people who think computers should all be like gaming machines or something.

    Here's a tip: no one cares about "complexity," "standardization," "RISC," or anything else you mention. All anyone cares about in the target market for ARM server chips is price, performance and power, and I mean ALL THREE.

    On this Intel cannot compete. They sell wildly overpriced legacy hardware propped up by massive R&D expenditures and they're wedded to that model. The rest of the industry is wedded to the new and cheap model. Just like how the industry moved to mobile devices and Intel stood still, this change will also wash over Intel while they sit still in denial.

    There's a reason why Intel stock has gone nowhere for years.
  • nlasky - Monday, December 22, 2014 - link

    Jan 8, 2010: Intel stock price $20.83. Dec 19, 2010: Intel stock price $36.37. If by "gone nowhere for years" you mean increased by roughly 75%, I guess you would be correct. Intel can't compete because they are wedded to their model? They have a profit margin of 20% and an operating margin of 27%. They could easily cut prices to compete with any ARM offerings. Servers have been around forever, unlike the mobile computing platform, and Intel has an even larger stranglehold on that industry than ARM has in the mobile space. Here's a tip: stop spewing a bunch of uninformed nonsense just to make an argument.
  • nlasky - Monday, December 22, 2014 - link

    *Dec 19, 2014
  • jjj - Tuesday, December 16, 2014 - link

    If you look at phones and tablets, we might be getting some rather big custom cores in 2015 and 2016. Apple and Nvidia already have that, though of course much smaller than Intel's cores when adjusting for process (actually that's an assumption when it comes to Denver, since I don't think we've seen any die shots).
    Intel, at the same time, is pushing in consumer for more non-CPU/GPU compute units and low power, and they might face a tough question about core size and even process (whether they target low clocks and low power, or the opposite). Got to wonder if at some point they'll have to go for a big core just for server. That would make things even more interesting.
    It might not matter, but Apple kind of has the perf for an ARM MacBook Air if they go quad. Not something worth doing for such low volume, but doable when they go quad on all iPads, or sooner if they launch a bigger iPad. That could be a trigger for others pushing more ARM-based Chromebooks and beyond, which would set the stage for even bigger ARM cores.
    I also get the feeling Nintendo will go ARM in 2016, and there aren't many reasons for Sony and M$ not to go that way if they ever make a new gen. That's just another market for bigger ARM cores; any significant revenue helps with dev costs, so it matters.
  • CajunArson - Tuesday, December 16, 2014 - link

    1. The Core-m is widely derided as not being fast enough for the MacBook Air.
    2. The Core-m is easily twice as fast as the A8X in benchmarks that count... even Anandtech's own benchmarks show that. Furthermore, when you step away from web browsers and get to use the advanced features of the Core-m like AVX, that advantage jumps to about 8x faster in compute-heavy benchmarks like Linpack.
    3. Even the mythical A9 coming in 2015 is expected to have roughly a 20% performance boost over the A8x.
    4. Any real computer using an ARM chip would have to have a translation layer just like the old Rosetta to run the huge library of x86 software out there. Rosetta sort of worked because the Core 2 chips from Intel were *massively* faster than the PowerPC parts they replaced. Now you expect to run the translation overhead on an A9 chip that is slower -- by a large margin -- than the Core-m parts you've already derided as not being good enough?

    Yeah, I'm not holding my breath.
  • fjdulles - Tuesday, December 16, 2014 - link

    You may be right, but remember that ARM chips using the same power budget as Intel core i* will no doubt be clocked higher and perform that much better. Not sure if that will be competitive but it would be interesting to see.
  • wallysb01 - Tuesday, December 16, 2014 - link

    Only if you want a glorified tablet as a laptop. The software most people use for real work on laptops/desktops is not going to be ported over to ARM at any great speed, even if ARM chips could do that work reasonably well.
  • Kevin G - Wednesday, December 17, 2014 - link

    I'm under the impression that a good chunk has already been ported. MS Office, for example, is native ARM on Windows RT. Various Linux distributions have ARM ports complete with ARM-based office and desktop software. The main things missing are some big commercial applications like Photoshop etc.

    The server side of things is similar, with Linux and open source ports. MS is weirdly absent, but I suspect that an ARM based version of Windows 2012/2014 is waiting on major hardware to be released. Much of the Windows base is already ported over to ARM due to Windows RT.
  • Kevin G - Wednesday, December 17, 2014 - link

    Indeed. How ARM platforms perform once power constraints are removed is largely unknown. So far, all the core designs that have shipped in products have been used in mobile, where SoC power consumption is less than 5 W. What a 100 W product would look like is an open and very interesting question.
  • Ratman6161 - Wednesday, December 17, 2014 - link

    If they "use the same power budget as an Intel core i*" then what would be the point?
  • jjj - Tuesday, December 16, 2014 - link

    Ok, you are focusing on the wrong thing, but let's do that anyway.
    I have never claimed that Apple's own SoC would beat Intel's current SoCs, just that the perf would be enough if they go quad and obviously clock higher.
    When you talk Core M, you should remember that the price at launch was $281, so it's not good value for anything.
    Anyway, how about you compare a possible Apple SoC with a MacBook Air from 2011? Let's face it, the Air is a crap machine anyway: not much perf and a TN panel for whatever ridiculous price it costs now, and its users are certainly not doing any heavy lifting with it.
    At the same time, Apple's own $15-20 SoC would allow them a much cheaper machine and a presence in a price segment they have never competed in, adding at least $5B of revenue per year (including cannibalization) and a 2-3% share gain in PC.
    But then again, the point was that there are a bunch of trends that could favor bigger ARM cores.
  • Morawka - Wednesday, December 17, 2014 - link

    It might cost them $20 for the A8X in fab cost, but the R&D for that chip is in the tens of millions. Factor that in over however many they ship, and it adds at least another $20 per chip.
  • jospoortvliet - Wednesday, December 17, 2014 - link

    All the more reason, then, that this would save them money by spreading the fixed costs over more devices...
  • esterhasz - Thursday, December 18, 2014 - link

    But this is exactly why a wider array of machines based on their chips would make sense: the R&D cost is already spent anyway, since the iPhone and iPad need chips, so selling more units reduces the R&D cost per unit. Economies of scale.

    I don't believe a MBA variant with ARM is down the road either, but the rumored iPad Pro could develop into something similar rather quickly.
  • OreoCookie - Tuesday, December 16, 2014 - link

    If you want to talk about ARM on the desktop, that's a whole other discussion, but one that most certainly needs to include price: if the price difference between a Broadwell-based Core M and a fictitious Apple A9X is $200~$230, then this changes the discussion completely. Two other factors are graphics performance (the Core M has »only« 1.3 billion transistors, the A8X ~2 billion, indicating that the mythical A9X may have faster graphics) and the fact that Apple controls the release schedule and can spec the SoC to meet its projected needs. To view this topic solely through the lens of CPU performance is myopic.
  • darkich - Friday, December 19, 2014 - link

    Your comparison misses the picture spectacularly.
    The A8X is a 20nm chip with a 2-4W TDP and a price that is probably around $70.
    The top-of-the-line Core M 5Y70 is a 14nm chip with a 4.5W TDP and a price of $270.
    And it has a weaker GPU in raw performance, btw. It also throttles massively, effectively giving only 50% of its benchmark performance.

    If you're going to compare that to an Apple chip, compare it to a 14nm A9X with a custom-derived PowerVR Series 7 GPU (which scales up to 1.4 TFLOPS) and vastly expanded memory controllers connected to much faster RAM (compared to the one in the iPad), upclocked to 2GHz -- something that could be available at any time.
  • darkich - Friday, December 19, 2014 - link

    .. *with cores upclocked to about 2GHz
  • Flunk - Tuesday, December 16, 2014 - link

    Nintendo already sells ARM systems; the 3DS, and the DS before it, are both ARM-based. The PS Vita is ARM too. I don't see an ARM MacBook Air anytime soon, though: they would need a bigger and higher-clocking chip for that, and it doesn't look like one is coming anytime soon.
  • Nintendo Maniac 64 - Tuesday, December 16, 2014 - link

    Even the Game Boy Advance used an ARM7 for its main CPU.
  • jjj - Tuesday, December 16, 2014 - link

    Obviously there are handhelds using ARM but the point was about bigger cores and clearly not handhelds.
  • DLoweinc - Tuesday, December 16, 2014 - link

    Don't quote Wikipedia; it's not suitable for this level of writing.
  • garbagedisposal - Tuesday, December 16, 2014 - link

    Says DLoweinc, master of knowledge and scholarly writing.
    In contrast to your childish and outdated opinion, Wikipedia is a perfectly valid source of information; go read about it and quit crying.
  • Daniel Egger - Tuesday, December 16, 2014 - link

    The problem really is that the custom solutions simply cannot compete with Intel on any level for general purpose computing (which is what the majority of applications are): not on performance/price, not on performance/power, and not even on features/price.

    For instance, I can see a huge market for sub-Xeon (or Atom C) performance at a corresponding price. That's not going to happen, because everyone is targeting above-Xeon performance at ridiculous prices, expecting the margin to be there; however, that forces too many compromises on buyers, so it has to fail.

    I can also see huge demand for performance between Atom C and Xeon at lower power consumption, yet no one seems to be really targeting this; all we get are Raspberry Pis and boards that are a bit beefier but still far from even an Atom C. The new virtualisation techniques (Docker et al.) opened up a whole new set of possibilities for non-x86(_64) devices, because virtualisation is suddenly possible and much more lightweight than ever before, but no one seems to want to jump on this opportunity.

    I'd really like to buy some affordable general purpose (BYOM/BYOS) hardware which has a little bit of oomph and takes little power, which should be the strong side of any of the contenders, but somehow they all fail to deliver and I don't even see an attempt to change that.

    If I want mind-boggling performance at a decent performance/price ratio with real virtualisation and 100% standard software compatibility, there's no way around the high-end Xeons (and maybe AMD, if they manage to get their asses back up), and none of the contenders is ever going to challenge that, so they might as well stop trying.
  • beginner99 - Tuesday, December 16, 2014 - link

    Agree. I just don't see it. What wasn't mentioned, or I might have missed it, is Intel's turbo technology. Does ARM have anything similar? Single-threaded performance matters. If a website takes double the time to be built by the server, the user can notice it, and given the complexity of modern web sites this is IMHO a real issue. Latency, or "service time", is greatly affected by single-threaded performance. That's why virtualization is great: put tons of low-usage stuff on the same physical server, and yet each request profits from the single-threaded performance.

    Now these ARM guys are targeting this high single-threaded performance, but why would any company change? The whole software stack would have to change as well, and don't forget that the software usually costs way, way more than the hardware it runs on. So if you save 10% on the SoC, you maybe save less than 1% on the total BOM including software. They can't win on price, and on performance/watt Intel still has the best process. So no, I don't see it, except for niche markets like those MIPS SoCs from Cavium.
  • Ratman6161 - Wednesday, December 17, 2014 - link

    "Xeon performance at ridiculous prices" I just don't get the "ridiculous prices" comment. To me, it seems like hardware these days is so cheap they are practically giving it away. I remember in the days of NT 4.0 Servers we paid $40K each for dual socket Dell systems with 16 GB Ram.

    A few years later we were doing Windows 2000 Server on Dell 2850's that were less than half the price.

    Then in 2007 we went the VMWare route on Dell 2950's where the price actually went up to $23K but we were getting dual sockets/8 cores and 32GB of RAM so they made the $40K servers we bought years before look like toys.

    Four years later we got R-710's that were dual socket/12 cores and 64GB of RAM and made the $23K 2950's look like clunkers, but the price was once again almost halved, at about $12K.

    Today we are looking at replacing the R-710's with the latest generation which will be even more cores and more RAM for about the same price.

    So to me, the prices don't seem ridiculous at all. The servers themselves now make up only a fraction of our hardware costs, with the expensive items being SAN storage. But that too is a lot cheaper. We are looking at going from our two SANs with 4Gb Fibre Channel connections to a single SAN with 10Gb Ethernet and more storage than the two old units combined... but still costing less than one of the old SANs did on its own. So storage prices are substantial, but less than half of what we paid in 2007, for more storage.

    The real costs in the environment are in software licensing, and I'm not talking about Microsoft or even VMware. Licensing those products is chump change compared to the enterprise software crooks... that's where the real costs are. The infrastructure of servers, storage and "plumbing" sorts of software like Windows Server and VMware is cheap in comparison.
  • mrdude - Tuesday, December 16, 2014 - link

    Great article, Johan

    I think the last page really describes why so many people, myself included, feel that ARM servers/vendors have a very good chance of entrenching themselves in the market. Server workloads are more complex and varied today than they ever have been in the past and it isn't high volume either: the Facebook example is a good one. These companies buy hardware by the truckload and can benefit immensely from customization that Intel may not have on offer.

    To add to that, what wasn't mentioned is that ARM, due to its 'license everything' business model, provides these same companies the opportunity to buy ready-made bits of uArch and, with a significantly smaller investment, build their own as-close-to-ideal SoC/CPU/coprocessor.

    Competition is a great thing for everyone.
  • JohanAnandtech - Tuesday, December 16, 2014 - link

    True. Although it seems that only AMD really went for the "license almost everything" model of ARM.
  • mrdude - Tuesday, December 16, 2014 - link

    Yep. And that's likely due to the budget/timing constraints. I think they were gunning for the 'first to market' branding but they couldn't meet their own timelines. Something of a trend with that company. I'm curious as to why we haven't heard a peep from AMD or partners regarding performance or perf-per-watt. Iirc, we were supposed to see Seattle boards in Q3 of 2014.

    I also feel like ARM isn't going to stop at the interconnect. There's still quite a bit of opportunity for them to expand in this market.
  • cjs150 - Tuesday, December 16, 2014 - link

    Ultimately, my interest in servers is limited, but I would like a simple home server that ties together all my computers, NAS, tablets and the other bits and bobs that a geek household has.
  • witeken - Tuesday, December 16, 2014 - link

    Anyone interested in Intel's data center strategy can watch Diane Bryant's recent presentation (including PDF): http://intelstudios.edgesuite.net/im/2014/live_im.... The Q&A from 2013 also has some comments about ARM servers: http://intelstudios.edgesuite.net/im/2013/live_im....
  • Kevin G - Tuesday, December 16, 2014 - link

    "Now combine this with the fact that Windows on Alpha was available." - Except that Windows NT was available for Alpha. There was a beta for Windows 2000 in both 32 bit and 64 bit flavors for the curious.

    I disagree with the reasoning about why Intel beat the RISC players. Two of the big players were defeated by corporate politics: Alpha and PA-RISC were under the control of HP, which was planning to migrate to Itanium. That leaves POWER, SPARC, MIPS and Intel's own Itanium architecture at the turn of the millennium. Of those, POWER and SPARC are still around as they continue to execute. So the only two victims that can be claimed by better execution are MIPS and Intel's own Itanium.

    While IBM and Oracle are still executing on hardware, the Unix market as a whole has shrunk. The software side isn't as strong as it used to be. Linux has risen and proven itself a strong competitor to the traditional Unix distributions. Open source software has emerged to fill many of the roles Unix platforms were used for. Furthermore, many of these applications, like Hadoop and Cassandra, are designed to be clustered and to tolerate node failures. There is no need to spend extra money on big iron hardware if the software doesn't need that level of RAS for uptime. The generally lower cost of Linux and open source software (though they're not free, due to the need for support), combined with further tightening of budgets during the great recession, has made many businesses reconsider their Unix platforms.
  • JohanAnandtech - Tuesday, December 16, 2014 - link

    My main argument was that the RISC market was fragmented, and not comparable to what the x86 market is now (Intel dominating with a very large software base).

    While I agree with many of your points, you cannot say that SPARC is not a victim. In the '90s, Sun had a very broad product range, from entry-level workstations to high-end servers. The same is true for the POWER CPUs.

  • Kevin G - Wednesday, December 17, 2014 - link

    The RISC market was fragmented on both hardware and software. The greatest example of this would be HP, which had HP-UX, Tru64, OpenVMS and NonStop as operating systems and tried to get them all migrated to a common hardware platform: Itanium. How each platform handled backwards compatibility with its RISC roots differed (and Tru64 was killed in favor of HP-UX).

    The midrange RISC workstation suffered the same fate as the dual-socket x86 workstation market: good enough hardware and software existed for less. The race to 1 GHz between Intel and AMD cut out the performance advantage the RISC platforms carried. Not that the RISC chips didn't improve in performance, but vendors never took steps to improve their prices. Windows 2000 and the rise of Linux early in the 2000s gave x86 a software price advantage too, while offering good enough reliability.

    Sun's hardware business did suffer some horrible delays, which helped lead the company into Oracle's acquisition. Notable was the Rock chip, which featured out-of-order execution but also out-of-order instruction retirement. Sun was never able to validate any prototype silicon and ship it to customers.
  • jhh - Tuesday, December 16, 2014 - link

    SPARC and POWER have had trouble keeping up with Moore's law, as neither sold enough to amortize the R&D needed to push out innovation at the same rate as Intel. As Moore's law comes to an end, this will stop being a unique Intel advantage; it just might be too late for both of them. One can see the pressure on IBM in their opening up of the POWER architecture in ways similar to ARM. Both POWER and SPARC have to keep porting drivers to their Unix implementations, while device manufacturers either write drivers for Linux or don't get volume. I just can't see either POWER or SPARC being cost effective over the long run. And when others see the same thing, they aren't going to be excited about porting application software to those platforms.

    ARM needs a good performance/power and performance/cost ratio to get people excited about buying something other than Intel. They are certainly getting enough volume from the low end to fund investment in high-end parts. So far, though, I'm not excited enough to recommend any ARM proof of concept.
  • Kevin G - Wednesday, December 17, 2014 - link

    IBM has always had a licensing model similar to ARM's with PowerPC cores. The only thing really new here is that IBM is licensing out its flagship POWER chip in the same manner. Despite Intel having a process advantage, IBM was able to keep up in performance (the 45 nm, 8-core POWER7 was generally faster than the 32 nm, 10-core Westmere-EX). There will always be a market for top performance, but you are correct that sustaining a business on just that customer base is unwise.

    IBM does realize that its software licensing model to subsidize hardware R&D was not sustainable. So while you can't run AIX on it, you can get a POWER8 box for less than $3k now.
  • OreoCookie - Wednesday, December 17, 2014 - link

    Really, just $3000? Wow, how times have changed, I remember ~12 years ago that a single Alpha CPU cost that much (the department I was working for had a workstation fail, fortunately under warranty, because otherwise they would have had to pay for 2 new CPUs and new RAM worth about 15,000 German Marks).
  • Ratman6161 - Wednesday, December 17, 2014 - link

    "The general lower cost of Linux and open source software" While it's true that the cost of a Linux OS including support is lower than an equivalent Windows OS, in the larger scheme of things the cost of Windows and even VMware becomes little more than background noise in the total cost of operations. Try pricing out an Oracle DB for example and you find that the cost of that software dwarfs the price of the hardware it's running on as well as whatever the OS is costing. Ditto with most "enterprise software".
  • lefty2 - Tuesday, December 16, 2014 - link

    Intel has another big advantage over ARM which everyone seems to have forgotten about: software compatibility. 64-bit ARM server software is still a work in progress, and the stuff being worked on at the moment is open source. Once that's finished, you still have to convince clients to convert their proprietary software to ARM.
  • JohanAnandtech - Tuesday, December 16, 2014 - link

    Don't you think that the open source software that has been or is being ported is enough? Apache/PHP/MySQL, Memcached and Hadoop... that is a massive server market. And there is little stopping Microsoft from investing in ARM software too. Only VMware might be a bit tricky, but I don't think software is the problem.
  • Kevin G - Wednesday, December 17, 2014 - link

    Actually, VMware has said some less than flattering things about ARM. Xen is the main hypervisor on ARM for the moment.
  • goop666666 - Thursday, December 25, 2014 - link

    Yeah, recompiling is so very hard. Essentially what you're saying is that Intel is for legacy systems and poorly written software. That is a large enough market, but it doesn't apply to hyperscale deployments, which are the future.
  • gostan - Tuesday, December 16, 2014 - link

    Great article by Johan, as always.

    But the argument is moot; we have heard this tune before.

    The hardware might be cheaper. The power bill might be cheaper. Wait until you see the software maintenance cost: custom software needs 'custom' pricing.

    Besides, ARM has no cutting-edge fab process to back them.
  • JohanAnandtech - Tuesday, December 16, 2014 - link

    You do not need expensive software to create a server market these days. Just look how many webservers are running the LAMP stack.
  • JohanAnandtech - Tuesday, December 16, 2014 - link

    Did you miss this page?
    http://www.anandtech.com/show/8776/arm-challinging...

    The software ecosystem is developing...there is no indication that this will stop soon.
  • Kevin G - Wednesday, December 17, 2014 - link

    The LAMP stack is there and can easily give ARM a foothold. Scaling up, they'll need vendors like Oracle to port key applications. ARM will also need to enhance their RAS to be production capable with that software.
  • Samus - Tuesday, December 16, 2014 - link

    Johan,

    You need to review the compatibility of the Xeon E3s. They actually work in just about any Intel 8- or 9-series board. I have an E3-1230v3 in an Asus ITX H87 board in the PC I'm currently typing on.

    A C220 chipset is NOT required.
  • JohanAnandtech - Tuesday, December 16, 2014 - link

    you are right :-).

    By "Xeon E3 needs C220" I meant that you need to add that part to calculate the power consumption per node. And the E3 needs it to support ECC RAM.
  • eanazag - Tuesday, December 16, 2014 - link

    Ubuntu's ARM version of the OS is a big deal. I believe MS kept dragging along with supporting RT precisely to have something to port to the server side. Even though RT was mostly a dud at first, it could still make sense and sell in a server config.

    I'm waiting for AMD to finally sell their ARM chip in the channel so I can throw together a mobo with it. If it has 10GbE I would be all over it.
  • rootheday3 - Tuesday, December 16, 2014 - link

    Intel also has the Rangeley SoC, which includes a crypto block for comms usage.
  • wintermute000 - Tuesday, December 16, 2014 - link

    "What if I need massive amounts of memory but moderate processing power? The Xeon E3 only supports 32GB."

    Thousands of techs labbing away at home nod sagely in agreement. Right now our choices are to scale horizontally or live with loud, jet-engine ex-enterprise gear, because I can't get 64GB of RAM into a whitebox.
  • wintermute000 - Tuesday, December 16, 2014 - link

    Clarification: a whitebox that I can afford i.e. not a Xeon E5. lol
  • beginner99 - Wednesday, December 17, 2014 - link

    What kind of servers use tons of RAM and little processing power? Right, memcached and similar stuff (a tiny sketch of what such a box does all day follows at the end of this comment). But let's be honest: that is still a niche within the total server market. Most servers are just standard multipurpose servers running some company-internal, low-traffic (web) application. They don't need memcached. Memcached is for huge internet deployments, and that in itself is a niche.

    I work at a 10,000-person company and I would bet you $1000 we have zero memcached servers. I don't know for certain; I'm just going by the lack of performance in core apps and the questionable competency of our IT.
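    For reference, the workload pattern is roughly the cache-aside sketch below. The helper names (cache_get, cache_set, db_query) are hypothetical stand-ins for a memcached client and a database call, not a real API; real code would use a client library such as libmemcached. The point is that almost all the work is keeping hot objects in RAM, which is why these boxes want lots of memory and comparatively little CPU.

        #include <stdio.h>

        /* Hypothetical stand-ins: a one-entry "cache" living in RAM and a
         * pretend database call. A real deployment would talk to memcached
         * over the network instead. */
        static char cached_value[64];

        static int cache_get(const char *key, char *out, size_t len) {
            (void)key;
            if (cached_value[0] == '\0') return 0;        /* miss */
            snprintf(out, len, "%s", cached_value);       /* hit: straight from RAM */
            return 1;
        }
        static void cache_set(const char *key, const char *val) {
            (void)key;
            snprintf(cached_value, sizeof cached_value, "%s", val);
        }
        static void db_query(const char *key, char *out, size_t len) {
            snprintf(out, len, "row-for-%s", key);        /* pretend expensive DB hit */
        }

        /* Cache-aside lookup: serve from RAM when possible, hit the database
         * only on a miss, then cache the result so later requests stay cheap. */
        static void lookup(const char *key, char *out, size_t len) {
            if (cache_get(key, out, len)) return;
            db_query(key, out, len);
            cache_set(key, out);
        }

        int main(void) {
            char buf[64];
            lookup("user:42", buf, sizeof buf);  /* first request: DB, then cached */
            lookup("user:42", buf, sizeof buf);  /* second request: pure RAM */
            printf("%s\n", buf);
            return 0;
        }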
  • bobbozzo - Wednesday, December 17, 2014 - link

    VM servers.
    And ZFS-filesystem storage (NAS/SAN) servers. e.g. FreeNAS. Add much more RAM if using DeDup.
  • aryonoco - Wednesday, December 17, 2014 - link

    I just wanted to thank you Johan De Gelas for this very insightful and interesting article.

    Hugely enjoyed reading it and your thoughts on the subject.

    Good to see high quality content continue to be published at AT now that Anand has left.
  • JohanAnandtech - Wednesday, December 17, 2014 - link

    aryonoco, Jann, thanks for letting me know. A good motivation to always push a bit harder and make sure I don't let my readers down :-).
  • jann5s - Wednesday, December 17, 2014 - link

    Thank you Johan, for writing this very interesting article!
  • przemo_li - Wednesday, December 17, 2014 - link

    A very well-written walk-through of current and possible CPU/SoC parts.

    Will there be a similar piece on software?
    ARM (embedded) folks aren't famous for quality drivers/code.

    It must change, so it will change. But for now such an overview would be great!
  • bobbozzo - Wednesday, December 17, 2014 - link

    Typo on page2:
    "(4 Slots x 8 DIMMs)" - change 8 to 8GB

    Thanks
  • bobbozzo - Wednesday, December 17, 2014 - link

    and page 4:
    "you will be able to choose between SoCs that have 100 Gbit Ethernet and 10GBit Ethernet."

    should 100 be 40?
  • bobbozzo - Wednesday, December 17, 2014 - link

    Page 12:
    "Most of them are the usual IPSec, TPC offloading engines"

    Should that be TCP?

    Also, are there still accelerators for antivirus engines and IDS/IPS search? (There were some back in 2005.)

    Thanks
  • bobbozzo - Wednesday, December 17, 2014 - link

    ...
    I guess that's what the RegEx would be useful for.

    However, not all IDS/IPS / A/V patterns use RegEx, and there are other means of acceleration.
  • eanazag - Wednesday, December 17, 2014 - link

    Welcome back Johan.

    Glad to see you're still writing here. Good stuff in the article.
  • JKflipflop98 - Wednesday, December 17, 2014 - link

    I simply don't get where this whole "microserver" thing is coming from.

    By the time you cluster up enough ARM processors to match the processing power of an Intel/AMD solution, you're burning just as much power and have spent just as much money as you would have by using x86 in the first place. Except now you have to use some janky middleware solution because all your software is x86 and you're running on ARM cores.
  • patrickjchase - Thursday, December 18, 2014 - link

    It's been a while since I worked on this stuff, but I don't think that the statement that "CCN is very comparable to the ring bus found inside all Xeon processors beginning with Sandy Bridge" is quite right.

    CCN
  • patrickjchase - Thursday, December 18, 2014 - link

    Finishing my comment:

    CCN
  • stefstef - Wednesday, December 17, 2014 - link

    The idea of having an energy-efficient design will certainly pay off. Nvidia and Samsung showed that having, say, 4 cores plus a fifth core dedicated to energy management can be a good low-cost solution. I don't often read the articles at AnandTech because they are usually boring, although I am happy to place a comment here. ARM rules in certain fields, but in a couple of years it will only be because Intel allows them to; every company needs room to live in. Another American breakfast for the Chinese, who will get their share of the processor market as well.
  • milli - Thursday, December 18, 2014 - link

    I don't understand how ARM is suddenly going to succeed where MIPS and PowerPC have already tried and failed. I feel that ARM is more of a market trend than anything else (in the server market).
    Even the current ARM server SoC manufacturers have already tried to penetrate the server market: Cavium and Broadcom already had custom-designed low-power MIPS SoCs, and IBM, Applied Micro and Freescale have had a bunch of low-power PowerPC options.
    By the time any of these products is released, Intel is going to have a better alternative thanks to their process advantage. No IT manager is going to convince the corporate fat-cats that a huge overhaul is needed. Same story all over again.
  • yuhong - Friday, December 19, 2014 - link

    "Unfortunately their 16GB DIMMs will only work with the Atom C2000, leading to the weird situation that the Atom C2000 supports more memory than the more powerful Xeon E3."
    I think the reason is software related. More precisely, the Memory Reference Code (MRC).
  • intiims - Tuesday, December 30, 2014 - link

    If You want to know something about External Hard Drives visit http://www.hddmag.com/
  • adrian1987 - Monday, January 5, 2015 - link

    Hi. The Haswell core can actually have a max IPC of 6 instructions per cycle using macro-fusion, not 5 as listed here (assuming the code is ideal). It has 2 execution units that can handle fused ALU+branch instructions. Source: http://www.anandtech.com/show/6355/intels-haswell-...
  • aaronjoue - Tuesday, April 7, 2015 - link

    Here is the real micro server. http://www.ambedded.com.tw/pt_list.php?CM_ID=20140...
    http://wiki.ambedded.com.tw/index.php?title=MicroS...
    7 & 21 nodes in a chassis
    It supports Ubuntu and open source Ceph.
