
Original Link: https://www.anandtech.com/show/1100
AMD Opteron Coverage - Part 3: The First Servers Arrive
by Anand Lal Shimpi on April 23, 2003 9:41 PM EST - Posted in IT Computing
In the past 24 hours quite a bit has changed for AMD; the company went from being viewed as having no competitiveness left to being a clearly defined leader and competitor in the Enterprise market.
As we've shown in Part 1 of our Opteron Coverage, AMD has a core architecture that is perfectly tailored for the Enterprise market. The K8 architecture, the Opteron processor and x86-64, all rolled into one package AMD likes to call AMD64, were designed from the start to be the Enterprise customer's dream. Although we will eventually see this AMD64 platform on the desktop, there's no getting around the fact that we're dealing with a very high-end product.
The extremely high-end nature of the AMD64 platform is what will unfortunately limit its success in the mobile market (especially in its ability to compete with Intel's Pentium M), but this is the price that must be paid in order to ensure competitiveness in the biggest money maker in the industry - the Enterprise market.
AMD's Opteron launch was met with much more enthusiasm and a much higher degree of support than the Athlon MP launch back in June of 2001. There are more motherboard manufacturers producing Opteron solutions (MSI, Newisys, Rioworks, and Tyan among others) than there were Athlon MP solutions (Tyan was the one and only launch partner in the summer of 2001).
There are also more chipset vendors than just AMD that will be supporting the platform; designing a multiprocessor chipset used to be a difficult task that required millions of R&D dollars, but with the extremely scalable (and low cost) AMD64 architecture, even VIA is offering their K8T400M as an Opteron chipset for the Enterprise market (not to mention NVIDIA's nForce3 Pro).
Today we're taking a look at three 1U servers based on AMD's Opteron, two of which are more conventional designs reminiscent of the Athlon MP servers we've seen and one that you'd think was an Intel server if you didn't look under the hood. What do we mean by that? Keep reading to find out…
Target Market #1: High Performance Computing
There is one area where the Athlon MP was significantly more successful than Intel's Xeon - the HPC (High Performance Computing) market. This market caters to the needs of the science community, or any group of users that requires a lot of processing power and very little more.
As we've seen in the past, AMD's architectures are highly optimized for floating-point intensive applications that haven't been SSE/SSE2 optimized; it turns out that the majority of HPC applications fall directly into this category and thus benefit significantly from Athlon MP, as well as the Opteron.
With the Opteron's SSE2 support, performance is even more competitive in those scenarios that don't depend on raw x87 FP performance. To give you an example of both the Opteron's competitiveness in scientific workloads as well as SSE2 optimized situations, here are two benchmarks from ScienceMark 2.0:
[ScienceMark 2.0 benchmark graphs: Opteron vs. Xeon]
You can see that the Opteron is clearly competitive with Intel's Xeon; however, the picture changes slightly if we toss the latest Pentium 4 with an 800MHz FSB into the mix:
[ScienceMark 2.0 benchmark graphs: Opteron vs. Xeon and 800MHz FSB Pentium 4]
So while AMD can enjoy a performance advantage for now, Intel is a much more formidable competitor than they were when the Xeon was first launched. Once the Xeon gets a larger L2 cache, a faster FSB (initially 667MHz) and a higher bandwidth memory subsystem (all of which are on the roadmap), AMD will have to work on getting clock speeds up in order to remain competitive on their home turf.
Now that we've established AMD's previous and current strengths in the HPC market, let's look at two servers that tailor to this market.
Appro Up to Bat with the 1100H
For this review, Appro submitted one of their servers that's clearly aimed at the HPC market - the 1100H.
From the front of the 1U chassis it's very evident that you're not meant to be swapping drives in and out of the system, as there are no removable drive bays; in fact, there isn't even a floppy drive in the server, just a slim CD-ROM drive. While we don't necessarily agree with (or understand) the motivation behind excluding the floppy drive, we can see the cost savings in not implementing removable drive bays.
The 1100H chassis is actually very similar to the 1124, which is what the majority of AnandTech's servers are based on. For those of you who aren't familiar with the design, Appro's 1124 was one of the first 1U Athlon MP servers to hit the market. Appro made very few changes to the base chassis design in order to accommodate the 2-way Opteron configuration, as you'll soon see.
Externally, other than a slightly modified look, the 1100H is no different from the 1124. The server does have two front mounted USB 2.0 ports, but other than the CD-ROM drive there are only power and reset switches to be found at the front of the chassis.
The rear is home to one serial port, a VGA connector for taking advantage of the on-board ATI Rage XL video, two Gigabit Ethernet ports, two USB 2.0 ports and the usual PS/2 keyboard and mouse ports.
Ease of accessibility has always been a strength of Appro, and the tradition continues with the 1100H; two thumbscrews secure one of three panels covering the majority of the 1U server. Removing this panel reveals the Rioworks motherboard Appro chose to use with the 1100H.
Chances are you've never heard of Rioworks the motherboard maker, and we wouldn't blame you. Not only is Rioworks far from a widely known manufacturer, this is also their first try at an AMD based platform - luckily the Opteron is a brand new architecture, so any lessons learned with the Athlon MP wouldn't have done Rioworks much good anyway. We actually reviewed a couple of Rioworks motherboards back in the days of the massive Slot-2 Pentium II Xeon processors and never really had any bad experiences with them, but with all of the options out there, why did Appro choose Rioworks?
Considering Tyan was everyone's best friend during the Athlon MP days, we found it peculiar that Appro didn't use Tyan's solution in the 1100H. It turns out that no one was using Tyan's design simply because of availability; Tyan expects that Opterons won't be available in large quantities until June, so its board won't be available in quantity until then either. It remains to be seen whether Tyan's estimate is correct; the Opteron 244 will not be available in quantity until June, however the rest of the line should be available immediately.
So we know why Tyan is out, but what about MSI? The biggest problem with MSI's board design is the unique (read: strange) setup of memory slots. Remember that each Opteron CPU has its own memory controller, and thus for every CPU you have to have a dedicated set of memory slots on the motherboard.
Most motherboard manufacturers have gone with either two DIMM slots per CPU (the bare minimum - remember the Opteron has a 144-bit wide memory bus that requires 72-bit ECC DIMMs to be installed in pairs) or four DIMM slots per CPU. MSI, however, has outfitted their board with 4 slots for CPU0 but only 2 slots for CPU1. Although this type of configuration can work, it makes very little sense, which is why a good deal of server manufacturers (e.g. Appro & Einux) have stayed away from MSI's design.
This leaves us with Rioworks; interestingly enough, the motherboard design is quite good, which just reinforces the fact that you don't have to be a popular name to produce a well thought-out design.
The board features 4 PCI-X slots and two regular 32-bit PCI slots, but in the 1100H all but one of these slots go to waste. Although Appro didn't bundle a riser card with our 1100H (a huge oversight on their part - customers will definitely demand that one be included), a riser card will allow you to use the last PCI-X slot. Given that the motherboard features dual Broadcom Gigabit Ethernet controllers and an integrated Serial ATA controller, the only real need for expansion would be a SCSI RAID card.
Beneath this heatsink is the on-board 4-port SATA controller
...that drives these four ports
When talking about expansion on the 1100H, you have to keep in mind that your options are extremely limited. With no removable drive bays, adding new hard drives or even moving to SCSI isn't as simple a task as it should be. Although the two thumbscrews make it easy to gain access to the motherboard, the 1100H isn't really designed to be upgraded with ease beyond installing faster CPUs or more memory. We'll get to why in a minute.
The CPUs and half of the memory banks are covered by a piece of thick paper, shaped into a tunnel, that is designed to channel air from the blower compartment adjacent to the motherboard, over the CPUs and out the rear of the chassis. We have seen better solutions in the past, such as the plastic ducting from CCSI, as the paper approach isn't exactly top notch. We have to give it to Appro here though: it does get the job done, although it is difficult to gain access to the CPUs without tearing the paper (it doesn't help that the paper is perforated at the corners).
Beneath the paper tunnel you'll find two heatsinks covering the two Opteron CPUs that ship with the server by default. Our sample featured Opteron 240 processors (1.40GHz), but you can configure the machine however you would like.
Appro also configured our server with eight 512MB DDR333 memory modules from ATP; we haven't had much experience with their DIMMs, although they did not give us any trouble during our tests. (We have been using Corsair's modules without any issues in all of our Opteron tests thus far.) Remember that Opterons require registered ECC DIMMs and will not work with regular unbuffered DDR SDRAM; you also must be certain to install DIMMs in pairs in order to match the 144-bit wide Opteron memory bus. Since AMD did not make the two memory channels independent of one another, it is impossible to install just a single DIMM and have it work.
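To put the paired-DIMM requirement in perspective, here's a quick back-of-the-envelope calculation (our own arithmetic, not an official AMD figure) of the peak per-CPU bandwidth a fully populated 128-bit data path delivers with DDR333:

```python
# Sanity check of the per-CPU memory bandwidth implied by the figures above:
# DDR333 on a 144-bit bus, i.e. two 72-bit ECC DIMMs read in lockstep
# (128 data bits + 16 ECC bits).

transfers_per_second = 333_000_000   # DDR333 = 333 million transfers/sec
data_bits_per_dimm = 64              # each 72-bit ECC DIMM carries 64 data bits
dimms_per_cpu = 2                    # DIMMs must be installed in pairs

data_bytes_per_transfer = (data_bits_per_dimm * dimms_per_cpu) // 8  # 16 bytes
peak_bandwidth_gb = transfers_per_second * data_bytes_per_transfer / 1e9

print(f"Peak per-CPU bandwidth: {peak_bandwidth_gb:.2f} GB/s")  # ~5.33 GB/s
```

With only one DIMM installed there is nothing feeding the other half of that 128-bit data path, which is why a lone DIMM simply won't work on these boards.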
If you lift open the next panel on the 1100H you'll expose the four blowers that cool the system. The blowers pull in cool air from the front of the case, thus thoroughly cooling the hard drive(s), and blow it through the power supply and motherboard compartments in order to keep everything running nicely. As you might expect, this configuration isn't optimized for low noise emissions, but then again, if you've ever been to a datacenter you'll know that the whirring of thousands of fans isn't uncommon. The blowers are easily accessible and easily replaceable, with a warning light at the front of the chassis indicating a failure.
The third panel is the only one that requires a screwdriver to remove, as it isn't something you'll be removing too often. Beneath this frontmost panel you'll find the CD-ROM drive, as well as a glimpse of the IBM Deskstar 180GXP hard drive that's installed in the 1100H by default. Unfortunately, you have to remove quite a bit in order to get to that hard drive, so you'd better hope it doesn't fail.
The difficult to access hard drive was by far our biggest complaint about the 1100H; we understand that the server isn't geared towards high-availability environments like web or database servers, however quick access to the one component in the server with the highest failure rate should be a requirement for all Opteron servers - regardless of their intended use.
Einux - Similar, but Different
If you haven't heard of Einux before, one reason to know them is that they are one of ASUS' launch partners for the nForce3 Pro 150 motherboard. Einux also happens to be one of the three manufacturers that provided us with a 1U server for this solutions overview/roundup.
Einux provided us with a member of their Excelera64 line of Opteron servers, the A1740. From the outside you can already see some differences between the A1740 and Appro's 1100H, mainly in that the Einux system features four removable hard drive bays in addition to CD-ROM and Floppy drives.
We mentioned before that the Einux solution is more of an HPC platform than a server, despite the classification of the A1740 as an Enterprise level server. The reason we treat the A1740 as more of an HPC solution is that it lacks the manageability features that all truly enterprise-class servers should have. With the Athlon MP we could let a lack of manageability features slide, but if the Opteron is to be taken seriously we need to see some serious manageability options from vendors. Later in this article we'll give you an idea of exactly what we're talking about; for now, just keep in mind that even though the A1740 can be used as an enterprise server, from a feature standpoint it is more like the Appro 1100H with hot-swappable drive bays. Einux does list support for their ServerController management technology on all of their servers, however nothing of that sort was provided with our evaluation sample.
The A1740 uses the same Rioworks motherboard that we found in the Appro 1100H, but Einux actually took advantage of the on-board 4-port Serial ATA controller to handle all four hot-swappable drive bays. Our evaluation sample came with a 120GB Seagate Barracuda Serial ATA V drive, which happens to be the only Serial ATA drive on the market with any real availability.
The Serial ATA connector makes a hot-swappable drive configuration fairly simple to implement on the A1740; RAID functionality is offered through the controller's BIOS.
The Serial ATA connector makes hot-swappable drive bays a piece of cake to implement
Since the A1740 uses the same motherboard as the Appro 1100H, the internal layout is virtually identical; thankfully Einux did ship the server with a PCI-X riser card. Getting access to the internals of the Einux server isn't as easy as the Appro, but it is still fairly simple; two screws hold the only two panels of the server in place.
With the same motherboard as the Appro server, looking at the motherboard compartment left us with a feeling of déjà vu. The only major difference between the two servers is the cooling; while both use blowers, the Einux solution uses slightly better made blowers that are unfortunately more difficult to replace.
Cool air is drawn in primarily from ventilation holes at the top of the chassis, rather than only from the front drive bays, which does reduce the amount of cooling that the hard drives get.
The cold air is channeled over the CPUs using clear pieces of plastic; this is a more elegant solution than Appro's because the tops of the heatsinks are not covered with a paper insulator. It is debatable which solution offers better cooling performance, although both had no problems contending with the needs of our Opteron 244s. In the Einux case, the CPU farthest from the blowers (CPU1) did seem to receive slightly worse cooling than in the Appro. The difference in cooling performance could be due to slower/lower-volume blowers, or the fact that without a complete tunnel to move the air around, a good deal of it is wasted on cooling other parts of the case.
Target Market #2: Enterprise-class Servers
We mentioned High Performance Computing as an area where the Opteron was in heavy demand, but as we saw in our Enterprise performance investigation, the Opteron makes for an excellent database server and an even more competitive web server.
The database and web server markets demand one thing more than performance and cost - availability. Being able to guarantee uptime and minimizing the amount of downtime when the unfortunate does occur are paramount to the success of a server in these markets. Unfortunately, in the past, none of AMD's partners have been able to deliver solutions worthy of the title: Enterprise-class.
What's the level that AMD's partners should be setting their sights on? The level of Intel's Enterprise Platforms & Solutions Division (EPSD). For example, the servers that come out of Intel's EPSD have features like dynamically adjusting fans that speed up and slow down depending on the cooling needs of the server; if one fan fails, not only is an error sent to the administrator, but the other fans in the system spin up to compensate for the loss of cooling. Intel performs fire tests on their EPSD products where they generate a fire within the chassis and see if it spreads to other systems in a rack; the dynamically adjusting fans actually end up putting out the fire in a number of cases because they spin up to compensate for the increase in heat inside the chassis. This is all in addition to the remote manageability of the server, and the highly integrated software package that ships with their solutions. When any component within an EPSD server fails, the administrator is notified of the failure as well as linked to a place to purchase replacement parts and given a video and diagrams explaining how to replace the part.
Remotely updating BIOSes and remote power on and shutdown are all basic requirements for this market; they are also requirements that are not thoroughly met by solutions like the Appro and Einux platforms we just illustrated.
AMD recognized the need for servers with better management features if they were to be taken seriously in the Enterprise world, and thus partnered with a company called Newisys to produce their flagship 1U Opteron design.
The Newisys Platform
The first thing to understand about Newisys is that they are not in the business of making servers. Although the server we received for evaluation is branded as a Newisys 2100 platform, Newisys does all of the design of the hardware in house but leaves manufacturing and packaging up to their partners. For example, Appro is selling a 1U server based off of the Newisys 2100 platform as an Appro branded server - the 120S.
The following companies will be selling the Newisys 2100 design:
- Angstrom Microsystems
- Appro
- M&A Technology
- Microway
- NTSI
- Promicro
- Racksaver
As we just mentioned, Newisys does all of the design in house, and that includes the motherboard.
Newisys also uses an even easier to install heatsink mechanism with the 2100:
As is the case with all of the Opteron servers we've reviewed, Broadcom Gigabit Ethernet is standard:
In order to save real estate on the motherboard, Newisys uses vertically mounted PCBs for the memory voltage regulators as you can see at the very back of this picture; the memory's VRM looks like it is stuck in a DIMM slot:
Newisys was the only manufacturer to use a non-ATI graphics controller:
If you've ever worked on a rack full of 1U servers, finding the one you want can be quite difficult. The Newisys 2100 design includes a button that will cause a LED to blink on both the front and back of the chassis, indicating which machine you're working on; this LED can also be activated remotely to direct a technician working on the rack to the appropriate server.
One thing you'll notice from the server's block diagram is the inclusion of a "Service Processor" that features a direct connection to all of the major chips on the motherboard, including both CPUs and the HT I/O Hub. What's so important about this Service Processor that it is in communication with everything on the motherboard? It is the Service Processor that separates the men from the boys, and the Newisys design from all of the other 1U Opteron servers available…
Newisys - Introducing the "Service Processor"
As we mentioned before, the key selling point of the Newisys design is its manageability features; manageability also happens to be what Newisys specializes in. We introduced the Service Processor (SP) on the previous page, and now we'll explain exactly what the SP does:
As you can see from the above diagram, the SP is physically a PowerPC microprocessor - a Motorola MPC855T chip to be specific. The SP is always powered as long as the server has power; it runs off of a dedicated 3.3V rail coming from the Newisys power supply.
The Service Processor
The SP has its own memory controller, SDRAM as well as flash memory, so it is effectively an autonomous server within the Newisys 2100 design. Although it has no VGA output, the SP's output is redirected to a small display at the front of the server; not to mention that the SP is remotely configurable.
The benefit of giving the SP its independence is that regardless of the state of the rest of the machine, there is a part of the server that is fully accessible, just as long as it has power. You can even access the SP if you have no CPUs and no memory in the system; remember the SP has its own memory and BIOS.
The SP has a bus that interfaces with the Opteron allowing it to get state information from the CPU(s), which is useful in diagnosing what caused a particular failure. All of this data is recorded and stored by the SP so that regardless of what happens to the server, the data is still retained.
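The idea of a log that outlives the host is simple enough to sketch; the record format and file name below are invented for illustration and have nothing to do with how the SP actually gathers CPU state over its dedicated bus or stores it in flash:

```python
import json
import time

# Illustrative only: an append-only event log of the sort the SP maintains.
LOG_PATH = "sp_events.jsonl"   # hypothetical; the real SP writes to its own flash

def record_event(source: str, detail: dict) -> None:
    """Append one timestamped record so it survives whatever happens to the host."""
    entry = {"time": time.time(), "source": source, "detail": detail}
    with open(LOG_PATH, "a") as log:
        log.write(json.dumps(entry) + "\n")

# Example: note which CPU reported a machine-check before the server went down.
record_event("cpu0", {"event": "machine_check", "halted": True})
```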
The SP's 10/100 Ethernet ports
Before we get to the real-world functionality of the SP, there's one other note to make; the processor has its own 10/100 Ethernet controller. The benefit of this Ethernet controller is that the SP architecture is truly independent from the rest of the server, as well as the rest of your network. You can connect the SP to a separate switch if you'd like to keep it isolated and prevent it from cluttering up your otherwise neat IT structure. Newisys did mention that a number of customers couldn't care less about this feature, but they decided to include it regardless; the low cost of adding a 10/100 Ethernet controller was obviously not a deterrent.
In the Target Market #2 section of this review, we mentioned Intel's dynamically adjusting fans as one feature that was lacking from Athlon MP and Opteron servers; Newisys takes care of this by allowing the Service Processor (SP) to control the speed of all CPU/chassis cooling fans (basically, all fans but the power supply fans). The fan speed is adjusted based on the demands of the server; those demands are determined by a number of probes set up in the system.
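To give you a rough idea of what this sort of closed-loop fan control looks like, here is a minimal sketch; the read_temperature() and set_fan_duty() helpers, the probe/fan layout and the temperature thresholds are all hypothetical placeholders, not Newisys' actual interfaces or values:

```python
import time

# Hypothetical stand-ins for the SP's probe and fan interfaces.
def read_temperature(probe_id: int) -> float:
    """Return a probe reading in degrees C (simulated here for illustration)."""
    return 52.0 + 3.0 * probe_id

def set_fan_duty(fan_id: int, duty_percent: int) -> None:
    """Set one CPU/chassis fan to the given PWM duty cycle (0-100%)."""
    print(f"fan {fan_id} -> {duty_percent}%")

PROBES = [0, 1, 2, 3]          # e.g. CPU0, CPU1, memory banks, ambient
FANS = [0, 1, 2, 3]            # every fan except the power supply's own
IDLE_C, MAX_C = 45.0, 70.0     # map this range onto a 30-100% duty cycle

def control_step() -> None:
    hottest = max(read_temperature(p) for p in PROBES)
    fraction = min(max((hottest - IDLE_C) / (MAX_C - IDLE_C), 0.0), 1.0)
    duty = int(30 + 70 * fraction)
    # If one fan fails, the hottest probe climbs and the surviving fans are
    # driven harder on the next pass to compensate for the lost airflow.
    for fan in FANS:
        set_fan_duty(fan, duty)

if __name__ == "__main__":
    for _ in range(3):          # a real controller would loop indefinitely
        control_step()
        time.sleep(2)
```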
The basic features you would expect from a systems management engine are all here in the 2100:
- Remote administration of server (shutdown OS, reboot OS, etc…)
- Remote BIOS configuration and updates
- Desired Power Level Adjustment (through voltage/clock speed adjustments, you can set how much power your server draws, also done remotely)
- System Failure Notification
For details on everything supported by the Newisys System Management architecture be sure to read the whitepaper from Newisys.
Newisys System Management in Use
The SP is controlled on the server via a set of buttons at the front of the chassis:
The controls pictured above allow you to navigate through the service menus displayed on the front-panel LCD. Your first task is to assign the SP's NIC an IP address:
You can also turn on the server, reboot or shut down the OS from this menu:
Newisys System Management in Use (Continued)
The remotely accessible web interface is a huge part of the Newisys System Management technology:
The interface will let you see the current state of the server:
Powering on the machine, getting into the BIOS and shutting it down are all accessible through the web:
The web interface also has an inventory of all the parts contained within the system, should you need to replace something:
You can also add new users to the web admin:
Finally, every option that's available through the web is also made available through SSH:
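Because everything in the web interface is also exposed over SSH, routine management tasks can be scripted. The sketch below assumes a hypothetical "power status" command, a made-up admin account and an SP at 10.0.0.50; consult Newisys' documentation for the real CLI syntax:

```python
import subprocess

SP_ADDRESS = "10.0.0.50"   # hypothetical address assigned to the SP's own NIC
SP_USER = "admin"          # hypothetical management account

def sp_command(command: str) -> str:
    """Run one command on the Service Processor over SSH and return its output."""
    result = subprocess.run(
        ["ssh", f"{SP_USER}@{SP_ADDRESS}", command],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

if __name__ == "__main__":
    # e.g. poll a rack of servers from a cron job and flag anything powered off;
    # "power status" is a placeholder, not Newisys' actual command name.
    print(sp_command("power status"))
```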
Final Words
Solutions like the HPC-targeted Appro 1100H or the HPC/web server Einux A1740 will continue to do well in markets that were previously served by the Athlon MP. The added performance offered by the Opteron processor, and the AMD64 platform in general, will ensure that companies like Appro and Einux will be able to move these boxes as quickly as possible.
What is needed, however, is more solutions based on designs like the Newisys 2100: fully manageable, high-performance Opteron designs that can go head-to-head with the best from Intel's EPSD. The amount of validation that is done within Intel's EPSD will be difficult to mimic, however it is absolutely necessary if the Opteron is to be a true success in the server market.
The need for Newisys-style boxes will continue to grow as AMD launches their 4-way Opteron server platforms. Customers that are running multimillion dollar databases will not put their faith in a server, regardless of how fast or how affordable, without proper validation tests and extremely flexible server management; in these cases, time is most definitely money and downtime is, well, clearly the opposite.
We did not publish a performance comparison between the three servers in this article simply because they perform identically, assuming you configure them with identical hardware.
For more information on the Opteron's performance in an enterprise environment be sure to check out Part 2 of our ongoing Opteron coverage, as well as Part 1 if you're curious as to exactly what goes on beneath the hood of AMD's latest architecture.