Original Link: https://www.anandtech.com/show/10304/nvidia-announces-the-geforce-gtx-1080-1070
NVIDIA Announces the GeForce GTX 1000 Series: GTX 1080 and GTX 1070 Arrive In May & June
by Ryan Smith on May 7, 2016 3:25 AM EST

After many months of speculation – and more than a little anticipation – at an event in Austin, Texas this evening, NVIDIA announced the next generation of GeForce video cards, the GeForce 1000 series. Powered by the company’s new Pascal architecture and built on TSMC’s 16nm FinFET process, NVIDIA is promising that the 1000 series will set a new high bar for performance and efficiency while also offering a collection of new features to further set it apart from its predecessors.
The first two cards out of the gate will be NVIDIA’s high-end cards, the GTX 1080 and GTX 1070. 1080p jokes aside, these are straightforward successors in the product stack naming scheme NVIDIA has used since 2010, with the GTX 1080 representing their new high-end card while the GTX 1070 is a cheaper card meant to hit the enthusiast sweet spot. These cards will be launching over the next month, with the GTX 1080 hitting the market on May 27th and the GTX 1070 two weeks later, on June 10th. They will be priced at $599 and $379 respectively.
NVIDIA GPU Specification Comparison

| | GTX 1080 | GTX 1070 | GTX 980 | GTX 780 |
|---|---|---|---|---|
| CUDA Cores | 2560 | (Fewer) | 2048 | 2304 |
| Texture Units | 160? | (How many?) | 128 | 192 |
| ROPs | 64 | (Good question) | 64 | 48 |
| Core Clock | 1607MHz | (Slower) | 1126MHz | 863MHz |
| Boost Clock | 1733MHz | (Again) | 1216MHz | 900MHz |
| TFLOPs (FMA) | 9 TFLOPs | 6.5 TFLOPs | 5 TFLOPs | 4.1 TFLOPs |
| Memory Clock | 10Gbps GDDR5X | GDDR5 | 7Gbps GDDR5 | 6Gbps GDDR5 |
| Memory Bus Width | 256-bit | ? | 256-bit | 256-bit |
| VRAM | 8GB | 8GB | 4GB | 3GB |
| FP64 | ? | ? | 1/32 FP32 | 1/24 FP32 |
| TDP | 180W | ? | 165W | 250W |
| GPU | "GP104" | "GP104" | GM204 | GK110 |
| Transistor Count | 7.2B | 7.2B | 5.2B | 7.1B |
| Manufacturing Process | TSMC 16nm | TSMC 16nm | TSMC 28nm | TSMC 28nm |
| Launch Date | 05/27/2016 | 06/10/2016 | 09/18/2014 | 05/23/2013 |
| Launch Price | MSRP: $599, Founders Edition: $699 | MSRP: $379, Founders Edition: $449 | $549 | $649 |
The Pascal Architecture & "GP104"
While NVIDIA’s public presentation this evening was not a deep dive technical presentation – and as such there are still a number of blanks to fill in later – we still have a bunch of information to dive into. First and foremost, let’s talk about the GPU. NVIDIA has not given the GPU’s name, but in their presentation they did confirm that it has 7.2B transistors, making it clear that this is a new Pascal GPU and not a cut-down version of the 15.3B transistor GP100. For the moment I am assuming this is GP104 – following NVIDIA’s traditional naming scheme – but this is subject to confirmation.
GP104, as you’d expect, is being produced by long-time partner TSMC on their 16nm FinFET process, making it the first consumer GPU built on a 14/16nm process to ship when it hits the streets later this month. It goes without saying that the GPU industry has been stuck on 28nm for much longer than usual – GK104 to GP104 will be 4 years and 2 months – so this is a long-awaited opportunity to jump to a smaller process node and reap the power and performance benefits that entails.
At this time we don’t know GP104’s die size, but with fewer than half the transistors of GP100 it’s a safe bet that the die size is in line with previous first-run 104 GPUs such as GK104 and GF104, which were 294mm2 and 332mm2 respectively. GM204, by comparison, was a larger GPU, coming later in the lifetime of the 28nm process. What this means is that NVIDIA has spent some of their 16nm savings on bringing the die size back down as compared to GM204. 16nm is still relatively early in its life, and unlike Tesla P100, where customers will pay very high prices for early access to a lower-yielding chip, for consumer parts NVIDIA has needed to be more conservative. On that note, at this time it is not clear whether the GP104 in GTX 1080 is a fully enabled GP104 GPU, though traditionally it has been.
Pascal of course isn’t just NVIDIA’s 16nm architecture; it is also their successor to Maxwell. We know from NVIDIA’s previous Tesla P100 announcement that the company has opted to pay special attention to CUDA core efficiency with Pascal, improving the throughput of the architecture as opposed to adding a significant number of additional CUDA cores. As a result, like previous architectural shifts, core-for-core comparisons are going to be tricky thanks to GP104 hitting a higher percentage of its theoretical throughput in practice. I’m still waiting on more technical details on GP104, and whether certain improvements like the larger register file have been carried over. Otherwise NVIDIA is promoting a 2x efficiency increase for Pascal versus Maxwell for gaming purposes.
On top of that 2x efficiency improvement – and how NVIDIA gets to their claimed 3x efficiency improvement over GTX Titan X – Pascal implements some new features that developers can tap for improved performance. Key among these is Simultaneous Multi-Projection, a technology that allows Pascal to create up to 16 viewports. NVIDIA is promoting the tech as being especially well suited for more efficient implementations of VR lens distortion correction (Lens Matched Shading) and rendering the same scene twice in VR – once for each eye – in a single pass (Single Pass Stereo), along with easier correction for distortion when using a multi-monitor setup.
Truth be told, at this time I do not know how Simultaneous Multi-Projection differs from NVIDIA’s previously announced features for Maxwell: Multi-Projection Acceleration and Multi-Res Shading. NVIDIA’s description of SMP is very similar to these, as are the underlying algorithmic tweaks of reprojecting geometry and shading different parts of a frame at different resolutions.
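For those trying to picture the benefit, here is a minimal conceptual sketch – plain Python, not NVIDIA's API or actual implementation – of why processing geometry once and projecting it to multiple viewports is cheaper than re-running the whole geometry pipeline per view. Only the 16-viewport cap reflects NVIDIA's stated figure; the function names and placeholder transform are purely illustrative.

```python
# Conceptual sketch of single-pass multi-projection. Illustrative only; this is
# not NVIDIA's API, just the idea of reusing one geometry pass for many views.

import numpy as np

MAX_VIEWPORTS = 16  # Pascal's stated Simultaneous Multi-Projection viewport count

def process_geometry(vertices):
    """Stand-in for the expensive per-vertex work (skinning, tessellation, etc.)."""
    return vertices * 1.0  # placeholder transform

def naive_multiview(vertices, view_proj_matrices):
    # Traditional approach: the full geometry pipeline runs once per view.
    outputs = []
    for m in view_proj_matrices:
        processed = process_geometry(vertices)        # repeated for every view
        outputs.append(processed @ m.T)
    return outputs

def single_pass_multiview(vertices, view_proj_matrices):
    # SMP-style approach: geometry is processed once, then replicated to
    # up to 16 viewports, each with its own projection.
    assert len(view_proj_matrices) <= MAX_VIEWPORTS
    processed = process_geometry(vertices)            # done once
    return [processed @ m.T for m in view_proj_matrices]

# Example: stereo VR rendering is simply the two-view case (one projection per eye).
verts = np.random.rand(1000, 4)
eyes = [np.eye(4), np.eye(4)]
left, right = single_pass_multiview(verts, eyes)
```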
A final new GPU-level feature on GP104 and the new GTX 1000 series cards is GPU Boost 3.0. Information about this is limited, but NVIDIA is saying that it supports more advanced overclocking functionality than has been present over the last couple of years with GPU Boost 2.0.
Meanwhile from a feature standpoint, NVIDIA has confirmed that GP104 cards will be DirectX feature level FL12_1 capable. Though as FL12_1 allows multiple tiers of certain sub-features, it’s not clear at this time whether Pascal supports higher tiers of those features. On the display side of matters, GP104’s display controller will support the newer DisplayPort 1.3 and 1.4 standards. Officially the cards are being called “DisplayPort 1.2 Certified, DisplayPort 1.3/1.4 Ready,” the distinction being that the latter is not currently certified, though I suspect the DP 1.3/1.4 certification process may itself not be ready yet.
Turning over from the GPU to memory, whereas GP100 was the first NVIDIA GPU to support HBM2, GP104 will be the first GPU to support GDDR5X. An updated version of the GDDR5 standard originally announced by Micron and more recently approved by JEDEC, GDDR5X iterates on the technology to improve memory bandwidth. At a basic level, GDDR5X increases the overall throughput of the memory bus by moving from double-pumping (DDR) to quad-pumping (QDR), pairing this change with a larger memory prefetch to feed the faster memory bus, all while avoiding a more painful/difficult increase in the memory core clock.
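As a back-of-the-envelope illustration of what quad-pumping buys (the interface clock below is an arbitrary example, not the actual GDDR5/GDDR5X clock domains): at the same clock, four transfers per cycle doubles the per-pin data rate relative to two.

```python
# Illustrative math only: per-pin data rate scales with the number of data
# transfers per interface clock. The clock value is an example, not the real
# GDDR5/GDDR5X clock domains.

def per_pin_rate_gbps(interface_clock_ghz, transfers_per_clock):
    return interface_clock_ghz * transfers_per_clock

clock = 2.5  # GHz, illustrative
print(per_pin_rate_gbps(clock, 2))  # double-pumped (DDR):  5.0 Gbps/pin
print(per_pin_rate_gbps(clock, 4))  # quad-pumped (QDR):   10.0 Gbps/pin
```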
GDDR5X will allow NVIDIA (and other vendors) to increase memory bandwidth while following more traditional GDDR5-style memory designs, rather than more complex designs like HBM. At the same time this also allows a single GPU to easily support a newer memory standard and the older GDDR5 standard at the same time, as the latter will remain a cheaper memory technology for some time. NVIDIA will be putting this feature to good use in enabling the GTX 1080 to run GDDR5X, while the GTX 1070 will run GDDR5.
Finally, not touched upon in NVIDIA’s initial presentation is that GP104 will use a newer version of NVIDIA’s SLI technology. Dubbed SLI HB (High Bandwidth), SLI HB will double the amount of bandwidth available between paired cards. At the same time this change will require a new SLI HB bridge, presumably to be compliant with the tighter signal integrity requirements a higher bandwidth link would require. NVIDIA is producing the new rigid bridge with their usual flair for industrial design, and so far I have not heard anything about the older non-rigid bridge making a return, in which case buyers will need bridges specifically sized for the slot arrangement of their board.
GeForce GTX 1080
With Pascal/GP104 particulars out of the way, let’s talk about the cards themselves. The GTX 1080 – “the new king,” as NVIDIA affectionately calls it – will be their new flagship card. NVIDIA is promoting it as having better performance than both GTX 980 SLI and GTX Titan X. NVIDIA’s own performance marketing slides put the average at around 65% faster than GTX 980 and 20-25% faster than GTX Titan X/980 Ti, which is relatively consistent for a new NVIDIA GPU. Of course, real-world performance remains to be seen, and will vary from game to game.
NVIDIA GTX x80 Specification Comparison

| | GTX 1080 | GTX 1070 | GTX 980 | GTX 780 |
|---|---|---|---|---|
| CUDA Cores | 2560 | (Fewer) | 2048 | 2304 |
| Texture Units | 160? | (How many?) | 128 | 192 |
| ROPs | 64 | (Good question) | 64 | 48 |
| Core Clock | 1607MHz | (Slower) | 1126MHz | 863MHz |
| Boost Clock | 1733MHz | (Again) | 1216MHz | 900MHz |
| TFLOPs (FMA) | 9 TFLOPs | 6.5 TFLOPs | 5 TFLOPs | 4.1 TFLOPs |
| Memory Clock | 10Gbps GDDR5X | GDDR5 | 7Gbps GDDR5 | 6Gbps GDDR5 |
| Memory Bus Width | 256-bit | ? | 256-bit | 256-bit |
| VRAM | 8GB | 8GB | 4GB | 3GB |
| FP64 | ? | ? | 1/32 FP32 | 1/24 FP32 |
| TDP | 180W | ? | 165W | 250W |
| GPU | "GP104" | "GP104" | GM204 | GK110 |
| Transistor Count | 7.2B | 7.2B | 5.2B | 7.1B |
| Manufacturing Process | TSMC 16nm | TSMC 16nm | TSMC 28nm | TSMC 28nm |
| Launch Date | 05/27/2016 | 06/10/2016 | 09/18/2014 | 05/23/2013 |
| Launch Price | MSRP: $599, Founders Edition: $699 | MSRP: $379, Founders Edition: $449 | $549 | $649 |
The GTX 1080 will ship with the most powerful GP104 implementation – we don’t yet have confirmation of whether it’s fully enabled – with 2560 of Pascal’s higher efficiency CUDA cores. And while I’m awaiting confirmation of this as well, I believe it’s a very safe bet that the card will feature 160 texture units and 64 ROPs, given what is known about the architecture.
Along with Pascal’s architectural efficiency gains, the other big contributor to GTX 1080’s performance will come from its high clockspeed. The card will ship with a base clock of 1607MHz and a boost clock of 1733MHz. This is a significant 43% boost in operating frequency over GTX 980, and it will be interesting to hear how much of this is from the jump to 16nm and how much of this is from any kind of specific optimization to hit higher clockspeeds. Meanwhile NVIDIA is touting that GTX 1080 will be a solid overclocker, demoing it running at 2114MHz with its reference air cooler in their presentation.
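As a quick sanity check on the rated numbers, using only the figures in the table above and counting an FMA as two floating point operations per CUDA core per clock:

```python
# Sanity-checking the rated figures from the spec table above.
# FMA = 2 FLOPs per CUDA core per clock.

def fp32_tflops(cuda_cores, boost_clock_mhz):
    return cuda_cores * 2 * boost_clock_mhz * 1e6 / 1e12

gtx_1080 = fp32_tflops(2560, 1733)   # marketed as "9 TFLOPs"
gtx_980  = fp32_tflops(2048, 1216)   # marketed as "5 TFLOPs"

print(round(gtx_1080, 2), round(gtx_980, 2))  # 8.87 4.98
print(round(1607 / 1126 - 1, 3))              # 0.427 -> the ~43% base clock increase
```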
GTX 1080 will be paired with 8GB of the new GDDR5X memory, on a 256-bit memory bus. The shift to GDDR5X allows NVIDIA to run GTX 1080 at 10Gbps/pin, giving the card a total of 320GB/sec of memory bandwidth. Interestingly, this is still a bit less memory bandwidth than GTX 980 Ti (336GB/sec), a reminder that even with GDDR5X, memory bandwidth improvements continue to be outpaced by GPU throughput improvements, so memory bandwidth efficiency remains paramount.
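The bandwidth figures fall straight out of per-pin data rate and bus width; a quick check (the GTX 980 Ti's 384-bit bus is not in the table above, but is that card's known configuration):

```python
# Memory bandwidth = per-pin data rate * bus width, converted from bits to bytes.

def mem_bandwidth_gbs(data_rate_gbps, bus_width_bits):
    return data_rate_gbps * bus_width_bits / 8

gtx_1080   = mem_bandwidth_gbs(10, 256)  # GDDR5X at 10Gbps/pin, 256-bit bus
gtx_980_ti = mem_bandwidth_gbs(7, 384)   # GDDR5 at 7Gbps/pin, 384-bit bus

print(gtx_1080, gtx_980_ti)  # 320.0 336.0 GB/s
```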
I am admittedly a bit surprised that GTX 1080’s GDDR5X is only clocked at 10Gbps, and not something faster. Micron’s current chips are rated for up to 12Gbps, and the standard itself is meant to go up to 14Gbps. So I am curious whether this is NVIDIA holding back so that they have some future headroom, whether this is a chip supply issue, or if perhaps GP104 simply can’t do 12Gbps at this time. At the same time it will be interesting to see whether the fact that NVIDIA can currently only source GDDR5X from a single supplier (Micron) has any impact, as GDDR5 was always multi-sourced. Micron for their part has previously announced that their GDDR5X production line wouldn’t reach volume production until the summer, which is a potential indicator that GDDR5X supplies will be limited.
On the power front, NVIDIA has given the GTX 1080 a 180W TDP rating. This is 15W higher than the GTX 980, so the GTX x80 line is drifting back up a bit in TDP, but overall NVIDIA is still trying to keep the GTX x80 lineup as mid-power cards, as this worked well for them with GTX 680/980. Meanwhile, thanks to Pascal and 16nm, this is a much lower TDP than GTX 980 Ti while offering higher performance. We’ll look at card design a bit more in a moment, but I do want to note that NVIDIA is using a single 8-pin PCIe power connector here, as opposed to two 6-pin connectors, which is becoming increasingly common.
Looking at the design of the card itself, the GTX 1080 retains a lot of the signature style of NVIDIA’s other high-end reference cards; however, after using the same industrial design since the original GTX Titan in 2013, NVIDIA has rolled out a new industrial design for the GTX 1000 series. The new design is far more (tri)angular than the squared-off GTX Titan cooler. Otherwise limited information is available about this design and whether the change improves cooling/noise in some fashion, or if this is part of NVIDIA’s overall fascination with triangles. One thing that has not changed is size: this is a 10.5” double-wide card, the same as all of the cards that used the previous design.
Industrial design aside, NVIDIA confirmed that the GTX 1080 will come with a vapor chamber cooler; GTX 980 did not, as NVIDIA didn’t believe a vapor chamber was necessary on a 165W card. Given NVIDIA’s overclocking promises for this card, that likely has something to do with the change, as a vapor chamber should prove very capable on a 180W card.
Meanwhile it looks like the DVI port will live to see another day. Other than upgrading the underlying display controller to support the newer iterations of the DisplayPort standard, NVIDIA has not changed the actual port configuration since GTX 980 Ti. So we’re looking at three DisplayPorts, one HDMI port, and one DL-DVI-D port. This does mean that built-in analog (VGA) capability is gone, though, as NVIDIA has switched from DVI-I to the pure-digital DVI-D.
As mentioned elsewhere, the GTX 1080’s power input has evolved a bit over GTX 980. Rather than two 6-pin connectors, it’s now a single 8-pin connector feeding the 180W card. This is also the first card to feature NVIDIA’s SLI HB connectors, which will require new SLI bridges. Though at this point our concerns about the long-term suitability of AFR stand.
For pricing and availability, NVIDIA has announced that the card will be available on May 27th. There will be two versions of the card, the base/MSRP card at $599, and then a more expensive Founders Edition card at $699. At the base level this is a slight price increase over the GTX 980, which launched at $549. Information on the differences between these versions is limited, but based on NVIDIA’s press release it would appear that only the Founders Edition card will ship with NVIDIA’s full reference design, cooler and all. Meanwhile the base cards will feature custom designs from NVIDIA’s partners. NVIDIA’s press release was also very careful to only attach the May 27th launch date to the Founders Edition cards.
Consequently, at this point it’s unclear whether the $599 card will be available on the 27th. In previous generations all of the initial launch cards were full reference cards, and if that is the case here, then all of the cards on launch day will be the $699 cards; we are looking to get confirmation of this situation ASAP. Otherwise, I expect that the base cards will forgo the vapor chamber cooler and embrace the dual/triple fan open air coolers that most of NVIDIA’s partners use. Though with any luck these cards will use the reference PCB, at least for the early runs.
As a final observation, if the new NVIDIA reference design and cooler will only be available with the Founders Edition card, then customers who prefer the NVIDIA reference card will be seeing a greater de facto price increase. In that case we’re looking at $699 versus $549 for a launch window reference GTX 980.
GTX 1070
NVIDIA GTX x70 Specification Comparison

| | GTX 1070 | GTX 970 | GTX 770 | GTX 670 |
|---|---|---|---|---|
| CUDA Cores | (Fewer than GTX 1080) | 1664 | 1536 | 1344 |
| Texture Units | (How many?) | 104 | 128 | 112 |
| ROPs | (Good question) | 56 | 32 | 32 |
| Core Clock | (Slower) | 1050MHz | 1046MHz | 915MHz |
| Boost Clock | (Again) | 1178MHz | 1085MHz | 980MHz |
| TFLOPs (FMA) | 6.5 TFLOPs | 3.9 TFLOPs | 3.3 TFLOPs | 2.6 TFLOPs |
| Memory Clock | ? GDDR5 | 7Gbps GDDR5 | 7Gbps GDDR5 | 6Gbps GDDR5 |
| Memory Bus Width | ? | 256-bit | 256-bit | 256-bit |
| VRAM | 8GB | 4GB | 2GB | 2GB |
| FP64 | ? | 1/32 FP32 | 1/24 FP32 | 1/24 FP32 |
| TDP | ? | 145W | 230W | 170W |
| GPU | "GP104" | GM204 | GK104 | GK104 |
| Transistor Count | 7.2B | 5.2B | 3.5B | 3.5B |
| Manufacturing Process | TSMC 16nm | TSMC 28nm | TSMC 28nm | TSMC 28nm |
| Launch Date | 06/10/2016 | 09/18/2014 | 05/30/2013 | 05/10/2012 |
| Launch Price | MSRP: $379, Founders Edition: $449 | $329 | $399 | $399 |
Finally, below the GTX 1080 we have its cheaper sibling, the GTX 1070. Information on this card is more limited. We know it’s rated for 6.5 TFLOPs – 2.5 TFLOPs (28%) lower than GTX 1080 – but NVIDIA has not published specific CUDA core counts or GPU clockspeeds. Looking just at rated TFLOPs, the GTX 1070 offers 72% of the GTX 1080’s rated performance, a slightly wider gap than in the GTX 900 series, where the GTX 970 was rated for 79% of the GTX 980’s performance.
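The ratios check out against the published figures; the GTX 970/980 numbers below are derived from their listed CUDA core counts and boost clocks in the tables above:

```python
# Comparing the rated FP32 throughput gaps between the x70 and x80 cards.

def fp32_tflops(cuda_cores, boost_clock_mhz):
    return cuda_cores * 2 * boost_clock_mhz / 1e6  # FMA = 2 FLOPs per core per clock

ratio_1000 = 6.5 / 9.0                                           # rated figures
ratio_900  = fp32_tflops(1664, 1178) / fp32_tflops(2048, 1216)   # GTX 970 vs GTX 980

print(round(ratio_1000, 2), round(ratio_900, 2))  # 0.72 0.79
```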
On the memory front, the card will be paired with more common GDDR5. Like the GTX 1080 there’s 8GB of VRAM, but specific clockspeeds are unknown at this time. Also unknown is the card’s TDP, though lower than GTX 1080 is a safe assumption.
Like GTX 1080, GTX 1070 will be offered in a base/MSRP version and a Founders Edition version. These will be $379 and $449 respectively – $50 over the GTX 970’s launch price of $329 – with the Founders Edition card employing the new NVIDIA industrial design. I’ll also quickly note that it remains to be seen whether the industrial design reuse will include GTX 1080’s vapor chamber, or if NVIDIA will swap out the cooling apparatus under the hood.
The GTX 1070 will be the latter of the two new Pascal cards, hitting the streets on June 10th. Like the GTX 1080, NVIDIA’s press release is very careful to only attach that date to the Founders Edition version, so we’re still waiting on confirmation over whether the base card will be available on the 10th as well.