Almost 7 years ago to this day, AMD formally announced their “small die strategy.” Embarked upon in the aftermath of the company’s struggles with the Radeon HD 2900 XT, AMD opted against continuing to try to beat NVIDIA at their own game. Rather than chase NVIDIA to absurd die sizes and the risks that come with them, the company would focus on smaller GPUs for the larger sub-$300 market. Meanwhile, to compete in the high-end markets, AMD would instead turn to multi-GPU technology – CrossFire – to offer even better performance at a total cost competitive with NVIDIA’s flagship cards.

AMD’s early efforts were highly successful; though they couldn’t take the crown from NVIDIA, products like the Radeon HD 4870 and Radeon HD 5870 were massive spoilers, offering a great deal of NVIDIA’s flagship performance with smaller GPUs that were cheaper to manufacture and drew less power. Officially the small die strategy was put to rest earlier this decade, but informally it has continued to guide AMD GPU designs for quite some time. At 438mm², Hawaii was AMD’s largest die as of 2013, yet it was still more than 100mm² smaller than NVIDIA’s flagship GK110.

AMD's 2013 Flagship: Radeon R9 290X, Powered By Hawaii

Catching up to the present, this month marks an important occasion for AMD with the launch of their new flagship GPU, Fiji, and the flagship video card based on it, the Radeon R9 Fury X. For AMD the launch of Fiji is not just another high-end GPU launch (their 3rd on the 28nm process); it marks a significant shift for the company. Fiji is first and foremost a performance play, but it also introduces new memory technology, new power optimization technologies, and more. In short, it may be the last of the 28nm GPUs, but boy if it isn’t among the most important.

With the recent launch of the Fiji GPU I bring up the small die strategy not just because Fiji is anything but small – AMD has gone right to the reticle limit – but because it highlights how the GPU market has changed in the last seven years and how AMD has needed to respond. Since 2008 NVIDIA has continued to push big dies, but they’ve gotten smarter about it as well, producing increasingly efficient GPUs that have made it harder for a scrappy AMD to undercut NVIDIA. At the same time alternate frame rendering (AFR), the cornerstone of CrossFire and SLI, has become increasingly problematic as rendering techniques get less and less AFR-friendly, making dual-GPU cards less viable than they once were. And finally, on the business side of matters, AMD’s discrete GPU market share is lower than it has been in over a decade, with AMD’s combined GPU and APU sales now estimated to be below NVIDIA’s GPU sales alone.
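
As a quick illustration of why modern rendering techniques undermine AFR, consider any effect that reads back the previous frame’s output, such as temporal anti-aliasing or motion blur. The toy model below is ours (hypothetical names, not any engine’s API), but it captures the core problem: frame N cannot start until frame N-1, rendered on the other GPU, has finished and been copied across, serializing the two GPUs and eroding AFR’s near-2x scaling.

    from dataclasses import dataclass

    @dataclass
    class Frame:
        index: int
        gpu: int      # which GPU rendered this frame (0 or 1)
        data: float   # stand-in for the rendered image

    def render_plain(index: int, gpu: int) -> Frame:
        # No inter-frame reads: even and odd frames can proceed in parallel.
        return Frame(index, gpu, data=float(index))

    def render_temporal(index: int, gpu: int, prev: Frame) -> Frame:
        # Reads the previous frame (temporal AA, motion blur, etc.).
        # prev was rendered on the *other* GPU, so it must be synced and
        # copied over before this frame can start - the dependency that
        # defeats alternate frame rendering.
        return Frame(index, gpu, data=float(index) + 0.5 * prev.data)

    # Classic AFR: GPU0 takes even frames, GPU1 takes odd frames.
    frames = [render_plain(0, gpu=0)]
    for i in range(1, 6):
        frames.append(render_temporal(i, gpu=i % 2, prev=frames[-1]))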


AMD's Fiji GPU

Which is not to say I’m looking to paint a poor picture of the company – AMD is nothing if not the perennial underdog who constantly manages to surprise us with what they can do with less – but this context is important in understanding why AMD stands where they do today, and why Fiji is in many ways such a monumental GPU for the company. The small die strategy is truly dead, and now AMD is gunning for NVIDIA’s flagship with the biggest gaming GPU they could possibly make. The goal? To recapture the performance crown that has been in NVIDIA’s hands for far too long, and to offer a flagship card of their own that doesn’t play second fiddle to NVIDIA’s.

To get there AMD needs to face down several challenges. There is no getting around the fact that NVIDIA’s Maxwell 2 GPUs are very well executed, very performant, and very efficient, and that between GM204 and GM200, AMD has their work cut out for them. Performance, power consumption, form factors: these all matter, and they are all issues AMD is facing head-on with Fiji and the R9 Fury X.

At the same time, however, the playing field has never been more equal. We’re now in the 4th year of TSMC’s 28nm process, with a good chunk of another year still to go. AMD and NVIDIA have had an unprecedented amount of time to tweak their wares around what is now a very mature process, and that means any first-mover or early-aggression advantages are gone. With the 28nm process’s reign at the top drawing to a close, NVIDIA and AMD now have to rely on their engineers and their architectures to see who can build the best GPU against the very limits of 28nm.

Overall, with GPU manufacturing technology having stagnated on the 28nm node, it’s very hard to talk about GPUs without talking about manufacturing. As much as the market has forced an evolution in AMD’s business practices, there is no escaping the fact that the stall on the process side has had an unprecedented effect on the evolution of discrete GPUs from a technology and architectural standpoint. So for AMD, Fiji not only represents a shift towards large GPUs that can compete with NVIDIA’s best, but also the extensive efforts AMD has gone through to continue improving performance in the face of manufacturing limitations.

And with that, we dive into today’s review of the Radeon R9 Fury X. Launching this month is AMD’s new flagship card, backed by the full force of the Fiji GPU.

AMD GPU Specification Comparison

|                       | AMD Radeon R9 Fury X | AMD Radeon R9 Fury | AMD Radeon R9 290X | AMD Radeon R9 290 |
|-----------------------|----------------------|--------------------|--------------------|-------------------|
| Stream Processors     | 4096                 | (Fewer)            | 2816               | 2560              |
| Texture Units         | 256                  | (How much)         | 176                | 160               |
| ROPs                  | 64                   | (Depends)          | 64                 | 64                |
| Boost Clock           | 1050MHz              | (On Yields)        | 1000MHz            | 947MHz            |
| Memory Clock          | 1Gbps HBM            | (Memory Too)       | 5Gbps GDDR5        | 5Gbps GDDR5       |
| Memory Bus Width      | 4096-bit             | 4096-bit           | 512-bit            | 512-bit           |
| VRAM                  | 4GB                  | 4GB                | 4GB                | 4GB               |
| FP64                  | 1/16                 | 1/16               | 1/8                | 1/8               |
| TrueAudio             | Y                    | Y                  | Y                  | Y                 |
| Transistor Count      | 8.9B                 | 8.9B               | 6.2B               | 6.2B              |
| Typical Board Power   | 275W                 | (High)             | 250W               | 250W              |
| Manufacturing Process | TSMC 28nm            | TSMC 28nm          | TSMC 28nm          | TSMC 28nm         |
| Architecture          | GCN 1.2              | GCN 1.2            | GCN 1.1            | GCN 1.1           |
| GPU                   | Fiji                 | Fiji               | Hawaii             | Hawaii            |
| Launch Date           | 06/24/15             | 07/14/15           | 10/24/13           | 11/05/13          |
| Launch Price          | $649                 | $549               | $549               | $399              |
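
One way to read the memory rows: peak theoretical bandwidth is simply the bus width (in bytes) multiplied by the per-pin data rate. A quick sketch of that arithmetic (the helper function is ours, purely for illustration):

    def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
        """Peak theoretical memory bandwidth in GB/s."""
        return bus_width_bits / 8 * data_rate_gbps

    # R9 Fury X: 4096-bit HBM at 1Gbps per pin -> 512.0 GB/s
    print(peak_bandwidth_gbs(4096, 1.0))
    # R9 290X: 512-bit GDDR5 at 5Gbps per pin -> 320.0 GB/s
    print(peak_bandwidth_gbs(512, 5.0))

Despite HBM’s far lower per-pin rate, the massive 4096-bit bus gives the Fury X a 60% bandwidth advantage over Hawaii.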

With 4096 SPs coupled with the first implementation of High Bandwidth Memory, the R9 Fury X aims for the top. Over the coming pages we’ll get into a deeper discussion of the architectural and other features found in the card, but the important point to take away right now is that it packs a lot of shaders, even more memory bandwidth, and is meant to offer AMD’s best performance yet. R9 Fury X will eventually be joined by 3 other Fiji-based parts in the coming months, but this month it’s all about AMD’s flagship card.

The R9 Fury X is launching at $649, which happens to be the same price as the card’s primary competition, the GeForce GTX 980 Ti. Launched at the end of May, the GTX 980 Ti is essentially a preemptive attack on the R9 Fury X from NVIDIA, offering performance close enough to NVIDIA’s GTX Titan X flagship that the difference is arguably immaterial. For AMD this means that while beating GTX Titan X would be nice, they really only need a win against the GTX 980 Ti, and as we’ll see the Fury X will make a good run at it, making this the closest AMD has come to an NVIDIA flagship card in quite some time.

Finally, from a market perspective, AMD will be going after a few different categories with the R9 Fury X. As competition for the GTX 980 Ti, AMD is focusing on 4K resolution gaming, based on a combination of factors: 4K monitors are becoming increasingly affordable, 4K FreeSync monitors are finally available, and relative to NVIDIA’s wares, AMD fares best at 4K. Expect to see AMD also significantly play up the VR possibilities of the R9 Fury X, though the major VR headset, the Oculus Rift, won’t ship until Q1 of 2016. Finally, it has now been over three years since the launch of the original Radeon HD 7970, so for buyers looking to upgrade from AMD’s first 28nm card, Fury X is in a good position to offer the kind of generational performance improvement that typically justifies an upgrade.

Fiji’s Architecture: The Grandest of GCN 1.2
Comments

  • D. Lister - Friday, July 3, 2015 - link

    Ryan, to us, the readers, AT is just one of several sources of information, and to us, the result of your review sample is just one of the results of many other review samples. As a journalist, one would expect you to have done at least some investigation regarding the "overclockers' dream" claim, posted your numbers and left the conclusion making to those whose own money is actually going to be spent on this product - us, the customers.

    I totally understand if you couldn't because of ill health, but, with all due respect, saying that you couldn't review a review sample because there weren't enough review samples to find some scientifically accurate mean performance number seems, at least to me, like a reason with less than stellar validity.
  • silverblue - Friday, July 3, 2015 - link

    I can understand some of the criticisms posted here, but let's remember that this is a free site. Additionally, I doubt there were many Fury X samples sent out. KitGuru certainly didn't get one (*titter*). Finally, we've already established that Fury X has practically sold out everywhere, so AT would have needed to purchase a Fury X AFTER release and BEFORE they went out of stock in order to satisfy the questions about sample quality and pump whine.
  • nagi603 - Thursday, July 2, 2015 - link

    "if you absolutely must have the lowest load noise possible from a reference card, the R9 Fury X should easily impress you."
    Or, you know, mod the hell out of your card. I have a 290X in a very quiet room, and can't hear it, thanks to the Accelero Xtreme IV I bolted onto it. It does look monstrously big, but still, not even the Fury X can touch that lack of system noise.
  • looncraz - Thursday, July 2, 2015 - link

    The 5870 was the fastest GPU when it was released and the 290X was the fastest GPU when it was released. This article makes it sound like AMD has been unable to keep up at all, but they've been trading blows. nVidia simply has had the means to counter effectively.

    The 290X beat nVidia's $1,000 Titan. nVidia had to quickly respond with a 780Ti which undercut their top dog. nVidia then had to release the 980Ti at a seriously low price in order to compete with the, then unreleased, Fury X, and had to give the GPU 95% of the performance of their $1,000 Titan X.

    nVidia is barely keeping ahead of AMD in performance, but was well ahead in efficiency. AMD just about brought that to parity with THEIR HBM tech, which nVidia will also be using.

    Oh, anyone know the last time nVidia actually innovated with their GPUs? GSync doesn't count, that is an ages-old idea they simply had enough clout to see implemented, and PhysX doesn't count, since they simply purchased the company who created it.
  • tviceman - Thursday, July 2, 2015 - link

    The 5870 was the fastest for 7 months, but it wasn't because it beat Nvidia's competition against it. Nvidia's competition against it was many months late, and when it finally came out was clearly faster. The 7970 was the fastest for 10 weeks, then was either slower or traded blows with the GTX 680. The 290X traded blows with Titan but was not clearly faster, and was then eclipsed by the 780 Ti shortly after launch.

    All in all, since GTX 480 came out in March of 2010, Nvidia has solidly held the single GPU performance crown. Sometimes by a small margin (GTX 680 launch vs. HD 7970), sometimes by a massive margin (GTX Titan vs. 7970Ghz), but besides a 10 week stint, Nvidia has been in the lead for over the past 5 years.
  • kn00tcn - Thursday, July 2, 2015 - link

    check reviews with newer drivers, the 7970 has gained more than the 680, and it's sometimes similar with the 290X vs the 780/780Ti depending on the game (it's a mess to dig up info, some of it comes from the Kepler complaints)

    speaking of drivers, the 390X using a different driver set than the 290X in reviews, that sure makes launch reviews pointless...
  • chizow - Thursday, July 2, 2015 - link

    I see AMD fanboys/proponents say this often, so I'll ask you.

    Is performance at the time you purchase and in the near future more important to you? Or are you buying for unrealized potential that may only be unlocked when you are ready to upgrade those cards again?

    But I guess that is a fundamental difference and one of the main reasons I prefer Nvidia. I'd much rather buy something knowing I'm going to get Day 1 drivers, timely updates, feature support as advertised when I buy, over the constant promise and long delays between significant updates and feature gaps.
  • silverblue - Friday, July 3, 2015 - link

    Good point, however NVIDIA has made large gains in drivers in the past, so there is definitely performance left on the table for them as well. I think the issue here is that NVIDIA has seemed - to the casual observer - to be less interested in delivering performance improvements for anything prior to Maxwell, perhaps as a method of pushing people to buy their new products. Of course, this wouldn't cause you any issues considering you're already on Maxwell 2.0, but what about the guy who bought a 680 which hasn't aged so well? Not everybody can afford a new card every generation, let alone two top end cards.
  • chizow - Sunday, July 5, 2015 - link

    Again, it fundamentally speaks to Nvidia designing hardware and using their transistor budget to meet the demands of games that will be relevant during the course of that card's useful life.

    Meanwhile, AMD may focus on archs that provide greater longevity, but really, who cares if it was always running a deficit for most of its useful life just to catch up and take the lead when you're running settings in new games that are borderline unplayable to begin with?

    Some examples for GCN vs. Kepler would be AMD's focus on compute, where they always had a lead over Nvidia in games like Dirt that started using Global Illumination, while Kepler focused on geometry and tessellation, which allowed it to beat AMD in most relevant games of the DX9 to DX11 transition era.

    Now, Nvidia presses its advantage as its compute has caught up with and exceeded GCN in Maxwell, while maintaining their advantage with geometry and tessellation, so we see in these games GCN and Kepler both fall behind. That's just called progress. The guy who thinks his 680 should still keep pace with a new gen architecture meant to take advantage of features in new gen games probably just needs to look back at history to understand: new gen archs are always going to run new gen games better than older archs.
  • chizow - Thursday, July 2, 2015 - link

    +1, exactly, except for a few momentary anomalies, Nvidia has held the single GPU performance crown and won every generation since G80. AMD did their best with the small die strategy for as long as they could, but they quickly learned they'd never get there against Nvidia's monster 500+mm² chips, so they went big die as well. Fiji was a good effort, but as we can see, it fell short and may be the last grand effort we see from AMD.
