A Quick Note on Architecture & Features

With pages upon pages of architectural documents still to get through in only a few hours, I'm not going to have time for today's launch news to go in depth on the new features or the architecture. So I want to very briefly hit the high points of the major features, and provide answers to what are likely to be common questions.

Starting with the architecture itself, one of the biggest changes for RDNA is the width of a wavefront, the fundamental group of work. GCN in all of its iterations was 64 threads wide, meaning 64 threads were bundled together into a single wavefront for execution. RDNA drops this to a native 32 threads wide. At the same time, AMD has expanded the width of their SIMDs from 16 slots to 32 (aka SIMD32), meaning the size of a wavefront now matches the SIMD size. This is one of AMD’s key architectural efficiency changes, as it helps them keep their SIMD slots occupied more often. It also means that a wavefront can be passed through the SIMDs in a single cycle, instead of over 4 cycles on GCN parts.
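For the curious, the napkin math behind that claim is simple enough. Here's a minimal sketch of the issue-rate arithmetic using the figures above; this is not a cycle-accurate model of either architecture, just the ratio at work:

```python
# Sketch: cycles for one SIMD to push a full wavefront through one instruction,
# using the widths quoted above (GCN: 64-thread waves on 16-wide SIMDs;
# RDNA: 32-thread waves on 32-wide SIMDs). Real scheduling is more complex.

def cycles_to_issue(wavefront_width: int, simd_width: int) -> int:
    """Number of passes a SIMD needs to cover every thread in a wavefront."""
    return wavefront_width // simd_width

print("GCN: ", cycles_to_issue(64, 16), "cycles per wavefront")  # 4
print("RDNA:", cycles_to_issue(32, 32), "cycle per wavefront")   # 1
```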

In terms of compute, there are not any notable feature changes here as far as gaming is concerned. How things work under the hood has changed dramatically at points, but from the perspective of a programmer, there aren’t really any new math operations here that are going to turn things on their head. RDNA of course supports Rapid Packed Math (Fast FP16), so programmers who make use of FP16 will get to enjoy those performance benefits.
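To put rough numbers on that benefit, here's a quick sketch of the peak throughput math, using the RX 5700 XT's published specs (40 CUs of 64 ALUs each, ~1905MHz boost clock) as the example inputs. These are theoretical peak rates, not measured performance:

```python
# Rapid Packed Math throughput sketch. The CU count, ALU count, and boost
# clock below are the RX 5700 XT's published specs, used here as an example.

alus = 40 * 64           # 40 CUs x 64 stream processors each
boost_clock_ghz = 1.905  # boost clock
ops_per_fma = 2          # a fused multiply-add counts as two FLOPs

fp32_tflops = alus * ops_per_fma * boost_clock_ghz / 1000
fp16_tflops = fp32_tflops * 2  # packed math: two FP16 ops per FP32 lane

print(f"Peak FP32: {fp32_tflops:.2f} TFLOPS")  # ~9.75
print(f"Peak FP16: {fp16_tflops:.2f} TFLOPS")  # ~19.51
```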

With a single exception, there also aren't any new graphics features. Navi does not include any hardware ray tracing support, nor does it support variable rate pixel shading. AMD is aware of the demand for these features, and hardware ray tracing support is on their roadmap for RDNA 2 (the architecture formerly known as "Next Gen"). But none of that is present here.

The one exception to all of this is the primitive shader. Vega's most infamous feature is back, and better still, it's enabled this time. Vega's primitive shader, while fully functional in hardware, was difficult to extract a real-world performance boost from, and as a result AMD never exposed it on Vega. Navi's primitive shader is compiler controlled, and thanks to some hardware changes that make it more useful, it now makes sense for AMD to turn it on for gaming.

Unique among consumer parts, the new 5700 series cards support PCI Express 4.0. Designed to go hand-in-hand with AMD's Ryzen 3000 series CPUs, which are introducing support for the feature as well, PCIe 4.0 doubles the bus bandwidth available to the card, from ~16GB/sec to ~32GB/sec. The real-world performance implications of this are limited at this time, especially for a card in the 5700 series' performance segment. But there are situations where it will be useful, particularly on the content creation side of matters.
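For those wondering where those figures come from, here's the back-of-the-envelope math for an x16 link. This ignores packet and protocol overhead, so treat it as an upper bound rather than achievable throughput:

```python
# PCIe x16 bandwidth sketch: raw transfer rate x encoding efficiency x lanes.

def pcie_x16_gbytes_per_sec(gt_per_sec: float, encoding_efficiency: float) -> float:
    """Per-direction bandwidth of a 16-lane link, in GB/sec."""
    return gt_per_sec * encoding_efficiency * 16 / 8  # /8 converts bits to bytes

# PCIe 3.0: 8 GT/s per lane, 128b/130b encoding
print(f"PCIe 3.0 x16: {pcie_x16_gbytes_per_sec(8, 128/130):.1f} GB/sec")   # ~15.8
# PCIe 4.0: 16 GT/s per lane, same encoding
print(f"PCIe 4.0 x16: {pcie_x16_gbytes_per_sec(16, 128/130):.1f} GB/sec")  # ~31.5
```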

Finally, AMD has partially updated their display controller. I say "partially" because while it's technically an update, they aren't bringing much new to the table. Notably, HDMI 2.1 support isn't present, nor is the more limited HDMI 2.1 Variable Refresh Rate feature. Instead, AMD's display controller is a lot like Vega's: DisplayPort 1.4 and HDMI 2.0b, including support for AMD's proprietary Freesync-over-HDMI protocol. So AMD does have variable refresh capabilities for TVs, but it isn't the HDMI standard's own implementation.

The one notable change here is support for DisplayPort 1.4 Display Stream Compression. DSC, as the name implies, compresses the image going out to the monitor to reduce the amount of bandwidth needed. This is important going forward for 4K@144Hz displays, as DP1.4 itself doesn't provide enough bandwidth for them (leading to workarounds such as NVIDIA's 4:2:2 chroma subsampling on G-Sync HDR monitors). This is a feature we've talked about off and on for a while, and it's taken some time for the tech to be standardized and brought to a point where it's viable in a consumer product.
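As a quick sketch of why DSC matters here, consider the bandwidth math for an uncompressed 10-bit 4K@144Hz stream versus what DP1.4 can actually deliver. This simplified version ignores blanking-interval overhead, which only widens the gap:

```python
# Why 4K@144Hz needs DSC on DisplayPort 1.4: the uncompressed stream
# exceeds the link's usable payload bandwidth.

width, height, refresh_hz = 3840, 2160, 144
bits_per_pixel = 30  # 10 bits per channel, RGB (HDR)

video_gbps = width * height * refresh_hz * bits_per_pixel / 1e9
# DP1.4 HBR3: 8.1 Gbps/lane x 4 lanes, 8b/10b encoding -> 80% payload efficiency
dp14_payload_gbps = 8.1 * 4 * 0.8

print(f"Uncompressed 4K@144 10-bit: {video_gbps:.1f} Gbps")        # ~35.8
print(f"DP1.4 usable bandwidth:     {dp14_payload_gbps:.1f} Gbps")  # ~25.9
print(f"Fits without DSC? {video_gbps <= dp14_payload_gbps}")       # False
```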

Comments

  • Korguz - Friday, June 14, 2019 - link

    CiccioB, news for you buddy: based on your own replies and your constant need to insult people, you are WORSE, and probably just a young punk kid. The way you keep pushing Nvidia and constantly bashing AMD makes YOU a fanboy yourself. FYI, Navi matched Nvidia's midrange 20 series products. The holy grail you call ray tracing is only viable on a 2080 or above. I could easily buy a 2080, but I won't, because I have more important things to spend my money on. If you have more money than brains and love paying Nvidia for their overpriced cards, that's your choice. Must be nice to live at home with probably no bills and no financial responsibility.

    You're calling me a fanboy? Fine, go ahead, it doesn't bother me, as we NEED competition; look what Intel did to the CPU market. And you said it yourself: "because Nvidia wants money." That's ALL they care about, MONEY AND PROFITS. "On Turing, the high prices are due to the large dies": that is part of it, but there's also simply no reason to price them lower, because there is nothing really out there as an alternative. Again, look at Intel for proof of this.
  • Korguz - Friday, June 14, 2019 - link

    Oh, and CiccioB, not once that I can see have you commented on how bad Turing's performance hit is with ray tracing, or how the 20 series isn't that much faster than the 10 series with ray tracing off for the price you pay. All you seem to be focusing on is trying to make Nvidia look better in your replies. Based on this, do YOU think the 20 series is a good upgrade from the 10 series? If you do, then you are a blind Nvidia fanboy.
  • Fallen Kell - Tuesday, June 11, 2019 - link

    Spunjji, there is a reason that NVIDIA put RTX in their midrange cards: most sales are midrange and lower. The entire issue with RTX is a chicken-and-egg problem. Game developers won't put in the effort for ray tracing if there is no hardware in consumers' hands that can take advantage of it, and consumers will typically not opt to purchase hardware that is not going to be used by any games. NVIDIA is effectively forcing the current chicken to lay a next-generation egg, opening the market for new techniques by creating a large enough install base for game companies to do the math and see that there is an existing market for their ray-traced games.
  • Fallen Kell - Tuesday, June 11, 2019 - link

    Yes, Nvidia could have simply put RTX on their highest-end cards and waited. And they would be waiting and waiting and waiting for game companies to actually implement ray tracing. A gaming studio won't invest the time and effort to retool their game engines for a potential consumer base of a few thousand people. However, if that potential consumer base is a few hundred thousand, or a million, they will take a look at adding the features.
  • Korguz - Tuesday, June 11, 2019 - link

    A lot of good that thinking does when most people can't afford the cards, or don't want to pay the cost of entry for them. Besides, to make ray tracing usable you kind of need a 2070, or better yet a 2080, as the performance hit is just too great.
  • Korguz - Tuesday, June 11, 2019 - link

    Fallen Kell, but the prices Nvidia is charging for their "midrange cards" are NOT midrange pricing; they have priced their midrange cards more like entry-level high-end cards. Midrange would be $400 or less.
  • Phynaz - Tuesday, June 11, 2019 - link

    I’ll bet you do a 180 when AMD has these features
  • Xyler94 - Tuesday, June 11, 2019 - link

    Reading this comment thread, you must be an egotistical idiot or something.

    First off, Ray Tracing needs to be on AMD (and Intel's upcoming XE) hardware before it will be supported industry wide. Why waste tons of money on a feature barely anyone will use?

    Secondly, RTX is awesome, no one is denying it. But it's also not a reason I'll buy into an RTX card. Sure, I could play Metro Exodus with it if I had that card, but my 980 Ti is still rendering games just fine, so I don't need to upgrade. I may buy a 2080 as my next card because I do use Moonlight, the open-source client for NVIDIA's GameStream service. But if AMD's performance at a lower price is higher in games without ray tracing, then that'll be my purchase. I've got no allegiances to either AMD or NVIDIA. I got a 980 Ti because it was the strongest card at the time. (Plus I watercooled it. It's darn awesome :) )
  • Meteor2 - Sunday, June 30, 2019 - link

    Xyler, you read all that?! :-o
  • eva02langley - Thursday, June 13, 2019 - link

    ROFL... it is mid-range... and it is almost matching High-end Nvidia. Get out of here...
