Ashes of the Singularity: Escalation (DX12)

A veteran of both our 2016 and 2017 game lists, Ashes of the Singularity: Escalation remains the DirectX 12 trailblazer, with developer Oxide Games having designed its Nitrous Engine around such low-level APIs. The game makes the most of DX12's key features, from asynchronous compute to multi-threaded work submission and high batch counts. And with full Vulkan support, Ashes provides good common ground between today's forward-looking APIs. Its built-in benchmark tool is still one of the most versatile ways of measuring in-game workloads in terms of output data, automation, and analysis; by offering such a tool publicly as part-and-parcel of the game, it sets an example that other developers should take note of.

Settings and methodology remain identical to their usage in the 2016 GPU suite. To note, we are utilizing the original Ashes Extreme graphical preset, which compares to the current in-game preset with MSAA dialed down from 4x to 2x, as well as an adjusted Texture Rank (MipsToRemove in settings.ini).
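
For anyone scripting the same tweak, here is a minimal sketch of editing that key with Python's configparser. Only the MipsToRemove key name comes from the text above; the [Graphics] section name and sample values are placeholders, and the game's actual settings.ini layout may differ:

```python
import configparser
import io

def set_mips_to_remove(ini_text, value):
    """Return ini_text with the MipsToRemove key rewritten.

    The [Graphics] section name is a placeholder for illustration;
    the game's real settings.ini layout may differ.
    """
    cfg = configparser.ConfigParser()
    cfg.optionxform = str  # preserve key casing (default lowercases keys)
    cfg.read_string(ini_text)
    cfg["Graphics"]["MipsToRemove"] = str(value)
    out = io.StringIO()
    cfg.write(out)
    return out.getvalue()
```

Other keys in the file are parsed and written back unchanged, so a script like this can be dropped into a benchmark-automation loop.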

We've updated some of the benchmark automation and data processing steps, so results may vary at the 1080p mark compared to previous data.
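
As a reference for what the 99th-percentile charts below report, here is a minimal sketch of the usual frame-time math, assuming a plain list of per-frame render times in milliseconds (the benchmark's actual output format is not reproduced here):

```python
def percentile_fps(frame_times_ms, pct=99.0):
    """Return the FPS corresponding to the pct-th percentile frame time.

    The slowest frames dominate perceived smoothness, so we find the
    frame time that pct% of frames beat, then convert it to FPS.
    """
    times = sorted(frame_times_ms)
    # Nearest-rank index of the pct-th percentile.
    idx = min(len(times) - 1, int(round(pct / 100.0 * len(times))) - 1)
    return 1000.0 / times[max(idx, 0)]

def average_fps(frame_times_ms):
    """Average FPS over the run: total frames divided by total time."""
    return 1000.0 * len(frame_times_ms) / sum(frame_times_ms)
```

This uses the nearest-rank method; tools that interpolate between samples will report slightly different values.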

Ashes of the Singularity: Escalation - 2560x1440 - Extreme Quality

Ashes of the Singularity: Escalation - 1920x1080 - Extreme Quality

Ashes: Escalation - 99th Percentile - 2560x1440 - Extreme Quality

Ashes: Escalation - 99th Percentile - 1920x1080 - Extreme Quality

Interestingly, Ashes offers the smallest improvement in the suite for the GTX 1660 Ti over the GTX 1060 6GB. Similarly, the GTX 1660 Ti trails the GTX 1070 here, a card that elsewhere sits just behind its newer Turing sibling. With the GTX 1070 FE and RX Vega 56 neck-and-neck, the GTX 1660 Ti splits the RX 590/RX Vega 56 gap.

Comments

  • Rudde - Friday, February 22, 2019 - link

    Never mind, the second page explains this well. (Parallel execution of fp16, fp32 and int32.)
  • CiccioB - Saturday, February 23, 2019 - link

    Not only that.
    With Turing you also get mesh shading and better support for thread switching, which is an awful technique used on GCN to improve its terrible efficiency, as it has lots of "bubbles" in the pipelines.
    That's the reason you see that previously AMD-optimized games which didn't run too well on Pascal work much better with Turing, as the highly threaded technique (the famous AC, which is a bit overused in engines created for the console HW) is not going to constantly stall the SMs with useless work like frequent task switching.
  • AciMars - Saturday, February 23, 2019 - link

    “Worse yet, the space used per SM has gotten worse.” Not true. You know, Turing has separate CUDA cores for INT and FP. It means that when Turing has 1536 CUDA cores, it really has 1536 INT + 1536 FP cores. So in terms of die size, Turing actually has 2x the CUDA cores compared to Pascal.
  • CiccioB - Monday, February 25, 2019 - link

    Not exactly, the number of CUDA cores is the same; it's just that a new independent ALU has been added.
    A CUDA core is not only an execution unit; it also includes registers, memory (cache), buses (memory access) and other special execution units (load/store).
    By adding a new integer ALU you don't automatically get double the capacity, as you would by really doubling the number of complete CUDA cores.
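
CiccioB's point above — that bolting a second ALU onto a CUDA core adds concurrency rather than doubling capacity — can be put in numbers with a toy issue model. This is a sketch of my own, not NVIDIA's methodology; the 36-integer-per-100-FP instruction mix is roughly the figure NVIDIA has cited for typical game shaders:

```python
def dual_issue_speedup(int_per_100_fp):
    """Toy model of issue throughput for a mixed FP/INT instruction stream.

    Serial (shared pipeline, Pascal-style): every instruction occupies an
    issue slot, so total cost is fp + int.
    Concurrent (separate INT ALU, Turing-style): the two streams overlap,
    so cost is bounded by the longer stream.
    """
    fp, integer = 100, int_per_100_fp
    serial = fp + integer
    concurrent = max(fp, integer)
    return serial / concurrent
```

With a 36:100 INT:FP mix the overlap buys roughly a 1.36x issue-rate gain, not 2x; only a perfect 50/50 mix would reach the theoretical doubling.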
  • ballsystemlord - Friday, February 22, 2019 - link

    Here are some spelling and grammar corrections.

    This has proven to be one of NVIDIA's bigger advantages over AMD, an continues to allow them to get away with less memory bandwidth than we'd otherwise expect some of their GPUs to need.
    Missing d as in "and":
    This has proven to be one of NVIDIA's bigger advantages over AMD, and continues to allow them to get away with less memory bandwidth than we'd otherwise expect some of their GPUs to need.
    so we've only seen a handful of games implement (such as Wolfenstein II) implement it thus far.
    Double implement, 1 before the ()s and 1 after:
    so we've only seen a handful of games (such as Wolfenstein II) implement it thus far.

    For our games, these results is actually the closest the RX 590 can get to the GTX 1660 Ti,
    Use "are" not "is":
    For our games, these results are actually the closest the RX 590 can get to the GTX 1660 Ti,

    This test offers a slew of additional tests - many of which use behind the scenes or in our earlier architectural analysis - but for now we'll stick to simple pixel and texel fillrates.
    Missing "we" (I suspect that the sentence should be reconstructed without the "-"s, but I'm not that good.):
    This test offers a slew of additional tests - many of which we use behind the scenes or in our earlier architectural analysis - but for now we'll stick to simple pixel and texel fillrates.

    "Looking at temperatures, there are no big surprises here. EVGA seems to have tuned their card for cooling, and as a result the large, 2.75-slot card reports some of the lowest numbers in our charts, including a 67C under FurMark when the card is capped at the reference spec GTX 1660 Ti's 120W limit."
    I think this could be clarified, as there are 2 EVGA cards in the charts and the one at 67C is not explicitly labeled as EVGA.

    Thanks
  • Ryan Smith - Saturday, February 23, 2019 - link

    Thanks!
  • boozed - Friday, February 22, 2019 - link

    The model numbers have become quite confusing
  • Yojimbo - Saturday, February 23, 2019 - link

    I don't think they are confusing; 16 is between 10 and 20, plus the RTX is extra differentiation. In fact, if NVIDIA had some cards in the 20 series with RTX capability and some cards in the 20 series without it, even if some were 'GTX' and some were 'RTX', that would be far more confusing. Putting the non-RTX Turing cards in their own series is a way of avoiding confusion. But if they actually come out with an "1180", as some rumors floating around say, that would be very confusing.
  • haukionkannel - Saturday, February 23, 2019 - link

    Interesting to see what next year brings.
    RTX 3050 and GTX 2650 Ti for the weaker versions, if we get one new RTX card family... Hmm... that could work if they keep the naming. 2021: RTX 3040 and GTX 2640 Ti...
  • CiccioB - Thursday, February 28, 2019 - link

    Next generation, all cards will have enough RT and tensor cores enabled.
