Battlefield 1 (DX11)

Battlefield 1 returns from the 2017 benchmark suite with a bang, as DICE brought gamers the long-awaited AAA World War 1 shooter a little over a year ago. With detailed maps, environmental effects, and pacy combat, Battlefield 1 provides a generally well-optimized yet demanding graphics workload. The next Battlefield game from DICE, Battlefield V, completes the nostalgia circuit with a return to World War 2, but more importantly for us, it is one of the flagship titles for GeForce RTX real-time ray tracing.

We use the Ultra preset with no alterations. As these benchmarks are from single player mode, our rule of thumb with multiplayer performance still applies: multiplayer framerates generally dip to half our single player framerates. Battlefield 1 also supports HDR (HDR10, Dolby Vision).

Battlefield 1 - 3840x2160 - Ultra Quality

Battlefield 1 - 2560x1440 - Ultra Quality

Battlefield 1 - 1920x1080 - Ultra Quality

Our previous experience with Battlefield 1 shows that AMD hardware tends to do relatively well here, and the Radeon VII is no exception. Battlefield 1 is actually one of only two games in our suite where the Radeon VII takes the lead over the RTX 2080, but nevertheless this is still a feather in its cap. The uplift over the Vega 64 is an impressive 34% at 4K, more than enough to solidly mark its position in the tier above. In turn, Battlefield 1 sees the Radeon VII meaningfully faster than the GTX 1080 Ti FE, something that the RTX 2080 needed the Founders Edition tweaks to achieve.

Battlefield 1 - 99th Percentile - 3840x2160 - Ultra Quality

Battlefield 1 - 99th Percentile - 2560x1440 - Ultra Quality

Battlefield 1 - 99th Percentile - 1920x1080 - Ultra Quality

The 99th percentile framerates reflect the same story, and at 1080p the CPU bottleneck plays more of a role than the slight differences between the top three cards.
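
For readers less familiar with the metric, below is a minimal Python sketch of how a 99th percentile framerate can be derived from a frame-time log. It is purely illustrative (nearest-rank percentile, hypothetical frame times), not the tooling behind these charts.

    # Hypothetical sketch of how a 99th percentile framerate can be derived
    # from a per-frame render time log (values in milliseconds). This is an
    # illustration of the metric, not the tooling used for these charts.
    def framerate_metrics(frame_times_ms):
        """Return (average fps, 99th percentile fps) for one benchmark run."""
        if not frame_times_ms:
            raise ValueError("no frames recorded")

        # Average framerate: total frames divided by total elapsed time.
        total_s = sum(frame_times_ms) / 1000.0
        avg_fps = len(frame_times_ms) / total_s

        # 99th percentile framerate: find the frame time that ~99% of frames
        # come in under (nearest-rank percentile), then invert it, so the
        # figure reflects the slowest ~1% of frames rather than the whole run.
        sorted_times = sorted(frame_times_ms)
        idx = min(len(sorted_times) - 1, int(0.99 * len(sorted_times)))
        p99_fps = 1000.0 / sorted_times[idx]

        return avg_fps, p99_fps

    # Hypothetical run: mostly ~16.7 ms frames with a few 30 ms hitches.
    frames = [16.7] * 990 + [30.0] * 10
    print(framerate_metrics(frames))  # ~59 fps average, ~33 fps 99th percentile

The takeaway is that a handful of slow frames barely move the average framerate but dominate the 99th percentile figure, which is why the metric is a better proxy for hitching and stutter than the average alone.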

Comments

  • Alistair - Thursday, February 7, 2019 - link

    Because everyone is already playing Anthem at 4k 60fps with a $400 card? Ray tracing is totally useless and we need way more rasterization performance per dollar than we have right now. Give me a 7nm 2080 ti without the RT cores for $699 and then we'll talk.
  • eva02langley - Friday, February 8, 2019 - link

    Fair, the main objective of a gaming GPU is shaders per $. GameWorks gimmicks are not something I'd call a selling factor... and Nvidia is forced to cook their books because of it.
  • RSAUser - Thursday, February 7, 2019 - link

    Why are you adding the Final Fantasy benchmark when it has known bias issues?
  • Zizy - Thursday, February 7, 2019 - link

    Eh, 2080 is slightly better for games and costs the same, while unfortunately MATLAB supports just CUDA so I can't even play with compute.
  • Hul8 - Thursday, February 7, 2019 - link

    On page 19, the "Load GPU Temperature - FurMark" graph is duplicated.
  • Ryan Smith - Thursday, February 7, 2019 - link

    Thanks. The FurMark power graph has been put back where it belongs.
  • schizoide - Thursday, February 7, 2019 - link

    Man, I've never seen such a hostile response to an AnandTech article. People need to relax; it's just a video card.

    I don't see this as a win for AMD. Using HBM2, the card is expensive to produce, so they don't have a lot of freedom to discount it. Without a hefty discount, it's louder, hotter, and slower than a 2080 at the same price. And of course no ray tracing, which may or may not matter, but I'd rather have it just in case.

    For OpenCL work it's a very attractive option, but again, that's a loser for AMD because they ALREADY sold this card as a workstation product for a lot more money. Now it's discounted to compete with the 2080, meaning less revenue for AMD.

    Even once the drivers are fixed, I don't see this going anywhere. It's another Vega64.
  • sing_electric - Thursday, February 7, 2019 - link

    There are still a lot of people for whom a Radeon Instinct was just never going to happen, INCLUDING people who write code on a workstation that will mostly run on servers; this card means you can run/test your code on your workstation with a fairly predictable mapping to final server performance.

    As Nate said in the review, it's also very attractive to academics, which benefits AMD in the long run if, say, a bunch of professors and grad students learn to write ML/CL on Radeon before starting or joining companies.
  • schizoide - Thursday, February 7, 2019 - link

    Yes, it's attractive to anyone who values OpenCL performance. They're getting workstation-class hardware on the cheap. But that does devalue AMD's workstation product line.
  • Manch - Thursday, February 7, 2019 - link

    Not really. The Instinct cards are still more performant. They tend to be bought by businesses where time/perf is more important than price/perf.
