PCMark 10 Storage Benchmarks

At the end of 2019, UL released a major update to their popular PCMark 10 benchmark suite, adding storage performance tests that had been conspicuously absent for over two years. These new storage benchmarks are similar to our AnandTech Storage Bench (ATSB) tests in that they are composed of traces of real-world IO patterns that are replayed onto the drive under test. We're incorporating these into our new SSD test suite, and including our first batch of results here.
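Conceptually, a trace replay harness is simple: it reads a list of recorded IO operations and reissues them against the target drive while timing the results. The sketch below is a minimal illustration of the idea, not PCMark 10's or ATSB's actual implementation; the trace format and the numbers in it are invented for the example.

```python
import os
import time

# Hypothetical trace format: (operation, byte offset, transfer size in bytes).
# Real benchmark traces contain millions of entries captured from actual
# application activity, along with timing and queue-depth information.
TRACE = [
    ("read", 0, 4096),
    ("write", 1 << 20, 131072),
    ("read", 1 << 19, 65536),
]

def replay_trace(path: str) -> float:
    """Replay TRACE against an existing file and return elapsed seconds."""
    buf = bytes(131072)  # zero-filled write buffer
    start = time.perf_counter()
    with open(path, "r+b") as f:
        for op, offset, length in TRACE:
            f.seek(offset)
            if op == "read":
                f.read(length)
            else:
                f.write(buf[:length])
        f.flush()
        os.fsync(f.fileno())  # make sure writes actually reach the drive
    return time.perf_counter() - start
```

To run it, point the function at a preallocated test file at least a few megabytes in size. A real harness would also bypass the OS page cache (e.g. with O_DIRECT) and reproduce the trace's original inter-IO timing and queue depths, which is where most of the engineering effort goes.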

PCMark 10 provides four different storage benchmarks. The Full System Drive, Quick System Drive and Data Drive benchmarks cover similar territory to our ATSB Heavy and Light tests, and all three together take about as long to run as the ATSB Heavy and Light tests combined. The Drive Performance Consistency Test is clearly meant to one-up The Destroyer and also measure the worst-case performance of a drive that is completely full. Due to time constraints, we are not yet attempting to add the Drive Performance Consistency Test to our usual test suite.

PCMark 10 Storage Tests
Test Name                       Data Written
Data Drive                      15 GB
Quick System Drive              23 GB
Full System Drive               204 GB
Drive Performance Consistency   23 TB + 3x drive capacity
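The Drive Performance Consistency test's write volume scales with drive capacity, so the biggest drives take by far the longest to test. A quick sketch of the arithmetic from the last row above (the function name is ours):

```python
def consistency_test_writes_tb(capacity_tb: float) -> float:
    """Total data written by the Drive Performance Consistency test:
    a fixed 23 TB plus three full passes over the drive's capacity,
    per the table above."""
    return 23.0 + 3.0 * capacity_tb

# For the 8TB Sabrent Rocket Q: 23 + 3*8 = 47 TB of writes.
print(consistency_test_writes_tb(8.0))  # 47.0
```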

The primary subscores for the PCMark 10 Storage benchmarks are average bandwidth and average latency for read and write IOs. These are combined into an overall score by computing the geometric mean of the bandwidth score and the reciprocal of the latency score. PCMark 10 also records more detailed statistics, but we'll dig into those in a later review.
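In code, that combination looks something like the sketch below. The scaling constant is our own invention for readability; UL doesn't document its exact scaling here, so treat this as an illustration of the geometric-mean structure rather than a reimplementation of PCMark 10's scoring.

```python
import math

def overall_storage_score(avg_bandwidth_mbps: float, avg_latency_us: float) -> float:
    """Geometric mean of the bandwidth score and the reciprocal of the
    latency score, per the description above. SCALE is hypothetical."""
    SCALE = 1000.0  # assumed constant, not UL's actual value
    return SCALE * math.sqrt(avg_bandwidth_mbps * (1.0 / avg_latency_us))

# A drive averaging 400 MB/s at 60 microseconds mean latency:
print(round(overall_storage_score(400.0, 60.0)))  # ~2582
```

The geometric mean rewards balance: a drive with great bandwidth but poor latency scores lower than one that is merely good at both.

These PCMark 10 Storage test runs were conducted on our Coffee Lake testbed: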

AnandTech Coffee Lake SSD Testbed
CPU            Intel Core i7-8700K
Motherboard    Gigabyte Aorus H370 Gaming 3 WiFi
Chipset        Intel H370
Memory         2x 8GB Kingston DDR4-2666
Case           In Win C583
Power Supply   Cooler Master G550M
OS             Windows 10 64-bit, version 2004

Data Drive Benchmark

The Data Drive Benchmark is intended to represent the usage that a secondary or portable drive may be subject to. This test simulates copying files around, but does not simulate the IO associated with launching and running applications from a drive.

[Chart: PCMark 10 Storage - Data; scores shown: Overall Score, Average Bandwidth, Average Latency]

Starting off, the 8TB Sabrent Rocket Q leads the field thanks to its massive and fast SLC cache; it clearly outperforms even the decently high-end 2TB TLC-based HP EX920. The several capacities of the Samsung 870 QVO all perform about the same: less than half the speed of the faster NVMe drives, and slower than the slowest entry-level NVMe drives. The enterprise SATA drive with no SLC caching comes in last place.

Quick System Drive Benchmark

The Quick System Drive Benchmark is a subset of the Full System Drive Benchmark, running only 6 out of the 23 sub-tests from the Full test.

[Chart: PCMark 10 Storage - Quick; scores shown: Overall Score, Average Bandwidth, Average Latency]

Moving on to the Quick test, the Sabrent Rocket Q no longer stands out ahead of the other NVMe drives, but still offers decent performance. The performance gap between the NVMe drives and the Samsung 870 QVO drives has narrowed slightly, but is still almost a factor of two.

Full System Drive Benchmark

The Full System Drive Benchmark covers a broad range of everyday tasks: booting Windows and starting applications and games, using Office and Adobe applications, and file management. The "Full" in the name does not mean that each drive is filled or that the entire capacity of the drive is tested. Rather, it only indicates that all of the PCMark 10 Storage sub-tests are included in this test.

[Chart: PCMark 10 Storage - Full; scores shown: Overall Score, Average Bandwidth, Average Latency]

The Full test starts to bring the downsides of QLC NAND into focus. The Sabrent Rocket Q is now the slowest of the NVMe drives, only moderately faster than the 8TB Samsung 870 QVO. The 1TB 870 QVO is also falling behind the larger and faster models. However, the QLC-based Intel 660p manages to hold on to decent performance, possibly a result of the class-leading SLC cache performance we usually see from Silicon Motion NVMe controllers paired with Intel/Micron flash.

Comments

  • Oxford Guy - Monday, December 7, 2020 - link

    I have three OCZ 240 GB Vertex 2 drives. They're all bricked. Two of them were replacements for bricked drives. One of them bricked within 24 hours of being used. They bricked in four different machines.

    Pure garbage. OCZ pulled a bait and switch, substituting 64Gbit NAND for the 32Gbit NAND the drives were reviewed/tested with and rated for on the box. The horrendously bad Sandforce controller choked on the 64Gbit NAND, and OCZ never stabilized it with its plethora of firmware releases. The company also didn't include the 240 GB model in its later exchange program, even though it was the most expensive in the lineup.

    Sandforce was more interested in protecting the secrets of its garbage design than protecting users from data loss, so the drives would brick as soon as the tiniest problem was encountered, and no tool was ever released to the public to retrieve the data. Recovery was deliberately made impossible for anyone who wasn't in spycraft/forensics or working for a costly drive recovery service. I think there was even an announced partnership between OCZ and a drive recovery company for Sandforce drives, which isn't at all suspicious.
  • Oxford Guy - Monday, December 7, 2020 - link

    The Sandforce controller was also apparently incompatible with the TRIM command, but customers were never warned about that. So TRIM didn't cause performance to rebound as it should have.
  • UltraWide - Saturday, December 5, 2020 - link

    AMEN for silence. I have a 6 x 8TB NAS and even with 5,400 RPM HDDs it's quite loud.
  • TheinsanegamerN - Saturday, December 5, 2020 - link

    I really want to like the slim, and would love one that I could load up with 2TB SATA SSDs in RAID, but they've dragged their feet on a 10G version. 1G or even 2.5G is totally pointless for SSD NASes.
  • bsd228 - Friday, December 4, 2020 - link

    Sequential transfer speed isn't all that matters.

    Two mirrored SSDs on a 10G connection can get you better read performance than any local SATA SSD, and that storage can be shared across the whole home network.
  • david87600 - Friday, December 4, 2020 - link

    My thoughts exactly. SSD rarely makes sense for NAS.
  • Hulk - Friday, December 4, 2020 - link

    What do we know about the long term data retention of these QLC storage devices?
  • Oxford Guy - Friday, December 4, 2020 - link

    16 voltage states to deal with for QLC. 8 voltage states for TLC. 4 for 2-bit MLC. 2 for SLC.

    More voltage states = bad. The only good thing about QLC is density. Everything else is worse.
  • Spunjji - Monday, December 7, 2020 - link

    It's not entirely worse. More voltage states are harder to read, for sure, but they've also implemented more robust ECC with each new generation of NAND to counteract that.

    I'd trust one of these QLC drives more than I'd trust my old 120GB 840 drive in that regard.
  • Oxford Guy - Tuesday, December 8, 2020 - link

    Apples and oranges. More robust workarounds for shortcomings are not the same as the shortcomings not existing.
