PCMark 10 Storage Benchmarks

At the end of 2019, UL released a major update to their popular PCMark 10 benchmark suite, adding storage performance tests that had been conspicuously absent for over two years. These new storage benchmarks are similar to our AnandTech Storage Bench (ATSB) tests in that they are composed of traces of real-world IO patterns that are replayed onto the drive under test. We're incorporating these into our new SSD test suite, and including our first batch of results here.

PCMark 10 provides four different storage benchmarks. The Full System Drive, Quick System Drive and Data Drive benchmarks cover similar territory to our ATSB Heavy and Light tests, and all three together take about as long to run as the ATSB Heavy and Light tests combined. The Drive Performance Consistency Test is clearly meant to one-up The Destroyer and also measure the worst-case performance of a drive that is completely full. Due to time constraints, we are not yet attempting to add the Drive Performance Consistency Test to our usual test suite.

PCMark 10 Storage Tests
Test Name                       Data Written
Data Drive                      15 GB
Quick System Drive              23 GB
Full System Drive               204 GB
Drive Performance Consistency   23 TB + 3x drive capacity

The primary subscores for the PCMark 10 Storage benchmarks are average bandwidth and average latency for read and write IOs. These are combined into an overall score by computing the geometric mean of the bandwidth score and the reciprocal of the latency score. PCMark 10 also records more detailed statistics, but we'll dig into those in a later review.
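As a rough illustration of how such a composite behaves (UL does not publish its exact normalization constants, so the scale here is hypothetical):

    import math

    def overall_score(avg_bandwidth_mbps: float, avg_latency_us: float) -> float:
        # Geometric mean of the bandwidth term and the reciprocal of the latency
        # term: higher bandwidth and lower latency both raise the score.
        return math.sqrt(avg_bandwidth_mbps * (1.0 / avg_latency_us))

    # Example: 450 MB/s average bandwidth at 60 microseconds average latency gives
    # overall_score(450, 60) = 2.74 (unitless; real PCMark scores are rescaled).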

These PCMark 10 Storage test runs were conducted on our Coffee Lake testbed:

AnandTech Coffee Lake SSD Testbed
CPU            Intel Core i7-8700K
Motherboard    Gigabyte Aorus H370 Gaming 3 WiFi
Chipset        Intel H370
Memory         2x 8GB Kingston DDR4-2666
Case           In Win C583
Power Supply   Cooler Master G550M
OS             Windows 10 64-bit, version 2004


Data Drive Benchmark

The Data Drive Benchmark is intended to represent the usage that a secondary or portable drive may be subject to. This test simulates copying files around, but does not simulate the IO associated with launching and running applications from the drive.

[Charts: PCMark 10 Storage - Data: Overall Score, Average Bandwidth, Average Latency]

Starting off, the 8TB Sabrent Rocket Q leads the field thanks to its massive and fast SLC cache; it clearly outperforms even the decently high-end 2TB TLC-based HP EX920. The several capacities of the Samsung 870 QVO all perform about the same: less than half the speed of the faster NVMe drives, and slower even than the slowest entry-level NVMe drives. The enterprise SATA drive with no SLC caching comes in last place.

Quick System Drive Benchmark

The Quick System Drive Benchmark is a subset of the Full System Drive Benchmark, running only 6 out of the 23 sub-tests from the Full test.

[Charts: PCMark 10 Storage - Quick: Overall Score, Average Bandwidth, Average Latency]

Moving on to the Quick test, the Sabrent Rocket Q no longer stands out ahead of the other NVMe drives, but still offers decent performance. The performance gap between the NVMe drives and the Samsung 870 QVO drives has narrowed slightly, but is still almost a factor of two.

Full System Drive Benchmark

The Full System Drive Benchmark covers a broad range of everyday tasks: booting Windows and starting applications and games, using Office and Adobe applications, and file management. The "Full" in the name does not mean that each drive is filled or that the entire capacity of the drive is tested. Rather, it only indicates that all of the PCMark 10 Storage sub-tests are included in this test.

[Charts: PCMark 10 Storage - Full: Overall Score, Average Bandwidth, Average Latency]

The Full test starts to bring the downsides of QLC NAND into focus. The Sabrent Rocket Q is now the slowest of the NVMe drives, only moderately faster than the 8TB Samsung 870 QVO. The 1TB 870 QVO is also falling behind the larger and faster models. However, the QLC-based Intel 660p manages to hold on to decent performance, possibly a result of the class-leading SLC cache performance we usually see from Silicon Motion NVMe controllers paired with Intel/Micron flash.

Comments

  • heffeque - Friday, December 4, 2020 - link

    No worries on a NAS: BTRFS will take care of that in the background.
  • Billy Tallis - Friday, December 4, 2020 - link

    Not sure if that's a joke about BTRFS RAID5/6 ensuring you lose your data.

    A BTRFS scrub isn't automatic; you need a cron job or similar to automate periodic scrubbing. But assuming you do that and stay away from the more dangerous/less tested RAID modes, you shouldn't have to worry about silent data loss. I've been using BTRFS RAID1 with various SSDs as my primary NAS ever since I amassed enough 1 and 2TB models, and it's worked well so far. ZFS would also work reasonably well, but it is less convenient when you're using a pile of mismatched drives.
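
    For illustration, a minimal cron entry for a monthly scrub; the mount point /mnt/nas is hypothetical:

        # /etc/cron.d/btrfs-scrub: run a full scrub at 03:00 on the 1st of each month.
        # -B keeps the scrub in the foreground so cron can report a non-zero exit status.
        0 3 1 * *  root  /usr/bin/btrfs scrub start -B /mnt/nas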

    Getting back to the question of data retention of QLC itself: the write endurance rating of a drive is supposed to be chosen so that at the end of the rated write endurance the NAND is still healthy enough to provide 1 year unpowered data retention. (For client/consumer drives; for enterprise drives the standard is just 3 months, so they can afford to wear out the NAND a bit further, and that's part of why enterprise drives have higher TBW ratings.)
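
    (For anyone converting between the two endurance rating conventions, a rough sketch; the numbers below are hypothetical, not any drive's actual rating:)

        def dwpd_from_tbw(tbw: float, capacity_tb: float, warranty_years: float) -> float:
            # Drive Writes Per Day implied by a TBW rating over the warranty period.
            return tbw / (capacity_tb * warranty_years * 365)

        # Hypothetical example: an 1800 TBW rating on an 8 TB drive with a
        # 5-year warranty works out to dwpd_from_tbw(1800, 8, 5) = 0.12 DWPD.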
  • heffeque - Wednesday, December 9, 2020 - link

    BTRFS background self-healing is automatic in Synology as of DSM 6.1 and above.
  • TheinsanegamerN - Saturday, December 5, 2020 - link

    Long-term cold storage of any flash memory is terrible. QLC won't be any better than TLC in this regard.
  • Oxford Guy - Sunday, December 6, 2020 - link

    How could it possibly be better (than 3D TLC)?

    It can only be worse unless the TLC is really shoddy quality. This is because it has 16 voltage states rather than 8.
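
    (The arithmetic behind that claim, as a quick sketch:)

        def voltage_states(bits_per_cell: int) -> int:
            # A cell storing n bits must resolve 2**n distinct charge levels:
            # 2**3 = 8 for TLC, 2**4 = 16 for QLC. Splitting the same voltage
            # window into twice as many levels leaves less margin per state,
            # so charge drift during unpowered storage corrupts data sooner.
            return 2 ** bits_per_cell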
  • TheinsanegamerN - Monday, December 7, 2020 - link

    Hence why I said it won't be any better, because it can't be. That leaves the door open for it to be worse.

    Reeding iz hard.
  • Oxford Guy - Monday, December 7, 2020 - link

    But your comment obviously wasn't clear enough, was it?

    QLC is worse than TLC. Next time write that since that's the clear truth, not that QLC and TLC are somehow equivalent.
  • joesiv - Friday, December 4, 2020 - link

    I love the idea of 8TB SSDs; that's the perfect size for a local data drive, and I could finally be rid of my spinning rust! Just need the price to drop a bit; maybe next year!

    Thank you for the review. Though I wish SSD reviews would make clearer to consumers what endurance really means to the end user. "DWPD" and "TB/D" are mentioned, but no one seems to highlight that it's not the end user's writes that matter in these specifications; it's "writes to NAND", which can be totally different from user/OS writes. It all depends on the firmware, and some firmwares do wonky things for data collection or speed, or even have bugs, which drastically drop the endurance of a drive.

    Of course I would love an exhaustive endurance test in the review, but at a bare minimum, if AnandTech could check the SMART data after the benchmark suite is done and verify two things, it would give you some useful information.

    Check:
    - NAND writes (average block erases is usually available)
    - OS writes (sometimes not easily available, but since you run a standardized bench suite, you probably already have an idea of how many GBs you typically put through your drives)

    You might need to do a bit of math on the block erase count to get it back to GBs, and you might need to contact the manufacturer for SMART attribute documentation. If they don't have good SMART data attributes or documentation available, perhaps that's something to highlight in the review.
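
    (A sketch of that math, assuming one average erase cycle across all blocks corresponds to roughly one full drive capacity of NAND writes; the numbers are hypothetical:)

        def write_amplification(avg_block_erases: float, capacity_gib: float,
                                host_writes_gib: float) -> float:
            # One average erase across every block is about one drive capacity of
            # NAND writes; dividing by host writes gives the amplification factor.
            nand_writes_gib = avg_block_erases * capacity_gib
            return nand_writes_gib / host_writes_gib

        # Hypothetical example: 30 average erases on a 1024 GiB drive after
        # 10240 GiB of host writes -> write_amplification(30, 1024, 10240) == 3.0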

    But then you could weed out, and flag for consumers, drives whose firmware has outrageously inefficient NAND write patterns.

    My company has had several failures, and because of that we've had to test potential drives for our products in this way, and we've had to outright skip drives whose specs were great but whose firmware was doing very inefficient NAND writes, limiting their endurance.

    Anyways, that's my feedback; fingers crossed!

    Keep up the good work, and thanks for the quality content!
  • heffeque - Friday, December 4, 2020 - link

    Well... 2 TB per day every day seems like a lot of writes. Not sure it'll be a problem for normal use.
  • joesiv - Friday, December 4, 2020 - link

    Well, firmware bugs can cause writes to be magnified 10x or 100x beyond what is expected. I've seen it. So at 100x, your 2TB per day would effectively be just 20GB... Of course we hope that firmwares don't have such bugs, but how would we know unless someone looked at the numbers?
