Performance Consistency

Performance consistency tells us a lot about the architecture of these SSDs and how they handle internal fragmentation. SSD IO latency is inherently inconsistent because every controller must periodically perform garbage collection and defragmentation in order to continue operating at high speed. When and how an SSD decides to run these cleanup routines directly impacts the user experience, as inconsistent performance shows up as application slowdowns.

To test IO consistency, we fill a secure-erased SSD with sequential data to ensure that all user-accessible LBAs (Logical Block Addresses) have data associated with them. Next we kick off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. The test runs for just over half an hour, and we record instantaneous IOPS every second.
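The article does not name the benchmarking tool, but the workload described above can be approximated with an fio job file. The sketch below is an assumption on my part (the device name, runtime, and IO engine are illustrative, not from the article), and it is destructive to the target drive:

```ini
; Hedged sketch: an fio job approximating the consistency test.
; /dev/sdX, the 2000 s runtime, and libaio are assumptions.

[global]
filename=/dev/sdX      ; drive under test (all data will be destroyed)
direct=1               ; bypass the page cache
ioengine=libaio
iodepth=32             ; queue depth of 32

[precondition]
rw=write               ; sequential fill so every user LBA has data
bs=128k
stonewall

[consistency]
rw=randwrite           ; 4KB random writes across all LBAs
bs=4k
randrepeat=0
refill_buffers         ; incompressible data for each write
time_based
runtime=2000           ; "just over half an hour"
log_avg_msec=1000      ; average IOPS over 1 s windows
write_iops_log=consistency
stonewall
```

The `stonewall` directives make the fill complete before the random-write phase begins, mirroring the two-step procedure in the text.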

We also test drives with added over-provisioning by limiting the LBA range. This gives us a look at the drive's behavior with varying amounts of empty space, which is frankly a more realistic approach for client workloads.
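As a worked example of what limiting the LBA range does, here is a minimal sketch. The capacities and the exact definition of over-provisioning (spare NAND relative to user-visible capacity) are illustrative assumptions, not figures from the article:

```python
def op_percent(raw_gib: float, exposed_gib: float) -> float:
    """Over-provisioning, using the common definition: spare NAND
    capacity relative to the user-visible (exposed) capacity."""
    return 100 * (raw_gib - exposed_gib) / exposed_gib

# Limiting a hypothetical 1024 GiB drive's LBA range to 819.2 GiB
# leaves 25% over-provisioning:
print(round(op_percent(1024, 819.2), 2))  # -> 25.0
```

The controller can use any LBA that is never written (or has been trimmed) as extra spare area, which is why simply restricting the test's LBA range is equivalent to adding over-provisioning.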

Each of the three graphs has its own purpose. The first covers the full duration of the test on a log scale. The second and third zoom into the beginning of steady-state operation (t=1400s) at different scales: the second uses a log scale for easy comparison, whereas the third uses a linear scale to better visualize the differences between drives. Click the dropdown selections below each graph to switch the source data.
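Processing the resulting one-sample-per-second IOPS log is straightforward; a minimal sketch (the function and field names are my own, not from the article):

```python
import statistics

def summarize_iops(iops_log, steady_state_start=1400):
    """Summarize a per-second IOPS trace, skipping the initial burst
    phase and looking only at steady-state operation (t >= 1400 s)."""
    steady = iops_log[steady_state_start:]
    return {
        "mean": statistics.mean(steady),
        "stdev": statistics.pstdev(steady),  # population std deviation
        "min": min(steady),                  # worst one-second sample
    }

# Synthetic trace: a fast burst phase, then degraded steady state
# with some variance between one-second samples.
trace = [70000] * 1400 + [5000, 6000, 7000] * 200
print(summarize_iops(trace)["mean"])  # -> 6000
```

The standard deviation and the worst-case sample are what the log-scale graphs make visible: two drives with the same average IOPS can have very different minimums.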

For more detailed description of the test and why performance consistency matters, read our original Intel SSD DC S3700 article.

Mushkin Reactor 1TB
25% Over-Provisioning

Despite the use of newer and slightly lower-performance 16nm NAND, the Reactor's performance consistency is actually marginally better than that of the other SM2246EN based SSDs we have tested. It's still worse than most of the other drives, but at least the increase in capacity didn't negatively impact consistency, which happens with some drives.

TRIM Validation

To test TRIM, I filled the drive with sequential 128KB data and proceeded with a 30-minute random 4KB write (QD32) workload to put the drive into steady-state. After that I TRIM'ed the drive by issuing a quick format in Windows and ran HD Tach to produce the graph below.

And TRIM works as expected.



Comments

  • nandnandnand - Monday, February 9, 2015 - link

    I get excited over Samsung/V-NAND SSD reviews. But it hasn't resulted in a steep price drop yet.
  • Uplink10 - Wednesday, February 11, 2015 - link

    That is because SSDs are overpriced and people should avoid buying them, because they are only giving fuel to greedy companies. I am still using a 2.5" HDD in my laptop.
  • Kristian Vättö - Thursday, February 12, 2015 - link

    The goal of a company is to generate profit for its shareholders, not to give away free stuff to random consumers. HDD companies aren't any different; the flood case is a good example of their greed.
  • Uplink10 - Thursday, February 12, 2015 - link

    Yes, but here we have a choice: if you aren't an intensive (power) user, I suggest you stick with an HDD for a while as SSDs come down in price. But if you are a power user and need an SSD for virtualization (I try to hold back from buying one), I guess you should buy the cheapest one there is (MX100), because if you really think about the costlier SSDs, the Samsung 850 Pro is an enterprise disc and the 850 Evo is a consumer disc. Will you really have an SSD in a PC for 10 years? Probably not.
  • eanazag - Monday, February 9, 2015 - link

    I won't get excited again until NVMe/SATA Express starts to really take flight. I looked through the article just so I know the pitfalls of the drive and the controller. Drives with 450-550MB/s speeds are all over the place. Differentiation has to come from somewhere else.
  • Calista - Tuesday, February 10, 2015 - link

    Doesn't it have more to do with the quick improvement of SSDs, which has removed one of the biggest bottlenecks, leaving the rest of the components to play catch-up? To explain: for literally 99.9 percent of computer users, any half-decent SSD is more than speedy enough. Not so with any other component. A basic CPU will create situations with slowdowns from time to time, a basic GPU will prevent most modern games from running even if we drop the resolution and quality a lot, and a basic WiFi chip will take a lot of time for basic operations, like say copying a gig of data.

    But for most people, most of the time, the slowdown caused by a slow SSD will be in the seconds range, i.e. Word may take 3 instead of 1 second to start, a reboot will take 20 instead of 10 seconds. Yeah, it all adds up. But it's still just a few extra seconds during a normal day. On an old Latitude E4200, the SSD is really slow by today's standards, with r/w performance in the 80-100 MB/s range. But while it feels slower than, say, my Samsung 840, it's really not that big of a difference, despite the latter being about five times as fast.
  • antialienado - Tuesday, February 10, 2015 - link

    Somebody needs to test, so we know for good what is the best choice.

    But you are right. This is getting boring.

    It would be more interesting if AnandTech were testing the drives in the various RAID configurations available, including the cheapest ones.

    I have an X58 system. It only supports SATA II.
    I would like to know the difference in performance between a BIOS RAID 0, a SATA 3 expansion card, and a PCI-E SSD, and have it in an XY chart of performance vs. cost.

    That would be far more interesting and useful.
  • Kristian Vättö - Tuesday, February 10, 2015 - link

    Currently we don't have the manpower to do that. I'm handling all SSDs on my own and I already have more drives to review than I can possibly do. The topics you mentioned are all interesting, but require a lot of work because it's not enough to test just one motherboard/chipset and SATA 6Gbps expansion card. Once you start including a handful of each the workload increases exponentially and the testing alone would take a couple of weeks, assuming there are no issues.

    I agree that the current state of the SSD market isn't all that interesting, but trust me, it's going to get a lot more interesting in H2 when PCIe and NVMe make a big entry into the market.
  • HisDivineOrder - Tuesday, February 10, 2015 - link

    SSDs are boring because the best and worst SSDs are miles ahead of HDDs in terms of the experience provided.

    The most important factor with an SSD is PRICE AND/OR SIZE (but definitely not speed), followed by warranty.

    That's why companies continue to slowly get dragged down to where pricing should have been FIVE years ago.
  • Uplink10 - Wednesday, February 11, 2015 - link

    I hate it when new technologies are overpriced. The same thing is happening with BD-R blank discs; they are costlier in terms of capacity/price than HDDs.
