Test Procedures

Our usual SSD test procedure was not designed to handle multi-device tiered storage, so some changes had to be made for this review; as a result, much of the data presented here is not directly comparable to our previous reviews. The major changes are:

  • All test configurations were running the latest OS patches and CPU microcode updates for the Spectre and Meltdown vulnerabilities. Regular SSD reviews with post-patch test results will begin later this month.
  • Our synthetic benchmarks are usually run under Linux, but Intel's caching software is Windows-only so the usual fio scripts were adapted to run on Windows. The settings for data transfer sizes and test duration are unchanged, but the difference in storage APIs between operating systems means that the results shown here are lower across the board, especially for the low queue depth random I/O that is the greatest strength of Optane SSDs.
  • We only have equipment to measure the power consumption of one drive at a time. Rather than move that equipment out of the primary SSD testbed and use it to measure either the cache drive or the hard drive, we kept it busy testing drives for future reviews. The SYSmark 2014 SE test results include the usual whole-system energy usage measurements.
  • Optane SSDs and hard drives are not any slower when full than when empty, because they do not have the complicated wear leveling and block erase mechanisms that flash-based SSDs require, nor any equivalent to SLC write caches. The AnandTech Storage Bench (ATSB) trace-based tests in this review omit the usual full-drive test runs. Instead, caching configurations were tested by running each test three times in a row to check for effects of warming up the cache.
  • Our AnandTech Storage Bench "The Destroyer" test takes about 12 hours to run on a good SATA SSD and about 7 hours on the best PCIe SSDs. On a mechanical hard drive, it takes more like 24 hours. Results for The Destroyer will probably not be ready this week. In the meantime, the ATSB Heavy test is sufficiently large to illustrate how SSD caching performs for workloads that do not fit into the cache.

Benchmark Summary

This review analyzes the performance of Optane Memory caching both for boot drives and secondary drives. The Optane Memory modules are also tested as standalone SSDs. The benchmarks in this review fall into three categories:

Application benchmarks: SYSmark 2014 SE

SYSmark directly measures how long applications take to respond to simulated user input. The scores are normalized against a reference system, but otherwise scale inversely with the accumulated time between user input and the result appearing on screen: halving that accumulated response time doubles the score. SYSmark measures whole-system performance and energy usage with a broad variety of non-gaming applications. The tests are not particularly storage-intensive, and differences in CPU and RAM can have a much greater impact on scores than storage upgrades.
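As a rough illustration of that normalization, the sketch below assumes the calibration system is defined to score 1000 and that a lower accumulated response time yields a proportionally higher score. BAPCo does not publish its exact formula, so treat the names and numbers here as assumptions, not SYSmark internals.

```python
# Hypothetical sketch of SYSmark-style score normalization.
# Assumption: the calibration (reference) system scores exactly 1000,
# and scores scale inversely with accumulated response time.

REFERENCE_SCORE = 1000


def normalized_score(measured_time_s, reference_time_s):
    """Lower accumulated response time -> higher score."""
    return REFERENCE_SCORE * reference_time_s / measured_time_s


# A system that accumulates half the reference system's response time
# scores twice the reference score.
print(normalized_score(50.0, 100.0))   # 2000.0
print(normalized_score(100.0, 100.0))  # 1000.0
```

This inverse relationship is why a storage upgrade that shaves only a small fraction off total response time moves the score far less than a faster CPU would.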

AnandTech Storage Bench: The Destroyer, Heavy, Light

These three tests are recorded traces of real-world I/O that are replayed onto the storage device under test. This allows for the same storage workload to be reproduced consistently and almost completely independent of changes in CPU, RAM or GPU, because none of the computational workload of the original applications is reproduced. The ATSB Light test is similar in scope to SYSmark while the ATSB Heavy and The Destroyer tests represent much more computer usage with a broader range of applications. As a concession to practicality, these traces are replayed with long disk idle times cut short, so that the Destroyer doesn't take a full week to run.
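The idle-time truncation mentioned above can be sketched in a few lines. This is a hypothetical illustration of the technique only — the actual ATSB replay tooling is not public, and the function name and cap value here are assumptions:

```python
# Hypothetical sketch of idle-gap truncation for I/O trace replay.
# Not the actual ATSB tooling; names and the cap are illustrative.

def truncate_idle(timestamps_us, cap_us):
    """Re-schedule I/O issue times (in microseconds), clamping the idle
    gap between consecutive I/Os to at most cap_us. First I/O issues at 0."""
    schedule = []
    now = 0
    prev = None
    for ts in timestamps_us:
        if prev is not None:
            now += min(ts - prev, cap_us)  # long idle periods are cut short
        schedule.append(now)
        prev = ts
    return schedule


# A 5-second idle gap collapses to the 1 ms cap:
print(truncate_idle([0, 100, 5_100_000, 5_100_100], 1_000))
# [0, 100, 1100, 1200]
```

Replaying against the clamped schedule preserves the ordering and burstiness of the original workload while cutting multi-hour idle stretches out of the total run time.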

Synthetic Benchmarks: Flexible IO Tester (FIO)

FIO is used to produce and measure artificial storage workloads according to our custom scripts. Poor choice of data sizes, access patterns and test duration can produce results that are either unrealistically flattering to SSDs or are unfairly difficult. Our FIO-based tests are designed specifically for modern consumer SSDs, with an emphasis on queue depths and transfer sizes that are most relevant to client computing workloads. Test durations and preconditioning workloads have been chosen to avoid unrealistically triggering thermal throttling on M.2 SSDs or overflowing SLC write caches.
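As a concrete illustration, a client-focused job in the spirit described above might look like the fragment below. The parameter values are assumptions for illustration, not the actual review scripts:

```ini
; Hypothetical fio job illustrating the kind of settings described above:
; QD1 4kB random reads, time-limited rather than data-limited.
; Under Windows, ioengine=windowsaio would be needed instead of libaio.
[qd1-randread]
rw=randread
bs=4k
iodepth=1
ioengine=libaio
direct=1
time_based
runtime=60
filename=/dev/nvme0n1
```

Low queue depths like this are where Optane's low latency shows up most clearly, which is why the choice of `iodepth` matters so much for client-oriented testing.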

97 Comments

  • Flunk - Tuesday, May 15, 2018 - link

    For $144 you can get a 256GB M.2 SSD, big enough to use as a boot drive. Even as a cache for a slow hard drive (which means you also need to buy a hard drive, possibly bumping the total cost up to 512GB SSD prices), this product doesn't make any sense at all. Maybe it made sense when they started development, but it doesn't now.
  • binary visions - Tuesday, May 15, 2018 - link

    I'm not sure I understand your comment.

    This product isn't designed for people whose data fits on a boot drive. It's designed to accelerate disk speeds for people who require large data drives.

    E.g. my photos do not even remotely fit on an affordable SSD. I have a 6TB drive I work off of, but I'm frequently working in sets of photos that are <100GB. I suspect an Optane drive would significantly improve my workflow (I don't have a compatible system, but it's something I'm looking into in the future).

    Copying photos back and forth between an SSD for working on them, and back to the spinning platters for storage, is an ugly process at best.
  • qlum - Tuesday, May 15, 2018 - link

    However, at this point a conventional SSD of a larger size could also be used for caching, and may require less swapping to the slower HDD.
  • lmcd - Thursday, May 17, 2018 - link

    No. The idea of caching means that the device used as the cache will almost always be close to capacity. Nearly-full MLC and TLC SSD devices perform very poorly compared to their empty numbers. MLC and TLC devices would have 1/2 and 1/3 the size they're listed at when used as caches, which makes the comparison much less favorable.
  • frenchy_2001 - Friday, May 18, 2018 - link

    I have used SSD caching for HDDs for longer than Intel has offered it.
    I bought an OCZ Synapse and used it for years. It was a 64GB SSD with 32GB usable
    https://www.newegg.com/Product/Product.aspx?Item=N...
    (overprovisioning allowed better performance while full), supplied with custom caching software.
    The software did not work great, but I transitioned to Intel Smart Response SSD caching when I upgraded from an AMD system to a Z68 (and beyond), and this has helped a lot.
    It is fully transparent and I hardly realize it's there, but the few times I had to remove it (I changed to a bigger SSD as cache, maxed out at 80GB, or changed the HDD and had to redo the cache setup), I was surprised by how slow the HDD alone was.
    Boot time is less than a minute, game load times are short enough... Basically, even with caching alone, it gave me most of the benefits of an SSD for everyday tasks.

    I fully expect this product to behave similarly, with benefits increasing with cache size.

    This is not really for people building a new computer, this is for people that want to speed up a current one with a big HDD.
  • Lolimaster - Tuesday, May 15, 2018 - link

    Maybe optimize your workflow; you would be better off buying a 500GB SSD and MOVING your frequent data to that drive. It's the same thing, for the same price, with 10x more storage.
  • GTVic - Tuesday, May 15, 2018 - link

    He just said that transferring photos to an SSD is not feasible.
  • joenathan - Tuesday, May 15, 2018 - link

    His plan still doesn't make sense. What, he's just going to have to hope the Intel software magically knows which of the 6TB of photos he's going to use today? If it can't cache everything then it's just a gamble. It would be better for him to get a larger SSD and modify his workflow so that transferring the photos would be feasible.
  • nevcairiel - Wednesday, May 16, 2018 - link

    Initial access will still be slower as the cache is being populated, that is true - but you would have the same initial cost if you manually move files to your "work drive", nevermind all the hassle that comes with that.
  • Arnulf - Wednesday, May 16, 2018 - link

    Can't stop laughing at those read/write speeds ... downright pathetic compared to low-end NVMe drives ... and to think Optane was touted to perform in a class of its own between flash and DRAM.

    As for your photo predicament - where is the bottleneck of your 100 GB photo editing process? I doubt it's random access. If it is sequential access (throughput) for batch processing all those photos then you will be limited by the HDD in either case (with or without Optane). Besides those horrible sequential transfer rates ... just can't stop laughing :-D

    Just get a large enough NVMe SSD.
