Lenovo's announcement today of a new generation of ThinkPads based on Intel's Kaby Lake platform includes brief but tantalizing mention of Optane, Intel's brand for devices using the 3D XPoint non-volatile memory technology they co-developed with Micron. Lenovo's new ThinkPads and competing high-end Kaby Lake systems will likely be the first appearance of 3D XPoint memory in the consumer PC market.

Several of Lenovo's newly announced ThinkPads will offer 16GB Optane SSDs in the M.2 2242 form factor, paired with hard drives, as an alternative to using a single NVMe SSD with NAND flash memory (usually TLC NAND, with a portion used as SLC cache). The new Intel Optane devices mentioned by Lenovo are most likely the NVMe PCIe 3 x2 drives codenamed Stony Beach that were featured in a roadmap leaked back in July. More recent leaks have indicated that these will be branded as the Intel Optane Memory 8000p series, with a 32GB capacity in addition to the 16GB Lenovo will be using. Since Intel's 3D XPoint memory is being manufactured as a two-layer 128Gb (16GB) die, these Optane products will require just one or two dies and will have no trouble fitting onto a short M.2 2242 card alongside a controller chip.
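To put the die math in concrete terms, a quick back-of-the-envelope calculation (based on the 128Gb die size reported above) shows why these capacities fit so easily on a short M.2 card:

```python
# Back-of-the-envelope die count for the rumored 16GB and 32GB Optane drives.
# Assumes the two-layer 128Gb (gigabit) 3D XPoint die described above.
DIE_GBIT = 128
die_gbyte = DIE_GBIT // 8          # 128 Gb == 16 GB per die

for capacity_gb in (16, 32):
    dies = capacity_gb // die_gbyte
    print(f"{capacity_gb} GB drive -> {dies} die(s)")
# 16 GB drive -> 1 die(s)
# 32 GB drive -> 2 die(s)
```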

The new generation of ThinkPads will be hitting the market in January and February 2017, but Lenovo and Intel haven't indicated when the configurations with Optane will be available. Other sources in the industry are telling us that Optane is still suffering from delays, so while we hope to see a working demo at CES, the Optane-equipped notebooks may not actually launch until much later in the year. We also expect the bulk of the initial supply of 3D XPoint memory to go to the enterprise market, just like virtually all of Intel and Micron's 3D MLC NAND output has been used for enterprise SSDs so far.

Support for Intel Optane branded devices based on 3D XPoint memory technology has long been bandied about as a new feature of the Kaby Lake generation of CPUs and chipsets, but Intel has not officially clarified what that means. The plan of record has always been for the first Optane products to be NVMe SSDs, but NVMe is already thoroughly supported by current platforms and software. Because Optane SSDs will have a significantly higher price per GB than NAND flash based SSDs, the natural role for Optane SSDs is to act as a small cache device for larger and slower storage devices. The "Optane support" that Kaby Lake brings is almost certainly just the addition of the ability to use NVMe SSDs (including Optane SSDs) as cache devices.

At a high level, using Optane SSDs as a cache for hard drives is no different from the SSD caching Intel first introduced in 2011 with the Z68 chipset for Sandy Bridge processors and version 10.5 of the Intel Rapid Storage Technology (RST) driver. Branded by Intel as Smart Response Technology (SRT), this SSD caching implementation built on the existing RAID capabilities of RST to use an SSD as a block-level cache for a hard drive, operating as a write-back or write-through cache depending on the user's preference. For SATA devices, no new hardware features were required, but booting from RST RAID or cache volumes requires support in the motherboard firmware, and Intel's drivers have used RAID and SRT SSD caching to provide product segmentation between different chipsets.
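Conceptually, the block-level caching SRT performs can be sketched in a few lines of Python. This is a simplified model, not Intel's implementation: the dict-backed devices and the simple LRU eviction policy are stand-ins for illustration only.

```python
from collections import OrderedDict

class BlockCache:
    """Toy block-level cache in front of a slow backing store.
    write_back=True defers backing-store writes until eviction;
    write_back=False mirrors every write through immediately."""
    def __init__(self, backing, capacity_blocks, write_back=True):
        self.backing = backing            # e.g. a dict modeling the HDD
        self.capacity = capacity_blocks
        self.write_back = write_back
        self.cache = OrderedDict()        # LRU order: oldest first
        self.dirty = set()

    def _evict_if_full(self):
        while len(self.cache) > self.capacity:
            lba, data = self.cache.popitem(last=False)
            if lba in self.dirty:         # flush dirty block on eviction
                self.backing[lba] = data
                self.dirty.discard(lba)

    def read(self, lba):
        if lba in self.cache:             # cache hit: fast path
            self.cache.move_to_end(lba)
            return self.cache[lba]
        data = self.backing[lba]          # miss: fill from the slow device
        self.cache[lba] = data
        self._evict_if_full()
        return data

    def write(self, lba, data):
        self.cache[lba] = data
        self.cache.move_to_end(lba)
        if self.write_back:
            self.dirty.add(lba)           # flush to backing store later
        else:
            self.backing[lba] = data      # write-through: flush now
        self._evict_if_full()
```

In write-back mode the hard drive can lag behind the cache, which is why SRT's write-back ("maximized") mode carries a risk of data loss if the cache device fails before dirty blocks are flushed.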

With the release of Skylake processors and the 100-series chipsets, Intel brought support for NVMe RAID to RST version 15. This was not as straightforward to implement as RAID and SRT for SATA drives, because the SATA drives in an RST RAID or SRT volume are all conveniently connected through Intel's own SATA controller and managed by the same driver. NVMe SSDs, by contrast, each connect to the system through general-purpose PCIe lanes and can use either the operating system's NVMe driver or a driver provided by the SSD manufacturer. In order to bring NVMe devices under the purview of Intel's RST driver, 100-series chipsets have an unusual trick: when the SATA controller is put in RAID mode instead of plain AHCI mode, NVMe devices connected to the PCH have their PCI registers re-mapped to appear within the AHCI controller's register space, and the NVMe devices are no longer detectable as PCIe devices in their own right. This makes the NVMe SSDs inaccessible to any driver other than Intel's RST.

Intel has provided very little public documentation of this feature, and its operation is usually very poorly described by the UEFI configuration interfaces on supporting machines. This has caused quite a few tech support headaches for machines that enable this feature by default, as it is seldom obvious how to put the machine back into a mode where standard NVMe drivers can be used. Worse, some machines such as the Lenovo Yoga 900 and IdeaPad 710 shipped with the chipset locked in RAID mode despite only having a single SSD. After public outcry from would-be Linux users, Lenovo released firmware updates that added the option of using the standard AHCI mode that leaves NVMe devices alone.

(excerpt from Intel 100-series chipset datasheet)

In spite of the limitations and rough edges, Intel's solution does ensure reliable operation in RAID mode, free of interference from third-party drivers. It's certainly less work than the alternative of writing a more general-purpose software RAID and caching system for Windows that can handle a variety of underlying drivers. It also lays the groundwork for adding support for NVMe cache devices to Intel's SRT caching system. SRT already has caching algorithms tuned for 16GB to 64GB caches in front of hard drives, so now that Intel has a mechanism for mediating access to NVMe SSDs, enabling both features simultaneously is relatively simple. The changes do need to be made in both the RST driver and the motherboard firmware if booting from a cached volume is to be supported. Backporting and deploying the firmware changes to Skylake motherboards should be possible but is unlikely to happen.

In the years since Intel introduced SRT caching, another form of tiered storage has taken over: TLC NAND SSDs with SLC caching. NAND flash suffers from write times that are much longer than read times, and storing multiple bits per cell requires multiple passes of writes. To alleviate this, most TLC NAND-based SSDs for client PC usage treat a portion of their flash as SLC, storing just one bit per cell instead of three. This SLC is used as a cache to absorb bursts of writes, which are consolidated into TLC NAND when the drive is idle (or when the SLC cache fills up). Even TLC NAND has reasonably high read performance, so there is little need to use SLC to cache read operations. By contrast, Intel's Smart Response Technology has to cache access to hard drives, where both read and write latencies are painfully high. This means SRT has to balance keeping frequently-read data in the cache against making room for a burst of writes. Having a lot of static data hanging around on the cache device will cause significant write amplification to result from any wear leveling, but SRT already reduces the write load by having sequential writes bypass the cache. Taking into account that 3D XPoint memory can handle millions of write operations per cell, even a small 16GB cache device should have no trouble with endurance.
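As a rough sanity check on that endurance claim, assume a conservative 1 million write cycles per cell (the low end of "millions") and a heavy 50GB of cache writes per day. Both figures are illustrative assumptions, not Intel specifications:

```python
# Rough endurance estimate for a 16 GB 3D XPoint cache device.
# Cycle count and daily write volume are illustrative assumptions only.
capacity_gb = 16
cycles_per_cell = 1_000_000          # conservative end of "millions"
writes_per_day_gb = 50               # heavy client write workload

total_write_gb = capacity_gb * cycles_per_cell   # ideal total writes
days = total_write_gb / writes_per_day_gb
print(f"{total_write_gb / 1e6:.0f} PB of writes ~ {days / 365:.0f} years")
# 16 PB of writes ~ 877 years
```

Even allowing for substantial write amplification, the drive would wear out for other reasons long before the cells do.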


54 Comments


  • vladx - Wednesday, December 28, 2016 - link

    True, but like Kakti already mentioned above, it will have little to no impact in the consumer space. So good for enterprise, meh for the rest.
  • beginner99 - Wednesday, December 28, 2016 - link

    I disagree. The advantage of even early SSDs over HDDs was to reduce stutters / waiting time on the disk to basically 0. Consumer space is mostly QD1 and hence latency matters a lot. It will make lag/stutters disappear completely.
  • Nagorak - Wednesday, December 28, 2016 - link

    The thing is SSDs are already so fast that most users will simply not notice a difference. I don't have any stutters or slow access issues currently with my SSDs and they're only budget Samsung models. It's a case of diminishing returns.

    It's kind of like the difference between VHS, DVD, and Blu-ray. There was a huge difference in quality and portability in moving to DVD, so it caught on strongly. The move to Blu-ray was much more subtle, and a lot of consumers' response was "meh".
  • Shadowmaster625 - Wednesday, December 28, 2016 - link

    Tis true. I have a dozen systems with old Intel X25-M drives. And I have a few systems that have newer SSDs like the Samsung 850 Pro, and I even have a newer Intel SSD with a skull on it. There is no noticeable difference between any of these. As long as they have an SSD they are good.
  • bcronce - Thursday, December 29, 2016 - link

    SSDs are literally magnitudes faster than mech drives, but even my wife's Samsung 250 Pro M.2 can only manage about 100-200MiB/s for any single-threaded operations because of QD1 speed. It is nice that I can have many things going on at the same time and still get 100-200MiB/s, but it would be nice to be bandwidth or CPU limited and not IO latency limited.
  • TheinsanegamerN - Wednesday, December 28, 2016 - link

    For general consumers, SSDs already eliminated lag/stutter. Non-professionals just don't have a use for a drive this fast.
  • lobz - Wednesday, December 28, 2016 - link

    I beg you to try to find any stutters starting and running consumer applications on my PC. It's sporting a 950 PRO and a 4790K, so nothing special or extraordinary. Still, I challenge you to find _any_ noticeable stutters and/or waiting time.
  • StrangerGuy - Wednesday, December 28, 2016 - link

    Stuff you mentioned is already indistinguishable on a SATA3 SSD vs a RAMdisk. No flash is faster than DDR3/4.

    Another symptom of why consumer PC tech is so bloody boring these days: besides GPUs, everything else new is hitting massive diminishing returns in real-world usage.
  • doggface - Wednesday, December 28, 2016 - link

    I am willing to admit I may be wrong, but I think anytime you improve random performance there will be a noticeable increase in responsiveness.

    However, having used RST before, I must admit to some scepticism here. I would rather have a 500GB SATA SSD than a 16GB Optane + 500GB of spinning rust.

    Mainly, the thought of an HDD in a laptop makes me cringe.
  • Anato - Saturday, December 31, 2016 - link

    Agree. RST's 64GB limit is arbitrary, old, and stupid. 16GB is too low for a meaningful write-back cache; I would prefer 16GB 3D XPoint + 256GB SSD under one controller on an M.2 2280 card, plus a 2TB HDD. Or just a 256GB SSD as cache for a 2TB HDD.

    Second thing is, I'd like to see TLC NAND made to perform as SLC or MLC depending on how much data is actually stored on the drive. This could change on the fly from SLC-only -> partially MLC -> partially TLC. I don't mean implementing this would be easy, but there are much tougher problems in SSDs than this.
