The Best NVMe SSD for Laptops and Notebooks: SK hynix Gold P31 1TB SSD Reviewed
by Billy Tallis on August 27, 2020 8:00 AM EST

Whole-Drive Fill
This test starts with a freshly-erased drive and fills it with 128kB sequential writes at queue depth 32, recording the write speed for each 1GB segment. This test is not representative of any ordinary client/consumer usage pattern, but it does allow us to observe transitions in the drive's behavior as it fills up. This can allow us to estimate the size of any SLC write cache, and get a sense for how much performance remains on the rare occasions where real-world usage keeps writing data after filling the cache.
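For readers who want a rough picture of how such a test works, the sketch below issues the same 128kB sequential writes in Python and prints throughput for each 1GB segment. It is a queue-depth-1 simplification of the real test (which keeps 32 writes in flight, something a synchronous loop cannot do), and the device path is a placeholder; running it destroys all data on the target drive.

```python
import mmap
import os
import time

# Placeholder device path -- this sketch DESTROYS all data on the target.
DEV = "/dev/nvme0n1"
BLOCK = 128 * 1024        # 128kB sequential writes
SEGMENT = 1024 ** 3       # report throughput once per 1GB segment

# O_DIRECT bypasses the page cache so we measure the drive, not RAM.
# It requires block-aligned buffers; an anonymous mmap is page-aligned.
buf = mmap.mmap(-1, BLOCK)
buf.write(os.urandom(BLOCK))

fd = os.open(DEV, os.O_WRONLY | os.O_DIRECT)
written = 0
seg_start = time.perf_counter()
try:
    while True:
        os.write(fd, buf)
        written += BLOCK
        if written % SEGMENT == 0:
            now = time.perf_counter()
            print(f"{written // SEGMENT:5d} GB: "
                  f"{SEGMENT / (now - seg_start) / 1e9:5.2f} GB/s")
            seg_start = now
except OSError:
    pass                  # ENOSPC: we've hit the end of the device
finally:
    os.close(fd)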
The SLC write cache in the 1TB SK hynix Gold P31 runs out after just over 100GB of writes. After the SLC cache fills up, the Gold P31's sequential write performance becomes highly variable, ranging from about 1.4 to 2.3 GB/s with little change in character across the entire TLC filling phase. There are no obvious patterns of periodic garbage collection cycles visible at this scale.
[Charts: Average Throughput for last 16 GB; Overall Average Throughput]
Despite the variability, the P31's long-term sustained write performance is excellent. It averages out to the best overall write throughput we've measured from a 1TB TLC drive, and in all that variation the performance never drops down to a disappointing level.
Working Set Size
Most mainstream SSDs have enough DRAM to store the entire mapping table that translates logical block addresses into physical flash memory addresses. DRAMless drives only have small buffers to cache a portion of this mapping information. Some NVMe SSDs support the Host Memory Buffer feature and can borrow a piece of the host system's DRAM for this cache rather than needing lots of on-controller memory.
When accessing a logical block whose mapping is not cached, the drive needs to read the mapping from the full table stored on the flash memory before it can read the user data stored at that logical block. This adds extra latency to read operations and in the worst case may double random read latency.
We can see the effects of the size of any mapping buffer by performing random reads from different sized portions of the drive. When performing random reads from a small slice of the drive, we expect the mappings to all fit in the cache, and when performing random reads from the entire drive, we expect mostly cache misses.
When performing this test on mainstream drives with a full-sized DRAM cache, we expect performance to be generally constant regardless of the working set size, or for performance to drop only slightly as the working set size increases.
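A minimal sketch of this method, assuming a Linux system and a placeholder device path, might look like the following. Note that the drive must already be filled with data so that every LBA has a real mapping to look up; reads of never-written LBAs can return quickly without consulting the table.

```python
import mmap
import os
import random
import statistics
import time

DEV = "/dev/nvme0n1"       # placeholder device path; opened read-only
READ = 4096                # 4kB random reads
OPS = 20000                # latency samples per working set size

fd = os.open(DEV, os.O_RDONLY | os.O_DIRECT)
dev_size = os.lseek(fd, 0, os.SEEK_END)
buf = mmap.mmap(-1, READ)  # page-aligned buffer, as O_DIRECT requires

ws = 1024 ** 3             # start with a 1 GB slice of the drive
while True:
    ws = min(ws, dev_size)
    lat = []
    for _ in range(OPS):
        off = random.randrange(ws // READ) * READ
        t0 = time.perf_counter()
        os.preadv(fd, [buf], off)
        lat.append(time.perf_counter() - t0)
    print(f"{ws / 1024**3:6.0f} GB working set: "
          f"{statistics.mean(lat) * 1e6:7.1f} us mean read latency")
    if ws == dev_size:
        break              # we've covered the whole drive
    ws *= 2
os.close(fd)
```

On a drive with a full mapping-table cache, the printed latencies should stay nearly flat as the working set grows; on a DRAMless drive, they should climb once the working set outgrows the cache.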
As expected for a drive with a full size DRAM buffer, the P31's random read latency is unaffected by spatial locality: reading across the whole drive is just as fast as reading from a narrow range. And the only other TLC drives that can match this read latency are the two Toshiba/Kioxia SSDs with 96L BiCS4 TLC NAND, but they can't maintain this performance across the entire test.
80 Comments
vladx - Thursday, August 27, 2020 - link
I have an SX8200 Pro in my laptop, do I need to enable the laptop Power Management state or is it detected automatically by the firmware?

Billy Tallis - Thursday, August 27, 2020 - link
That really depends on what combination of firmware and driver bugs the laptop vendor gave you. But in theory, if the machine originally came with an M.2 NVMe drive, it should have been configured for proper power management and should continue to work well with an aftermarket SSD that doesn't bring any new power management bugs. I think the SX8200 Pro is okay on that score; the slow wake-up times shouldn't prevent the system from trying to use the deep idle states because the drive still promises the OS that it will have reasonable wake-up times.

vladx - Thursday, August 27, 2020 - link
My laptop is an MSI Creator 17 that came with a Samsung PM981 drive. Could HWinfo offer any help in identifying the active power states?

Billy Tallis - Thursday, August 27, 2020 - link
I'm not sure. I think you can figure out what PCIe power management settings are being used by digging through the PCI configuration space, but I'm not sure how easy it is to get that info while running Windows. As for the NVMe power management settings, my understanding is that it's impossible or very nearly impossible to access that information under Windows, at least with the usual NVMe drivers. The only reliable way I know of to confirm that everything is working correctly to get your SSD idling below 10mW is to have expensive power measurement equipment.

vladx - Thursday, August 27, 2020 - link
Ok thanks, Billy. I was going to install Fedora anyways as a secondary OS so I guess I'll try the Linux route then.

MrCommunistGen - Thursday, August 27, 2020 - link
vladx, I'm really interested in how you go about trying to tease the NVMe power management info out of the drive. I did some internet searches a while back and didn't find anything definitive that I was able to follow and get results from. I've only ever used Debian-based distros, but if you're able to figure it out in Fedora then at least I'll know it is possible.
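For anyone following the Linux route discussed above, a rough sketch of the relevant queries, driven from Python via the nvme-cli tool (the controller path is an assumption, and the commands need root):

```python
import subprocess

DEV = "/dev/nvme0"  # assumed controller device; adjust to match your system

def run(*cmd):
    return subprocess.run(cmd, capture_output=True, text=True).stdout

# Power state descriptors the drive advertises (look for the "ps" entries;
# the non-operational ones are the deep idle states).
print(run("nvme", "id-ctrl", DEV))

# Feature 0x02 (Power Management): the power state currently in effect.
print(run("nvme", "get-feature", DEV, "-f", "0x02", "-H"))

# Feature 0x0c (Autonomous Power State Transition): whether and when the
# drive is allowed to drop into its idle states on its own.
print(run("nvme", "get-feature", DEV, "-f", "0x0c", "-H"))

# The PCIe side (ASPM) can be checked separately with: lspci -vv
# (see the LnkCap/LnkCtl lines for the device).
```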
Foeketijn - Thursday, August 27, 2020 - link

Did it happen? Did Samsung finally get an actual competitor? It doesn't really beat the 970 Evo by that much, so the 970 Pro would still be better, but not at this price point, and definitely not with this power usage. The last time Intel did that, Samsung suddenly woke up and beat them back down to a place where they've stayed since.
It will be interesting to see what the new Evo and Pro lines will bring. Not high-margin prices this time around, I guess.
LarsBolender - Thursday, August 27, 2020 - link
This has to be one of the most positive AnandTech articles I have read in years. Good job SK Hynix!

Luminar - Thursday, August 27, 2020 - link
No recommendation sticker, though.

Zan Lynx - Thursday, August 27, 2020 - link
It would be handy if you could add a power-loss consistency test. I have a Dell with an older hynix NVMe, and one time the battery ran down in the bag, and on reboot its btrfs was corrupt.

Imagine these are sequence numbers in metadata blocks:
Correct: 10 12 22 30
Actual: 10 12 11 30
The hynix had committed writes for SOME of the blocks, but a few in the middle of the update chain were old versions of the data. According to btrfs flush rules that is un-possible. Which means the drive reported a successful write for 22 and for 30, but after power-loss recovery it lost the write for 22 and reverted to an older block.
I mean, that's better than some of the older flash drives that would trash the entire FTL and lose all the data. But it is not exactly GOOD.
I'm pretty sure Samsung consumer drives will also lose the data but at least they will revert all of the writes following the lost data, so in my example it would revert write 30 also. That would at least leave things in a logically consistent state.
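A minimal sketch of the kind of test being suggested here, assuming a dedicated scratch device (the path is a placeholder and the test overwrites the start of it): a writer stamps a rotating set of slots with increasing sequence numbers, fsyncing after each write, and after power is cut a checker verifies that no flushed write was lost while a later one survived.

```python
import os
import struct

DEV = "/dev/nvme0n1"   # placeholder scratch device; the test overwrites it
SLOTS = 4              # rotating "metadata blocks", as in the example above
BLOCK = 4096

def writer():
    """Run until power is cut externally. Sequence s lands in slot (s-1) % SLOTS."""
    fd = os.open(DEV, os.O_WRONLY)
    seq = 0
    while True:
        seq += 1
        rec = struct.pack("<Q", seq).ljust(BLOCK, b"\0")
        os.pwrite(fd, rec, ((seq - 1) % SLOTS) * BLOCK)
        os.fsync(fd)   # the drive has now claimed this write is durable

def checker():
    """After reboot: every write flushed before the newest survivor must persist."""
    fd = os.open(DEV, os.O_RDONLY)
    seqs = [struct.unpack("<Q", os.pread(fd, 8, s * BLOCK))[0]
            for s in range(SLOTS)]
    m = max(seqs)
    for slot, seq in enumerate(seqs):
        # The largest sequence number <= m that the writer put in this slot.
        expect = m - ((m - 1 - slot) % SLOTS)
        if seq != expect:
            print(f"slot {slot}: found {seq}, expected {expect} -- "
                  f"a flushed write was lost even though a later one survived")
    os.close(fd)
```

A real harness would cut power with an external relay at random points during the write phase and repeat the cycle many times, since failures like the one described above only show up intermittently.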