The Plextor M3 (256GB) Review
by Kristian Vättö on April 5, 2012 3:05 AM EST

Introduction
Plextor as a brand has been around for quite a while, but most of our long-time readers are likely more familiar with the name as a purveyor of optical drives (especially 8-15 years back when optical drive performance actually mattered). For our younger audience, the name may be a relative unknown. However, Plextor is not a newcomer in the SSD market or component world in general.
Plextor’s history dates back almost a century: it is a subsidiary of Shinano Kenshi Corporation, which was founded in 1918. The Plextor brand itself was founded in 1990 and mainly manufactured optical disc drives through the 90s. (For a fun blast from the past, you can still find our old Plextor drive reviews.) Plextor’s product lineup has always been, and still is, heavily oriented toward optical drives, but in March 2010 Plextor revealed its first SSD lineup: the PX-64M1S and PX-128M1S.
About a year later, Plextor released its second generation of SSDs: the M2 Series. It was among the first consumer SATA 6Gb/s drives and was based on Marvell’s 88SS9174-BJP2 controller, the same controller used in Crucial’s RealSSD C300. Plextor is now on its third generation of SSDs, and we finally have the chance to take a look at the M3 Series.
Before we go into the actual drive, let’s talk briefly about gaining popularity and generating revenue in the SSD world. There are essentially two ways for an SSD manufacturer to generate revenue. The first is to make a deal with a PC OEM and supply them with SSDs. This is a relatively safe route because OEMs rarely offer more than one or two SSD choices, so if a customer wants an SSD pre-installed, there is a good chance that the drive will be yours. Toshiba’s SSD business model, for example, is based solely on OEM sales, and having scored a good deal with Apple (Toshiba used to be the only supplier of SSDs for Macs, and still ships most of the SSDs used in Macs), they sell millions of SSDs every year thanks to Apple’s success.
The downside of an OEM partnership is the difficulty of building one. If you look at the SSDs that OEMs offer, they are mostly made by Intel or Samsung. Reliability matters far more to PC OEMs than raw performance figures because a consumer buying a computer is buying the big picture, not a specific SSD. Nobody likes failures, and building a reliable machine should be one of an OEM’s main goals if it wants to avoid a stained brand image.
Furthermore, Intel and Samsung both own fabs and use their own proprietary controllers (the exception being Intel’s SATA 6Gb/s SSD 520 Series, though even there the firmware is custom). Owning a fab means you have total control over what you produce and sell, and you know what to expect in terms of yields. If there is a problem in production, you can direct the available NAND to your own SSD products and ship the leftovers to others. That guarantees a fairly stable supply of SSDs, while fabless SSD makers are at the mercy of NAND manufacturers and their supply can fluctuate a lot.
Using custom firmware, and especially an in-house controller, removes the overhead that a third-party controller and firmware introduce. With a drive built on a third-party controller and firmware, when an issue arises you first report it to the manufacturer of the drive, who then reports it to the maker of the controller and/or firmware, and then there's a delay while you wait for the problem to be fixed. SandForce in particular has not been known for quick firmware updates in the past, hence it’s a safer bet for PC OEMs to go with a manufacturer who also designs the firmware, as it’s easier to work out potential issues that crop up.
If you can’t establish a relationship with a PC OEM, then you are left with selling SSDs through retailers. This is what most SSD makers do, and some do it alongside OEM sales. The retail market differs greatly from the OEM market: your SSD is no longer part of the whole product, it is the whole product. That means your SSD has to sell itself. The best way is obviously to offer high performance at a reasonable price, as those are the figures buyers see when comparing products. Reliability is another big concern, but it's not something you can really use as a marketing tool because there aren't any extensive, unbiased studies.
The positive side is that a genuinely competitive SSD will sell on its own. In the OEM market, you may not get a lot of sales if the end product isn't competitive. Take for example the Razer Blade that we just reviewed. It uses Plextor's M2 SSD (see why I picked the Blade now? Note, however, that our review sample was an earlier unit that used a Lite-On SSD), but as we mentioned in our review, the Blade is too expensive for what you get. Plextor will of course get some SSD sales through Razer, but given the Blade's small niche, it's not a gold mine.
As for Plextor's brand awareness, I believe the reason for their relative obscurity of late has been a lack of media awareness and contacts. Their journey to becoming an SSD manufacturer has been rather unusual. Most other SSD manufacturers were known for RAM before entering the SSD world. Being in the RAM market acts as a shortcut because you likely already have relationships with media interested in your products, plus there is a good chance that people are already familiar with your brand. For optical drive manufacturers, the situation is the opposite.
These days, optical drives aren’t tested and benchmarked as much as other components; they aren't parts people pay a lot of attention to when building a computer. When most people don’t really care what you are making, it’s tough to create media contacts and build a brand image. Coming up with a new product line won’t solve the problem overnight, but give it some time and it may. This is essentially what has happened to Plextor: it has taken a few generations of SSDs for consumers and media to start recognizing the new player in the game, and now it’s time for us to take a look at what they have been keeping up their sleeves.
Comments
jwilliams4200 - Thursday, April 5, 2012 - link
I know it is Anand's fault and you are just parroting his erroneous statements, but you guys really need to do better with your steady-state testing. SandForce is actually among the worst at steady-state performance, and the Plextor M3(P) is the best of the consumer SSDs at steady-state performance.

anandtech.com should use some version of the SNIA steady-state testing protocol.
Using HDTach is just crazy, since it writes a stream of zeros that is easily compressed by SandForce SSDs, and thus does not give a good indication of steady-state performance (which SNIA specifies should be tested with random data streams; a quick demonstration of the difference follows this comment). Besides, a workload of sequential writes spaced across the entire SSD is not realistic at all.
Here are a couple reviews that do a decent job of steady-state testing (could be better, but at least they are far superior to anandtech.com's terrible testing protocols):
scroll down to "Enterprise Synthetic Benchmarks" and look at the "... steady average speed" graphs for steady-state performance:
http://www.storagereview.com/plextor_pxm3p_ssd_rev...
http://www.xbitlabs.com/articles/storage/display/m...
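To make the compressibility point above concrete, here is a minimal sketch (Python, standard library only; the 1 MiB buffer size is arbitrary) comparing how a zero-filled stream, like the one HDTach writes, and a random stream, like the one SNIA mandates, respond to compression:

```python
import os
import zlib

SIZE = 1 << 20  # 1 MiB test buffer; the size is arbitrary

patterns = {
    "zeros": bytes(SIZE),        # what a zero-fill tool like HDTach streams
    "random": os.urandom(SIZE),  # the random pattern SNIA-style tests mandate
}

for name, buf in patterns.items():
    ratio = len(zlib.compress(buf)) / len(buf)
    print(f"{name:>6}: compresses to {ratio:.2%} of original size")

# Typical output: the zero stream compresses to a fraction of a percent,
# while the random stream stays at ~100%. A compressing controller can
# therefore discard almost all of a zero-fill pass instead of writing it
# to NAND, which is why such a pass says little about steady-state speed.
```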
bji - Thursday, April 5, 2012 - link
Jarred and Kristian, I know you guys are reading these comments ... I think you would do very well to respond to this comment. You guys are doing great articles, but this looks like something you should definitely consider if you want to be more accurate on steady-state performance.

I personally very much care about this issue, as the last thing I want is for my drive to fall into JMicron-style performance holes. One of the factors in my decision to get the Intel 520s a few weeks ago was that your tests showed performance stays good even under torture situations. If your tests are not accurate, then I think you really need to address this.
Beenthere - Thursday, April 5, 2012 - link
I use a variety of sources for SSD reviews. StorageReview uses some different metrics that may be of interest to those trying to make sense of SSD performance, as the benches often do NOT mirror real-world performance.

To me, the Plextor M3 just isn't where it needs to be. The M3 Pro should be the entry-level Plextor SSD, IMO. Its performance is a little better, but it's currently over-priced; it should be priced as the M3 is now.
http://www.storagereview.com/reviews
Anand Lal Shimpi - Thursday, April 5, 2012 - link
Note that we don't use the HDTach approach for SandForce TRIM testing and instead fill the drive with incompressible data, throw incompressible random writes at the drive, and then use AS-SSD to measure incompressible write speed afterwards.

Note that fully random data patterns are absolutely not indicative of client workloads at all. What you are saying is quite correct for certain enterprise applications, but not true in the consumer client space (this is also why we have a different enterprise SSD testing suite). IOs in the consumer space end up being a combination of pseudo-random and sequential, but definitely not fully random and definitely not fully random over 100% of the LBA space.
SandForce actually behaves very well over the long run for client workloads as we've mentioned in the past. We have seen write amplification consistently below 1x for client workloads, which is why the SF drives do so very well in client systems where TRIM isn't present.
Our current recommendation for an environment like OS X however continues to be Samsung's SSD 830. Its firmware tends to be a lot better behaved under OS X (for obvious reasons given Samsung's close relationship with Apple), regardless of write amplification and steady state random write behavior.
Take care,
Anand
jwilliams4200 - Thursday, April 5, 2012 - link
"Note that we don't use the HDTach approach for SandForce TRIM testing and instead fill the drive with incompressible data, throw incompressible random writes at the drive, and then use AS-SSD to measure incompressible write speed afterwards."What?
Are you really saying that you test Sandforce SSDs differently from non-Sandforce SSDs, and then you compare the results?
Surely the first rule any decent tester learns is that all devices must be tested in the same way if you are to have a prayer of comparing results.
Anand Lal Shimpi - Thursday, April 5, 2012 - link
We don't directly compare the TRIM/torture-test results; they are simply used as a tool to help us characterize the drive and understand the controller's garbage collection philosophies. HDTach (or an equivalent) is typically used for that on non-SF drives because you can actually visualize high-latency GC routines (dramatic peaks/valleys).

The rest of the numbers are directly comparable.
Take care,
Anand
jwilliams4200 - Thursday, April 5, 2012 - link
So your reviews should not make comments comparing the steady-state performance of SandForce drives to non-SandForce drives, since you have no objective basis of comparison.

SNIA guidelines for SSD testing clearly state that the "tests shall be run with a random data pattern" (a sketch of the full steady-state criterion follows this comment). Other review sites that do steady-state testing comply with this protocol.
anandtech.com is urgently in need of improving its steady-state test protocols and complying with industry-standard testing guidelines, since currently anandtech.com is making misleading statements about the relative performance of SSDs in steady-state tests.
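For context on what the SNIA protocol referenced above actually requires: the Performance Test Specification has the tester run the workload in rounds (with a random data pattern) until results converge. Roughly, steady state is declared when, over a five-round measurement window, the data excursion stays within 20% of the window average and the rise of a least-squares fit line stays within 10% of it. A minimal sketch of that convergence check, with a hypothetical simulated benchmark round standing in for a real one:

```python
import random

def run_round():
    # Hypothetical stand-in for a real round: e.g., one minute of 4KB
    # random writes with a random data pattern, reporting average IOPS.
    return 5000 + random.uniform(-100, 100)

def is_steady_state(window):
    """Rough SNIA PTS-style check over a 5-round measurement window:
    (a) data excursion (max - min) within 20% of the window average, and
    (b) the least-squares fit line's rise across the window within 10%.
    """
    avg = sum(window) / 5.0
    if max(window) - min(window) > 0.20 * avg:
        return False
    # Least-squares slope for x = 0..4 (mean 2, sum of squared deviations 10)
    slope = sum((x - 2.0) * (y - avg) for x, y in enumerate(window)) / 10.0
    return abs(slope * 4) <= 0.10 * avg

results = []
while len(results) < 5 or not is_steady_state(results[-5:]):
    results.append(run_round())
    if len(results) >= 25:  # the PTS caps the run at 25 rounds
        break

print(f"rounds run: {len(results)}, last window: {results[-5:]}")
```

The random-data requirement exists for exactly the reason raised above: with a compressible pattern, a compressing controller never fills its NAND the way a real workload would, so it never reaches a comparable steady state.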
Anand Lal Shimpi - Thursday, April 5, 2012 - link
As I mentioned before, we have done extensive long term analysis of SandForce drives and came away with a very good understanding of their behavior in client workloads - that's the feedback that's folded into reviews. For client workloads, SF drives have extremely good steady-state characteristics since a lot of data never gets written to NAND (I've mentioned this in previous articles, pointing to sub-1x write amplification factors after several months of regular use).

We use both incompressible and compressible data formats in our tests, and our own storage suites provide a mixture of both. No client system relies on 100% random data patterns or 100% random data access; it's simply not the case. We try our best to make our client tests representative of client workloads.
Our enterprise test suite does look different however, and included within it is a random write steady state test scenario. Even within the enterprise world it is not representative of all workloads, but there are some where it's an obvious fit.
Take care,
Anand
jwilliams4200 - Thursday, April 5, 2012 - link
"As I mentioned before, we have done extensive long term analysis of SandForce drives and came away with a very good understanding of their behavior in client workloads - that's the feedback that's folded into reviews."And as I have explained before, your tests are flawed. You do NOT have a good understanding, because you are unable to specify the actual data that was written to the SSDs during your testing. You are just guessing.
All other studies that have looked at the compressibility of data written to SandForce SSDs in typical consumer workloads have shown that most data is incompressible. The only common compressible data is OS and program installs, but most users only do that once. Probably your testers were installing lots of programs and OSes and running benchmarks that write easily compressible data, but that is not typical of most consumers. The bottom line is that you seem to have no idea what was actually written in your "analysis". So you really do not have a good understanding.
Day to day, most home users write Office documents (automatically compressed before saving), MP3 files, JPGs, compressed video files, and hibernation files (automatically compressed in Win7). All of these are incompressible to SandForce.
But none of that is really relevant to the question of how to test SSDs. The fact is that the only non-arbitrary way to do it is to use random, incompressible data patterns. There is a reason the industry standard SSD test protocols defined by SNIA specify mandatory random data patterns -- because that is the only completely objective test.
Anand Lal Shimpi - Thursday, April 5, 2012 - link
Again - we do use incompressible data patterns for looking at worst-case performance on SF drives.

Incompressible vs. compressible data makes no difference on these other controllers, so the precondition, high-QD torture, HDTach pass is fine for other drives.
As far as our internal experiment goes - we did more than just install/uninstall programs for 3-8 months. Each editor was given a SandForce drive, and many of them used the drives as their boot/application drive for the duration of the study. My own personal workstation featured a SF drive for nearly a year; average write amplification over the course of that year was under 0.7x (the arithmetic behind that figure is sketched at the end of this thread). My own workload involves a lot of email, video editing, photo editing, web browsing, HTML work, some software development, Excel, lots of dealing with archives, presentations, etc. I don't know that I even installed a single application during the test period, as I simply cloned my environment over.
We also measured fairly decent write amplification for our own server workloads with Intel's SSD 520.
Take care,
Anand
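As a footnote to the exchange above: write amplification is just a ratio, and the sub-1.0 figures Anand cites are what you get when a controller compresses host data before it reaches NAND. A minimal sketch with made-up numbers (real drives expose the two underlying counters through vendor-specific SMART attributes):

```python
def write_amplification(nand_bytes_written, host_bytes_written):
    """WA = bytes physically written to NAND / bytes the host wrote.

    > 1.0: garbage-collection and metadata overhead dominate (typical).
    < 1.0: the controller stored less than it was sent, which is only
           possible with compression/deduplication (SandForce-style).
    """
    return nand_bytes_written / host_bytes_written

# Made-up example: the host writes 10 TB over a year, but compression
# means only 6.8 TB ever hits the NAND:
print(write_amplification(6.8e12, 10e12))  # -> 0.68, i.e. "under 0.7x"
```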