Original Link: https://www.anandtech.com/show/12819/the-toshiba-rc100-ssd-review



Toshiba's RC100 has arrived as the company's first low-end retail NVMe SSD, and only their second retail NVMe SSD after the aging OCZ RD400. There's nothing else quite like the RC100 in the retail SSD market, but it is part of a broader trend of PCIe and NVMe interfaces being used for cheaper SSDs, and not just the high-end drives that all the first-generation NVMe products aspired to be. Prices on these entry-level NVMe SSDs are now encroaching on the SATA SSDs that still make up the bulk of the market.

BGA SSDs

The RC100 is descended from Toshiba's line of Ball Grid Array (BGA) SSDs for the OEM market. These drives stack the SSD controller and NAND flash memory dies in a single BGA package, making them suitable for small form factor systems that might otherwise use eMMC. Toshiba has also been mounting their BG series SSDs on M.2 2230 cards for OEMs that require upgradable storage devices. The Toshiba RC100 is based on the BG3 SSD, and the primary change in making a retail version is that the M.2 card has been lengthened to 42mm because relatively few existing systems support 30mm M.2 SSDs. This is still quite a bit shorter than the usual 80mm card length used by most consumer M.2 SSDs.

Toshiba's first NVMe BGA SSD was the BG1 introduced in 2015. It used 15nm planar MLC NAND and a 16x20mm package with a PCIe 2.0 x2 interface. The next generation BG2 was the first client drive to ship with Toshiba's 3D NAND flash memory, but it used their 48-layer design that was never competitive enough for a retail SSD. The BG3 was announced last year as part of Toshiba's transition to their 64-layer 3D NAND that is finally good enough to fully displace their planar NAND.

The small physical size of BGA SSDs limits both the width of their host interface (to two PCIe lanes instead of the four used by high-end NVMe SSDs) and the amount of memory they have. Toshiba's BG1 only offered 128GB and 256GB capacities, and the BG2 and BG3 only go up to 512GB. Toshiba's BG series and the RC100 also don't have a DRAM die in the stack, so these are DRAMless SSDs, and as we'll see, they can definitely behave like it. Meanwhile, thermal throttling is usually not a concern for BGA SSDs because they don't offer the same performance as high-end NVMe SSDs, and consequently only use 2-3W under load instead of the 5-8W used by larger high-end M.2 SSDs.

To mitigate the performance limitations that result from not having a DRAM cache, Toshiba's BG2 introduced support for the NVMe Host Memory Buffer (HMB) feature, and that has been carried over to the BG3 and RC100. HMB is an optional feature that was added in version 1.2 of the NVMe specification, released in 2014. Though the feature was standardized years ago, adoption has been slow because there hasn't been much of a market for low-end NVMe SSDs in either the retail or OEM channels, and Microsoft's NVMe driver didn't implement HMB support until the Windows 10 Anniversary Update in 2016.

Toshiba RC100 Series Specifications Comparison

                         120 GB                240 GB                480 GB
Form Factor              single-sided M.2 2242 B+M key
Controller               Toshiba unnamed
Interface                NVMe 1.2.1, PCIe 3.1 x2
DRAM                     None (HMB supported)
NAND                     Toshiba 64L BiCS3 3D TLC
Sequential Read          1350 MB/s             1600 MB/s             1600 MB/s
Sequential Write         700 MB/s              1050 MB/s             1100 MB/s
4KB Random Read (QD32)   80k IOPS              130k IOPS             150k IOPS
4KB Random Write (QD32)  95k IOPS              110k IOPS             110k IOPS
Active Power             3.2 W
Idle Power (PCIe L1.2)   5 mW
Endurance                60 TBW (0.45 DWPD)    120 TBW (0.45 DWPD)   240 TBW (0.45 DWPD)
Warranty                 3 years
MSRP                     $59.99 (50¢/GB)       $79.99 (33¢/GB)       $154.99 (32¢/GB)

The Toshiba RC100 is available in capacities from 120GB to 480GB, essentially the same as the BG3 but with more spare area reserved to allow for slightly higher performance than the BG3. Sequential transfer speeds are rated to be several times faster than a SATA drive, while random access performance is only a bit higher than SATA drives—the flash itself is more of a bottleneck for random IO than the host interface, especially on a DRAMless SSD. The RC100 comes with a three year warranty, and its write endurance rating works out to about 0.45 drive writes per day (DWPD) over that span. That daily rating is actually higher than the roughly 0.3 DWPD typical of mainstream and high-end consumer drives, but those drives usually back their ratings with five year warranties instead of three.
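
The DWPD figures follow directly from the TBW ratings, the capacities, and the three year warranty period; a quick back-of-the-envelope check in Python:

```python
# Convert a TBW endurance rating to drive writes per day (DWPD)
# over the warranty period, using the numbers from the spec table.
def tbw_to_dwpd(tbw, capacity_gb, warranty_years=3):
    return tbw * 1000 / (capacity_gb * warranty_years * 365)

for capacity_gb, tbw in ((120, 60), (240, 120), (480, 240)):
    print(f"{capacity_gb}GB: {tbw_to_dwpd(tbw, capacity_gb):.2f} DWPD")
# Each model works out to ~0.46 DWPD, matching the rated 0.45.
```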

The active power rating of 3.2W is much lower than most of the NVMe SSDs we've tested, and is more in line with SATA SSDs. Idle power is rated at 5 mW, but this is only on platforms with properly working PCIe power management, which doesn't include most desktops. The lack of DRAM and the narrower PCIe link both help keep power consumption low, but the performance impact of those limitations may prevent the overall efficiency from breaking out of the general pattern of NVMe SSDs being less efficient than SATA SSDs.

Toshiba's current retail SSDs: RD400, RC100, TR200

The RC100 uses the single-sided 22x42mm M.2 card form factor with notches in both the B and M positions because it only uses two PCIe lanes instead of four. This means it's mechanically compatible with M.2 slots that may only provide SATA signals. On the card itself, we find a little bit of power regulation circuitry to provide 1.2V and 1.8V from the 3.3V supply, the BGA SSD itself in a 16x20mm package, and enough empty space for the card to reach the first mounting hole on most motherboards.

The Toshiba RC100 essentially has no direct competition in the retail SSD market: M.2 2242 PCIe SSDs have been almost impossible to find until now, and even M.2 SATA SSDs in this form factor are rare. But systems that require these shorter M.2 cards instead of the more common 80mm length are also rare. The closest competitors to the RC100 are other recent low-end NVMe SSDs based on either the Phison E8 controller or Silicon Motion SM2263, or their respective DRAMless variants (E8T and SM2263XT) that also use the NVMe HMB feature. We've reviewed the MyDigitalSSD SBX with the Phison E8 controller and have several more reviews on the way for this product segment.

With this review, we are finally switching entirely to test results gathered on a system with Meltdown and Spectre patches, current as of May 2018. We have not yet re-tested every drive in our sample collection, so the comparison results in this review don't always show every relevant drive.

AnandTech 2018 Consumer SSD Testbed
CPU Intel Xeon E3 1240 v5
Motherboard ASRock Fatal1ty E3V5 Performance Gaming/OC
Chipset Intel C232
Memory 4x 8GB G.SKILL Ripjaws DDR4-2400 CL15
Graphics AMD Radeon HD 5450, 1920x1200@60Hz
Software Windows 10 x64, version 1709
Linux kernel version 4.14, fio version 3.6
Spectre/Meltdown microcode and OS patches current as of May 2018


Exploring the NVMe Host Memory Buffer Feature

Most modern SSDs include onboard DRAM, typically in a ratio of 1GB RAM per 1TB of NAND flash memory. This RAM is usually dedicated to tracking where each logical block address is physically stored on the NAND flash—information that changes with every write operation due to the wear leveling that flash memory requires. This information must also be consulted in order to complete any read operation. The standard DRAM to NAND ratio provides enough RAM for the SSD controller to use a simple and fast lookup table instead of more complicated data structures. This greatly reduces the work the SSD controller needs to do to handle IO operations, and is key to offering consistent performance.
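
That ratio is easy to derive: with 4kB pages and 4-byte physical addresses (a typical figure we're assuming for illustration, not something Toshiba has disclosed), a flat logical-to-physical table needs roughly a gigabyte per terabyte of flash:

```python
# Size of a flat logical-to-physical (L2P) mapping table,
# assuming 4kB pages and 4-byte entries (typical, not vendor-confirmed).
PAGE_SIZE = 4 * 1024   # bytes per flash page
ENTRY_SIZE = 4         # bytes per table entry

def l2p_table_bytes(capacity_bytes):
    return (capacity_bytes // PAGE_SIZE) * ENTRY_SIZE

one_tb = 10**12
print(l2p_table_bytes(one_tb) / 10**9)  # ~1.0 GB of DRAM per 1 TB of NAND
```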

SSDs that omit this DRAM can be cheaper and smaller, but because they can only store their mapping tables in the flash memory instead of much faster DRAM, there's a substantial performance penalty. In the worst case, read latency is doubled as potentially every read request from the host first requires a NAND flash read to look up the logical to physical address mapping, then a second read to actually fetch the requested data.

The NVMe version 1.2 specification introduced an in-between option for SSDs. The Host Memory Buffer (HMB) feature takes advantage of the DMA capabilities of PCI Express to allow SSDs to use some of the DRAM attached to the CPU, instead of requiring the SSD to bring its own DRAM. Accessing host memory over PCIe is slower than accessing onboard DRAM, but still much faster than reading from flash. The HMB is not intended to be a full-sized replacement for the onboard DRAM that mainstream SSDs use. Instead, all SSDs using the HMB feature so far have targeted buffer sizes in the tens of megabytes. This is sufficient for the drive to cache mapping information for tens of gigabytes of flash, which is adequate for many consumer workloads. (Our ATSB Light test only touches 26GB of the drive, and only 8GB of the drive is accessed more than once.)
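
By the same back-of-the-envelope math, each megabyte of host buffer can hold mappings for about a gigabyte of flash, again under our assumption of 4-byte entries over 4kB pages:

```python
# Flash coverage of a given host memory buffer, assuming the same
# hypothetical 4-byte entries over 4kB pages as above.
def mapped_flash(hmb_bytes, page_size=4096, entry_size=4):
    return (hmb_bytes // entry_size) * page_size

print(mapped_flash(38 * 2**20) / 2**30)  # a 38MB buffer maps ~38 GiB
```

A buffer of the size the RC100 requests (38MB, as noted below) would cover roughly 38GB of flash, which lines up neatly with where we later measure random read performance falling off as the working set grows.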

Caching is of course one of the most famously difficult problems in computer science and none of the SSD controller vendors are eager to share exactly how their HMB-enabled controllers and firmware use the host DRAM they are given, but it's safe to assume the caching strategies focus on retaining the most recently and heavily used mapping information. Areas of the drive that are accessed repeatedly will have read latencies similar to that of mainstream drives, while data that hasn't been touched in a while will be accessed with performance resembling that of traditional DRAMless SSDs.
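
As a concrete illustration of that behavior, here is our own toy model (emphatically not Toshiba's firmware): a recency-based cache over mapping entries, where a hit costs one flash read (just the data) and a miss costs two (mapping lookup, then data):

```python
# Toy model of an HMB-style mapping cache: LRU over L2P entries.
from collections import OrderedDict
import random

class L2PCache:
    def __init__(self, capacity_entries):
        self.entries = OrderedDict()
        self.capacity = capacity_entries

    def flash_reads_for(self, page):
        if page in self.entries:
            self.entries.move_to_end(page)    # refresh recency
            return 1                          # mapping cached: data read only
        self.entries[page] = True
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict least recently used
        return 2                              # mapping read + data read

cache = L2PCache(capacity_entries=10_000)     # scaled down for illustration
hot = [random.randrange(10_000) for _ in range(50_000)]   # small hot region
cold = [random.randrange(10**8) for _ in range(50_000)]   # full-span random
for name, pages in (("hot", hot), ("cold", cold)):
    amp = sum(cache.flash_reads_for(p) for p in pages) / len(pages)
    print(f"{name}: {amp:.2f} flash reads per host read")
# Hot-region reads approach 1x after warm-up; full-span reads approach 2x.
```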

SSD controllers do have some memory built into the controller itself, but usually not enough to allow a significant portion of the NAND mapping tables to be cached. For example, the Marvell 88SS1093 Eldora high-end NVMe SSD controller has numerous on-chip buffers with capacities in the kilobyte range and an aggregate capacity of less than 1MB. Some SSD vendors have hinted that their controllers have significantly more on-board memory—Western Digital says this is why their SN520 NVMe SSD doesn't use HMB, but they declined to say how much memory is on that controller. We've also seen some other drives in recent years that don't fall clearly into either the DRAMless category or the 1GB per TB ratio. The Toshiba OCZ VX500 uses a 256MB DRAM part for the 1TB model, but the smaller capacity drives rely solely on the memory built into the controller (and of course, Toshiba didn't disclose the details of that controller architecture).

The Toshiba RC100 requests a block of 38 MB of host DRAM from the operating system. The OS could provide more or less than the drive's preferred amount, and if the RC100 gets less than 10MB it will give up on trying to use HMB at all. Both the Linux and Windows NVMe drivers expose some settings for the HMB feature, allowing us to test the RC100 with HMB enabled and disabled. In theory, we could also test with varying amounts of host memory allocated to the SSD, but that would be a fairly time-consuming exercise and would not reflect any real-world use cases, because the driver settings in question are obscure and not worth changing from their defaults.
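
On Linux, for example, the HMB allocation is capped by an nvme driver module parameter. The sketch below assumes a kernel whose nvme-pci driver exposes max_host_mem_size_mb (setting it to 0 disables HMB system-wide); it simply reads the current cap:

```python
# Read the Linux nvme driver's host-memory-buffer cap, if present.
# Assumes the nvme-pci module parameter max_host_mem_size_mb exists
# on this kernel; a value of 0 disables HMB.
from pathlib import Path

param = Path("/sys/module/nvme/parameters/max_host_mem_size_mb")
if param.exists():
    print("HMB size cap:", param.read_text().strip(), "MiB")
else:
    print("nvme driver not loaded, or parameter not exposed on this kernel")
```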

Working Set Size

We can see the effects of the HMB cache quite clearly by measuring random read performance while increasing the test's working set—the amount of data that's actively being accessed. When all of the random reads are coming from the same 1GB range, the RC100 performs much better than when the random reads span the entire drive. There's a sharp drop in performance when the working set approaches 32GB. When the RC100 is tested with HMB off, performance is just as good for a 1GB working set (and actually substantially better on the 480GB model), but larger working sets are almost as slow as the full-span random reads. It looks like the RC100's controller may have about 1MB of built-in memory that is much faster than accessing host DRAM over the PCIe link.
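
Our working set testing is done with fio, but the core of the measurement is simple enough to sketch: confine 4kB random reads to spans of increasing size and watch the throughput. A rough QD1 version in Python (Linux-only; /dev/nvme0n1 is a placeholder device path, and reading a raw block device requires root):

```python
# Rough QD1 sketch of random read throughput vs. working set size.
# Illustrative only; our actual testing uses fio.
import mmap, os, random, time

def random_read_iops(dev_path, working_set_bytes, n_reads=20000):
    buf = mmap.mmap(-1, 4096)              # page-aligned buffer for O_DIRECT
    fd = os.open(dev_path, os.O_RDONLY | os.O_DIRECT)
    pages = working_set_bytes // 4096
    offsets = [random.randrange(pages) * 4096 for _ in range(n_reads)]
    start = time.perf_counter()
    for off in offsets:
        os.preadv(fd, [buf], off)          # one 4kB read at a time (QD1)
    elapsed = time.perf_counter() - start
    os.close(fd)
    return n_reads / elapsed

for gib in (1, 8, 32, 64):
    iops = random_read_iops("/dev/nvme0n1", gib * 2**30)
    print(f"{gib:>2} GiB working set: {iops:,.0f} IOPS")
```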

Most mainstream SSDs offer nearly the same random read performance regardless of the working set size, though performance on this test varies somewhat due to other factors (e.g. thermal throttling). The drives using the Phison E7 and E8 NVMe controllers are a notable exception, with significant performance falloff as the working set grows, despite these drives being equipped with ample onboard DRAM.



AnandTech Storage Bench - The Destroyer

The Destroyer is an extremely long test replicating the access patterns of very IO-intensive desktop usage. A detailed breakdown can be found in this article. Like real-world usage, the drives do get the occasional break that allows for some background garbage collection and flushing caches, but those idle times are limited to 25ms so that it doesn't take all week to run the test. These AnandTech Storage Bench (ATSB) tests do not involve running the actual applications that generated the workloads, so the scores are relatively insensitive to changes in CPU performance and RAM from our new testbed, but the jump to a newer version of Windows and the newer storage drivers can have an impact.

We quantify performance on this test by reporting the drive's average data throughput, the average latency of the I/O operations, and the total energy used by the drive over the course of the test.

ATSB - The Destroyer (Data Rate)

The Destroyer truly lives up to its name when presented with the Toshiba RC100. High-end NVMe SSDs complete this test in as little as seven hours. Mainstream SSDs usually take more like twelve hours. The 240GB Toshiba RC100 took 34 hours, leaving us with insufficient time to run the test again with HMB off. The Host Memory Buffer doesn't even come close to making an impact on how long the larger 480GB model took, because The Destroyer simply moves too much data for a small cache to matter.

ATSB - The Destroyer (Average Latency)
ATSB - The Destroyer (99th Percentile Latency)

The average latency from the 480GB RC100 on The Destroyer is at least twice as high as that of other low-end NVMe SSDs, and the 240GB's latency is an order of magnitude worse. The situation for 99th percentile latency is even worse, leaving the RC100 looking bad even in comparison to most SATA SSDs.

ATSB - The Destroyer (Average Read Latency)
ATSB - The Destroyer (Average Write Latency)

The average read latency of the 480GB RC100 is a bit high but still within the normal range for most SSDs, but the 240GB stands out with more than twice the read latency. For writes, both capacities of the RC100 score poorly, and this is why the overall average tanked.

ATSB - The Destroyer (99th Percentile Read Latency)
ATSB - The Destroyer (99th Percentile Write Latency)

In spite of its DRAMless design, the 480GB RC100 manages a decent 99th percentile read latency score, but its smaller sibling can't control read latency under a workload this heavy. For writes, both capacities have very high 99th percentile latency, with the 240GB approaching a full second for its worst-case completion times.

ATSB - The Destroyer (Power)

The Toshiba RC100 uses relatively little power, but its poor performance means that the test runs long enough that total energy usage isn't great. The 240GB RC100's run of The Destroyer went on for longer than any other SSD tested in recent memory, leaving it with an energy usage score that looks more like what a desktop hard drive would produce.



AnandTech Storage Bench - Heavy

Our Heavy storage benchmark is proportionally more write-heavy than The Destroyer, but much shorter overall. The total writes in the Heavy test aren't enough to fill the drive, so performance never drops down to steady state. This test is far more representative of a power user's day to day usage, and is heavily influenced by the drive's peak performance. The Heavy workload test details can be found here. This test is run twice, once on a freshly erased drive and once after filling the drive with sequential writes.

ATSB - Heavy (Data Rate)

The ATSB Heavy test is small enough to reveal some impact from the HMB feature: it clearly makes a big difference to full-drive performance for the 480GB model, and slightly improves empty-drive data rates for both capacities. The 240GB falls apart when full, leading to data rates that are inexcusably bad.

ATSB - Heavy (Average Latency)
ATSB - Heavy (99th Percentile Latency)

The average and 99th percentile latencies from the RC100 are reasonable when the test is run on an empty drive. For the 480GB model, HMB keeps both latency scores from getting out of control even when the drive is full, but the 240GB model has serious issues with or without HMB.

ATSB - Heavy (Average Read Latency)
ATSB - Heavy (Average Write Latency)

For both average read latency and average write latency, the 480GB RC100's scores with HMB enabled are competitive with the drives that have onboard DRAM. Disabling HMB makes write latency especially stand out when the 480GB model is full.

ATSB - Heavy (99th Percentile Read Latency)
ATSB - Heavy (99th Percentile Write Latency)

The 99th percentile read and write latency scores for the 480GB RC100 are great when HMB is enabled and acceptable without it. The 240GB model also performs reasonably when the drive is not full.

ATSB - Heavy (Power)

The power efficiency of the RC100 is generally quite good, except when the 240GB model is full and takes forever to finish the test. The HMB feature is particularly helpful for the 480GB RC100, allowing it to complete the full-drive test using barely more energy than the empty-drive run.



AnandTech Storage Bench - Light

Our Light storage test has relatively more sequential accesses and lower queue depths than The Destroyer or the Heavy test, and it's by far the shortest test overall. It's based largely on applications that aren't highly dependent on storage performance, so this is a test more of application launch times and file load times. This test can be seen as the sum of all the little delays in daily usage, but with the idle times trimmed to 25ms it takes less than half an hour to run. Details of the Light test can be found here. As with the ATSB Heavy test, this test is run with the drive both freshly erased and empty, and after filling the drive with sequential writes.

ATSB - Light (Data Rate)

Once again the 240GB Toshiba RC100 exhibits very poor performance overall when the drive is full, and the 480GB doesn't do particularly well in that situation either. But for the more typical case of running the Light test on a drive that isn't full, both RC100s are competitive with other low-end NVMe SSDs and much faster than SATA drives.

ATSB - Light (Average Latency)
ATSB - Light (99th Percentile Latency)

Most of the low-end NVMe SSDs show substantially higher latency when running the Light test on a full drive, so the RC100's results aren't quite as extreme an outlier. The RC100 is actually better off with HMB off for the full-drive runs of this test, possibly because the overhead of the extra PCIe communication isn't worthwhile when the cache isn't going to be of much use.

ATSB - Light (Average Read Latency)
ATSB - Light (Average Write Latency)

Average read and write latencies are both competitive for the RC100's empty-drive test runs, and the full-drive read latencies are high but aren't extreme outliers. It's the write latency that really causes problems for the RC100 when it is full.

ATSB - Light (99th Percentile Read Latency)
ATSB - Light (99th Percentile Write Latency)

The 99th percentile read and write latency scores show similar results to the averages, but more prominently highlight the drives that are having trouble—which is mostly just the RC100, though the 600p's 99th percentile write latency is pretty bad, too.

ATSB - Light (Power)

The Toshiba RC100 doesn't quite manage to beat the Crucial MX500 SATA drive for energy usage on this test, but it takes first place among its NVMe competition for the empty-drive test runs, and it isn't unreasonably power-hungry even when it is performing poorly.



Random Read Performance

Our first test of random read performance uses very short bursts of operations issued one at a time with no queuing. The drives are given enough idle time between bursts to yield an overall duty cycle of 20%, so thermal throttling is impossible. Each burst consists of a total of 32MB of 4kB random reads, from a 16GB span of the disk. The total data read is 1GB.
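
The 20% duty cycle simply means each burst is followed by four times as much idle time as the burst itself took; the same pacing applies to the other burst tests later in this review. A sketch of the scheduling logic (illustrative only, not our actual fio configuration):

```python
# Burst pacing: each burst of I/O is followed by enough idle time to
# keep the overall duty cycle at 20% (idle = 4x the busy time).
import time

def run_bursts(do_burst, n_bursts=32, duty_cycle=0.20):
    for _ in range(n_bursts):
        start = time.perf_counter()
        do_burst()                          # e.g. 32MB of 4kB random reads
        busy = time.perf_counter() - start
        time.sleep(busy * (1 - duty_cycle) / duty_cycle)
```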

Burst 4kB Random Read (Queue Depth 1)

The Toshiba RC100 surprises with excellent burst random read performance, and even when HMB is off it outperforms the other low-end NVMe SSDs we've tested.

Our sustained random read performance test is similar to the random read test from our 2015 test suite: queue depths from 1 to 32 are tested, and the average performance and power efficiency across QD1, QD2 and QD4 are reported as the primary scores. Each queue depth is tested for one minute or 32GB of data transferred, whichever is shorter. After each queue depth is tested, the drive is given up to one minute to cool off so that the higher queue depths are unlikely to be affected by accumulated heat build-up. The individual read operations are again 4kB, and cover a 64GB span of the drive.

Sustained 4kB Random Read

On the longer random read test that covers a broader span of the drive than HMB can help with, the Toshiba RC100's scores are unsurprisingly in last place among NVMe drives, but it's not too far behind the Intel 600p.

Sustained 4kB Random Read (Power Efficiency)
Power Efficiency in MB/s/W | Average Power in W

The RC100 clearly uses less power during random reads than any other NVMe SSD we've tested, but the poor performance when reading from a wide span of the drive means the efficiency is just a bit below average.

For the larger RC100, the HMB feature has a fairly large impact on random read performance at high queue depths even though the HMB cache is too small to completely handle this workload. At low queue depths and for the smaller 240GB model at any queue depth, HMB has minimal impact on random read performance.

Random Write Performance

Our test of random write burst performance is structured similarly to the random read burst test, but each burst is only 4MB and the total test length is 128MB. The 4kB random write operations are distributed over a 16GB span of the drive, and the operations are issued one at a time with no queuing.

Burst 4kB Random Write (Queue Depth 1)

With HMB enabled, the burst random write performance of the Toshiba RC100 is decent, but without HMB it can't beat a mainstream SATA drive.

As with the sustained random read test, our sustained 4kB random write test runs for up to one minute or 32GB per queue depth, covering a 64GB span of the drive and giving the drive up to 1 minute of idle time between queue depths to allow for write caches to be flushed and for the drive to cool down.

Sustained 4kB Random Write

Once again, the large working set size of this test compared to the small host buffer size used by the RC100 condemns the drive to last place. The margin between the RC100 and the next-slowest NVMe drive is much larger than it was for the sustained random read test. HMB actually slightly hurts performance here.

Sustained 4kB Random Write (Power Efficiency)
Power Efficiency in MB/s/W | Average Power in W

Power consumption is slightly higher for random writes than for random reads, but still well below the other NVMe SSDs. The performance is low enough that the power efficiency score for the RC100 is worse than all the competition.

Random write performance from the RC100 is low at any queue depth. The drive doesn't have enough memory to perform effective write combining and caching under this sustained load, while high-end drives usually manage to significantly improve performance when working with a large queue of write operations.



Sequential Read Performance

Our first test of sequential read performance uses short bursts of 128MB, issued as 128kB operations with no queuing. The test averages performance across eight bursts for a total of 1GB of data transferred from a drive containing 16GB of data. Between each burst the drive is given enough idle time to keep the overall duty cycle at 20%.

Burst 128kB Sequential Read (Queue Depth 1)

The burst sequential read performance of the Toshiba RC100 is faster than any SATA drive can manage, and is only slightly slower than the MyDigitalSSD SBX. The Host Memory Buffer feature has no significant impact here.

Our test of sustained sequential reads uses queue depths from 1 to 32, with the performance and power scores computed as the average of QD1, QD2 and QD4. Each queue depth is tested for up to one minute or 32GB transferred, from a drive containing 64GB of data.

Sustained 128kB Sequential Read

On the longer sequential read test, the RC100 places slightly ahead of other low-end NVMe drives, but there's still a pretty large gap separating it from the high-end drives that can deliver multiple GB/s at low queue depths.

Sustained 128kB Sequential Read (Power Efficiency)
Power Efficiency in MB/s/W | Average Power in W

Power efficiency from the Toshiba RC100 is decent by NVMe standards, but not record setting. Total power draw approaches 2W for the 480GB model, which is still quite low for NVMe drives.

HMB appears to have a moderate impact on sequential read performance for the 480GB RC100 at some queue depths. Both capacities hit maximum performance when the queue depth is at least 8.

Sequential Write Performance

Our test of sequential write burst performance is structured identically to the sequential read burst performance test save for the direction of the data transfer. Each burst writes 128MB as 128kB operations issued at QD1, for a total of 1GB of data written to a drive containing 16GB of data.

Burst 128kB Sequential Write (Queue Depth 1)

The burst sequential write performance of the Toshiba RC100 is good for a low-end NVMe drive (or an older high-end drive), but is far below the current high-end drives.

Our test of sustained sequential writes is structured identically to our sustained sequential read test, save for the direction of the data transfers. Queue depths range from 1 to 32 and each queue depth is tested for up to one minute or 32GB, followed by up to one minute of idle time for the drive to cool off and perform garbage collection. The test is confined to a 64GB span of the drive.

Sustained 128kB Sequential Write

On the longer sequential write test, the RC100 performs quite well with HMB on—it slightly outperforms the 250GB Samsung 960 EVO, but can't keep pace with the newer 970 EVO. Even without HMB, the RC100 is one of the faster low-end NVMe drives for sequential writes, but having that extra buffer helps a lot.

Sustained 128kB Sequential Write (Power Efficiency)
Power Efficiency in MB/s/W | Average Power in W

The Toshiba RC100 finally manages to score a power efficiency win: it just barely cracks 2W during this test, while delivering better performance than most NVMe drives that pull 4W.

The sequential write speed of the 480GB RC100 plateaus at 1GB/s at a queue depth of 2 or higher, but there was a drop in performance at the end of the test that may have been the SLC cache finally running out. The 240GB takes a bit longer to reach full speed, and without HMB it is both slower and less consistent.



Mixed Random Performance

Our test of mixed random reads and writes covers mixes varying from pure reads to pure writes at 10% increments. Each mix is tested for up to 1 minute or 32GB of data transferred. The test is conducted with a queue depth of 4, and is limited to a 64GB span of the drive. In between each mix, the drive is given idle time of up to one minute so that the overall duty cycle is 50%.

Mixed 4kB Random Read/Write

Mixed workloads are often the toughest for DRAMless SSDs, and with this mixed random I/O test covering 64GB of the drive, the Toshiba RC100's Host Memory Buffer is of little use. The RC100 is substantially slower than other NVMe drives on this test.

Sustained 4kB Mixed Random Read/Write (Power Efficiency)
Power Efficiency in MB/s/W | Average Power in W

Power efficiency from the RC100 during the mixed random I/O test is also poor, but it's not a significant outlier compared to the competition. Total power consumption is half a watt lower than any of the other NVMe drives.

The performance and power consumption of the Toshiba RC100 are remarkably constant across the varying workload of this test. There's no sign of improved performance as the fraction of writes increases, even though a more write-heavy mix gives most SSDs the opportunity to perform more write combining.

Mixed Sequential Performance

Our test of mixed sequential reads and writes differs from the mixed random I/O test by performing 128kB sequential accesses rather than 4kB accesses at random locations, and the sequential test is conducted at queue depth 1. The range of mixes tested is the same, and the timing and limits on data transfers are also the same as above.

Mixed 128kB Sequential Read/Write

On the mixed sequential I/O test, the Toshiba RC100 is a decent performer with an average that exceeds what any SATA SSD is capable of. HMB is a bit of help here because the sequential access pattern is very cache-friendly even though the test spans a wider range of data than the cache can track.

Sustained 128kB Mixed Sequential Read/Write (Power Efficiency)
Power Efficiency in MB/s/W | Average Power in W

The Toshiba RC100's power efficiency on the mixed sequential I/O test is great with or without HMB. The RC100 is clearly much slower than the high-end drives, but its power consumption is reduced proportionally.

The performance and power consumption of the Toshiba RC100 are not quite as flat on the mixed sequential test as for the mixed random I/O test. The RC100 gets a bit faster as the workload shifts toward writes, and HMB becomes more beneficial with increasing write volume.



Power Management Features

Real-world client storage workloads leave SSDs idle most of the time, so the active power measurements presented earlier in this review only account for a small part of what determines a drive's suitability for battery-powered use. Especially under light use, the power efficiency of an SSD is determined mostly by how well it can save power when idle.

For many NVMe SSDs, the closely related matter of thermal management can also be important. M.2 SSDs can concentrate a lot of power in a very small space. They may also be used in locations with high ambient temperatures and poor cooling, such as tucked under a GPU on a desktop motherboard, or in a poorly-ventilated notebook.

Toshiba RC100 NVMe Power and Thermal Management Features
Controller: Toshiba unknown
Firmware: ADRA0101

NVMe Version  Feature                                        Status
1.0           Number of operational (active) power states    3
1.1           Number of non-operational (idle) power states  2
              Autonomous Power State Transition (APST)       Supported
1.2           Warning Temperature                            82 °C
              Critical Temperature                           85 °C
1.3           Host Controlled Thermal Management             Supported
              Non-Operational Power State Permissive Mode    Not Supported

The Toshiba RC100 supports a fairly complete set of power and thermal management features. The RC100 is well-equipped to be kept within the often tight power and thermal limits of the small form factor machines it was originally designed for as a BGA SSD. The three active power states cover a reasonably wide range of power limits. Based on the transition latency ratings, there's no reason for a system to bother with the shallower PS3 idle state if the deeper PS4 state can be used.

Toshiba RC100 NVMe Power States
Controller: Toshiba unknown
Firmware: ADRA0101

Power State  Maximum Power  Active/Idle  Entry Latency  Exit Latency
PS 0         3.3 W          Active       -              -
PS 1         2.7 W          Active       -              -
PS 2         2.3 W          Active       -              -
PS 3         50 mW          Idle         10 ms          45 ms
PS 4         5 mW           Idle         10 ms          50 ms

Note that the above tables reflect only the information provided by the drive to the OS. The power and latency numbers are often very conservative estimates, but they are what the OS uses to determine which idle states to use and how long to wait before dropping to a deeper idle state.
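
Still, the reported numbers are enough for a rough energy comparison of the two idle states. Under our simplifying assumption that transitions burn roughly the ~1W active idle power we measure below, PS4 comes out ahead for any idle period longer than about 0.2 seconds, which supports the conclusion that PS3 isn't worth bothering with:

```python
# Toy energy model for an idle period: transition into the state, sit at
# its idle power, transition back out. Assumes transitions burn ~1 W
# (roughly the measured active idle power); state numbers from the table.
STATES = {"PS3": dict(p=0.050, entry=0.010, exit=0.045),
          "PS4": dict(p=0.005, entry=0.010, exit=0.050)}

def idle_energy_j(state, idle_s, p_transition=1.0):
    s = STATES[state]
    t_trans = s["entry"] + s["exit"]
    return t_trans * p_transition + max(idle_s - t_trans, 0.0) * s["p"]

for idle_s in (0.1, 1.0, 10.0):
    e = {name: idle_energy_j(name, idle_s) for name in STATES}
    print(f"{idle_s:>5}s idle: PS3 {e['PS3']:.3f} J, PS4 {e['PS4']:.3f} J")
# PS4 pulls ahead once idle periods last beyond a couple hundred ms.
```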

Idle Power Measurement

SATA SSDs are tested with SATA link power management disabled to measure their active idle power draw, and with it enabled for the deeper idle power consumption score and the idle wake-up latency test. Our testbed, like any ordinary desktop system, cannot trigger the deepest DevSleep idle state.

Idle power management for NVMe SSDs is far more complicated than for SATA SSDs. NVMe SSDs can support several different idle power states, and through the Autonomous Power State Transition (APST) feature the operating system can set a drive's policy for when to drop down to a lower power state. There is typically a tradeoff in that lower-power states take longer to enter and wake up from, so the choice about which power states to use may differ between desktops and notebooks.

We report two idle power measurements. Active idle is representative of a typical desktop, where none of the advanced PCIe link or NVMe power saving features are enabled and the drive is immediately ready to process new commands. The idle power consumption metric is measured with PCIe Active State Power Management L1.2 state enabled and NVMe APST enabled if supported.

Active Idle Power Consumption (No LPM)
Idle Power Consumption

The active idle power consumption of the RC100 is about 1W, which is fairly typical for NVMe SSDs but a bit surprising for this drive in particular given its low-power focus. As with many drives, the deepest NVMe power state doesn't work out of the box with our desktop testbed, leaving it with substantially worse idle power draw than typical SATA drives and some NVMe drives.

Idle Wake-Up Latency

The idle wake-up latency of the RC100 is in the 3-4ms range, which is plenty fast and far better than the drive's pessimistic 45-50ms specification.



Conclusion

The RC100 is Toshiba's contribution to the growing field of entry-level NVMe SSDs, and it is distinctive in several ways: the small form factor based on a BGA SSD, its use of the relatively rare NVMe Host Memory Buffer feature, and its fairly low maximum power draw. Unfortunately, the RC100's performance is nothing special, except when it's bad.

Under ideal conditions, the RC100 doesn't even need the NVMe Host Memory Buffer feature to offer competitive performance against other low-end NVMe SSDs. Leaving HMB on allows the 480GB RC100 to continue performing reasonably well even under adverse conditions like running tests on a completely full drive. From SATA SSDs, we're used to seeing those tougher tests clearly reveal the high latency cost of DRAMless SSDs. The NVMe HMB feature successfully eliminates that often acute weakness of DRAMless SSDs, making the 480GB RC100 a fairly well-rounded performer. HMB doesn't help with every workload, but it's definitely a valuable feature. DRAMless NVMe SSDs don't have to suffer all the problems that DRAMless SATA drives exhibit.

The 240GB RC100 didn't fare quite as well. On lighter workloads it trails the 480GB model by a fairly normal margin given the capacity difference, but the situation completely changes when the 240GB drive is full. In that case, write latency goes sky high, and that leads to a fairly severe impact on read operations as well. The 240GB RC100 is clearly incapable of performing wear leveling and garbage collection at an acceptable speed when the drive is full; some of the results are not even clearly better than a mechanical hard drive. We would suspect a defective drive if it weren't for the other results continuing to look reasonable while the full-drive ATSB tests reproduced the same horrifying results every time.

This looks pretty likely to be an inherent flaw, and it is probably even more severe and easier to encounter on the 120GB model. While leaving some free space on an SSD is common and well-grounded advice, the reality is that these drives sometimes will be filled in day-to-day use, especially small drives where space is at a premium. Toshiba may be able to improve the garbage collection somewhat with firmware updates, but for now it is clear that the two smaller models should not be filled completely if at all possible. We have not determined how much manual overprovisioning is necessary to keep performance within a reasonable range, but users definitely should set aside some spare area on those models, and it's been a long time since we've felt the need to make that recommendation. Plenty of other recent low-end SSDs lose a lot of their performance when full, but there's a big difference between losing half the performance and losing 90%.

There aren't many options at the moment for other M.2 2242 SSDs, and most of the alternatives are outdated M.2 SATA drives with planar MLC NAND—so they might offer better worst-case write speeds than the RC100, but they won't beat it on capacity or real-world performance. If anybody does try to challenge Toshiba in the M.2 2242 niche, the competition would be subject to the same constraints Toshiba has faced. Samsung could put their PM971 BGA SSD on an M.2 card and completely outclass the RC100's performance thanks to the inclusion of LPDDR4 in the PM971, but I doubt Samsung would bother making a retail product for such a small market segment. The companies that do like to maintain a wide product selection with lots of form factors (ADATA, Transcend, Lite-On/Plextor) would have to use a DRAMless NVMe controller like the Phison E8T or Marvell 88NV1160 in order to have room for any actual NAND on the card, or else opt for more expensive packaging to stack the NAND on the controller and make it a BGA SSD. The options for this form factor will continue to be largely limited to the drives OEMs are shipping and a handful of retail derivatives of those same drives, so users looking to upgrade from an OEM drive will not be able to get much of a performance or capacity boost unless their system can accommodate the more common 80mm M.2 card length.

NVMe SSD Price Comparison
(2018-06-14)

                                     120-128GB         240-256GB          400-512GB          960-1200GB
Toshiba RC100                        $59.99 (50¢/GB)   $79.99 (33¢/GB)    $154.99 (32¢/GB)   -
MyDigitalSSD SBX                     $44.99 (35¢/GB)   $69.99 (27¢/GB)    $139.99 (27¢/GB)   $299.99 (29¢/GB)
HP EX900                             $56.99 (47¢/GB)   $94.99 (38¢/GB)    $174.99 (35¢/GB)   -
ADATA XPG SX8200                     -                 $89.99 (37¢/GB)    $169.99 (35¢/GB)   $349.99 (36¢/GB)
HP EX920                             -                 $109.99 (43¢/GB)   $179.99 (35¢/GB)   $279.99 (27¢/GB)
Intel SSD 760p                       $82.96 (65¢/GB)   $115.20 (45¢/GB)   $217.35 (42¢/GB)   $371.99 (36¢/GB)
Samsung 970 EVO                      -                 $106.01 (42¢/GB)   $196.01 (39¢/GB)   $396.01 (40¢/GB)
Western Digital WD Black (2D NAND)   -                 $79.99 (31¢/GB)    $149.95 (29¢/GB)   -
Western Digital WD Black (3D NAND)   -                 $109.90 (44¢/GB)   $199.99 (40¢/GB)   $399.99 (40¢/GB)
SATA Drives:
Crucial MX500                        -                 $72.99 (29¢/GB)    $109.99 (22¢/GB)   $229.99 (23¢/GB)
Crucial BX300                        $42.99 (36¢/GB)   $74.91 (31¢/GB)    $143.87 (30¢/GB)   -
Samsung 860 EVO                      -                 $78.69 (31¢/GB)    $126.94 (25¢/GB)   $248.01 (25¢/GB)
WD Blue 3D NAND                      -                 $69.99 (28¢/GB)    $117.53 (24¢/GB)   $229.99 (23¢/GB)

Toshiba's introductory pricing for the RC100 isn't too bad, but it will need to come down a bit to beat the Phison-based MyDigitalSSD SBX, the current price leader among NVMe SSDs. The Toshiba RC100 does score several performance wins against the SBX, but the overall picture doesn't justify a significant price premium.

The 120GB RC100 should be ignored. At this capacity, the NAND flash will almost always be the bottleneck so there's no reason to prefer a NVMe drive over a SATA drive. The Crucial BX300 with 3D MLC (albeit an older generation) is still available for those who really need a cheap, small SSD. For most users, jumping up to at least 240GB makes the most sense, even if it means sticking with SATA for now. Unlike the 120GB capacity class, there's tons of competition for 240GB and larger drives. The 240GB Toshiba RC100 has a very small price premium over mainstream SATA drives, and the RC100 does outperform them on typical workloads. But those mainstream SATA drives are equipped with on-board DRAM that helps them perform well on the heaviest workloads and retain much better performance when filled up. The abysmal full-drive performance of the 240GB RC100 combined with the likelihood of getting a drive that size close to full means many users should avoid that model.

The 480GB RC100 is a safer buy with less crippling full-drive performance and a much lower likelihood of ending up full from ordinary desktop usage. A large video or game library could still cause it some trouble, but for most users that's a minor and avoidable concern. Unfortunately, 480GB is also the point at which the SATA drives start having a serious price advantage over even the cheapest NVMe SSDs.

Some users will value the RC100 for its unique features such as the M.2 2242 form factor. Most users simply want to know if low-cost NVMe drives like the RC100 mean that NVMe is ready to push SATA out of the mainstream SSD market. The answer there is still clearly "no", but we are getting closer to having NVMe drives that can beat SATA on both price and performance.
