15 Comments
Dragonstongue - Monday, June 11, 2018 - link
I wonder why no maker seems to have done these high layer counts (such as the 96 layers in this case) for SLC or MLC instead of going TLC or QLC (yuck). I can understand cost and such; however, you'd figure there would be more of a market for SLC or MLC from the perspective of data longevity, which TLC may or may not have, let alone QLC, which will very likely suffer from data retention problems. Then of course there is the actual P/E cycle endurance, where nothing comes close to SLC, as well as latency.
It would be one thing if they could cram in, say, a 2TB TLC or QLC SSD for like $100 at decent speed/endurance levels, but that does not seem to be the case currently. They trim back THEIR costs but are in no rush to pass the savings/performance benefits on to the buyer O.O
If anything about this article is nice, it is Micron upping the layer count of TLC rather than chasing the QLC "pipe dream" that someone like Intel is doing (likely to be priced higher than it should be because of the complexity QLC is likely to have).
Billy Tallis - Monday, June 11, 2018 - link
Samsung's Z-NAND is 3D SLC optimized for latency. In the enterprise, nobody wants to go back to MLC or SLC for the sake of throughput, because you can achieve the same in a more cost effective way by adding more TLC drives.

In the client/consumer space, everybody uses SLC to the extent that it makes sense (as a write cache), and the latest TLC is otherwise fast enough that there's no point spending so much extra for a pure MLC drive.
And Intel and Micron are both doing QLC, with Micron's QLC drives actually shipping first. They're also both doing the same 96 layer 3D NAND—their joint venture for flash development expires after the 96L node.
boeush - Monday, June 11, 2018 - link
That doesn't address the perfectly legit worry about data retention. It seems the options are to either move all your TBs' worth of totally legitimate* audio, video, and game content to the cloud (and live at the mercy of your ISP's broadband data caps) or keep all that lifetime's worth of stuff on a local backup drive, only to discover 5 years down the road, when you actually bother to try retrieving it, that all the data has 'decayed' into a pile of digital garbage...

*not at all pirated, naturally ;) ;) ;)
edzieba - Monday, June 11, 2018 - link
If you're worried about long-term cold storage, then either write to tape, or to archival optical media if you're daring. Of course, you still need a climate-controlled room to keep them in, so it's not totally passive. HDDs are electromechanical devices and have plenty of failure modes even if the bits on the platter are technically intact (right up until you fire up that old drive and gouge the heads across it, anyway).

For home storage, the most feasible option is an HDD or SSD array on ZFS with at least double parity (RAIDZ2), with array scrubs run regularly and drives cycled out when they start getting unrecoverable read errors or when they hit their rated MTBF.
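For the "scrubs run regularly" part, a minimal sketch of what a scheduled job could look like, assuming a Python helper driven by cron or a systemd timer and a pool named "tank" (the pool name and the health-check string are placeholders to adapt to your own setup):

```python
#!/usr/bin/env python3
"""Sketch of a periodic ZFS scrub-and-check job (run from cron or a systemd timer)."""
import subprocess
import sys

POOL = "tank"  # placeholder pool name; substitute your own


def main() -> int:
    # Start a scrub. The -w flag (wait until the scrub finishes) exists on
    # recent OpenZFS; on older releases you'd poll 'zpool status' instead.
    subprocess.run(["zpool", "scrub", "-w", POOL], check=True)

    status = subprocess.run(
        ["zpool", "status", POOL], check=True, capture_output=True, text=True
    ).stdout
    print(status)

    # Crude health check: anything other than the usual all-clear line is a red flag.
    if "errors: No known data errors" not in status:
        print(f"WARNING: pool {POOL} reported errors after scrub", file=sys.stderr)
        return 1
    return 0


if __name__ == "__main__":
    sys.exit(main())
```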
boeush - Monday, June 11, 2018 - link
Yeah, but what you're describing is not so much a home storage solution as it is an IT Pro's Master's Thesis. Your typical average consumer not only will fail to follow the expected best practices, but will utterly fail to comprehend all the jargon and all the essential concepts to begin with...

CheapSushi - Tuesday, June 12, 2018 - link
Storage Spaces is included in Windows and it's super easy. StableBit DrivePool, also on Windows, is super nice and easy too. What more do you want? If consumers care, they'll have multiple copies of something. They'll get USB/Thunderbolt external drives. Or use Backblaze or similar. What exactly are you expecting here? For it to be spoon-fed to you? Hire someone then. It's already not difficult and you don't need to be some Linux wizard to have at it.

CheapSushi - Tuesday, June 12, 2018 - link
Also, on every new Intel motherboard you get Intel RST, a way to do RAID that's not part of Windows. It's also easy. It's no different from what you could do with HDDs. Windows itself tries to back up the most important things, like documents, to the cloud, and they offer OneDrive. There are several options. I'm really not sure what more you're expecting.

wumpus - Wednesday, June 13, 2018 - link
A much easier system would be to simply have two (spinning platter) NAS systems*: one online and in use, the other offline and for backup. Don't go any more complicated than you need.

Note: if you want RAID, then put it on the backup. N+2 is almost certainly overkill, but the moment one drive decides not to power up after a long rest you will thank me. All RAID does on the primary is let your backup go untested even longer.
* The best NAS system is an old computer (either yours or one from Craigslist), possibly in a new case with more drive bays.
CheapSushi - Tuesday, June 12, 2018 - link
If you're hoarding a lot of data you want to keep, then why aren't you doing something like Storage Spaces with ReFS and Integrity Streams, or ZFS, keeping multiple copies (in a RAID-like setup) and following the 3-2-1 rule? If it's all decayed and there's zero copy of it to get back, that means you really didn't care enough.

boeush - Tuesday, June 12, 2018 - link
Great advice. Now explain all that to the average Joe Sixpack. Yeah, that's indeed the point that you completely missed.

niva - Tuesday, June 12, 2018 - link
The point you're missing is that this is AnandTech, and Joe Sixpack doesn't care about data on computers; he cares about his next beer.

boeush - Tuesday, June 12, 2018 - link
Because only AnandTech readers could ever have lots (multiple TBs' worth) of data on computers that they'd like to keep for the long term... Yeah.

AbRASiON - Tuesday, June 12, 2018 - link
I just want some really, really cheap "slow" SSDs: no moving parts, run cooler, still faster than an HDD. Something 6-20TB in the sub-$400 US range.
MJDouma - Tuesday, June 12, 2018 - link
And you will be getting it (6TB for $400), or at least very close. The point a lot of people making comments seem to be missing is that modern TLC is way more reliable, and faster, than it used to be. "TLC...up to 3.5 GB/s sequential read speed as well as up to 3 GB/s sequential write speed". That's near the limit of PCIe 3.0 x4.
And most TLC drives now have an endurance rating of 400 TBW. I've had my SSD (my system drive) for 2 1/2 years. I'm a programmer and I use it for work and play (much heavier use than an average home computer), and I'm up to 25 TBW. So it should last about 40 years (rough math sketched below).
So, considering the improvements in TLC NAND speed and reliability, it's very reasonable to go to QLC, just for people like you. Still plenty fast, and reliable, as long as you're not writing a lot of data to it every day.
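For anyone wanting to sanity-check that 40-year figure, a quick sketch of the arithmetic (the 400 TBW rating, 25 TB written, and 2.5 years are just the numbers from the comment above; plug in your own drive's figures from its SMART data):

```python
def projected_lifetime_years(rated_tbw: float, tb_written: float, years_elapsed: float) -> float:
    """Extrapolate SSD lifetime from its endurance rating and the observed write rate."""
    tb_per_year = tb_written / years_elapsed
    return rated_tbw / tb_per_year


# Numbers from the comment above: 400 TBW rating, 25 TB written over 2.5 years.
print(projected_lifetime_years(rated_tbw=400, tb_written=25, years_elapsed=2.5))  # 40.0
```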
FullmetalTitan - Thursday, June 14, 2018 - link
I had an old OCZ Vector 120GB boot drive for YEARS (good old SandForce), and even after being the primary drive across 3 systems it was still in the 90% range for drive endurance, so it went to my youngest brother building his first custom system.

Anyone who thinks they will approach drive endurance limits with typical consumer use hasn't actually done any accounting of their data use/movement.