This is the result of an industry optimising for profit and not longevity. That's why SLC NAND has become almost extinct and priced beyond reason. I don't care how fast or large a storage device is if it isn't reliable.
There are SSDs on the market that use all of their TLC flash as SLC cache, so you can almost use them as SLC drives if you partition them to leave 2/3 empty.
E.g. the ADATA XPG SX8200. Look for whole-drive fill-speed benchmarks; if they use the whole drive as cache, the first third is fast (usually the SLC area is much smaller).
Even if you get this data, it is not uncommon for SSD manufacturers to change the original NAND chips for inferior chips without changing the SSD’s brand name or model number.
They use them as boot/system drives without a big load. And when you have more than 500 drives, some will die just by pure luck (or lack of it). Mishandling, static charge, etc.
Yes, but you're not going to like the answer. Despite the many industry professionals on this site, there is a large number of users who, by most definitions, have never had a real job. Nothing wrong with that, but they're real loud, and they like to put down proven, reliable solutions because they cost too much, and then slap on random terms like "zfs" that magically fix all problems.
The equivalent advice, in short: "look at what the enterprise storage vendors are putting in their arrays, at 10x markup." (No, shiny things like Pure/Rubrik/Cohesity are not enterprise storage.)
It all depends what you're putting on things. If you buy 5 drives from Newegg for your house and double-parity them or do zfs checksums, etc., you're going to have a bad time when a bunch of them fail at around the same time because of a defect common to that drive model. Yet you do kinda want all the same/alike drives, because the stripe is only as fast as the slowest drive.
So look at what all the vendors picked after they tested the crap out of thousands of them. Me, I personally just mirror everything between two machines with different brand drives and hope they won't fail at the same time. Once a year I dump an image of everything onto an offline big-ass drive - the cheapest big spinning rust I can buy - and call that my "airgapped vault."
SLC requires very little in the way of firmware, since the endurance is so high and the BER so low that simple ECC and wear leveling are sufficient. TLC offers only 3x the capacity but theoretically lasts 1/8th as long as SLC NAND manufactured on an otherwise identical process, and it also requires much more complex (hence more bug-prone) ECC and wear-leveling algorithms, which hurts speed and power consumption. QLC is 4x the capacity for 1/16th the endurance.
Industry marketing (and accompanying irrational pricing) has basically persuaded consumers to choose an inferior product.
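The trade-offs above can be sketched in a few lines. The capacity and endurance figures here are the rough approximations from this thread, not vendor datasheet values:

```python
# Rough per-cell-type trade-offs, using the approximate figures
# from the comment above (not datasheet values; real endurance
# varies by process node and vendor).
CELL_TYPES = {
    # name: (bits_per_cell, endurance_relative_to_SLC)
    "SLC": (1, 1.0),
    "TLC": (3, 1 / 8),
    "QLC": (4, 1 / 16),
}

for name, (bits, endurance) in CELL_TYPES.items():
    levels = 2 ** bits   # voltage levels the cell must resolve
    capacity = bits      # capacity scales linearly with bits per cell
    print(f"{name}: {bits} bit(s)/cell, {levels} voltage levels, "
          f"{capacity}x capacity, {endurance:.4f}x endurance")
```

Each extra bit per cell doubles the number of voltage levels the cell has to distinguish, while only adding capacity linearly, which is why the endurance falls off so much faster than the density rises.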
Physically this should be possible but I've never seen an SSD that allows it. I think very very few customers would use it, but I'm still a little surprised no vendor offers it.
I suspect it's heavily firmware-dependent, but I do wonder whether partitioning a TLC drive to 1/3 of its advertised capacity would keep all the blocks in SLC mode. A reasonably behaved firmware should prefer SLC mode as much as possible just for the speed benefit, and only start converting blocks to MLC/TLC once every block has already been used in SLC mode.
Coincidentally I did some research on this topic the other day and I couldn't find any SSD model with unlimited SLC cache. They all have fairly low limits like 10% of overall capacity.
Why not? There's certainly a group of consumers who will pay more for higher quality. I'm one of them. 2-3x current SSD costs for a better quality SSD would be completely fine with me. You're not paying more than you have to for the same product, you're purchasing a different product.
Intel tried that with Optane with disastrous results (from a financial perspective). SLC doesn't require much separate R&D and manufacturing infrastructure beyond what already exists to serve the markets for TLC and QLC. But that lower barrier to entry still hasn't led to many attempts to serve this niche. Apparently the people with real sales volume data are convinced there's less of a market for expensive and small consumer SLC SSDs than there is for consumer 8TB TLC or QLC SSDs that cost as much as a decent laptop.
They realised it's easier to keep making a profit when drives keep "wearing out" (i.e. failing to be a data storage device) on a consistent and short(ening) schedule. Just like SLC, Optane was too good.
"small" is relative. 8TB of QLC is 2TB of SLC. They will both cost the same (if anything, the SLC might even be cheaper from a firmware/controller development perspective) yet the former might last a few years, and the latter several decades.
A 2TB SLC drive is going to fill up before an 8TB QLC drive wears out, so I don't buy the planned obsolescence argument. And in reality, the kind of consumer who would spend $1k on an SSD is going to move on from it within two or three years anyways in favor of a newer drive with a faster interface.
It will fill up, and more importantly, the data will stay intact. The endurance and retention of SLC is high enough that you can trust it for more than a few years.
> And in reality, the kind of consumer who would spend $1k on an SSD is going to move on from it within two or three years anyways in favor of a newer drive with a faster interface.
...or expect that it will last much longer than a cheaper one.
This is correct. "TLC" is a misnomer. 2^3 is 8, and TLC stores 3 bits per NAND cell, or 8 voltage levels. I suppose when they came up with "MLC" for 2-bit cells (4 voltage levels), which is now also a misnomer as all multi-bit cells are technically MLC, they did not expect to put more than 2 bits in one cell.
The downvotes make me wonder how much useful discussion on HN is actually lost because of ignorant downvoters who prevent valid, factual information from surfacing.
The article doesn't mention "pseudo", so I guess you're implying that these are just their existing flash that's capable of TLC/QLC, used permanently in SLC mode? 60DWPD for 5 years is basically 100K endurance, the same as true SLC.
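That equivalence is simple arithmetic: each full-drive write is roughly one P/E cycle per cell under ideal wear leveling, so the quoted rating works out to:

```python
# DWPD -> P/E cycles, assuming ideal wear leveling where one
# full-drive write costs each cell about one P/E cycle.
dwpd = 60    # drive writes per day, as quoted
years = 5
pe_cycles = dwpd * 365 * years
print(pe_cycles)  # 109500, i.e. ~100K cycles, in line with true SLC
```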
Either way, that's great news - and the ~$0.32/GB they mention (only $600 for 1.92TB!?) for Micron SLC is absolutely amazing value, if you consider that this other article I submitted not long ago mentions ultra-cheap TLC SSDs with NAND from an unknown manufacturer costing $0.10/GB (I even have a comment there lamenting the lack of logically-priced $0.30/GB SLC SSDs!): https://news.ycombinator.com/item?id=35382252
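The quoted price-per-GB checks out (using the figures from the comment above):

```python
# Price per GB for the Micron SLC drive mentioned above.
price_usd = 600
capacity_gb = 1920   # 1.92 TB
per_gb = price_usd / capacity_gb
print(per_gb)  # 0.3125 -> the ~$0.32/GB figure
```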
The sellers are but one part of the larger market system.
If SLC NAND went extinct, that's because both the sellers and the buyers (read: customers, aka end users) didn't see value in reliability as much as other factors like storage density and price-per-bit.
You, as someone who does want reliability above all else, are an outlier.
It's more likely because the buyers have been persuaded by marketing and attempts at deception. When 10k-cycle MLC (2-bit cells) came out, offering only twice the capacity of SLC for 1/10th the endurance of the 100k-cycle SLC that was the norm at the time, vendors already had to keep SLC prices artificially high (>2x) to push people to MLC, and I remember the beginning of efforts to hide the poor endurance. Old NAND datasheets proudly proclaimed their 100k or even 10k cycle endurance. Now it's basically impossible to find a TLC or QLC NAND datasheet that isn't behind an NDA, save for the rare few that get leaked, and even those you can find are extremely vague about endurance. Some parts will let you choose between SLC/MLC/TLC mode for each block, and some SSDs use this for some stupid "cache" feature, but the behaviour of that is not easily configurable --- i.e. not without hacking the firmware.
I think this is somewhat misdirected, because most end users don't know to think about reliability. When the storage fails, "the computer broke": they take it in somewhere and the tech gives them back a fixed system with the data gone. Had the CPU burned out instead, they would be just as accepting of the data being gone in that case too, with a "sorry, couldn't save it."
The marketing might include an x-million r/w cycles figure, but it's going to be presented far less prominently than the speed.