Why Optane Mattered
Why the end of Optane is bad news for the entire IT world • The Register
https://www.theregister.com/2022/08/01/optane_intel_cancellation/
Forget the brand name. What matters — or rather, mattered — is the technology. To understand why, we need to talk a little bit about computer architecture.
Stripped down to the barest essentials, a computer needs three core elements:
- A processor — an engine for executing instructions and performing calculations
- Storage — a place to hold programs and data
- I/O (input/output) — a way to get programs and data into and out of the system
Storage, however, is more complicated than that simple description suggests, because of the trade-off between cost, capacity, and speed. When we say "computer memory", the first thing you probably think of is RAM — or more accurately, DRAM, Dynamic Random Access Memory. It's where most of the code and data your computer is actively working on at any given time is held. But DRAM has a number of disadvantages.
First, it is volatile — remove the power, and in moments, all of the data is gone. So you can't rely on it to keep your programs and data when the computer is turned off.
Second, even the fastest DRAM is a lot slower than any modern CPU. This means that every time the CPU needs to access DRAM, it has to wait. And if the CPU had to go to DRAM for every instruction it executed, it could never run at full speed. So processors contain a small amount of very fast cache memory located right next to the CPU cores. (In most modern designs, up to three levels of it, actually, each level larger but slower than the one before.) This cache is usually SRAM, static RAM. Data is copied from main memory into the cache so that it is close to the processor when needed, allowing the processor to run at full speed as much of the time as possible.
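To make the difference concrete, here's a minimal C sketch (my own illustration, not anything from the article) that sums the same array twice: once sequentially, so each cache line fetched from DRAM is fully used, and once with a cache-line-sized stride, so the caches help far less. The array size and 64-byte line size are assumptions about a typical machine; on most hardware the strided pass comes out several times slower even though it does exactly the same amount of arithmetic.

```c
/* Cache-locality demo: same work, very different access patterns.
 * Build with e.g.  cc -O2 cache_demo.c  */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (64 * 1024 * 1024)   /* 64 Mi ints = 256 MiB, far larger than any cache */

static double seconds(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
    int *a = malloc((size_t)N * sizeof *a);
    if (!a) return 1;
    for (size_t i = 0; i < N; i++) a[i] = 1;

    /* Sequential pass: every cache line fetched from DRAM is fully used. */
    double t0 = seconds();
    long sum1 = 0;
    for (size_t i = 0; i < N; i++) sum1 += a[i];
    double t1 = seconds();

    /* Strided pass: touches the same N elements, but each access lands on a
     * different cache line, which has been evicted by the time it is reused. */
    long sum2 = 0;
    size_t stride = 16;        /* 16 ints = 64 bytes = one typical cache line */
    for (size_t s = 0; s < stride; s++)
        for (size_t i = s; i < N; i += stride) sum2 += a[i];
    double t2 = seconds();

    printf("sequential: %.3fs  strided: %.3fs  (sums %ld %ld)\n",
           t1 - t0, t2 - t1, sum1, sum2);
    free(a);
    return 0;
}
```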
"If it's so much faster, then why don't we just use SRAM for everything?"
Because static RAM requires a lot more space on the die than dynamic RAM does, which makes it a lot more expensive. Your computer's RAM, these days, is measured in gigabytes, while the SRAM cache in your processor is still measured in kilobytes, a million times smaller. If all of the DRAM in your computer were replaced with cache SRAM, your computer would cost more than your house.
So you now have two types of storage: SRAM cache for code and data the CPU is actively working with, and DRAM for everything else it might need. But the DRAM won't hold that data when the power is off. So you need yet a THIRD layer of storage, something that will keep programs and data available when the computer is off. This started out as punch card decks and paper tape, then magnetic tape, then magnetic storage on mechanical hard disks and drums, and now mechanical disks (aka "spinning rust") are being replaced by flash memory, which is faster than mechanical magnetic storage and consumes less power. This replacement has already progressed far enough that magnetic storage is now often considered "near-line" storage, because mechanical hard disks are so much slower than solid-state disks (but also a lot cheaper).¹

Unfortunately, flash RAM has its own drawbacks: it is still much slower than DRAM, it is still several times more expensive than spinning-rust magnetic storage, it can only be erased in entire blocks at a time, and it can only survive a limited number of erase-and-rewrite cycles, because every erase and rewrite degrades the memory cells.² (This is why your smartphone has both RAM to run programs in and flash RAM for storage, instead of just being all flash RAM.)
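To see why the block-erase rule matters so much, here's a toy model in C (my own sketch, with a made-up block size and cycle limit, not how any real flash controller works): updating even a single byte "in place" means reading, erasing, and reprogramming the whole block, and every erase uses up part of that block's limited lifetime.

```c
/* Toy model of a single flash erase block with a finite program/erase budget. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BLOCK_SIZE   4096          /* hypothetical erase-block size */
#define ERASE_LIMIT  3000          /* hypothetical program/erase cycle budget */

struct flash_block {
    uint8_t  data[BLOCK_SIZE];
    unsigned erase_count;
};

/* Update one byte "in place": read, erase, and reprogram the whole block. */
static int flash_write_byte(struct flash_block *blk, size_t offset, uint8_t value)
{
    uint8_t copy[BLOCK_SIZE];

    if (blk->erase_count >= ERASE_LIMIT)
        return -1;                          /* block is worn out */

    memcpy(copy, blk->data, BLOCK_SIZE);    /* read the whole block */
    copy[offset] = value;                   /* change one byte */
    memset(blk->data, 0xFF, BLOCK_SIZE);    /* erase: all cells back to 1s... */
    blk->erase_count++;                     /* ...which costs one cycle */
    memcpy(blk->data, copy, BLOCK_SIZE);    /* reprogram the whole block */
    return 0;
}

int main(void)
{
    struct flash_block blk = { .erase_count = 0 };
    memset(blk.data, 0xFF, BLOCK_SIZE);

    /* Hammering one byte the way a program hammers a variable in DRAM
     * exhausts the block almost immediately. */
    unsigned writes = 0;
    while (flash_write_byte(&blk, 0, (uint8_t)writes) == 0)
        writes++;
    printf("block worn out after %u single-byte updates\n", writes);
    return 0;
}
```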
And now we have the full picture of that deceptively simple "storage" item, which turns out to have three layers:
- Fast SRAM cache right next to the processor for it to execute operations in
- Much larger, but slower, DRAM main memory to hold the bulk of code and data in use
- Bulk storage (magnetic or flash memory) for data the computer is not actively using
To recap:
- Why do we need SRAM as well as DRAM? Because DRAM is too slow.
- Why can't we use SRAM where we now use DRAM? Because SRAM is too expensive.
- Why do we need mass storage as well as DRAM? Because data stored in DRAM does not persist when the power is off.
- Why can't we just use flash memory in place of DRAM? Because it's too slow and has a limited lifetime in terms of number of write cycles.
And now we can finally ask the question: Why did Optane matter?
And the answer to that is: Because Optane (a kind of resistive RAM based on the 3D XPoint technology co-developed by Intel and Micron Technology) was non-volatile, like flash memory; nearly as fast as DRAM; and had a far longer write lifetime than flash (many millions of write cycles instead of tens of thousands), although it cost more per gigabyte than flash. That meant it had the potential to replace BOTH main memory AND mass storage.
So why didn't this happen?
And the short answer there is, "Because to use it fully in this way required too many paradigm shifts all at once." The entire way that computers handle storage and retrieval of data would have had to be re-engineered from the ground up.
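To give a feel for the scale of that shift, here is a rough sketch of the programming model byte-addressable persistent memory invites: the application maps a persistent region straight into its address space and updates it with ordinary loads and stores, instead of pushing data through read() and write() calls and a filesystem. The path below is hypothetical, and real Optane deployments used DAX-mounted filesystems and libraries such as PMDK with cache-line flush instructions rather than a plain msync(); this is only meant to show how different the model is from conventional storage I/O.

```c
/* Sketch of direct load/store access to a persistent region (POSIX mmap). */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const size_t size = 4096;
    /* "/mnt/pmem/counter" is a hypothetical path to a persistent region. */
    int fd = open("/mnt/pmem/counter", O_CREAT | O_RDWR, 0600);
    if (fd < 0 || ftruncate(fd, size) != 0) return 1;

    /* Map the persistent region straight into the address space. */
    long *counter = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (counter == MAP_FAILED) return 1;

    /* An ordinary store -- no write() system call, no block I/O in the code. */
    (*counter)++;

    /* Make the store durable before trusting it to survive a power loss. */
    msync(counter, size, MS_SYNC);
    printf("counter is now %ld\n", *counter);

    munmap(counter, size);
    close(fd);
    return 0;
}
```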
The concept is not dead. Kioxia and Everspin are working on similar technologies — Kioxia with what it calls Storage Class flash memory, Everspin with a spintronic³-derived technology that it calls "spin-transfer torque magnetoresistive random access memory", or STT-MRAM. (Don't ask me to explain what that means; without reading the technical papers, I only understand about half of that description myself.) There's a lot of room yet for advances in persistent memory, and the field may actually move faster if the players pushing its edges don't have to compete with Intel (or Chipzilla, as The Register likes to call it).
Still, Optane mattered. And now you have at least some idea of why.
#computers #memory #hardware #persistence #storage #Optane
¹ Mechanical hard disks are hitting technological limits, not least in how small and how close together it is possible to make the magnetic domains, which is why many high-capacity mechanical hard disks now use a technology called Shingled Magnetic Recording, or SMR. Much like SSDs, SMR drives can only be erased and rewritten a whole zone (a band of overlapping tracks) at a time. SMR offers high storage densities, but it is EXCRUCIATINGLY slow and barely any use for anything except near-line and archival storage.
² That said, most modern premium-quality SSDs using 3D NAND flash technology, the current state of the art, carry endurance ratings of multiple full drive writes per day over their entire designed service life, which is up to ten years for some models. This is partly due to better memory cell technology and partly due to wear-leveling firmware, which tries to balance writes intelligently across the device to avoid write "hot spots" that rewrite the same blocks again and again.
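For the curious, here is a deliberately simplified sketch of the wear-leveling idea (my own illustration, not how any particular SSD firmware works): logical blocks are remapped so that repeated writes to the same logical address land on whichever physical block has been erased the fewest times, spreading the wear evenly instead of burning out one spot.

```c
/* Simplified wear-leveling sketch: remap hot logical blocks across physical blocks. */
#include <stdio.h>

#define NBLOCKS 8

static unsigned erase_count[NBLOCKS];   /* wear per physical block */
static int      mapping[NBLOCKS];       /* logical block -> physical block */

/* Pick the least-worn physical block for the next write. */
static int least_worn(void)
{
    int best = 0;
    for (int i = 1; i < NBLOCKS; i++)
        if (erase_count[i] < erase_count[best])
            best = i;
    return best;
}

static void write_logical(int lblock)
{
    int pblock = least_worn();      /* steer the write away from hot spots */
    erase_count[pblock]++;          /* programming costs this block a cycle */
    mapping[lblock] = pblock;       /* remember where the data now lives */
}

int main(void)
{
    /* A workload that rewrites logical block 0 over and over... */
    for (int i = 0; i < 80; i++)
        write_logical(0);

    /* ...still ends up spreading wear across all physical blocks. */
    for (int i = 0; i < NBLOCKS; i++)
        printf("physical block %d erased %u times\n", i, erase_count[i]);
    return 0;
}
```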
³ Instead of storing data in the polarity of magnetic domains like hard disks do, or in cell charge like DRAM or flash RAM, spintronic devices use "up" or "down" electron spin to encode 1s and 0s. Many new technologies for non-volatile memory are in development currently, including spintronics, magnetoresistive RAM, memristors, and even solid-state optical memory.