
| From: Russell Reiter via talk <talk@gtalug.org>
| Optane was intended to be a cache memory to increase the performance of
| conventional spinning HD's under Windows OS. However, I've been booting
| Fedora from both a 500gb SSD and 32gb Optane Nvme as I tinker with my own
| desktop.

I think Optane is just a brand name, and that brand name gets attached to
several different things, some of which are not yet being sold.

The original promise was: non-volatile memory with speed close to RAM and
price close to flash. (It also needs better durability than flash.)
There's a big gap between those, and anything in between ought to have a
market. They just haven't been able to accomplish much.

- they wanted to produce things that fit in RAM sockets. That meant either
  (1) no change from the RAM interface (unlikely, but suggested-by-omission
  in early marketing) or (2) memory interfaces augmented to support the new
  protocols (delivered with some new Xeons, I think)

- the NVMe stuff has had much worse durability than originally promised

- the NVMe stuff has had quite small capacity compared with most SSDs

- NVMe SSDs have mostly been fast enough that the Optane stuff isn't
  compellingly better

| Most certainly booting to a login prompt is fractionally quicker
| on the Nvme than on the conventional SSD.

That's what I'd expect. But I'll admit to no experience with Optane, and I
haven't been following it closely.

| However recently I up-sized my
| Nvme and have populated my M.2 slots with a 250gb WD black Nvme for boot
| and now added an additional 1TB to the second M.2 slot with F29 still on
| the SSD. Copying a 100gb image to the 1TB drive really hit performance tho

SSDs come with different performance trade-offs.

Most inexpensive SSDs have (on-board) controllers with only small amounts
of RAM. This makes them slow down a lot after a modest burst of intensive
writing. (A rough way to see this for yourself is sketched at the end of
this message.) That's a fine trade-off for many of us, but not for all
workloads.

You can pay more and get SSDs with enough RAM to avoid this performance
problem. I don't know enough to give specific advice.

I have recently learned that some cheap NVMe drives can ask the OS to
allocate system RAM for the exclusive use of the controller. This isn't a
conventional data cache, but something much stranger. It's called "Host
Memory Buffer" (HMB). I think that vendors don't explain it because they
think consumers won't understand it.

HMB might be a great trade-off, or a horrible hack. I don't know. What
happens when the power fails? Or when the system crashes or reboots?

Recent versions of Linux and Windows support HMB. I assume that UEFI
firmware does not, so use of HMB must be an optional speed-up. (The second
sketch at the end of this message shows one way to check whether your
drives ask for an HMB.)

| and that was probably due to the lack of a decent heat sink.
| I just ordered a heatsink for the internal 1TB

Why do you think that this was heat-related? It might be, but that would
not be my first guess. (I am not an expert on this. The last sketch at the
end of this message is one way to watch drive temperatures during a big
copy.)

| Perhaps with the NUC form factor heat might
| be a problem on a larger sized Nvme but with USB-C you have wiggle room for
| adaptation.

The NUC form factor certainly reduces heat dissipation and thus limits
components. But the main such component is the CPU. I've not heard of heat
being a problem for consumer 3.5" SSDs (what Evan intends to use) or NVMe
drives.
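
A few sketches, since I referred to them above.

First, the "cheap SSDs slow down after a burst of writing" point. This is
a crude probe, not a proper benchmark (a tool like fio does it far better):
it writes a large file in fixed-size chunks, fsyncs each chunk, and prints
per-chunk throughput. The path and sizes are made up for illustration. On a
drive with only a small on-board cache you would expect the rate to drop
sharply partway through.

    #!/usr/bin/env python3
    # Rough sustained-write probe.  Assumes Linux, Python 3, and enough
    # free space at TARGET.  fsync() after each chunk keeps the page cache
    # from hiding the drive's real behaviour.
    import os, time

    TARGET = "/mnt/test/bigfile"   # hypothetical path: point at the drive under test
    CHUNK = 256 * 1024 * 1024      # 256 MiB per chunk
    CHUNKS = 40                    # ~10 GiB total

    buf = os.urandom(CHUNK)        # incompressible data, so controller tricks can't cheat
    fd = os.open(TARGET, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    try:
        mv = memoryview(buf)
        for i in range(CHUNKS):
            t0 = time.monotonic()
            written = 0
            while written < CHUNK:            # handle partial writes
                written += os.write(fd, mv[written:])
            os.fsync(fd)                      # force the chunk out to the device
            mb_s = (CHUNK / (1024 * 1024)) / (time.monotonic() - t0)
            print(f"chunk {i:2d}: {mb_s:7.1f} MB/s")
    finally:
        os.close(fd)
        os.unlink(TARGET)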
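Second, HMB. If you're curious whether your own drives ask for one, this is
how I'd look on Linux. It assumes the nvme-cli package and root. The /sys
path is the nvme-pci module parameter that caps how much RAM the kernel will
grant any one controller; id-ctrl's hmpre/hmmin fields are the drive's
preferred and minimum HMB sizes, and a nonzero hmpre means the drive wants
one. Treat the details as a starting point rather than gospel.

    #!/usr/bin/env python3
    import glob, subprocess

    # Per-controller cap (in MiB) that the Linux nvme driver will grant as an HMB.
    CAP = "/sys/module/nvme/parameters/max_host_mem_size_mb"
    try:
        with open(CAP) as f:
            print(f"kernel HMB cap per controller: {f.read().strip()} MiB")
    except OSError:
        print("nvme module parameter not found (driver not loaded?)")

    # Ask each controller (not the namespaces) what it wants.
    for ctrl in sorted(glob.glob("/dev/nvme[0-9]*")):
        if not ctrl[len("/dev/nvme"):].isdigit():
            continue  # skip namespaces such as /dev/nvme0n1
        try:
            out = subprocess.run(["nvme", "id-ctrl", ctrl],
                                 capture_output=True, text=True, check=True).stdout
        except (FileNotFoundError, subprocess.CalledProcessError) as e:
            print(f"{ctrl}: could not run nvme id-ctrl ({e})")
            continue
        hmb = [ln.strip() for ln in out.splitlines()
               if ln.strip().startswith(("hmpre", "hmmin"))]
        print(ctrl, *(hmb or ["no hmpre/hmmin lines found"]), sep="\n  ")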
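Last, the heat-sink question. Rather than guessing, you can watch the drive
temperature while repeating the 100gb copy and see whether it climbs as the
throughput falls. This sketch assumes a kernel new enough that the nvme
driver registers hwmon sensors; on older kernels "nvme smart-log" on the
controller device reports the same composite temperature.

    #!/usr/bin/env python3
    import glob, time

    def nvme_temps():
        """Return {hwmon path: temperature in C} for every NVMe hwmon device."""
        readings = {}
        for hwmon in glob.glob("/sys/class/hwmon/hwmon*"):
            try:
                with open(hwmon + "/name") as f:
                    if f.read().strip() != "nvme":
                        continue
                with open(hwmon + "/temp1_input") as f:
                    readings[hwmon] = int(f.read()) / 1000.0  # millidegrees -> degrees C
            except OSError:
                continue
        return readings

    # Poll every five seconds; run the big copy in another terminal.
    while True:
        temps = nvme_temps()
        if not temps:
            print("no NVMe hwmon sensors found (kernel too old?)")
            break
        print("  ".join(f"{dev.split('/')[-1]}: {t:.1f} C"
                        for dev, t in sorted(temps.items())))
        time.sleep(5)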