I'd posted information regarding M.2 NVMe heating and file transfer times on another thread, and I have since installed this heatsink. It has made enough of a difference that I thought I'd update.


While the sales blurb said it is mandatory to use the supplied thermal pads, it came with three pads layered in plastic backing and no instructions. I used the two thinner pink ones and didn't realize at the time that the thicker blue one came apart as little blue squares. On reflection it looks like those were intended for padding the backplate around the chips, because the sales pictures show a blue pad protruding from the end of the enclosure, but I'd have to remove the drive from the slot again to apply them. It was trickier to install this time around with my slightly arthritic fingers and all that bulk around it, so that will have to wait.
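For anyone wanting to check whether the heatsink is actually helping, drive temperature can be read with smartmontools. This is just a sketch: it assumes smartctl's usual `Temperature:` line for NVMe devices, and the sample value below is made up for illustration.

```shell
#!/bin/sh
# Pull the composite temperature out of smartctl output for an NVMe drive.
# On a real system you'd pipe in:  sudo smartctl -a /dev/nvme0
nvme_temp() {
    # Expects smartctl -a output on stdin; prints the temperature in Celsius.
    grep -i '^Temperature:' | awk '{print $2}'
}

# Example with a captured line (the 42 here is a made-up sample value):
sample='Temperature:                        42 Celsius'
printf '%s\n' "$sample" | nvme_temp
```

Running it before and after a big copy would show how well the pads are moving heat off the controller.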

This copy from a 512GiB WD Black NVMe in a USB-C enclosure was done using pipe viewer (pv) in the console rather than the Nautilus file widget, so it's an apples-to-oranges comparison and ymmv. The previous Nautilus copy from my installed SSD, formatted ext4, took almost 15 minutes, slowing to 95MiB/s about two thirds of the way through. This copy from an NTFS file system on the NVMe in an external USB-C enclosure finished in 4 minutes and hardly wavered from 500MiB/s.

[R3eiter@archon ~]$ pv /run/media/R3eiter/..path to 512GiB USB-C../backup.tar.gz > /run/media/R3eiter/..path to 1TB Nvme../backup.tar.gz
 118GiB 0:03:56 [ 512MiB/s] [================================>] 100%  

However, copying from one place to another on the 1TB NVMe seems to have saturated its on-board cache. In the last minute of this copy, speed dropped from around 500MiB/s to 25MiB/s, with an occasional leap back up to 475MiB/s.

[R3eiter@archon ~]$ pv /run/media/R3eiter/DATA/Nvme/bak/backup.tar.gz > /run/media/R3eiter/DATA/Nvme/backup.tar.gz 
 118GiB 0:06:35 [ 307MiB/s] [================================>] 100%  
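The averages pv reports can be sanity-checked with simple shell arithmetic: total MiB divided by elapsed seconds. The second figure comes out a touch under pv's 307MiB/s, presumably because pv's average uses a slightly more precise elapsed time than the rounded 6:35.

```shell
# 118 GiB copied: once in 3m56s (USB-C -> NVMe), once in 6m35s (NVMe -> itself)
mib=$((118 * 1024))                               # 118 GiB expressed in MiB
echo "USB-C copy:   $((mib / (3*60+56))) MiB/s"   # 120832 / 236 seconds
echo "on-disk copy: $((mib / (6*60+35))) MiB/s"   # 120832 / 395 seconds
```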

Booting from NVMe sure is quick. I took Hugh's advice and masked PolicyKit to deal with a very slow shutdown watchdog on rebooting. I also masked Plymouth, which shaved the total boot time down by a bit more than a second: around ten seconds on the SSD and 7 seconds on the NVMe.

F30 seems to have glossed over the GRUB boot screen, while the Plymouth graphics widget was chewing up almost 3 seconds. I only left PolicyKit active long enough to install the proprietary Nvidia driver, which is now available from the software apps repo for F30. I did this because F30 now defaults to Wayland, which doesn't play well with Nvidia cards. Hopefully that will change.

I used the F30 software install app for the Nvidia update because I had a very odd experience with the GUI for the BIOS setup. After my last firmware update I had to enable VT-d in the BIOS from the keyboard, with the mouse unplugged, to stop the output freezing. There was also a problem cold booting the OS (when I upgraded F27 to 28 and then 29) while a TV was plugged into the Nvidia card via HDMI and a monitor into DisplayPort: unable to decide which output of the Nvidia card to recognize, the system woke neither.

Here's what my current Fedora 30 startup info shows.

[R3eiter@archon ~]$ sudo systemd-analyze
Startup finished in 1.257s (kernel) + 1.413s (initrd) + 4.429s (userspace) = 7.101s
graphical.target reached after 4.421s in userspace

[R3eiter@archon ~]$ sudo systemd-analyze blame
          2.484s systemd-udev-settle.service
           609ms dracut-initqueue.service
           573ms initrd-switch-root.service
           569ms lvm2-monitor.service
           527ms systemd-logind.service
           412ms firewalld.service
           360ms akmods.service
           302ms udisks2.service
           296ms libvirtd.service
           251ms systemd-resolved.service
           224ms systemd-machined.service
           205ms lvm2-pvscan@259:2.service
           198ms sssd.service
           187ms systemd-udevd.service
           173ms systemd-journal-flush.service
           148ms systemd-journald.service
           129ms dracut-pre-pivot.service
           127ms user@42.service
           108ms ModemManager.service
           106ms upower.service
            96ms avahi-daemon.service
            94ms user@1000.service
            85ms bluetooth.service
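When hunting for boot-time hogs in a listing like the one above, it helps to filter the blame output down to units over some threshold. A rough sketch, assuming times are formatted as `2.484s` or `609ms` (it doesn't handle minute-style entries like `1min 30s`):

```shell
#!/bin/sh
# Filter `systemd-analyze blame`-style output to units slower than a
# threshold given in milliseconds, e.g.:
#   systemd-analyze blame | slow_units 1000
slow_units() {
    awk -v t="$1" '
        { ms = 0; v = $1 }
        v ~ /ms$/      { sub(/ms$/, "", v); ms = v + 0 }     # "609ms"  -> 609
        v ~ /[0-9]s$/  { sub(/s$/,  "", v); ms = v * 1000 }  # "2.484s" -> 2484
        ms > t { print $1, $2 }
    '
}

# Example on two lines from the listing above:
printf '%s\n' '2.484s systemd-udev-settle.service' '609ms dracut-initqueue.service' \
  | slow_units 1000
```

In my case that makes it obvious systemd-udev-settle.service is the single biggest contributor.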

This post is obviously more seat-of-the-pants than science, but there are very few situations where a user can see with the naked eye that performance was sub-optimal. This was one of them.