
Sorry, meant to reply-all, in case any of this is of interest to others. On Mon, 25 Nov 2024 at 21:33, David Mason <dmason@torontomu.ca> wrote:
On Mon, 25 Nov 2024 at 15:38, Lennart Sorensen < lsorense@csclub.uwaterloo.ca> wrote:
On Mon, Nov 25, 2024 at 11:08:13AM -0500, David Mason via talk wrote:
I have spent more than 24 hours researching this... many rabbit holes!
As Hugh noted, on the Intel side, ECC awareness is mostly a Xeon thing. AMD is much more supportive, but only with some chipsets/motherboards. ECC memory also comes in two flavours: 1) registered (buffered), 80 bits wide, like https://www.kingston.com/datasheets/KF556R36RB-32.pdf 2) unregistered (unbuffered), 72 bits wide, like: https://www.kingston.com/datasheets/KSM56E46BD8KM-32HA.pdf and, as far as I can tell, the two are not interchangeable if you want ECC.
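One way to confirm, once Linux is running, that ECC is actually active (and not just that ECC DIMMs are installed) is to look for EDAC memory controllers in sysfs. A minimal sketch, assuming a Linux system with the appropriate EDAC driver loaded; the exact controller name varies by platform:

```shell
# Check whether the kernel's EDAC subsystem has registered a memory
# controller -- if it has, ECC error reporting is actually active.
if ls /sys/devices/system/edac/mc/mc* >/dev/null 2>&1; then
    echo "EDAC controller found: ECC reporting is active"
    # Per-controller corrected-error counts, if any have occurred:
    grep -H . /sys/devices/system/edac/mc/mc*/ce_count 2>/dev/null
else
    echo "no EDAC controller: ECC absent, disabled, or driver not loaded"
fi
```

On AMD boards it is worth checking this explicitly, since some boards accept ECC DIMMs but silently run them without the ECC function enabled.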
Apparently, at the moment most RDIMM DDR5 is EC8 ECC, which is 80-bit, and most UDIMM is EC4 ECC, which is 72-bit. I don't believe there is any requirement for it to be that way.
You cannot mix buffered and unbuffered, as far as I remember. Servers with a lot of slots need buffered memory, while systems with only a couple of slots per channel don't.
I had seen the buffered one, and it seemed a better price than the unbuffered, but I eventually found unbuffered at a better price still, which was good, as that was what the motherboard/CPU combo needed.
I wanted to take advantage of NVMe SSDs, as they are an order of magnitude faster than SATA, but as I mentioned in another thread, I want RAIDZ3, so I want at least 8 SSDs. So I came across ....
Note that this will run mostly headless, apart from maintenance, so I'm more than happy with the built-in Radeon GPU. I have one high-performance SSD which will have small boot and swap partitions. The rest are the best cheap SSDs I could find. All 8 will be mostly a large partition for ZFS, for a total of about 10TB of RAIDZ3. I could have doubled that for an additional $1250.
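The 10TB figure follows from the RAIDZ3 layout: three drives' worth of space goes to parity, so with eight ~2TB drives the usable space is roughly (8 - 3) × 2 = 10TB. A sketch of the arithmetic, with a commented-out pool creation command in which the pool name and device paths are placeholders, not the real layout:

```shell
# RAIDZ3 dedicates 3 drives' worth of space to parity, so with 8 drives
# of ~2 TB each, usable capacity is roughly (8 - 3) * 2 = 10 TB.
DRIVES=8
DRIVE_TB=2
echo "usable capacity: about $(( (DRIVES - 3) * DRIVE_TB )) TB"

# Hypothetical pool creation (pool name and device paths are placeholders;
# the boot drive would contribute a partition rather than the whole disk):
# zpool create -o ashift=12 tank raidz3 \
#     /dev/nvme0n1p3 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 \
#     /dev/nvme4n1 /dev/nvme5n1 /dev/nvme6n1 /dev/nvme7n1
```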
As this has turned out to be a pretty beefy box, I will likely be running Proxmox on top of Debian with ZFS.
So 4 NVMe drives each with 4 PCIe lanes then in a single x16 PCIe slot?
So the PCIe 5 x16 turns into 4× PCIe 4 x4 (because that's what the multiplexer card is), and of the M.2 x4 slots, one is PCIe 5 and one is PCIe 4, so there are 6 drives that have direct connections to the CPU. The other 2 will go through the X670 chipset, which means they are multiplexed over a single PCIe 4 x4 link between the chipset and the CPU. This is my understanding from the specs: https://www.asus.com/ca-en/motherboards-components/motherboards/prime/prime-... but I haven't been able to find a diagram. I base some of what I "understand" on the diagram of the Taichi near the end of https://linustechtips.com/topic/1497718-basics-about-pci-lanes-chipset-lanes...
So.... not all Gen 4 x4, but probably close enough... and the best I can do without Xeon or ThreadRipper. And given that I have 64GiB of memory, it probably doesn't much matter. :-)
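Once the box is assembled, the negotiated link speed and width for each drive can be read back from sysfs, which confirms which drives got direct CPU lanes and which landed behind the chipset. A sketch, assuming Linux; drives behind the chipset will still report x4, but share the single upstream link:

```shell
# Print the negotiated PCIe generation and lane count for each NVMe drive.
found=0
for dev in /sys/class/nvme/nvme*; do
    [ -e "$dev/device/current_link_speed" ] || continue
    found=1
    printf '%s: %s, x%s\n' "${dev##*/}" \
        "$(cat "$dev/device/current_link_speed")" \
        "$(cat "$dev/device/current_link_width")"
done
[ "$found" -eq 1 ] || echo "no NVMe devices visible in sysfs"
```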
Should give decent bandwidth, although how much of that can your network connection even take advantage of?
After all, what good is 256Gbps of NVMe if your Ethernet is 1Gbps, or even 10Gbps?
I'm happy to have the network be the bottleneck. As I mentioned, I plan to run Proxmox on it, and the VMs I run there may use up some of the bandwidth. I could have saved maybe $500 if I had replaced all the NVMe SSDs with SATA, but saving 16% to have a disk array that is 10x slower didn't make a lot of sense to me. Alternatively, I could have doubled the storage, but 10TB was already a huge bump from the current storage I have. (And it's too late! I have the NVMe M.2s sitting on my desk, waiting for memory, case, and power.)
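The 256Gbps figure above is just the x16 lane math: PCIe 4.0 signals at 16GT/s per lane, so before encoding and protocol overhead the slot's one-direction ceiling is about:

```shell
# PCIe 4.0 runs at ~16 Gbps per lane per direction (raw signalling rate,
# before encoding/protocol overhead), so a full x16 slot -- whether used
# whole or bifurcated into four x4 drives -- tops out around:
GBPS_PER_LANE=16
LANES=16
echo "aggregate ceiling: $(( GBPS_PER_LANE * LANES )) Gbps"
echo "vs network: 10 Gbps (10GbE) or 1 Gbps (gigabit)"
```

So even a single drive's x4 link (~64Gbps) dwarfs a 10GbE connection; the extra bandwidth mainly benefits local workloads such as VMs and ZFS scrubs/resilvers.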
../Dave

This is the first system I've put together with UEFI. I know there is controversy about UEFI vs. BIOS. It appears that I can turn off "Secure Boot", which sounds like what I want to do to be able to install an arbitrary OS (like Debian or Proxmox). This is on an [ASUS Prime X670E-PRO WiFi]( https://www.asus.com/ca-en/motherboards-components/motherboards/prime/prime-...) motherboard. Comments? Thanks, ../Dave

David Mason via talk said on Tue, 26 Nov 2024 13:44:48 -0500
This is the first system I've put together with UEFI. I know there is controversy about UEFI vs. BIOS.
It appears that I can turn off "Secure Boot", which sounds like what I want to do to be able to install an arbitrary OS (like Debian or Proxmox). This is on an [ASUS Prime X670E-PRO WiFi]( https://www.asus.com/ca-en/motherboards-components/motherboards/prime/prime-...) motherboard.
All other things equal, on my desktops I prefer MBR so I don't need to deal with the complexification and botched implementations of UEFI. I have two disks: a 1TB NVMe, and a 14TB 7200rpm spinning rust. The former has an old-school MBR partition scheme, while the latter has a GPT partition scheme. However, as time goes on, this gets harder and harder, because some CDs, DVDs, and USB bootables are UEFI-only. SteveT Steve Litt http://444domains.com
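For anyone unsure which mode an existing system actually booted in, Linux only exposes EFI variables when it came up via UEFI, so a simple directory test settles it:

```shell
# A running Linux system exposes /sys/firmware/efi only when it booted
# via UEFI firmware; its absence means legacy BIOS (or CSM) boot.
if [ -d /sys/firmware/efi ]; then
    echo "booted via UEFI"
else
    echo "booted via legacy BIOS (or CSM)"
fi
```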

On Tue, Nov 26, 2024 at 01:44:48PM -0500, David Mason via talk wrote:
This is the first system I've put together with UEFI. I know there is controversy about UEFI vs. BIOS.
It appears that I can turn off "Secure Boot", which sounds like what I want to do to be able to install an arbitrary OS (like Debian or Proxmox). This is on an [ASUS Prime X670E-PRO WiFi]( https://www.asus.com/ca-en/motherboards-components/motherboards/prime/prime-...) motherboard.
Certainly turning off Secure Boot makes sense, unless you intend to actually take advantage of it yourself (as in, load your own key and have the OS boot files signed by that key; using Microsoft's key is not what I would consider a good use of Secure Boot). -- Len Sorensen
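On a Debian-family install, the usual way to confirm what the firmware is actually enforcing is mokutil, which reads the SecureBoot EFI variable; a sketch, assuming the `mokutil` package is available:

```shell
# Query the current Secure Boot state. mokutil reads the SecureBoot EFI
# variable, so this only gives a real answer on an EFI-booted system.
if command -v mokutil >/dev/null 2>&1; then
    mokutil --sb-state || echo "mokutil could not read the SecureBoot variable"
else
    echo "mokutil not installed (or non-EFI system); cannot query state"
fi
```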
participants (3):
- David Mason
- Lennart Sorensen
- Steve Litt