Sorry, meant to reply-all, in case any of this is of interest to others.

On Mon, 25 Nov 2024 at 21:33, David Mason <dmason@torontomu.ca> wrote:
On Mon, 25 Nov 2024 at 15:38, Lennart Sorensen <lsorense@csclub.uwaterloo.ca> wrote:
On Mon, Nov 25, 2024 at 11:08:13AM -0500, David Mason via talk wrote:
> I have spent more than 24 hours researching this... many rabbit holes!
>
> As Hugh noted, on the Intel side, ECC awareness is mostly a Xeon thing. AMD
> is much more supportive, but only with some chipsets/motherboards. ECC
> memory also comes in 2 flavours:
> 1) Registered (buffered) 80-bit wide, like
> https://www.kingston.com/datasheets/KF556R36RB-32.pdf
> 2) Unregistered (unbuffered) 72-bit wide, like:
> https://www.kingston.com/datasheets/KSM56E46BD8KM-32HA.pdf
> and, as far as I can tell, they are not interchangeable if you want ECC.

Apparently at the moment, most RDIMM DDR5 is EC8 ECC, which is 80-bit,
and most UDIMM is EC4 ECC, which is 72-bit. I don't believe there is any
requirement for it to be that way.

You cannot mix buffered and unbuffered, as far as I remember. Servers
with a lot of slots need buffered memory, while systems with only a
couple of slots per channel don't.

I had seen the buffered one, and it seemed better priced than the unbuffered, but I eventually found unbuffered at an even better price, which was good, since that was what the motherboard/CPU combo needed.
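In case the 72- vs 80-bit widths puzzle anyone else, here's my rough understanding of the arithmetic (DDR5 splits each DIMM into two 32-bit subchannels; EC4 adds 4 ECC bits per subchannel, EC8 adds 8), as a quick Python sketch rather than anything authoritative:

DATA_BITS_PER_SUBCHANNEL = 32   # DDR5: two independent 32-bit subchannels per DIMM
SUBCHANNELS = 2

def dimm_width(ecc_bits_per_subchannel):
    # total width = (data + ECC) bits per subchannel, times two subchannels
    return SUBCHANNELS * (DATA_BITS_PER_SUBCHANNEL + ecc_bits_per_subchannel)

print("EC4 (typical ECC UDIMM):", dimm_width(4), "bits")   # -> 72
print("EC8 (typical DDR5 RDIMM):", dimm_width(8), "bits")  # -> 80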
 
> I wanted to take advantage of NVMe SSDs, as they are an order of
> magnitude faster than SATA, but as I mentioned in another thread, I want
> RAIDZ3, so I want at least 8 SSDs. So I came across
>....
> Note that this will run mostly headless, apart from maintenance, so I'm
> more than happy with the built-in Radeon GPU.
> I have one high-performance SSD which will have small boot and swap
> partitions. The rest are the best cheap SSDs I could find. All 8 will be
> mostly one large partition each for ZFS, for a total of about 10TB of
> RAIDZ3. I could have doubled that for an additional $1250.
>
> As this has turned out to be a pretty beefy box, I will likely be running
> Proxmox on top of Debian with ZFS.

So 4 NVMe drives each with 4 PCIe lanes then in a single x16 PCIe slot?

So the PCIe 5.0 x16 slot turns into 4x PCIe 4.0 x4 (because that's what the multiplexer card provides), and of the two CPU-attached M.2 x4 slots one is PCIe 5.0 and one is PCIe 4.0, so 6 drives have direct connections to the CPU. The other 2 will go through the X670 chipset, which means they are multiplexed over the single PCIe 4.0 x4 link between the chipset and the CPU. This is my understanding from the specs: https://www.asus.com/ca-en/motherboards-components/motherboards/prime/prime-x670e-pro-wifi/techspec/ but I haven't been able to find a block diagram. I base some of what I "understand" on the diagram of the Taichi near the end of https://linustechtips.com/topic/1497718-basics-about-pci-lanes-chipset-lanes-and-cpu-lanes-help-for-newbie/

So... not all Gen 4 x4, but probably close enough, and the best I can do without a Xeon or Threadripper. And given that I have 64GiB of memory, it probably doesn't much matter. :-)
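For a rough sanity check on the lane math, assuming about 2 GB/s per PCIe 4.0 lane and 4 GB/s per PCIe 5.0 lane after overhead (round numbers I'm assuming, not measurements), and that the two chipset-attached drives share the single x4 uplink:

# Approximate per-lane throughput in GB/s; treat these as assumptions.
GBYTES_PER_LANE = {4: 2.0, 5: 4.0}   # PCIe generation -> GB/s per lane

def link_gbytes(gen, lanes):
    return GBYTES_PER_LANE[gen] * lanes

card    = 4 * link_gbytes(4, 4)  # 4 drives on the x16 -> 4x PCIe 4.0 x4 card
m2_gen5 = link_gbytes(5, 4)      # CPU-attached M.2, PCIe 5.0 x4
m2_gen4 = link_gbytes(4, 4)      # CPU-attached M.2, PCIe 4.0 x4
chipset = link_gbytes(4, 4)      # 2 drives sharing the X670's x4 uplink to the CPU

print("rough aggregate ceiling:", card + m2_gen5 + m2_gen4 + chipset, "GB/s")  # 64.0

Those are link ceilings, of course, not what the drives or ZFS will actually deliver.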
 
Should give decent bandwidth, although how much of that can your network
connection even take advantage of?

After all, what good is 256Gbps NVMe if your ethernet is 1Gbps or even 10Gbps?

I'm happy to have the network be the bottleneck. As I mentioned, I plan to run Proxmox on it, and the VMs I run there may use up some of that bandwidth. I could have saved maybe $500 by replacing all the NVMe SSDs with SATA, but saving 16% for a disk array that is 10x slower didn't make a lot of sense to me. Alternatively, I could have doubled the storage, but 10TB is already a huge bump from my current storage. (And it's too late! I have the NVMe M.2s sitting on my desk, waiting for the memory, case, and power supply.)
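For the record, the back-of-the-envelope numbers I'm working from (the 2TB-per-drive figure and the bandwidth ceiling are round assumptions, not benchmarks):

# RAIDZ3 usable capacity plus a network-vs-NVMe comparison; all assumptions.
drives, parity = 8, 3   # RAIDZ3 keeps 3 drives' worth of parity
drive_tb = 2.0          # assuming 2 TB per SSD
print(f"usable: ~{(drives - parity) * drive_tb:.0f} TB")    # ~10 TB

nvme_gbps = 256         # roughly the 4-drive PCIe 4.0 x4 card alone
for net_gbps in (1, 10):
    print(f"{net_gbps:>2} Gbps ethernet is at most {net_gbps / nvme_gbps:.1%} of that")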
 
../Dave