On Mon, Nov 25, 2024 at 11:08:13AM -0500, David Mason via talk wrote:
> I have spent more than 24 hours researching this... many rabbit holes!
>
> As Hugh noted, on the Intel side, ECC awareness is mostly a Xeon thing. AMD
> is much more supportive, but only with some chipsets/motherboards. ECC
> memory also comes in 2 flavours:
> 1) Registered (buffered) 80-bit wide, like
> https://www.kingston.com/datasheets/KF556R36RB-32.pdf
> 2) Unregistered (unbuffered) 72-bit wide, like:
> https://www.kingston.com/datasheets/KSM56E46BD8KM-32HA.pdf
> and, as far as I can tell, the two are not interchangeable if you want ECC.
Apparently at the moment, most DDR5 RDIMMs are EC8 ECC, which is 80 bits
wide, and most UDIMMs are EC4 ECC, which is 72 bits wide. I don't believe
there is any requirement for it to be that way.
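The 72/80-bit widths fall straight out of DDR5's two 32-bit subchannels
per DIMM, with 4 or 8 side-band check bits added per subchannel. A quick
sketch of the arithmetic (python, purely illustrative):

# DDR5 puts two independent 32-bit subchannels on each DIMM; ECC adds
# side-band check bits per subchannel: 4 for EC4, 8 for EC8.
for name, ecc_bits in (("EC4 (typical UDIMM)", 4), ("EC8 (typical RDIMM)", 8)):
    width = 2 * (32 + ecc_bits)
    overhead = 2 * ecc_bits / 64
    print(f"{name}: {width}-bit wide, {overhead:.1%} extra bits")
# EC4 (typical UDIMM): 72-bit wide, 12.5% extra bits
# EC8 (typical RDIMM): 80-bit wide, 25.0% extra bits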
You cannot mix buffered and unbuffered, as far as I remember. Servers
with a lot of slots need buffered memory, while systems with only a
couple of slots per channel don't.
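Once the box is up, it's worth checking that the kernel actually enabled
ECC. A minimal sketch reading the standard EDAC sysfs counters (assumes
Linux with an EDAC driver loaded, e.g. amd64_edac on Ryzen):

# Check that Linux's EDAC layer sees an ECC memory controller and
# report its error counts.
from pathlib import Path

mc_root = Path("/sys/devices/system/edac/mc")
controllers = sorted(mc_root.glob("mc[0-9]*"))
if not controllers:
    print("no EDAC memory controller found -- ECC may not be active")
for mc in controllers:
    ce = (mc / "ce_count").read_text().strip()  # corrected errors
    ue = (mc / "ue_count").read_text().strip()  # uncorrectable errors
    print(f"{mc.name}: corrected={ce} uncorrectable={ue}")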
> I wanted to take advantage of the NVME SSDs, as they are an order of
> magnitude faster than SATA, but as I mentioned in another thread, I want
> RAIDZ3, so I want at least 8 SSDs. So I came across
> https://www.amazon.ca/Highpoint-SSD7540-PCIe-8-Port-Controller/dp/B08LP2HTX3/
> which supports 8 M.2 SSDs in a single PCIe-x16 slot.... the only
> drawback is $1660. So I also found
> https://www.amazon.ca/gp/product/B0863KK2BP/ (and many cheaper ones without
> fan/heatsink) which supports 4 M.2 SSDs (but requires turning on
> bifurcation mode) and is just over $100. Great, I thought, I'll throw 2 of
> those in and I'm good to go.... so I went off and looked for
> ECC-supporting motherboards with 2 PCIe-x16 slots. But when I looked at
> that adapter closely, I discovered I needed to worry about PCI Express
> lanes!
>
> https://linustechtips.com/topic/1497718-basics-about-pci-lanes-chipset-lanes-and-cpu-lanes-help-for-newbie/
> has a good explanation about these, but the bottom line is that these are
> how the PCIe devices get access to memory via the CPU. Intel Core chips
> have 20, Ryzen has 24 usable (Xeon and Threadripper have 100+). So I went
> off and looked at Xeon and ThreadRipper chips and motherboards for a
> while.... but my budget didn't extend that far, and this *is* supposed to
> be mainly a file server. So I could only have 1 of those 4x M.2 interface
> boards. So I ended up looking at boards with 1 PCIe-x16 slot and 4 M.2-x4
> slots. Because Intel has 4 fewer usable lanes, this basically
> brought me to Ryzen.
>
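To put the lane math in one place, here's roughly how the 24 usable
Ryzen lanes get spent in a build like this (a sketch; which M.2 slots
hang off the CPU versus the chipset is board-specific, so check the
manual):

# Rough CPU lane budget (a sketch -- exact slot wiring is board-specific).
# Ryzen 7000 exposes 24 usable PCIe lanes after the x4 chipset link.
CPU_LANES_USABLE = 24
budget = {
    "x16 slot -> 4x M.2 carrier, bifurcated x4/x4/x4/x4": 16,
    "CPU-attached M.2 slot": 4,
    "second CPU-attached M.2 slot (board-dependent)": 4,
}
for use, lanes in budget.items():
    print(f"{lanes:>2} lanes: {use}")
print(f"{sum(budget.values()):>2} of {CPU_LANES_USABLE} usable lanes; "
      "any remaining M.2 slots run off the chipset")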
> So I ended up with:
> 1x ASUS Prime X670E-PRO WiFi
> <https://www.amazon.ca/gp/product/B0BF6VKQP4/ref=ppx_yo_dt_b_asin_title_o00_s02?ie=UTF8&psc=1>
> motherboard (considered Asrock X670E Taichi
> <https://www.asrock.com/mb/AMD/X670E%20Taichi/index.asp> but couldn't
> source it)
> 1x AMD Ryzen™ 7 7700 8-Core
> <https://www.amazon.ca/gp/product/B0BMQHSCVF/ref=ppx_yo_dt_b_asin_title_o00_s02?ie=UTF8&psc=1>
> CPU - 65 watts, includes cooler
> 2x Kingston Server Premier 32GB 5600MT/s DDR5 ECC
> <https://www.amazon.ca/gp/product/B0C7W4GK6R/ref=ppx_yo_dt_b_asin_title_o02_s00?ie=UTF8&psc=1>
> memory
> 1x ASUS Hyper M.2 x16 Gen 4
> <https://www.amazon.ca/gp/product/B0863KK2BP/ref=ppx_yo_dt_b_asin_title_o01_s00?ie=UTF8&psc=1>
> PCIe adapter
> 1x TEAMGROUP T-Force Z540 2TB
> <https://www.amazon.ca/gp/product/B0CGR7RNCD/ref=ppx_yo_dt_b_asin_title_o00_s00?ie=UTF8&psc=1>
> SSD
> 7x TEAMGROUP MP44 2TB
> <https://www.amazon.ca/gp/product/B0C3VCD5Z8/ref=ppx_yo_dt_b_asin_title_o00_s01?ie=UTF8&psc=1>
> SSDs
> 1x SAMA 850W ATX3.0 Fully Modular 80 Plus Gold
> <https://www.newegg.ca/sama-xf-series-xf850w-850-w/p/1HU-02S6-00030?Item=9SIB41PJT48552>
> power supply - even though 350W would do
> 1x Corsair 4000D Airflow CC-9011201-WW White
> <https://www.newegg.ca/white-corsair-4000d-airflow-atx-mid-tower/p/N82E16811139157?Item=N82E16811139157>
> mid-tower case
>
> Total with taxes, just over $3000.
>
> Note that this will run mostly headless, apart from maintenance, so I'm
> more than happy with the built-in Radeon GPU.
> I have one high performance SSD which will have small boot and swap
> partitions. The rest are the best cheap SSDs I could find. All 8 will
> mostly be one large partition each for ZFS, for a total of about 10TB of RAIDZ3. I
> could have doubled that for an additional $1250.
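For anyone following along, the ~10TB is just (8 drives - 3 parity) x
2TB. A sketch, ignoring ZFS metadata/padding overhead and the TB-vs-TiB
gap, so real usable space will be somewhat lower:

# Back-of-envelope RAIDZ capacity.
def raidz_usable_tb(drives: int, parity: int, drive_tb: float) -> float:
    return (drives - parity) * drive_tb

print(raidz_usable_tb(8, parity=3, drive_tb=2))  # 10.0 -> ~10TB usable
print(raidz_usable_tb(8, parity=3, drive_tb=4))  # 20.0 -> the doubled option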
> I found https://ca.pcpartpicker.com/list/ very useful, except it doesn't
> list ECC memory, and doesn't understand PCIe adapters, so it thought I
> couldn't hook up all my M.2s, and it didn't list the SAMA PSU.
>
> As this has turned out to be a pretty beefy box, I will likely be running
> Proxmox on top of Debian with ZFS.
So 4 NVMe drives, each with 4 PCIe lanes, in a single x16 PCIe slot?
Should give decent bandwidth, although how much of that can your network
connection even take advantage of?
After all, what good is ~256 Gbps of NVMe if your ethernet is 1 Gbps or even 10 Gbps?
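To put rough numbers on that (raw PCIe 4.0 line rate with 128b/130b
encoding; real throughput is lower on both sides):

# Raw line rates only; protocol overhead cuts real numbers further.
pcie4_lane_gbps = 16 * (128 / 130)      # PCIe 4.0: 16 GT/s, 128b/130b coding
slot_gbps = 16 * pcie4_lane_gbps        # ~252 Gbps for an x16 slot
for nic_gbps in (1, 10, 25):
    share = nic_gbps / slot_gbps
    print(f"{nic_gbps:>2} GbE can drain about {share:.1%} of the x16 slot")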
--
Len Sorensen