where’s the pics + benchmarks!
I picked my full tower case up from climate-controlled storage today. The aluminum side panels have a dusting of oxidized aluminum. Now to pull the old dual-Xeon board with 24 GB of DDR3 so I can install the new MB.
I need to move some standoffs in order to install the new motherboard. Mine are round with no flat spots. I have some channel locks that may work to remove them, but it is nearly midnight here, so that is a project for another day.
Probably the soonest I can work on this is 5PM CDT on Saturday.
Motherboard installed.
The standoffs were riveted in place. Channel lock pliers pried them off.
Now one of the 8-pin motherboard power connectors doesn't match the 8-pin connectors from my power supply; the other one does. The ATX 20+4 is connected. I need to find the additional power cables I last used in 2009 (or order spares).
I should have done this while waiting for the cpu…
What's up with the PCI Express slots on these boards? I count 5 slots total, only 3 of which are x16? Threadripper boards usually have 7 full-length PCIe x16 slots, though often one or two run at x8.
Do these boards use a lot of U.2 or other things that spend the PCIe lanes?
So I searched for some specs and found the board here: H13SSL-N | Motherboards | Products | Supermicro
So it has:
- 64 lanes spread over 5 PCIe slots, only 3 of which are full-length x16.
- Only two M.2 slots, PCIe 4.0 rather than 5.0, and neither appears to have any cooling.
- Only 6 USB3 ports (including all internal headers).
- Dual 1 Gbit LAN (Broadcom) + 1 IPMI management LAN.
- 8x SATA ports.
However, it also sports MCIO ports or connectors, which I have never heard of. It appears to be PCI Express over a cable?
So where did all the PCI Express lanes go? Are they just unused on this motherboard? Genoa has 128 PCIe 5.0 lanes in a single-socket system, so why use PCIe 4.0 on the M.2 slots?
All very confusing to me. This stuff is stupidly expensive, but even a low-end B650 AM5 board has M.2 with PCIe 5.0, so I am a bit confused…
Some more bizarre things I stumbled upon:
Please note - many Supermicro parts are only sold in combination with a complete tested system order to protect our own assembly line due to the shortages.
Also, some Supermicro barebone systems require a minimum amount of CPU/memory/HDD/SSD/NVMe.
For these reasons, some orders might be cancelled as they are not accepted by Supermicro or Ahead-IT.
Can anybody confirm this? Or is this EU-only, and in the US you can actually just buy this board?
Running tally of lanes (cumulative total on the left):
- 64: 64 lanes for the slots
- 88: +24 lanes for MCIO
- 96: +8 for the 2 M.2 slots
- 100: +4 for 10GbE (PCIe 3, probably 4 lanes)
- 102: +2 for IPMI
- 110: +8 for 2x PCIe 3 USB3 controllers with 2 USB3 ports each

18 lanes unaccounted for.
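If anyone wants to poke at the arithmetic, here it is as a throwaway script. The slot split and the per-controller lane counts are my own guesses from the spec page, not figures from Supermicro documentation:

```python
# Rough tally of where the H13SSL-N might spend Genoa's 128 PCIe lanes.
# The per-device lane counts are guesses from the spec sheet; the 10GbE,
# IPMI, and USB controller figures in particular are assumptions.
GENOA_LANES = 128

allocations = [
    ("PCIe slots (3x x16 + presumably 2x x8)", 64),
    ("MCIO connectors (3x x8)", 24),
    ("M.2 slots (2x x4)", 8),
    ("10GbE controller (PCIe 3.0, assumed x4)", 4),
    ("IPMI / BMC (assumed x2)", 2),
    ("USB3 controllers (assumed 2x x4)", 8),
]

total = 0
for device, lanes in allocations:
    total += lanes
    print(f"{total:>4}  +{lanes:<3} {device}")

print(f"\n{GENOA_LANES - total} lanes unaccounted for")
```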
The 3 MCIO ports each handle 8 lanes of PCIe.
I got a $20 double-ended symmetrical cable rated for PCIe 4.0 and a $16 MCIO to dual x4 M.2 adapter card. With these, each MCIO port can give me a cheap pair of M.2 slots.
Unlike AM4 and AM5, there is no choosing between PCIe and SATA; I get both.
EPYC does bifurcation whatever way I want it, and the motherboard supports that.
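Once the adapter cards arrive, the plan is to sanity-check that each drive behind the MCIO adapters actually trained at x4. A minimal sketch, assuming Linux with the nvme driver loaded and the standard PCI sysfs attributes:

```python
# Print the negotiated PCIe link width/speed for each NVMe controller,
# so a drive stuck at x2 or gen3 behind the MCIO -> dual M.2 adapter
# is easy to spot. Assumes Linux with the nvme driver and sysfs.
import glob
import os

def read_attr(pci_dev, attr):
    """Read a sysfs attribute, returning 'unknown' if it is missing."""
    try:
        with open(os.path.join(pci_dev, attr)) as f:
            return f.read().strip()
    except OSError:
        return "unknown"

for ctrl in sorted(glob.glob("/sys/class/nvme/nvme*")):
    pci_dev = os.path.join(ctrl, "device")  # symlink to the underlying PCI device
    print(f"{os.path.basename(ctrl)}: "
          f"x{read_attr(pci_dev, 'current_link_width')} "
          f"@ {read_attr(pci_dev, 'current_link_speed')} "
          f"(max x{read_attr(pci_dev, 'max_link_width')} "
          f"@ {read_attr(pci_dev, 'max_link_speed')})")
```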
I got the NT model, so I also have dual 10G Ethernet.
I NEED ECC. I have lost 1.5 years of my professional career to RAM errors that would have been prevented by ECC RAM. If I have a choice in the matter, I am going to use ECC RAM in my personal computers. The AM5 motherboards that support DDR5 ECC are not yet shipping.
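Since ECC is the whole point for me, I plan to keep an eye on the corrected/uncorrected counters once the box is up. A rough sketch, assuming Linux with an EDAC driver loaded (amd64_edac on EPYC) exposing the usual sysfs counters:

```python
# Report ECC error counters exposed by the Linux EDAC subsystem.
# Assumes an EDAC driver is loaded; otherwise the mc/ directory is empty.
import glob
import os

def read_count(path):
    try:
        with open(path) as f:
            return int(f.read().strip())
    except OSError:
        return None

controllers = sorted(glob.glob("/sys/devices/system/edac/mc/mc*"))
if not controllers:
    print("No EDAC memory controllers found (driver not loaded?)")

for mc in controllers:
    ce = read_count(os.path.join(mc, "ce_count"))  # corrected errors
    ue = read_count(os.path.join(mc, "ue_count"))  # uncorrected errors
    print(f"{os.path.basename(mc)}: corrected={ce} uncorrected={ue}")
```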
And there are 12 channels of DDR5 with ECC. So far I am using 2x 32 GB DIMMs, but in the future, after larger ones become more common, I can move up. 12x 32 GB is 384 GB, which is still 3 to 6 maxed-out AM5 systems. 64 GB DIMMs are only 120% the price of 32 GB DIMMs, so 768 GB is still a reasonable price, especially compared to buying 6-12 AM5 PCs.
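The capacity math as a throwaway calculation; the 1.2x price ratio is the figure above, and the 128 GB AM5 ceiling is my own assumption for the comparison:

```python
# Back-of-the-envelope capacity comparison for 12 DDR5 channels, one DIMM
# per channel. The 64 GB DIMM costing ~120% of a 32 GB DIMM comes from the
# post; the 128 GB AM5 ceiling is an assumption used only for comparison.
CHANNELS = 12
AM5_MAX_GB = 128

for dimm_gb, rel_price in [(32, 1.0), (64, 1.2)]:
    total_gb = CHANNELS * dimm_gb
    print(f"{CHANNELS} x {dimm_gb} GB = {total_gb} GB "
          f"(~{total_gb / AM5_MAX_GB:.0f} maxed-out AM5 systems, "
          f"relative DIMM price {rel_price:.1f}x)")
```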
My last PC was an Intel NUC. That computer was so frustrating because of its limited expansion. The computer before that was a dual-socket 1366 (I think) Xeon workstation, which was much less frustrating to work with.
I had an urgent need to deploy a computing cluster in Jan 2023 to show a working proof of concept, and I ran face first into the wall of unsupported hardware due to CPU microcode updates (lots of Intel security holes). Basically all of my PCs were unusable for even a proof of concept; I eventually got it working, but my aging hardware held me back about 9 days. I could have gone AM4 or AM5, but I tend to use a lot of RAM, and this build lets me add RAM as needed without also having to build additional systems. This build gets me current. I have usually been in server administration and have been supporting FreeBSD, Linux, etc. since 1996. For my work, CPU usage has not been a limitation; IO and RAM capacity have always been the limiting factors. This system will effectively remove those limitations.
I also have an Apple M2 MacBook Air (24 GB / 2 TB), which complements the server. I can do most things on my laptop, but there are some things a laptop sucks at, and at those this computer should excel.
I am having an ATX 8-pin cable shipped here by Thursday, and a full pack of cables by the following Thursday. The first cable should be enough for me to turn it on.