Re: IcyDock ramble: home server case

So, @wendell asked for interesting stuff for uATX home lab servers, specifically mentioning SilverStone.

Two cases I personally find very interesting are SST’s older SG11 and SG12. Same interior, but different exteriors. My use case is slightly different: relatively little data (maybe a TB), but I want IOPS, so I want an all-flash solution.

First of all, these cases are small - 23 liters. In that volume, they fit:

  • 3x 3.5" internal
  • 9x 2.5" internal
  • 1x 5.25" external

Given how easy this case seems to be to work with (watch, or at least skim, this one - the guy filmed the whole build process in an SG12), the lack of hot-swap for those nine 2.5" drives doesn’t seem all that important for a home server.

Add in an IcyDock 6x 2.5" hot-swap bay in that 5.25", and you’ve got yourself freaking fifteen 2.5" drives in 23L.
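Just for fun, a back-of-the-envelope sketch in Python of what that density could mean; the per-drive capacity and parity overhead are made-up assumptions for illustration, not a recommendation:

```python
# Back-of-the-envelope for an SG11/SG12 with an IcyDock 6x2.5" cage.
# Capacity and parity numbers are illustrative assumptions only.

BAYS_25 = 9 + 6          # internal 2.5" bays + IcyDock cage in the 5.25" bay
DRIVE_TB = 1.0           # assumed 1 TB SATA SSDs
PARITY_DRIVES = 2        # assumed two drives' worth of parity (RAID-Z2-style)

raw_tb = BAYS_25 * DRIVE_TB
usable_tb = (BAYS_25 - PARITY_DRIVES) * DRIVE_TB

print(f'{BAYS_25} drives, {raw_tb:.0f} TB raw, ~{usable_tb:.0f} TB usable')
# -> 15 drives, 15 TB raw, ~13 TB usable
```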


I was thinking more of the CS350 with PCIe extensions: use an ATX case’s extra slots and M.2-to-PCIe risers to get extra x4 slots. The X570 version of the ASRock Rack board would probably allow all the ports and M.2 slots to be active at once.


My biggest issue with uATX or ITX form factor systems is the lack of SATA on the motherboards.

Until the next-gen AMD stuff comes out, AMD needing a discrete GPU severely limits the slot options for HBAs and whatnot.

Essentially, you’d need to snag that ASRock Rack board (significantly more expensive than typical X-series boards of similar build quality) or use port multipliers and accept a reduction in peak performance.

Well, you pay for the 10Gbit NICs and the IPMI. Subtract that and it’s more or less in line with other boards; the non-10Gbit version is like $299. If you accept x8 on the GPU slot, you can use x8 for an HBA on the second slot, or two HBAs if you have an APU or IPMI for display output. Same as many other boards. And even next-gen AMD won’t break with the 16+8 lane layout we have today, so this problem won’t be solved in the near term.
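To put rough numbers on the lane math, a minimal sketch; the slot widths match the scenarios above, and the 16-lane total is an assumption about a typical AM4-style CPU, not a spec sheet:

```python
# Rough lane budget for the x8/x8 scenarios above.
# The 16-lane total is an assumption about a typical AM4-style CPU.

CPU_LANES = 16

configs = {
    "GPU only":          {"GPU": 16},
    "GPU + HBA":         {"GPU": 8, "HBA": 8},
    "2x HBA (APU/IPMI)": {"HBA1": 8, "HBA2": 8},
}

for name, slots in configs.items():
    used = sum(slots.values())
    assert used <= CPU_LANES, f"{name} oversubscribes the CPU lanes"
    print(f"{name}: {used}/{CPU_LANES} lanes -> {slots}")
```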

Imagine having SlimSAS on all boards. It fits on every uATX/ITX board and gets you all the SATA you need, independent of form factor.
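For scale, a quick sketch of what that would buy, assuming SFF-8654 8i headers broken out to SATA (8 ports per header); the two-header board is hypothetical, not an existing product:

```python
# What SlimSAS headers would buy on a small board: SFF-8654 8i carries
# 8 lanes, so one header breaks out to 8 SATA ports. Two headers on a
# uATX board is a hypothetical, not an existing product.

PORTS_PER_8I = 8
headers = 2

print(f"{headers}x SlimSAS 8i -> {headers * PORTS_PER_8I} SATA ports, no add-in card")
# -> 2x SlimSAS 8i -> 16 SATA ports, no add-in card
```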

Yeah, that’s a good point. Maybe I should have another good look at it; I keep discounting 10G NICs as a necessity.

Disagree that it’s a problem, though. I went into it in detail in the other thread.

Only helpful if people are willing to go with expensive breakout cables (ugly), or if every case switches to backplanes (significantly more expensive).

Boards with 10Gbit NICs are usually $100-150 more expensive than the ones without. That applies to pretty much all platforms, from AM4 through Threadripper to EPYC. Sucks if you don’t need it, but there are usually alternatives. I haven’t checked prices lately, but I got my -2L2T for 405€ and the non-10Gbit version was 295€ or something. That was 9 months ago, so things may have changed.

Well, getting rid of M.2 and SATA ports would certainly help with the limited physical space on those very small boards, and it might also free up an extra slot. But yeah… SFF connectors are hell, we all love them :slight_smile:


The way I see it, great for servers, terrible for consumers.

There are non-10GbE variants too. My favorite is the X470D4U. I spent quite a lot of time looking at their models and what’s available in my area, and ended up settling on that one because it has two M.2 slots hanging off the chipset, perfect for boot drives.

That said, the slots on the board are configured unusually: all three PCIe slots are direct-to-CPU, with the x16 bifurcated across two physical x16 slots as x8/x8, and the physical x8 slot getting the x4 that usually feeds the M.2 on consumer motherboards.

Overall, IMO this board has a perfect PCIe layout for a home server.

There are also 6 SATA ports from the chipset and two more via an ASM1061 (which is a controller, not a port multiplier).
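Pairing that, hypothetically, with the fifteen-drive SG11/SG12 build from earlier in the thread, the ports work out; the 8-port HBA here is an assumed add-in card, not something on the board:

```python
# Port tally for an X470D4U-style build feeding the 15-drive SG11/SG12
# idea from earlier. The 8-port HBA is an assumed add-in card.

chipset_sata = 6     # from the chipset
asm1061_sata = 2     # via the ASM1061 (a controller, not a port multiplier)
hba_ports = 8        # assumed 8-port HBA in one of the x8 slots

total = chipset_sata + asm1061_sata + hba_ports
drives = 9 + 6       # internal 2.5" bays + IcyDock cage

print(f"{total} SATA ports vs {drives} drives -> {'fits' if total >= drives else 'short'}")
# -> 16 SATA ports vs 15 drives -> fits
```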

As for 10GbE, a used NIC will be way cheaper than whatever the price difference between the motherboards is.


Also, wow, almost three weeks of no activity and suddenly six replies :stuck_out_tongue: