I currently have a three node Proxmox cluster and I need more storage. I have two old HP Z620s and my former gaming rig with a 7700K. I’m contemplating ditching the HPs and adding a disk shelf with an HBA card to my 7700K.
The HPs have 2 SATA3 ports and 2 SATA2 ports, all populated (SSD boot drives and three 4TB IronWolf drives). The 7700K has the same disk arrangement except the SSD is NVMe, and all its SATA ports are SATA3.
I kind of want to make things simpler. By consolidating all of the drives from the HPs into a disk shelf, I could eliminate the networking for cluster communication, cut the extra energy those machines use, and recover all the space currently lost to replication across those drives. It would give me expandability too.
Would a JBOD disk shelf off eBay be the right way to go here, or would I be better off with a server with a built-in disk shelf?
My disk shelf is just a bunch of JBOD drives that I pool in software. That is the normal way to do it these days (ZFS, Storage Spaces, StableBit DrivePool) because hardware RAID controllers have too many drawbacks. So if you can find one cheap on eBay, go for it; just be sure to check how it connects. A lot of the ones you'll find (NetApp, for example) use a non-standard connector and require a lot of CLI setup to get working.
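To give a sense of what "pooling in software" looks like with ZFS, here's a rough sketch. The pool name and device IDs below are placeholders, not a specific recipe; run as root and adapt to your drives:

```shell
# Hypothetical sketch: pooling JBOD drives with ZFS.
# Use /dev/disk/by-id names so the pool survives drives moving
# between ports or shelves.
ls -l /dev/disk/by-id/ | grep -v part   # find stable names for your drives

# Create a RAIDZ1 pool named "tank" (one drive's worth of parity)
# from three example disks (IDs are placeholders):
zpool create tank raidz1 \
  /dev/disk/by-id/ata-ST4000VN008_AAAA \
  /dev/disk/by-id/ata-ST4000VN008_BBBB \
  /dev/disk/by-id/ata-ST4000VN008_CCCC

zpool status tank   # verify every drive shows ONLINE
```

The point is that the shelf itself stays dumb: ZFS on the host does all the redundancy, so the shelf just needs to present raw disks.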
Really all you need is a tower (or rack case) with a lot of drive bays, a SAS expander card that can be powered off Molex or SATA, a power supply, and, if your expander doesn't have an external plug as its input, a passthrough bracket like these:
No other hardware is needed in the disk shelf, since all the control is done in the main PC by the HBA you're connecting to.
Some of the most common expanders people buy used on eBay are the Adaptec 82885T and Intel RES2SV240NC. Both can be powered by a 4-pin Molex, so you don't need a motherboard PCIe slot just to power them.
The main reason to use an expander is that it lets you plug in 8–24 drives while using a much cheaper and more easily found "-8e" HBA instead of a more expensive higher-port-count model, connecting to the disk shelf with just two cables for all the drives. Another good reason: in addition to connecting the hard drives, you can use that external-to-internal adapter I linked above to loop back out of the chassis. That lets you daisy-chain a second chassis as your storage needs grow, without buying a second HBA or a higher-port-count one.
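Once the HBA → expander → drives chain is cabled up, a few quick checks on a Linux host will confirm everything is visible (this assumes `pciutils` and `lsscsi` are installed; package names vary by distro):

```shell
# Sanity checks after cabling HBA -> expander -> drives.
lspci | grep -iE 'sas|lsi|raid'     # confirm the HBA itself is detected
lsscsi                              # each drive should appear as a "disk";
                                    # an SES-capable expander usually shows
                                    # up as an "enclosu" (enclosure) entry
ls -l /dev/disk/by-path/ | grep -i sas   # drives arriving via the SAS chain
```

If a drive is missing here, suspect cabling or backplane power before anything software-side.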
You can sometimes find cases for this on eBay as well. Here is an example:
$700 after shipping, but it can hold 36 drives, so for that storage density the price is actually good. I would bet it's really loud though, with dual 1U power supplies at that wattage…
I prefer 4U cases that can fit a standard consumer PSU, since those keep the noise down.
Or you can use a normal tower. I don’t remember the model I have, but I use a Corsair that I was able to buy additional drive bays for. It now has 24 drives crammed into the tower and cost something like $250.
Those PSUs, if they are actually what's in the listing, are whisper quiet, since they're said to be the SQ (SuperQuiet) models. I have one of those units in the basement and you definitely can't hear the PSUs over literally anything else. You do want to hunt for a unit with SQ PSUs though, as adding them later will be costly.
The fans in the chassis are reasonable, noise-wise, if not run full blast. However, my unit came with the fans wired to the backplanes, which ran them pretty damn loud. Wiring them to the motherboard through a fan hub instead sorted that out. They're not quiet (nowhere near the PSU levels of quiet), but the noise is certainly manageable. You probably don't want it in your office, though.
Another thing of note is that adding an active PSU cooler can bring down the noise level significantly as well. When using a passive CPU heatsink, one of the chassis fans is likely wired to the CPU fan header, and it will most likely run at a higher RPM than the rest, increasing noise.
Yes, you can do passthrough only; it depends on your drive needs. That video is pretty much exactly what I was describing for the parts you need. In that example he just uses a direct connection to each drive, which works if you only need 8–16 drives. If you want to expand beyond that, adding a second HBA is usually a no-go because of how few PCIe slots and lanes consumer PCs have today. At that point you can add expanders to the chassis and get more drives connected.
This is what I ended up piecing together. I stuffed it all in a tiny case I had laying around and it works great. Proxmox saw the drives right off the bat. I'll have to take a pic of the whole setup and post it too.
I used a step bit to make a hole for the switch, but I had to desolder the wires from it first. The case I put it all in was just some cheapo M-ATX case I had laying around, so it's pretty tight in there. I was planning on using the 5.25" bay, but the power supply cables were in the way. I do have a sixth drive in the 3.5" bay under the optical bay. So that's a total of six 4TB drives, which came out to about 19TB of usable storage with RAIDZ.
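For anyone wondering where a figure like that comes from, the back-of-envelope RAIDZ1 math for six 4TB drives looks roughly like this (the exact usable number varies with metadata and padding overhead, and with whether tools report decimal TB or binary TiB):

```shell
# Rough RAIDZ1 capacity for six 4 TB drives.
drives=6
size_tb=4
raw_tb=$(( (drives - 1) * size_tb ))   # RAIDZ1 costs one drive of parity
echo "raw capacity: ${raw_tb} TB"      # 20 TB in decimal (manufacturer) units

# Drive makers count decimal terabytes; ZFS reports binary TiB,
# so the same 20 TB shows up as roughly:
awk -v tb="$raw_tb" 'BEGIN { printf "%.1f TiB before metadata overhead\n", tb * 1e12 / 2^40 }'
```

Twenty decimal TB is about 18.2 TiB, and ZFS overhead trims that a bit further, so "about 19TB" is in the right ballpark depending on which units your tools report.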