Need help on the basics of a disk shelf

Hello all, I have my own NAS server and it is running great. However, I have recently hit the halfway mark on the number of hard drives it is able to handle. From various videos I know that a disk shelf can be used to expand a server’s hard drive capacity so that it can hold more of them. I have a few questions about them:

  1. Where can I get one? I am not in enterprise IT and don’t have a connection through my office for this. Do I need to plunder eBay, or is there a website that I am unfamiliar with?

  2. I know there is a card needed to connect the server to the shelf. What type of card is it, and can someone point me to more information on how I can get that card into my preexisting server?

  3. Are there any gotchas about adding this to a server? I have a rack and my current machine is using a Rosewill 4U case. From my perspective it’s: add the interface card, plug the shelf into said card, and it just works. Is there something I am missing?

Thank you in advance. I tried to look elsewhere on the forum first but was unable to locate anything that would help me get started down this path. If you need any information from me, let me know.

I made one with an LSI 9201-16E (4-port external HBA), a 24-bay server case, and some adapters.


I also have an LSI 2-port external HBA (I did not pay the current price) with a NetApp disk shelf (DS4243), also in a Rosewill 4U case. I’m running FreeNAS and it was pretty much plug and play. I also used these cables

eBay and Amazon will be your best bet. Anything new will probably be too expensive and overkill for homelab needs.

eBay is your best bet. Search for “disk shelf”.

You need either

  • a SAS expander, which basically multiplies your mini-SAS ports into many more SATA/SAS drive ports and draws power from the disk shelf’s expansion board. This means you need to make sure the card and shelf models are compatible with each other.

  • or the “poor man’s” version, which is a simple passthrough card that converts external mini-SAS SFF-8088 to 36-pin internal SFF-8087. That way the external ports on your HBA become internal breakout cables. This version doesn’t give you extra ports, so you will be limited to, say, 8 drives per external-facing HBA, which means you run out of PCIe lanes quickly. You will also need to set the shelf’s PSU to always power the drives on, as there are no smarts in between. (A rough comparison of the two approaches is sketched below.)
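To make the trade-off concrete, here’s a rough back-of-the-envelope sketch in Python. The SAS2 lane speed, port count and bay count are illustrative assumptions (a 2-port external HBA and a 24-bay shelf), not the specs of any particular card or shelf.

```python
# Back-of-the-envelope comparison of the two approaches above.
# All numbers are assumptions for illustration, not measurements.

SAS2_LANE_GBPS = 6   # assumed SAS2 link speed per lane
LANES_PER_PORT = 4   # a mini-SAS SFF-8088/8087 connector carries 4 lanes
EXTERNAL_PORTS = 2   # assumed 2-port external HBA

def breakout_card():
    """Passthrough 8088 -> 8087 plus breakout cables: one drive per lane."""
    drives = EXTERNAL_PORTS * LANES_PER_PORT
    gbps_per_drive = SAS2_LANE_GBPS      # each drive gets a dedicated lane
    return drives, gbps_per_drive

def sas_expander(shelf_bays=24):
    """Expander in the shelf: all bays share the lanes back to the HBA."""
    uplink_gbps = EXTERNAL_PORTS * LANES_PER_PORT * SAS2_LANE_GBPS
    gbps_per_drive = uplink_gbps / shelf_bays   # worst case, every bay busy
    return shelf_bays, gbps_per_drive

print(breakout_card())    # (8, 6)    -> few drives, full bandwidth each
print(sas_expander(24))   # (24, 2.0) -> many drives, shared bandwidth
```

The point being: the expander route hangs a whole shelf off one HBA at the cost of shared bandwidth, while the passthrough route gives each drive a dedicated lane but eats an HBA (and a PCIe slot) for every 8 drives.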

Generally speaking, disk shelves were historically only necessary if you needed massive-scale storage or high performance but didn’t want expensive SSDs.

These days you would be better off taking an alternative path rather than adding more drives to your storage array.

  1. Option one would be to set up a cluster, such as GlusterFS or even Proxmox. Basically build redundancy in at the server level and scale your storage with load balancing.

  2. Just buy bigger disks and replace them in situ. …

My preference is option 2, but this may not work for your use case. A rough back-of-the-envelope comparison is below.
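Purely as an illustration of option 2, here’s the arithmetic. The bay counts, drive sizes and the two-parity layout (Unraid- or RAID-Z2-style) are assumed numbers, not a recommendation for your array:

```python
# How much usable space you gain by swapping existing drives for bigger ones
# versus adding bays via a shelf. Drive sizes, bay counts and the dedicated
# two-parity layout are illustrative assumptions.

def usable_tb(drive_count, drive_tb, parity_drives=2):
    """Usable capacity with dedicated parity drives."""
    return (drive_count - parity_drives) * drive_tb

print(usable_tb(8, 8))    # 48 TB  -> an 8-bay array of 8 TB drives today
print(usable_tb(8, 16))   # 96 TB  -> same 8 bays after swapping in 16 TB drives
print(usable_tb(16, 8))   # 112 TB -> 16 bays of 8 TB drives via a shelf
```

Swapping drives in place roughly doubles usable space here without any extra hardware to buy, power or cool; the shelf only pulls ahead once you’ve already maxed out drive sizes.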


Thanks for the information. At the moment I have 7 of 15 disks populated in the server, with the average size being around 10 TB per disk. Like I said before, there’s no real rush in grabbing one of these now; I’m just in pure research mode on this.

I did have a side question. I am currently running Unraid Pro, which has “unlimited” drive support. I have heard of Proxmox before and how it is able to handle really large-scale things. Should I be good with just doing the Unraid thing, or should I poke at Proxmox if I want to eventually get crazy? Thanks in advance.

I don’t use Unraid so I can’t comment on its scalability, but you can download Proxmox and play with it for free to learn the ropes. I’m just building my first two-system cluster now (*), and the storage options for load-balanced VMs are fairly powerful, though I will say the documentation is oriented towards power users. From what I understand, Unraid is more forgiving.

If you already have 70 TiB of data and expect it to grow, I personally wouldn’t trust all of that to anything with a single point of failure. It gets to the point where backing up becomes challenging without rsync or a tape drive, and at that point you are doubling your hardware spend anyway.
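For what it’s worth, the rsync route can be as simple as a scheduled push of each dataset to a second box over SSH. A minimal sketch, with the dataset paths and target host left as placeholders for whatever your layout actually is:

```python
# Minimal sketch: mirror a few datasets to a second machine with rsync.
# The paths and destination below are placeholders, not anyone's real layout.
import subprocess

DATASETS = ["/mnt/tank/media", "/mnt/tank/photos"]   # hypothetical source paths
DEST = "backup@backupbox:/mnt/backup"                # hypothetical SSH target

for src in DATASETS:
    # -a preserves permissions/times, -H keeps hard links,
    # --delete mirrors deletions, --partial resumes interrupted transfers
    subprocess.run(
        ["rsync", "-aH", "--delete", "--partial", src, DEST],
        check=True,
    )
```

Whether you drive it from Python, cron or a systemd timer doesn’t really matter; the hard part at this scale is having a second machine with 70-odd TiB of disk to receive it.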

I’d suggest that if it is a pure storage server, then ZFS is a strong recommendation. If you are running a converged storage and VM system, then Unraid is simpler. Just make sure you protect your data from any experimental builds.

(*) I was going to document my Proxmox journey at some point, but basically it’s all used hardware and goes nowhere near production, as it is certainly not trusted. I’m toying with converging all my storage and compute (mainly CCTV and a print server, plus some other containers), but at the end of the day I need to be sure my personal data is resilient before I turn off the dedicated FreeNAS box. So I’m learning on junk, and I doubt I will trust anything else for a while. For those interested in the cluster setup:

System 1) An ancient X58 system and my butchered Supermicro chassis with 9 drives of various vintages, mainly mid-2000s SATA drives.

System 2) An X79 system in a desktop chassis with even older (turn-of-the-century) PATA drives (the smallest is 10 GiB) and some old SCSI drives I am amazed still run.

I’m using it to learn, so I don’t care if the arrays all die, but so far Proxmox handles the various configs and drives without drama.
