Help increasing my server's storage

Currently running a Dell PowerEdge R710 server | 2x X5650 = 12 cores | 128GB RAM | H700, with 2 terabytes of NVMe storage on PCIe cards. It has 8 drive bays total.

I'm looking to upgrade the storage on the server to about 26 terabytes, but I also want some room to add more if needed. What is the cheapest way to do this? Would it be cheaper to get another blade that's a JBOD and use that, or is it better to try to find really cheap 2.5-inch SAS drives over time? Or is there another option I am missing? It's running a Linux server OS, I can't remember which one at the moment, if that makes any difference. I appreciate any advice you can give, since the current setup is simply not going to work for me.

Do you have the 8 drive slots filled with any hard drives right now? If not, you can put two 14-18TB drives in and be at your storage target already. Or go with two 20-22TB drives in a mirror; that won't quite hit your target capacity, but it does give you some redundancy. Large-capacity drives are pretty cheap these days, usually around $280-300 each for 18TB. The newest 22TB drives are still a little pricey since they just released.
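If it helps to sanity-check the math, here's a rough sketch, purely for illustration (plain Python; the drive sizes and the `usable_bytes` helper are my own assumptions, not anything vendor-specific). It compares a pooled pair against a mirrored pair and shows the TB-vs-TiB difference that makes drives look smaller once formatted:

```python
# Rough capacity check for a 2-drive setup (illustration only).
# Drives are sold in TB (10^12 bytes) but the OS reports TiB (2^40),
# which is why an "18TB" drive shows up as roughly 16.4 TiB.
TB = 10**12
TiB = 2**40

def usable_bytes(drive_tb: float, count: int, mirrored: bool) -> int:
    """Usable raw bytes for `count` identical drives, pooled or mirrored."""
    total = int(drive_tb * TB) * count
    return total // 2 if mirrored else total

for drive_tb, mirrored in [(14, False), (18, False), (20, True), (22, True)]:
    usable = usable_bytes(drive_tb, 2, mirrored)
    layout = "mirror" if mirrored else "pooled"
    print(f"2x {drive_tb}TB {layout}: {usable / TB:.0f} TB raw, "
          f"~{usable / TiB:.1f} TiB as the OS will report it")
```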

Other than that, you can add a JBOD chassis like you said, with an HBA in the main server. Either a rackmount or tower form factor. For a tower, you can make one with any old tower you have lying around. For rackmount, you probably want a 4U chassis, as they have a good drive count and can easily fit standard PSUs and expander cards. Though you can use a 2U chassis for this with a more standard server PSU and a low-profile expander or passthrough bracket.

He said 2.5" drive bays, so no chance of sticking 18TB drives in. Large-capacity, RAID-compatible 2.5" hard drives really aren't a thing, so he'd have to jump to multi-TB SSDs, which will get expensive.

The H700 only supports 512-byte logical sectors, so the drives need to be 512e ("AF" with 512-byte emulation) and not "4Kn" models. It doesn't pass through TRIM commands either, so performance and longevity will be rather hurt unless you pick up enterprise SSDs.
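If you want to double-check what the drives and controller actually report on the Linux side, something like this works as a quick sketch (the sysfs paths are standard kernel interfaces; the interpretation comments are just a rough guide):

```python
# Show logical/physical sector sizes and discard (TRIM) visibility for
# each block device, read straight from sysfs.
#   512/512  -> native 512B, 512/4096 -> 512e ("AF"), 4096/4096 -> 4Kn
#   discard_max_bytes == 0 usually means no TRIM through this controller
from pathlib import Path

for dev in sorted(Path("/sys/block").iterdir()):
    queue = dev / "queue"
    if not queue.is_dir():
        continue
    logical = (queue / "logical_block_size").read_text().strip()
    physical = (queue / "physical_block_size").read_text().strip()
    discard = (queue / "discard_max_bytes").read_text().strip()
    trim = "yes" if discard != "0" else "no (or not passed through)"
    print(f"{dev.name}: logical={logical}B physical={physical}B trim={trim}")
```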

In short, a disk shelf is probably the way to go. Or perhaps it's time to retire that old space-heater of a server and pick up a cheap R730 with 3.5" drive bays for easy expansion and lower electricity and cooling needs.


I know about HBAs. Would a JBOD chassis need a motherboard and power supply with RAM and a processor? How does one connect an HBA to a JBOD?

Hey, thanks for the idea. I'm not too sure what a disk shelf is, but I assume it's a JBOD. This is for school projects and personal projects, so ideally I want it to be as cheap as possible. If a JBOD is the cheapest thing, that's what I will go with. Any recommendations and things to look out for? I didn't even catch the 2.5-inch drive thing till after I got the server and realized I can't even find large-capacity drives at that size.

No, a direct-attached JBOD shelf has power and connections built into it; you just add an external HBA in the server and some SAS cables.

The only downside is that it's not redundant beyond the data redundancy provided by a filesystem like ZFS.

Also, SAS is backwards compatible with SATA, so you can mix SAS and SATA drives in a SAS disk shelf.


Do you want another rackmount form-factor unit?
Do you want the extra performance of drives directly attached to an HBA? Or is connecting drives over 10Gb USB-C fine?

I'm just wondering because, if you want this as cheap as possible just to get more space, then a USB enclosure in a desktop form factor is usually much cheaper than a rackmount unit, and if you don't have an HBA already, that adds a bit to the cost as well. Though you can find used 9300-series HBAs on eBay for like $60-80.

Uhh, rackmount might be the best option, but tbh I don't know what an HBA is. Is that what a "headboard adapter" is, or am I misunderstanding it? And USB-C is totally fine.

Host Bus Adapter. A RAID card without RAID. A controller card where you plug in your drives. Similar to, e.g., onboard SATA controllers, but more sophisticated. The LSI 9300 series will give you internal and/or external connectors for up to 24 drives.
Either an internal cable where one connector can supply 4 drives, or an external connector to link the card with an external chassis that has drives.
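If you add one later, a quick way to confirm what the OS actually sees is to list the SCSI hosts and the drivers bound to them. A small sketch (it reads standard sysfs entries; the driver names in the comment are the usual ones, so verify against your own output):

```python
# List SCSI host controllers and the kernel driver bound to each one.
# A Dell H700 typically shows up as "megaraid_sas"; LSI 9200/9300-series
# HBAs (or RAID cards flashed to IT mode) show up as "mpt2sas"/"mpt3sas".
from pathlib import Path

for host in sorted(Path("/sys/class/scsi_host").iterdir()):
    driver = (host / "proc_name").read_text().strip()
    print(f"{host.name}: driver={driver}")
```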


There are 2 kinds of SAS/SCSI drive controllers in the server world: RAID and HBA. HBAs give you complete control over the drives individually, which is ideal for ZFS and similar. RAID controllers are kind of all over the place (there are less annoying ones and really annoying ones); they do hardware RAID and need more configuration before you can use the drives. They're popular with the Windows crowd, where ZFS isn't really a thing.

With LSI controllers (LSI is the name of the company that makes the chipsets), usually the only difference between RAID and HBA is extra cache/NVRAM on the controller card, and the firmware. What people usually do is flash HBA firmware (sometimes also called IT-mode firmware) onto RAID controllers, because they want to run Linux or TrueNAS or some such non-Windows thing on the server.

These controllers are SAS controllers, so instead of a SATA cable you'd typically use a SAS cable with SFF-8087 connectors going to a SAS backplane or SAS drives if you're working with enterprise gear, or you could get breakout cables that go from a single SFF-8087 on one end to 4x SATA connectors on the other end.

SFF-8088 is just the external version of the SFF-8087 connector: more metal, more grounding, and sturdier.

It turns out the SAS protocol is a little bit like Ethernet, and with SAS there are these things called expanders that work kind of like Ethernet switches. You can find, e.g., a 16-port expander: run one cable between the expander and the HBA, and the remaining 15 ports (at 4 drives per port) let you hook up 60 SATA drives to a single SAS HBA port.

They’d all share the bandwidth of that single cable between the HBA and the expander.
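To put rough numbers on that sharing, here's a back-of-the-envelope sketch (my own illustration; the lane speeds are nominal SAS2/SAS3 rates and the drive counts are just examples):

```python
# How much uplink bandwidth each drive gets when many drives sit behind
# one expander that connects to the HBA over a single x4 SAS link.
def per_drive_gbps(lanes: int, lane_speed_gbps: float, drives: int) -> float:
    return lanes * lane_speed_gbps / drives

for gen, lane_speed in (("SAS2", 6.0), ("SAS3", 12.0)):
    for drives in (15, 30, 60):
        share = per_drive_gbps(4, lane_speed, drives)
        print(f"{gen} x4 uplink, {drives} drives: "
              f"~{share:.1f} Gb/s per drive if all are busy at once")
```

For spinning hard drives that shared uplink is rarely the bottleneck, which is why expander-based shelves are so common.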

A disk shelf is an enclosure for lots of drives, often with 2 or more power supplies of its own, and an integrated expander. You slot the drives into the disk shelf and connect the shelf to your server using 1 or more SFF-8088 cables. The PCB in a disk shelf with the drive connectors and the expander controller chip is usually called a backplane.


Thanks a lot. Are disk shelves typically cheap, or are only the not-so-good ones cheap these days? Would I be able to use 2 connecting cables to double the potential speed, or do I only get one possible connection with a disk shelf?

Depends. E.g., a popular one is the NetApp DS4246, which lets you hook up 24 drives; eBay has various offers at around $500 apiece.

If it works, has all the parts, and you need space for that many drives: yes.

And yes, you can use two cables to it.

Tbh I think 10 or 12 drive bays would be enough, I appreciate the help.

I'll throw out some options for you:

1st option:
Stick one of these in your server:

https://www.ebay.com/itm/234987470364

and connect it to this tower form factor “disk shelf” with both cables

https://www.pc-pitstop.com/8-bay-bare-12g-jbod

That would be a direct connection to each drive, no expander, and the most you can connect with an "-8e" HBA is 8 drives. That is your cheapest option with high performance, though it gives you fewer drive bays than you want and is not rackmount like you want. Costs $400 and gives you 8 drive bays. If you used 18TB drives, that would give you a max of 144TB raw (see the rough capacity sketch after these options).

2nd option:
a larger HBA:

https://www.ebay.com/itm/374588682817

and this chassis:

https://www.pc-pitstop.com/15-bay-4u-12gbs-sas-sata-trayless-rackmount-jbod-jr1512t

I don't know if your server has a PCIe expansion slot that can take a full-size card, though. You might only be able to use low-profile cards, so maybe this one wouldn't fit. This gives you rackmount but is expensive. You still get a direct connection to each drive, no expander, but that isn't necessarily a meaningful advantage: you probably can't use all that bandwidth with regular hard drives anyway.

3rd option:
You can use the first -8e HBA I linked with this chassis and connect to all 15 drives at a combined total of 24Gb/s over two cables. This one has an expander in it:

https://www.pc-pitstop.com/15-bay-12g-trayless-rackmount-sas-expander-jbod-w500w-psu-er1512t

Also expensive though.

4th option:

https://www.ebay.com/itm/234987470364

plus an expander card:

https://www.ebay.com/itm/134431659639

and this chassis:

https://www.ebay.com/itm/155456169623

This is your 2nd-cheapest option, but it is used hardware. That is fine in most circumstances and shouldn't be a big deal.
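Since your target is ~26TB usable with room to grow, here's one last rough sketch (my own illustration; the drive sizes, bay counts, and the raidz2 layout are just assumptions to plug your own numbers into) of what different bay counts give you after ZFS-style redundancy:

```python
# Rough usable capacity for the bay counts discussed above, assuming
# identical drives in a single ZFS raidz2 vdev (2 drives' worth of parity).
# Real usable space is a bit lower (metadata, TB-vs-TiB, slop space).
def raidz2_usable_tb(bays: int, drive_tb: float) -> float:
    """Approximate usable TB for one raidz2 vdev spanning all bays."""
    return max(bays - 2, 0) * drive_tb

for bays in (8, 12, 15):                  # 8-bay tower vs 12 vs 15-bay 4U
    for drive_tb in (4, 8, 18):
        usable = raidz2_usable_tb(bays, drive_tb)
        note = "  <- clears the ~26TB target" if usable >= 26 else ""
        print(f"{bays} bays x {drive_tb}TB, raidz2: ~{usable:.0f} TB usable{note}")
```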


Cheap option: Buy an old R710 that takes 3.5" drives (max 6).
Either one where you'll need to move some components, like RAM, from your current system:

Or one that already has similar specs to yours:

Better option: Just upgrade to a newer server with similar specs:
