Good high-end NAS cases don't exist. Should the community make one?

No, no. The drive works fine. Not sure why it says that. I have the HBA version of the card. Maybe it's just because I didn't initialize the drives via Adaptec's tools. The drive is just exposed as a SCSI device to Linux, and I use it as a Ceph OSD.

sdh                                                                                                   LVM2_member    LVM2 001              6n9X18-2p6P-xgH3-wOjG-Hury-15Pt-qmbRy4                
└─ceph--737b9c1e--2617--4bfb--82db--bf0d3814284b-osd--block--219fd207--ff53--485b--8e92--6392bbd88713 ceph_bluestore

For dual port/multipath failover to work, you normally need a backplane that supports dual port and then you connect a 2nd controller to the backplane.

I guess his cable probably would make both ports available, but I don’t have one to test.


I went into the RAID utility and am seeing the same information about dual-port connections. I am thoroughly impressed with Adaptec's GUI and features; in my opinion it is significantly better than the LSI/Broadcom ones I'm used to. There are so many options I've never seen before to control drive behavior, like arm-momentum elevator sorting with latency cutoffs for servicing IO requests.


Also, how are you cooling your card? I just looked at the datasheet and it is 35 watts, which is significantly more than I thought it would be.

I've installed a server fan in my case (Fractal Meshify 2 XL) close to the PCI slots. Airflow from the front fans is restricted by the hard drives, so I've put the server fan behind them. I run it at about a 3% duty cycle.

   Temperature Sensors Information            
   Sensor ID                                  : 0
   Current Value                              : 35 deg C
   Max Value Since Powered On                 : 35 deg C
   Location                                   : Inlet Ambient

   Sensor ID                                  : 1
   Current Value                              : 57 deg C
   Max Value Since Powered On                 : 58 deg C
   Location                                   : ASIC

   Sensor ID                                  : 2
   Current Value                              : 38 deg C
   Max Value Since Powered On                 : 38 deg C
   Location                                   : Bottom

   Sensor ID                                  : 3
   Current Value                              : 49 deg C
   Max Value Since Powered On                 : 50 deg C
   Location                                   : Top
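
If you want to keep an eye on those temperatures programmatically, output in this layout (I'm assuming it came from something like Adaptec's arcconf tool) is easy to scrape. A minimal sketch:

```python
# Parse "Key : Value" sensor blocks like the dump above into dicts.
# The field names are taken from that dump; anything else is skipped.
FIELDS = {
    "Sensor ID": ("id", lambda v: int(v)),
    "Current Value": ("current_c", lambda v: int(v.split()[0])),
    "Max Value Since Powered On": ("max_c", lambda v: int(v.split()[0])),
    "Location": ("location", str),
}

def parse_sensors(text):
    """Group 'Key : Value' lines into one dict per 'Sensor ID' block."""
    sensors, cur = [], None
    for line in text.splitlines():
        key, sep, val = line.partition(":")
        key, val = key.strip(), val.strip()
        if not sep or key not in FIELDS:
            continue  # headers and unrelated lines
        if key == "Sensor ID":
            cur = {}  # a new ID starts a new sensor record
            sensors.append(cur)
        if cur is not None:
            name, convert = FIELDS[key]
            cur[name] = convert(val)
    return sensors

# Example: the ASIC sensor from a dump like the one above
dump = """   Sensor ID                 : 1
   Current Value             : 57 deg C
   Max Value Since Powered On: 58 deg C
   Location                  : ASIC"""
print(parse_sensors(dump))  # [{'id': 1, 'current_c': 57, 'max_c': 58, 'location': 'ASIC'}]
```

From there it's trivial to alarm on, say, `current_c > 70` for the ASIC sensor.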

I've always referenced the Backblaze HDD failure stats when choosing drives. I've bought HGST drives with low failure rates for my workstation and NAS over the last five years and not had a single issue. They were bought out by WD, but there's been no slip in quality so far. (Knocking furiously on wood.)
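
For anyone reading those reports: the AFR figure Backblaze quotes is just failures per drive-day scaled to a year. A quick sketch of the arithmetic, with made-up numbers rather than real Backblaze figures:

```python
def annualized_failure_rate(failures, drive_days):
    """Backblaze-style AFR as a percent: failures per drive-day, scaled to 365 days."""
    return failures / drive_days * 365 * 100

# Hypothetical example, NOT real Backblaze data:
# 60 failures across 2,000,000 drive-days -> roughly 1.1% AFR
print(f"{annualized_failure_rate(60, 2_000_000):.2f}%")
```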


I think SATA ports like the one linked are better for a case like this. HBAs have SFF connectors on them, but you can normally use each of those ports with a breakout cable to get four standard SATA data connections. So if the HDD drive units all had 4 SATA ports on them, standard 6 Gb/s and 12 Gb/s HBAs could both interface with each unit without any problems. Even standard expanders wouldn't have an issue.

Each of these HBAs and expander cards is an example with standard connectors, and each can be used with an SFF-8087 or SFF-8643 to four-SATA breakout cable:

and these are the cables:

Check out “chia farming” a bunch of that hardware addresses your needs.

Personally, I would not want to subject a human to being in the same room as 20 spinning hard drives. If a human is not in the room, then the room can have some noisy air filtration that does not need to be on the front of the drive enclosure, so you can use standard noisy rackmount disk shelves.

If it is rackmount anyway and a human is going to be in the same room, then at least put it in an acoustic enclosure; then noise once again doesn't matter, though you still need an air filter on the forced-air intake to the enclosure.

Last time I went down this rabbit hole, this is where I ended up:


In my experience SATA cables/connectors are far more likely to have an intermittent or bad connection than SFF-8639; part of this may be because SAS devices themselves have significantly better signal integrity than SATA devices, though.

HDDs aren't very loud if they are properly damped. I'm sitting ~6 feet from 12 of them going through a patrol read and can't hear them over all the frogs outside. Or maybe you meant that the fans cooling them can be obnoxious.

I've been going down the rabbit hole trying to figure out the best way to damp the drives, and all the good ways add excessive weight to the case. I want to make the drive cage out of lead, but I think I'm going to have to settle for magnesium instead because I want to be able to pick this thing up:

The idea is to get something with the best vibrational-dissipation ability (high up on the Y-axis) while having it be strong enough to actually resist vibration (to the right on the X-axis).

Here’s where I’m currently at on the case:

x6 5.25" bays on the front. I still need to make a sliding door to shield them from dust/debris.

From the back you can see I did away with the x6 80mm fans and went to x4 140mm fans for the HDD chamber.
Also, I'm no longer going to use welding to assemble the shell, because I want to get the two halves of the case anodized in different colors and have a harlequin theme going on.


I think you mean “Mass Loaded Vinyl”.

I just sprayed side panels on the inside of my last case with the asphalt that gets applied to the inside of wheel wells to quiet road noise. It comes in a 12oz spray can. I had to mask the slides.

If you are going to use metal, you may as well have dashed lines cut on the fold marks, so after you get it cut you can fold it without a brake.

It seems like most of the thread has been derailed. I'm here to derail it further, for the sake of knowledge. I'm going on a rant, and I would like to hear opinions.

I used to have an old Antec Sonata II filled to the brim with 3.5" drives; I posted it on the forum. It had 11 HDDs and a SATA SSD (and an old power-hog Lynnfield Xeon X3450). It was consuming tons of power (actually doubling my power bill if left on 24/7). All the drives were 2TB and I had a RAID-Z2 pool on it with a hot spare (10 spinners in the pool). That gave me a 16TB pool, for a power consumption that was beyond ridiculous for the capacity. I could do better with a RAID-Z of 3x 10TB drives if I wanted to.

I moved from power hogs to something that can actually be called portable and that uses peanuts in comparison. I can power everything from my Bluetti EB70S (tested - the max possible output on its 4x AC plugs is 800W AC), and that includes the additional UPS everything is plugged into. Without my Threadripper on, my power consumption at idle / normal operation is not even reported by the UPS; it's ridiculous. With the TR on, it's 200W out of the Bluetti and about 188W on the UPS (which is plugged into the portable power station). I don't recall what the power station reported, but I think something like 10W drawn by the UPS with the things I have on it.

My setup? A RockPro64 with 2x 10TB drives in mirror and 2x 2TB MX500s, also in mirror. I moved everything off of the NAS, but even beforehand, I hated the concept of the "forbidden router" (sorry Wendell). I'm not saying it doesn't have its purpose / usage scenarios, but it's definitely something I don't want to have. I have my containers on an Odroid N2+, and I have an Odroid HC4 waiting to be converted into a backup box (with some rubber grommets around the HDDs once I get new ones). The NAS is only a NAS. Its only duty is to serve NFS shares and iSCSI targets. A NAS shouldn't be running VMs or containers, hence why I can get away with a meager ARM CPU for my NAS.

I initially was hoping to modernize my Antec Sonata II build, so I went with the TR and an Antec P101 Silent, which can hold 8 HDDs, 2 SSDs and a 5.25" drive; you can mod it to stuff in at least 4 more HDDs if you avoid long GPUs. If you avoid the bottom expansion slots altogether (e.g. by getting a microATX board), you can probably fit another 2 or 4 drives, for a total of 14-16x 3.5" drives. If you go with an SFX PSU, you can put 2 more drives on the bottom, but with that many drives you'd be hard pressed to find a decent SFX PSU that won't die young (there are 800W SFX PSUs out there, but still…).

But I realized I really don't want the heat from the TR constantly in the room (the Sonata was already horrible in the summer; I had to shut it down every now and then). The build is silent, but has no non-M.2 drives (it used to have the 4 I moved to the RockPro64 official case). But even before the heat, the idea was to save power; I didn't want such a high power bill anymore. When I built the TR, I built it with the intention of powering it down when not needed, which is what I'm doing now. And it was a really good call; it saved my behind, because all the houses in the US where I lived had bad electrical connections. The worst I had in Europe was a whole apartment on one fuse (when the fusebox had 3 fuses), but it was still enough to power the old Sonata and 4 or 5 more computers, a fridge and a washing machine all at once, without burning the house down. Here the fuses would trip even from my PC being powered on after a while (well, I'm exaggerating, but it did happen once - generally, though, it was a room heater that would trip the fuse after a few minutes).

The Sonata was a NAS and hypervisor combo. Generally, if I am to design a datacenter, I split those into dedicated NAS hardware (lower power) and a hypervisor. And in the same spirit, that's what I did with low-powered ARM platforms. And while it wasn't me who had the brilliant (sarcasm) idea of using HP ProLiant MicroServers as NASes for a hypervisor infrastructure at my old workplace, those low-powered Celerons with 8 to 12GB of RAM served a bunch of VMs without even sweating (impressive for a RAID-10 setup; the CPUs were mostly idling, while the disks were not overloaded at all, with 20 to 40 VMs each).

I know not everyone can live with low-powered ARM stuff (even I'd had too much of it on my desktop after around 3+ years). But I'm here to ask questions, just to put them out in the open.

The rant above leads to the following:

  • Why do you need a beefy disk shelf?
  • Instead of buying a bunch of cheaper drives, why not go for fewer but more expensive, higher-capacity ones?
  • Is your workload really that big? Are you really going to make use of 24 drives, especially in a homelab environment? (I assume that is the case, given the desktop form-factor and quiet operation requirement - but the heat is still going to be there, even with 5W per drive, we are looking at 120W of power for the drives alone).
  • If it’s not a homelab, wouldn’t the physical security and the peace of mind (in terms of UPS, power generator and AC cooling) be better provided by co-location services? Buying a cheap 24+ drive shelf and another 2U server would fit the bill beautifully. Do you need that to be on a local network for some reason? If you don’t have a data room already built, then why not build one?
  • Would archiving instead be a better deal, saving you money on expensive drives, hardware and running costs alike? You may not need to have all your data available at all times, unless you are planning to make an on-demand service for many users (by which point, you should be at least co-locating, if not building your own data room). Taking infrequently accessed data offline is a really viable strategy for saving money. Tape libraries with automated arms can even load tapes for you automatically, with the right software, if some waiting time is not too much to deal with. For that matter, even cloud archiving can be worth it (just use restic to encrypt the data going to it, though).
  • Instead of 1 beefy server running all those drives, would going with more lower-powered hardware and a software solution like Ceph be a better option?

Anyone is free to comment on my setup(s), answer the questions (although they are personal questions that you should be asking yourself, I’m just pointing them out) or ask more questions.

Finally got some time to play around in CAD, here is my original layout idea with the six 5.25" drive bays:

And of course, looking at the back of the motherboard it looks something like this:

Now, any drawbacks? Well, the case interior is something like 302 x 408 x 396 mm bringing the total up to maybe 50-55 liters of fun in the end. The good part is that the cabinets will be more or less rock solid, and all you need is a way to connect 24 drives. However, with current 18-20 TB drives, even 12 drives are a whopping 240 TB of raw storage. With 22 TB drives 24 drives would allow for slightly over half a petabyte of raw storage.
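
As a quick sanity check on those raw-capacity numbers:

```python
def raw_capacity_tb(drive_count, tb_per_drive):
    """Raw (pre-RAID, pre-formatting) capacity of a stack of identical drives."""
    return drive_count * tb_per_drive

print(raw_capacity_tb(12, 20))  # 240 TB raw
print(raw_capacity_tb(24, 22))  # 528 TB raw -- just over half a petabyte
```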

Yes, these are just raw component layout sketches. Another nice part about this design is that the top / bottom of the case can easily house a touchscreen for displaying general health of the system :slight_smile:

I'm looking for any and all ideas related to case features or use cases, so we're bound to make some extra rail stubs. I know I'm only one person, so there is no way I can think of all the good ideas myself.

We’re an enthusiast community, I’m sure many of us have multi-hundred terabyte datasets we need storage for (and no i don’t mean chia or ṗ2ṗ).

my other thoughts:
I agree with you that most people should be using larger HDD sizes in their arrays; at this point it's kind of silly to have x24 2TB drives. But the larger 18-22TB drives fall outside the 5-10 IOPS-per-TB sweet spot for "general workloads". I'm going to be using 16TB drives for this case, which are at the ragged edge of that window. I would have really liked to get some multi-actuator drives if they were more available.
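
To put numbers on why capacity outruns the IOPS budget: a 7200 rpm spindle serves roughly the same random IOPS no matter how big it is, so IOPS per TB falls linearly with capacity. A rough sketch (the per-spindle figure is my ballpark assumption, not a measured number):

```python
SPINDLE_IOPS = 100  # assumed ballpark random IOPS for one 7200 rpm spindle

def iops_per_tb(drive_tb, spindle_iops=SPINDLE_IOPS):
    """Random IOPS available per TB of capacity on a single drive."""
    return spindle_iops / drive_tb

# IOPS density collapses as drives grow, even though total IOPS stays flat
for tb in (2, 16, 22):
    print(f"{tb:>2} TB: {iops_per_tb(tb):.1f} IOPS/TB")
```

With these assumptions a 16 TB drive sits near the bottom of a 5-10 IOPS/TB window and a 22 TB drive falls below it, which is the "ragged edge" effect; multi-actuator drives attack exactly this by doubling per-drive IOPS.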

I think it'll be possible to get a ~0.5PB (raw) storage server running under 200W idle with this, which is significantly better than the ~400W idle my current server draws while offering only ~200TB (raw).


It is really good at damping (basically the highest on that chart), but not good at being a structural member of the case.

The manufacturer I was planning on using has CNC press brakes that'll happily fold the solid metal for ~2 USD per fold, so no need; the extra laser time for the dashes might even cost more than the bending fees.


The FCC would like to have a word with you about EMI, lol jkjk.

This reminds me of the 80/20 contraption that Wendell showed off in one of the recent videos.


I might have started to design myself into a corner here.

I finished the drive cage. It should have very good damping qualities for its size and is hung off rubber isolators from the top; it even has little push buttons to release the HDDs.
I managed to make the drives easily hot-swappable without having to get a custom backplane made, by using little cable retainers on the back that hold the SFF-8639 cable ends in with just an o-ring.

My problem is that I can't make a hole/door in the area of the case where you'd expect to remove the drives from without making the case flimsy:

So as of the current design, it would be necessary to slide the case apart (like a G4 Cube) to access the drives. I'm not sure whether that is too much of a hassle.

In reality it'll be more like sliding the case off of the guts of the PC, since 80% of the weight is going to be HDDs/PSU/5.25" drives.


Sliding the whole section of drives out is most likely going to be a bit of an inconvenience. The HBA slides out with them, so cable connections will be fine, but sliding everything back in will most likely be tricky because you'll probably need one hand to guide all the cable ends in nicely while you slide the assembly in with your other hand.
Could you use some sliding rack rails type things to keep it connected the whole time and sliding on bearings for ease of movement and stability? One rail at the top behind the MB, two rails at the top of the HDD bay area, and a rail at the bottom of the HDD bay area is my thought. It would still be trouble with the cables, but it would have a lot more stability and be easier to slide in nicely and support the weight.

If you get that whole assembly for the drives made, would you mind ordering a unit for me as well? I can pay you for it and the shipping. That looks like a nice unit for 30 drives, and my brother-in-law was talking to me the other day about doing an industrial-design case out of T-slot framing and aluminum panels. I'm thinking about doing it, and that HDD bay would be really nice to simply connect into the whole thing.

A single HDD weighs ~650g (~1.4 pounds). The weight of 30 drives is 19.5kg (43 pounds) before any mounting or cage material.
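
That back-of-the-envelope is easy to redo for other drive counts (assuming the same ~650 g per bare drive):

```python
GRAMS_PER_POUND = 453.592

def drive_stack_weight(count, grams_each=650):
    """Return (kg, lb) for `count` bare 3.5-inch drives at ~650 g each."""
    total_g = count * grams_each
    return total_g / 1000, total_g / GRAMS_PER_POUND

kg, lb = drive_stack_weight(30)
print(f"{kg:.1f} kg / {lb:.0f} lb")  # 19.5 kg / 43 lb, drives alone
```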

There better be some safety mechanisms that prevent that thing from slipping and falling.


The entire motherboard, HDD cage, 5.25" cage and cable-management duct will all slide out together; the only wires I'll need to worry about are the ones going to the fans, which are mounted directly to the case. Those will need a lot of slack for sliding out… I'm thinking a mini cable chain could make that tidy:

It's kind of hard to see in that animation, but there's a big horizontal plate that the HDD cage hangs off of with those 5 rubber isolation mounts, and the motherboard tray connects to the large horizontal plate; it's all on a sliding system. Right now I'm just thinking of using a 2011 aluminum rail I'll make myself for the slide; if galling becomes an issue I could go to something like a 932 bronze for the rail (the horizontal plate needs to be aluminum for cost and weight reasons).
Alternatively I could buy a telescoping slide assembly for the whole thing to slide on; McMaster has one that would work perfectly, but it is pricey.

Do you have a 3D printer? Certain portions of the drive cage could be 3D printed to bring the cost down drastically (the bottom plate, some of the side columns and the rear cable retainers/supports could all be 3D printed). It's the little blue retainer springs, some of the drive columns and the top plate that would need to be made out of metal.


That whole HDD cage assembly will be ~65 pounds when filled up with 697-gram HDDs. Each of the 5 isolation mounts is good for 15 pounds before over-sag becomes a problem. There'll be a mechanical stop in the sliding mechanism so the case doesn't accidentally come apart.
The part I'm having more trouble with is making the handles for picking the case up structurally sound for a ~110lb case all filled up.


I thought of another feature I want on the case: I want individual power switches for each 5.25" bay. I want to be able to turn the LTO drives off when not in use because they consume a considerable amount of power in their idle state.

I went ahead and bought some of these metal DPST light-up rotary switches to test out (they need to feel satisfying or I'm not using them):


You need a Lian Li D8000. Long out of production.
It has 20 bays, which can be made hot-swap by adding the backplanes, which I did.
Then there are 2x 3x5.25" bay sections, where I've put 2x Icy Dock 5-in-3s. So 30 drives total.
The 10 drives in the Icy Docks get cooled by 2x 80mm fans.
The other 20 drives get cooled by 6x 120mm fans, and the case has 4x 120mm exhaust fans.
Because of the size it's whisper quiet.
OK, I did add some sound dampening to the panels.

But no rackmount chassis can mount 30 drives, keep them all in the low 30s Celsius AND be pleasantly quiet at the same time.

I’d like to give some perspective that some have missed from the point of view of someone on a budget.

For me, a big appeal of a case with lots of HDD capacity, isn’t to put lots of HDDs in it - it’s that I can expand it when necessary without needing to go through the painful process of replacing the HDDs one at a time when I need more space.

Another option for people is to get 2 or 3 computer cases:
one is the head unit and holds a fraction of your hard drives plus the compute components;
the other(s) hold the rest of your hard drives and have a few SAS internal-to-external adapters in the PCI slots, i.e.:

BTW, I didn't know these existed. Upstream it talks to a SAS controller or a SAS expander; downstream it talks to either SAS or SATA hard drives. You can have up to 3 SAS expanders between the controller(s) and the drives:
for 16 drives, all internal connectors, x8 upstream:

For 24 drives, includes x8 external SAS upstream:

It may be useful with:

So you have enough channels to your external units.

Remember not to put more drives in an array than you have channels on your SAS controller.
I.e. if you have 8 SAS channels, you can get performant arrays of up to 8 drives per array. A RAID-Z2 of 8x 10TB drives would have 60TB of capacity. If you want longer strings and still want them performant, use a SAS controller with more channels; e.g. a 16-channel controller with a 16-wide RAID-Z2 of 10TB hard drives would give 140TB - more efficient with HDD capacity.
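
The capacity arithmetic above is just (drives − parity) × drive size; a quick sketch that ignores ZFS padding and metadata overhead:

```python
def raidz_usable_tb(drives, tb_each, parity=2):
    """Approximate usable space of one RAID-Zn vdev, ignoring ZFS overhead/padding."""
    if drives <= parity:
        raise ValueError("need more drives than parity disks")
    return (drives - parity) * tb_each

print(raidz_usable_tb(8, 10))   # 60 TB -- 8-wide RAID-Z2 of 10 TB drives
print(raidz_usable_tb(16, 10))  # 140 TB -- 16-wide RAID-Z2 of 10 TB drives
```

Real usable space will come in a few percent lower once allocation padding and metadata are accounted for.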


Yep, like a drive shelf but in tower form and DIY for a bit cheaper, though probably holding fewer drives overall.
Right now you can get cheapish old, used Chia drive shelves for around $300-400 on eBay, but new they are typically around $1000-1400 (with the expander already in them) for 12-15x 3.5" / 24x 2.5" drive shelves.


If you're going that route, why not USB 3-connected NAS enclosures? Something like this:

It’s not as if HDDs are ever going to go faster than a single USB 3.0 cable anyway…
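
That's true for one drive (USB 3.0's 5 Gb/s nets out around 500 MB/s usable, versus roughly 280 MB/s sequential for a fast HDD), but every drive in a multi-bay enclosure shares that one cable. A quick sanity check, with ballpark assumed throughput numbers:

```python
# Ballpark assumptions, not measured figures:
USB3_RAW_GBPS = 5.0                                 # USB 3.0 line rate
USB3_USABLE_MBPS = USB3_RAW_GBPS * 1000 / 8 * 0.8   # ~500 MB/s after 8b/10b encoding
HDD_SEQ_MBPS = 280                                  # fast 7200 rpm 3.5" drive, outer tracks

def drives_before_bottleneck(link_mbps=USB3_USABLE_MBPS, drive_mbps=HDD_SEQ_MBPS):
    """Whole drives that can stream sequentially before the shared link saturates."""
    return int(link_mbps // drive_mbps)

print(drives_before_bottleneck())  # 1 -- a second streaming drive already saturates the link
```

So for a single external drive the cable is never the limit, but a 4-bay USB enclosure will bottleneck on the link during scrubs or resilvers.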