Good high-end NAS cases don't exist. Should the community make one?

  • Stop
  • Rethink
  • Redesign
  • Stop again
  • Another rethink
  • Come to the conclusion you can’t do better than experienced case designers
  • Give up and buy a commercial case instead

I’d be pleased to be proven wrong! :heart_hands:

He already has four 140mm fans blowing right across them and straight out of the case. Not sure how much more you can really improve on that.

Take a look at the NCR 3434 tower first. One improvement you may want is to remove the bulge on the side of the case that houses the HDD fans. Aesthetically, the NCR 3434's tall, clean tower (hence appearing slim) is more pleasing to look at.

A more subtle one is the effectiveness of those four fans. Without trial & error, either physically or by creating CFD simulations, it’s hard to tell for sure. But here is the thought: the other side of the four fans isn’t exactly empty. It’ll be cluttered with a lot of data and power cables to the HDDs, and/or PCB back panels (I would prefer…for easier HDD insertion/removal and less cabling). Perhaps it’ll be more effective blowing air from front to back using only two fans.

Very likely that will be the case :grin:

In general, DIY is a process of discovering and re-discovering other people’s mistakes. Modding a good case is a lot easier than building from scratch, since you have a lot fewer places to go wrong.

I would simply go with a sandwich design. The front, 140mm deep, houses an ATX motherboard and 6x5.25" bays. The back, 160mm, houses the drives, mounted with connectors facing the rear. A version could easily be made with backplanes here, too - and with as many drives as this, you need some sort of backplane solution regardless.

In theory this approach should easily be able to handle 6x5.25" bays and 40-something 3.5" disks on the backside. Let airflow go bottom-to-top, with either 2x200mm fans at the top and the same at the bottom, or 4x140mm fans at the bottom and the same at the top.

Total dimensions would be something like 300x300x400mm, wider than your average eATX case but not by much, and just as large / deep. However, fitting over 50 drives in the same case poses some challenges, not least how to connect all the drives without backplanes and SATA switches.
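If anyone wants to sanity-check that before cutting metal, here's a rough fit check in Python. The drive face size is the standard 101.6 x 26.1mm for 3.5" drives, but the 3mm clearance and the 300 x 400mm back-panel area are just assumptions for illustration, so treat the output as a ballpark rather than a verdict on the layout.

```python
# Rough fit check: how many 3.5" drive faces fit on the back panel of a
# 300 x 300 x 400 mm sandwich case, drives mounted connectors-to-the-rear.
# All numbers are assumptions for illustration, not the actual design.

DRIVE_W = 101.6   # 3.5" drive width in mm
DRIVE_H = 26.1    # 3.5" drive height in mm
CLEARANCE = 3.0   # assumed gap between drives for rails/airflow, mm

PANEL_W = 300.0   # case width, mm
PANEL_H = 400.0   # case height, mm

def drives_per_panel(panel_w, panel_h, face_w, face_h, gap):
    """Count how many rectangular drive faces tile a panel with a fixed gap."""
    cols = int((panel_w + gap) // (face_w + gap))
    rows = int((panel_h + gap) // (face_h + gap))
    return cols * rows, cols, rows

# Try both mounting orientations and compare.
on_edge = drives_per_panel(PANEL_W, PANEL_H, DRIVE_H, DRIVE_W, CLEARANCE)  # standing on edge
flat    = drives_per_panel(PANEL_W, PANEL_H, DRIVE_W, DRIVE_H, CLEARANCE)  # lying flat

for name, (count, cols, rows) in (("on edge", on_edge), ("flat", flat)):
    print(f"{name}: {cols} cols x {rows} rows = {count} drives")
```

Real capacity will depend on rails, connector holders, and how much of the panel the fans eat.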

tbh how many people need 50 drives?

My approach to this so far is to design it modular, around 5.25in drive bays, so you can install however many ICY DOCK products make sense for the build size.

This also has the advantage of allowing the user to pick the storage medium they want. So if they want 2.5in U.2, fine, get the ICY DOCK that has that spec. Want 3.5in SAS? Fine, get the ICY DOCK for that…

Not trying to plug my own work, but rather offer it as an alternative idea and so you can see my thought process and see if any of it applies.

My personal thoughts…
I am interested in either a disk shelf work-alike or an ATX compatible case.

I’d like a case to have the following features:

  • Silence is a big deal
  • 360mm radiator support
  • 120mm and/or 140mm fan support
  • Full-height add-in cards (e.g., an RTX 4090)
  • A minimum of 12x 3.5” drives with vibration dampening
  • Hot-swap (Optional & preferred)
  • Standard ATX motherboard support
  • ATX PSU support (Optional & preferred)

I’d like a disk shelf to have the following features:

  • 120mm and/or 140mm fan support
  • A minimum of 12x 3.5” drives with vibration dampening
  • Hot-swap (Optional & preferred)

I’d personally have:

  • The shell of the case slide up instead of the guts sliding out, due to weight distribution
  • The HDDs stacked vertically in the front

Just two thoughts off of the top of my head.

I have resorted to looking at the Supermicro 36-bay cases, and it turns out they do make a version that supports full-height PCIe cards… but it’s $3,200, so back to designing I go.

I would really, really prefer to just buy a case, but there aren’t any close to my needs with an air filter, handles, 5.25" bays, etc.


I’m going to remove the side bulge and redesign it to keep closer to the 3434 aesthetic. Good advice.

The side-to-side airflow will be the most efficient layout for using quiet fans, since there is less air restriction and the fans can spin slower. I thought about doing CFD to make sure the HDDs were cooled relatively evenly, but there’s an even easier way to ensure even cooling: make the exhaust a series of round holes, tap them, and insert grub screws to balance the airflow by trial and error.

The HDDs will be hot-swappable without fiddling with connectors, but instead of a backplane there’ll be a holder that keeps the SFF-8639 connectors in place (a mechanical backplane?) so the HDDs slot right in. That avoids getting a custom 5x6 LFF backplane made, which would not be easy even if it were just a passive backplane.

I’m reevaluating priorities again given the size of this thing (if the case is going to be kind of large I might as well let it fit a large motherboard) and decided to make it fit the 15"x13" SM motherboard standard; ATX will obviously also fit.

I just watched The Full Nerd episode with Wendell today, and he mentioned saturating a 100GbE connection using disk shelves upon disk shelves full of mechanical HDDs. Now I’m thinking accomplishing that with a single tower would be kind of cool.
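Rough napkin math on what saturating 100GbE with spinning rust takes, assuming ~250MB/s sustained per drive (an assumption; real drives vary a lot across the platter):

```python
# Back-of-envelope: how many HDDs does it take to saturate 100GbE?
# 250 MB/s sustained per drive is an assumption; outer tracks are faster,
# inner tracks and fragmented pools are slower.

LINK_GBPS = 100            # 100GbE
PER_DRIVE_MBS = 250        # assumed sustained sequential throughput per HDD

link_mbs = LINK_GBPS * 1000 / 8          # 100 Gb/s ~= 12,500 MB/s
drives_needed = link_mbs / PER_DRIVE_MBS

print(f"Link capacity: ~{link_mbs:.0f} MB/s")
print(f"Drives needed at {PER_DRIVE_MBS} MB/s each: ~{drives_needed:.0f}")
```

Which lands, conveniently, right around the 50-drive tower being discussed above.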


It may not be a mainstream need, but as the number of enthusiasts grows, that number will also grow.

Modular would be ideal, but it usually comes at the cost of sturdiness. I thought the Cooler Master HAF Stacker series of cases was really cool, but when I went to a Fry's to see them, they felt like they’d fall apart if bumped too hard.

I’m in agreement with all of those points.
I just got a 360mm radiator for a SilverStone case and it is absolutely yuge:


(120mm fan for size reference)


100%, which is why my design so far uses the dock bay itself as structure. It's not perfect, but it's 3D printable and fits on common print-bed sizes of ~200mm.

I’m looking for a NAS case too. I’ve done maybe 10 large NAS builds over the years for my NAS/Plex servers.

I’ve had poor experiences with ICY DOCKs; I found the high-density 5 x 3.5" in 3 x 5.25" hot-swap bays in particular had a failure rate over 50%.

I think what a lot of the suggestions so far are missing is the ability to do proper hot-swappable, externally accessible tiered storage.
My personal needs (again not met by the market) are:

  • Rackmountable 4U (short depth, ideally sub-500mm)
  • At least eight E1.S NVMe bays
  • 20-24 3.5" SATA bays with high-quality trays that don’t fall apart/bend
  • Optionally some 2.5" SATA bays for a middle tier of SSD storage
  • ATX motherboard and PSU
  • Space for at least 4-5 PCIe slots (full height)
  • 120/140mm fans

Why does nobody seem to make something like this?

Two reasons.

  1. The market for a case like that is almost nothing. Companies looking for large drive storage want it packed full of nothing but HDDs; they don’t want a combined chassis with three tiers, or usually even two. Companies who can afford large arrays of NVMe storage for a caching tier want a 1-2U form factor packed full of nothing but that. Home users who want as much as possible of both are a minority of home users, and home users are already a minority for this product type. So the money isn’t there.

  2. Those requirements push past the absolute limit of physical possibility. The best you can do is a front with 24 3.5" drives in a 4U chassis, which is 4 columns across and 6 rows. If you take away the top 1U you have to cut 8 of those drives out, or you can just cut 1 row out in a partial unit space. That is probably doable, but it then requires a whole custom backplane, which further pushes out the cost. Cutting out a single row of drives leaves you with 20, so good there, and then you might be able to fit 8 E1.S drives in 2 rows in the space you have left: a 3.5" drive is about 1" (26.1mm) tall and an E1.S drive needs roughly 10mm of space, so two of them stacked fit in roughly the height of one row. Hopefully you can fit two rows of 4 stacked on top of each other, and to the side of that you could fit 4 SATA SSDs. So theoretically the drives could all fit, but it would require a fully custom backplane and a lot of money while being at the absolute limit of physical size constraints. Then behind that you would have 2" for cables, a row of 120mm fans, 1" of space between the fan row and the motherboard, and then the back of the chassis. Again, very size constrained inside the chassis as well.
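For anyone who wants to poke at those numbers, here's the same arithmetic as a quick script. The 430mm usable width, the 26.1mm drive height, and the 10mm per E1.S slot are nominal/assumed figures, not measurements from any particular chassis:

```python
# Back-of-envelope for the 4U front-panel argument above.
# Dimensions are nominal/assumed: a 4U face is ~177mm tall and a rack
# chassis has roughly 430mm of usable interior width.

U_MM = 44.45
PANEL_H = 4 * U_MM            # ~177.8 mm of 4U face height
PANEL_W = 430.0               # assumed usable interior width, mm

HDD_W, HDD_H = 101.6, 26.1    # 3.5" drive face, drive lying flat
E1S_H = 10.0                  # assumed height per E1.S slot incl. clearance

cols = int(PANEL_W // HDD_W)          # columns of 3.5" drives across the face
rows = int(PANEL_H // HDD_H)          # rows in a full 4U face
print(f"Full 4U front: {cols} x {rows} = {cols * rows} x 3.5\" drives")

# Give up one row of 3.5" drives and see what fits in the freed strip.
hdd_count = cols * (rows - 1)
freed_height = HDD_H
e1s_stack = int(freed_height // E1S_H)  # E1.S drives stacked in the freed row height
print(f"Drop one row: {hdd_count} x 3.5\" drives, "
      f"{e1s_stack} E1.S high in the freed {freed_height:.0f} mm strip")
```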

If Sun made cases today for standard commodity hardware, I’d totally buy a V880



With interior space for days, wheels, and plenty of hot-swap bays.
We had some proper engineering in the past. The proprietary shit, dimensions, and backplanes are way out of date even if you could get a stripped one for cheap. A shame.

This would be close to my dream NAS case.


Currently I’m doing tiered storage at home… just with a lot more boxes. And I’m sick of all the trade-offs.

I have two Synology DS1821+, one loaded with 8 x 20TB drives the other with 16TB drives, both with 480GB RAID1 NVMe caches.
I then rsync those to two identical DS1821+ with the same drive loads off site at a 2nd home.

Meanwhile my actual applications are on a 4U short-depth Ryzen 7950X server with a Quadro GPU, which mounts the NFS shares with local NVMe caches and fuses them together with mergerfs (because I’ve hit the 103.7TiB effective volume size limit on the Synologys).

It’s all super complicated for what I should be able to do in one box.

I’m kinda thinking my next step should be a used Dell R640 (which is not far off the V880 Exard3k is dreaming of, if it’s 2.5" drives you dream of) with 32 2.5" bays. I can just stuff that with 8TB QLC SSDs for ~180TiB of flash storage (RAID-Z3), get plenty of PCIe lanes for NVMe and GPUs, and do the off-site backup to the existing DS1821+s (selling the 8 x 16TB pair).
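For what it's worth, here's a quick sanity check on that ~180TiB figure, assuming a single 32-wide RAID-Z3 vdev and a hand-wavy ~10% allowance for ZFS overhead (both assumptions, and a 32-wide vdev is its own debate):

```python
# Rough usable-capacity estimate for 32 x 8TB SSDs in RAID-Z3.
# Assumes one 32-wide vdev and a flat ~10% allowance for ZFS metadata,
# padding, and slop space -- a hand-wavy figure, not an exact ZFS model.

DRIVES = 32
DRIVE_TB = 8                  # vendor terabytes (10**12 bytes)
PARITY = 3                    # RAID-Z3
OVERHEAD = 0.10               # assumed ZFS overhead fraction

raw_bytes = (DRIVES - PARITY) * DRIVE_TB * 10**12
usable_tib = raw_bytes * (1 - OVERHEAD) / 2**40

print(f"Data drives: {DRIVES - PARITY}")
print(f"Usable after ~{OVERHEAD:.0%} overhead: ~{usable_tib:.0f} TiB")
```

That comes out around 190TiB, so ~180TiB of comfortably usable space is in the right ballpark.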

Pricey… but I hopefully wouldn’t have to deal with this kind of stuff anymore:

I do. I still see a point in 3.5" and HDDs. But if I go 2.5", it will be NVMe, not SATA. NAND flash for $65/TB on 4TB enterprise disks… the market is moving rapidly. We just need backplanes and cables to be more widely adopted. And I’d rather run HDDs than Samsung QVOs tbh; I like to have reliable writes.

If you have multiple NAS running and several machines to manage, it kinda defeats the point of having a NAS: consolidation of storage.

Why not just stick an HBA in that and run to a 2nd 4U chassis that is just a drive shelf holding 36 3.5" drives? If you really want, you could also make a mount and bolt it in where the motherboard would go in that chassis (since a shelf doesn’t need one) to hold a dozen or more E1.S drives as well. While it is technically two 4U chassis, which isn’t your wish, it would take less space than you take up now with more drive space than you have, get around the Synology volume limits, have much higher bandwidth, and do away with software hacks to merge things.
If you had a “16e” HBA model (or two 8e models, if you had the PCIe lanes for it) you could have 24Gb/s of bandwidth shared between all 36 HDDs and another 24Gb/s shared for the NVMe drives you mounted in the shelf. While not a crazy amount of bandwidth, 3GB/s of sustained read and write speed is nothing to scoff at for a mass storage array.
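To put that shared link in perspective, a quick back-of-envelope, assuming ~250MB/s sustained per HDD and ignoring expander/protocol overhead (both assumptions):

```python
# How the shared SAS link compares with what 36 HDDs could deliver.
# 250 MB/s per drive and zero protocol overhead are simplifying assumptions.

LINK_GBIT = 24          # shared bandwidth to the HDD side of the shelf, Gb/s
DRIVES = 36
PER_DRIVE_MBS = 250     # assumed sustained sequential throughput per HDD

link_mbs = LINK_GBIT * 1000 / 8                  # 24 Gb/s ~= 3,000 MB/s
aggregate_mbs = DRIVES * PER_DRIVE_MBS           # what the drives could push
per_drive_share = link_mbs / DRIVES              # if every drive streams at once

print(f"Link: ~{link_mbs:.0f} MB/s, drives could deliver ~{aggregate_mbs} MB/s")
print(f"Worst case share per drive: ~{per_drive_share:.0f} MB/s")
```

So the link caps a full-pool scrub at roughly 3GB/s, but day-to-day workloads rarely light up all 36 drives at once.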

Yup, moving quickly. At my work we are using E1.S NVMe sticks everywhere now with drives costing us less than €70/TB for 8 & 16TB. I’ve seen samples of 30TB E1.L coming in. I think within 5 years the 150-200TB of data I’m having so much trouble using and keeping backed up will be easily stored in a few NVMe drives.

QVOs are fine for me as it’s mostly just data at rest, rarely changed. I even have a few 8TB QVOs already and could finance the remainder with the sale of the DS1821s (I get 40%+ Samsung discounts through work).

Currently I’m in a city apartment and I can’t have anything in the rack longer than 450mm. I don’t think there are any JBOD enclosures that meet that. Also most have loud fans/PSUs.

I will be moving to a large house on the edge of the city during the summer. But I’m adding the problem of fiancée/future wife acceptance for noise/space at the same time.

Currently this all fits beside my washing machine and dryer:


All trayless 3.5" bays/enclosures I’ve tried have failed long-term or negatively affected HDD longevity.

Other than the rack mounting, I want at least all the same.
ICY DOCK has an 8 x EDSFF E1.S to 5.25" bay device that they may produce, called the CP121.
5.25" bays are extremely versatile; I don’t understand why they are being removed from modern cases.


It’s the reads with the QVOs that I’d be most worried about. NAND charge decay affects them more than TLC. Another worry about QVOs is that charge decay happens faster the more P/E cycles the NAND has been through, and the QVOs have fewer P/E cycles to begin with.

I wouldn’t trust them to read data faster than an HDD would for any block on the device written more than a year ago, even assuming the SSD was at 100% wear health.


I didn’t take that aspect into account. Valid point :+1:

How are your Toshiba MG08s doing? Got 9 in total (6 in pool, 3 as backup in rotation). No issues yet, running well over a year now.

This, I actually agree 100% with. While 3.5" will still be a thing for another couple of years, I think even the mighty 100 TB 3.5" drives will eventually be no match for 32 TB m.2 form-factor drives. Now it’s just a matter of time for cost curves to do their thing. Over the last 6 months, 2 TB m.2 drives in my region have dropped 20% on the consumer side; it’s just insane.

At the current pace, it would not surprise me if I could buy a 64 TB consumer m.2 drive for ~$500 or so in 2030. Time will tell, but I think HDDs are about to be pushed to obsolete status!
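Just for fun, extrapolating that 20%-per-6-months drop as if it held steady (it almost certainly won't, and the ~$50/TB starting point and 2024 start year are assumptions):

```python
# Naive price extrapolation: what does $/TB look like if consumer NAND
# keeps dropping 20% every 6 months? Starting price and start year are assumptions.

START_PER_TB = 50.0     # assumed consumer m.2 price at the start, $/TB
DROP_PER_HALF_YEAR = 0.20
TARGET_TB = 64          # hypothetical 64TB consumer m.2 drive

price_per_tb = START_PER_TB
for year in range(2024, 2031):
    print(f"{year}: ~${price_per_tb:.2f}/TB -> {TARGET_TB}TB drive ~${price_per_tb * TARGET_TB:,.0f}")
    price_per_tb *= (1 - DROP_PER_HALF_YEAR) ** 2   # two 6-month drops per year
```

On that naive curve a 64TB drive crosses ~$500 a bit before 2030, so even a much slower decline keeps the guess plausible.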

Then again, Sarah Connor and her Pops might have a thing or two to say about that!