Storage server serving 100+ clients

We are exploring options for a storage server in an environment where everyone works directly from an old storage server's network share, even for video editing, so I wanted to get a NAS with 64TB+ and dual 10GbE LAN.
I was looking at the Synology RackStation RS4017xs+ due to my positive experience with their home NAS devices, but I'm not sure it has a place in an enterprise environment, even though this one is advertised as such. Another one I've been looking at is the Aberdeen ABERNAS N32W (Windows Server, as most servers here run Windows Server for AD, DNS, deployment…). What is your opinion on the RackStation, and is there anything else you can recommend?

Have you checked out iXsystems?

What kind of budget do you have? Because that will limit options for sure.


Thanks! I have asked them for a quote. No budget in mind yet, just exploring options for now. That Synology solution is about $10K, and I did expect it to be higher.

45Drives and iXsystems are my recommendations. iXsystems uses TrueNAS as their OS (their in-house version of FreeNAS), so you get exceptional data resiliency with ZFS. It runs on pure COTS hardware, the data and OS are hardware agnostic*, there's AD integration, great snapshots, etc.
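To make the ZFS part concrete, here's a rough sketch of the pool and snapshot workflow TrueNAS drives under the hood. Pool, dataset, and disk names are placeholders; on TrueNAS you'd normally do all of this through the web UI rather than the shell.

```shell
# Create a RAID-Z2 pool (survives two simultaneous disk failures)
# from six example disks; device names vary by platform.
zpool create tank raidz2 da0 da1 da2 da3 da4 da5

# Carve out a dataset for shares and enable cheap inline compression.
zfs create tank/shares
zfs set compression=lz4 tank/shares

# Snapshots are near-instant and only consume space as data diverges.
zfs snapshot tank/shares@nightly-2019-01-01
zfs list -t snapshot
```

The snapshot piece is what makes ZFS so attractive for a shared-storage box: editors can be rolled back to last night's state without touching a separate backup system.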

45Drives ships an in-house setup of CentOS with ZFS, also a great option, but I have less experience with their hardware requirements and OS setup, so I can't speak much more about it.

  * = Drives need to be attached to an HBA that passes SMART data through without interference (NO RAID CARDS. In general, you shouldn't be running software RAID like ZFS on top of hardware RAID anyway)

Also, not sure if you've done so already, but establish your video editors' performance requirements. It might be beneficial to have two NASes: one with a smaller SSD array for active editing, and one with a larger HDD array that's periodically backed up to and holds most of the other basic share data.
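If both tiers ran ZFS, the periodic backup from the SSD array to the HDD array could be done with incremental `zfs send`/`zfs receive`. A minimal sketch, with hypothetical pool and snapshot names:

```shell
# Initial full replication: snapshot the fast pool, stream it to the HDD pool.
zfs snapshot ssdpool/projects@2019-01-01
zfs send ssdpool/projects@2019-01-01 | zfs recv -F hddpool/backup/projects

# Later runs send only the blocks changed between two snapshots,
# so nightly replication stays fast even on a large dataset.
zfs snapshot ssdpool/projects@2019-01-02
zfs send -i @2019-01-01 ssdpool/projects@2019-01-02 | \
  zfs recv hddpool/backup/projects
```

In practice you'd wrap this in a cron job (or use TrueNAS's built-in replication tasks), but the mechanism is the same.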


NAS: only if you can survive a single-box crash (none of my clients can!!!)… My advice: 3x commodity servers (we use Supermicro) running CentOS Linux + GlusterFS. Not only will it perform well if configured and tuned properly, it can scale as much as you want; it is a solid software-defined storage solution and very flexible. It integrates well with OpenStack, oVirt, etc. It can be configured to provide clustered Samba services with CTDB, or clustered NFS. It works well with KVM. Etc, etc, etc.

I'm just finishing a migration for KVM virtualization (60 virtual servers) onto a 48 TB GlusterFS 3-way replication cluster: server redundancy with quorum, hard-drive redundancy (RAID), scalable and flexible. And it will do geo-replication too!
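For anyone curious what a setup like that looks like, here's a rough sketch of creating a 3-way replicated Gluster volume. Hostnames, brick paths, and the volume name are placeholders; each brick is a local filesystem (typically XFS) on its node.

```shell
# From node1, form the trusted pool with the other two servers.
gluster peer probe node2
gluster peer probe node3

# Create a replica-3 volume: every file is stored on all three nodes,
# which is what provides the quorum-backed server redundancy.
gluster volume create vmstore replica 3 \
  node1:/bricks/vmstore node2:/bricks/vmstore node3:/bricks/vmstore
gluster volume start vmstore

# Clients (e.g. KVM hosts) mount it with the native FUSE client.
mount -t glusterfs node1:/vmstore /mnt/vmstore
```

Geo-replication to a remote site is then set up per volume (e.g. `gluster volume geo-replication vmstore remotehost::remotevol create push-pem`); clustered Samba/CTDB and NFS sit on top of the same volume.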

Gluster Present and Future


What the OP describes sounds mission critical, so going with rugged data storage is a must.

The problem I see is that the really failsafe systems require careful administration, which not all companies have the will and/or budget for.

Indeed, there's no free lunch, but we ended up with an affordable solution that's not too complicated to manage!

Interested in what you finally got, @milos, and how it's working out?

@ideastormer I ended up getting a Storinator Q30 Turbo from 45Drives: redundant power, dual Xeon, redundant OS drives, 30x 8TB drives, running Windows Server 2019. It arrived just the other day and I haven't even plugged in the drives yet.
Others had ridiculous prices, and since this is a Windows shop, it made the most sense. Although our main IT screwed us on the switch stack, so we'll have all this on only 1GbE for now. I'll report back once I've set it up.

RIP that network connection

Thanks for the reply, @milos. Now I've learned about a new (to me) storage company.

Speaking of network woes, forcing 10 gig connectivity at the switch recently benefited us a ton. After getting the network people to pull quotes, we discovered that the 10 gig options aren't that much more expensive, so why not increase bandwidth where it counts?

Adding a NIC and setting up LACP should help a lot, even on 1Gbps.
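On a Linux client or server, the LACP bond would look roughly like this with iproute2 (interface names and the address are placeholders, and the switch ports must be configured in a matching LACP group). Windows boxes would use NIC teaming instead, but the idea is the same:

```shell
# Create an 802.3ad (LACP) bond with link monitoring every 100 ms.
ip link add bond0 type bond mode 802.3ad miimon 100

# Enslave two NICs to the bond (links must be down while joining).
ip link set eth0 down
ip link set eth0 master bond0
ip link set eth1 down
ip link set eth1 master bond0

# Bring the bond up and address it instead of the individual NICs.
ip link set bond0 up
ip addr add 192.168.1.10/24 dev bond0
```

Worth remembering that LACP spreads flows across links rather than speeding up any single transfer, so one client copying one file still tops out at 1Gbps; the win is aggregate throughput when many clients hit the share at once.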

Or break everything and cause a broadcast storm :stuck_out_tongue:
Jokes aside, a second NIC should help.

I ordered it with two NICs already, each with two 10Gb SFP+ ports. So it will go 4x 10Gb fiber to the switches, but the workstations are on 1Gb, so this won't be enough for video editing; at least many users will be able to access it simultaneously, though.