Need advice for fast network storage

Hi guys,

I'm looking to build a server for ~30 users.
Now, I've been looking at a lot of different options and would still prefer going with a FreeNAS box built to last.

Only thing is... it needs to be a lot of fast storage.
It's for a 3D animation pipeline in a school, so there are a lot of small and large reads and writes going on all the time from all the users on the network. There are two classrooms with 13 computers each.

All of the workstations will have 2x 1Gb connections, so I'm looking for some good (silent) switches that can handle LACP, which I could then link together with 10Gb uplinks. After my initial search, though, LACP seems limited to a set number of ports on most mid-range switches. If the workstations end up limited to a single 1Gb link, I don't mind, as long as I can at least saturate that connection in most use cases and get performance comparable to an internal HDD.
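For what it's worth, on the FreeNAS/FreeBSD side an LACP bundle like the one described above is configured as a lagg interface. A minimal sketch (the interface names igb0/igb1 and the IP address are assumptions, not from this thread; check your own names with `ifconfig -l`):

```shell
# Bring up the two 1Gb member ports (igb0/igb1 are placeholder names).
ifconfig igb0 up
ifconfig igb1 up

# Create the aggregate and add both ports to it. The switch ports on the
# other end must be configured as a matching LACP group, or the link
# will not come up.
ifconfig lagg0 create
ifconfig lagg0 laggproto lacp laggport igb0 laggport igb1 \
    192.168.1.10 netmask 255.255.255.0 up
```

Note that LACP hashes each flow to a single member link, which is why a single client still tops out at 1Gb (as discussed below).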

So I just wanted some feedback on whether my crazy plan is at all viable for the budget, or a better idea of how to approach this. I have a fair amount of networking knowledge and have dabbled with FreeNAS in the past, but it's nice to get a second opinion.

So here's the plan:

1 FreeNAS box with 16 to 32 TB of storage (attached to the first switch with either one 10GbE or 3-4 1Gb LACP connections)
3 24-port switches: 1 per classroom and 1 dedicated to the NAS and a license server.
All switches linked by 10Gb connections.

The budget: ~€4k for the NAS, ~€2.5k for the network equipment.

Am I crazy or can this be done?

If I'm sane, what kind of hardware should I best be looking at for the NAS?

LACP will work for the server but not for the clients. Unless a client is accessing multiple servers, it will not get more than 1Gbps.

1Gbps for the clients is still pretty fast, but if you need more than ~100MB/s per client then you might need to look at 10Gb Ethernet.
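To put a number on that ~100MB/s figure, a quick back-of-the-envelope calculation (the 90% overhead factor is a rough assumption for SMB/NFS over TCP, not a measured value):

```python
# Theoretical ceiling of a single 1Gb link, before protocol overhead.
link_bits_per_s = 1_000_000_000
theoretical_MB_s = link_bits_per_s / 8 / 1_000_000
print(theoretical_MB_s)  # 125.0

# Real-world SMB/NFS throughput is typically somewhere around 90% of
# that once Ethernet/IP/TCP overhead is accounted for (rough assumption).
practical_MB_s = theoretical_MB_s * 0.9
print(practical_MB_s)  # 112.5
```

So "1Gb link" and "about 100MB/s in practice" are effectively the same statement.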

I don't know much about ZFS, but I'd think you'd need a pretty decent cache system as well as fast storage. I'm not sure if RAIDZ increases performance the way RAID 5 does.

Thanks for the LACP info. 100MB/s should be fine for most of our purposes.
It's just the number of parallel users that I'm concerned with.
All the wiring will be CAT6 anyway, so if I want to upgrade in the future, there's the possibility of swapping out the network hardware.

I'll have to look at some RAIDZ vs RAID performance numbers to get the most out of it.

RAID 5 and RAIDZ work in quite different ways.

For good throughput, RAID 5 requires a decent RAID card; otherwise you will end up with terrible read and write performance.

RAIDZ, on the other hand, does not require a controller, but it's designed for a box that only handles storage. You'll want plenty of RAM and a good processor (and a faster one still if you want to run compression as well).

RAIDZ has the benefits of ZFS, such as protecting data from bit rot. That's mostly a concern for long-standing data, but the checksumming also guards against corruption caused by disk failure.

A good RAID card will let you use SSDs to cache data moving between client and array; with RAIDZ, however, an SSD cache will likely benefit you even more, given the read performance lost to parity.

Your main problem here is disk speed: if all your clients hit the server at once, you're going to need a lot of disks to handle the load. If you want all clients to be able to read and write at 100MB/s continuously, and a disk on average reads at about 100MB/s (this differs from disk to disk), then you would need an array of roughly 26 disks (effectively). At that size, with standard RAID, it would be essential to run more redundant RAID levels (e.g. RAID 60) to cover the risk of multiple disk failures; in ZFS terms, that's the point where you'd also want to look at RAIDZ3.
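The arithmetic behind that 26-disk figure is simple enough to spell out (the 100MB/s per-disk number is the rough average quoted above and varies a lot by drive and workload):

```python
# Worst case: every workstation streams at full 1Gb line rate at once.
clients = 26            # 2 classrooms x 13 workstations
per_client_MB_s = 100   # target sustained throughput per client
per_disk_MB_s = 100     # rough average for one HDD (varies by disk)

aggregate_MB_s = clients * per_client_MB_s
disks_needed = aggregate_MB_s // per_disk_MB_s  # data disks, before parity
print(aggregate_MB_s, disks_needed)  # 2600 26
```

And that count is data disks only; parity (RAID 60 / RAIDZ3) and hot spares come on top of it.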

Or you could shrink the array size and rely on SSD caching to get the work done (you could use RAID 0 across multiple SSDs as a temporary cache for reads and writes).
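As a sketch of what that smaller-array-plus-SSD-cache approach looks like in ZFS terms (the device names da0-da7 and ada1-ada3 are placeholders, not from the thread):

```shell
# An 8-disk RAIDZ2 pool: smaller than the 26-disk worst case, with two
# disks' worth of parity.
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7

# Add an SSD as a read cache (L2ARC). Multiple cache devices are striped
# automatically, so no explicit RAID 0 is needed.
zpool add tank cache ada1

# Add a mirrored SSD log device (SLOG) to speed up synchronous writes.
zpool add tank log mirror ada2 ada3
```

Note the L2ARC only helps reads of data that's been accessed before, and the SLOG only helps sync writes, so neither replaces raw spindle count for sustained streaming.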

However, I will say: how often does an animation program actually hit the disk once the project is loaded? The above may literally be overkill if the only times it touches the disk are for loading, saving and quick saves.

Thanks, that's exactly the kind of black and white explanation I was hoping for. :)

You're right, hitting the full 100MB/s will rarely happen; I was just looking at the worst-case scenario.
Realistically, every now and again there will be some large files, but most of the time we're looking at a lot of file sequences of rendered images.
26 disks won't be in the budget, I'm afraid, so I can rule that out. But with the information I have now, I can start looking at a more basic setup.

I came across this article about a fast mid-range NAS and am thinking something similar might be able to handle our needs.
It describes a board with an Atom processor; would that be enough for what I want to do, or am I better off looking at a Xeon setup?

26 disks is stupidly overkill, but as stated, that's what it would take to provide the lovely 100MB/s for everyone at once. Then again, is that really going to happen outside of the rare occasion?

As for the build linked, that's a very nice example of what can be done. If you want to keep the small form factor, consider the ASRock board that uses the same processor.

Reason being: to get around the lack of SATA ports on that Supermicro board, an HBA was used, but that limited the build to 4x 1Gb NICs (in LACP). Using the ASRock board would keep the single PCI-e x8 slot free, either for later expansion or for a 10Gb card.

One improvement over that build would be using two SSDs for caching instead of just one; it would help slightly.

As for Atom vs. Xeon: in the build linked they seem to be getting rather good speeds with the Atom. A Xeon would be of benefit if you want the server to do double duty with another task, or if you want to run compression on the data on the fly (I don't know how effective that would be for your workload). Going to a Xeon on a larger form factor board will also give you greater longevity, in case you need to add extra hardware cards or support more memory.

Yeah, the ASRock board was something I was looking at; nice to know I was on the right track.
With a 10Gb card it will probably be more than enough.
I found a PCI-e card made by StarTech; I've never heard of them, so I'll have to look into whether it's a reputable brand and compatible with FreeNAS. Here's the link

The SSDs I was looking at as well, but I gathered it's important to get SLC-based SSDs for reliability?
Enterprise SSDs are a little over budget though, I'm afraid.

Just to chime in regarding the SSDs: it shouldn't matter that much. See this link [techreport] for more info. I would rather go for affordable ones, since you might need to swap them every once in a while IF your transfer volumes are really high.

Interesting breakdown, thanks for the read.
Even with these older models of SSD it's nice to see their lifespan is still quite long.