Trying to put together a file server/NAS solution for a small business

I have read the RAID 10 vs RAIDZ# argument a few times now and have waffled back and forth. I have also heard that you can mix striped mirrors and RAIDZ to improve performance AND have your cake too.

What I haven't seen anyone address is the fragmentation issue in a multiuser production environment.

1) Is this an issue or an advantage?

In the video, tuning the record size of reads/writes was a thing in tuning database performance. Can you pick a good size?
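For reference, record size is a per-dataset ZFS property. A minimal sketch (the pool and dataset names "tank/db" and "tank/media" are examples, not from the thread):

```shell
# ZFS writes in records up to "recordsize"; matching it to the
# application's I/O pattern is the usual database tuning step.
# Small random I/O (databases) often wants a smaller record:
zfs set recordsize=16K tank/db
# Large sequential files (video, archives) do well with a big one:
zfs set recordsize=1M tank/media
# Verify; note only newly written blocks pick up the new size:
zfs get recordsize tank/db
```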

Spreading the data across multiple disks on each read/write (fragmentation) seems like it should/could(?) actually be faster, since ZFS can build the packet it sends across the wire by issuing an LBA request to each disk on a separate thread.

I thought that ZIL/SLOG and L2ARC were designed to help with the inherent latency and low IOPS of spinny disks. In a multiuser environment this seems like a necessity; otherwise you would get a deep queue of requests waiting, just based on the limited number of IOPS available with spinny disks.
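Adding those devices is a one-liner each. A sketch, assuming a pool named "tank" and two spare SSDs (the device names are illustrative):

```shell
# A SLOG absorbs synchronous writes; L2ARC extends the read
# cache beyond what fits in RAM.
zpool add tank log ada4       # dedicated SLOG device
zpool add tank cache ada5     # L2ARC read cache
zpool status tank             # verify the log/cache vdevs appear
```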

Enabling the appropriate compression is also a thing.
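For most workloads that means LZ4, which is cheap on CPU and bails out early on incompressible data. Sketch, pool name assumed:

```shell
# Enable LZ4 pool-wide; existing data is not rewritten,
# only new writes are compressed.
zfs set compression=lz4 tank
zfs get compressratio tank    # see how much you're actually saving
```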

On the network side, link aggregation to the switch might be a thing as well, if you can push the bottleneck to the network.
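On FreeBSD/FreeNAS that's a lagg interface. A minimal rc.conf sketch, assuming two Intel NICs named igb0/igb1 and a switch configured for LACP on its end:

```shell
# /etc/rc.conf fragment: LACP bond of two gigabit ports.
# Interface names are assumptions; check yours with "ifconfig".
ifconfig_igb0="up"
ifconfig_igb1="up"
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto lacp laggport igb0 laggport igb1 DHCP"
```

Keep in mind LACP balances per-flow, so a single client still tops out at one link's speed; it helps most with many simultaneous users.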

mmmm bottlenecks... Identifying them and removing them is also a thing. This gets a little past my fu, but knowing where you are limited and attacking the bottleneck seems like the appropriate order of attack and would let you develop a plan.
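A few stock places to look on a FreeBSD/FreeNAS box (pool name assumed; all of these ship with the OS):

```shell
zpool iostat -v tank 5   # per-vdev throughput and IOPS, every 5s
gstat -p                 # per-disk busy% — are the spindles maxed?
top -SH                  # per-thread CPU — is smbd pegging one core?
netstat -w 5 -i          # interface packets/errors over time
```

Whichever of disk, CPU, or network saturates first is the thing to attack.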

If a pool is old, refreshing it can remove some sins.

I haven't read the book, let alone written it. We should ask the guy for a ZFS tuning video. This first one really whetted my appetite for FreeNAS, even if iXsystems has stumbled a bit. Looking at Solaris might even be interesting, seeing how enterprisey/multi-user it is by design.


Thank you so much for the elaborate answer!
We do indeed use SMB. But AFAIK, that's really the only option you have when accessing a FreeNAS server from a Windows client, isn't it? So the solution then would be to find the CPU with the most single-core performance I can get my hands on and hope for the best, right?
Or is there any way to spread the workload out over multiple cores and further optimize performance?

Since I do have limited experience with FreeNAS but almost no experience with Linux in general, I'd gravitate towards FreeNAS & ZFS just for the sake of having less trouble setting it up. Fragmentation still worries me, but I assume it should be less bad if we put in enough storage overhead and try not to fill it past the 80% mark.
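One way to keep yourself honest about that ~80% guideline is a dataset quota, so the pool can't silently fill up. Sketch; the names and the 8T figure are illustrative:

```shell
# Cap the main share below pool capacity to preserve free-space
# headroom (ZFS allocation slows badly on a nearly full pool).
zfs set quota=8T tank/share
zpool list tank    # the CAP column shows how full the pool is
```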

Anyway, in a nutshell, the plan would be:
-FreeNAS & ZFS
-RAID 10, 6 drives
-As much CPU horsepower as possible

That leaves me with just the question of hardware. Ryzen sounds good, but I don't know why I'd need so many cores when SMB is limiting all traffic to being handled by a single thread. Removing parity RAID from the setup should free up even more CPU resources, making the other 15 threads even more useless, wouldn't it?
Motherboards also worry me, I haven't seen any non-"gamer" mainboards for Ryzen yet, and I've seen enough of those "super gamer hyper blaster 9000"-mainboards kick the bucket to be uncomfortable having one of them in our NAS.
I'll keep a lookout for boards that specifically support ECC though, maybe those will be better.

On a more general note, do you (or anyone else) have recommendations on where to find a case that supports at least 8 3.5" drives in a hot-swappable configuration?
I was looking into getting a 19" case for the whole thing, since we've been looking into getting a small noise-dampened server cabinet anyways, but I'd need at least 8 drive bays and the ability to use at least 92mm (preferably 120mm) fans.


Pro Tip: If you highlight a section of someone's post, a "Quote" button comes up that will quote only that portion of text. i.e.

Clicking that button produces this:

Minimum effort quotes. :smiley:

No, actually. Microsoft has a Network File System (NFS) client for Windows.

https://technet.microsoft.com/en-us/library/cc754046(v=ws.11).aspx

Or you can go Open Source:

The NFS Client is part of Services for Unix which you can find under Windows Features. You need to enable it by checking a box.

This is on Windows 10 Pro. Just search for "Turn Windows Features On or Off" and click that when it appears:

You just need the Client ofc. Note that doing this probably won't solve all your problems if the parity data being written is part of the problem and I would guess it is. It may help but I can't be sure.
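The same checkbox can be flipped from an elevated prompt. Feature names below are the DISM names on Windows 10; worth double-checking against `dism /online /get-features` on your build. The server address is a placeholder:

```shell
# Enable the NFS client components:
dism /online /enable-feature /featurename:ServicesForNFS-ClientOnly /featurename:ClientForNFS-Infrastructure
# After a reboot, NFS exports mount like:
mount -o anon \\server\export\share Z:
```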

That's the only real solution imo for small businesses. The others just require too much effort if you don't already know how to manage it.

Yep, though the CPU having horsepower is less important if you do use NFS instead. Realize you've solved both bottlenecks at that point. You aren't using Parity RAID so no parity bit calculations. You aren't using SMB shares, so no single-thread only for the server side.

I would still get something decent just because "we might do something more intensive later". Also, it should speed up Scrubs (ZFS looking for bit rot and correcting it).

Yep. You could totally save money by getting a lesser Ryzen CPU (say the Ryzen 5 4c/8t). I only recommend Ryzen for price:performance reasons.

The other 15 threads aren't useless if you use NFS, and they help with other stuff on the NAS like scrubbing for bit rot. The way ZFS looks for bit rot is by reading the disk and checksumming the data, then comparing the checksum to its record of what it should be. This is part of why you want ECC RAM. Running a scrub with non-ECC RAM could theoretically cause ZFS to think good data is corrupt and thereby corrupt it unintentionally. This only happens if your RAM has an issue, but ECC's purpose is to account for when it might.
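For reference, a scrub is kicked off (and monitored) like this; "tank" is an example pool name, and FreeNAS can schedule this via its built-in tasks:

```shell
# Walk every allocated block, verify checksums, and repair from
# redundancy where possible:
zpool scrub tank
zpool status tank   # shows scrub progress and any repaired errors
```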

If it makes you feel more comfortable, you could try using your current NAS with RAID 10 and NFS to see if that solves your issues. Of course then you'd need somewhere to stick all your current data since you'll need to take apart the RAID 6.
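For the curious, "RAID 10" in ZFS terms is just striped mirrors. Roughly what the FreeNAS UI would do with six drives (device names are illustrative; this destroys existing data on those disks):

```shell
# Three mirrored pairs, striped together into one pool:
zpool create tank mirror ada0 ada1 mirror ada2 ada3 mirror ada4 ada5
zpool status tank
```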

Theoretically, that should make a notable difference. If it doesn't, you might want to run a test with lots of operations happening on the NAS then login to the FreeNAS web interface and check what performance looks like there.

It's entirely possible you have a weaker switch in your network infrastructure and that switch just can't handle the throughput you are going for. Some switches say they're gigabit but don't have the CPU power to actually facilitate gigabit throughput. Just an example.

The only real option you have for that is server rack cases. i.e. these:

I'm actually building a server in this case right now. Note these just come with 8 open 5.25" bays. Then you'd have to buy two of these:

Actually... wait. Those are only 3x5.25" bays. There are normal ATX Full size cases with 6x5.25" bays.

Nevermind. It won't require a Server Rack specific case.

So you could totally stick two of those hot swap enclosures into something like this:

If you don't like that case, here is a list of normal computer cases with 6x5.25" bays:

http://pcpartpicker.com/products/case/#t=4&G=6,12&sort=a8&page=1

Oh. Well then.

19"

Not sure what you mean. I only refer to server rack cases by the number of rows they take up, i.e. a 4U server case takes up 4 rows in a server rack. Googling shows me 19" is the standard rack width, so pretty much any rack case (like that 4U) would fit.

Then I'd definitely recommend that Rosewill 4U case I linked above. It's pretty decent. Has front USB 3.0. Comes with two 120mm fans in front. Which would incidentally be where you'd stick the 8 hot swap drive bays so they'd get air flow.

It's pretty good.

Except that they removed it from Pro in 1603; it only works in Enterprise now. It's still there to check, but it won't mount any connections.


I'm personally looking at a used Supermicro SC846E (24 Bay) with an eye out for one with the SAS2 backplane, LSI HBA and probably another E5 2670 with a bunch of ecc ddr3.

On SAS1 with 24 2TB SAS drives in RAIDZ3 (mostly archival data) I would have my needs covered for the foreseeable future. I would add SLOG and L2ARC for a bit of performance tuning, and SAS2 for bandwidth and >2TB drives later when the price comes down.

I have a short stack of 600GB S3500 SSDs that I can pool as mirror/stripe for fast data, and I will be adding an NVMe drive for those jobs that require IOPS.

I am torn between FreeNAS and some flavour of Solaris. Solaris is more enterprise-oriented than BSD. ZFS on Linux seems like too much playing about for me when there are a couple of better alternatives.

I intend to get a VMUG EVALExperience subscription to integrate ESXi into my lab. I'm also a bit unclear on how I will integrate my VM pools, but based on my use of desktop VMs in VMware Workstation I will probably put desktops in my SSD ZFS pool and server VMs on the spinny disks.

Database transactions are likely to go on the SSD Pool as well (assuming I have room).

Spinny disks will get archival VMs, large video projects, and other archival stuff, so in my case fragmentation is not nearly as much of a concern.

https://social.technet.microsoft.com/Forums/office/en-US/c1b1d99f-ba29-41f7-af4c-e5ec2e5f8b69/client-for-nfs-is-not-licensed-for-use-on-this-version-of-windows-error-windows-10-1439310?forum=WindowsInsiderPreview

Client for NFS was added to the Professional SKU in Windows 10 Anniversary Update (1607).

Previously it was included only in Enterprise / Ultimate SKUs.

We've identified a bug in that change causing it not to function on Pro SKU, which we'll be fixing and releasing in a future Windows Update package.

Sorry for the inconvenience.

Regards,

Tom Jolly

That turned out to be wrong, but they added it in later updates.

Can confirm it works now, mounting an NFS share in Windows 10 Pro after the update to version 14393.576,
both with the mount command (`mount -o anon \\10.42.0.1\export\home\share2 N:`)
and by entering `\\10.42.0.1\export\home\share2` in the Explorer address bar.

Yet to try it out with some security, and I still haven't measured speed,
but for a work in progress it is OK, and at least it is working now. :slight_smile:

I am updating the Feedback Hub case to reflect that the NFS client functionality started working in Windows 10 Pro.

ZFS on Linux is not hard at all


:sweat:

Tuning is often evil and should rarely be done.

Just looking at the stuff in your sig, you've got 32GB of RAM. Just like the FreeNAS manual says, if ZFS is slow, the single best way to increase performance is to add more RAM. That is, and always will be, true until you get to stupidly expensive quantities of RAM.

And using Linux is, I dare say, a far sight better than dealing with Slowlaris. ... Maybe that's personal preference though. :slight_smile: