Test FreeNAS Box - working toward 20 gigabit + scalability

So I'm just goofing off with some hardware.

I'm going to do a video on FreeNAS and setting it up properly. For now I'm experimenting before buying any hardware to round things out.

I expect the Netgear XS712T and the Asrock X99 10G motherboard to work well together; I've already tested them and they perform great. I will (eventually) be using jumbo frames and link aggregation, which I've also already tested with this combination without issues.
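
For anyone following along, the lagg + jumbo frames side of it looks roughly like this on FreeBSD (a sketch only -- the ix0/ix1 interface names and address are placeholders for my Intel NICs, FreeNAS does the same thing through the web UI rather than rc.conf, and the XS712T ports also need LACP enabled on the switch side):

```
# /etc/rc.conf -- illustrative only
ifconfig_ix0="up mtu 9000"
ifconfig_ix1="up mtu 9000"
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto lacp laggport ix0 laggport ix1 192.168.1.10 netmask 255.255.255.0 mtu 9000"
```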

Now I am testing the minimal drive configuration for maximum throughput and redundancy. I suspect, but am not certain, that once I get my hands on some PCI Express NVMe SSDs I will be able to configure a NAS/SAN for absolutely stellar speeds. However, in the past, I've hit limits with SMB and the FreeBSD kernel (8.something) that required extensive tuning.
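
For flavor, that tuning was mostly socket/buffer sysctls along these lines (values are illustrative, not recommendations -- the right numbers depend on RAM, NIC, and workload):

```
# /etc/sysctl.conf -- illustrative values only
kern.ipc.maxsockbuf=16777216
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvbuf_max=16777216
net.inet.tcp.sendspace=262144
net.inet.tcp.recvspace=262144
```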

I think Linus' team is also working on some kind of behemoth network server setup, but I suspect I'll be able to build a Linux- or FreeBSD-based box that matches the performance at a fraction of the cost.

So I thought I'd share what I'm up to here, and where I expect to go.

The name of the game is
1) At least 2 gigabytes/sec sustained read
2) Ideally, 2 gigabytes/sec sustained write
3) Try to approach this speed with silly Windows file shares (i.e. CIFS -- these speeds are way easier to achieve over iSCSI, which I've done in the past and it was a cakewalk)
4) Gracefully handle about 4 I/O heavy operations simultaneously
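
For checking against those targets I'll probably lean on fio from a client machine. A rough sketch (the share path, file size, and runtime are placeholders, and direct I/O behavior will depend on how the share is mounted):

```
# sequential read, 4 heavy jobs at once to mimic goal 4
fio --name=seqread --filename=/mnt/testshare/fio.dat --rw=read \
    --bs=1M --size=20G --numjobs=4 --iodepth=16 --ioengine=posixaio \
    --runtime=120 --time_based --group_reporting
```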

Hardware I'm using or expect to be working with:
Netgear XS712T
Asrock X99/WS Mobo
Xeon E5-1650v3
64GB ECC DDR4
Some additional Intel 10gigabit ethernet adapters

For Drives we have:
4x Samsung Evo 850 (well, actually, I have quite a lot of these on hand, but 4 is the minimum to clear 2 gigabytes/second, and it's what the current config uses)

8x older 750GB 7200rpm spinning-rust WD Blacks and Seagate 7200.12 HDDs

For the case, pictured here is an old SuperMicro case with 8x 3.5" HDD bays (I'll probably move to a Norco or SuperMicro 20-24x 3.5" bay case later). I've also fitted it with 8x 2.5" hot-swap bays using the two internal 5.25" bays.

I think in another chassis I'd probably do 16-18 spindles + 8 SSDs for caching/speed (or 4 NVMe SSDs, because they're 2x+ as fast). Knowing how to size it is part of what I hope to get out of the experiments here.

This is just a test box -- to check whether my assumptions are correct, and to figure out what's necessary to thread the needle: something crazy high-performance without wasting resources on insane overkill.

Anyone have a link to the tweet with Linus' group's numbers so far on what they're building? IIRC it's two storage boxes + recycling their old server into a FreeNAS box, but FreeNAS still needs a bit of tuning out of the box to go faster than 2-3 gigabit.

I've gotten to a max of about 7.5 gigabit with 5 minutes of tuning and I haven't even done anything really special yet (and no link aggregation so far).

Given the disk speeds I'm already seeing with this really weaksauce configuration, the network is the bottleneck so far (8x HDD in a RAID-Z1 pool + 4 SSDs for cache).
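
For reference, that layout is just the textbook ZFS arrangement -- something along these lines, though FreeNAS builds it through the GUI and the device names here are stand-ins:

```
# 8 spinners in one raidz1 vdev, 4 SSDs as L2ARC cache devices
zpool create tank raidz1 da0 da1 da2 da3 da4 da5 da6 da7 \
    cache ada0 ada1 ada2 ada3
```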


"The same thing we do every day Pinky " hehe

hey i want shenanigans too!!!

Hi Wendell,

I'm going to be setting up a home-server in the next few weeks. If you're looking for items to discuss in the video or related video topics, I have some requests/questions:

1) Hard drive stress-testing or reliability testing process including acceptance criteria.

2) Is there anything else that should be tested for reliability before a build?

3) Are hot spares available yet for ZFS anywhere other than OpenSolaris? Is a hot spare something that should even be considered? (It seems raidz1 + a hot spare would be a nice compromise between raidz1 and raidz2.)

4) Can a single partitioned SSD or vdev be used for the OS, cache, and ZFS Intent Log? (I realize it may not serve any purpose to have the FreeNAS OS on an SSD since the whole thing gets loaded into RAM, but if you were running ZFS on Linux...) There's a sketch of the layout I mean after this list.

5) Opinions or caveats about ZFS on Linux or other ZFS options.

6) In measured performance, how much faster is raidz1 than raidz2? Or raidz10, raidz20? I see "faster" and "slower" all over the internet but very little actual data. I found one data set, but it was all tested on an 8-drive Synology NAS, and I'm guessing the drives may not have been the bottleneck for the larger arrays in that test.

7) In what cases might the CPU be the bottleneck? What processes on servers or NAS boxes can be CPU hogs? Also maybe discuss core utilization with VMs and/or jails.

8) How much cache and how large a ZIL are needed?
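
To make question 4 concrete, this is the sort of single-SSD split I have in mind on ZFS on Linux (pool name, device name, and sizes are hypothetical, and I realize sharing one SSD between SLOG and L2ARC may not be a great idea in practice):

```
# hypothetical partition layout on one SSD
# /dev/sdx1  ~30G   OS root
# /dev/sdx2  ~8G    SLOG (ZFS Intent Log)
# /dev/sdx3  rest   L2ARC cache
zpool add tank log /dev/sdx2
zpool add tank cache /dev/sdx3
```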
