FreeNAS All SSDs?

I think you’ll end up with a nice and fast setup doing that, provided your controller can keep up (older controllers were generally built assuming they’d never see something as fast as an SSD, so they can have surprising bottlenecks).

The question is whether your speed/cost/size/app-performance trade-off is optimal, whether you care about optimal, and how much time you are willing to invest in experimentation. When it comes to disk space, whatever number I think I can live with in trade for speed eventually becomes too little. Whatever tolerance I think I have for failure also understates the pain of recovering from a failure, which happens often with disks (more so with spinning ones) and at the worst possible time.

I have to qualify all that with the observation that getting to a caching and redundancy setup I am satisfied with has been a long process that really isn’t complete. I have learned that you can produce a high-performance setup with fast NVMe and/or SSD as cache, backed by a redundant array for size and security.

BUT:

In terms of absolute dollars, producing the headline 2GB/s round number you are headed for will likely cost as much, if not a little more. It would potentially have significantly more space (12 spinning 2T or 4T drives with 2 NVMes for cache would be 20 or 40 TB respectively, with 2-drive fault tolerance).

2T disks are not going to produce per-unit throughput as high as 4T, which won’t be as high as 8T, of course… but 150 MB/s is a decent round number per unit (reflecting the difference between inner and outer tracks and 2T vs 8T rates). I do know 10-12 HDDs in raidz2, particularly with compression enabled, can deliver 1GB/s+ throughput. So your sustained sequential is that, with an NVMe front end delivering much more for cached entries and writes.
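
To make that arithmetic concrete, here is a quick back-of-the-envelope sketch (the ~150 MB/s per-drive figure and the 12-wide raidz2 layout are the assumptions from above, not measurements; real pools will land below the ideal numbers):

```python
# Rough capacity/throughput estimate for a 12-wide raidz2 of 2 TB or 4 TB
# spinners, assuming ~150 MB/s average per drive (a round number splitting
# the difference between inner and outer tracks).

def raidz_estimate(drives, drive_tb, parity, per_drive_mbs=150):
    data_drives = drives - parity
    usable_tb = data_drives * drive_tb        # ignores metadata/slop overhead
    seq_mbs = data_drives * per_drive_mbs     # ideal sequential streaming rate
    return usable_tb, seq_mbs

for size in (2, 4):
    tb, mbs = raidz_estimate(drives=12, drive_tb=size, parity=2)
    print(f"12x{size}TB raidz2: ~{tb} TB usable, ~{mbs / 1000:.1f} GB/s sequential (ideal)")
```

That comes out to roughly 20 or 40 TB usable and ~1.5 GB/s ideal sequential, which is consistent with seeing 1GB/s+ sustained in practice.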

Once you’ve gone past 4 drives, you can start to do some nifty things with such arrays.


Why NFS and not Samba? Since 4.? it should support RDMA (SMB Direct) on ConnectX-3 and above.
Even Connect-IB has SMB Direct support.

But I have to admit that I didn’t get to test RDMA on Ethernet.
On IB it is night and day vs IPoIB (Captain Obvious).

Guess I’m gonna join the 40GbE club in a month or so : )

All Linux on this subnet, so I didn’t even ponder using SMB. MSFT has a long track record of terrible I/O, so there is no association in my brain between high-throughput file systems and Windows anything. :wink:

RDMA + NFS doesn’t look too difficult; I just haven’t had the cycles to sort it out, given that I am having issues getting IB ports to auto-configure and stay at full speed.

I got things working point-to-point via 40GbE and set up the IB switch on secondary ports for experimentation, so once I sort out getting everything running at 4x10Gb rather than 4x2.5Gb through the switch, I can try again.

When I forcibly set the ports to 4x10Gb they stay there, but after a reboot they are back to 4x2.5G… Well, some of them… Others are always at 4x10G, so I need to go through the exercise of swapping cables to see if it’s just a bad cable (though none kick out errors, they just don’t auto-negotiate to 4x10, aka 40Gb, for some reason).

These cards (Mellanox 354A VPI) with the latest firmware auto-configure well with Ethernet: given any excuse, they will switch over to Ethernet and 40Gb. IB less so; it appears to want manual config for that.


Interesting.
Thumbs up for everything Linux, I wish I could do that too.

Turns out, those cards don’t support SMB Direct, at least not according to their product brief.
And interestingly those cards are listed as 56GbE capable with Mellanox switches.

What switch and cables are you using?

And why would you want to use those cards in IB, considering there’s less software hassle on Ethernet and the same capabilities?
If you want to mainly use IB you might want to take a look at Connect-IB cards : )
I think I know a US seller that is cheap but tricky to find.

If someone went crazy right now…

Patriot Ignite 480G SSDs are on sale for $95


Oops - missed this…
IB because I had an IB 40G switch handy. No other reason. 40G Ethernet switches are not cheap.

Ethernet IS definitely easier and preferable.


No problem,
what switch is it?

There are “cheap” EMC FDR switches out there that are InfiniBand-only, but someone figured out how to flash them to MLNX-OS with Ethernet support.

Is it one of those?

1. Not aware of a means of doing a soft conversion to the 4036E.

Well, that might be right.
I meant this post: https://forums.servethehome.com/index.php?threads/beware-of-emc-switches-sold-as-mellanox-sx6xxx-on-ebay.10786/

For the 60XX series.
I do have an SX6018 sitting around…

Wow, you guys, thanks for the info. I’ve been at this for a couple of years and this thread sums up my experience. In my brief moments of lucidity (when not dealing with Microsoft issues) I’ve come to the conclusion that the following may be best for a FreeNAS ZFS setup.

RAID6/RAIDZ2 with 4-8x12TB SAS drives backed by NVMe or Optane, dual 10GbE NICs, and about 64GB of RAM. An 8-bay server with that setup should cover both a photography and a video server.

For VM hosting, a similar setup, except with 4x2TB NVMe/Optane in RAID 10, and 2-4x12TB SAS in RAID6/RAIDZ2 as backup/bulk storage. Use iSCSI or NFS for connectivity to the compute cluster.
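
For what it’s worth, here is a quick sketch of the usable space those layouts would give (raw numbers only, using the widths and drive sizes from the proposal above; ZFS slop space and vdev overhead will shave a bit off in practice):

```python
# Rough usable-capacity check for the two proposed layouts.

def raidz2_usable(drives, drive_tb):
    return (drives - 2) * drive_tb        # two drives' worth of parity

def mirror_stripe_usable(drives, drive_tb):
    return (drives // 2) * drive_tb       # RAID10-style striped mirrors

# Photography/video server: 4-8 x 12 TB SAS in raidz2
for n in (4, 6, 8):
    print(f"{n}x12TB raidz2: ~{raidz2_usable(n, 12)} TB usable")

# VM hosting tier: 4 x 2 TB NVMe in striped mirrors
print(f"4x2TB NVMe mirrors: ~{mirror_stripe_usable(4, 2)} TB usable")
```

So the spinning pool lands somewhere between ~24 and ~72 TB usable depending on width, and the NVMe mirror tier gives ~4 TB for VMs.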

I am still debating whether enterprise-class SSDs are an absolute must. Is it safe to go with a WD Blue, an Intel M.2, or a Samsung Pro, or does it matter?

Does this sound optimal? I have yet to see why this wouldn’t be the way to go. Thoughts?

This might get closed because it is pretty old. If you’re interested, I went a different route and built a pure-SSD NAS. I am running it just as storage and only use NFS and the Apple protocol. Currently it is almost fully populated with 10 Crucial MX500 2TB SATA drives. And I’ll probably switch the RJ45 for SFP+ since I now have a switch in the mix.