My theory exactly. OP has a goal of 400MB/s. If there could be a second user (or application) accessing data while OP is importing data, it’s going to absolutely kill import performance. The solution to that is to make a dedicated disk which doesn’t get a performance hit when read requests happen.
The only concern is that I'm not sure if that 2670 will be able to handle a 10G NIC. I remember reading somewhere about lower-clocked CPUs not being able to fully saturate a 10G NIC, since a lot of the packet processing is bound to single-core frequency.
Not that it will put him below 4gbps, but it’s definitely something to watch for.
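One way to check whether the CPU (rather than the disks) is the bottleneck is to benchmark the raw network path with iperf3 before touching the pool. Sketch below assumes iperf3 is installed on both ends; `nas.local` is a placeholder hostname:

```shell
# On the server/NAS end, start a listener:
iperf3 -s

# On the client: single stream first, then parallel streams.
# If one stream stalls well below 10Gbit but -P 4 gets close to
# line rate, you're limited by per-core packet processing on the
# CPU, not by the network itself.
iperf3 -c nas.local
iperf3 -c nas.local -P 4
```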
Yeah you’ll definitely want a quality network adapter, jumbo frames, a switch that isn’t a pile of shit, and maybe some OS tuning.
Just because a NIC says "10 gigabit" on the box (and gets carrier at 10 gig) doesn't necessarily mean you'll get that in the real world without some fiddling.
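For reference, the "fiddling" usually starts with something like the following on Linux (interface name is a placeholder, and the sysctl values are a common starting point, not gospel; every device on the path, including the switch, has to agree on the MTU for jumbo frames to help):

```shell
# Enable jumbo frames on the 10G interface:
ip link set dev eth0 mtu 9000

# Raise the max socket buffer sizes so a single TCP stream can
# keep a 10G link full:
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216
```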
Don’t know if anyone’s mentioned this, but if you plan to run a large ZFS storage appliance, you should probably run it on FreeNAS or FreeBSD – ZFS on Linux is a bit slower and not entirely feature-complete.
AFAIK, the ZIL on the SSD definitely doesn’t function as any sort of cache: it will only ever be written to (and never read from), unless the server loses power mid-operation or something. In that case, the last write operations that weren’t yet committed to the actual storage will be read from the ZIL upon the system starting again.
So basically, its function is to improve resiliency of the storage to the system crashing.
The speed improvements it introduces are not from caching, but from sync writes being completed earlier (as far as external systems are concerned), since ZFS will report the data as committed to disk once it reaches the ZIL. The regular data path is still write call → RAM → storage pool – the ZIL is only a backup.
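For what it's worth, putting the ZIL on a separate SSD (a SLOG) is a one-liner; `tank` and the device paths below are placeholders for your pool and disks:

```shell
# Attach an SSD as a dedicated log device (SLOG) to an existing pool:
zpool add tank log /dev/disk/by-id/ata-SOME-SSD

# Mirror the SLOG if losing in-flight sync writes worries you:
# zpool add tank log mirror /dev/disk/by-id/ssd1 /dev/disk/by-id/ssd2

# Only synchronous writes ever touch it; watch per-vdev activity to
# see whether your workload actually issues any:
zpool iostat -v tank 5
```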
Just one user. Me moving data across the network
Thanks, I’ll prob check out FreeBSD then
Your number of spindles should be fine then.
Definitely look into the benefits/drawbacks of RAID10-style striped mirrors, and how the number of VDEVs vs. the number of drives in a VDEV affects your performance and disk space, before creating the pool though.
The initial inclination for many users is to create one big RAIDZ VDEV for their pool, and that is normally a mistake – from the perspective of upgrades, random IO performance (not so much an issue for you at the moment), and fault tolerance/rebuilds.
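A back-of-the-envelope sketch of the space trade-off (disk count and size here are made up, and the numbers are idealized – real ZFS overhead like padding, metadata, and slop space reduces usable capacity further):

```python
# Compare idealized usable capacity for two layouts of the same 8 disks:
# one big RAIDZ2 vdev vs. four striped 2-way mirrors. Random-IO
# performance scales roughly with vdev count, so the mirror layout
# trades capacity for IOPS and much faster resilvers.

def raidz_usable(disks, parity, disk_tb):
    """Usable TB for a single RAIDZ vdev: raw capacity minus parity drives."""
    return (disks - parity) * disk_tb

def mirrors_usable(disks, disk_tb):
    """Usable TB for striped 2-way mirrors: half the raw capacity."""
    return disks // 2 * disk_tb

DISKS, DISK_TB = 8, 4

print(raidz_usable(DISKS, 2, DISK_TB))  # one RAIDZ2 vdev -> 24 TB usable
print(mirrors_usable(DISKS, DISK_TB))   # 4 mirror vdevs  -> 16 TB usable
```

The RAIDZ2 layout wins on space, but it's a single vdev: one drive's worth of random IOPS and a full-pool resilver when a disk dies, versus four independent vdevs with the mirrors.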
I’ve read about this too. For CPU and board, I’m stuck on that hardware. If it caps out at less than 400 MB/s with that gear, then I’ll have to live with it.
FreeNAS is a FreeBSD spin specifically designed for your application, with less hassle and setup involved.
Thanks for the info, and real-world numbers from your setup. I’ll check out the link