I don’t know if I can set the sector size on the LSI card, but the stripe size is either 64k or 128k. (I forget at the moment.)
XFS
Not sure, but unlikely as I probably used the defaults.
Varies.
zpool iostat shows that it can hit 100 MB/s whilst the data is being pulled over the network.
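For reference, that's just watching pool throughput with something like the following (pool name and interval here are only examples, substitute your own):

zpool iostat -v tank 5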
Never really benchmarked the system otherwise.
*edit
I ran time -p dd if=/dev/urandom of=10Gfile bs=1024k count=10240 and was getting about 51 MB/s write speeds with that.
With time -p dd if=/dev/zero of=10Gfile2 bs=1024k count=10240, the system was getting 639 MB/s write speeds.
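Worth noting: /dev/urandom is typically CPU-bound, so the 51 MB/s may reflect the random-number generator more than the pool, and if compression is enabled on the dataset, the zeroes from /dev/zero compress to almost nothing, which would inflate the 639 MB/s figure. A cleaner sketch would be to generate the random data once, somewhere off the pool with enough free space, and then time only the copy onto the pool (file names and paths here are just examples):

# generate ~10 GB of random test data ahead of time, off the pool
dd if=/dev/urandom of=/tmp/rand10G bs=1024k count=10240
# then time only the write onto the pool, so urandom speed isn't part of the measurement
time -p dd if=/tmp/rand10G of=10Gfile3 bs=1024k count=10240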
The read speed for 10Gfile to /dev/null (from server to server) was 2168.4 MB/s.
The read speed for 10Gfile2 to /dev/null (from server to server) was 670.6 MB/s.
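That 2168.4 MB/s is well above what the underlying disks could likely sustain, so that read was almost certainly being served out of the ARC (RAM cache) rather than off the pool. The read test was just dd to /dev/null, i.e. something along these lines, with zpool iostat in a second shell to see whether the disks are actually being touched:

dd if=10Gfile of=/dev/null bs=1024k
zpool iostat -v 5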
No, the client, I know, can write at a maximum of 800 MB/s (tested), which makes sense for four HDDs in a RAID0 array at about 200 MB/s write speed each.
I haven’t thought about trying that; it didn’t even dawn on me.
Maybe, but my current hypothesis is that the issue is on the server side: I have other QNAP NAS units (which also run some variant of Linux), and the client runs CentOS (also Linux), and those systems have no problem hitting GbE line speeds.
TrueNAS, on the other hand, is the only “odd ball out” at the moment.
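One more thing I could try to rule the network path in or out is a raw iperf3 test between the CentOS client and the TrueNAS box, which takes the disks and ZFS out of the picture entirely (assuming iperf3 is installed on both ends):

iperf3 -s                       # on the TrueNAS server
iperf3 -c <server-ip> -t 30     # on the CentOS client

If that hits GbE line speed, it would point even more strongly at the storage/server side rather than the network.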