Exploring NAS build and testing

Update … #10?

Anyways. After engaging with ASRock support, we were able to get the OS to stop complaining about the downgraded links. Not only that, there was a bug in their beta BIOS that was causing a 100% thread lock when using slot 6. That issue went away when I downgraded to a stable BIOS. I have to say that ASRock’s support has been amazing to work with compared to some of their competition. Not to name any names…cough Gigabyte cough

Now that I have been able to get everything stable, the SSD pool still performs as expected (striped raid-z) and can max out the 9400-16i in fio without any issues. This will be the main NFS pool and possibly app storage.
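To be clear on what I mean by a striped raid-z: it is just a pool with more than one raidz vdev, so ZFS stripes across the vdevs. A rough sketch of the layout from the shell would look something like this (pool name and device names are placeholders, not my actual disks, and in practice the pool was built through the TrueNAS UI):

# Two raidz1 vdevs of four SSDs each; ZFS stripes writes across both vdevs
zpool create tank raidz sda sdb sdc sdd raidz sde sdf sdg sdh
zpool status tank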

I also purchased a RIITOP 4-port M.2 NVMe adapter to see if it would work with the bifurcation on this motherboard. I installed 4x Silicon Power (SP) 256GB NVMe gen 3 drives for testing and set the designated slot to 4x4x4x4. The motherboard has no problem picking up each drive, and neither does TrueNAS. And yet again, fio was able to max out the throughput without any issues. A quick detection check is below, followed by a few of the tests and configurations I ran.
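Before kicking off fio, this is roughly how I confirmed all four drives enumerated after setting the slot to 4x4x4x4 (device names will vary, output trimmed):

# Each NVMe drive should show up as its own PCIe endpoint and block device
lspci | grep -i nvme
nvme list    # from nvme-cli; should list four namespaces, e.g. /dev/nvme0n1 through /dev/nvme3n1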

Raid-z
Run-1

fio --bs=128k --direct=1 --directory=/mnt/test --gtod_reduce=1 --ioengine=posixaio --iodepth=32 --group_reporting --name=rw --numjobs=12 --ramp_time=10 --runtime=60 --rw=randrw --size=128M --time_based 
rw: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=posixaio, iodepth=32
Run status group 0 (all jobs):
READ: bw=15.5GiB/s (16.6GB/s), 15.5GiB/s-15.5GiB/s (16.6GB/s-16.6GB/s), io=929GiB (997GB), run=60003-60003msec
WRITE: bw=15.5GiB/s (16.6GB/s), 15.5GiB/s-15.5GiB/s (16.6GB/s-16.6GB/s), io=929GiB (998GB), run=60003-60003msec

Run-2

fio --bs=128k --direct=1 --directory=/mnt/test --gtod_reduce=1 --ioengine=posixaio --iodepth=32 --group_reporting --name=randrw --numjobs=12 --ramp_time=10 --runtime=60 --rw=rw --size=256M --time_based
randrw: (g=0): rw=rw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB,  ioengine=posixaio, iodepth=32
Run status group 0 (all jobs):
READ: bw=14.4GiB/s (15.5GB/s), 14.4GiB/s-14.4GiB/s (15.5GB/s-15.5GB/s), io=866GiB (930GB), run=60003-60003msec
WRITE: bw=14.4GiB/s (15.5GB/s), 14.4GiB/s-14.4GiB/s (15.5GB/s-15.5GB/s), io=867GiB (931GB), run=60003-60003msec

Striped Mirror
Run-1

fio --bs=128k --direct=1 --directory=/mnt/test --gtod_reduce=1 --ioengine=posixaio --iodepth=32 --group_reporting --name=rw --numjobs=12 --ramp_time=10 --runtime=60 --rw=randrw --size=128M --time_based
rw: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=posixaio, iodepth=32
Run status group 0 (all jobs):
READ: bw=15.4GiB/s (16.5GB/s), 15.4GiB/s-15.4GiB/s (16.5GB/s-16.5GB/s), io=923GiB (991GB), run=60003-60003msec
WRITE: bw=15.4GiB/s (16.5GB/s), 15.4GiB/s-15.4GiB/s (16.5GB/s-16.5GB/s), io=923GiB (991GB), run=60003-60003msec

Run-2

fio --bs=128k --direct=1 --directory=/mnt/test --gtod_reduce=1 --ioengine=posixaio --iodepth=32 --group_reporting --name=randrw --numjobs=12 --ramp_time=10 --runtime=60 --rw=rw --size=128M --time_based
randrw: (g=0): rw=rw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=posixaio, iodepth=32
Run status group 0 (all jobs):
READ: bw=17.3GiB/s (18.6GB/s), 17.3GiB/s-17.3GiB/s (18.6GB/s-18.6GB/s), io=1038GiB (1114GB), run=60003-60003msec
WRITE: bw=17.3GiB/s (18.6GB/s), 17.3GiB/s-17.3GiB/s (18.6GB/s-18.6GB/s), io=1038GiB (1114GB), run=60003-60003msec

Now these are just some quick tests and could use some tuning depending on how I plan to use this pool. I am thinking about using it for iSCSI between Proxmox and VMware. I will also be looking at 2TB NVMe drives, but something with a better controller than the Silicon Power NVMe, because those bounced up and down drastically on performance. Hopefully a better controller will help flatten out the performance.
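If I do go the iSCSI route, the plan would be to carve out zvols with a block size matched to the VM workload rather than the 128k used in the fio runs above. Something along these lines (pool/zvol names and the size are placeholders, and the 16K volblocksize is just a starting point to tune, not a recommendation):

# Sparse zvol to use as an iSCSI extent; volblocksize is the knob to experiment with
zfs create -s -V 500G -o volblocksize=16K tank/vmware-lun0
zfs get volblocksize,compression tank/vmware-lun0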

Now with everything in place, it’s time to start getting serious and see how much I can push this hardware. The plan is to utilize every slot and add a JBOD for the 8TB SAS3 drives I have waiting to be used. Those will be my cold storage pool and possibly provide storage to the Storj network in hopes of making a little money to pay for power. I also have an MSL 4048 with an LTO-6 drive that I would like to use for backups. Anyone use a tape library with Scale? If so, what software did you use? Bacula?

Anyways, I will keep updating as I continue to expand. I just hope sharing what works and what doesn’t helps someone else design and build their NAS. Save someone else from going bald like me. Ha ha!


Just a small update.

I purchased 4x 2TB Crucial P3 NVMe drives for the RIITOP 4-port NVMe card. No problems seeing the drives and consuming them as a new pool in TrueNAS. Reads and writes are much more sustained, with closer to flat-line performance, whereas with the SP NVMe drives performance was bouncing all over the place. While both come close to the same average performance, the Crucial controller looks to be more stable. Now to get everything set up on VMware for testing.
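For anyone who wants to see the flat-line versus bouncing behavior themselves rather than just watching the reporting graphs, fio can log bandwidth over time and the resulting log files can be graphed (same style of job as my earlier runs; the log prefix is arbitrary):

fio --bs=128k --direct=1 --directory=/mnt/test --ioengine=posixaio --iodepth=32 --name=sustained --numjobs=4 --runtime=300 --rw=write --size=4G --time_based --write_bw_log=sustained --log_avg_msec=1000
# Produces sustained_bw.*.log files with one bandwidth sample per second per job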

New update - JBOD Addition

So now that the TrueNAS server is stable and running with multiple NFS shares, the Plex app, and VMware over iSCSI, it is time to turn my attention to cold storage. I currently have an LSI 9300-8e (IT mode) en route and have been looking for a JBOD to connect. Sadly, all the JBODs, including used ones, are way out of my price range. Luckily, I ran across a DIY build on this forum, which led me to another DIY build on that exact same site. This got me thinking: could I build one for under $1K? After days of exhausting research and verification, here is a build that might work. Parts list below.

Adaptec 82885T 12Gb/s Expander
Athena Power 2U 12-Bay Chassis
Athena Power 2U 400W PSU

The reason for choosing the Adaptec 82885T is that it can be powered by either a motherboard PCIe slot or a Molex connection. This can be found in the manufacturer’s (Microchip’s) manual.

“The expander draws power from the PCIe slot (requires four or more lanes), but there is no data transfer to the slot. Alternatively, power can also be supplied to the expander card through a standard 4-pin auxiliary power connector”

This means the expander can be installed in the chassis above and powered by Molex without any issues, then connected to the LSI 9300-8e externally, giving me over 90TB (8TB SAS drives) of spinning rust for archiving/backups. Hopefully I will have time this weekend to pull the trigger on purchasing the parts. Once everything is here, I will post my findings. Fingers crossed this works without any issues.
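Once the expander is cabled to the 9300-8e, the drives behind it should just show up as plain SAS disks to TrueNAS. A quick way to confirm everything is visible from the shell (assuming lsscsi is available; device names are examples):

# Every drive behind the expander should appear as its own SCSI device, along with the enclosure
lsscsi
lsblk -o NAME,SIZE,MODEL,TRAN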

Newest update!

So the design worked flawlessly for my custom JBOD. And it was under $600, where pre-builts were going for $1200+. The most expensive parts were the chassis and PSU. On the downside, the chassis is a little flimsy and is easily bent. It is also a little complex to take apart if you are not paying attention. But here are some pictures of the build.

The final specs are as follows.
Adaptec 82885T 12Gb/s Expander
Athena Power 2U 12-Bay Chassis
Athena Power 2U 400W PSU
12x Seagate Exos 8TB SAS drives

These drives have been provisioned in a Z2 for redundancy, and the performance isn’t too bad, but there isn’t going to be a huge amount of activity on them. It is basically bulk storage for my muzak, backups, and Plex media, as well as my family’s backup storage and TrueNAS replication dumps. Now that this system is complete, I am going to play with the Apps catalog to see what can be useful.
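For reference, one way that Z2 layout could be expressed from the shell is a single 12-wide raidz2 vdev (pool name and device names are placeholders; the actual pool was created through the TrueNAS UI, and a 2x 6-wide raidz2 layout would be equally valid):

# One 12-wide raidz2 vdev: any two drives can fail without losing the pool
zpool create cold raidz2 sda sdb sdc sdd sde sdf sdg sdh sdi sdj sdk sdl
zpool status cold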

Update… Whatever.

The new NAS has been working like a dream. I’m fighting a little heat issue, but outside of that it has handled everything I can throw at it. BUT… DO NOT USE PNY CS900 SSDs FOR ANY KIND OF RAID SETUP. 4 out of the 8 drives have already failed within 7 months of operation, even though their documentation says the CS900 SSDs will work in a RAID configuration. After doing some extensive testing, my theory is the controller is having issues keeping up with the IO requests. Sadly, I have to blame myself for buying such cheap drives, and the quality of products after the pandemic. The 8 Crucial MX500 SSDs have no problems and are performing like champs.
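If you are running similar budget SSDs, it is worth watching SMART data before the pool starts throwing errors. A simple periodic check from the shell (the device name is just an example):

# Overall health plus the wear/reallocation/error attributes that tend to move first on a dying SSD
smartctl -H /dev/sda
smartctl -A /dev/sda | grep -Ei 'wear|realloc|pending|error'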

Side note: In my professional life, we are seeing the same quality issues across all products, brands, and manufacturers.
