I like Legos and black magic too

I really liked Wendell’s comments about a Lego build with hints of black magic. I’ve been watching Level1 for a little while now and have honestly used several good ideas from you. I know this is so over the top, but I had a few requirements:
No spinning rust
All 3 of my gaming rigs have iSCSI hard drives (Steam libraries)

So many devices! 5 TVs, 6 cell phones, 2 PlayStations, 2 Xboxes, 4 tablets, 3 gaming PCs, a network printer, a phone line (fax, mostly)… All of these connect through a Comcast modem currently operating at 1.2 Gb/s download / 40 Mb/s upload. The modem is connected via 2.5 Gb/s to an 8-port multi-gig 10 Gb unmanaged switch (supports 10G, 2.5G, 1G) and a couple of other unmanaged switches and such… Anyway, my point being the amount of data these devices produce is out of control, so I built a NAS.

TrueNAS SCALE server build:

AMD Ryzen 7 PRO 4750G Processor

ASRock X470D4U2-2T motherboard (dual 10G NIC, IPMI, PCIe bifurcation)

128 GB (4×32 GB) ECC UDIMM DDR4-2666 PC4-21300 memory

14 × Micron 1100 2TB (2048 GB) 2.5″ SATA SSD MTFDDAK2T0TBN (2 × 7-SSD RAIDZ2 vdevs, 17.21 TB total)

2 × Solidigm Gen4 2TB NVMe (mirrored pool in a PCIe x4/x4 bifurcated slot, 2 TB total)

2 × Intel 120GB SSDs, mirrored (OS)

M.2 slot 1 (Gen3 x2, 16 Gb/s): ASM1166 6-port SATA 6 Gb/s expansion card

M.2 slot 2 (Gen2 x4, 20 Gb/s): ASM1166 6-port SATA 6 Gb/s expansion card

Motherboard: 6 SATA III ports

Total SATA ports: 18

3 × ICY DOCK 6-bay hot-swappable SSD drive caddies, 18 hot-swap bays total

Thermaltake 600 Watt power supply

Rack mount chassis

Server test data: idle load ~40 W, max load ~110 W, idle temp 32 °C, max temp 75 °C

This file server and my lab server both have dual 10G NICs connected to the network. The 3 gaming rigs also have 10G connections to the network.

As I am just a basic user, all of my network equipment is unmanaged, and everything is RJ45 Cat 6 (though there are 12 wall drops of dual Cat 5e, the longest being 60 feet). Plans to upgrade the wiring are in the works…

There are 2 NVMe 2TB drives in a mirror for VM use, but no VMs are configured at this time.

I did test this configuration with an LSI 12 Gb/s 16i expansion card. Yes, the LSI card performed better than the M.2 cards, though not by much, but the power consumption difference is HUGE: with the LSI card, average idle was 60 watts and max load was 160 watts. The overall temp in the server case was also significantly higher.

I have been testing this build for a couple of months and finally took it off the test bench, put it in a server case, and mounted it.
Is this a pretty decent score for this system? The CrystalDiskMark score is from a gaming rig with a 3TB iSCSI hard drive.


[Screenshot: CrystalDiskMark_4-20-2023, iSCSI]


The network is also only a few months old.


Looks like a nice home server to me. You put in quite an effort to plan how to use the available lanes and get the most out of them, considering the limitations of X470.

Idle load is way lower than my server’s, which runs 6× HDD, 2× NVMe, and 2× SATA SSD, although full load is 120 W, so very similar. Otherwise comparable specs. I think going for SSDs really is the game changer at idle (despite having double the number of drives) and will accumulate savings over time.

I’m a bit concerned about the 4k reads and writes, though. I would expect 12 MB/s from an HDD pool, not from SSDs. It might be related to your M.2 controllers, or to the rather wide, odd-numbered RAIDZ config and RAIDZ in general. The 880 MB/s may be Windows-related overhead, or your network may simply be capped at that rate; it is certainly further below what I’d expect from a 10 Gbit network + SSD pool than rounding margins can explain.

I suggest testing disk performance at the source to get a better picture. CrystalDiskMark is nice for a one-click benchmark, but it is limited to Windows and limited in many other respects. Tools like fio should provide more insight. Checking raw network throughput via e.g. iperf3 will give you certainty regarding those 880 MB/s sequential.
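Something like this is a minimal starting point (a sketch; the server IP, pool path, and job parameters are placeholders to adjust for your setup):

```
# Raw network throughput: run "iperf3 -s" on the TrueNAS box first,
# then from a gaming rig (10.0.0.10 is a placeholder for the NAS IP):
iperf3 -c 10.0.0.10 -P 4

# 4k random reads directly on the pool, from a TrueNAS shell
# (/mnt/tank/test is a placeholder dataset path; results will be
# flattered somewhat by the ARC cache):
fio --name=randread4k --directory=/mnt/tank/test \
    --rw=randread --bs=4k --size=8G --numjobs=4 --iodepth=32 \
    --ioengine=libaio --runtime=60 --time_based --group_reporting
```

Comparing the fio numbers on the server against CrystalDiskMark over iSCSI separates pool problems from network/protocol problems.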

And without knowing the volblocksize and the filesystem on your iSCSI LUN, numbers don’t mean shit.

My first guess is RAIDZ and a “wrong” volblocksize dragging things down.


Used to do command line back when DOS was popular, LOL… That’s how old I am; I have been pretty much a Windows guy building game rigs. Many other misguided server attempts were made over the years, but this is my first successful deployment. I guess I’d better brush up on them skills. I have seen fio used, but it seemed too complicated to try.


I was considering trying a mirrored pool to see if I get better results. I do not have any data worth saving on the server yet, as I am still tinkering with a few different configs.


And I see a lot of passion and work. It will serve you well, and the forums here will help with cleaning up some rough edges and optimization.

And you offer me a good opportunity to update my ZFS guide here with a RAIDZ advantages-and-drawbacks section :slight_smile:

If you can use familiar tools that do the job, that’s always good.

iSCSI is great. But if Windows has to write a 4k file into a 16k box that is ultimately stored in little tiny boxes all across your SSDs, plus parity, that’s a lot of work for a comparably small amount of data.

Without writing a Ph.D. thesis or going into low-level storage concepts…

Check what Windows uses as the cluster size (most likely the NTFS cluster size) and match your zvol’s volblocksize to it; see the sketch below. Also, RAIDZ isn’t good at random writes, especially not when talking zvols or a 7-wide RAIDZ config. You also waste a lot of space, defeating the purpose of choosing RAIDZ in the first place.
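Checking both sides takes two commands (a sketch; the drive letter and zvol name are placeholders):

```
# On the Windows rig: "Bytes Per Cluster" in the output is the NTFS
# cluster size (S: is a placeholder for the iSCSI drive letter).
fsutil fsinfo ntfsinfo S:

# On the TrueNAS shell: check the zvol's block size
# (tank/iscsi/steam is a placeholder zvol name).
zfs get volblocksize tank/iscsi/steam
```

Note that volblocksize is fixed when the zvol is created, so fixing a mismatch means recreating the zvol.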

If you don’t have important stuff on the pool, I suggest you test performance with different RAID configs; a couple of layouts are sketched below.
Mirrors obviously give the best performance. You can also do 4× 3-wide RAIDZ1 with 12 disks. Other “sweet spots” are 4-, 6-, and 10-wide RAIDZ2, or 3-, 5-, and 9-wide RAIDZ1. The wider the RAIDZ, the more drawbacks you will face with low record/block size workloads.
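TrueNAS does all of this point-and-click, but for reference, the equivalent commands look like this (a sketch; the pool name and disk identifiers are placeholders):

```
# 7 mirrored pairs from the 14 SATA SSDs: best random I/O, 50% capacity.
zpool create tank \
  mirror sda sdb  mirror sdc sdd  mirror sde sdf  mirror sdg sdh \
  mirror sdi sdj  mirror sdk sdl  mirror sdm sdn

# Or 4x 3-wide RAIDZ1 using 12 of the disks: more usable capacity on
# paper, but weaker 4k random I/O than mirrors (and, at 4k volblocksize,
# no real capacity advantage either -- see the example below).
zpool create tank \
  raidz1 sda sdb sdc  raidz1 sdd sde sdf \
  raidz1 sdg sdh sdi  raidz1 sdj sdk sdl
```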

Just to give you an example: if you use a 4k blocksize, both a mirror and a 3-wide RAIDZ1 end up with the same storage efficiency of 50%, not the 67% one might assume for a RAID5/RAIDZ config, because every 4k block is a single sector and each one still needs a full parity sector of its own.

I’m pretty sure you can push those 12 MB/s well over a hundred with this. And 4k reads are the name of the game when talking about things like a Steam library. I wouldn’t want games like KSP, ARK, or any other larger indie game to rely on 12 MB/s random I/O, because they all use a million files.

This part I got right, then: when setting up the iSCSI drives, I did choose a 4k block size, as that was recommended for a Steam library. If you mean how I formatted the drive in Windows, it was the NTFS default block size.

The “nice” thing about NTFS is that the default cluster size differs depending on the size of the drive. Check Microsoft’s documentation for what the default is in your case. I try to avoid NTFS as much as possible.

And with 8 cores, enable compression. LZ4 is basically free, but I recommend checking out ZSTD as well to see how your CPU handles it in practice. It should give some extra performance and space. That’s what cores are for, after all.
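Setting and checking it is a one-liner per dataset or zvol (placeholder names again):

```
# LZ4 is the near-free default; zstd trades some CPU for better ratios.
zfs set compression=lz4 tank/iscsi/steam
zfs set compression=zstd tank/iscsi/steam   # alternative: test and compare

# See what the data actually compresses to:
zfs get compressratio tank/iscsi/steam
```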


Sounds good. I will start with that info and reconfigure to mirrors as planned. Any recommendation on the file format? I have only used NTFS in Windows. I should have this back up and running in a few hours.

You don’t really have a choice.

This will at least give you the maximum ZFS can pull from these drives. Together with volblocksize = NTFS cluster size, it will give you the best case for using a zvol.
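As a sketch of the matched pair (placeholder names and sizes; remember that volblocksize cannot be changed after creation):

```
# On TrueNAS: create the zvol with a 4k volblocksize to match NTFS.
zfs create -V 3T -o volblocksize=4k tank/iscsi/steam

# On Windows, format the attached iSCSI disk with a matching 4k
# cluster size (S: is a placeholder drive letter):
#   format S: /FS:NTFS /A:4096 /Q
```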

No need to stress it. Wendell won’t shut down the forums any time soon :slight_smile: . Creating pool and adding vdevs is pretty much point+click in TrueNAS. I’d check 5-wide RAIDZ1 too just in case for a still reasonable option with higher capacity. Not sure if you have 15 disks to make 3 vdevs. And 3-wide with TBs worth of 4k steam library is just worse than mirrors.