My new all-NVMe homeserver

Hi all,

I was asked by @CJRoss to post some details about my new homeserver build. Feel free to ask questions about the build, performance, etc. Note: English is not my first language, so there might be grammatical errors.

A bit about my old setup first:

For many years I've been running an old Intel 875K system with a total of ten 3TB hard drives, serving my home as a media server and backup location for documents and photos.

I've been running Unraid all this time and been very happy with it, and especially when it got Docker support, I started using it extensively.

Then trouble started appearing: my old 3TB drives died, and I've run out of replacement options for them. Because of the way Unraid works, if you want to add higher-capacity drives, you have to replace both data and parity drives… I ended up running a rather unsafe configuration with a dead drive for a period longer than I'm comfortable with.

Meanwhile, my company has decommissioned a lot of older Dell servers, and I got the option of getting hold of older Intel P4500 and P5500 drives, in sizes of 3.84TB and 4TB. This of course required me to think: how can I utilize these drives in my homeserver? And also make it an installation that is feasible to run in a small apartment, where my son sleeps in the same room the media server is running in. So no rack server with noisy fans.

The Build:

I've looked through the market for motherboards with support for many PCIe lanes, and Epyc ended up being the obvious solution. Since I don't have a big need for a very high core count, a 16-core 7302P CPU is more than enough for my needs. There were really three contenders for motherboards: the Supermicro H11SSL, the H12SSL, or the Tyan S8030. Since the H11SSL only supports PCIe 3.0 I wrote that off (more about that later), and the H12SSL is insanely expensive at the moment. So Tyan S8030 it was. It comes with 5 PCIe 4.0 x16 slots, 2 NVMe slots and even 2 SlimSAS NVMe connectors. I was able to find some memory from an old decommissioned server at work, and off I went with the build.

The Components:

Tyan S8030G2ME motherboard (I don't really need 10Gbit, and I can always use an add-in card if I ever end up needing it)
AMD EPYC 7302P
256GB SK Hynix 2667MHz ECC DDR4 memory
8x Intel D7-P5500 3.84TB SSDs
6x Intel P4500 4TB SSDs
3x JEYI PCIe x16 cards, each with 4 slots for U.2 disks
A boot USB stick (this is what Unraid runs on)
Fractal Design Torrent case
Corsair RM850e PSU

All in all not a cheap setup, but a manageable one.

The Issues:

Since I've run Unraid for many years and have been very happy with it, it was the obvious solution again. So I made a test installation, and I quickly discovered that my hopes of running PCIe 4.0 bifurcated on the slots holding the D7-P5500s were not going to happen. I had a lot of timeout errors on the PCIe devices, and when benchmarking I would end up with my storage pool breaking due to device drop-outs. I don't know if it's the JEYI cards, the motherboard itself, the BIOS or its settings. I've looked up the errors as much as possible and tried to implement corrections, but nothing really fixed the issue. What removed -all- errors was simply setting the PCIe slots to run at 3.0 speed instead. And honestly, I never did this for pure performance; I mostly did it because it was a way to get silent, quick and stable storage for free.
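For anyone wondering why dropping to Gen3 was acceptable: with the x16 slots bifurcated 4x4, each U.2 drive gets its own x4 link, so Gen3 halves the per-drive ceiling but still leaves plenty of headroom for a homeserver. A quick back-of-the-envelope sketch in Python (my own illustration, function name and all):

```python
# Theoretical per-drive bandwidth when a x16 slot is bifurcated 4x4:
# each U.2 drive gets a x4 link at whatever generation the slot runs at.
# These are raw link maxima before protocol overhead; real throughput is lower.

GT_PER_SEC = {3: 8.0, 4: 16.0}   # PCIe 3.0 / 4.0 transfer rate per lane
ENCODING = 128 / 130             # 128b/130b line encoding (Gen3 and newer)

def x4_bandwidth_gbps(gen: int) -> float:
    """Theoretical GB/s of a x4 link at the given PCIe generation."""
    return GT_PER_SEC[gen] * ENCODING / 8 * 4  # GT/s -> GB/s, times 4 lanes

for gen in (3, 4):
    print(f"PCIe {gen}.0 x4: {x4_bandwidth_gbps(gen):.2f} GB/s per drive")
```

So roughly 3.9 GB/s per drive at Gen3 versus 7.9 at Gen4. A stable Gen3 link beats a flaky Gen4 one every time.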

The second issue was Unraid's implementation of ZFS. I'm sure there are fixes for it, but at least in the GUI it is impossible to make one storage pool with two different-sized vdevs, which with my drives was of course an issue. With two pools, I would not be able to host my shares spread out across all the disks, which is what I wanted.

Benchmarking the disks also quickly showed me another issue with Unraid: it's badly optimized for NVMe, and I believe a lot of the problems surround its FUSE filesystem, which is very much optimized for the old array setup of spinning disks.
Doing fio tests showed I would only be able to get around 1.2-1.5GB/s throughput… which would be fine, but I was ready to see if I could find other ways to serve media.
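For reference, a fio job along these lines reproduces that kind of sequential test. The job file below is a sketch, not the exact job I ran, and the directory is a placeholder; point it at a share on the pool.

```ini
; Sequential-read sketch: 1MiB blocks, 4 parallel jobs, queue depth 16.
[global]
ioengine=libaio
direct=1
bs=1M
iodepth=16
runtime=60
time_based=1
group_reporting=1

[seq-read]
rw=read
size=10G
numjobs=4
directory=/mnt/test
```

Save it as e.g. `seq-read.fio` and run `fio seq-read.fio`; the aggregate bandwidth line in the output is the number I was comparing.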

The Solution:
I looked at different solutions, and it quickly became obvious there were two I should consider: HexOS and TrueNAS. And to be honest, I did not want to pay money to beta-test HexOS, so TrueNAS it is. Their Electric Eel release seems to offer a lot of the same stuff I currently use in Unraid, although the app support is limited compared to Unraid.
Professionally I work in enterprise IT, and I'm not worried about running a Linux shell command, so I can live with having to tweak stuff a bit more to get things working. And since the Docker support (and apps) I need is fairly limited, it basically checked the boxes I needed checked.

TrueNAS also allowed me to create a storage pool with two different-sized vdevs (I made a pool with two RAIDZ1s). It does give me a mixed-capacity error, but I can live with that.
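For a rough idea of what the mixed pool ends up providing: RAIDZ1 spends one drive per vdev on parity, so a quick sketch (ignoring ZFS metadata, slop space and TB-vs-TiB differences):

```python
# Rough usable capacity of a pool built from two RAIDZ1 vdevs of
# different drive sizes. RAIDZ1 loses one drive per vdev to parity;
# ZFS metadata and slop space are ignored in this estimate.

def raidz1_usable_tb(drives: int, size_tb: float) -> float:
    """Approximate usable TB of one RAIDZ1 vdev."""
    return (drives - 1) * size_tb

pool_tb = raidz1_usable_tb(8, 3.84) + raidz1_usable_tb(6, 4.0)
print(f"approx usable capacity: {pool_tb:.1f} TB")
```

So somewhere around 47TB usable before filesystem overhead, which is plenty for my media library.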

And the performance, especially sequential write speed, is much, much higher.

If you have any questions, let me know.


Is that a ‘normal’ USB stick or a DOM?

As for grammar errors, I’d say “finders keepers” :wink:

You can always make your own USB stick with some Optane. :slight_smile:

The 2242 16GB Optanes make a good-sized flash drive, but you can use the bigger 2280 ones as well.


I've used a SanDisk Extreme USB stick for years as boot media for Unraid, but I'm guessing TrueNAS might do more writing to the USB, since you're asking?

I have some leftover 2TB SSDs from the Unraid build that I could move the installation to, if it matters?

In my servers that use boot SDs I run either SanDisk Max Endurance (not SanDisk High Endurance) or Samsung Pro Endurance; there's a good thread about it here.

Hey @DarkingDK, thanks for sharing and for the well-written recap.
The picture at the top is awesome. :slight_smile:

Not suggesting you go back to Unraid since you are happy, but I noticed you mentioned FUSE and the U.2s etc.
Did you try running all your NVMe drives as cache or unassigned devices and leaving the array empty (that's a new-ish feature in Unraid)?
Running all disks as several cache pools takes FUSE out of the equation; FUSE is mainly needed for moving data from the cache to an array, which is meant more for less-accessed storage.
It is my understanding that you can have disks of different sizes in an Unraid cache pool.
Multiple cache pools are nice if you want to control what is stored where and who has access.

Anyway, you cannot go wrong with TrueNAS as long as you are willing to learn the not-so-user-friendly interface for some things like file shares. Great OS, and for free it is even better.
HexOS is a neat idea and is meant exactly for folks who want to use TrueNAS without having to figure out the general/best configuration options; a typical home user is my vision of the HexOS person. I like the idea, but the implementation is meh. My guess is that in a few years it will fold, or otherwise remain a short-lived and obscure alternative OS overlay.

Cheers!
