AMD Ryzen + FreeNAS - Successful

Hello everyone,

Good day. Just one of the normies reporting. I wanted to share with the community my build for a home FreeNAS server. Because I am not computer savvy, it was difficult for me to find anything on the web about successful builds of this type of machine with Ryzen. So I wanted to share my success story for the rest of us normies.

On one note, some might tell me that this build is a little overpowered. But I wanted it to be future-proof for my needs, at least.

  1. Hardware:
    CPU -> AMD Ryzen 5 1400 3.2GHz - YD1400BBAEBOX
    Motherboard -> ASUS PRIME X370-PRO
    RAM -> Corsair Vengeance LPX 32GB DDR4-2133 - CMK32GX4M2A2133C13
    Power Supply -> 430W Corsair CP-9020058-NA
    M.2 Drive for OS (an SSD) -> 120GB Corsair MP500 - CSSD-F120GBMP500
    GPU -> EVGA 750 Ti - 02G-P4-3753-KR
    Case -> 4U Rosewill RSV-L4412 - 12 hot-swap bays
    SAS Card -> HighPoint RocketRAID 2760A 24-port PCI-Express 2.0 x16 SAS/SATA RAID controller
    And 12x 3TB HDDs of multiple brands.

Note: To my knowledge, Ryzen CPUs do not come with integrated graphics, so you would need to add a card. I used the 750 Ti because I had it lying around, but I would recommend a cheaper, lower-power card like the Zotac ZT-71304-20L GeForce GT 710 1GB GDDR3. Then again, FreeNAS does not make use of the GPU for other tasks; it just uses it to display the OS console.

  2. Installation
    Just assemble it as any other PC. Please update your BIOS before installing the OS.

Regardless, I came across an issue with my configuration. The RocketRAID card would not be seen in the secondary PCIe slot. The GPU was installed in the first PCIe slot (the one the manufacturer recommends for the GPU). I believe the root cause was that the GPU took all the bandwidth of the shared PCIe lanes. So I switched them and the problem was solved. That is why you will see the SAS card first and then the GPU.

  3. FreeNAS - Installed version 11.0-U2, burned to a CD.

Bad Things:

  • I had an issue with the installation. It showed a “mount root installation error” with my external CD drive :scream:. I used another CD drive I had lying around and that solved the problem.

Good Thing:

  • I successfully installed the OS on the machine, onto the M.2 drive. :slight_smile:
  4. Two weeks after the build, it has not crashed. The data integrity of my drives is intact. Flawless; one of my best builds.

  5. Power Consumption -> Around 110W ~ 130W; just put it in power-saving mode. Good for paying less to the power company. Measured with a Kill A Watt device.

Conclusions:
You can successfully build a FreeNAS box with Ryzen. It is stable and works like a charm. I hope this helps you lose your fear of non-Intel builds for FreeNAS.

Thanks for reading this bible of a post. Regards.


Wonder how much of that power consumption is due to the GPU alone?


I use a GTX 750 Ti daily; for power use you couldn't really pick a better card. Most don't even have any additional power connectors beyond the PCIe slot.

If you have a reference version, it's around 5-6W with one screen or 8W with several; my Strix version shaves almost another watt off that at factory settings by having the fans actually turn off rather than idling at low speed. At a typical US $0.12 per kilowatt-hour, that's about $6.30 per year for the reference card, so don't worry about it too much.
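(For anyone checking the math: 6W running 24/7 is 6W × 8,760 hours ≈ 53kWh per year, and 53kWh × $0.12 ≈ $6.30.)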


Cool build. Personally, I would have gone for an LSI card, as I'm not the biggest fan of the HighPoint-branded stuff. But I am just nitpicking, after all.

What RAID-Z configuration are you using? I hope you're using Z2. Z1 is scary for a build like this.

I have the same motherboard but with a Ryzen 1700. I've tried FreeNAS a few times and it keeps randomly locking up, whereas with unRAID or Windows 10 it's rock solid.

Have you made any BIOS changes or anything at all to make it stable? Which BIOS version are you using? :slight_smile:

That's a good question. I really do not know how to measure that part. Regardless, I will concur with SheepInACart, although I do not have any evidence.


Actually, I am not using any hardware RAID configuration. The card is in pass-through mode and I let FreeNAS manage the ZFS. I have heard many times from other people that it is better to have a separate backup (another machine or USB). So I have standalone HDDs with a copy of the media on the NAS. Unfortunately, that means I need 12 HDDs in addition to the ones I already have. I am working on that project but haven't been able to complete it due to budget :disappointed_relieved:.
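If anyone wants to sanity-check that a controller really is passing disks straight through, one quick look from the FreeNAS shell is FreeBSD's standard device listing (nothing assumed here beyond a stock install):

```
# List every storage device the OS sees. With a true pass-through/HBA
# setup, each physical drive shows up individually (da0, da1, ...)
# rather than as a single RAID volume exported by the card.
camcontrol devlist
```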

Hi, I am currently using BIOS 0810. For additional configuration, I have the defaults. Anyway, I took some pics of the configuration. If you need an additional one, please let me know. Kind regards.

(Screenshots: EZ Flash Utility and BIOS settings.)

I know; we usually refer to the different ZFS configurations as different RAID-Z levels. RAID-Z1 is a single drive of fault tolerance, similar to a traditional RAID 5. RAID-Z2 is two drives of fault tolerance, similar to a traditional RAID 6. And so on. My question is basically: what ZFS RAID level are you using? With that drive combination RAID-Z2 is appropriate, so I was curious as to whether you were using Z2 or Z1. Some people use Z1 despite the risks inherent to Z1 in a configuration like yours.
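For illustration, here is roughly what those layouts look like if created by hand from a shell (FreeNAS normally does this through its volume manager GUI; the pool name tank and disks da0 through da5 are just examples):

```
# RAID-Z2: any two drives can fail, similar to a traditional RAID 6.
zpool create tank raidz2 da0 da1 da2 da3 da4 da5

# RAID-Z1: one drive of fault tolerance, similar to a traditional RAID 5.
zpool create tank raidz1 da0 da1 da2

# Striped mirrors: two-way mirror pairs striped together.
zpool create tank mirror da0 da1 mirror da2 da3

# Verify the resulting layout and pool health.
zpool status tank
```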

Hopefully using mirrors instead of raidz :smiley:

you might want to read this:
https://www.phoronix.com/scan.php?page=article&item=new-ryzen-fixed&num=1

The A12-9800 probably would have been a better option for the CPU (it has an integrated GPU).

And @SheepInACart, how about just taking it out and booting up? I don't know if it will halt the boot, but it's worth a try. For science :slight_smile:

Thank you! I’m off work next week so will give it another whirl then.

Sweet build, but wait, ZFS and non-ECC memory? You, sir, are asking for trouble. I will link you a blog by a guy who knows his ZFS stuff.

TL;DR: ZFS relies on clean, correct data coming from RAM; it doesn't check it. If you use non-ECC RAM you have a higher chance of data corruption. That's why ECC is used in machines running ZFS. It shaves off another 0.000000-something chance of that actually happening.

https://pthree.org/2013/12/10/zfs-administration-appendix-c-why-you-should-use-ecc-ram/

EDIT: I see you are not yet set on which raidz to use. Here are benchmarks for all of them:
https://calomel.org/zfs_raid_speed_capacity.html

Worth noting, man: ZFS isn't more likely to corrupt data than a non-ZFS storage array, be it JBOD or RAID.
Rather, the error checking inside ZFS just can't help if the error occurred in RAM before the checksum was made.

The reasons most people so religiously use ECC RAM for ZFS are, firstly, that most people building ZFS over RAID are already huge geeks looking to optimize, and secondly, that if your platform supports it (which most low-power server boxes do) it doesn't really even cost any more than non-ECC memory per gigabyte.

Also, while the linked article is interesting, they clearly don't understand how backups work: automatically overwriting the only copy of data whenever there is a change isn't a backup, it's just a really poor RAID 1 implementation in software.
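By contrast, a real backup keeps point-in-time copies that a later change (or corruption) can't clobber. A minimal sketch using ZFS snapshots and replication, where the dataset names, dates, and the backuphost machine are all hypothetical:

```
# Take an immutable point-in-time snapshot of the dataset.
zfs snapshot tank/media@2017-09-01

# Replicate the snapshot to a pool on another machine.
zfs send tank/media@2017-09-01 | ssh backuphost zfs receive backup/media

# Later, send only the changes since the previous snapshot.
zfs snapshot tank/media@2017-09-08
zfs send -i tank/media@2017-09-01 tank/media@2017-09-08 | \
    ssh backuphost zfs receive backup/media
```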

So if the RAM is not a bargain-bin special, and isn't being run at non-standard latencies or voltages, then it's probably fine, or at least as good as any off-the-shelf NAS for non-enterprise use. It's just not quite as likely to prevent corruption as it could be with ECC RAM. Ryzen DOES support ECC memory, as has been stated by both AMD CEO Lisa Su and Technical Marketing Head Robert Hallock; however, it needs motherboard support, and at the time of their statement (March 30th) it was not validated as working, although it is still enabled if you want to try.

(EDIT) Indeed, OP's motherboard is listed by Asus as having ECC support with the Transcend TS1GLH72V1H 8GB DIMMs. They aren't cheap, but they are available and would let you have 32GB in the system (the 16GB-per-DIMM version likely also works, but is not confirmed by Asus at this time), so if it were my NAS I'd do the upgrade and move the 32GB into another machine. Check the complete Qualified Vendors List for RAM to see if they have added any others here:


As already mentioned, the article you linked is wrong and this guy is mistaken about how ZFS works. The post has been debunked by many people, including Matthew Ahrens (one of the authors of ZFS) and Allan Jude (one of the guys who wrote the book on ZFS):

http://jrs-s.net/2015/02/03/will-zfs-and-non-ecc-ram-kill-your-data/


https://arstechnica.com/civis/viewtopic.php?f=2&t=1235679&p=26303271#p26303271

Key Points:

  1. Every filesystem trusts RAM to some extent and will screw up to some extent if RAM is faulty.
  2. ZFS does checksums, and checks the checksums. It doesn't trust data that doesn't match the checksum. Most other filesystems don't have checksums, so faulty RAM would give you silent corruption (silent because you can't catch errors you aren't checking for). With ZFS you'll catch errors even if you don't have ECC. That's better than you'd do with filesystems like ext4 and NTFS that don't have checksums at all. (A concrete scrub example follows after this list.)
  3. Use of ECC memory is a general best practice for building reliable systems. Errors in memory can lead to more than just data corruption. They can lead to system instability and unexpected behavior, which is why servers (including file servers) generally use ECC. It has nothing to do with ZFS in particular.
  4. ZFS doesn't use one single memory location for every operation; that's ridiculous. The whole premise of ZFS streaming the entire pool through perfectly placed bad bits in RAM (and a sub-byte offset? really?) while doing a scrub is absurd. And even if it did hit some faulty memory, the code would likely crash before it ever got close to writing invalid data. Even if your RAM were straight-up HAUNTED (it isn't) and ZFS got into a state where it wanted to "correct" every block (it won't), ZFS has an error threshold that will take the pool offline when exceeded, so it still wouldn't play out as described.
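To make point 2 concrete, this is roughly how you exercise that checksum verification by hand; the pool name tank is just an example, and FreeNAS can schedule scrubs for you:

```
# Re-read every block in the pool and verify it against its checksum.
zpool scrub tank

# Per-device READ/WRITE/CKSUM counters; a non-zero CKSUM count means
# blocks failed verification and, where redundancy allows, were repaired.
zpool status -v tank
```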

Great write-up in that first link. Some clever people have spent a long time thinking about all of this stuff!
When the weakness of the system is at the level of SHA-256 hash collisions, I guess your HDDs would be as much of a cause of errors as your RAM.
I guess the most important thing would be to memtest the shit out of your RAM to ensure it is stable and working well, then run the test again if you have stability issues, as you would on a normal PC.

I too recently upgraded my FreeNAS server to a Ryzen 5.

I was previously running an Intel G3258, which had performed well. I upgraded because I actually needed the hardware for another build, and decided to use the newer Ryzen components I purchased in my server to have some growing room.

All I had to do was upgrade my server to the newer FreeNAS 11.0-U2 release (which uses FreeBSD 11). After verifying everything still worked, I just swapped the mobo, RAM, and CPU for the Ryzen hardware. After plugging in all the drives, everything booted and worked fine as if nothing had changed. I am really impressed at how far the upgrade process has come in FreeNAS. The folks at iXsystems have done a good job in that department.

I run a LOT of jails, so having the additional threads from the 1600 and the additional RAM I installed has certainly scaled up my use cases for this server. You do have to budget in a GPU for Ryzen, but I just bought a GT 710 for 30 bucks on Amazon brand new and it worked just fine. You could also buy used/surplus for just as cheap or cheaper.

I think AMD Ryzen/Zen processors will become more ubiquitous in FreeNAS/FreeBSD builds given their low cost, high thread count, and support for ECC memory. It really is a great small-server part.