First time DIY NAS seeking fit check

Hi everyone!

My (third replacement) Synology DS1815+ has died and I’ve been wanting an excuse to make my own NAS so this is as good a time as any.

This will be a single purpose machine running TrueNAS Scale on bare metal.

My compute needs are low and it will run a handful of services (Tailscale, Jellyfin, a few more).

My workloads trend read-heavy: family streaming our media library to various devices in the house (and eventually to mobile devices on the go), with bursty writes every now and then.

My storage loads are mixed: large files (video), medium files (photos), and small files (music/documents). These are logically separated: one pool with just video, one with just music, and so on.

It's a place to store bits, with a little compute, that I otherwise don't want to touch.

Gear

This system is probably overkill, but I am looking for something rock-solid and set-and-forget, with low power usage, that will serve me for many years. I don't want to fool around with old power-hungry server gear, and this setup seems simple, well supported, and less of a hassle to find parts for down the line. That said, I'm open to improving the price efficiency: if you notice any compatibility issues, or can recommend alternatives to help me spec out something equivalent for less money, I'd appreciate it.

Chassis: 45HomeLab HL15 (45HomeLab store)
CPU: EPYC 4464P https://www.newegg.com/amd-epyc-4464p-socket-am5/p/N82E16819113831 or EPYC 4344P https://www.newegg.com/AMD-EPYC-4344P-Socket-AM5/p/N82E16819113835
Mobo: ASRock Rack B650D4U-2T/BCM https://www.asrockrack.com/general/productdetail.asp?model=B650D4U-2T/BCM
RAM: 4x Kingston Server Premier 32GB DDR5-4800 ECC CL40 DIMM 2Rx8 Hynix M (KSM48E40BD8KM-32HM)
HBA: LSI 9305-16i https://www.serversupply.com/CONTROLLERS/SAS-SATA/HOST%20BUS%20ADAPTER/BROADCOM/9305-16I_274680.htm
PSU: Seasonic Prime TX-650 https://seasonic.com/prime-tx
Case Cooling: 6x Noctua 120mm NF-A12x25 PWM chromax.black.swap
CPU Cooling: TBD
HDD: 4x 22TB (will buy) + 8x 10TB (already owned) spinning rust NAS drives
Boot drive: TBD; some cheap NVMe thing maybe
SLOG/L2ARC: TBD; see questions below

The TL;DR of the main components is:

  • I haven’t found a better short-depth chassis that matches the specs and ease of use of the HL15. It’s expensive, and that sucks, but it will last forever and become a home for future rack expansion down the line. Initially it’s not going in a rack at all because I don’t have one yet, so this chassis seems ideal. The closest competitor I’ve found is https://genesysgroup.com.tw/s316b.htm but I’m new to this and I don’t know how reliable they are as a vendor versus the household names in this space.

  • The CPU is only because EPYC has qualified ECC support. Again, I don’t want to think about this machine, so I don’t wanna worry about ECC not working. I might step this down to a 4344P/4244P instead since I don’t really need 12 cores here, and the 4464P will probably draw more idle power even though it’s still 65W TDP. Thoughts appreciated.

  • The Mobo is mostly because of 10GbE, IPMI, and qualified ECC support.

  • I think I understand the RAM speed limits with 4 DIMMs on the AM5 platform, though I hear it’s better now with more recent motherboard firmware. As I understand it, I need capacity, not speed, for TrueNAS (see my questions below and please let me know if that’s wrong).

  • The HBA is probably over-specced since I won’t take advantage of the speed any time soon, but this particular model has a good reputation as far as I can tell (it’s what 45 Drives ships in their HL15 pre-built systems and Storinators) and replacements should be easy to find.

From scratch, at current prices, this system will be roughly $5K all-in, which puts it basically on par with the entry model of the TrueNAS R20 (excluding drives) while being more flexible. Does that seem reasonable?

Questions

  1. I used simple 2-disk mirror pools in my Synology with BTRFS and planned to continue that with mirror VDEVs in TrueNAS. I’m not super paranoid about drive failures and I have cloud backups. The article “ZFS: You should use mirror vdevs, not RAIDZ” (JRS Systems blog) comes up a lot and seems to concur that this is the way, but it’s still worth asking: does this make sense for my use case or should I look at another option?

  2. Old TrueNAS hardware recommendation guides say to plan for 1GB of RAM for every 1TB of storage; is that still a valid rule of thumb these days and is 128GB enough for my use case? Is the rule per TB of total storage or usable storage?

  3. Putting aside the question of what services I may run on this box, does the speed of RAM matter for TrueNAS itself or is the primary focus only capacity?

  4. Talk to me about SLOG. From my reading there seems to be some debate about whether a SLOG is needed or not. My understanding is that the SLOG gives me some safety in a power outage and makes sure (in combination with ECC RAM) that I never have any corrupt writes committed to the pool, but that it will slow write performance, especially on pools which are near capacity. Generally speaking, I expect:

  • I will, at some point in time, have one or more storage pools which are greater than 90% full
  • My write workloads are infrequently bursty but otherwise fairly light
  • My read workloads are heavy random access

Does a SLOG make sense for my use case at all?

Do I understand correctly that the recommended SLOG capacity is (RAM / 8) * 2 (so 32GB in my case)? Is this total or per pool/vdev?
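
For what it’s worth, the other sizing logic I’ve run into is based on throughput rather than RAM: the SLOG only ever has to absorb a few seconds of sync writes before they’re flushed to the pool. A back-of-the-envelope sketch, assuming the default 5-second transaction group interval and my 10GbE ceiling:

```
# Rough SLOG sizing: enough to hold ~2 transaction groups of sync writes
# at the worst-case ingest rate (here, 10GbE line rate = 10/8 GB/s).
# Assumes the default 5 s txg interval; the device is per pool.
awk 'BEGIN { printf "%.1f GB per pool\n", 10/8 * 5 * 2 }'   # ~12.5 GB
```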

  5. Talk to me about L2ARC. I’m even less certain about whether I need it for my particular workloads. My reading leads me to believe that for my use case L2ARC might actually be kind of important to handle the read load. Maybe I’m misunderstanding what it does, or maybe I’m overestimating my read loads in relation to what my drives can actually handle.

As I understand it, L2ARC is per pool. What is the recommended size allocation? The docs only say, vaguely, that if you use it “more is better”, but they don’t really clarify.

  6. If I do need SLOG and/or L2ARC, is trying to source Intel Optane still the right call these days or is there something cheaper/better that works just as well for these purposes?

  7. Is a 650W PSU good enough? Real world usage shouldn’t pull anywhere near that but a sanity check would be nice.

  8. Please review my PCIe lane layout. Here is the block diagram for my board: https://www.servethehome.com/wp-content/uploads/2023/03/ASRock-Rack-B650D4U-2L2T-BCM-Block-Diagram.jpg

I will (potentially) be using:

1x HBA in the x16 slot (Gen5 x16)
1x M.2 as a boot drive (Gen4 x4)
1x M.2 as a SLOG drive (Gen5 x4)
1x 10G LAN (Gen3 x4)

I think that basically caps me out on lanes for this board? Will sharing the Gen4 M.2 with the 10G LAN be an issue (assuming I am reading the diagram correctly)?
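
Once it’s built, my plan is to sanity-check that each device actually trained at the expected width/speed, something like this (the bus address is a placeholder):

```
# List candidate devices and their bus addresses:
lspci | grep -Ei 'sas|ethernet|non-volatile'
# Compare supported vs. negotiated link for one device (01:00.0 is a
# placeholder for the HBA's address; LnkCap = capability, LnkSta = trained):
sudo lspci -s 01:00.0 -vv | grep -E 'LnkCap:|LnkSta:'
```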

  9. What’s the best way to get my data off the BTRFS drives from the Synology to transfer them over to TrueNAS? As I mentioned, my Synology is “dead”: it can power on, but at some point it will shut itself off. All three of these units have failed the same way. This time it may just be a faulty power supply, or maybe it’s the CPU bug that plagues this SKU (even though support confirmed the resistor fixes were applied); either way, I don’t care to troubleshoot it, but I need a way to get the data off. I could just restore from the cloud, but I’d have to pay for egress, and the drives are right here, so I would prefer to do it offline. My tentative plan is sketched below.
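
From my reading, Synology volumes are normally mdadm RAID with LVM and BTRFS on top, so my tentative plan on a spare Linux box looks roughly like this (device and volume group names like md2/vg1000 are illustrative; I haven’t verified them for my unit):

```
# Assemble and mount the Synology array read-only on a generic Linux box.
sudo mdadm --assemble --scan            # assemble the md arrays from the disks
cat /proc/mdstat                        # confirm which md devices came up
sudo vgchange -ay                       # activate Synology's LVM volume group
sudo mkdir -p /mnt/syno
sudo mount -o ro /dev/vg1000/lv /mnt/syno   # mount read-only and copy data off
```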

I’d greatly appreciate any thoughts from those of you more experienced with this.

Thank you so much!

1 Like

$5k for a home NAS seems ridiculous to me.

I could probably think through various ways you could cut the price down dramatically without sacrificing functionality. That said, before I venture down that road: is price a factor here? There’s nothing wrong with spending far more than necessary just because it’s a hobby and you enjoy the products you’ve selected.

Thanks for asking!

My price estimate factors in the additional HDDs I need to buy (~$1.8K), plus the HL15 (~$900) and the ECC RAM (~$800). Once I have the part list nailed down I can and will shop around for savings on individual parts, but if it weren’t for these things the from-scratch cost would probably be a more reasonable ~$2K. I need the drives and a rack chassis, and I want ECC, so I can’t escape these particular line items unfortunately, but I’m open to recommendations based on the workload description I provided.

I definitely don’t want to go any higher than 5K :sweat_smile:

1 Like

You can still use a Zen 5 CPU on that AM5 board with ECC. The EPYC is essentially the same.

I’m not sure that server board will be efficient in terms of power usage.

For the power supply I would check Wolfgang’s list and pick the most efficient at 100-150W.
Can you tell us more about the use case? What kind of data, how many users, and where are the random reads/writes from? And also, how often do you read the same data?

1 Like

Chassis: Probably fine, but I have a hard time justifying the major difference in pricing compared to, let’s say, the Rosewill RSV-L4500U. Sure, it’s a bit more clunky to replace drives, but the HL15 isn’t $650 better…

CPU: Both Ryzen and EPYC have official ECC support (both use the Zen 4 arch); read the specs. The 65W parts are great; I have a Ryzen 7900 with ECC memory and it runs great. Given your workload a simple 6-core will do just fine, and I’d have a look at the Ryzen 9600/9600X (Zen 5 vs Zen 4) due to the better multimedia capabilities (AVX-512 instructions).

Mobo: Unless you really need IPMI I would honestly look for another motherboard, as both reliability and BIOS updates seem to be so-so at best. The Amazon reviews for the ASRock Rack B650D4U-2L2T/BCM seem to reflect the experiences people on this forum have had with that motherboard too. The Asus ProArt X670E-CREATOR WIFI is a “battle tested” board (and a very common one for ECC setups on this forum) that works great for this application and has ECC support; it does lack IPMI, and the Aquantia NIC isn’t the best. When I got it, Asus even ran a promotion which gave me a 5-year warranty, so you might want to check their promos and also look for discounts. If you can live with a few less PCIe lanes, the Asus ROG STRIX B650E-E might be an option as it also offers an x8/x8 layout and ECC support. You also have the much cheaper Asus ROG STRIX B650E-F, which does the trick; however, you probably want that x8/x8 layout as you might want to pop in a video card for transcoding later on (like a low-end B-series Intel Arc). As far as energy efficiency goes, it doesn’t differ much between motherboards. If you want 10G or better, just grab a cheap external NIC; https://www.serversupply.com/NETWORKING/NETWORK%20ADAPTER/2%20PORT/BROADCOM/P225P_311355.htm perhaps?

RAM: You also have the Micron 32GB DDR5-4800 ECC UDIMM 2Rx8 CL40 (MTC20C2085S1EC48BR, at Crucial.com), or the 5600MT/s models, which are essentially the same, so either will do. That being said, 64GB will be more than enough for your workload (even 32GB would probably be fine). While I haven’t tried it, you should be able to hit 4800MT/s or slightly higher with only 2 sticks; however, benchmarks show very little gain overall. There are a few specific workloads that benefit greatly, but in your case there’s little to none. Don’t expect to go above 3600MT/s with all 4 DIMMs populated, although it works great in that configuration.

HBA: That should be fine; at least you’re looking at something more recent than the SAS2008 series, which is a good thing.

PSU: Probably fine, although I would strongly recommend going for an ATX 3.0 (or higher) PSU these days.
These seem to do very well according to Cybenetics’ testing, and are much cheaper :slight_smile:

Fans: You’re in general paying a premium for Noctua; that’s fine, otherwise Scythe fans might be an option (they still use Sony bearings afaik).

CPU cooler: You don’t need anything beefy at all for the 65W parts; the AMD Wraith Prism will handle things just fine, for example.

Boot HDD/NVMe: Just grab something like a Crucial P3 Plus, Solidigm, or SK Hynix and you’ll be fine. Avoid the QLC variants; the Seagate 530 series should be fine too. Samsung and WD have issues, and avoid the cheap-as-chips models.

SLOG/L2ARC: They serve no purpose at all in your case and just add cost and complexity; don’t.

1 Like

I see. I think there are a few things you could do to dramatically save costs while having minimal impact on functionality.

  1. Platform choice - I have much more experience with Intel (I like AMD a lot, I just don’t have any in my systems yet). I think EPYC is totally unnecessary for your use case. I understand various (or all?) Ryzen CPUs support ECC memory - I’d go with Ryzen.
  2. Memory - ECC is a hot topic, as you probably gather if you’ve read this forum, the Lawrence Systems forum, or the iXsystems TrueNAS forum. I personally do not use ECC and have been running FreeNAS/TrueNAS since 2017 without issue. At that time, ECC was more prohibitively expensive than it is now. DDR5 memory is also very expensive, and with ZFS/NAS I strongly believe that your RAM capacity is far, far more important than your RAM speed. If you prefer ECC, I would go for DDR4 ECC. Perhaps the Ryzen 5000 series? Perhaps, if you want more than 128GB of RAM, a Threadripper series CPU? I’m not sure if Threadripper is more power efficient and lower cost than EPYC, but I think so?
  3. HDDs - re-certified drives from goharddrive or serverpartdeals are my go-to. I use re-certified drives because my NAS has redundancy and is fully backed up off-site. In my view hard drives should be expected to fail at some point; given this, I buy re-certified and have had fantastic experiences. I have been running and expanding my NAS since 2017 and (knock on wood) have had no HDD failures yet.
  4. Case - yes, it’s expensive; yes, there are probably lots of cases that accomplish the same thing for 1/3 the price. The 45Drives HL cases are darn cool though. I’d like to get one at some point.

Overall - given your desire for “rock solid and low power usage” - I agree that old enterprise gear sucks, but I think that new top-of-the-line desktop hardware is very overpriced for this use case. I find desktop hardware, either used or new-but-previous-generation, is the best combination of reliable, low power, and inexpensive.

2 Likes

These suggestions are great and much more detailed than my post.

@diizzy do you recommend DDR5 for a NAS? In my view it seems overpriced and unnecessary for a NAS use case, but I’m curious what others think. I think one can save a ton by going with a prior-gen platform using Ryzen & DDR4.

I also agree that 128GB of RAM is probably unnecessary, but ZFS loves RAM, and more RAM will increase the ZFS ARC size, which improves the zpool’s performance dramatically for read-heavy workloads.

1 Like

While DDR5 technically should be more reliable than DDR4, I don’t think you should put much weight on that argument alone. However, AM4 ECC support is a mess, and given that it’s a legacy (dying) platform, most available hardware is mid to low-end. The few higher-end boards (workstation, server) that are available are about as expensive as their AM5 counterparts, or even more, so it doesn’t make much sense. I would say that Intel is a much better choice in that regard, but that stuff (8th-9th gen) hasn’t aged well at all.

2 Likes

For the use case you’re describing I would build two 2U systems rather than one 4U system. If I’m reading you correctly you used to have an 8-bay NAS with 10TB drives in mirrored pairs, so 40TB of usable storage.

The reason I wouldn’t build one big system in an HL15 case is flexibility and security. If, for example, the CPU dies, your entire system is down. If the PSU dies, your entire system dies. If the PSU’s final act is sending out a power spike, all your drives die. If you just need to take the system down for maintenance, everything is offline.

I would do something like one 2U server with your current eight 10TB drives and another 2U system with the four 22TB drives you’re thinking of getting.

I would run something like mergerfs on both servers and configure it to always put files on whatever drive has the most available space. mergerfs gives you one big drive pool of 80TB on system one and 88TB on system two. No RAID, no vdevs, just the full capacity of your drives. Also, mergerfs stores files on a regular Linux file system, which is easy to work with.
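
A minimal sketch of the idea (mount points illustrative); mergerfs’s mfs (“most free space”) create policy is the “drive with the most available space” placement described above:

```
# /etc/fstab entry pooling /mnt/disk1..N into one mergerfs mount.
# category.create=mfs = place new files on the emptiest branch;
# moveonenospc=true = retry on another branch if one fills up.
/mnt/disk* /mnt/pool fuse.mergerfs allow_other,cache.files=off,moveonenospc=true,category.create=mfs,fsname=pool 0 0
```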

Let’s say you use server one as your regular server and have it do nightly rsyncs to server two. If you accidentally delete something on server one, you still have it on server two.
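
The sync could be as simple as a cron entry on server one (hostnames and paths illustrative). Note it deliberately omits --delete, so an accidental deletion on server one never propagates to server two on its own:

```
# /etc/cron.d/nas-sync on server one: one-way mirror to server two at 03:00.
0 3 * * * root rsync -a /mnt/pool/ server2:/mnt/pool/
```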

You could put a fancy motherboard and CPU in server one for virtualization while having a low-power motherboard and CPU in server two, as it only needs to store files. Not running ZFS/vdevs/whatever means that the servers don’t need a silly amount of RAM to keep up, so maybe both servers can make do with cheap, low-power motherboards and CPUs?

A setup like that would allow you to take down server one for maintenance while server two takes over serving the family. If one server’s PSU dies, you have a spare server already up and running. If a drive in one of the servers dies, you still have the data on the other server.

1 Like

You mean by using a Ryzen chip instead? I’ve read mixed trip reports about this: some say it works with the right chip/mobo combo, others say it only appears to work, and yet more say it doesn’t work at all. And everything I might read/watch is old and may be out of date because of a more recent firmware release or something. It’s hard to feel confident going in; that’s why I thought EPYC was the right move. Do you have any resources I could check out for a more definitive answer?

Is it because of the IPMI, or is there something else about the board that makes it less efficient vs another board with a similar feature set? My understanding is that IPMI (on board or via expansion) will take some extra juice no matter what, but I’m OK with that for the utility it provides. Confidence that ECC will work was the main draw for me.

Is this the list you’re referring to? “Die sparsamsten Systeme (<30W Idle)” (“the most economical systems”, a Google Sheet)?

There’s not too much more to add from my OP. I have 2-4 users (maybe more once in a while), streaming video/music and storing documents.

A few real world examples:

  • Stream music to different rooms
  • Stream different videos to multiple iPads
  • Dump 100GB of RAW photos after a vacation one day
  • Digitize my Grandfather’s record collection into FLAC

That sort of thing. My Synologys were always running at low load and idle most of the time. This will be a pure NAS, not compute in disguise.

Thank you!

For those examples, a SLOG is not going to be helpful.
The hard drives will easily handle sequential writes for storage. I just tested moving some RAW photos to a 3-drive ZFS pool and it completely saturated the 2.5GbE connection.

ARC or L2ARC will only be helpful with data that is accessed multiple times, not a video that you will watch only once. But it helps if you are listening to an album a lot or editing a photo.

You could go for an L2ARC, but it will also take up more RAM (to index it). I would spend that money on more RAM instead.

1 Like

ECC works on Ryzen; it’s listed on the spec pages (under “Connectivity”), etc.

It’s been confirmed to work on multiple Asus boards and I can personally confirm that it works on my X670E ProArt board.
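
If you’d rather verify than trust forum reports, this is roughly how you can check that ECC is actually active on Linux (assuming the kernel’s EDAC driver supports your platform):

```
# Should report "Multi-bit ECC" (or similar) for the installed DIMMs:
sudo dmidecode -t memory | grep -i 'error correction'
# EDAC only registers a memory controller when ECC is live:
ls /sys/devices/system/edac/mc/
# Corrected-error counters exist (and normally read 0) when ECC is working:
cat /sys/devices/system/edac/mc/mc*/ce_count
```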

I hear you! I haven’t built with rack gear much outside of work (not my main area), and it’s harder to find reviews, and the time, to shop around unfamiliar territory, so I don’t entirely know where to look, what to avoid, or who is reliable (beyond the common names like Supermicro). The size/features/build friendliness of the HL15 really does seem perfect for me though.

A few people have replied saying this, so I’ll definitely look into it! I want to trust everyone’s experiences that the consumer SKUs with ECC do actually work. I’ll have to do some more reading, or maybe I’ll just bite the bullet and try, though I really would prefer out-of-the-box confidence since I don’t want to tinker with this once it’s running.

That’s really great to know. I don’t fully grok why TrueNAS is so RAM hungry, but I wondered if 128GB was overkill for me. If there’s no benefit to my workload from the extra speed, I’m perfectly fine with the RAM at 3600MT/s. Are there any nuances to be aware of with TrueNAS if I start with 2x32GB and then add another two (of the same model) later?

Thank you! I’ve read some posts about mirroring boot drives for TrueNAS redundancy; is that something you’d recommend? I’m guessing this practice is more for people who want high availability? I can’t imagine storing some config and restoring the OS to a new drive is a challenge, but I don’t know how TrueNAS works.

Understood, thank you!

This was my impression as well. I actually wanted to look into this years ago with the X470 AM4 platform but decided to be lazy since my Synology was still working fine. My first thought with this build was to go AM4 for cost savings until I read some of the things you just described.

Great question @charles7. Thank you both!

As far as storage goes you’ll be fine with 32GB, as much of it won’t be cacheable anyway due to the type of data you’re going to store/consume. That being said, since you’re going to be running more software than just NFS and Samba, you need to take that into account.

Not a TrueNAS user myself (FreeBSD, which would be closer to TrueNAS Core), but everything will “auto tune” itself, so adding more RAM later on won’t be an issue.

As for a mirrored boot drive, I’m going to say maybe? On the plus side, you’ll have all configuration data ready to go, compared to a backup which you need to restore, but as long as you’re not doing something outside TrueNAS’s UI you should be fine, I imagine. That being said, I’m not sure there will be any value in “automatic availability”, as it’s undefined how your motherboard’s BIOS will react to a faulty boot drive (it’s not aware of the concept of a hot spare/mirror). It might just hang during boot trying to use the faulty boot drive, although in the best case you just need to point it to the mirror drive and it’s all up and running again.

Here are a few things to read to understand this further:

  1. L2ARC | TrueNAS Documentation Hub
  2. https://www.reddit.com/r/truenas/comments/tmem86/can_someone_explain_what_is_zfs_cache_and_why_it/
  3. ZFS Caching (Scroll down to ARC section.)

In short, ZFS has an ARC (Adaptive Replacement Cache), which is a RAM-based read cache. ZFS automatically adds and removes items from this cache. This cache is beneficial for items which are accessed (read) multiple times in a period; if you are watching a movie once on your media server, ARC will have no benefit.
L2ARC (Level 2 ARC) is the same concept except it is not RAM-based; people typically use an SSD. The purpose is the same; it is just an extension, as obviously your RAM is not limitless.
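
Before buying any L2ARC device, you can watch how the existing RAM ARC is doing on a running system; these utilities ship with OpenZFS, so TrueNAS SCALE should have them:

```
# Inspect ARC behaviour under your real workload first:
arc_summary        # overall ARC size, hit ratios, MFU/MRU breakdown
arcstat 5          # rolling hit/miss statistics every 5 seconds
```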

1 Like

Thank you, that was very helpful! I’ve read/watched the same information multiple times in multiple ways at this point, but it’s finally starting to stick and help me understand what I need for this use case.

1 Like

Thank you everyone for your thoughts and advice!

To start closing the loop and hopefully help future readers, after doing some more research I’ve settled on this gear list:

I’d appreciate any further thoughts or flags for mistakes but I think this works based on all the feedback. I could probably keep reading/shopping and shift parts around to squeeze out another few hundred dollars in savings but that’s not worth my time at this moment. I’d rather deal hunt with a known parts list. This feels safe enough to satisfy my goals with some flexibility for the future.

And to close the loop on my initial questions to help anyone else out:

  1. Mirror vdevs are good for my (very common) use case, provided you have a responsible backup solution and consider the cost/reliability tradeoffs (see the sketch after this list)
  2. RAM speed for this particular NAS-only use case is not terribly important. Consider it if you will do a little compute that might benefit from the speed increase, but for serving files and a few Docker services it doesn’t matter. Similarly, start with less RAM and then add more once you know your data access patterns better.
  3. SLOG/L2ARC are unnecessary for this particular use case, but you can assess this later once you know your data access patterns better
  4. 650 watts is enough. Use this spreadsheet as a starting point to understand how models compare on price/efficiency: PSU Low Idle Efficiency Database by Wolfgang's Channel - Google Sheets but realize it’s about a year or so out of date at this time. Just do your research, which you have probably already started if you’re here.
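
And for anyone landing here later, the layout from point 1, sketched as raw zpool commands (device names illustrative; in practice TrueNAS builds this through the UI using /dev/disk/by-id paths):

```
# A pool of 2-disk mirror vdevs: each vdev survives one disk failure,
# and resilvers only need to read the surviving half of that mirror.
zpool create tank \
  mirror /dev/sda /dev/sdb \
  mirror /dev/sdc /dev/sdd
zpool status tank
```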

I really appreciate how friendly and helpful everyone was. Thank you @charles7 @diizzy @nutral !!

1 Like

Boot drive: Be aware of the firmware/hardware issues Samsung has with their drives; see the NVMe quirks list in the Linux kernel’s drivers/nvme/host/pci.c (linux-block tree; search for Samsung).
That’s why I recommended a few other tried-and-tested options rather than Samsung…
The Crucial T500 might be a better option.

HBA: You can get a much newer LSI 9400-16i at that price?

Cooler: Check clearance, and this is really a waste; a Thermalright Peerless Assassin 120 SE is like 1/3 of the price and will do more than enough as far as cooling goes, if you insist on getting one.

1 Like

I’ll take a look! The Crucial P3 Plus and T500 both reviewed poorly (for what that’s worth) compared to the Samsung:

https://www.storagereview.com/review/crucial-t500-ssd-review
https://www.storagereview.com/review/crucial-p3-plus-ssd-review

I’ve used Samsung Pro NVMe drives in my (Windows) desktops with no issues for a long time, but if they’re not suitable here I’ll keep looking. Is the SK Hynix P41 a valid substitute for this build? It reviewed similarly to the Samsung and still better than the Crucial.

I could, but as I understand it, that’s (even) more HBA than I need, and I’d have to flash it myself. I’m sure that’s easy enough; I was just willing to throw money at the problem. I could get the 9305 cheaper still if I go that route, so maybe I’ll reconsider tinkering.

Understood. Mostly I was thinking about future needs, and having one of the Noctuas around to cannibalize for a future build made a kind of sense. Nerd math :slight_smile:

Thank you!