Lower (ish) power NAS & home server

Hey there, you bunch of geeks,

I’m thinking of moving from my Synology 4-drive NAS to a home-made SSD NAS/server, with the plan to run a few VMs on it.

I’m planning, after seeing @wendell 's video about it, to use one of Icy Dock's 6x SATA drive enclosures in a 5.25″ bay and to fill it with SSDs. As the motherboard I’m looking at has a limited number of SATA ports, I’m thinking about one of these NVMe/SATA adapter cards to get additional SATA ports.

Edit : I’m thinking of an NVMe boot/cache drive, and 4 to 6 SATA drives in ZFS RAID-Z1/2, depending on the drive count.

My CPU choice was made after experimenting with one running a few VMs, without the NAS part. I’ll also use that machine as a router (most likely plain Debian with NAT and Pi-hole) in one of the VMs, with a 2x 10/25G interface (yup, I’m expecting to NAT at several Gbps at least).

Obviously I’m thinking small footprint, and I’m not expecting crazy performance, just a cool machine, mostly silent, low power(ish), for a long and peaceful life.

Any comments, views, rotten tomatoes, cute kitties or advice are welcome. Thanks in advance!

- CPU : Intel i3-13100 (LGA1700)
- CPU FAN : Noctua NH-L9x65
- RAM : Corsair Vengeance LPX 2x 32GB DDR4
- Motherboard : Gigabyte B760M DS3H (Micro ATX)
- Additional Network card : Mellanox ConnectX3 (Low power)
- SSD 5.25 Bay : Icy Dock MB326SP-B 
- Additional SATA ports : Delock 90010 : PCIe x1 -> 4x SATA (Low Profile)
  Or 
- Additional NVMe SFF-8643 to M.2 adapter :
  - https://www.delock.de/produkt/63918/merkmale.html
  - https://www.delock.de/produkt/62721/merkmale.html
- SFF-8643 to 4x SATA cable : https://www.delock.de/produkt/83315/merkmale.html?g=947

Here is a list of cases I spotted at local (Swiss) suppliers. My size reference is my current Fortron CST350.

mini-ITX :
- Sharkoon Shark Zone C10, 27L
  - H 27.3 x L 42.3 x W 23.4 cm, 3.59 kg
- Silverstone Technology SUGO 14, 19.5L
  - H 21.5 x L 36.81 (check) x W 24.7 cm, 6.22 kg
mATX :
- Silverstone SST-SG11B Sugo, 22.5L
  - H 21.2 x L 39.3 x W 27 cm, 4.45 kg
- Chieftec Le Cube, 33L
  - H 32.4 x L 45 x W 38.3 cm, 5.6 kg
- Chieftec CI-02B-OP, 30.5L
  - H 45 x L 36 x W 32.3 cm, 5.4 kg
- LC-Power 1406MB-400TFX, 14.13L
  - H 34.2 x L 40.5 x W 10.2 cm, 4.16 kg
- Silverstone SST-GD05B, ~20L

edit : fixed the doubled SG14 case.
edit2 : split into mini-ITX / mATX, will affect my mobo choice
edit3 : added a desktop-style case, which also has nice side-to-side airflow


Intel certainly seems the way to go for low power NAS.

The only thing I can give advice on is to not overspend on fast RAM, and to check ARK for the max RAM speed your CPU supports without an overclock.

Also, you can do the whole socket mod thing to reduce temps by as much as 5°C! It might be worth the few-dollar investment in a contact frame.


Oh, I didn’t know that one could make sense on that build. I’ll have a look at that, and will keep track of the RAM speeds obviously. Thanks for pointing this out!

The build looks decent and should have a fairly low power draw. I like that you didn’t go for an E-core CPU. I have something similar from ThinkPenguin (the 4-bay NAS) with an i5-12500T and 4 SSDs, and it draws about 60W from the UPS.

I’m biased towards the SG11B, because it’s a case I’ve also been eyeing for a different build (once I figure out how to power it via 12V with a fairly hungry GPU).

But I’m here to question a few things.

  • What is your workload going to look like?
  • How many VMs do you plan on running on it?
  • Are containers in the plan, in case you can get away with them?

I’m saying this because I’ve gotten away with running many VMs off of an HPE ProLiant MicroServer Gen10 (in production) used only as a NAS, running 4x SSDs in RAID-10 (md), over 2x gigabit in balance-alb (if you don’t know how that works: some connections are made on one port, some on the other, despite them sharing an IP, to increase aggregate throughput to multiple devices; some hypervisors were using one port, some the other).
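
For reference, a balance-alb bond on a Debian-ish box can look roughly like this (a minimal sketch with ifupdown + ifenslave; the NIC names and address are placeholders, not my actual config):

```
# /etc/network/interfaces snippet -- minimal balance-alb bond sketch
auto bond0
iface bond0 inet static
    address 192.168.1.10/24
    bond-slaves eno1 eno2      # the two gigabit ports to aggregate
    bond-mode balance-alb      # adaptive load balancing, no switch-side config needed
    bond-miimon 100            # link monitoring interval in ms
```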

The ethernet was definitely the limiting factor, but everything was still really snappy. You’d get even better performance on local storage, obviously. The point here is that, with SSDs, even with limited bandwidth to them (and using NFS), VMs were still snappy.

Don’t bother, you’ll just slow down the 4-6 SSDs. With the amount of RAM you have, it should easily be enough to satisfy ZFS’s caching needs. I’d personally go with 4x SSDs in striped mirrors, particularly because you said VMs. You could get away with RAID-Z2 with 6 SSDs (which would give you more space at the cost of some performance), but with a low number of drives it’s better to go striped mirrors.
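
For illustration, the two layouts would be created roughly like this (just a sketch, device names are placeholders):

```
# 4 SSDs as striped mirrors (two 2-way mirror vdevs) -- better IOPS for VM workloads
zpool create tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd

# 6 SSDs as a single RAID-Z2 vdev -- more usable space, fewer IOPS
zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
```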

You’ll likely run into issues using that kind of CPU cooler with that CPU without throttling it. I once shoehorned an Opteron 185 into an SFF case with a beefier cooler (same style) and it was a struggle to keep it within reasonable temps.

I would very much recommend a motherboard with at least an Intel NIC and I’m not sure why you’re going for DDR4 since DDR5 is just as cheap.

DDR4:
Asrock H670M Pro RS, Z690M Phantom Gaming 4, Z790M PG Lightning/D4

DDR5:
ASRock Z690M PG Riptide/D5
All of these will also give you better expansion possibilities regarding PCIe slots.

You might want to look at the SilverStone ECS06, which uses the ASM1166 (instead of the ASM1164); they have a firmware update available for a potential compatibility issue, which I don’t think is available and/or applicable for the ASM1164 controller.

Chieftec CMR-625 seems like a much neater solution without the hassle of using adapter cables.

As for SATA SSDs, I’d recommend ones based on the Marvell 88SS1074 controller, as they’ve proven to be very reliable; they’re becoming harder to find but are still rather budget friendly.

Not sure if you’ll save much compared to AM5 these days in terms of power usage though, and given your workload you probably want to look at FreeBSD with ipfw2 (or pf) as the base OS.

Good point, it was also high on the list because of the flexibility.

Not much: a home NAS/lab for IoT, an Ampache player, and no video encoding, for a geek (me) and my significant other (not a geek).

I currently have 5-6 running VMs/CTs, but low-usage ones.

CTs under Proxmox are well supported, and if I need more, I’ll spawn a Linux VM and dockerize / reverse-proxy those on the same machine. As I work for an ISP, I’ll offload my heavy stuff at work anyway.
As I briefly mentioned in my initial post, I plan to use a ConnectX-3 or ConnectX-4 for the network, so I’m not really worried about bandwidth. My client machines are only GbE anyway, and I won’t have more than 2-3 devices hammering the NAS at the same time. But really good point.

Really good point, I’ll think about increasing RAM instead of the NVMe cache.
Also a good point on the striped mirrors; I’m just thinking that if I need to increase my volume size at some point, RAID-Z1 or Z2 looks more flexible for that.

I think I’m not in the same landscape for the CPU; the TDP looks about half on the i3-13100 ( AMD Opteron 185 vs Intel Core i3-13100 [cpubenchmark.net] by PassMark Software ). Also my load will probably be as low as what I do on my current machine, where I see my CPU temp staying around 38-40°C. I can probably look at the socket mod that @1almond mentioned above if needed.

Really good point; when I built one of my current machines, DDR5 wasn’t really affordable yet. I’ll dig into that a bit more.

Then I’ll need another PCIe x4 or x8 slot for the controller, as I need a slot for the SFP+ network card, but thanks for the tip.

Oh, I didn’t know that product, interesting; however it has only one fan vs. the two in the Icy Dock. I’ll see if I can procure that one around here.

I’m more familiar with Linux and Proxmox. I tried my luck for a while with FreeBSD (I ran my daily driver, an X1 Carbon Gen 6, on FreeBSD for some years) but didn’t have the time or energy to resolve all the bugs / understand the mechanisms (and I know some FreeBSD devs :wink: ).
Regarding AM5, I suspect my workload is lower than what would make sense for a “bigger CPU”, but to be honest, I haven’t looked at whether there’s a low-power CPU in the AM5 landscape right now.

Thanks both for the insightful inputs !

…except that the Intel CPU boosts to 110W and Opteron TDP is max 110W.
https://www.intel.com/content/www/us/en/products/sku/230575/intel-core-i313100-processor-12m-cache-up-to-4-50-ghz/specifications.html
You likely want a T-model CPU if you want to go SFF, or more or less force it not to boost at all.

If you look at the motherboards I mentioned none of these will have the issue you’ve mentioned about your current choice.

There’s quite a bit of a difference between a daily driver and a server but if routing and firewall performance isn’t of interest then I guess you have your answer :wink:


You don’t need to increase RAM, although that’d be a nice bonus. And with DDR5, you can get 96GB kits (2x 48GB) which are (sometimes) cheaper than 128GB kits.

How come? With ZFS you’re stuck with what you have initially in your pool, unless you stripe in another vdev of the same configuration (if you care about performance). So if you go with 6 SSDs in RAID-Z2, you’ll need another 6 SSDs in RAID-Z2 and then stripe the two vdevs. If you start with 4 SSDs in Z2, you can’t expand to 6, you have to add another 4.

With striped mirrors, you only need to add 2 SSDs (or however many devices you have per mirror, but at home, rarely anyone does 3-way mirrors). If you start with 2 mirrors, each with 2 SSDs, and stripe them, you get up to 4. To increase the size, add another 2 SSDs as a mirror and stripe it with the others.

When adding vdevs to a ZFS pool, you won’t get the benefit of the whole pool’s performance for existing data, only that of the last-added vdev. To rebalance the pool and get the max performance, you need to transfer the data back and forth with another zpool (you could clone small datasets within the same pool, delete the originals and slowly work up to larger datasets, but it’s always easier to use an auxiliary zpool, particularly if you already have a backup server with spinning rust).
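
Roughly, growing and then rebalancing a striped-mirror pool looks like this (a sketch; pool and dataset names are placeholders):

```
# Add a third mirror vdev to an existing striped-mirror pool
zpool add tank mirror /dev/sde /dev/sdf

# Existing data is not redistributed automatically; to rebalance, copy it out
# and back, e.g. replicate a dataset to an auxiliary pool and restore it:
zfs snapshot -r tank/vmstore@move
zfs send -R tank/vmstore@move | zfs receive backup/vmstore
```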

Fewer fans = fewer things to break = lower power draw.

I wouldn’t worry about heat on the SSDs if you’re going with consumer stuff. If you got old Intel enterprise 2.5″ drives, I’d be a little worried; those tend to put out a lot of heat (I had a Dell-branded Intel 256GB SSD that was steaming, so I had the case fan blow some air on it).

Chieftec always seemed more available in Europe, so you should be able to find one (I prefer the all-metal construction of Chieftec products myself, compared to the flimsy plastics from Icy Dock, or Icy Dock’s really expensive metal variants that do the same thing).

Wew, I wouldn’t attempt to run FreeBSD as a desktop, you were ballsy. I have to say, FreeBSD on servers is nice (and not as much of a PITA as desktop OSes in general tend to be anyway).

I won’t try to make recommendations on the host OS. If you’re used to proxmox and it works for you, go for it.

+1 for the T model, but those tend to be harder to find and, for whatever reason, more expensive, so disabling turbo is probably where it’s at.

I’d be really curious to see how far FreeBSD can push ConnectX-es compared to Linux (particularly compared with small distros like Alpine that don’t do much in the background).

To be fair, for a router VM, people generally tend to go pfSense (which I really don’t recommend to anyone besides beginners anymore). You can pick your poison for the router VM, be it FreeBSD or Debian.

For just routing and firewall, I see no harm in freebsd, unless you don’t want the hassle of learning pf (which I find superior).

True, I think I already did that on my current i3-13100 (not a T), with BIOS tweaks and sysctl thingies. But as said, I’m not expecting huge loads. For now, my temps over a year have looked really decent (OK, without the NAS workload). And if I go with the Z690M, as you said, I’ll be safe. But good point indeed.
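
For reference, the no-boost knob can also be flipped at runtime on Linux; a minimal sketch, assuming the intel_pstate driver is in use:

```
# Disable turbo boost until the next reboot (intel_pstate driver)
echo 1 | sudo tee /sys/devices/system/cpu/intel_pstate/no_turbo
```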

If I need routing performance, I’ll go the VPP way (I’ve got a lab at work for that) :wink: But that’s another debate, and I’m not advocating Linux vs *BSD here, just saying where I’m heading with the OS so the choice makes (some kind of) sense for the reader.

Last time, for a server, I went with Samsung’s PM893, and I was considering Samsung Pros or similar enterprise stuff. I didn’t check the temperature behaviour too much; that was one of the next challenges in this quest.

Yup, I’ve got my habits with Linux (and my scripts), and now I’ll need to move from ipchains^H^H^H^H^H^Htables to nftables anyway. Yeah, I could head for pf, but well, I’m good with my current devils.
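
For what it’s worth, the nftables equivalent of a basic MASQUERADE rule is pretty compact; a minimal sketch, with a placeholder WAN interface name:

```
# /etc/nftables.conf fragment -- source-NAT everything leaving the WAN interface
# (plus net.ipv4.ip_forward=1 via sysctl for routing)
table inet nat {
    chain postrouting {
        type nat hook postrouting priority srcnat; policy accept;
        oifname "wan0" masquerade
    }
}
```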

I’ll have a look; I’m not in a rush anyway, so I can wait until one pops up around here.

And thanks for the pointers on ZFS; I know my plans for growth aren’t really clear for now, I need to change that :wink:

I’m running a Linux router/firewall too and I haven’t moved from iptables (I don’t use custom chains though, too much of a bother for such a small setup; I only have 5 VLANs and a WAN and control access from the subnets). I’m biased towards OpenBSD for routing and firewalling, but if there’s performance to be had (like the ConnectX 25Gbps+ stuff), FreeBSD routing is preferred (I never had to deal with it, but I’ve done my fair share of research a while back; FreeBSD was performing better than OpenBSD on any NIC, but if speed’s not your goal…).

I really suggest striped mirrors with 4 drives, and if you need to add more, get 2 more drives later. And that’s because you’re running VMs (I’ve run more than 30 VMs from that single SSD NAS I mentioned earlier; their workload wasn’t much, just some OSes and web servers on each, with a single Oracle RAC database on it).

Make sure to partition your drives and not use the entire disk, because sometimes you’ll have to add a different brand or model and they’ll come with slightly different sizes, so a partition of a fixed size, like 3.7TB for a 4 “TB” drive, will ensure you can add other SSDs to the pool, at the cost of a few megabytes.
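
Something along these lines (a sketch; the size and device names are placeholders):

```
# Fixed-size partition slightly under the raw capacity, so a future drive from
# another brand with a few MB less can still join the vdev
sgdisk --new=1:0:+3700G --typecode=1:BF01 /dev/sda   # BF01 = Solaris/ZFS partition type
zpool create tank mirror /dev/sda1 /dev/sdb1 mirror /dev/sdc1 /dev/sdd1
```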


So after much more digging, and with everyone’s help (thanks!), I updated my parts list:

- Silverstone SST-GD05B (20L, side to side fan flow)
- Seasonic B12
- i3-13100 (T if I can find one)
- Asus Tuf Gaming B760M-Plus
- Teamgroup Elite DDR5-4800 CL40, w/ ECC, 2x 16GB
- Noctua NH-L9x65
- Delock 90010 (just for some additional SATA ports)
- IcyDock MB3226SP-B (taking the risk of 2 fans, but I prefer the SATA power plugs and the screwless system)
  - Noctua NF-A4x20 FLX (in case I want to replace those fans)
Stuff I'll get from my stock / eBay auctions :
- 2x NVMe 256GB boot drives (I have some spares of these)
- 4x undecided 2.5″ SSDs (maybe Samsung PM or Intel DC)
- 1x Intel i210 RJ45
- 1x Mellanox ConnectX-3, 1x SFP+ port

I plan to do a Proxmox boot mirror on those 2 NVMe drives, and then create a RAID-Z1 out of the 4 SATA drives. Once I want to extend the storage, I’ll have the physical space to add 2 bigger drives to the pool, resilver, and then swap out the small drives.
I don’t expect to do that soon anyway; this will be remote storage for some backups, and I’m not going to push high bandwidth from within the network to the NAS.
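
The capacity bump later would roughly go like this (a sketch with placeholder device names; each disk gets replaced in turn, and the RAID-Z1 grows once the last one has resilvered):

```
zpool set autoexpand=on tank           # let the vdev grow once all members are larger
zpool replace tank /dev/sda /dev/sde   # old small disk -> new bigger disk, wait for resilver
zpool status tank                      # check progress, then repeat for the remaining disks
```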

I’ll have one router VM (up to 1G NAT anyway, not much more expected), and one other VM for the backup part, hidden behind the router. I’m not expecting more on that site for now.

And well, it’s still an experiment in the end :wink: I’ll report my failures and results after some time.

You can’t use ECC memory with the B760 chipset (if you’re referring to UDIMM ECC, that is).

Interesting, it seems the one I’m looking at (on-die ECC: model TED532G4800C40DC01) is referenced as supported on the motherboard (at least per their documentation: TUF GAMING B760M-PLUS|Motherboards|ASUS Global). Maybe I should consider a non-ECC one to be on the safe side.

DDR5 comes with on-die ECC; that’s not the same as UDIMM (side-band) ECC.
The motherboard comes with a Realtek NIC, which isn’t great…

You might want to consider another case, as Z690 boards are currently being cleared out (and most are much better than the B760 offerings at similar pricing); however, most are ATX.

Get 5600 MHz RAM; most models are still within JEDEC spec (check) and are usually available at the same price.

The ASMedia ASM1064 is very old; grab a card using the ASM1164 or ASM1166 instead. I don’t see the point in this case though, as your current case won’t allow for more drives?

Thanks for the tips @diizzy !

I’m adding other NICs, so I don’t really care, but the Realtek isn’t a great choice, indeed.

ATX is a problem; I’m looking for a ~20L case, so mATX or smaller. So it seems the solution is your initial suggestion, aka:

  • AsRock Z690M PG Riptide/D5

I was matching the CPU’s max supported speed. However, I checked the doc for the Riptide and I see that these are supported:

  • Kingston KF556C40BBK2-32

Ah, totally missed that (also, I’m totally clueless about these :wink: ). I went for simple and PCIe x1. So in that vein, I spotted other available chips:

  • Marvell 88SE9215 ( on Delock 90382 )
  • ASMedia - ASM1164 ( on StarTech 4P6G-PCIE-SATA-CARD )

Any best picks ?

Crucial 32GB Kit (2x16GB) DDR5-5600 UDIMM | CT2K16G56C46U5 | Crucial.com or the Pro variant will do fine (you’d want 1.1V voltage). The Kingston ones only seem to support that partially (probably at lower than rated speeds).

The ASM1164 by far; not sure about compatibility or whether you can use the ASM1166 firmware in case you need a compatibility patch, though (Google it).

The mobo looks decent; the NIC is just a rebranded i225. Not sure about the revision, but given it’s a relatively new mobo it’s likely one of the later ones.

Those also clock much faster than the 32GB modules, if you care. Basically, for whatever reason this generation, the 24/48GB modules run much, much faster.
