Cheap SSD NAS Motherboard Opinion

I’m in the process of building a new homelab using some Dell Optiplex 3070 micros that I picked up used. I’ve already put Proxmox on them and created a cluster. I’d like to build a low-power NAS to go alongside it. LTT’s video about the FriendlyElec CM3588 made me think that an SSD NAS might be more accessible than I thought, and also a nice way of keeping power consumption low.

I don’t need crazy storage space as I already have a spinning-rust NAS elsewhere that handles the “archival” of my files. I installed 2.5G network cards in the 3070s using the internal M.2 E-key WiFi slot, to help with cluster management as well as the Ceph storage traffic. These are the first 2.5G devices I own, so I would also need a new switch.

While looking around I stumbled upon the CWWK Monster NAS board.

This board, in the N150 variant, would give me a nice low-cost, low-power NAS that could also act as a switch between the 3070s. For roughly the same price as the FriendlyElec CM3588 plus a 2.5G switch (maybe a bit more with the RAM), I get an x86 board, more expandability, and direct connections between the cluster nodes and the NAS.

I’m under no illusions about what I’d be getting for the price, though. I know the board has limited PCIe lanes, is probably not as stable as a more “standard” board, and I’ve read that it might have trouble reaching low C-states. Quality can apparently also be hit or miss…

Here are my questions:

  • Do you think I can achieve similar or better functionality with more standard hardware at a similar price point?
  • Could you recommend specific alternatives (including used options) that would fit my requirements?
  • Has anyone had experience with these CWWK boards for this type of application?

To recap, I’m looking for a low-power, low-cost CPU and motherboard combo for an SSD (or even M.2) NAS that can saturate a 2.5G connection without any issue. My budget is roughly 250 €, and I live in France.

Thanks in advance for any insights!

Odroid H4 Plus which also provides some kind of aftermarket support?

https://www.kubii.com/fr/boitiers-ventiles-intelligents/4309-2059-boitier-officiel-pour-odroid-h4-3272496318656.html or Type 4 (Type 4 might need longer cables, see ODROID-H4 Case Type 4 – ODROID)

  • 16 GB of RAM

Done :slight_smile:

That’s an option I hadn’t considered, thanks!
Though with the SATA power cables and the power supply I’m already close to the budget, and I would still need a 2.5G switch to connect the 3 nodes of the cluster plus this board. But I do like the form factor :slight_smile:

This is how I did it: https://forum.level1techs.com/t/build-log-silent-night-my-own-take-on-quiet-and-power-efficient-nas

It’s been working fine to this day; hopefully it will stay that way.
There should be a reason given for each part I chose. If not, feel free to ask me anything.

P.S. I helped a friend with a CWWK board because he needed to control the fans hooked to the chassis, but the only header available doesn’t allow for any control. So pitfalls when buying cheap Chinese boards are just around the corner.

You might also want to consider the fact that there are usually no BIOS updates, so security goes out of the window, and so does any kind of bug fix. If you don’t care, go for some random Chinese board, but it’s like buying a new car with known defects. :wink:


Very nice build, I really like the case!
Did you get a final wattage measurement? I couldn’t find one in the replies.

Yeah, that’s why I posted, I couldn’t shake this exact feeling…
I did some additional research on the Odroid H4+ and the prices on their website are more in line with my budget. I also discovered their Net Card.
You were right, all in all it makes for a very compelling package, thanks!

Honestly, if it’s just a homelab then get another SFF and slam an M.2 NVMe on the board, another in a PCIe-to-M.2 NVMe adapter card, and a SATA drive for boot.

That’ll give you gigabytes per second of throughput. Add in the same M.2 E-key to 2.5 gig adapter and you’re there, but honestly I’d bump the NAS up to 10 gig and plug it into the 10 gig port of your switch (dunno if your switch has a 10 gig trunk port, didn’t look it up).

This gives you more of the fleet you already have so maintenance is MUCH simpler. You don’t get ECC RAM, but for a homelab that isn’t critical, you’ll be alright-ish.

If you want a proper NAS on a budget, snag a workstation with a Xeon and ECC UDIMM memory. Slam the same M.2s into adapter cards and boot off SATA. This will DESTROY your budget though.

my latest workstation buy was:

P.S. welcome to the forum, where budgets come to die


Did you even bother to read or is this another rant that’s not relevant?

Apart from the fact that it’s way out of budget, I’m not really sure why you decided that the best (and only) solution is a mirrored M.2-based array, or where your idea of 10GbE comes into play.

Elaborate on a “proper NAS”: an N95 can saturate a 2.5GbE connection just fine, the power usage will be much lower, and it’s not EoL (also, the performance hit on Comet Lake CPUs is pretty bad if you apply mitigations for known vulns). You also missed the fact that he’s in Europe (.fr), so importing from the US isn’t really an option unless you’d argue that a ~1.3x markup excluding shipping is still “good value”.


I am still unclear on how much storage, and of what type, OP is hoping to use.

The 8TB Samsung SATA SSD is currently ~$550

Don’t let the fake Amazon “sale” discount fool you; these were down in the ~$400 range about 18 months ago. You might be able to find some good-condition ones on eBay for cheaper as well.

The boards OP posted have (limited) PCIe expansion but that does open up the possibility of something like an HBA card for even more SATA.

Indeed, I hadn’t posted one. The power figure is around 12 W at idle with a Tailscale container running and four machines connected, one of which is off-site.

I had to make everything fit, as you saw, but it’s nice and cheap. I still haven’t gotten around to having anything 3D printed to make it look properly finished.
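
For a sense of what an idle figure like that costs over a year, here’s a quick back-of-the-envelope calculation. The electricity price is an assumption (roughly the French regulated tariff); adjust it to your own contract.

```python
# Rough yearly cost of ~12 W at idle. The tariff is an assumption
# (roughly the French regulated rate); adjust to your own contract.
IDLE_WATTS = 12
EUR_PER_KWH = 0.25

kwh_per_year = IDLE_WATTS * 24 * 365 / 1000   # ≈ 105 kWh
print(f"{kwh_per_year:.0f} kWh/year ≈ {kwh_per_year * EUR_PER_KWH:.0f} €/year at idle")
```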

Those are NASTY drives. They’re not a good choice for a storage appliance. Prices on SATA SSDs have gone up like crazy since I built my NAS.

You’re right, I didn’t give much info on the storage I plan to use. It’ll probably be some 2TB 2.5″ SSDs, depending on the price. I don’t need a huge amount of storage as I’ve got another NAS for long-term storage.

Nice! Those are the kind of power figures I’m hoping for.


Can you give some more details on this? I put one of these 8TB Samsung QVO drives in my NUC as a torrent seedbox (seeding real, actual Linux ISOs lol) and it worked really nicely, no issues. I also picked up a 4TB Crucial MX500 a while back to hold some reference files for another system and that worked well too, but those don’t come in 8TB sizes.

I am also under the assumption that the motherboards/systems OP is describing in the first post are going to be the main bottleneck to throughput for various reasons; after all, a 2.5Gb network should max out around ~300MB/s, which is still less bandwidth than these SATA SSDs should support.
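
To put rough numbers on that (assumed figures, not measurements): 2.5GbE works out to about 312 MB/s raw, a bit less after protocol overhead, so a single decent SATA SSD already has more sequential bandwidth than the link. A quick sketch:

```python
# Back-of-the-envelope check with assumed numbers (not measurements):
# how much of a SATA SSD's bandwidth a 2.5GbE link can actually carry.

LINK_GBPS = 2.5            # 2.5GbE line rate, gigabits per second
PROTOCOL_OVERHEAD = 0.06   # assumed ~6% lost to Ethernet/IP/TCP framing
SATA_SSD_MBPS = 530        # typical SATA III SSD sequential read, MB/s

line_rate = LINK_GBPS * 1000 / 8              # ~312 MB/s raw
usable = line_rate * (1 - PROTOCOL_OVERHEAD)  # ~294 MB/s after overhead

print(f"2.5GbE raw:      {line_rate:.0f} MB/s")
print(f"2.5GbE usable:   {usable:.0f} MB/s")
print(f"SATA SSD (seq.): {SATA_SSD_MBPS} MB/s")
print(f"Network is the bottleneck: {usable < SATA_SSD_MBPS}")
```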

In general I tend to support the concept of using fewer, larger drives rather than multiple smaller drives. It requires less connectivity (especially in the case of SATA, where every drive needs a data cable plus a power cable, which is not insignificant) and fewer software shenanigans to manage. More numerous smaller drives in some kind of RAID or striped volume might increase bandwidth, but that is not going to be too useful if you are bottlenecked by the network or a PCIe x1 bus, so I am not sure what other advantage one might be seeking in that regard.

In regard to the Samsung 8TB QVO specifically, I do remember running basic disk benchmarks in Windows (I think it was CrystalDiskMark, or one of those), and the IO speeds were definitely subpar compared to, e.g., the MX500. But it was definitely still usable. I seem to remember read/write speeds in the high 300MB/s to 450MB/s range, possibly. So not fully saturating SATA III, but not slow enough to complain about for basic usage.

Sure. To keep it short, the main pain points are the really low endurance and the poor sustained performance in all workloads once the SLC cache is exhausted. These two characteristics make the drives a poor fit for ZFS, which operates at the block level and is constantly moving data around for different operations. I can’t imagine how long a full scrub of an array of those drives would take.

I referenced THIS when choosing. They’re a little better than all the other QLC SSDs, but still not that good.
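
If you want to see that cache cliff for yourself, a crude way is to stream writes at the drive and watch the per-GiB rate; on QLC drives it should drop hard once you write past the SLC cache. This is only a rough sketch (the target path is a placeholder, and a real benchmark tool like fio or CrystalDiskMark is the better option):

```python
# Crude sustained-write probe: write N GiB of incompressible data and print the
# rate for each GiB. On a QLC drive the rate should fall sharply once the SLC
# cache is exhausted. Path and sizes are placeholders; don't run this on a
# drive you care about filling up.
import os
import time

TARGET = "/mnt/ssdpool/slc_probe.bin"   # hypothetical mount on the drive under test
BLOCK = os.urandom(64 * 1024**2)        # 64 MiB of random (incompressible) data
BLOCKS_PER_GIB = 1024**3 // len(BLOCK)  # 16 blocks per GiB
TOTAL_GIB = 64                          # should comfortably exceed the SLC cache

with open(TARGET, "wb", buffering=0) as f:
    for gib in range(1, TOTAL_GIB + 1):
        start = time.monotonic()
        for _ in range(BLOCKS_PER_GIB):
            f.write(BLOCK)
        os.fsync(f.fileno())            # force the GiB out of the page cache
        rate = 1024 / (time.monotonic() - start)
        print(f"GiB {gib:3d}: {rate:7.1f} MiB/s")

os.remove(TARGET)
```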

This makes sense, but when making these considerations there are also performance characteristics that go beyond straight sequential reads and writes. If you use ZFS, as is popular these days, it’s important to have drives that can operate reliably when fully loaded, for a variety of reasons. Those drives tend to drop out when put under the kind of block-level stress ZFS generates.

Thanks for the details. I don’t use ZFS and never planned to, so I never had such considerations. MergerFS + SnapRAID has been plenty for all my use cases, or plain old RAID1 via macOS Disk Utility.