That board with an SFP+ card looks like an awesome NAS!
You can convert these MICO to 16x SATA drives and have two M.2 slots?
Not worth it. The restricted airflow will drive up temps more than a thick layer of dust would.
Sync data! I am not sure OP realizes that a SLOG only caches sync writes, not async writes like NVR footage!
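For anyone unsure which writes a SLOG actually serves, here's a dry-run sketch. Pool and device names are hypothetical, and the commands are echoed rather than executed so nothing is touched:

```shell
# Hypothetical pool "tank" with an NVR dataset; drop the `echo` to run for real.
POOL=tank
# Only sync writes (NFS exports, databases, sync=always datasets) land on the SLOG.
# NVR software typically issues plain async writes, which bypass it entirely.
echo zpool add "$POOL" log mirror /dev/nvme0n1 /dev/nvme1n1
# Check whether a dataset is even doing sync writes before buying hardware:
echo zfs get sync "$POOL/nvr"
```

If the dataset sits at `sync=standard` and the client writes async, a SLOG will just sit idle.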
Unpopular opinion: Unraid tries to be a jack of all trades; using two systems is way superior. Hear me out. Assume you want some fast storage for VMs, Docker and the like, and a data archive for NVR footage or movies.
Use a small Proxmox host with two fast NVMe drives in a mirror and 10GBit SFP+. The CPU isn't that important; most workloads are not CPU bound. Disk speed is far more crucial in most workloads. You probably won't need a GPU if you make use of Intel Quick Sync; otherwise just add one. If you really need extreme sync write performance, you can add a SLOG. Add it after you've tested the system. You will probably notice that you don't need more performance than an NVMe mirror.
Use another system with plain old HDDs running TrueNAS, also with 10GBit SFP+. If you care about metadata performance, add some cheap 2.5" SSDs in a mirror as a special vdev.
Pros:
- You pay nothing for licenses
- Easier to get more RAM
- Fewer PCIe lane shenanigans
- Best software for the tasks. Proxmox is a great hypervisor, TrueNAS is a great NAS. Unraid is fine for both.
Cons:
- Power consumption
In many cases, two systems also can be cheaper than buying one big system.
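A rough sketch of the two pool layouts described above. Device and pool names are hypothetical, and each command is echoed instead of executed so nothing gets destroyed:

```shell
# Proxmox box: fast mirrored NVMe pool for VMs and containers.
FAST=fastpool
echo zpool create "$FAST" mirror /dev/nvme0n1 /dev/nvme1n1

# TrueNAS box: RAIDZ2 of HDDs plus a mirrored pair of cheap 2.5" SSDs
# as a special vdev, so metadata lives on flash.
BIG=bigpool
echo zpool create "$BIG" raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
echo zpool add "$BIG" special mirror /dev/sdg /dev/sdh
# Optionally push small records onto the special vdev as well:
echo zfs set special_small_blocks=64K "$BIG"
```

Keep the special vdev mirrored: losing it loses the whole pool.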
I’d add complexity to the “cons” section. I can understand the justification for that, and have been using that pattern, more or less, for the last decade. I’m just trying to simplify my system at the moment. But that’s just me.
I agree with this mostly, but I don't find Proxmox or TrueNAS to be superior in any way for what I'm doing. Proxmox just annoyed the hell out of me with their UI choices, and TrueNAS hasn't figured out what it wants to be when it grows up.
I have like 5 machines running various systems at this point though, so don't feel like my flavor of autism is the best
Only if you’re using high-powered components. Temps were still in the low 60s, compared to low 50s without the mesh and cleaning / maintenance was much more of a breeze (just take off the mesh and dust it off, instead of cleaning the fan blades).
Given that I agree, I don’t think it’s unpopular.
At this point for consumer electronics, I gave up on the idea of PCI-E lanes and also went full retard into networked services (one system can handle the storage and another the workloads and maybe a small one for minimal mandatory services, like dns and ntp / tai64).
Only if you’re not building stuff from scratch. Fast switches will add to the cost pretty significantly, but shouldn’t increase power consumption by too much (which is what I’m mostly focusing on), compared to building a really powerful hyper-converged system (I didn’t think I’d ever use this term in a homelab with a single server, but here I am).
I think it's important to realize the homelab is an ever-moving goalpost. As soon as you get it how you think you want it, that's usually when you start tinkering with something else. The best you can hope for is that you can manage the monster you create long term.
We're getting into the weeds on what we all have learned through our own suffering, and everyone's path was slightly different. The real answer is to FAFO, because that's where the fun is.
Yes and no. I for one even run OPNsense on a small SFF box, since in my opinion virtualizing that adds way more complexity than adding another case. I would also argue that running Unraid is more complex than running my two systems.
Nope. I switched from those very tight drive caddies to a Fractal case and saw HDD temps drop from 55°C to 33°C under load.
Dust on the other hand you can just blow off once a year and be fine.
That used to be me. Then I switched to Proxmox and TrueNAS, added a small UPS and a special vdev for the RAIDZ2 pool, and switched to normal, quieter desktop cases. Now I sometimes wish something would go wrong. The system is near perfect.
I will soon create an offsite PBS, just because I am bored
This is one area I wish to improve as well. I haven't found anyone I can colo a box with yet… but soon™
Off-topic: backup
Set up Tailscale/Headscale with one of your friends and use client-side encryption. Intermit.tech did a good tutorial years ago on minio + restic, but you can just run restic against NFS over your WireGuard tunnel. That way, you don't have to trust whoever is hosting your data not to read it.
Or you could use restic locally against a backup server running ZFS, then zfs send to your friend's house (that way, the remote backup server can run ZFS without encryption or compression, saving CPU cycles, because restic takes care of both locally; or you could enable compression at the pool level and disable it on the restic repo). My backup server is an ODROID HC4 running nothing but NFS (as a restic repo). I can ship it anywhere (after setting up some VPN on it), and if my house burns down, I can go grab my backups from there. The likelihood that your whole town would burn down or get hurricane'd is probably not that high (unless you live in Florida or something).
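The restic-then-zfs-send flow could look roughly like this. Paths, hostnames and dataset names are all hypothetical, and the commands are echoed rather than run:

```shell
# Client side: back up into a restic repo on the NFS-exported backup box.
REPO=/mnt/backup/restic
export RESTIC_PASSWORD_FILE="$HOME/.restic-pass"
echo restic -r "$REPO" init
echo restic -r "$REPO" backup /home /etc

# Backup server side: snapshot the dataset behind the repo and ship the
# increment to the friend's machine over the VPN.
echo zfs snapshot tank/restic@today
echo zfs send -i tank/restic@prev tank/restic@today \| ssh friend zfs recv tank/restic
```

Since restic already encrypts and deduplicates, the remote end only ever sees ciphertext blobs.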
lacking necessary component
and I’ve deployed tailscale…
I’ve read through some of these posts and would like to add my 2 cents:
Support for DDR5 ECC UDIMMs is abysmal: edac-util never finds any memory controllers, so you’re left to monitor these errors via your BMC UI, if it even supports that.
DDR5 ECC RDIMM support, by contrast, is quite good across the board: Proxmox, TrueNAS and, if I remember correctly, Ubuntu/Debian in general.
However: from all I know, it only makes sense in combination with a proper ZFS setup, which allows errors to be detected while copying. I am still not entirely convinced what benefits full ECC support yields, but since I went with an EPYC 8024P system, I figured I should be getting it.
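A quick way to see whether the kernel registered an ECC-capable memory controller at all (these are the standard EDAC sysfs paths; on boards where DDR5 UDIMM ECC isn't wired up, the directory simply won't exist):

```shell
# Check for a registered EDAC memory controller and dump error counts.
if [ -d /sys/devices/system/edac/mc/mc0 ]; then
    # Per-DIMM corrected/uncorrected error counts, if edac-util is installed.
    command -v edac-util >/dev/null && edac-util --report=full
    cat /sys/devices/system/edac/mc/mc0/ce_count
    status="EDAC controller present"
else
    status="no EDAC memory controller registered"
fi
echo "$status"
```

If the `else` branch fires, ECC reporting to the OS isn't happening and the BMC is your only window into it.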
Man, I really want the Jonsbo N5.
https://www.newegg.com/black-jonsbo-n-series-e-atx-full-tower-case/p/2AM-006A-000F6
Taking just a bit too long to release, though.
You might as well run proxmox bro
SCALE's still a dumpster fire
Lemme check it.
Reading that, they've actually fixed the networking complaints he had since that was written. It's much easier now. If that guy is only complaining about networking, we gucci.
He's complaining about the entire workflow. Says that maintenance is awful and ease of setup is wildly annoying and quirky on TrueNAS SCALE
K8s is a nightmare and every time I have to touch it I hate it more
I kinda have a hard time trusting his judgment on it.
I’ve been running TrueNAS for all my mission-critical stuff on the framework since September. It’s been working fine, and I’ve even been rocking RC2 for a bit.
It might be his experience on TrueNAS sending him down that tube. Also, you work with it, so you know how to get around those things.
Just keep in mind your use case seems better tuned to a hypervisor and tons of VMs with a storage backend than to a NAS, and Proxmox is a hypervisor. A damn good one. TrueNAS is a NAS product… a damn good one, but they aren't the same
SCALE is a dumpster fire for the stupid stuff that was only introduced because of cocked-up management: containers, Docker, VMs.
For a NAS, it is great
There are no issues at all?
The Corsair memory you’ve listed (CMK64GX5M2B5200C40) runs out of spec voltage-wise. It makes no sense to get memory that doesn’t run at 1.1V at these speeds.
fwiw, I would also run FreeBSD myself