Escaping the sprawl (rearchitecting homeprod)

[Note: This started out in Build a PC for build help, but devolved into a blog about the journey somewhere along the way.]


Alternate title: “It’s warmer and messier than Lain’s bedroom in here, and none of it is actually doing what I want it to do.”

I’m at a weird crossroads: if I could merge two machines, I’d have a good-enough daily driver - but due to form factors (and a lack of Thunderbolt for an eGPU dock), that’s not possible. One is big, heavy, hot, and old with a side of fresh makeup (big, recent-ish GPU and SSD); the other is a laptop without enough GPU to drive my monitor, and it screams all the while when I ask it to try. The other wrinkle is that the desktop is a Win10 box, and since it’s X99 it has no path to Win11 - the clock is ticking to either repurpose it or get rid of it one way or another. The laptop is already on Win11, since it came that way.

I’ve also had a recent GIGO incident with my backup scheme (Active Backup for Business on a Synology 1821) - the bitrotten file was originally from 2017, the backup strategy’s host is from 2021, so every copy is rotten. So it goes; not syno’s fault. However, that prompted some additional maintenance there. While running updates on the syno, I come to find that the gear I explicitly went out of my way to research and buy off their recommended hardware list is no longer on that list, and now the machine sasses me about it. Which practically means that while the device still has a year of warranty, it has no support (unless I play games with removing devices before calling in, should I need to). Needless to say, not a fan.

But, I’m sitting here looking at this stack of laptops, the desktop, and the spare parts left over from upgrading them, and I’m left wondering if there isn’t a better way…

Goals
  • Unhook from dependence on Synology’s random whims about what hardware they’re willing to put up with (cause, again, I bought stuff off their list, and they’ve changed their mind even within the device’s warranty period, never mind the longer-term software support window). My actual data size, inclusive of backups and the snapshots thereof, is currently about 5TB. I don’t have a good measurement of growth rate, since most photo/video work stopped in 2020 and those were the drivers of growth – the old logs have since fallen off. Just off folder sizes, maybe 10-15GB/yr from that.
  • (Ideally?*) Split home server services off from backup storage, other than backups of the compose files and whatnot. Right now they cohabitate on the same hardware. This is things like the media server and content, pihole, and a lancache. Some of this doesn’t need to be backed up at all (easy example: lancache’s cache files). The * here is that it’s yet another box to spec/build/power, so maybe this isn’t ideal?
  • In the process, I’d love it if I could make the boxes smaller and/or easier to carry (cue obligatory ‘kallax-sized’ meme here, but that is roughly the dimensions i’d like to stay within if I can). This would be 13x13x13 in / 330 x 330 x 330mm.
  • I’d also like to give Linux a real shot at being my primary OS. I’ve used Ubuntu before, and dabbled with a few others, but I’ve never thrown ‘real’ hardware at it to give it a realistic chance as my daily driver. I’ll probably be using Pop this time around. My main game (FFXIV) and the rest of the apps I use are either cross-platform (Steam/GOG with or without Proton et al., DaVinci Resolve, LibreOffice, Mozilla stuff, etc.), or things I can live without / find alternatives for. Worst case, I’d still have the Windows laptop.
  • Gaming wise, I like max pretty settings. The big GPU in the desktop was intended for Cyberpunk, as I have a 3440x1440@60Hz ultrawide monitor (and the GPU was bought with 4K rather than 1440p gaming in mind, to ensure headroom for raytracing and whatnot; not that I find raytracing essential after The Experience That Should’ve Sold It To Me). I also have very minimal experience with monitors north of 60Hz, but am 120-144Hz curious from the few times I’ve used my laptop’s 1080p@120Hz screen as a primary - it’s just too small for the day-to-day. However, on my laptop, using the ‘laptop max pretty’ default settings in FFXIV, I’m seeing frame dips below 60fps on the ultrawide during boss encounters heavy on particles, and there’s a big graphics refresh coming to that game in the next expansion (probs 8 months away) that will probably make that worse. I also get memory artifacting if I use ReShade when taking screenshots on the ultrawide, due to lack of VRAM on the laptop. The desktop is ‘fine’, albeit noticeably slower to load, and it occasionally suffers the usual big-MMO-city stuttering problems. Haven’t pulled out PerfMon to find out why, since it’s a non-critical zone.
  • Editing wise: I mainly use GIMP and DaVinci Resolve at the moment. It’s not for dollarydoos, though, just hobby things - so if a render takes a little longer to bash its brains out while I’m sleeping anyway, so be it.
Kits to bash from (hardware summary)
  • Desktop: Intel 5820K, Asus Sabertooth X99 (PCIe with a 28-lane CPU runs x16/x8/x4, no bifurcation, and the x4 is shared with the M.2 slot; 10 SATA ports), 4x4GB RAM, 512GB Samsung 950 Pro M.2, 4TB PCIe 3.0 x8 add-in card (WD AN1500), EVGA RTX 3080, 4x 4TB WD Red (pre-SMRgate EFRX) HDDs, Silverstone Fortress FT02 (4 empty 5.25" bays; the 5th bay is a combo slim-optical + 4x 2.5" non-hotswap bay adapter, with a BD-ROM and no populated 2.5" bays)
  • Laptop: Ryzen 5600H, 2x32GB RAM, 2.5TB of SSD, RTX 3060. Originally came with 1x 8GB RAM stick, currently unused.
  • Backup box: DS1821+, 32GB ECC RAM, currently 4x4TB IronWolf Pros in RAID 10 (primary volume for Docker and backups), 2x 1TB IronWolf SSDs for cache / metadata pinning. Hasn’t gone north of 40% CPU utilization in the last year’s worth of logs I have, even with the extra services I’m running in Docker; it usually just sits idle.
  • Other misc hardware: 4x4TB IronWolf Pros sitting unused in an Atom-C2000-victim DS415+; 4x14TB Exos X14s doing chia things for the moment to justify having them spun up (they used to live in the DS415, now in the 1821; these are refurbs, so it was a decent enough excuse to see how quickly they’d die. I’ve swapped them in and out of the primary RAID array once or twice to beat them up that way too - so far so good - but I still don’t trust them enough to expand the array to that size, since every other drive I have is 4TB or smaller); and a couple of ‘spare’ SSDs (512GB, 2TB) sitting in old laptops as their current boot drives (the 512GB is in a 1st-gen Intel Core box, the other in a Sandy Bridge Dell XFR laptop - these are rarely if ever used anymore since being superseded by the Ryzen box, so waste not want not?). Also an old Chenming/Chieftec Dragon case (normal ATX thing, but 6 internal 3.5" bays and a few 5.25" bays) with some very old and dead hardware in it - but hey, a case’s a case.
  • Peripherals of note: current monitor is 3440x1440@60Hz; home network is currently only gigabit wired and 802.11ac (but the only big internal data movement at the moment is game reinstalls from the lancache).
Preferences
  • I am vaaaaguely partial to an all-AMD build, mostly for the 3D V-Cache and maybe a smidge of excitement about SAM/ReBAR - probably a case of a little knowledge building an outsized mental box around the extra cache size. On the other hand, the LGA1700 CPUs seem to benchmark better in my main game, and have a core count that ‘feels like an upgrade’ over the 6-core/12-thread boxes I have. They also usually seem to have boards with a better selection of peripheral options (and/or are more likely to not be stuck with trash-tier Realtek NICs that don’t even work properly in Windows, never mind any other OS). But to be clear: I don’t superfan for team $color; it’s always been a point-in-time price/performance/value choice for me.
  • I don’t overclock anymore (beyond ‘set XMP and go’), cause I’d rather it just work so I can get on with what I actually want to do with the computer - running applications. I’m also hoping to knock down how much heat it puts out, and how much power all of this is using (i.e. yes, I might be an Eco Mode heretic, if the opportunity should present itself) – but heck, even an i3-13100 is an uplift over a 5820K, with less than half the TDP to dissipate into the room.
  • I prefer air cooling (fewer things to fail, less disastrous results when one does), and I am probably the target market for those thermal sheets rather than thermal paste, because I never pull the cooler after the initial install to repaste unless there’s a severe / runaway temperature issue. Not saying it’s a virtue, just declaring a vice. At least I clean the dust filters somewhat more frequently. :slight_smile:
  • I’ve had an unscratched itch for a miniITX build since the ncase m1 and dan a4 came out. I even bought a dan a4 at one point, and was ready to build in it about the time the ryzen 5000s and rtx30-series came out. But with the GPUs getting thicker, I ended up selling it off instead. Missed opportunities…
  • I do like hot swap in a server / appliance, but it’s not critical. Just a strong preference. Bonus points for toolless hot swap.
Potential options?
  • Build a new desktop in the FT02, convert the old one into a TrueNAS box or similar, and shove it in the old Dragon case. This probably involves a new CPU cooler due to case height limits, a couple of 3x 3.5" to 2x 5.25" bay adapters, and an HBA for the X99 rig. Rough ballpark of the new desktop would be Part List - Intel Core i5-12600K, GeForce RTX 3080 10GB, Silverstone FT02B ATX Mid Tower - PCPartPicker. Pros: both cases are pretty decent at cooling things. Downside: one of them is/remains big and heavy. The Dragon is the aluminum variant, so not as heavy as the steel ones were - but still big.
  • Build something as a condensed version of the current desktop (>= 6 cores, 32GB RAM, room for 4 spinning-rust boxes). This is probably a Node 304 build, like this: Part List - Intel Core i5-12600K, Fractal Design Node 304 Mini ITX Tower - PCPartPicker. Placeholder GPU for price, because the 3080 I have won’t fit in a 2-and-a-bit-slot enclosure; probably waiting for an RX 7700 to come out Soon™. The alternate option in this condensed idea would be upscaling to a Node 804 / microATX build. In either event, the current desktop would get converted over to TrueNAS duty: HBA, 3.5"-to-5.25" drive bay adapters, still a tank of a thing to move – more so, even, as fully kitted out it’d be capable of holding 10 HDDs and 6x 2.5" SSDs plus the rest of it. Maybe a stretch goal of swapping in a v4 Xeon and loading it up with >=512GB RAM, but that’s just technolust talking.
  • Just build another big desktop, and repurpose the x99 build into a home server. Example desktop build: Part List - AMD Ryzen 7 7800X3D, Radeon RX 6700 XT, Fractal Design Torrent Compact ATX Mid Tower - PCPartPicker
  • Grab a couple of Seeed reServers and repurpose those 14TB drives into mirrors, then yank the cache drives from the Synology and the RAM from the Ryzen laptop to stand them up - one set to each. Set up one as the home server (and/or client-device backup target?), the other as the backup target for the first. Then get another 8GB stick of RAM for the laptop, and a large-ish 1080p@120-ish or 1440p@120-ish monitor that the laptop can actually drive without as much protestation. Upside? Since we can’t get the ECC reServers anyway, and with how little CPU time I spend on my NAS, it’s a lot easier to just pick the least expensive one (on sale at the time of this post, no less). It would definitely tick the small, power-frugal, and big-enough (for my needs) storage-density boxes. The downside, given the recent rot experience, is the lack of ECC, plus being fully dependent on the refurb drives. It also still leaves me in line for another daily-driver upgrade, though it would buy time / sanity (not all fan noises are annoying, but laptop fans certainly are for me) ’til the main hardware requirement changes.
  • An alternate flavour of the reServer idea is a Mini or Mini X from iXsystems and a DeskMeet, or a DeskMeet and a strictly-storage-box Node 304 build, but the same general setup. I would consider a NUC or something like the MeLE Quieter3C for the home server side, given how little pressure I’ve put on the Ryzen in my Synology while it’s pulled double duty, but local storage capacity would be too small for the backup and/or media data sets at a reasonable price, and/or would limit reuse of the existing kerfuffle of drives I have (or push them into a USB enclosure, at which point: Yet Another Box and Yet More Cables, with a side of USB problems). Mostly the same downsides, though the storage layer would at least have ECC again. I also looked at a few of the Silverstone NAS cases (8-bay, 5-bay, and/or the 8-bay 2.5" ones) but couldn’t find a motherboard that would fit, had ECC support, and had enough ports or slots to let all the bays work while leaving room for future network-speed upgrades - or that wasn’t more expensive than just buying from iXsystems (the motherboards that do work start at ~$400, before sorting out CPU, PSU, and HBA or network depending on the board). But the benefit here is I could go with TrueNAS CORE for storage and let it do what it’s great at, then have the other box run Proxmox or whatever and host services.
Other considerations and questions
  • After all the teething issues I’ve dealt with on X99 (had the build since 2015), I’m somewhat leery of early-adopting another platform, memory, and/or storage standard. All these “will this or that memory capacity work?” discussions about AM5 remind me of the problems X99’s upgrade paths have because the memory controller is so bad on a 5820K. But if I follow that logic to its extreme, I should be grabbing a new AM4 setup, since DDR4 is cheap and I have never really been in the habit of updating CPUs in place - I just assume I’ll have to get a new motherboard like I always have had to anyway (obvs I haven’t updated the desktop in 8 years beyond replacing GPUs a couple of times; the NAS had priority, since I knew about the Atom bug and got lucky it waited to die until after the 1821 came online). IOW: given my starting point, is it sane to sit on the sidelines this gen, buy the good stuff of last gen, and just wait for DDR5 and the new motherboard platforms to get cheaper - and hope we stop finding new ways to set PCs on fire (whether that’s riser cables, power supplies, power cables on GPUs, or burning out CPUs with crap BIOS updates)?

  • Separately: am I thinking about backup strategy and tiering entirely wrongly, duplicating data and/or devices unnecessarily by having it hop clients -> home server (as backup for the clients) -> ‘master’ backup target server for that server? Should I instead just treat the home server as another client? Or should I have one box for storage in the general sense and cohabitate backups with single copies of nonessential/replaceable data there? IDK - part of my brain says ‘compute separate from storage, but have two distinct sets of storage for backups’, but then there’s the part of my brain that knows HCI is a thing, too. The current setup is just clients -> backup to NAS, with the NAS hosting that plus internal services right now, because it’s always on and (relatively) low power draw - it does have more bays than I actually need, though. Offsite backups are something I’m exploring on the side, so not currently part of this discussion – but maybe that’s the easy answer (i.e. back up clients to the NAS, + offsite as the separate copy)?

  • Am I missing some weird gotchas about, say, which GPU vendor to go with on Linux? As far as I know, AMD is the usual recommendation for Linux? But I think Resolve and Plex hardware acceleration want Intel or Nvidia, and FFXIV generally seems to bench better on Nvidia GPUs and Intel CPUs at the moment - and doesn’t care about the extra cache in the X3D parts.

Sorry for the wall of text, been sitting on this post a couple of months trying to organize my thoughts on this. But any input would be greatly appreciated, as I feel like I’m well and truly lost in the weeds right now (and/or struggling with rampant technolust / gear acquisition syndrome).

TLDR: The two current daily-driver candidates either don’t meet the needs my setup imposes on them and scream about it, or are old, run hot, and are very thirsty for watts. I’m also trying to reconfigure the home server / backup target due to a poor experience with the current device’s vendor. Looking for input on options for a Linux gaming desktop (for 3440x1440@60) + home server (backup target and media serving), with an eye towards TCO (power draw/heat output) and noise, since the machines and I have to share a small room. Bonus points if it can be kitbashed out of currently-available machines/parts. Budget for now is ~$2k, but there’s wiggle room if there needs to be.


Synology: Interesting HW was removed from the qualified list. What was it? I might want to note that kind of thing in future reviews.

So the DS1821, especially with the upgrades, should resell pretty well; you could even resell it here, as they aren’t necessarily bad devices.

Silverstone Fortress FT02

This could make a good home server platform? You can get a suuuper cheap disk hotswap enclosure that fits in the 5.25" bays.

Alternatives:
Cheap mini PC plus a 4-bay USB SATA enclosure? Sounds like heresy, I know, but I’ve got a video coming up wherein a mini PC like the Minisforum 5600H uses its internal SATA bay for steam cache and ephemeral storage, with ye olde RAIDZ1 or a striped mirror in the 4 bays for everything else. ZFS datasets are a good fit for your “important” and “not as important” data, where you can ship the “important” dataset to another machine somewhere else, or to encrypted online backups with Backblaze, rsync.net, etc.
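A minimal sketch of that split, assuming plain OpenZFS underneath (the pool/dataset names, disks, and backup host are all placeholders):

```bash
# One pool on the 4-bay enclosure; striped mirrors keep resilvers short
zpool create tank mirror sda sdb mirror sdc sdd

# Split datasets by backup policy, not by folder convenience
zfs create tank/important    # photos, documents: snapshotted and replicated
zfs create tank/scratch      # lancache, downloads: replaceable, never backed up

# Snapshot and ship only the important dataset to another box
zfs snapshot tank/important@nightly-2023-06-01
zfs send tank/important@nightly-2023-06-01 | ssh backupbox zfs receive backuppool/important
```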

ZFS pools are easy to export and import, so it’s no big deal that it’s not all in one chassis… they DO have a VESA mount bracket, and with some creativity you can just screw the Minisforum machine right to the disk bay box.

This is comparable power usage to your Synology, but way, way more CPU horsepower and way more expandability/future-proofing.

For your desktop PC, an ordinary microATX AM5 build sounds in order?

But the X99 system, if you don’t mind the heat and noise, would be fine for the home server role. ZFS and well-organized datasets will help with the bitrot and the other stuff you’ve outlined.


Synology: Interesting HW was removed from the qualified list. What was it? I might want to note that kind of thing in future reviews.

The SSDs it was complaining about were Seagate IronWolf 510 SSDs in the ~1TB variant. But you raise a fair point, so I re-checked the compatibility list (still not there), and in the change logs at https://www.synology.com/en-global/compatibility?search_by=products&model=DS1821%2B&category=m2_ssd_internal I’m not seeing any non-Synology SSDs on the list, nor any removals in the changelog. But when I double-checked in DSM (7.1) just now, I couldn’t find the warnings for those - nor for the refurb 14TB drives it was complaining about not being on the list - in the Storage Manager overview, storage pool, or HDD/SSD pages. So I checked that list too for the Exos X14 14TB ST14000NM000G-2KG103s, and those aren’t listed either (https://www.synology.com/en-global/compatibility?search_by=products&model=DS1821%2B&category=hdds_no_ssd_trim&filter_brand=Seagate&filter_size=14TB&filter_type=3.5"+SATA+HDD&display_brand=other).

The system did just install an update earlier this week, so… now I’m not certain whether it was a bad update previously, whether this is the bad update since the unsupported drives aren’t complaining, or whether the support site is gaslighting me in the usual ‘we only tell you what is supported’ way (i.e. saving themselves from blowback from whomever got pulled off the list, in a legal sense). The release notes on the update don’t mention anything about hardware-list changes, either. And due to the way their compatibility-list page is designed, the Wayback Machine doesn’t have it either, as it apparently can’t scrape the table without inputs on the drop-downs. And of course the default system logs on Synologys… Synologies? anyway… are only kept for 30 days before going to the bitbucket, so I can’t prove what happened in the past (changed that setting now, obvs).

So, I tried to force it. Unmounted the SSD cache and built a new one, since it’d always pop the nag screen during creation of a cache or volume. Sure enough, it still does. These UI inconsistencies are frustrating (whee, there went 2 hrs chasing this around in circles in the UI and their docs), and definitely not helping with the ground-shifting-under-my-proverbial-feet feeling.

OTOH: I agree, the device itself is excellent; the software’s been mostly good for what I’ve needed it to do since the original DS415+ I picked up back then (on DSM 6), and when I cross-compared the 1821+ against an iXsystems box of similar bay count, it consumes far less power (at least by listed specs). That’s probably the real frustration here - it’s awfully close to perfect, but the company’s pivoting in weird ways, and now I’m concerned about updates almost in a “you’ll pry my cold dead fingers off of my XP, maybe” sort of way. Which… doesn’t strike me as a healthy place to be with what is supposed to be helping me sleep better at night. Even though I completely get the supportability and semi-guaranteed-performance angles of what they’re pivoting towards with the hardware list and house-brand stuff.

FT02 as a server platform

It definitely could. As it sits, 3 of the 5 internal 3.5" bays have the SAS hotswap backplane add-ons, though one of the sleds sticks and can’t really hotswap. For that one I was thinking about grabbing a 2x 2.5" to 3.5" adapter, so I could still easily swap SSDs out of the other side of the case in that bay. Throw a couple of hotswap enclosures in the 5.25" bays, grab an LSI 9207-8i and some cables, and between that and the motherboard’s 10 SATA ports is how I got to the 10x 3.5" and 6x 2.5" numbers (minus one port for the BD-ROM; one would remain free). So for this one: ~$300 and an afternoon installing some bay devices and setting up TrueNAS. It would leave me with six spare 3.5" drives for swapping around when things die, and I’d still have the 4TB PCIe AIC SSD for ephemeral storage… Hm. It does still leave it on my desk, though, so +$cost for a better place to keep it (or I should probably stop raising/lowering the desk while it’s on, which it always would be).

Mini PC + USB enclosure

Honestly, the first thing that came to mind when this all started was something like a ZimaBoard 832 and a pair of 8-bay USB enclosures. The turn-off on the ZimaBoard option was the Realtek NICs, since I was going to use the M.2 adapter card in the PCIe slot and scavenge the SSDs out of the Synology. I thought USB enclosures were a no-no because they didn’t pass SMART data through, though? Though I only came across the enclosures via chia things, and it’s very “if it dies, it dies - RMA it” in that space, since the data’s as made up as the parking spaces at $retailPlace in a snowstorm on Black Friday.

Jokes aside, good tip on the Minisforum (UM560, I’m assuming?). That is a ton more performance than a ZimaBoard, a MeLE Quieter, and pretty much every TinyMiniMicro in that price bracket.

So… rough math here is probably ~$700 in compute and disk enclosures if I wanted to put all the disks into service right now (yay, technolust again…? otherwise it’s ~$400 for 4 bays and ~$500 for 8). Upside is it would swallow the entire fleet of disks, and probably cost less to run doing so than the prior 2 NAS + desktop did. 2.5GbE for whenever the network upgrade happens (Soon™, but it’s like 4th down the list…). It would also get all of the storage off of my desk, cause I’m sure standing-desk jostle does wonders for the spinning rust. Downside is…? Um. Guessing out loud: limited upgrade paths whenever that day comes (basically have to toss the compute every time), at least if it’s kitted out on RAM and SSDs from the get-go? But on the other hand it seems like ZFS/TrueNAS doesn’t really care, so long as you can feed in the pool config and show it the drives? And odds are that day is a long, long way down the road (i.e. the upgrade itch will strike first).


Otherwise: noted on the gaming build, and it seems I’ve got some (more) reading to do on ZFS/TrueNAS pools and dataset setup, plus digging farther into client-device backup options. Active Backup for Business was the main syno-specific thing I was using. Everything else was Docker, so that’s super easy to stand back up again. Or it is once Portainer’s installed, anyway.

The disk mix is so eclectic I’d probably just use one 4-bay disk enclosure. The SSD enclosure is the new USB 10Gbps standard that is SCSI-like, and I’m getting SMART data. I’m not sure it’s out yet? I need to check Taobao and AliExpress to see if I can find it. There is a name you’ve heard of planning to release one based on the same chipset, but that’s not announced yet.
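If you want to check what a given bridge passes through, smartctl can usually tell you directly (the device path is a placeholder):

```bash
# -d test just reports what device type smartctl detects behind the bridge
smartctl -d test /dev/sdX

# Most decent UAS-capable bridges support SAT (SCSI-to-ATA translation),
# which is what lets full SMART data through
smartctl -d sat -a /dev/sdX
```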

Your old system as server, tho… just run a network cable from it? You don’t even need a switch - a direct 2.5Gbps/5Gbps/10Gbps cable between the two works (no special crossover cable needed; modern NICs sort that out themselves). Stick it in a closet, bathroom, etc., some out-of-the-way place away from everything. Especially if you get a dual-NIC motherboard: one NIC for your LAN and one “high speed” NIC to the NAS, and done?

That way it doesn’t have to be on your desk.
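The point-to-point bit is just static addressing on both ends - a sketch, assuming Linux on both, with interface names and addresses as placeholders:

```bash
# On the desktop/server end (the NIC dedicated to the NAS link)
ip addr add 10.99.0.1/30 dev enp5s0
ip link set enp5s0 up

# On the NAS end
ip addr add 10.99.0.2/30 dev enp1s0
ip link set enp1s0 up

# Mounts/backups then target 10.99.0.2 directly; the LAN NIC is untouched
ping -c3 10.99.0.2
```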

Color me curious on this new enclosure, but sounds like it’s as Soon™ as the midrange RX 7000 cards.

Seems like the sensible order of ops is: build the new gaming rig, then port the old one to a TrueNAS box with what’s in it right now. The X99 box already has dual 1GbE NICs - but one’s a Realtek (and this is where my personal anti-Realtek-NIC vitriol comes from) - and the 5.25" bay devices could even wait, since it’s already got 4 drives of the same capacity as what I’m actually using in the Synology. Then just re-evaluate once the enclosure’s available.


With that in mind, sanity check on gaming build?

The one area I’m particularly perplexed by is PSU selection these days. No jonnyGURU reviews anymore, and Corsair doesn’t get a free pass just because he works there now. Historically (past builds) it was PC P&C, then Corsair, then Seasonic… but PC P&C got bought and then disappeared, Corsair’s recalled a few lines in the last couple of years, and Seasonic’s had their weird issues with overcurrent protection and the RTX 30-series.


I think Sabrent has some branding deal with them in the US. … or maybe it’s only “IcyBox” over there.

(Personally, I like these cause you can daisy-chain them, … and you don’t lose a port)


BTW I wouldn’t do a ZimaBoard; I’d grab a refurbed “HP Mini” with a somewhat older but “fattier” i5 inside, and would stick to gigabit for the “mostly cold backups” case. They cost about the same as a ZimaBoard, but… the network will be slower, and… most likely, probably, you won’t care at all.

Minisforum stuff is if you want to spend more to get more (speed, more modern stuff, more capacity).

cost/value/port is not going to be great if you don’t need tons of storage.

Yeah, that definitely looks like what would be a Sabrent enclosure stateside. Agreed on liking the daisy-chain option; I also like the internal power supply (rather than a wall wart or brick, though it’s probably easier to find replacement bricks down the line), and from poking around at reviews it does seem like it may make the SMART data available. Fan noise seems to be a recurring concern in the reviews, though it’s inconsistently characterized (i.e. ‘they’re loud once they get going’ vs. ‘the HDDs will be louder’), so it’s probably fine. There’s also apparently some weirdness with enclosure power vs. individual drive-bay power, and no way to tie either to the host system’s power state. That part could be a dealbreaker, but it’s probably just an order-of-operations startup ritual after shutdowns happen for whatever reason (i.e. enclosure, then drive bays, then compute, probably).

The other option I can find for USB Type-C 3.1 Gen 2 / 10Gbps is this Yottamaster, which is literally a wash in every way vs. the Sabrent except whether you prefer an 80mm or a 120mm fan, or if there’s a preference in looks. They’re priced exactly the same.

Every other 5-bay option I’m finding is the older 5Gbps enclosures, which are maybe $150 instead of $280.

On the compute side… ZimaBoards and MeLE Quieters are ~$200 but have limited upgrade options; HP Mini / Dell Micro / Lenovo Tiny units in the $200-270 range seem to be Skylake / Intel 6000-series; the two lower specs of the Seeed reServer are on sale right now at ~$270; and then there’s the Minisforum on sale for ~$270. But for the reServer and Minisforum there’s at least another $100 in RAM (or maybe a bit less if I scavenge the DIMMs out of my laptop and just grab another 8GB stick to put the laptop back to at least 16GB). Making it a fairer fight for the TinyMiniMicro (up to $370) lands in the Intel 8000-series chips - not that that’s entirely necessary for what will be a mostly dumb storage box whose most arduous task is occasionally serving out content via Plex.


So, 5-bay options seem to be:

  • $650-ish to DIY with a 5-bay DAS and whatever head unit (no ECC, usually 1 NIC - 2.5GbE or 1GbE depending on choices)
  • (example purely if anybody else is using this legwork for their own choices) $699 for a DS1522+ (Ryzen R1600, 8GB ECC, 4x 1GbE). Similar prices for QNAP and terrormasters (which TrueNAS could be thrown on, if the software is as absolutely atrocious as reputed).
  • $1100 for an iXsystems Mini X (the 4x 1GbE one with an Atom CPU, 16GB ECC, and some 2.5" hotswap bays). This one should Just Work for quite a while, though, since it seems to be built on a good foundation (at least from looking at the underlying hardware). Nearly double the price, which makes it a tougher sell.

Which… fair point on cost/bay; I hadn’t thought of it that way. So ~$130/bay for DIY with the DAS, ~$140/bay for the appliances, and ~$220/bay for the iXsystems box.

Comparing with updating the current desktop, it’s more like $33/bay for the hotswap enclosures, though the motherboard requires a GPU or won’t finish POST, so that will drag it up a little. OTOH, for just 5 bays no enclosures are needed - just plug stuff in and go.

The other side is power usage. Bad, bad numbers to follow (I appear to have lost my Kill A Watt, so I can’t get a decent idle number on my 5820K desktop and am using a period review instead):

  • 5820K system: ~115W idle, plus 5x HDDs @ 10W idle each (that figure is high for my drives - about double the spec sheet - but I’m erring cautiously to cover some operation time too). So 165W here – roughly $150-175/yr to run at my power cost, as the floor; higher when I make it do more work for me.
  • The Minisforum: ~10W idle + the 50W in HDDs and DAS, if it scales linearly from the 10-bay unit’s review. 60W here as the floor, ~$50-65 to run every year.

So the new stuff would pay for itself in about six years, and the newer stuff is probably more likely to survive those six years than the already-8-year-old system(?), even at the ‘just plug stuff in’ pricing (and even in a place with relatively cheap power). Hm. :thinking: Tangentially, this is an interesting way to evaluate costs against (pick any cloud backup/storage provider). I’m sure I’m (several) years behind on this thought process… better late than never.
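For anyone re-running the napkin math with their own numbers (the $0.11/kWh rate below is a placeholder; plug in yours):

```bash
# annual cost = watts / 1000 * 8760 hours * rate ($/kWh)
rate=0.11                                            # placeholder rate
echo "old: $(echo "165/1000*8760*$rate" | bc -l)"    # ~159 USD/yr at 165W
echo "new: $(echo "60/1000*8760*$rate" | bc -l)"     # ~58 USD/yr at 60W
# delta is ~$100/yr, so ~$650 of new hardware breaks even in ~6.5 years
```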

You’re right that I won’t really notice network speed. As much as technolust wants 10GbE, the only data of any real size I’m currently moving around on the network is Steam game reinstalls. Faster is better, of course, but the lancache was already maxing out the CPU on the desktop and putting the laptop at ~70% CPU when I did my entirely unscientific ‘reinstall GTAV because it’s a stupid-huge install’ functionality test a while ago, so there’s kind of a threshold on the client-device side too. All the more so if clients are on wireless, as I’m only getting about 400-450Mbit even with the client in the same room as the AP with direct line of sight. Plus, going faster than gigabit would involve (at minimum) getting a new switch to support whatever flavor of faster it ends up being. Not that it’s off the table - I don’t like my networking gear at the moment either - but the daily driver and backup solutions come first, since they’ll set the requirements for the network gear :slightly_smiling_face:

I was looking at i5-7xxx / i5-8xxx / 16GB / 512GB SSD for 150-200 euro here; there are sometimes 6xxx-series units for 130-150… the 7000/8000 series are more interesting for Jellyfin. Maybe look in a few more places.

5-bay 10Gbps JBOD USB-C enclosures are coming and should be sub-$150 soon.

You don’t need RAID in the USB enclosure. Details soon.


I have been living off of Seasonic, which has been fine.

I also have an ancient computer powered by Thermaltake Munich.

I’ve never encountered that SSD brand, but overall it looks good to me.

Having a bit of a rethink, maybe, on the gaming rig. It comes from some poking around at the X3D chips for the primary game (details in here, but for me the main highlight is the same monitor resolution and a same-enough GPU to where I’d be starting from: Reddit - Dive into anything). For the chart, though…


For context, for those who don’t play XIV: the first four items from the left (the plazas and Universalis) are main cities (in order: the big hub city most people are in, the two newest-expansion cities, and the nowadays-usually-empty last-xpac city). Troia and Alzadaal’s are 4-player dungeons, the latter with more background items to render outside the field of play; both include 3 zone swaps / zone loads in the middle. Aglaia is a 24-player raid, and also includes 3 zone swaps and some trash mobs. All of the Savage raids are 8-player raids where you load into the boss fight and only fight the boss in an arena.

Looking at this, the cache seems to help round the edges off of loading things in from memory and/or storage in the places that do a lot of that (as expected), but the non-X3D chips average slightly higher (with more inconsistent lows) in a controlled ‘load 8 players’ worth of gear sets, load the zone, fight one boss for 12 minutes, done’ environment. When it’s good, it’s crazy good. But where it’s good is not where it usually actually matters for this game (save Aglaia, and maybe other massive-player-count areas – world bosses). Is it worth paying nearly twice what a 7600X costs…? Cue ‘futureproofing’ arguments here? Though for Cities: Skylines and Stellaris (which I dabble with every now and again, too, but rarely) it is better… so maybe the ‘all-rounder’ argument instead? OTOH, 7800X3Ds are cheaper than an inflation-adjusted 5820K, so…

The other side of this is my usual update cycle - or rather, that it’s been 8 years since the last gaming-machine build. But for a majority of those 8 years there really wasn’t much competition on the CPU side of things – even staying on Sandy Bridge was ‘fine’ if you had a SKU with enough cores. That was then, though, and there’s a core war going on right now. I’m wondering if the sane answer is going ‘cheap and cheerful’ and updating ‘big’ once the platforms stabilize (i.e. PCIe 5 has a use, DDR5 and/or memory-controller teething issues get sorted, etc.). Or put another way: buy an inexpensive motherboard and CPU and plan on upgrading again a lot sooner, or buy a ‘good’ motherboard and update the CPU in place (as was done with AM4 - though the early AM4 boards were marginal for the last of the AM4 CPUs…)?

There’s other assorted musing attached here, too, about whether finally going for an AIO makes sense if systems aren’t going to stay together as long anymore. That’s partially because I’m waffling between a Torrent Compact and a North right now. One is Big Air and can be had with plain side panels (and/or can have sound deadening added after the fact) but is a bit visually busy (Torrent Compact); the other is just Dang Pretty but a mixed bag on side-panel options (either a potential Very Bad Day With Glass Cleanup, or All My Noise Sins Reflected Off the Corners of the Room). And, sadly… why not a new build in the FT02? GPU compatibility :slightly_frowning_face: It was hard enough finding a 3080 that would fit in that case, never mind the behemoths since then. It’s really making me want to downgrade monitor resolution to get back to the sane side of GPUs (whether in price, performance required, or power use / cooler size).

Mildest of updates: the Minisforum box (UM560) and friends turned up today. The parts install was fairly straightforward (and I like upgrading ruggedized laptops for fun), other than the fiddly case feet (a foot left its adhesive strip behind on the case, so I had to carefully peel that up) and the extra-fiddly SATA connector (the mainboard side of it). Also discovered Ventoy in the process, which is so much easier than Rufus – super easy to build one USB stick for installing a bunch of different distros, instead of one stick for each, or having to rewrite the stick whenever you want a different installer.
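For anyone else who hasn’t met Ventoy: you flash the stick once, then installer ISOs are just files you copy onto it (the device path and ISO names below are placeholders - and the install wipes the stick, so triple-check the path):

```bash
# One-time: install Ventoy onto the USB stick (DESTROYS everything on it)
sudo sh Ventoy2Disk.sh -i /dev/sdX

# After that, just drop ISOs onto the stick's data partition; Ventoy
# presents a boot menu listing every ISO it finds
cp pop-os.iso debian-12-netinst.iso /mnt/ventoy/
```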

For having been NUC-curious since the first one came out, this thing is so cool. 6c/12t, 64GB RAM, the cheapest 128GB boot drive, and a 4TB SATA SSD (for lancache and such, since SATA still outruns 2.5GbE anyway), and right now it’s only using about 7W for all of that. The cable modem uses more… Crazy.

Still a bit stuck on the gaming build, but on a gander at the new Cyberpunk requirements, more certainly seems to be more. A little baffled at the way they’ve laid this out (a 7800X is the floor for 1080p@60? Really? No mention of 1440p at all? Pairing a 7800X with a 5700 XT, wat?), but that’s more that I don’t know why they’ve chosen those specs at those tiers - they’re nearly-nonsensical builds. Compared with the old requirements, the only real takeaway is ‘more RAM, more VRAM, faster storage’, which isn’t terribly surprising. RT’s not essential, but the 1440 ultrawide monitor trapping you into a higher-spec GPU is definitely a thing.

On the other hand, I finished that game’s main story line on day 2 with a 1660 Ti set ‘as pretty as it could’ (and didn’t realize it was running at 20fps, because everybody said the game was a buggy pile of nonsense, so I just assumed the chop was normal), so… Hmm. I don’t think $300 of monitor will take $300 off a GPU price, though…? At least not hopping to 2560x1440 (@ 120-144Hz or so)…


Touched on this in another thread, but I’ve been playing with TrueNAS SCALE a bit this weekend. Hit the issue outlined here on about the third step of setting up the environment (install, set up the first pool encrypted at the parent, then set up apps…). Short version: don’t set up your apps on an encrypted dataset right now, or you get into a bad state and will probably have to tear it out and start over.

Other landmines:

  • Plex setup took 5 hrs, as I fell on top of seemingly every single landmine that exists. On one hand, it’s all well documented at this point (once you know what to search for / have error text or behavior in hand), and I ended up learning a bunch about permissions management and the user-account translation between charts and the host. On the other hand: when the issue is that well documented, the user account and ID in the container/chart are known, and the permissions needed are known… automate the pain away?
  • Still don’t understand why Plex needs to be the owner of the entire dataset just to be able to read the media files for a library. Docker/Portainer could just pass :ro on a volume mount in a compose file (see the sketch after this list)…
  • Can’t have a dataset be both an SMB share and used by an app at the same time. Currently I have to shut one or the other service off to move files in and out, or to use them. There’s a box you can uncheck to avoid this, but it comes with a ‘Here Be Dragons’ / ‘this is unsupported’ clause on it. Given the apps-folder kerfuffle, I don’t really want to color outside the lines ’til I know what the consequences are.
  • (Not entirely unexpected, but documenting anyway): can’t assign a Ryzen’s iGPU to multiple charts (e.g. can’t give the iGPU to Plex and Tdarr at the same time). One or the other, only one at a time. Not the end of the world, since I can have the other machines with better GPUs churn the GoPro videos and just let the server host the control node.
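For contrast, the plain-Docker flavor of the read-only grumble above - a sketch with placeholder paths and IDs, using the official Plex image’s UID/GID environment variables:

```bash
# Media is mounted read-only, and Plex runs as a low-privilege user that
# only needs read permission on the dataset - no ownership required
docker run -d --name plex \
  -e PLEX_UID=1000 -e PLEX_GID=1000 \
  -v /mnt/tank/media:/media:ro \
  -v /mnt/tank/appdata/plex:/config \
  --network host \
  plexinc/pms-docker
```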

So, at least so far… I put ~5 hrs into it, tripped over all of these issues, and given the back-and-forth with the devs on the apps-folder issue, the wind is kind of out of the proverbial sails on committing to virtualizing or containerizing anything on this platform (even post-rebuild).

I’ll probably play around with the TrueCharts repo before I flatten it and start over, but… I’m kinda gravitating back towards a separate virtualization host and a separate TrueNAS CORE host, and just letting CORE do the storage stuff it’s great at. We’ll see - maybe TrueCharts is the path to the proverbial promised land, before wandering either entirely off-script or into the two-box config.


Throwing in some still-bad but better-than-nothing (maybe?) numbers: at least per HWiNFO64, the 5820K is pulling ~60W idle all by itself (+ another 25-30W from the 3080 idling, not that it’d be left in there for server use; also not counting motherboard, case fans, etc.). At full power(-ish; it was just 3DMark) it’s more like 130W through the CPU. Still not Kill-A-Watt accurate, but all of that + HDDs would put the ~115W idle from the review into the realm of correct-enough.

OTOH, my UPS was showing about that much (~110W) while the Minisforum, cable modem, and Synology were all booting at the same time, and ~90W with all of them idling (or at least as idle as the workload lets the syno be; there are definitely optimizations to be had there). Credit where it’s due: Synos are hard to beat on the power-sipping side of things. Cue more frustration about why they’re trying to screw their good thing up.

And, in the process of collecting numbers, I found that 2 of the 3 big 180mm AP181 fans have finally spun their last (or at least, the X99 Sabertooth can’t get them to spin anymore). Time to throw in the AP183s I’ve had on the shelf; those are PWM rather than DC, so… here’s hoping it’s not the fan headers on the mobo that have given up on life. And… I should probably finally get around to re-pasting that CPU, too.

So, the thread got updated about the encrypted-dataset-and-apps issue. Short term, they’re going to disable the warning – the issue only affects HA configurations and their related functionality. Long term (i.e. next major release), they’ll warn about encrypting the root dataset.

So, it seems like the ‘correct’ way to do it is to build a new parent dataset, let the apps do their thing in unencryptedville, and make an encrypted child dataset for everything else. I’m sure all the voices in my head screaming ‘encrypt everything at rest, encrypt everything on the wire’ are just professional vestiges and not things that actually matter for my use. But habits are habits, and I’m really perplexed about who the target audience for SCALE is now.
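Translated to commands, the layout they’re describing looks something like this (pool and dataset names are placeholders):

```bash
# Apps stay on an unencrypted dataset so the apps machinery behaves
zfs create tank/apps

# Everything else lives under an encrypted child; descendants inherit it
zfs create -o encryption=aes-256-gcm -o keyformat=passphrase tank/secure
zfs create tank/secure/backups
zfs create tank/secure/media
```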

If anyone has done this: can you pass a USB device through to a VM in Proxmox (e.g. passing through a DAS box and its drives)? Or is that only allowed for more reliably-attached hardware like GPUs or specific SATA controllers? Edit: answered my own question; the docs for it are here: USB Devices in Virtual Machines - Proxmox VE
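For posterity, the CLI version from those docs boils down to something like this (the VM ID and vendor:product ID are placeholders; run lsusb on the host to find yours):

```bash
# Find the enclosure's USB IDs on the Proxmox host
lsusb

# Pass the device to VM 100, flagged as USB3
qm set 100 -usb0 host=abcd:1234,usb3=1

# Or pass a physical port instead, so whatever's plugged there follows the VM
qm set 100 -usb0 host=1-2,usb3=1
```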

Brief update, ish. After poking around a few forum threads and reviews here and elsewhere, I’m pivoting off of TrueNAS entirely for now, at least as far as the Minisforum box goes. The personal non-starters are the resume-on-power-loss behavior with a DAS (the enclosures will come up, but the disks won’t until manual intervention, as far as anyone’s been able to tell me in the threads or product reviews), and the native-encryption issues with ZFS that break… several things, but send/receive seems to be the big one.

Dropping links here in case anyone else stumbles across this or cares:

So I stepped back a bit and had more of a think about how I really interact with my ‘NAS’. Which is to say, I don’t, in the strict ‘network attached storage’ sense – it’s more a server for web services than a box hosting user-facing file shares, plus a backup client doing background writes to the storage (Active Backup for Business on the syno at the moment). And if the Minisforum is limited to a single NVMe and a single SATA SSD within this autorestart requirement (because it will be hosting my DNS), the big benefits of ZFS are basically gone – it’ll tell me things are bad but can’t do anything about it, and there isn’t enough space to move everything onto it anyway. TrueNAS was just a web-UI shim to set up ZFS in an easy way for bulk storage, and without those benefits, all the frustration of running the TrueNAS apps or working around their weirdness no longer makes sense.

Ended up grabbing a Debian 12 ISO and setting it up headless on ext4 for now. Docker goes in, Portainer and its agent go in, compose files get copied over from the Portainer instance on my syno via the UI - done and done. 20 minutes, rather than weeks. Not a for-everyone answer, and there are still a few reboot quirks to iron out (weirdness with Portainer-level CIFS volumes and Docker containers not (re)starting correctly if the shares aren’t up yet), but… stick with comfy, I guess; at least I know what the pieces are.
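The whole stand-up, roughly, for anyone following along (ports and volume mounts are the Portainer defaults per their docs; the agent is shown because the main Portainer UI lives on another box):

```bash
# Docker straight from Debian 12's repos is the low-friction route
sudo apt update && sudo apt install -y docker.io
sudo systemctl enable --now docker

# Portainer agent, so the existing Portainer server can adopt this host
sudo docker run -d --name portainer_agent --restart=always \
  -p 9001:9001 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /var/lib/docker/volumes:/var/lib/docker/volumes \
  portainer/agent
```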

ATM I still don’t have an answer for bulk data storage post-Synology; TrueNAS is probably fine if I stay within what it’s good at (file shares) and keep compute separate. Halfway tempted to pick up a cheap 4-bay x86 NAS and slap TrueNAS on it, or an HBA + some bays in the old desktop (roughly the same price either way, but I’m growing more and more suspect of how long the old rig will last). For now, though, I’ll settle for the stuff already on my shelf, get this off my list, and get on with the gaming build.


tl;dr: misfired - wrong hardware and wrong software for what actually matters to me in a home server, though I don’t think I would’ve clarified and crystallized that without the attempt. Live and learn, and hopefully you got a laugh out of “wait, headless Debian is the easier answer than the web-UI NAS distro?!”, because I certainly did. :slightly_smiling_face:


One cool thing with single-disk ZFS is that recent versions can pull corrupt blocks from a remote volume you’ve sent to with zfs send in the past. So: very fast restores, potentially.
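If this is the feature I think it is (OpenZFS 2.2’s “corrective receive”), the healing step is re-feeding a snapshot stream you still have on the backup side - a sketch with placeholder names, syntax from memory:

```bash
# Pull the snapshot stream from the backup box and receive it with -c
# (corrective): instead of creating a new dataset, it patches corrupt
# blocks in the existing one using the good copies from the stream
ssh backupbox zfs send backuppool/data@snap | zfs receive -c tank/data@snap
```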

I’ve also set up NFS on the backup system such that the prod replacement really is running from NFS, because restoring the dataset was going to take an annoyingly long time.

That part does sound rather cool; I’ll have to look into it. Right now the whole Debian-and-ext4 thing is understood to be entirely ephemeral (both in the “I will probably tear this down and rebuild it” sense, and in that it has no real safety net or backup plan at the moment). The bits that aren’t ephemeral make hops via CIFS over to the Synology, to at least get the ECC and btrfs benefits, since it’s sitting on the shelf there anyway.

Must admit, I’m not entirely tracking on the backup-system portion. Is that specific to the Ultimate Home Server all-in-one setup thread, or a different environment?

Oh, I neglected one other thing on the Synology front: they have a new drive line, the Plus Series. It seems to be aimed at the SOHO NAS folks - still more expensive than Seagate and WD by a few dollars a drive, but no longer double-to-triple like the enterprise ones.

It does make me worry (even more) that at some point they’ll start moving towards the same hardware lockout they’re doing on the enterprise NAS devices. Ditto when stuff like this pops up about their roadmap for the next couple of years: https://www.synoforum.com/threads/dsm7-1-introduces-new-drive-vendor-locking-parameters.8735/post-44385 . The real question is whether they’ll flub a DSM update and start locking out the older models, too.

So for the stuff where I treat the entire machine as disposable, I try to set it up so that I can just boot it off NFS, as opposed to having to restore the backup and then boot that (because that takes too long).

I can, in parallel, be restoring to another machine and then just sync what changed while the “run from network” backup clone is “production” temporarily. This strategy works well to keep costs/complexity low, too.
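A rough sketch of the “run from the backup over NFS” move (paths and hosts are placeholders):

```bash
# On the backup box: export the replica dataset to the stand-in machine
echo '/backuppool/prod-replacement 10.0.0.0/24(rw,no_root_squash)' >> /etc/exports
exportfs -ra

# On the stand-in: mount it where the service expects its data, then start up
mount -t nfs backupbox:/backuppool/prod-replacement /srv/app
```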
