Escaping the sprawl (rearchitecting homeprod)

Oh, netboot. Yeah, that would be cool, and definitely a lot cheaper to implement. :facepalm: It also solves the one problem I’ve had with the 1821+’s hardware (lack of an iGPU) in an elegant way. Not sure how I missed those videos about the netboot RPi setup you have for that, but that was a fun rabbit hole to dive down.

I don’t immediately recall a network boot option in the Minisforum BIOS, but I’ll have to look again. It was kind of a convoluted mess in there – even the auto-restart after power loss option was in an odd place and had a not entirely intuitive label.

At a high level, though – the environment you’re talking about has a ZFS storage server, the ephemeral server (single-disk ZFS, sends backups to the storage host on an NFS-enabled dataset), and a box that can netboot via NFS and just pick up that dataset (or others) as needed? Or is it two boxes, with everything living on the storage host and the compute run entirely off the network (and/or the classic ‘just grab another one from the closet’, like corporate devices with roaming profiles)?


Warning: Madness, mental gymnastics, rational-lies, and being wrong on the internet to follow (x99 ramble)

On the gaming rig side of house, I set about doing some (fairly casual) performance profiling to really nail down what it is I don’t like about the systems I have vs. the work I’m asking them to do. I hit some weirdness that, in retrospect, makes perfect sense. But I may as well document my foibles, too, so others don’t make the same mistakes.

I started off by just having Task Manager up on the second screen (aka the laptop’s screen) and playing through some games to check threading behavior. At first I was like, ah ha! This game and that game only use X number of cores/threads. Except for the ‘weird’ bit – every core beyond the ones in use was doing absolutely nothing at all. For some games this was 4, for others 8, and others would use everything I’ve got. The weirdest was my main game, FFXIV – on the initial app start it’d hit all 12, but once it was loaded into the login screen and on through gameplay it would only use 8 of the 12.

So I moved on and said heck it, let’s run 3DMark on the laptop’s various power plans. This is when I realized it was a power limit that was causing the cores to sit idle when using the Quiet power plan. So I went back and redid it on the High Performance plan, and sure enough – all cores, all the time, in nearly every game save the older ones that had been tuned for a 4-core system. But everything was playable, save Cyberpunk (usually ~75 fps, but occasional extended dips to 45 in busy scenes). Downside? Noise and thermals, to no one’s surprise. Even the Quiet profile is loud for my taste; High Performance is way, way too much. But… overall, not bad for a 5600 and a laptop 3060 inside a ~155W combined power budget.

Desktop, on the other hand… I went trawling through my old 3DMark results and CPU thermals have gotten ~10C+ worse in the last year, even after taking the 5820k off the 4.4 GHz OC back to stock. What was interesting is that, at least in that test, CPU performance when overclocked is dead level with the 5600’s. That’s weird. But it does mean I could probably shimmy along and wait another CPU generation if I have to, with acknowledgement of how inefficient and hot it will be in the meantime. I have never had a noise complaint with the FT02, because I had the fan curves pretty dialed in and added some sound deadening to it, too.

Then the 'tube’s algo got drunk and showed me some things that dovetailed with what I’d tripped over on the CPU performance side, and the madness took hold…

Some videos talking about x99's current 'budget' status

$80 CPU KILLS ZEN2! Core i7 5960X MAXIMUM Gaming Performance - YouTube

BEST Budget Gaming CPU? Core i7 5960X Under $100! - YouTube

Long videos short, the 8c/16t chips that drop into x99 are dirt cheap. They were also demonstrating some memory overclocks that intuitively make sense, yet run counter to the general ‘moar MT/s’ overclocking advice people gave back when x99 was current, as well as for dual-channel systems more broadly. The benchmarks were flawed (random resolution changes, graphics settings not likely to put full load on the CPU, etc.), but they made me curious.

Then there was the guy in the top comment on the first video talking specifically about how Haswell’s ring bus works and its bandwidth limits vs. the RAM (examples given: a 4.2 GHz ring bus on the 5950X or an Alder Lake 12100 can move about 135GB/s, but dual-channel 3200 RAM is only good for about 51.2GB/s, and quad channel was put forth as about 85GB/s), and how that compares to Infinity Fabric. In particular, he mentioned a test of setting the ring bus clock on an Intel chip to whatever the Infinity Fabric ran at on Zen 1, then Zen 2, etc., to see how that affects performance.
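His numbers pass the napkin-math test, at least as theoretical peaks (assuming 64-bit memory channels and the commonly cited 32 bytes per clock for the ring; the PCIe line is for the point below):

```bash
# Rough theoretical peak bandwidths, not measured figures:
echo "dual-channel DDR4-3200:  $((2 * 8 * 3200)) MB/s"   # ~51.2 GB/s
echo "quad-channel DDR4-2666:  $((4 * 8 * 2666)) MB/s"   # ~85.3 GB/s
echo "ring/uncore @ 4.2 GHz:   $((32 * 4200)) MB/s"      # ~134 GB/s
echo "PCIe 3.0 x16, one way:   $((16 * 985)) MB/s"       # ~15.8 GB/s per direction
```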

But it was a bit of a ‘duh’ moment, in the same way that MB/s per spindle vs. network speed was a ‘duh’ as soon as it was spelled out. The common advice was just ‘don’t touch the ring bus’ (or uncore, or whatever the BIOS calls it), because it ‘didn’t do anything’. But for my box with the GPU it has now, I’m curious whether it does give more perf, since it’s a 4.0 x16 card in a 3.0 x16 slot (i.e. another ~32GB/s of load on that bus after it negotiates down). Ditto curious to test 2666 with low-latency timings vs. 3000, because the IMC on these chips really did not want to go past 3000 without winning the silicon lottery… at least with what (little) I knew back then. I mean, we were manually locking everything, forcing the High Performance power plan, and shutting off all of the C-states in the BIOS.

Anyway, long story short – ended up ordering a Fractal North (so I can free up the FT02 for a storage build), a new PSU since the Seasonic in it is quite old at this point, an Arctic Liquid Freezer 280, 32GB of DDR4-4000 (well, I needed another 16GB anyway…), and one of those Thermal Grizzly KryoSheets I’ve been curious about. I’ve been quite wrong about many things since opening this thread, and if I’m going to be diddling about with old, hot hardware, it’s time to see if all my FUD from the old “koolance/danger den/tygon tubes from home depot/heater core from the junkyard” water cooling days needs to die, too. No great loss since x99 is so cheap now, and if I am as wrong as I probably am, then it’s good bones to start from when I updoot the platform later on. Plus, if it dies, I still have the laptop to fall back upon.

I’ll know the madness has truly taken hold when I start poking around about enabling resizeable BAR on this board… :rofl:

Had some time today to noodle with tuning the desktop. Since it’s so stupid hot why not crank up the 500W space heater, right? At least it’s good for stability testing? :joy:

Summary
  • To no one’s surprise, choking down the ring bus / uncore speed murders performance. What was interesting was that, on the flip side, RAM write speed scaled more directly with the uncore clock than it did with the traditional memory overclock steps; reads were what moved most with a normal memory overclock. At the practical level, though? The stock 3.0 GHz vs. 2.0 GHz uncore test only took ~10% off gameplay fps.
  • The laptop’s 5600H (DDR4-3200) and the 5820k bone stock (incl. 2133 memory) are within spitting distance of each other for memory bandwidth, but the 5600 is ever so slightly faster because of IPC. Turn up the memory on the 5820k and it does run ahead, but not as far as one might expect from quad channel. Again, not really a surprise.
  • The 5820k, when the power management features are actually on (lol), only pulls ~30W idle; when overclocked it was pulling about 80W at load. I was thermally limited by the cooler and ambient today, though, so I’m sure it can pull more. But this is far less than I’d originally thought, and maybe opens a door to downclocking this platform for home server use.
  • The Sabertooth x99 does expose “Above 4G Decoding” in the latest BIOS, so yes, there is a glimmer of maybe enabling resizable BAR on this board.
  • The FFXIV Endwalker bench is weird. Under no circumstances was the game maxing out the CPU or GPU, nor hitting a thermal or power limit, yet it couldn’t push frames any higher. I assume this is a system-level round-trip latency problem?
  • The killer, though: storage performance on the x99 is bad. Even the PCIe gen 3 x8 add-in-card SSD I have isn’t keeping up with the Ryzen’s ability to pull data off the gen 4 x4 drives (even with them being slower in CrystalDiskMark, for whatever reason). A 25% drop in loading time from scene swaps is hard to ignore.
  • I also had some amazingly bad behavior during the memory OCs on the desktop that immediately explained where those video files got corrupted… During the unstable memory OC, I had typed some things into a Notepad doc and saved them, then reset the PC to revert the OC. It BSOD’d on shutdown, and when I started up again with everything back on defaults, that text file had nothing in it but null characters.
A detailed tribute to wrongness (data points, presented badly)
**Ryzen 5600H (6c/12t) @ 4.2 GHz, 2x32GB DDR4-3200 22-22-22-52 1T (Quiet performance plan)**

| | Read | Write | Copy | Latency |
| --- | --- | --- | --- | --- |
| Memory | 46166 MB/s | 47175 MB/s | 44036 MB/s | 94.4 ns |
| L1 Cache | 1444.6 GB/s | 746.41 GB/s | 1467.9 GB/s | 1.0 ns |
| L2 Cache | 763.56 GB/s | 638.12 GB/s | 759.00 GB/s | 3.0 ns |
| L3 Cache | 280.33 GB/s | 274.66 GB/s | 264.27 GB/s | 22.0 ns |

Time Spy: CPU score 6992 (23.49 fps), GPU score 8130 (T1: 54 fps, T2: 46 fps; laptop 3060)
FFXIV Endwalker benchmark: score 17156, 127 fps avg / 55 fps min, 12.144 s loading time

**Intel 5820k (6c/12t) @ stock clocks, 4x4GB DDR4-2133 15-17-17-?? 1T**

| | Read | Write | Copy | Latency |
| --- | --- | --- | --- | --- |
| Memory | 47609 MB/s | 46350 MB/s | 47615 MB/s | 72.9 ns |
| L1 Cache | 1348.5 GB/s | 674.49 GB/s | 1348.2 GB/s | 1.1 ns |
| L2 Cache | 386.20 GB/s | 213.39 GB/s | 299.29 GB/s | 3.6 ns |
| L3 Cache | 197.42 GB/s | 136.06 GB/s | 175.01 GB/s | 23.4 ns |

Time Spy: CPU score 5491 (18.45 fps), GPU score 16681 (T1: 108 fps, T2: 96 fps)
FFXIV Endwalker benchmark: score 18031, 135 fps avg / 49 fps min, 15.38 s loading time

**Intel 5820k (6c/12t) @ 4.2 GHz core, 4.2 GHz uncore, 4x4GB DDR4-2133 15-17-17-?? 1T**

| | Read | Write | Copy | Latency |
| --- | --- | --- | --- | --- |
| Memory | 50761 MB/s | 62306 MB/s | 51245 MB/s | 67.4 ns |
| L1 Cache | 1576.4 GB/s | 788.49 GB/s | 1576.1 GB/s | 1.0 ns |
| L2 Cache | 572.78 GB/s | 248.51 GB/s | 330.63 GB/s | 3.0 ns |
| L3 Cache | 303.40 GB/s | 175.53 GB/s | 225.58 GB/s | 17.1 ns |

Time Spy: skipped/forgot
FFXIV Endwalker benchmark: score 20434, 152 fps avg / 56 fps min, 13.599 s loading time

**Intel 5820k (6c/12t) @ 4.0 GHz core, 4.0 GHz uncore, 4x4GB DDR4-3200 17-18-18-36 2T**

| | Read | Write | Copy | Latency |
| --- | --- | --- | --- | --- |
| Memory | 62388 MB/s | 62025 MB/s | 65686 MB/s | 57.2 ns |
| L1 Cache | 1500.4 GB/s | 750.42 GB/s | 1499.9 GB/s | 1.0 ns |
| L2 Cache | 512.38 GB/s | 237.75 GB/s | 284.90 GB/s | 3.2 ns |
| L3 Cache | 287.03 GB/s | 168.64 GB/s | 215.03 GB/s | 15.1 ns |

Time Spy: CPU score 6420 (21.57 fps), GPU score 17035 (T1: 112 fps, T2: 97 fps)
FFXIV Endwalker benchmark: score 21355, 158 fps avg / 60 fps min, 13.825 s loading time

**Intel 5820k (6c/12t) @ 4.4 GHz core, 3.0 GHz uncore, 4x4GB DDR4-3200 17-18-18-36 2T (completely unstable)**

| | Read | Write | Copy | Latency |
| --- | --- | --- | --- | --- |
| Memory | 53379 MB/s | 47003 MB/s | 59592 MB/s | 60.7 ns |
| L1 Cache | 1650.2 GB/s | 825.43 GB/s | 1649.6 GB/s | 0.9 ns |
| L2 Cache | 554.30 GB/s | 248.30 GB/s | 358.70 GB/s | 2.9 ns |
| L3 Cache | 222.46 GB/s | 143.23 GB/s | 183.17 GB/s | 17.0 ns |

Time Spy: not run
FFXIV Endwalker benchmark: crashed

**Intel 5820k (6c/12t) @ 3.6 GHz core, 2.0 GHz uncore, 4x4GB DDR4-2133 15-17-17-?? 1T**

| | Read | Write | Copy | Latency |
| --- | --- | --- | --- | --- |
| Memory | 36630 MB/s | 31457 MB/s | 40189 MB/s | 82.2 ns |
| L1 Cache | 1348.3 GB/s | 674.43 GB/s | 1347.6 GB/s | 1.2 ns |
| L2 Cache | 362.09 GB/s | 209.92 GB/s | 275.92 GB/s | 3.7 ns |
| L3 Cache | 143.29 GB/s | 94.04 GB/s | 121.87 GB/s | 30.6 ns |

Time Spy: CPU score 5224 (17.55 fps), GPU score 16506 (T1: 107 fps, T2: 95 fps)
FFXIV Endwalker benchmark: score 16117, 122 fps avg / 45 fps min, 16.84 s loading time

Anyroad, after today’s testing a few things became clearer about the build priority list, and about the lifecycle for the other parts from where I sit now. The high level: take the old desktop and press it into home lab server use (i.e. give the TrueNAS setup another whirl, plus other tomfoolery), clear the games off the laptop and keep it as stable as possible for beige computing tasks, and build a better gaming rig. Also clearer: how overkill the GPU I have is, or rather how much I do not like having a 350W space heater in the room.

Did have some questions, though:

  • There’s been mention of NVMe RAID 0 not conferring speed boosts ‘as expected’. Given that I’m looking for game-load / burst-ish performance for fetching assets in open-world games, rather than sustained scratch, media serving, or load-once-and-done FPS gaming, does it make sense to pursue that, or is this just more madness? Alternately, are there any actually good Gen5 drives, or are they all overpriced, hot, and high-latency ‘because new’ at the moment? The intent, if going that route, is to commit Apple heresy and RAID 0 the boot + everything-else drives, and keep good backups.
  • AM5 motherboards are kind of a wasteland for Intel NICs and good PCIe layouts - at least amongst brands I’m willing to touch right now (MSI and ASRock, as ASUS and Gigabyte are both recently on team ‘start fires and blame users’ with AM5 CPUs and PSUs, respectively). How bad is Realtek’s current stuff, really, under Pop!_OS? The 8125 / Dragon seems to be the most common, and it’s the motherboards that are keeping me from just pulling the trigger on an AM5 build.

Ended up flipping the 3080 to a family member in need, so now it’s just settling in for the long wait. Skipping the RAID for sanity’s sake, and the B650E PG Riptide has a Killer (Intel) NIC on it. Given that even a 4090 doesn’t take much of a hit from running on PCIe gen 3 x16 (NVIDIA GeForce RTX 4090 PCI-Express Scaling - Relative Performance | TechPowerUp), there’s not much point stretching for X670 at the moment. Plus, I can lifecycle the board into home server use later in the nebulous future, since the NIC’s good and the CPU has an upgrade path available if I end up needing more cores. Couldn’t find a decent answer to the GPU space heater problem with what’s out there right now, though. So it goes - that’s what vsync is for :joy:

Put the gaming box together today, pictures later if I remember to.

Out of forgetfulness more than anything, I used the NVIDIA image of Pop!_OS to do the install. Did not hit any issues with an overly-helpful ‘I see you’re using an AMD GPU’ response to the 7900 XT; not sure if something changed since those reviews went up or if this is a byproduct of using the ‘wrong’ image. It just fired right up, ran some updates, done. (i.e. did not have to do anything with nomodeset)

Steam did hit an issue where it was basically crashing to desktop (as the perceived user experience) every 5 seconds or so. The issue was that it needed ‘PrefersNonDefaultGPU=’ to be false rather than true; change this in /usr/share/applications/steam.desktop. After that change, no further issues. Even got my main game installed and running via Flatpak. Probably 30 minutes from OS install to in-game. Wild.
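For anyone who trips over the same thing, the one-liner version of that fix is roughly this (assuming the stock Pop!_OS location for the launcher entry):

```bash
# Keep a backup, since a Steam package update can put the file back the way it was,
# then flip PrefersNonDefaultGPU so the launcher stops preferring the non-default GPU.
sudo cp /usr/share/applications/steam.desktop /usr/share/applications/steam.desktop.bak
sudo sed -i 's/^PrefersNonDefaultGPU=true/PrefersNonDefaultGPU=false/' \
  /usr/share/applications/steam.desktop
```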

Other thoughts: The Fractal North is kind of hit and miss. It’s very pretty, but it also could use a few mm in every direction for cable management, GPU clearance, and fan clearance (that last one is somewhat self-inflicted by cooler choice and placement, but that was forced by GPU length constraints). It could also really use another hole at the top of the motherboard tray for passing cables through, and the lower holes near the PCI slot covers are going to be blocked by the PSU (the front panel HD Audio connection was nearly a victim of this in my install).

If you do go with a 280mm Arctic Liquid Freezer II on the side bracket, you will have to make sure it’s installed as close as possible to the mounting ears that hook into the side panel; otherwise the panel will catch on the radiator and prevent you from seating it to screw it in. It also, thanks to the ‘hey, let’s make it look cool’ patterns in the tracks where the fan mounting screws go, cannot just be loosened up and slid from one end to the other – the middle two fan screws will catch.

Also, the captive screws on it are generally a pain. It’s non-obvious by feel whether you’ve finished unscrewing it from the panel or are now unscrewing the captive screw from its seat, due to how tight they made them. Hopefully that’s just an issue with my example – mine has others, like the mesh side panel inexplicably wanting to bow out, and other (minor) issues that stem from how thin the metal is.

All in all, though, a fun build; it looks good and runs quiet (after letting the BIOS do its fan tuning and correcting which sensors it polls for the chassis fans I’d put on the CPU #2 connector). Have to shake it down with something more intensive on the morrow.

Well, so far was so good. FFXIV had installed and run without any serious fuss. Cyberpunk had installed but wasn’t able to run, for reasons I hadn’t dug into just yet (yay, MMO patch content rush). Baldur’s Gate 3 installed, but I hadn’t gotten around to playing it yet to see if it would run.

Installed some updates to Pop!_OS last night that required a restart, and went to bed. Now the machine black-screens after the BIOS splash goes away, and no amount of waiting will let it fix itself. So it seems it only took a couple of weeks for a relatively stable distro to tear itself apart? :slightly_frowning_face: The update was via the GUI, and the specific packages were hidden (i.e. I don’t know which specific things got updated). I also can’t currently even get into the recovery partition, so I'm going to have to get the USB stick out and see what can be done from there.

Hopefully a detailed tribute to wrongness to come, but given where it’s at and/or how far down the recovery steps I got this morning, I’m not hopeful of much beyond flatten-and-start-over. OTOH, it’s not like a multi-year nest was built up on it.

Anyroad, steps I’ve been following, lest someone else hit similar. Will report results later:


Late edit / follow-up: Steps at the bottom of the first article did sort it out. Still not sure what changed to have borked it in the first place, but up and running is up and running I guess. :upside_down_face:
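For posterity, the general shape of that kind of repair from the live USB is roughly the following. This is a from-memory sketch rather than a copy of the article, and the device names are placeholders for a default Pop!_OS layout:

```bash
# From the live environment: mount the installed system and chroot into it.
sudo mount /dev/nvme0n1p3 /mnt            # root partition (placeholder device)
sudo mount /dev/nvme0n1p1 /mnt/boot/efi   # EFI system partition (placeholder device)
for d in dev proc sys run; do sudo mount --bind /$d /mnt/$d; done
sudo chroot /mnt

# Inside the chroot: finish whatever the interrupted update left half-done,
# then rebuild the initramfs for all installed kernels.
dpkg --configure -a
apt update && apt full-upgrade
update-initramfs -c -k all
exit
```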

A minor update, of sorts, but more an extended review of some things.


ASRock PG Riptide B650E WiFi: My only complaint is the fan control in the BIOS not being as good as what I had on the Asus Sabertooth. It only has two temp sensors (CPU and ‘motherboard’), and I have zero idea from looking at the board or any of the manuals what the motherboard sensor is actually measuring. As a result, all of the fan tuning I’ve been doing is relative to the CPU. Not as nice for dialing in the quiet, but it was a cheaper board (relatively speaking), so I’m not surprised. It’s also partially just fussier, since adjusting the settings needs a reboot rather than having an app for that.


Pop!_OS: Generally speaking, it just works. I did have the Steam store crash issue happen again. The setting had been unset or overwritten; I just had to set it again (the same PrefersNonDefaultGPU edit in steam.desktop as before).

Otherwise? One hilarious quirk with Baldur’s Gate 3, where it won’t run under Vulkan but will run in DirectX mode. Everything else has been ‘just hit install’ and it works.

I did also briefly try Wayland with it, but that tanked FPS in FFXIV from ~144 (with vsync off) down to the mid 30s. Think I’ll just wait for Cosmic desktop / Wayland to be better supported before I try again. Which, by extension, forestalls thinking any further about multi-monitor with different refresh rates.


Minisforum UM560XT: It bricked itself today, after a few months of really not being used for anything other than diddling around with TrueNAS Scale and Debian w/ Cockpit. No amount of power cycling, CMOS reset, or leaving it unplugged for # minutes helped, and no flashing on the power brick - just a solid light like always. It never got properly loaded up with any meaningful amount of VMs, containers, etc. nor was it ever really pushed hard. So that’s disappointing, but at least it never made it to prod before it elected to die. Was just about to pick up one of those Asustor DAS enclosures, too. :slightly_frowning_face:

Extra disappointing because I was looking at their mini-ITX motherboard offerings, and that six-M.2-slot, Intel-laptop-CPU board was ticking a lot of boxes for a server guts update. Now they’re off the menu entirely, for me at least.


The server update target is really starting to crystallize into something like:

  • 2-5 3.5" drive bays, hotswap preferred
  • 2+ m.2 slots (beyond OS drive)
  • Needs AV1 decoder (11th gen Intel CPU or newer, or AM5; but intel probably gets the lean due to QuickSync being better supported by the software)
  • 4c/8t minimum
  • 32gb RAM minimum, ECC would be nice

Nearly-there options: the Seeed reServer (missing ECC – ~$250-300ish for the 1125s atm; the holdback is ‘is it another brick about to happen?’, since they’re a relatively smaller company), the Asustor AS6702T/AS6704T (missing AV1 decode and ECC – $450 (2x 3.5") to $600 (4x 3.5")), and slapping an A380 into the x99 box (probably jank because of Arc’s quirks on older platforms, and I’d have to pick up a 2011-v3 Xeon and new memory for ECC – $100-300ish, probably, but with higher running costs). Or a Node 304 build, or an equivalent Chenming, SilverStone, or Supermicro chassis for hotswap bays.


Well, it lives again. Apparently the “trick” is to leave it sitting unpowered for 3 days. :expressionless: I took the 2.5" SSD out of it and threw it into my NAS for ephemeral storage’s sake in the interim; that’s the only hardware change. And that drive has been working just fine in the DS1821+. The other weird thing is that when I plugged it in again it still had ‘restart after power loss’ set in the BIOS, so the CMOS reset button apparently wasn’t working while it was in the bad state either.

No idea, still don’t trust it. Definitely a good time to start poking around with booting from iSCSI.
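What I mean, at least from the client side, looks roughly like this with open-iscsi (the portal IP and IQN are placeholders, and an actual diskless boot on top of it needs iPXE or BIOS iSCSI support rather than just a mounted LUN):

```bash
# Discover the targets the NAS is advertising, log in, and the LUN shows up
# as a plain block device.
sudo apt install open-iscsi
sudo iscsiadm -m discovery -t sendtargets -p 192.168.1.50
sudo iscsiadm -m node -T iqn.2000-01.com.synology:example-target -p 192.168.1.50 --login
lsblk   # a new /dev/sdX appears; partition/format/mount like any local disk
```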


Ages later :joy:




As far as the server stuff goes, the most inexpensive entry point into a new NAS with ECC seems to be a DS723+. But I’d rather have the media decode, if I have to pick. Ended up snagging a U-NAS 2-bay case and an N100 ITX board for a backup target. We’ll see how that goes before deciding whether to duplicate the build in a 4-bay for the home server, or put something a little stronger in front (12100 or similar).


This heccing thing.

Where do I even start with the last 24 hrs. Wanted to move stuff from my shelf to a closet. Plugged everything in exactly where it had been. Bring the machines up, and somehow the Docker macvlan containers on my Synology can no longer reach outside the host. This creates a DNS outage, ’cause PiHole and LanCache are now down. Okay, easy enough, cut DNS over to a public resolver for now. Portainer stacks haven’t changed, output from ifconfig on the Synology hasn’t changed, and I can get to the Synology from other boxes on the LAN, so it’s not the network. Okay…?

Go look at Proxmox, thinking it’s time to spin them up on there instead. It can’t get into its NFS shares. Check the IPs, nothing’s changed…okay?

Set up CIFS shares for Proxmox to talk to the Synology. These connect, but it cannot spin up a VM – deployment fails with a timeout waiting on a lock. It’s then stuck and cannot build any other VMs or containers indefinitely / until rebooted… okay?

So let’s simplify the Proxmox setup, right? Move all the data off the SSD I’d moved into the Synology, to let the network storage box do network storage box things. It’d been working flawlessly in there. Put the drive into the Minisforum, and now it won’t boot anymore, again. Okay…?

For a laugh, unplug the 2.5" SSD. Plug in power. Fires right up (the BIOS has ‘reboot on power loss’ enabled). Shut it off, plug the 2.5" back in. Doesn’t power on or attempt to boot. Unplug the 2.5" again, fires right up. Didn’t even have to wait any number of seconds for it to forget.

So at this point, the Minisforum is plainly faulty at a hardware level. Proxmox doesn’t seem to like using network storage (at least in my environment), which, combined with the hardware faults, means the Minisforum is a nice paperweight. Should’ve gotten a NUC, MSI Cubi, Asus ExpertCenter, or one of the Tiny/Mini/Micros instead. Still might, who knows.

Fun and games. And now to learn Syno’s virtual machine manager…


I have my RB5009, which is pulling most of the strings in my network, serving as primary DNS and getting its info from the PiHole (which I have axed by accident a number of times before implementing it this way round).


Hm. The way I have mine set up at the moment is that the DHCP server hands out the PiHole, the PiHole talks to LanCache, and LanCache talks to the upstream resolver. Mostly for monitoring’s sake, so I know which client is asking for what. But you’re right, having everybody talk to the router for DNS, and then router → pihole → lancache → out, would be easier to cut over when the PiHole dies. Caching might be weird, though? :thinking:


Yeah, DNS caching is the mystery I am yet to fully resolve.

I should’ve done this sooner. All of the Docker macvlan pain I’ve had over the years is abstracted away by the vSwitch. It can be set up to not care about which port(s) you have network cables plugged into. Way, way less fragile. The rest of the setup was more or less the same as the Forbidden Router guide for PiHole and LanCache; I just used Debian instead of Alma (for no reason other than ‘already had the ISO on hand’).
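For context on why the old way was so brittle: a Docker macvlan network gets pinned to one specific parent interface when you create it, something like this (interface name and subnet are placeholders):

```bash
# Everything attached to this network rides on eth0 specifically; if that port
# isn't the live one anymore, those containers fall off the LAN.
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 \
  lan_macvlan
```

The vSwitch sits underneath that, so (as I understand it) the VM just sees ‘the LAN’ no matter which physical port happens to be doing the work.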

Though, there are a couple of things I don’t like about the virtualization on the Synology. ‘Where are the VM files?’ is one, as they don’t show up in any of the File Station or Control Panel > Shared Folder locations. It only shows information about them in Virtual Machine Manager > Storage. And even there it’s just “used” and “capacity” for the volume you set up, and then just “allocated vDisk size” for the VMs. Nothing about the actual space currently used except under Virtual Machine > the VM > Virtual Disk. That’d be annoying in a larger environment than mine. Also doesn’t give a choice on thin vs thick provisioning - everything is thin provisioned.

That being said, I haven’t yet set up a container in the new setup where I care about its contents being persistent. But the host and the VM can ping each other just fine, so hopefully no huge surprises there.


My U-NAS NSC-201 case turned up today. Took a measurement, for the sake of it.

So the mixed news is that yes, ‘deep’ mini-ITX boards like the ASRock Rack W680D4ID-2T or B550D4ID-2L2T should fit in this and the 4-bay variant. However, the board would block most of the air inlet they give you out of the box. Nothing a drill or a Dremel couldn’t solve, though.

CPU cooler clearance isn’t any more than the height of the standard motherboard rear I/O shield, though. Intel stock coolers or low-profile aftermarket only; not sure how tall the AMD stock coolers are.


The N100DC-ITX and the RAM and SSD to get it going turned up today. I realized I forgot to get a SATA (M) to Molex (F) power adapter, whoops. Getting the drive bays up and running will have to wait.

But from the rest of the teardown of this case, I have to say I kinda like what U-NAS is doing. They shipped it with an Arctic fan, and the PSU is a Delta rather than no-name nonsense. They also had the HDD backplane’s Molex and SATA connectors already hooked up and zip-tied in sensible locations that still give easy access for the builder.

Mounting the board is a bit of a kerfuffle, though, even in their demo video. Or I’m conflating it with Audheid’s 4-bay install video - either way, it mounts similarly. The build itself was fun and came together quickly, but I can definitely see why folks in the reviews complain about the build being frustrating, especially if it were someone’s first.

A few pics, and measurements for cooler height clearance in case anybody else ends up finding this breeze of madness while planning a build.





Murphy’s law. Can’t repro the issue once support is engaged. :facepalm: The differences were the SSDs, as the QX is still in the Syno and the SP drive is in the TrueNAS box now. So either it’s that, or… idk. A warm-start bug, somehow? That’d be weird.


Since I’m (more than) kinda fed up with that box, I started moving on a minor refresh of the x99 box. I snagged myself on Xeon naming schemes, though: picked up a Xeon E5-2650L instead of an E5-2650L v3/v4, plus a stick of ECC DDR4, to turn the old x99 desktop into a server. Don’t eBay on cold meds :joy: So now that’s next weekend’s project. Works out anyway – that’s when the Arc A380 turns up for it.

More time to stand up the Minisforum and see if I can break it again on camera, anyway. Joy of joys.


Thanks for relaying the horrors, for us to all learn from. And to live vicariously through you!


Yeah, after I started hitting some wackiness with the mini PC I figured ‘cautionary tale’ was probably how this was going to end up. :joy: But as you said, hopefully it helps someone else out. :slightly_smiling_face:


The power cable adapter turned up and I finished the hardware side of the N100 build in the U-NAS box. One thing I will point out, though: the hard drive mounting in this case. The drives screw directly into the cages; there are no rubber vibration dampers. If you have loud drives (like the 14TB Exos drives I put in), you will absolutely hear them chatting away. There’s also not much room to add damping after the fact, either.

Threw TrueNAS Scale on it and decided to play around with apps and VMs on it again for now, while parts show up for the other box. Observationally, learning the apps/charts ecosystem might be required on a box this size if it were an all-in-one: 8GB of the 16 that an N100 supports is going to go to ZFS, leaving really only 4 for a VM to run Docker.
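If it came to it, the ZFS side is at least tunable. A minimal sketch of capping the ARC so a VM gets more headroom (the value is arbitrary, the change isn’t persistent across reboots, and TrueNAS SCALE normally manages ARC sizing itself):

```bash
# Cap ZFS ARC at 4 GiB for the current boot:
echo $((4 * 1024 * 1024 * 1024)) | sudo tee /sys/module/zfs/parameters/zfs_arc_max
# See what the ARC is actually holding:
arc_summary | head -n 30
```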

So far, either Jellyfin seems much easier to set up than Plex, or the app install workflow in TrueNAS Scale got some huge polish between the last time I touched it and now. This one more or less Just Worked, rather than being ACL hell like Plex was. QuickSync Video / hardware transcoding was also dead easy to configure, and even for how ‘underpowered’ this CPU is it seems to handle media server duty just fine for a handful of active clients.

Thermals-wise… yeah. That little fan is trying its best, but it struggles to keep up if the machine is both doing transfers and playing media at the same time. It seems to be an older, non-PWM variant of the Arctic F8 Silent. The CPU hovered around 85C and spiked up to ~92C while transferring some test videos to it and playing one of them via Jellyfin, took about 15 minutes after stopping playback to get down to its transfers-only steady state (~60C), and idles around 40C. Confounding factors? I chose a motherboard with no active fan on the CPU cooler, and one that takes a DC input, so there’s no Flex PSU contributing its fan to evacuating air from the case. A normal build wouldn’t have this problem. Disk temps were in the mid 40s, though, so that’s fine enough.

However, taping over the holes where the PSU would be, and giving it more space from the wall it’s pointed at, stopped it from recirculating hot air through the PSU cutout instead of drawing through the drives and the holes in the side like it’s supposed to. It now seems to hold the CPU at a steady ~80C while transcoding and writing files to disk, even if multiple machines are requesting transcodes. Likewise, as soon as the load comes off, the temps now drop almost immediately. I think an Arctic P8 Max would be cheap insurance, though, even if I think I’m unlikely to ever hear it scream.

Fortunately this is a short-term thing, just to see what an N100 can do – the box is just supposed to be a backup target for whatever the main server ends up being, after all. Certainly tempted to just grab a few N100 mini PCs as cattle, though. Everything else I’m doing (PiHole, Home Assistant, etc.) easily runs on less powerful hardware.

tl;dr: Build-specific ITX case cooling problems solved with painter’s tape. N100s are surprisingly good for what they are, except for the RAM capacity limitations.


Adventures with Lancache

One downside of Docker and/or the DNS living in a VM rather than directly on the Synology is that it adds some latency, especially if the LanCache is busy serving out files. Extra-especially since they’re cohabitating in the VM together. I was seeing a 5-minute load average of >15, with spikes to ~24, on an 8-core VM just trying to push things over gigabit.

Poked around a bit and simplified DNS to just PiHole and out. LanCache has scripts that auto-generate the files needed in the dnsmasq.d config folder to point the relevant domains at the lancache monolithic container.
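The generated entries are just plain dnsmasq address overrides pointing the CDN hostnames at the cache box; something along these lines ends up in dnsmasq.d (the cache IP is a placeholder, and the real domain lists come from the uklans/cache-domains data):

```bash
# Sketch of what the auto-generated config boils down to:
cat <<'EOF' | sudo tee /etc/dnsmasq.d/01-lancache-steam.conf
address=/lancache.steamcontent.com/192.168.1.210
address=/steamcontent.com/192.168.1.210
EOF
pihole restartdns   # reload pihole-FTL/dnsmasq so the overrides take effect
```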

Along the way, did trip over some options for automating the update of that either via cron job or an all-in-one docker container:

But really, the sane option here is to sandbox lancache monolithic into its own little world and give the DNS VM thread priority when they start getting into a slap fight over who gets to spend more time on the resource-availability rollercoaster. Alternately, DNS needs to move to another box entirely if the cache is going to get hammered a lot.

Performance is still in line with reasonable expectations (Hardware | LanCache.NET), but RAID 10 of spinning rust + 1TB of SSD cache drives (which had it in cache – I tried several times) only getting 400 Mbps feels bad. Adding RAM to the VM helped a little, as it was swap thrashing on 8GB (despite reporting only 2GB of that used, both in Synology’s UI and when I poked at the logs via Cockpit… not sure what that’s about). But it did run better when that container was directly on the host, and giving 16GB / half my RAM to one VM seems excessive. Alas, due to Synology quirks, running it directly again will require macvlan networking because of port reservations… I think? :thinking:
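For reference, the swap thrashing was visible with bog-standard tools inside the VM, nothing Synology-specific:

```bash
free -h      # swap "used" told a very different story than the VMM dashboard
vmstat 5 5   # sustained non-zero si/so columns while the cache is busy = actively swapping
```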

What I’ve tried since:

Summary

I tripped over this post that suggests ‘just bind the container to an alternate NIC’s IP address and port rather than all IPs’ using Docker’s host networking - https://www.reddit.com/r/lanparty/comments/i9y485/comment/g23r99e. Okay, grabbed a cable and plugged in another port, set up the port mappings to be ‘${LANCACHE_IP}:80:80/tcp’ and similar. That did let the container deploy, but ultimately did not work (nginx inside the container tries and fails to bind 0.0.0.0:80 - port’s in use).
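For the record, the attempt boiled down to roughly this (IP is a placeholder), and in hindsight the failure makes sense: with host networking, Docker ignores published port mappings entirely, so nginx in the container still goes after 0.0.0.0:80 and collides with DSM’s own web server:

```bash
LANCACHE_IP=192.168.1.211
docker run -d --name lancache \
  --network host \
  -p ${LANCACHE_IP}:80:80/tcp \
  -p ${LANCACHE_IP}:443:443/tcp \
  lancachenet/monolithic:latest
# Docker warns that published ports are discarded when --network host is used,
# which is exactly why the bind still lands on 0.0.0.0:80.
```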

Found “Port binding does not appear to work with host with multiple IP’s” (Issue #121 · lancachenet/monolithic · GitHub). The suggestion there to configure a bridge network also did not work – same nginx issue. But they did suggest manually editing the nginx config files. So I copied the /etc/nginx directory out of the container and poked around to set a listen IP address; this lives in /etc/nginx/sites-available/10_cache.conf. Then I just updated the volume mappings for the container to point at my copy of that file. However this, too, failed – nginx: [emerg] bind() to [theIPAddr]:80 failed (98: Address already in use). A new error, but still not working.
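And the manual-override attempt, roughly (the IP is a placeholder; the config path is the one from the container). My read on why it still fails: DSM’s own nginx already holds the wildcard 0.0.0.0:80, and a second listener can’t bind a specific address on a port someone else holds the wildcard for, hence the same EADDRINUSE:

```bash
# Pull the shipped config out of the existing container, point nginx at one
# address instead of the wildcard, then bind-mount the edited copy back in.
docker cp lancache:/etc/nginx/sites-available/10_cache.conf ./10_cache.conf
#   edit the copy: change "listen 80 ..." to "listen 192.168.1.211:80 ..."
docker rm -f lancache
docker run -d --name lancache \
  --network host \
  -v "$PWD/10_cache.conf:/etc/nginx/sites-available/10_cache.conf:ro" \
  lancachenet/monolithic:latest
```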

As near as I can figure, the options remain ‘use macvlan’ or ‘use a separate VM’. I don’t want to turn off the appliance’s own port 80 and 443 redirects to the management ports and then fight with Synology updates overwriting that change.


I have an SSD hooked up to my RB5009 acting as an HTTP proxy, which helps a lot.

Would be interesting if the Odroid HC4 with two SSDs in it would work as a low-cost LanCache machine, I have my doubts though. That way, you could separate out the “internet handling” machine from the “home-core-network”.