Which arm was that? Been looking for one that can lift that high.
Expensive, but I’m digging it. What really sets these apart isn’t so much the height of the pole, but the range of both the upward and downward movement of the arms. I found others with poles taller than this, but couldn’t get the arrangement and flexibility I wanted until I stepped up to the Ergotron.
The home gym PC didn’t need anything like that, so I just went with whatever was cheap.
Amazon says gonna get this today! So excite!
I only have the 8 inch model though; we all know that the 13 inch is where it's at.
hideyho.
Workstation Part 2: Return of the Waffle
There’s been more waffling.
VFIO on the riptide board was a no-go. I should have checked IOMMU groups on it earlier, or rather its single IOMMU group for everything. I replaced it with an Aorus Pro AX from Gigabyte. Checking IOMMU groups on this board, while maybe not perfect, was a much better experience. The only real oddity was that the 6800XT and its associated audio controller ended up in separate IOMMU groups. Not really a big issue, just weird.
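For anyone wanting to run the same check, this is a minimal sketch of the usual way to dump the groups (roughly the loop from the Arch wiki PCI passthrough page), run on the host once IOMMU is enabled:

#!/bin/bash
# List every IOMMU group and the devices in it.
shopt -s nullglob
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU Group ${g##*/}:"
    for d in "$g"/devices/*; do
        echo -e "\t$(lspci -nns "${d##*/}")"
    done
done

Anything you want to pass through needs to be in its own group (or only share one with things you can also hand over), which is exactly what the riptide board couldn't do.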
The U.2 PCIe card is out also. As Wendell said, they work except when they don't. While I didn't really investigate it, the sporadic weirdness I was seeing in Fedora is also gone, so I'm calling that a win.
I also had a chance to replace the 3070 with a 6800XT, and after some shenanigans with non-payment I ended up keeping the 4TB SSD I pulled out of the NUC and used it in one of the M.2 slots on the new board. More space for local game storage this way.
There’s fewer LEDs on this board also.
Around this time was also when the Red Hat firewalling thing happened. I might have just stayed on Fedora had there not been a motherboard swap involved, but after thinking it over I'm not sure why I didn't go with Debian from the start. As for VFIO, I pretty much just followed Wendell's guide from 2019 for Pop!_OS. It worked out just fine on Debian 12.
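Not a substitute for the guide, but the Debian end of it boils down to binding the card to vfio-pci at boot. A sketch, assuming the usual Navi 21 IDs for a 6800XT (verify yours with lspci -nn) and stock Debian paths:

# /etc/modprobe.d/vfio.conf
# GPU function + its HDMI audio function, both taken from lspci -nn
options vfio-pci ids=1002:73bf,1002:ab28
softdep amdgpu pre: vfio-pci

# /etc/initramfs-tools/modules -- make sure vfio-pci is available early
vfio
vfio_iommu_type1
vfio_pci

# then: update-initramfs -u -k all && reboot

After a reboot, lspci -k should show vfio-pci as the driver in use for both functions.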
I ditched the ZFS array also. I'd rather run any other virtual machines directly from the drives than from qcow2 files on the array.
Oops! We’re sorry. It looks like a problem occurred.
More fail from Fedora. Some random problem from kernel-core which I haven't bothered to investigate, since for all intents and purposes it doesn't seem to really be a problem.
Oh well, I’ll move the VM to Debian when I get a chance.
Desk Upgrade
Once upon a time I had a somewhat normal desk. It worked OK, but I needed more space. Instead of just getting another desk, or putting a larger surface on the current one, I did something a bit more complicated. I got some maple-veneered plywood, made some supports and custom edge banding for it, then stuck it all directly to the wall to make one big shelf at desk height. The idea with the desk-shelf was to give myself a lot of extra desk space and provide storage underneath it.
Don’t cable-shame me, and ignore the slab on the floor. Different project.
The desk-shelf worked out well for a number of years, but as you can see in the picture there's now a gap between some of the sections. Over time, and through the repeated process of jamming things through the cable cut-outs in the back, the desk-shelf has started to pull away from the wall slightly. It's lost some elevation and is now a bit low.
I know how you feel, desk-shelf, but I'm still replacing you with a newer model.
Wood snobbery to the rescue
I looked at buying. I didn't want to pay what people were asking, so I decided on building something desk-ish myself (again). You might think a person wouldn't have enough materials just laying around to make this work. I did, because I am a wood snob.
This also fits better with the general theme of these projects, where I'm simplifying the things I use on a daily basis to give myself more time to pursue other interests outside of technology. A simple, freestanding desk is a much easier thing to manage than a desk-shelf that occupies most of an entire wall and requires its own infrastructure for support. Does it sound like I'm rationalizing here? If I am, then this is a rationalization that saved me money. Sort of. Maybe. I guess.
There are two sections, each about four feet long. The pieces were all different lengths, so I let the edges run wild and then trimmed them down after the glueup was finished. The tool you see here is called a biscuit joiner. I'm marking the locations of my biscuits.
Not those biscuits.
Not those either.
Really?
Yes, those biscuits. The biscuit joiner cuts slots in the sides of the boards which the biscuits fit into. Adding biscuits doesn't affect the strength of the joint, but it can help a lot with alignment by keeping your pieces from slipping under clamping pressure during a glueup.
The center strip is cocobolo (I used it before Better Call Saul, btw), the inner strips are padauk, and the rest is hard maple. In total I should have about two thirds of the space I had with the desk-shelf.
It had been a while since I had done much woodworking, so I didn't even remember I had the cocobolo piece, probably worth around $100, just collecting dust in the shop. It was probably an off-cut from a bigger piece I'd used in a past project, but it was wide enough to rip down the middle and still be usable, so I used it.
Yep, wood snob.
A good bond will often mean that the wood fails under load before the glue joint fails, but I freehand routed out three channels in the bottom and added some aluminum support pieces across the joints, because why not. It’s a little ugly, but nobody is ever going to see it, and if it’s worth doing, then it’s worth overdoing. Thanks Mythbusters.
If you're going to do this, oversize the screw holes a bit to allow for wood movement. Wood expands and contracts with changes in humidity, which can cause problems down the road if you don't account for it.
This is obviously not a good bond. The thing about cocobolo and many other tropical hardwoods is that they're very oily, which can cause glue joints to fail if you're not careful. I should have rough-sanded the edges and used epoxy instead of wood glue to give the bond a better chance. It looked good initially, but after working with the piece for a while this gap opened up and traveled about three quarters of the way down the length of the piece before I added those supports. I didn't add the support pieces for this reason, but I'm glad I did, or the gap would have just kept growing until the whole thing came apart.
I fixed this by following the time-honored tradition of filling and smearing tinted epoxy all over things woodworkers want to make go away.
This is the bottom of the piece. If I hadn't taped it up beforehand, the epoxy would have leaked through and stuck itself to whatever was underneath. PSA: epoxy fights dirty.
Once the epoxy had dried I attacked it with the belt sander to remove the excess, did the final finish sanding, and applied a few coats of walnut oil as a finish.
I could have tried to color match it a little better, but with the gap being between two dark woods I figured this would be good enough. The only time I’m going to see this now is when I make a point to look for it.
The decommissioning of desk-shelf came next.
The culprit.
Next was to set things up on the new desk and do the final cabling for the monitor arm I got from Ergotron. I’m still digging it.
Don’t be like stick dude.
I attached some off-the-shelf hairpin legs to finish things off, and then piled everything on it. I need to finish the other section, and there’s a few other things to do still, but I’m calling this a win.
fukin’ LOL!
super cool desk project tho
I like the inset metal reinforcements
NAS Upgrade
I ditched TrueNAS.
It's a good system, and another thing I wouldn't discourage anyone from learning, but I'm not that interested anymore in maintaining ZFS, even with something like TrueNAS Core or Scale on top of it. I got a Synology DS1823xs+ as a replacement with the intention of using it for both storage and as a docker host, and I gotta say, the docker setup in DSM is pretty nifty. It makes standing up container apps pretty easy, especially if you've already got some experience with docker.
Hmm, I seem to be doing a lot of upgrades in the name of “downsizing”.
Setup
Setup was quick. It’s fully populated with 8TB drives, and 1TB nvme SSDs for its cache. I nearly waffled on creating a big, slower array for bulk storage, and a smaller, faster array for containers or something, but I’d rather have a little more storage since none of the things I’ll be running on the NAS are very disk intensive.
It took a minute to find the bay on the bottom that’s used to access the memory modules. I upgraded it with 32GB of ECC DDR4.
Modifications
But… this is an appliance! Well yes, and that’s why I bought it, but this is the L1T forum, and I do spend quite a lot of time here. I promise I won’t be tweaking all the things. Just a few. Here and there. Really. Not kidding.
By now lots of people have heard about the deal with DSM marking drives as "unverified". I get it; the demands of customer service on this sort of thing are real. However, if/when my NAS really does go from its healthy state to a "warning" state, I'd rather there not be things in the way of figuring out what its problem is.
I also set that script up to run on boot in case the changes get clobbered during a DSM upgrade.
One other modification I made was to make DSM give up ports 80 and 443. DSM hijacks those and redirects them to 5000 and 5001 for the default portal. I found another script for that, so multi-container apps which have their own reverse proxy can run unmodified. I might modify that script slightly to change its backup feature, but it seems to do the job as-is. It'll be another thing I run on boot, just in case.
Network shares
Setting up network shares is also something that can be done entirely from within DSM. I set up shares using SMB (because Windows), which worked fine for everything except qemu-img. I kept getting permission denied errors when running the VM backup script. It appears qemu-img is one of the applications that does not (or cannot) follow the byte range locking used in CIFS, and it requires the nobrl parameter when mounting a share.
nobrl — Do not send byte range lock requests to the server. This is necessary for certain applications that break with cifs style mandatory byte range locks (and most cifs servers do not yet support requesting advisory byte range locks).
Be aware my Google-fu points towards this being a parameter that should only be used when necessary. Sprinkling it everywhere is probably just going to cause even more trouble.
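For reference, the mount ends up looking something like this (the share name, mount point, and credentials file here are placeholders, not my actual setup):

# /etc/fstab
//nas.local/vmbackup  /mnt/vmbackup  cifs  credentials=/root/.smbcreds,vers=3.0,uid=root,gid=root,nobrl,_netdev  0  0

With that one option added, qemu-img stopped throwing permission denied errors on the share.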
This should be the final configuration of the rack for the foreseeable future. The workstation runs my VM backup script nightly to image the VM disks and transfer the images to the NAS. A few hours later the NAS is replicated to the backup target. Once the initial seeding is done, replication to the backup machine generally goes pretty quick.
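The backup script itself isn't anything fancy; stripped down it's basically a shutdown, a qemu-img convert, and a restart per VM. A sketch (the VM name, source device, and destination are placeholders, not my actual script):

#!/bin/bash
DEST=/mnt/vmbackup                        # the CIFS share from above
VM=example-vm                             # placeholder VM name
SRC=/dev/disk/by-id/ata-EXAMPLE_DRIVE_ID  # placeholder source device

virsh shutdown "$VM"
while virsh domstate "$VM" | grep -q running; do sleep 5; done

# image the raw device into a compressed qcow2 on the NAS share
qemu-img convert -O qcow2 -c "$SRC" "$DEST/${VM}-$(date +%F).qcow2"

virsh start "$VM"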
Misc workstation stuff
Once I had everything up and running, it didn't take long before I waffled on a 10G NIC for the workstation. I picked up a dual-port Intel X550 10G NIC and put a 120mm fan below it to help with airflow, just in case. I also took the opportunity to put a new heatsink on the CPU, since the fan on the old one was kind of annoying, and I pulled the two WD Blue SSDs since I won't be using them.
Can someone explain to me why in 2023 I often still have to take the entire machine apart to swap out a heatsink? I don’t see how this cutout is going to do anyone any good.
I also created another network in KVM configured for bridged networking, allowing me to connect directly to the VMs on it without having to mess with routing in and out of the NAT'd subnet of the default network.
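The network definition is just a few lines of XML pointed at an existing host bridge (br0 here is a placeholder for whatever bridge the host NIC is enslaved to):

<network>
  <name>bridged</name>
  <forward mode="bridge"/>
  <bridge name="br0"/>
</network>

Define it with virsh net-define, then virsh net-start and virsh net-autostart, and any VM attached to it gets an address straight from the LAN.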
An end to waffling?
That should be the last of the NAS and workstation hardware updates for the foreseeable future, and just in time also since by the time you’re reading this I’ll probably be back in school again.
Quick update to the above. A month or so in and I’m still digging the Synology. The daily backup setup is working well, and I’ve started ripping discs to it for eventual transcoding and streaming through jellyfin which I’ve got running in docker.
It turns out I hate the Web Station app, so to lessen the shenanigans required with DSM I ended up reverting the changes the script I posted earlier made to release ports 80 and 443. DSM has a way to add entries to its built-in nginx install from the GUI, so I created local DNS entries for jellyfin and syncthing and used this to forward connections to the docker containers on whatever local port they're listening on. This will work just as well for any other multi-container apps which I may want to test from an external machine.
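For the curious, the jellyfin container itself is nothing special. The docker run equivalent of what's set up in DSM's docker UI is roughly this (volume paths and the host port are placeholders for my actual shares):

docker run -d --name jellyfin \
  -p 8096:8096 \
  -v /volume1/docker/jellyfin/config:/config \
  -v /volume1/media:/media \
  --restart unless-stopped \
  jellyfin/jellyfin:latest

The DSM reverse proxy entry then just points the jellyfin hostname at localhost:8096.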
I have a weakness for setups with outside views. When eyes get tired, makes it easy to just take a look outside to “stretch your eyes” (since technically you’re not relaxing them when you change focus).
Something interesting that I didn't cover here was pointed out by @ThatGuyB in their blog thread: the way drives for the VMs can be specified.
The way to use a raw disk instead of a qcow2 file is to provide the device as the source.
<disk type="block" device="disk">
<driver name="qemu" type="raw" cache="none"/>
<source dev="/dev/disk/by-id/ata-INTEL_SSDSC2KB960G8_BTYF205603WR960CGN"/>
<target dev="sdb" bus="sata"/>
<boot order="1"/>
<address type="drive" controller="0" bus="0" target="0" unit="1"/>
</disk>
Notice I'm not identifying the device as /dev/sdX. The order in which these devices are enumerated during boot can change. Since I have two SATA drives in this machine, sometimes the drive containing this VM will be /dev/sda, and sometimes it'll be /dev/sdb. To get around this problem you can use any of the IDs you find in /dev/disk also. I think this was covered in one of the guides posted here, but I don't remember which one.
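A quick ls shows the stable names available:

ls -l /dev/disk/by-id/
# e.g. ata-INTEL_SSDSC2KB960G8_BTYF205603WR960CGN -> ../../sda  (illustrative)

There are also by-uuid, by-partuuid, and by-path directories under /dev/disk if by-id doesn't suit.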
The exception to the rule appears to be nvme devices. I have two VMs running on /dev/nvme1n1 and /dev/nvme2n1 in this same system, and it's never been a problem to use their device names as sources. I assume the kernel is using different rules to enumerate nvme block devices during boot compared to sata devices.
I think the nvmes are counted based on their PCIe root or IOMMU group. If they were connected to the chipset instead of straight to the CPU, I think they'd have the same issue, because it depends on which initializes first, idk. SATA drives however are all under one controller, so it's kinda random which one wakes up first during boot device enumeration.
My problem is that I'm passing through iscsi luns, which appear on the system as /dev/sdX, and the by-id enumeration is also random for iscsi because, similarly, some luns might show up earlier than usual. All my luns appear as scsi-1FREEBSD_MYDEVID000X, with X being the order the luns got enumerated on the system (sometimes my 500gb windows target is sdb, sometimes it's sda, and the X changes accordingly from 0 to 1). I was looking into how to force ctld in freebsd to present a custom device ID instead, if possible. I want to change the WWN per lun per iscsi target.
lol, I am many.
There, their, they’re. It’s ok.
One is connected to the CPU, and one to the chipset.
Possibly the chipset one wakes up later, given all the device initialization that happens on it, making it always nvme1 (assuming the direct one is nvme0). I'd be surprised if it's the other way around.
Built a 2 meter 1/4 wave ground plane antenna for a new ham who lives in an HOA.
Small enough to not bother anyone.
Anyhow, got a report back on it.
It's performing better than a commercial antenna.
She's very pleased.
Based on a design from this site.
The Threadrippering
This is a bit awkward. A number of months ago I bought the following system used:
CPU: Threadripper Pro 3975WX
Memory: 128GB ECC DDR4
Motherboard: Supermicro M12SWA-TF
Chassis: Supermicro SuperChassis 743AC-1K26B-SQ
As I would later find out, there was a reason it was priced low enough to tempt someone like me, but even after replacing the motherboard (boooo) I still came out ahead (yay). At least until I added another 128GB of memory and filled out the rest of the system.
This time there will be no waffling. Really.
Specs:
Threadripper Pro 3975WX
256GB ECC DDR4
Supermicro M12SWA-TF
Supermicro SuperChassis 743AC-1K26B-SQ
LSI 9300-16i SAS controller
LSI 9210-8i SAS controller (reused)
EVGA FTW3 GeForce RTX 3090
Nvidia Quadro RTX 4000 (Turing, because single slot)
3x 5.25" drive bay to 6x2.5 drive bay adapters
2x Intel D3 4510 240GB SSD (reused)
16x Intel D3 4510 960GB SSD
2x Intel Optane P1600X 118GB SSD
2x 2TB Solidigm P44 Pro SSD (reused)
7x 8TB mix of old mechanical hard drives (reused)
Plus fans, lots of cables, and a stupid amount of time arranging and rearranging said cables.
I gotta say, I really like the chassis. It had feet which allowed it to be used in a desktop orientation, but I took those off and stuck rails on it instead. Another nifty feature of the case is that if you decide to rackmount it, you can remove the cage housing the 5.25" drive bays and rotate it 90 degrees so all your drive bays aren't sideways.
The slot for the optical drive in this enclosure doesn't work, but I didn't find that out until everything was put together. It's quite cramped inside the case with all its drive bays populated and all the cables that go with them (did I mention the cables?). I'm not going back to tear everything apart just to fix that. USB drive to the rescue.
Let me draw your attention to where I used zip ties. Every good project has a few zip ties.
The 16i SAS controller was quite toasty when running, so I strapped a fan on it and was very pleased with the result. This made a big difference.
This is a USB over ethernet extender which I’m using for peripherals. Past experience told me this was the sort of thing to cause shenanigans, but there’s been zero issues so far. I zip tied it to the back of the rack and haven’t had to mess with it since.
The plan was for this machine to primarily be a virtual machine and container host, but before getting started on that I needed to decide exactly how I was going to make that work. I wanted a decent amount of space for virtual machines at reasonable speeds, but didn't want to pay nvme prices. There are limits to my ability to consooooome hardware.
The 4510 SSDs are basically ewaste at this point, so buying 18 of them was much more economical than buying a similar amount of nvme storage. I set up a 4x4 raidz1 pool for VM and container storage, and mirrored the optane drives as a slog. If we consider 80% to be "full", that gives me about 8.5TB of usable storage space, which is enough for the foreseeable future. The motherboard has four m.2 slots onboard, so in addition to the two optane drives I also added the two 2TB P44 Pro SSDs I had from previous builds. Those, plus the mechanical storage, got passed through to the TrueNAS Scale VM. The SSDs got mirrored into a pool I'll use for project files, while the mechanical storage will be my bulk storage pool.
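Roughly the pool layout, as a sketch (the pool name and device names are placeholders; the real thing uses /dev/disk/by-id paths):

zpool create vmpool \
  raidz1 sdb sdc sdd sde \
  raidz1 sdf sdg sdh sdi \
  raidz1 sdj sdk sdl sdm \
  raidz1 sdn sdo sdp sdq \
  log mirror nvme0n1 nvme1n1

Four 4-wide raidz1 vdevs keep a reasonable balance of space and resilver time for cheap SATA SSDs, and the mirrored optanes handle sync writes as the slog.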
This is an HDMI dummy plug. Windows won’t behave properly without it when running a VFIO gaming setup with Looking Glass. Props to gnif and the other developers on the Looking Glass project. It’s pretty nifty.
The rest of the setup was installing portainer, jellyfin, and a few other services, along with the virtual machines I use for development. It's a pretty nice machine which should do the job for quite some time.
Now it seems I have another purge to do.