Marelooke's mess

Just had a weird pfSense failure where it suddenly decided to stop NATting port 80.

Only thing that really worked to fix it was to reboot the entire machine.
Not exactly my preferred solution, as it doesn’t explain why the machine started misbehaving in the first place. Other ports kept working just fine (e.g. 443), so I’m not sure why 80 suddenly became a problem…


Finally got snmp_exporter configured so I can track the power consumption of the stuff in my rack:

There are two banks; however, Bank 2’s power consumption is below 1A, which isn’t worth tracking according to APC, so instead it’s calculated from the total minus Bank 1’s power consumption.
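In PromQL terms that derived Bank 2 reading is just a subtraction; the metric names here are hypothetical stand-ins for whatever your snmp.yml module actually generates:

```promql
# Bank 2 load, derived (hypothetical metric names):
apc_total_load_amps - apc_bank1_load_amps
```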


Ran into an annoying issue with Proxmox: by default it apparently brings up Docker before it has finished importing ZFS pools.

Ended up going with the solution proposed in this Stackoverflow post of adding a docker-wait-zfs.service.
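For comparison, a simpler variant of the same idea (a sketch, not the linked unit: it assumes the zfs-import.target that ZFS-on-Linux ships, and only orders Docker after pool import rather than actively waiting for a dataset):

```ini
# /etc/systemd/system/docker.service.d/wait-for-zfs.conf
# Don't start docker.service until the ZFS pool imports have completed
[Unit]
Requires=zfs-import.target
After=zfs-import.target
```

followed by a systemctl daemon-reload.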

Have to say that fixing a similar issue in OpenRC (Infiniband starting after NFS mounts getting mounted) was a lot simpler…


With the launch of TrueNAS Scale’s Alpha version I started digging into some of the things I had hoped a Linux-based version would give us that the FreeBSD-based version doesn’t, and which have led to me using Proxmox for my file sharing needs.

One of these is Infiniband support, specifically RDMA-over-Infiniband support: FreeNAS Core lacks the tooling for the former, and the latter is not compiled in, as far as I’m aware.

I’ve been working on an article on Infiniband and related technologies (CIFS over RDMA, and iSER, aka iSCSI over RDMA) for a while, so that will probably be finished…someday… So I won’t go into detail on configuration etc. here.

After installing TrueNAS Scale in a VM this is what I got:

truenas# lsmod|grep mlx
mlx4_ib               221184  0
ib_uverbs             163840  1 mlx4_ib
ib_core               397312  2 mlx4_ib,ib_uverbs
mlx4_core             376832  1 mlx4_ib

Well, that looks promising, let’s see if the Infiniband tooling is available in the repositories:

apt-get update
apt-get install rdma-core infiniband-diags

That worked fine, so let’s see what we have now:

truenas# modprobe ib_ipoib
truenas# lsmod|grep mlx
mlx4_ib               221184  0
ib_uverbs             163840  2 mlx4_ib,rdma_ucm
ib_core               397312  10 rdma_cm,ib_ipoib,rpcrdma,mlx4_ib,iw_cm,ib_iser,ib_umad,rdma_ucm,ib_uverbs,ib_cm
mlx4_core             376832  1 mlx4_ib

RDMA modules loaded, promising.

Let’s see if everything is actually working, though at this point I’d have been surprised if they weren’t.

truenas# iblinkinfo
CA: superbia HCA-1:
      0x0002c903004c1ba3      1    1[  ] ==( 4X          10.0 Gbps Active/  LinkUp)==>       2    3[  ] "Infiniscale-IV Mellanox Technologies" ( )
CA: avaritia mlx4_0:
      0x0002c903002826db      4    1[  ] ==( 4X          10.0 Gbps Active/  LinkUp)==>       2    2[  ] "Infiniscale-IV Mellanox Technologies" ( )
Switch: 0x0002c902004c8578 Infiniscale-IV Mellanox Technologies:
           2    1[  ] ==(                Down/ Polling)==>             [  ] "" ( )
           2    2[  ] ==( 4X          10.0 Gbps Active/  LinkUp)==>       4    1[  ] "avaritia mlx4_0" ( )
           2    3[  ] ==( 4X          10.0 Gbps Active/  LinkUp)==>       1    1[  ] "superbia HCA-1" ( )
           2    4[  ] ==( 4X          10.0 Gbps Active/  LinkUp)==>       5    1[  ] "truenas mlx4_0" ( )
           2    5[  ] ==(                Down/ Polling)==>             [  ] "" ( )
           2    6[  ] ==(                Down/ Polling)==>             [  ] "" ( )
           2    7[  ] ==(                Down/ Polling)==>             [  ] "" ( )
           2    8[  ] ==(                Down/ Polling)==>             [  ] "" ( )
CA: truenas mlx4_0:
      0xe41d2d0300dc9771      5    1[  ] ==( 4X          10.0 Gbps Active/  LinkUp)==>       2    4[  ] "Infiniscale-IV Mellanox Technologies" ( )
truenas# ibping -c 3 -G 0x0002c903002826db
Pong from avaritia.(none) (Lid 4): time 0.015 ms
Pong from avaritia.(none) (Lid 4): time 0.027 ms
Pong from avaritia.(none) (Lid 4): time 0.021 ms

--- avaritia.(none) (Lid 4) ibping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 3000 ms
rtt min/avg/max = 0.015/0.021/0.027 ms

And for kicks, and as a little sneak peek at why Infiniband might be worth a bit of the pain to set up: a gigabit Ethernet ping between the same two machines:

truenas# ping -c 3
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=64 time=0.233 ms
64 bytes from icmp_seq=2 ttl=64 time=0.150 ms
64 bytes from icmp_seq=3 ttl=64 time=0.137 ms

--- ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2044ms
rtt min/avg/max/mdev = 0.137/0.173/0.233/0.042 ms

Assuming iXsystems won’t actively remove these features, and that they keep using the default Debian Samba/NFS packages (this will require more testing), this might finally mean TrueNAS with Infiniband/RDMA support for our homelabs!


Still struggling with this on and off, as it takes a really long time to build a kernel on this machine.

I tried setting up distcc (those 64 cores in the T5120 would help quite a bit…) but it wasn’t too happy about running in a chrooted environment, so I guess that’s out for the time being.
Distcc also isn’t enabled for all ebuilds; gcc explicitly disables it, and that build took over two days last time. Not looking forward to that upgrade…

An additional issue I’m facing is that the keyboard doesn’t work in Grub2; not quite sure what’s up with that. So I gave SILO another go, which just resulted in the same “can’t find config” error as before, so back to Grub2…

As an interesting sidenote: this machine, despite being a workstation and having only two fans (system + PSU; the CPU has no fan), is pretty darn noisy (around 46 dBA, measured very unscientifically with my phone from around 30cm away). The sound also carries pretty far, as I have stuff that is objectively louder but can’t be heard halfway across the house…

Wonder how much of that is the HDD and how much is the fan. The Seagates these came with were known for being noisy (and slow), but after all this time the fan might be a little bit past its prime as well.

Made some major progress. Turns out having the driver for your chipset in the kernel helps, who’d have expected? Bit odd as I remember running through lspci in the past but things might have gotten lost in all the trial and error.

After a short kernel compile (10-ish hours…) and a reboot to fix that little issue, I heard the machine boot, but there was no video output. At this point I just wanted to be rid of the (extremely noisy) boot dvd, so I figured getting networking and sshd up and running would be a good next step.

The first hurdle was getting the machine an IP address: since the IDPROM battery is dead, the Ethernet adapter doesn’t have a (valid) MAC address (the DHCP server didn’t like seeing 00:00:00:00:00:00 :wink: ).
To be able to set the MAC address I had to emerge net-analyzer/macchanger and add the full MAC (no dropping of leading zeroes!) in /etc/conf.d/net:
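A minimal sketch of the resulting entry, assuming netifrc’s mac_ variable (substitute your real MAC, keeping every leading zero):

```shell
# /etc/conf.d/net — netifrc applies mac_<interface> when bringing the interface up
mac_enp0s12f1="<mac>"
```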


(it can also be done manually with ifconfig enp0s12f1 hw ether <mac>, which is what I’d been doing during the installation)
Another reboot, and yay! We have networking and ssh! …aaaand I forgot to set up a user to ssh in as… :wink:

That leaves figuring out what’s up with the console. I compared the startup logs of the regular boot vs the live dvd and noticed that the console switches to a framebuffer interface on the live dvd, but not on the installed system.
The appropriate driver is compiled as a kernel module, and I do have what should be the appropriate kernel parameter set in /etc/default/grub:

GRUB_CMDLINE_LINUX_DEFAULT="rootfstype=ext4 video=atyfb:[email protected]"

There must still be something missing but I’m not quite sure what at this point.


Decided to just upgrade the software on the Blade 100 before getting back to messing with the console. I already had distcc set up on the T5120, so it was just a matter of configuring it for the client.

There are just two issues.

First is that the Blade 100 is underpowered enough that it can’t leverage the full power of this fully armed and operational bat… I mean, the 64 cores of the T5120.

Initially I configured distcc strictly according to the Gentoo wiki: a job count of the total number of cores in the pool, times two, plus one (so (64 + 1) * 2 + 1 = 131), and a load limit equal to the number of cores on the client (so 1). I ended up bumping that second parameter to 2 after some testing, which at least increased utilisation of the distcc server.
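For reference, on the client that boils down to something like this (a sketch; the values follow the formula above, and FEATURES="distcc" is the standard Portage switch):

```shell
# /etc/portage/make.conf on the Blade 100 (distcc client)
FEATURES="distcc"
# -j: (64 pool cores + 1 local) * 2 + 1 = 131; -l: local load limit, bumped to 2
MAKEOPTS="-j131 -l2"
```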

But I can still compile a kernel on the T5120 while building something with distcc on the Blade 100 and the server is still like “Oooh, that tickles!”

Screenshot, just because seeing this doesn’t get old :wink:

The second issue is some of the packages I’d really really wanted to use distcc for have it forcibly disabled because it tends to cause issues, gcc being a case in point:

$ genlop -c

 Currently merging 1 out of 5

 * sys-devel/gcc-9.3.0-r1

       current merge time: 1 day, 13 hours, 37 minutes and 35 seconds.
       ETA: 20 hours, 56 minutes and 52 seconds.


Looking into the process of creating binary packages has moved up in priority on the todo list, as I have a few other ancient systems I want to breathe new life into. Though at least for those there might still be support from some binary distributions.


Prometheus and Grafana, running in Docker (using docker-compose) are how I monitor my systems.

For GNU/Linux on x86 hardware all data is scraped by Prometheus from node_exporter instances running on each machine. Unfortunately node_exporter is written in Go, which does not support the UltraSPARC architecture on GNU/Linux (it does on Solaris).

This means we need a different way to get metrics from a Sparc. Collectd does compile on Sparc and we can use Prometheus’ collectd_exporter to accept the collectd data and expose it to Prometheus for scraping.

Setting up collectd_exporter

The devs recommend running collectd_exporter on the same machine as collectd, but given that collectd_exporter is a Go project, and Docker isn’t really supported on Sparc either, that’s not an option here. We can instead run it on the Prometheus host (though it can be run anywhere, really) and have collectd send its data over the network to the exporter.

We can use docker-compose by converting the docker command line from the documentation:

version: '2'
services:
  collectd-exporter:
    # accept metrics off of luxuria
    image: prom/collectd-exporter:latest
    command:
      - '--collectd.listen-address=:25826'
    ports:
      - '9103:9103'
      - '25826:25826/udp'

Configure Prometheus to scrape collectd_exporter

Next we need to update prometheus.yml to scrape information from collectd_exporter. “luxuria” is the name of my UltraSPARC server.

# omitted for brevity
scrape_configs:
  - job_name: 'collectd-exporter'
    static_configs:
      - targets: ['<collectd_exporter_host>:9103']
        labels:
          name: 'luxuria'

After starting the collectd_exporter container and restarting Prometheus we can already validate the connection between Prometheus and the collectd_exporter by going to the Prometheus dashboard, selecting Status and then Targets. The collectd_exporter should be listed and its State should be “UP”.

Install and configure collectd

I use Gentoo on my Sparc systems due to a combination of familiarity and it offering good support for modern-ish UltraSPARC systems. As far as I’m aware the only other Linux distribution that still offers Sparc images is Debian.

To add additional plugins, the COLLECTD_PLUGINS variable needs to be set prior to installing. The easiest way to do this is by adding it to /etc/portage/make.conf like this:
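(The plugin list below is only an example selection; the network plugin is the one this setup actually requires.)

```shell
# /etc/portage/make.conf — "network" is needed to talk to collectd_exporter
COLLECTD_PLUGINS="cpu df disk interface load memory network swap syslog"
```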


Collectd can then simply be installed with an emerge collectd.

Note that we need the network plugin to connect to the collectd_exporter, so make sure it is enabled, on top of any other plugins you’d want to enable.

We are now ready to configure collectd. Uncomment the network plugin in /etc/collectd.conf and add the following section to tell collectd to send its metrics to our collectd_exporter container:

<Plugin network>
        Server "<collectd_exporter_host>" "25826"
</Plugin>

Now we can start collectd with /etc/init.d/collectd start. With collectd running we should see additional metrics when we visit the collectd_exporter scrape URL
http://collectd_exporter_host:9103/metrics; these should have names like collectd_*, for example collectd_load_1 (if the collectd “load” plugin is enabled).

If all you have are go_*, process_* and promhttp_* metrics something has gone wrong.

What tripped me up initially was that my Sparc machine’s time did not match that of the Prometheus host. Setting up ntp solved that, and the metrics started flowing.

Additional notes

According to the documentation it is possible to send data from collectd to collectd_exporter using JSON over http. I did not have much luck with this approach, but once I had something working I did not pursue this avenue further.

If you do want to give this a try you’d need to enable collectd’s write_http plugin and use a configuration like this:

<Plugin write_http>
  <Node "collectd_exporter">
        URL "http://<collectd_exporter_host>:9103/collectd-post"
        Format "JSON"
        StoreRates false
  </Node>
</Plugin>

Finally got the console up and running. Turns out having the correct driver in the kernel helps. Who’d have thunk? :wink:

I was, for some reason, focused on the ATI Rage 3, but that is the onboard video.
The card I actually want to use is the “Intergraph Corporation Sun Expert3D-Lite Graphics Accelerator”, which is a 3DLabs Wildcat chipset card. So I enabled that driver, and one “quick” (only 4-5 hours with distcc) kernel compile later I’m up and running.

Now onwards, to configuring X :slight_smile:


Create a new partition table by cloning the partition table of another disk:

  • sdb, formatted as ext3, as the source partition
  • md1, formatted as ext4, as the target

rsync over all data, aaaand:

rsync: write failed on "/mnt/root-new/var/lib/ntopng/0/rrd/192/168/130/13/bytes.rrd": No space left on device (28)
rsync error: error in file IO (code 11) at receiver.c(374) [receiver=3.1.3]

I expected ext4 to be less efficient, but not by this much:

# pydf
/dev/md1         291G  291G     0 100.0 [###################################################################] /mnt/root-new
/dev/sdb3        292G  242G   35G  83.0 [########################################################...........] /mnt/root-orig

Hmm, let’s try that again with ext3 on md1 and see what happens. sdb contains an old version of ext3 as well, so there might still be some issues there. Only one way to find out…

Turns out I was tripping over Docker’s devicemapper folder, which rsync doesn’t quite seem to like very much.

Excluding it now, let’s see how that goes…

So the reason for the rsync shenanigans was that I had noticed that the old server I am migrating stuff off of hadn’t been running SMART tests in a while, due to a mistake I made when merging configuration files: I had merged a DEVICESCAN statement into the smartd configuration, resulting in my manual settings further down no longer getting used. Oops… :frowning:
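To illustrate the failure mode (the DEVICESCAN line is a typical distro default, and the explicit entry below it is a hypothetical example): with DEVICESCAN present, smartd never reads the lines that follow it:

```text
# /etc/smartd.conf
# DEVICESCAN makes smartd ignore every directive below it,
# so this explicit test schedule was silently dropped:
DEVICESCAN -d removable -n standby -m root
/dev/sda -a -o on -S on -s (S/../.././02|L/../../6/03)
```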

So after fixing that I kicked off a manual long selftest, which got stuck at 90%. Apparently that’s a known issue with these drives. And, in case you’re wondering about the age of that ticket: the warranty on this drive expired in 2012 (I put the serial into Seagate’s website for laughs… :wink: )

Model Family:     Seagate Barracuda 7200.10
Device Model:     ST3320620AS
Serial Number:    
Firmware Version: 3.AAC
User Capacity:    320,072,933,376 bytes [320 GB]
Sector Size:      512 bytes logical/physical
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ATA/ATAPI-7 (minor revision not indicated)
Local Time is:    Wed Dec  2 22:45:49 2020 CET
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
...snipped stuff...
SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
  1 Raw_Read_Error_Rate     0x000f   117   088   006    Pre-fail  Always       -       134917032
  3 Spin_Up_Time            0x0003   096   095   000    Pre-fail  Always       -       0
  4 Start_Stop_Count        0x0032   099   099   020    Old_age   Always       -       1422
  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       1
  7 Seek_Error_Rate         0x000f   078   060   030    Pre-fail  Always       -       59414292
  9 Power_On_Hours          0x0032   011   011   000    Old_age   Always       -       78048
 10 Spin_Retry_Count        0x0013   100   100   097    Pre-fail  Always       -       0
 12 Power_Cycle_Count       0x0032   099   099   020    Old_age   Always       -       1375
187 Reported_Uncorrect      0x0032   100   100   000    Old_age   Always       -       0
189 High_Fly_Writes         0x003a   100   100   000    Old_age   Always       -       0
190 Airflow_Temperature_Cel 0x0022   068   049   045    Old_age   Always       -       32 (Min/Max 26/32)
194 Temperature_Celsius     0x0022   032   051   000    Old_age   Always       -       32 (0 13 0 0 0)
195 Hardware_ECC_Recovered  0x001a   062   053   000    Old_age   Always       -       204144318
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       0
200 Multi_Zone_Error_Rate   0x0000   100   253   000    Old_age   Offline      -       0
202 Data_Address_Mark_Errs  0x0032   100   253   000    Old_age   Always       -       0

SMART Error Log Version: 1
No Errors Logged

The one reallocated sector was fairly recent, and combined with the selftest issue I figured it might be prudent to do something about this drive: it’s been a lone system drive for a long time now, and while I do have backups I’d rather not have to deal with them at some inopportune time.

So instead of a sensible solution (where’s the fun in that?) I decided to just plop in another drive and see if I could migrate the entire thing to RAID1 in place.

Now, after reading the Arch Wiki on the subject “in place” seemed a bit of a misnomer, since it still requires a bunch of downtime and eventually rewriting the entire disk, but well…

Migrating an existing system to RAID1


To avoid things writing to the disk while migrating bring everything down and move the drive to a secondary system. In my case this made the original disk sdb and the new disk sdc.

First things first: dd a backup of the /boot and / partitions to an NFS share:
sudo dd if=/dev/sda2 | gzip > /mnt/backup/backup-superbia-root-20201202.gz
This should have been pretty fast, because Infiniband, but this is a dual socket Harpertown room heater, so the CPU (doing the gzip) was the bottleneck.

In hindsight I should probably have sent the data uncompressed and done the compression on the remote end (or maybe just let ZFS deal with it? Not sure how effective that would be), then I would have been able to pump the data over as fast as the disk could deliver it.
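A sketch of what I mean (hostname and paths are made up): ship the raw stream and compress on the receiving side, demonstrated here with a scratch file standing in for the partition:

```shell
# The idea: dd reads at disk speed, the receiver burns the CPU on gzip.
# With a real disk this would look something like:
#   dd if=/dev/sda2 bs=1M | ssh backuphost 'gzip > /mnt/backup/root.gz'
# Demonstrated locally with a scratch file instead of /dev/sda2:
src=$(mktemp)
dst=$(mktemp)
printf 'pretend this is a partition image' > "$src"
dd if="$src" bs=1M 2>/dev/null | gzip -1 > "$dst"
# Verify the round trip is lossless
gzip -dc "$dst" | cmp -s - "$src" && echo "round-trip OK"
```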


Make sure mdadm is available in the initramfs; in Gentoo this can be done by setting

MDADM="yes"

in /etc/genkernel.conf before (re)building the kernel.

Prepare the new disk

The original layout of the “source” disk:

sdb     298.1G
├─sdb1      1K
├─sdb2  243.1M ext2
├─sdb3  296.9G ext3
└─sdb5  972.7M swap

Clone the partition table of the original disk to the new disk, as per the wiki (with the device names adjusted, since here the original disk is sdb and the new one sdc):

sfdisk -d /dev/sdb > raidinfo-partitions.sdb
sfdisk /dev/sdc < raidinfo-partitions.sdb

Next, with the “new” disk for the RAID1 array inserted, create the mdadm mirrors in degraded state for each partition (including swap):

mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdc2
mdadm --create /dev/md1 --level=1 --raid-devices=2 missing /dev/sdc3
mdadm --create /dev/md2 --level=1 --raid-devices=2 missing /dev/sdc5

Create the filesystems on the new disk. Given that we need to copy all the data later this is a good time to change filesystems. I used this opportunity to upgrade from ext3 to ext4:

mkfs.ext2 /dev/md0
mkfs.ext4 /dev/md1
mkswap /dev/md2

Resulting in this layout:

sdb     298.1G
├─sdb1      1K
├─sdb2  243.1M ext2              /mnt/boot-orig
├─sdb3  296.9G ext3              /mnt/root-orig
└─sdb5  972.7M swap
sdc     465.8G
├─sdc1      1K
├─sdc2  243.1M linux_raid_member
│ └─md0 242.1M ext2              /mnt/boot-new
├─sdc3  296.9G linux_raid_member
│ └─md1 296.8G ext4              /mnt/root-new
└─sdc5  972.7M linux_raid_member
  └─md2 971.6M swap

After mounting old and new partitions per above, time to copy the data:

rsync -aAXHv --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"} /mnt/boot-orig/ /mnt/boot-new/
rsync -aAXHv --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"} /mnt/root-orig/ /mnt/root-new/

Unfortunately that didn’t quite work:

rsync: write failed on "/mnt/root-new/var/lib/ntopng/0/rrd/192/168/130/13/bytes.rrd": No space left on device (28)
rsync error: error in file IO (code 11) at receiver.c(374) [receiver=3.1.3]

# pydf
/dev/md1         291G  291G     0 100.0 [###################################################################] /mnt/root-new
/dev/sdb3        292G  242G   35G  83.0 [########################################################...........] /mnt/root-orig

Turns out rsync doesn’t like devicemapper, so exclude it:

rsync -aAXHv --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found","/var/lib/docker/devicemapper/"} /mnt/root-orig/ /mnt/root-new/

Edit /etc/default/grub using the blkid of our root partition as real_root:

GRUB_CMDLINE_LINUX_DEFAULT="domdadm real_root=UUID=ce3dd9dd-672c-4980-afc6-5e1dbe475845 rootfstype=ext4"

/etc/fstab on the new array needs to be updated to point to our new raid devices:

# cat /mnt/root-new/etc/fstab
UUID="cb8c01bb-627d-4d8f-9c20-436badab3086"             /boot           ext2            noauto,noatime  1 2
UUID="ce3dd9dd-672c-4980-afc6-5e1dbe475845"             /               ext4            noatime         0 1
UUID="bc3e90c8-b19a-4b59-9a3a-3c3a8043287f"             none            swap            sw              0 0

Next chroot into the new system to configure mdadm and install Grub 2.

Since I had moved the drives to a system running an entirely different distro, chrooting didn’t work there, so I had to resort to using a livecd for the next steps, after moving both disks back to their original system.

mount --bind /sys /mnt/root-new/sys
mount --bind /proc /mnt/root-new/proc
mount --bind /dev /mnt/root-new/dev
chroot /mnt/root-new/ /bin/bash

Configure mdadm

Insert the active raid configuration into mdadm.conf

# mdadm --detail --scan >> /etc/mdadm.conf

Install Grub 2

Make sure to install grub on both drives’ MBRs so the system can actually boot should a drive fail.

grub-mkconfig -o /boot/grub/grub.cfg
grub-install --verbose /dev/sda
grub-install --verbose /dev/sdb

Also, don’t be a dummy like me and use the old grub.conf filename from Grub 0.9x, and then wonder why things don’t work…

Boot into the new raid array

If the drives are still in a second system, move them back to their original system now.

Change the boot order in the BIOS to boot from the second disk, containing the new raid array.

Confirm the new environment booted by checking in the output of mount that the partitions are mounted on the new raid devices.

Prepare the original disk

First copy the partition table from sdb back to sda:

sfdisk -d /dev/sdb | sfdisk /dev/sda

and then, after double-checking that the partition tables match, add the new sda partitions to our array:

mdadm /dev/md127 -a /dev/sda5
mdadm /dev/md125 -a /dev/sda3
mdadm /dev/md126 -a /dev/sda2

mdadm should now start rebuilding the array:

# cat /proc/mdstat
Personalities : [raid1]
md125 : active raid1 sda3[2] sdb3[1]
      311191488 blocks super 1.2 [2/1] [_U]
      [>....................]  recovery =  0.3% (1001664/311191488) finish=77.4min speed=66777K/sec
      bitmap: 3/3 pages [12KB], 65536KB chunk

md126 : active raid1 sda2[2] sdb2[1]
      247936 blocks super 1.2 [2/1] [_U]

md127 : active raid1 sda5[2] sdb5[1]
      994944 blocks super 1.2 [2/2] [UU]

unused devices: <none>

Once the rebuild is finished, reboot and change the boot device order back to boot off of sda, then verify the system boots successfully from sda, confirming that booting from either disk works.


With Cyberpunk now out I’ve started looking for a new PC (current i7 930 with a GTX970 is getting a bit, errr, old…), or at least picking parts so I can get them when available.

The one thing I’m sort of stuck on is a case. Not really finding anything I like. I absolutely don’t care to see the guts of my PC (it tends to be under my desk and to my left anyway), so not wanting any plexi or glass already narrows the field. Then there’s airflow. And at that point there’s not all that much left, it would seem (at least locally in the EU).

So I started considering following in Linus’ footsteps and just putting the entire thing in a rack (still got space). Which is where the rub is (didn’t check the cost of Linus’ solution yet, so that’d be another possible stumbling block :wink: ): I would have to pull that cable downstairs somehow, and that’s not quite happening right now. I have plans to build a cable channel so I can pull extra cabling, but, well, they’re plans, and they require tearing down a few rooms, so… yeah, not soon :wink:

So I started looking and found out about HDBaseT, which enables sending HDMI over Cat5e+ cables. But, as far as I can see, that specification doesn’t seem to have gone much of anywhere in the consumer market… As a result the solutions are, well, expensive, and most appear limited to 60Hz, which is a bit of a downer.

Not sure if anyone is aware of anything else out there that works over CAT?

Worst case I can still go for a short depth (don’t need drive bays in the front) rackmount chassis and try to make it quiet. Then I can still move it to the rack once I can sort out the cabling.

Or, probably more sensibly, I’ll re-use the rather old Antec chassis I still have (that, unfortunately, got quite a bit of abuse when I loaned it to someone) until I find a case I like… (On which note. Hey Silverstone, any updates on that new Raven?)

I have a plink mounted under my desk. Works great, but the short depth really does limit the GPU options. Mine is ~11" deep because of the table leg.


Been digging a bit (especially seeing what I can get on this side of the pond; if I can dodge import costs that’s always a nice bonus), and I stumbled on this SilverStone chassis, which is apparently rather new.

It does look rather promising, since it doesn’t reserve room for drive cages where the expansion slots are, so it has pretty much the same length as my current case (500mm / ~19.7 inches), where most other chassis are either longer or compromise on expansion card length.

Now to pick parts, measure everything properly and then dig up how you mounted your PC under your desk :wink:

Fwiw, this case also looked promising, design-wise. But it’s pretty expensive (well, compared to desktop cases, anyway) and I would have to import it.


Yeah, that case looks great. Plink isn’t amazing quality, but the size options and modularity are good and the price is OK.

My go to rack mount company

Yeah, that’s the main reason.

Quick update on TrueNAS Scale Alpha 2.

  • the upgrade appears to have wiped “custom” installed packages. I hope that’s just an Alpha version thing. By custom I mean packages pulled from the available repositories, but not part of the base install.
  • the Infiniband packages are still available, and, after reinstalling them, the interfaces still show up in the UI

Haven’t really played with them beyond checking that they do link up to the fabric, so we’ll have to see whether the UI can handle the connected-mode Infiniband MTU (65520, well outside even Jumbo range).

Based on FreeNAS, I’d expect that to be the case. I’d open a feature request for the infiniband packages on the iX Jira (be sure to check if one already exists).


There is a feature request that’s been open since early this year based off of this thread, which I have, of course, already voted for :wink:

Maybe I should indeed make a forum thread on the TrueNAS Scale forums to try and garner some extra support…

Though for me, personally, as long as it’s only the packages, and not the settings, that go bye-bye on an upgrade, I can easily deal with that. Something to test come Alpha 3, in case iXsystems doesn’t get to it for 1.0 (there’s also the risk that they don’t want to have extra functionality in Scale that’s missing from Core…) :slight_smile:
