TrueNAS Scale (beta) voyage

Apologies for perhaps the wrong category…

However - story time…

Long, long time ago, an order was placed for a fiber connection… Sadly, the Swedish ISP is not interested in delivering. With some old spare parts for WAN connections, the SD-WAN works fine-ish with ADSL and one 4G link.

What to do in the meantime? Well, two as-close-as-possible ‘servers’ with TrueNAS Scale. It’s prep time for when the connection can actually hold packets - and since we want some fun in the meantime, why not test the TrueNAS Scale beta?

An old half rack (media rack) is standing in as host -

Two old rack cases (guessing some of you might be younger than the cases…)
containing the following funky parts:

AMD Ryzen 3 3100 (with tiny first-gen AMD stock coolers :smiley: )
32 GB RAM (sadly one kit at 3000 MHz and one at 3200 MHz, but close enough - no ECC, though)
oddball motherboards (ASUS and ASRock, I think B350 and B450 - does not matter :wink: )
1x NVMe ~500 GB as ‘system drive’ (one old Intel drive, and one ‘newer’ Kingston)
4x ST1000NM0011 (1 TB 7200 RPM drives - got a deal on an old Dell SAN and am stealing its disks :wink: )
1 Gbit NICs (the onboard ones, sadly…)

Why would I use such feeble hardware, given Wendell’s “latest” TrueNAS video?
Because I can - as per usual - and, while I await the speed of the gods to the interwebs, this will do.

When this goes stable, changes will be made disk-wise to increase the pool sizes, and the purpose will become clearer… (thank you, ZFS). 10 Gbit cards will be added and some dusting will be done. A rough sketch of that kind of in-place growth is below.
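For reference, a minimal sketch of how ZFS lets a pool grow in place - assuming a pool named tank and made-up device names (TrueNAS normally drives this from the UI):

```bash
# Let the pool grow on its own once every disk in the vdev is bigger
zpool set autoexpand=on tank

# Swap the 1 TB disks for bigger ones, one at a time
zpool replace tank /dev/sdb /dev/sdf
zpool status tank   # watch the resilver finish before swapping the next disk
```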

Back to order… In short, the scale-out portion is something I want to explore - hence, this will hopefully be a thread I keep updating.

Now, so far, TrueNAS is not that interesting - it’s not that different. But I’m keeping track of the fun parts Wendell pulls out of his hat - my hardware limits are quite a bit lower… Still, I want to follow along with the discovery of RDMA; I want to be dumb and just run with it. Hey, instead of Cockpit, Webmin!

So, where do we start? With the current release:

[Screenshot from 2021-12-08 23-02-52]

Setup currently holds one pool (per server):
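(In TrueNAS the pool is clicked together in the UI, but a hand-rolled equivalent of one four-disk RAID-Z pool would look roughly like this - pool and device names are made up:)

```bash
# One raidz1 vdev across the four 1 TB Seagates
zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd /dev/sde
zpool status tank
```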


Update 2021-12-09

So, after checking all the disks with a plethora of S.M.A.R.T. checks and keeping track of temps over 24-ish hours at idle - it looks promising.
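For anyone following along, the checks were along these lines - smartctl from smartmontools, with a made-up device name:

```bash
# Overall health verdict plus the full attribute table
smartctl -H -A /dev/sda

# Kick off a long self-test, then read the log once it's done
smartctl -t long /dev/sda
smartctl -l selftest /dev/sda
```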

First up, TrueCommand, as I might continue the journey of “expanding the empire”. There’s a ‘registration’ portion on the TrueNAS homepage; register and you get the pull commands for the Docker container. It’s free to use for up to 50 drives - I’m in the safe zone for a while, it seems :D.
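The pull command itself boils down to a one-liner (the tag is my assumption):

```bash
docker pull ixsystems/truecommand:latest
```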

Checking the TrueCommand documentation (or here), there’s a reasonable statement from it (the first link, the PDF) below:

This means that one of my ASRock A300s, which already runs a few containers, will have to pull duty.

TrueCommand can be found on GitHub with sufficient instructions - which means, of course, that the registration was really not needed… :facepalm: Well, iXsystems already had one of my emails; now they have two… yay.

The version I pulled down (well, ‘version’ - they seem to have forgotten to give the four-month-old commit an actual version, but I’m guessing 2.x; edit: checked - at the time of writing it’s 2.0.2).

However… I keep ending up with the “oh no, not another compose-free example container” feeling…
So, in short, we need to set a directory for persistent storage and expose a port - easy enough…
A proper compose file is for another day (it goes onto the todo-list). For now, we simply do a:

```
docker run --detach -v "[hostdirectory]:/data" -p [portnumber]:80 -p [sslportnumber]:443 ixsystems/truecommand
```

And - quick as a rabbit, it’s up!
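And for the todo-list item: a compose sketch that mirrors the run command above might look like this - the host path and ports are my picks, not gospel:

```yaml
# docker-compose.yml - hypothetical host path and ports
version: "3"
services:
  truecommand:
    image: ixsystems/truecommand:latest
    container_name: truecommand
    restart: unless-stopped
    ports:
      - "9004:80"    # web UI, http
      - "9005:443"   # web UI, https
    volumes:
      - /srv/truecommand/data:/data   # persistent storage
```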

The default is admin:admin - or so I thought. In one place it mentions admin:admin as the default, but the screenshot states a different scenario… Well, either way - log in and change it. Now to the fun part of adding the two servers!

Head over to the dashboard and simply click “new system”:
[Screenshot from 2021-12-09 18-56-45]

And we’re presented with a form for adding the information about the system:

Give it a few seconds, and it’s populated. Now, I would advise using API keys - I simply (since it will all be redone later) punched in the root passwords. Victory - we have them communicating!

Update 2021-12-13

Happy it’s not Friday the 13th… I ran into a PEBKAC: there’s a minimum node count for Gluster with this (and in general). So, just two nodes? Nah. Hence, say welcome to the third sandbox (now I’m out of disks :S ).
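For context: a replicated Gluster volume wants at least three peers (or two plus an arbiter) to keep quorum and avoid split-brain. Done by hand it would be roughly the below - hostnames and brick paths are made up, and TrueCommand is supposed to do this for you:

```bash
# From one node, pull the other two into the trusted pool
gluster peer probe nas2
gluster peer probe nas3

# Three-way replica, one brick per node
gluster volume create tank-vol replica 3 \
  nas1:/mnt/tank/brick nas2:/mnt/tank/brick nas3:/mnt/tank/brick
gluster volume start tank-vol
```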

In short, an old 3770K filled to the brim with stuff will have to play nice for the time being (another machine will replace it later on ;).

[Screenshot from 2021-12-13 20-35-00]

Now to the fun times. Back to TrueCommand… oh, no… wait. The brick creation will have to wait; all things are not happy at the moment. Will investigate (same with the local logging - if you want the UI to c*ck up, please do try to read it out :stuck_out_tongue: ).

Well, Gluster refuses - pulling down the nightly build of TrueCommand; let’s see if it wants to play instead of timing out…


Update 2021-12-14

So… in short, I moved to yesterday’s nightly build - same failed Gluster brick creation. Moved to today’s nightly, and what do you know - the TrueCommand GUI stated an error, but then success! But nothing showed. So I thought: OK, no biggie, still borked. But it was not… I now sit with several bricks in different configurations, not visible in any GUI :wink:, and they seem to be in a sad, broken state - which is blocking the SMB services from starting and operating… So it’s aware enough, but not smart enough, at this point in time.
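To poke at the invisible bricks from a shell on the nodes, the stock Gluster commands still work (nothing TrueCommand-specific):

```bash
gluster peer status     # are all three nodes still talking?
gluster volume info     # which volumes/bricks exist at all
gluster volume status   # which bricks are actually online
```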

Also, the 3770K fell over - guessing the drives locked up. The logging is not super: it states the drives are offline, but then they are healthy and online… a bit of a huh-moment there. I’m now prepping the host that will also hold the databases and 64 GB RAM as a replacement, as was intended from the start ;).
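To sanity-check the “offline but actually healthy” confusion outside the GUI, something like this from a shell does the trick:

```bash
zpool status -x                                # 'all pools are healthy', or the gory details
dmesg | grep -iE 'ata|sd[a-z]' | tail -n 20    # recent kernel chatter about the disks
```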

Or wait…

Forgot to log in, dude…

It seems that the broken Gluster bricks are causing mayhem… Not a great sign… since TrueCommand still does not spill the beans about their existence.
Thinking about it again, though, I think the deviation from the two previous machines - installing to an old 120 GB SATA SSD instead of an M.2 SSD - might be the culprit. I’ll re-install and redo the effort; the replacement will have to wait…


Update 2022-01-05

So… I’ve now restarted it all - or rather, thrown out the temporary disks after a total of three died.
This means it’s now upgraded to 4 TB WD Reds in RAID-Z. Also, the third machine was not feeling good; one of the SSDs threw it all into chaos, so this is now waiting for the third machine (which will be a bit special) to be built. However, I’ve run out of 4 TB disks and have to go back to filling it with 1 TB disks, so block checks are currently running slowly but surely (since I forgot to note which disks were borked…). Side note: the nightly build (as of 2022-01-04) was nicer than previous versions. However, it does have an annoying issue: when the logon has timed out, it tosses out API errors before reverting back to the logon screen :P.
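Assuming “block checks” means plain badblocks runs, it’s the destructive write-mode test - so only on disks with nothing left to lose, and the device name below is made up:

```bash
# Four-pattern destructive write test; -s shows progress, -v is verbose
badblocks -wsv /dev/sdb
```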


Update 2022-01-15

On a bit of a break - getting new power distribution to the house as well, so everything needs to be powered down for a bit…

More to come - send me challenges or pointers or comments - everything is welcome in the spirit of levelonetechs :).
