ThatGuyB's rants

I kind of felt like ranting on this anyway. The RockPro64 official case has some issues, with both the included SATA power and SATA data cables putting a lot of strain on the ports on the SSDs. Maybe it’s just the cheap plastic that Crucial uses on the MX500, but the tolerances in the case are very tight.

I chose to grab new cables and do my own runs. I got SATA power ones that are pinched on with metal teeth (kind of like punch-down RJ45 sockets) and covered with a cap.

The included power converters from the 12v to the 5v rail on the cables themselves are only officially sanctioned to power 2 drives at a time. The official case has 4 drive slots, 2x 3.5" and 2x 2.5", yet with the included cable you are only supposed to power 2 drives. Maybe you could split each lead between an HDD and an SSD and it might be fine, but I decided not to trust it; Pine64 doesn’t have the best track record with power stuff, as some people on the forum can attest.

It might work with 4 SSDs, but I got 2 IronWolf 10TB spinning-rust drives, along with 2 MX500s. So I went with a 19v brick that needs to be stepped down to 12v to power the RockPro64 and the drives. The drives also need 5v, so I went with another of the same adjustable step-down converters and will connect the 2 in parallel from the 19v brick, giving me 2x 12v wires and 2x 5v wires (hot and ground for each).

And by removing the heat-producing elements from the case, like the 5v converters that Pine64 provides, the components inside should run a bit cooler. Although I will have an 80mm Noctua fan on the case to pull air out.

But because of my design choices, I will need to be careful with how I connect things, label each input and output voltage, and make some shrouds for the step-down converters, as they have exposed electronics. :skull: And in my shroud, I need to leave some space for the heatsinks to dissipate heat to the surroundings, as the MOSFETs need some serious cooling.

I don’t mind that I have to go through all this, but I wish I had known about it before I hopped on this endeavor. Had I known, I would have probably bought a 3x 5.25" to 4x 3.5" hotswap bay from Chieftec, IcyDock or StarTech and made a support on the side for the board. Might have used less space like that and had better wire management options. I’m no master designer and I wouldn’t be able to come up with anything metal, but I’m a fan of transparent cases and seeing tidy cables.

1 Like

After a long break, I finally built my RockPro64 NAS. The delay was a combination of lacking tools, electronics wiring, waiting for parts, going away for a while, and not wanting to fry my components.

I will be posting the update, hopefully with the pictures I took during the build, in the ARMLand thread. That official case needs a serious guide somewhere, because the component sizes / fittings are unforgiving. Get the wrong component, and you have to wait to buy another one.

3 Likes

Woot woot, I’ll be looking out for that or include me in the tag when you post. I’ve finally been able to get on a little more with the hectic holiday schedule.

1.) Hope you got your pass thru working, I had a heck of a time…but once I figured it out it’s been much easier…well for me on Proxmox at least.
2.) Hope all’s going well in your neck of the woods and you’re settling in, i.e. able to tinker some more and relax a bit.

3 Likes

I mean, it’s been working on windows fine for months now. But on linux, I can’t figure it out.

Somewhat, I can’t wait to finish the setup on the rkpr64.

1 Like

There is nothing more permanent than a temporary rapid test prototype.

2 Likes

I just finished setting up iSCSI targets in FreeBSD and iSCSI initiator in linux, using authentication. It’s not difficult at all, I’m actually surprised how easy it was. Why didn’t I use this before?

FreeBSD part

zfs create -V 8G tank/test-target
echo ctld_enable=\"YES\" >> /etc/rc.conf
  • edit /etc/ctl.conf
auth-group ag0 {
  chap user1 password1
  chap user2 password2
# in freebsd, password needs to be at least 12 characters long, at most 16
}

auth-group ag1 {
  chap user1 password1_new
  chap user2 password2_new
# not the same password as ag0
}

portal-group pg0 {
  discovery-auth-group ag0
  listen 192.168.4.7
# only listening on a single interface, you may want to add more interfaces, or just use 0.0.0.0
}

target iqn.20221228.192.168.4.10:test-target {
  auth-group ag1
  portal-group pg0
  lun 0 {
    path /dev/zvol/tank/test-target
  }
}
  • change permissions of the file and start the service
chmod 600 /etc/ctl.conf
service ctld start
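A side note on naming: ctld evidently accepts fairly free-form target names (the one above works), but the standard iqn shape from RFC 3720 is iqn.YYYY-MM.reversed.domain[:identifier]. A quick self-contained format check (the example names here are made up):

```shell
# RFC 3720 iqn-type name: iqn.<year>-<month>.<reversed domain>[:<identifier>]
valid_iqn() {
    printf '%s\n' "$1" | grep -Eq '^iqn\.[0-9]{4}-[0-9]{2}\.[a-z0-9][a-z0-9.-]*(:.+)?$'
}
valid_iqn "iqn.2022-12.com.example:test-target" && echo "standard"
valid_iqn "iqn.20221228.192.168.4.10:test-target" || echo "nonstandard"
```

Not required for anything to work, but sticking to the standard shape helps if another initiator ever validates names.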

Linux part

  • install open-iscsi (on rhel, it’s “iscsi-initiator-utils”)
  • edit /etc/iscsi/iscsid.conf
node.session.auth.authmethod = CHAP
node.session.auth.username = user1
node.session.auth.password = password1_new
discovery.sendtargets.auth.authmethod = CHAP
discovery.sendtargets.auth.username = user1
discovery.sendtargets.auth.password = password1
  • edit /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.20221228.192.168.4.10:test-target
  • enable the iscsid service, or create your own service file in case it doesn’t exist, like so:
mkdir /etc/sv/iscsid
cd /etc/sv/iscsid
cat > run <<EOF
#!/bin/sh
[ -r conf ] && . ./conf
exec iscsid -f
EOF


ln -s /run/runit/supervise.iscsid supervise
ls -lh /etc/sv/iscsid/

# -rwxr-xr-x 1 root root   49 Dec 28 00:21 run
# lrwxrwxrwx 1 root root   27 Dec 28 00:22 supervise -> /run/runit/supervise.iscsid
  • start the process
sv start iscsid
# or 
# systemctl enable --now iscsid
# with systemd, you're on your own, lol, although I would be surprised if any distro that has open-iscsi in its repo doesn't ship with a systemd unit file
  • discover the targets
doas iscsiadm --mode discovery --type sendtargets --portal 192.168.4.7 --discover --login
  • happy formatting
ls -lh /dev/sd*
fdisk -l
  • to unmount iscsi
doas iscsiadm --mode node --portal 192.168.190.20 --logout
1 Like

I have an iscsi zvol target that I formatted as ext4 on the initiator. This way, I get the benefits of zfs without needing to run the DKMS module on other linux boxes, meaning no need for fsck, as zfs can do the scrub. It would technically be better to use a non-journaled fs, because journaling brings no benefit when you’re running on top of zfs, which does everything you need, at least AFAIK; you’d just get a bit more performance (probably negligible anyway, unless you do some really heavy stuff).

It appears that I can do zfs snapshots, modify the fs, unmount it, revert the snapshot, then mount the fs back and the changes on it will be gone. Bonus points for this approach: if a hypothetical ransomware hits me in one of my containers, or the entire host, I can revert the snapshots just fine after I remove the ransomware, without needing to restore from backups (that doesn’t invalidate backups though, please do backups, people).

Interestingly, if I do it in another order, like take a snapshot, modify stuff, revert the snapshot, modify some more stuff, unmount the fs and mount it back, all the changes are still there, even when using the -r and -f options in zfs. The fs must either be unmounted before the snapshot is reverted, or at least left unmodified afterwards; so only the mount, change, revert, unmount, mount order and the mount, change, unmount, revert, mount order work. The odds of the fs not getting modified after the revert are low, unless I stop the container, so it’s safer to just unmount, then revert.
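The safe order boils down to: quiesce the ext4 fs before the rollback. A command sketch with hypothetical mountpoint and device names (the mounts happen on the Linux initiator, the rollback on the FreeBSD box serving the zvol):

```shell
# On the Linux initiator: quiesce first, so nothing writes to the fs
umount /mnt/iscsi-ext4                         # hypothetical mountpoint

# On the FreeBSD target: revert the zvol backing the LUN
zfs rollback -r tank/test-target@known-good    # -r also discards newer snapshots

# Back on the initiator: remount; changes made after the snapshot are gone
mount /dev/sdb /mnt/iscsi-ext4                 # hypothetical device node
```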

2 Likes

Inspired by Chain’s migration to wireplumber, I took the jump too.

My first attempt was a failure: audio would not load at startup. Little did I know my system was a mess. I either installed pulseaudio again at some point, or just disabled it and forgot to uninstall it, but when I stopped all the pipewire and wireplumber services, I found out pulse was still running on my system. Special thanks to mpv for showing the audio output.

This time, the switch was relatively straightforward, it just needed a bunch of reboots. All I had to do was:

  • uninstall pulseaudio
  • comment out the current context.exec for pipewire-media-session in /etc/pipewire/pipewire.conf
  • add the following lines:
context.exec = [
    { path = "/usr/bin/wireplumber" args = "" }
    { path = "/usr/bin/pipewire" args = "-c pipewire-pulse.conf" }
]
  • make sure the services for pipewire, pipewire-pulse and wireplumber are enabled

And that was it. I have nothing in ~/.config/autostart, nor in /etc/xdg/autostart (related to wireplumber, that is - I have some lxqt remnants). It works as expected (I only have hdmi out on my pc, and if I plug headphones in, it’s through my monitor’s 3.5mm output). I’m keeping pipewire-pulse because I like pulsemixer (I could probably use amixer and replace sway’s command shortcuts with amixer, but meh).

The reason for my jump was a message in the package manager mentioning that pipewire-media-session will eventually be removed.

I have been slowly decommissioning parts of my infrastructure. My European Proxmox server is still up, but I basically removed everything I had on it before and now I only have it running a yt-dlp container (for when I can’t download from here for slow connection reasons) and a squid proxy container (for the same reason). One runs Fedora, one runs Alpine. I would’ve obviously run Void, if Proxmox wasn’t broken with containers outside of its templates.

I stopped my Pi 3 from here and powered it off. It really wasn’t doing anything useful. My RockPro64 2GB replaced it a while back as my main VPN router. Planning to still keep the Pi 3 as an on-the-go VPN router, as now I have pretty extreme portability for a “desktop” (well, it’s the same portable monitor, with an odroid h3+, but I have it on a laptop stand wired up real nice, making it a unified single unit, kinda like a laptop).

My old portable HDD has been unused for a while, since my RkPr64 4GB took its place as my main NAS. The Pi 2 that was running it has yet to be entirely replaced, it’s still running TFTP, but now NFS is handled by the Rock. Oh, the pi 2 is also a terminal desktop, used when powering on the TR system (ssh into it and launch stuff, since the TR has no display output).

My Pi 4 has been collecting dust, although I want to run Android on it at some point; I just never got around to setting it up so I can remote into it away from home.

Haven’t been touching my TR much either. When I bought it, I was hoping to make it an on-demand workhorse. It served me well, but I’m thinking of selling it. Let me know if anyone is interested in a TR 1950x with 64 GB of ECC memory, with a Noctua cooler. Maybe I can bundle it with the Antec P101 Silent, with or without the RX 6400, neither of which has seen much use, other than holding the components for the former (initially I was planning on a bonkers NAS hypervisor build, but I went low-powered ARM and I’m happy). It’s still my infrequent windows PC, but every time I power on the VM, I have to windows update (tells you how often I start this thing up).

And I haven’t tested the HC4 in a long time. It’s probably my favorite little arm box, but I could not get it to work with ZFS. I doubt HC4 support has improved since.


While most of my infrastructure (if you can even call it that) has stagnated or been retired, I’ve been playing with some stuff here and there, like installing zentyal in LXC on Proxmox (helping others, I have no use for it personally) or switching void to s6 in a container. But at this point, I’m unsure what, if anything, would make me want to tinker. Maybe buying a VPS and running with it, since I’ll know I spent money on it? But then again, after 3 months of not even using it, I’d probably cancel my subscription.

I’m still interested in bringing small, self-manageable infrastructure to the masses, but as things are going, I’ll probably still be on hiatus on that for a long time.

2 Likes

I’ve been working in FreeCAD again, designing a new loft bed, since I didn’t trust the old design - it was too sketchy. This one has a similar design, but I’ve added a middle support beam and went with a plywood top. With new wood from the store, it would be a bit on the expensive side, but nowhere near what loft beds cost (especially queen sizes).

The problem with the new design is that it will absolutely need diagonal pocket holes, instead of just going straight with the screws. I was trying to avoid that and make the design as simple as possible and keep the gadget requirements to a minimum (like not requiring a pocket hole jig). The cost will add up a little, but I think it will be worth it for durability / longevity.

The general design avoids using screws as support points anywhere. Screws can bend and break under a lot of weight. And while the weight should be pretty well balanced among the screws, just making use of the wood’s thickness should do a better job than any screw. Thick bolts could be used to simplify the design, but those tend to be more expensive. I’m trying to use as little metal as possible. And even with that, it might still require around 40 or so screws anyway (meaning a lot of holes to drill).

3 Likes

As long as you don’t need the compute, there is no sense in forcing yourself to artificially create problems to solve. You have been living with the small SBCs successfully for quite some time, and as long as you can continue to, that’s perfectly fine.

But to take up your last point, software or hardware projects that use SBCs are always interesting. PiHole, PiVPN and PiKVM are good examples of how they can make things more accessible, so I would say if you have a good idea in this regard, go for it, make it a project.

1 Like

I view my SBCs as small, very low-power demanding servers. They can run basically any server software that x86 can. I lived on a RPi as my main PC for 2 or 3 years (I think) and it was fine too, but a bit sluggish compared to a Pentium (Atom-based) quad-core.

ARM CPUs have been in use in things like routers and home NASes for years now, but in recent years they have started to get good and cheap enough to run more than just SMB. Now you can run LXC/D or Docker (or docker inside the former) and have anything that you would have been running on old single-core Pentiums, like a mail server, a chat server, a web server, plus the modern things like git and whatnot.

The most demanding things that I wish to set up on ARM SBCs are Jitsi and Asterisk PBX (internal SIP). Just never got around to setting them up.

One other thing I’m struggling with is that I want to run my own DNS, the previous one being my pfSense box. I’ve run bind9 before; I don’t want to run overly complicated software, but I also want the features it comes with. I’ve been thinking I might just run Pi-Hole with its dnsmasq backend, just so that I get things done.

Ideally I would be running openbsd with its dns and other goodies. But the only SBC I own that could run it would be the RockPro64 (and I never managed to get it to work and I would probably be struggling with the wifi driver anyway). Besides, I want my services somewhat containerized (which is why I run LXD).

And the dilemma I faced was that services are dependent upon one another. I want a CA server (probably EJBCA), but for that I need DNS, and for that I need a properly configured storage backend, which needs (to a lesser degree) DNS, and so on. I could probably configure dnsmasq and a hosts entry on my router and then invoke everything from there, but I want some amount of redundancy (3 DNS servers, two of which would run with keepalived). I still haven’t looked into what my service dependency graph will look like.

The problem I’m facing is that the odroid n2+ will run basically nothing on local storage and will need to connect to my rkpr64 NAS via iSCSI, but unless I use IP addresses or hosts entries, neither of which I like, I’ll have an infinite loop of inter-dependency: I need DNS to connect to the NAS, but I need the NAS to start the DNS. I’ve actually been faced with a similar issue of interdependency in the real world with a second router-firewall, the one behind the DMZ to the internal network (inherited this flawed design from a previous sysadmin and fixed it by adding a hosts entry).

In all fairness, it’s not like I’m blaming other things like lack of time, or overwork at my current job. The only one to blame is myself for procrastinating (not that I beat myself up very much, lmao). It’s not like I don’t hold myself accountable for stuff I do, but I’m in a weird loop of doing other, less important things. But I’m working on improving myself: I’m trying to do a 5-to-20-minute workout at least once or twice a week (depending on how sore my body feels). I’m also working on other projects to improve my life (CAD skills and woodwork being some of them, with the first short-term result, a loft bed, once completed, being immediate gratification from the space saving I’ll get in my room).

But I do wonder sometimes how and what I was doing 5 to 8 years ago, when I managed to go to work, go to college, learn stuff, watch tech youtube videos all the time, work on my homelab and spend more time on the forum. In all fairness, the only lifestyle changes I’ve made are that I cook more often, instead of eating what I get from takeout canteens, and I try to keep my room clean more often. I only cook once or twice a week and only clean weekly, so it’s not a deep timesink, IDK. Well, other than that, I am working in a unix systems support job instead of being a sysadmin, so my brain is partly fried from having to juggle infrastructures I don’t own or manage; after a few hours of working on one, I have to forget everything about it, because there’s the next one in line.

I was thinking of changing jobs, but I really don’t want to go through applying to 60+ places in 2 months and only get 2 calls. Especially since most companies I see are afraid to hire more staff because they’re afraid of a recession. They’re basically living the end-times with a different cover “the time will come, just you wait and be prepared.”

Kinda BS living like next month will be the apocalypse: not wanting to hire people and overworking your current employees, just because you’re afraid to lose money by hiring someone short term. I feel like there are huge opportunity costs associated with not hiring, because of the additional strain you put on your current resources (although in the short term hiring is even more strain, because you have to train people and lose time you’d spend working on stuff, but in one or two months it evens out really fast if you have competent people - which in my case didn’t happen).

Am I stressed about it, or stressed in general? Not really, I find myself in a rather relaxed state. I just wish I would be spending more time doing stuff than being a bum (again, more like a hyperbole than the actual meaning of the word).

2 Likes

Have been in that position as well. I decided to create a single PiHole instance with custom domain entries pointing to my services. It allows me to use SSL (via LetsEncrypt/ZeroSSL) inside the network. I plan to integrate a failover as well, but have been postponing this task since setting it up. The uptime is not critical for me; the only important thing is to do regular backups so that I do not need to remember everything I configured.

I am guilty of that as well, I simply don’t have the same energy I had as a teenager :woman_shrugging:.

Has it become that bad, then? Where are you based, if I might ask?

It’s called human resources for a reason. You are a resource and they might exploit you. It’s the sad state of the world and the reason I might join a union.

As long as the job is not too bad and you are content, you might sit it out. See it positively: it gives you time to work out the things you are not happy with. I have no idea how long the bad times will last, but I am sure there will be better times!

2 Likes

I’m in the US. That was about 1.5 years ago and I was looking for strictly WFH jobs, while some companies were apparently doing “remote” hiring without actually specifying it’s hybrid-remote (coming into the office every now and then).

I’d probably have better luck nowadays, in all fairness, if I manage to find a company that actually only does remote work. Given how many companies have downsized to smaller offices, it would probably be easier to find more honest places. But there are still large companies that signed 10-year contracts to rent an office, and they will try their damndest to force people to go to the office.

I don’t mind going to the office if it’s close by though, but I’d rather not relocate.

This is only applicable to places that need abstraction layers between owners and workers. In smaller companies, I would say up to 100 employees, where everyone pretty much knows everyone, you aren’t considered a resource (at least not in such objectifying terms). But getting good pay from a small company, unless it’s a really hot investment company, is unlikely.

IMO we don’t live in such bad times, it’s just that people’s perception is really skewed from watching too much TV. I can understand how some people aren’t as fortunate to have at least 3 months of savings to live on; their situation is probably more stressful, working paycheck to paycheck, but I doubt that is the majority of people, or even a large minority. Most people have no reason to feel distressed.

2 Likes

If anyone uses Fedora and wants to use a non-bash shell as their main login shell and doesn’t like weird errors showing on their screen, you have to edit the file /etc/profile.d/lang.sh. I never figured out where it was invoked from, as neither .profile nor my rc file has any /etc invocations; I think it’s something hard-coded upon login. If anyone knows, pass me the info.

I had to change this bashism line:

if /usr/bin/grep --quiet -E -i -e '^.+\.utf-?8$' <<< "${LANG}"; then

with

if /usr/bin/echo "${LANG}" | /usr/bin/grep --quiet -E -i -e '^.+\.utf-?8$' ; then
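The bashism here is the <<< herestring, which plain sh doesn’t have. The portable pattern is to branch on grep’s exit status directly: --quiet suppresses all output, so wrapping the pipeline in [ $(...) ] would test an empty string and never succeed. A self-contained sketch of the idea:

```shell
# POSIX sh check for a UTF-8 locale: branch on grep's exit status.
# grep --quiet prints nothing, so capturing its output can never work.
is_utf8_locale() {
    printf '%s' "$1" | grep --quiet -E -i -e '^.+\.utf-?8$'
}
is_utf8_locale "en_US.UTF-8" && echo "utf-8"
is_utf8_locale "C"           || echo "not utf-8"
```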

I wouldn’t be using Fedora if Proxmox worked in LXC with my distro of choice, but that’s another thing that I should probably compromise more on: just use whatever distro works, instead of trying to make everything work on a single distro, especially if no updates are supplied automatically. But I will need new mechanisms to detect updates for each distro used (well, I wouldn’t be going much further than 5 distros anyway).

1 Like

LOL you said you want it less complicated and I say hey… I want it more complicated and bought a whole book on Bind 9 lol

Right now I have Pi-hole with Unbound installed as a recursive resolver upstream. If I got that terminology correct.

1 Like

The ideal would be simpler, but in this scenario, simpler doesn’t have the features I’m looking for. I might switch from bind to nsd, if I find out it has all the bells and whistles. I will also configure unbound, since that was the plan from the start.

But currently I’m stuck on bind forwarding to another server; it doesn’t seem to want to resolve anything other than the local zones.

1 Like

I don’t know what I was doing wrong, but it seems like forwarding works in bind9 now. I tried the same settings multiple times, but it only worked after setting dnssec-validation to no, then back to auto. I was getting broken-chain-of-trust messages.

Kinda crazy how long a DNS query can take sometimes. Typically it’s just 200ms, but sometimes I get this:

time ping level1techs.com -c 1
PING level1techs.com (172.67.73.46) 56(84) bytes of data.
64 bytes from 172.67.73.46 (172.67.73.46): icmp_seq=1 ttl=56 time=201 ms

--- level1techs.com ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 201.485/201.485/201.485/0.000 ms

real    0m2.238s
user    0m0.002s
sys     0m0.004s

The ping itself took ~200ms, meaning about 2 seconds went to the DNS query and response. Repeating the test leaves only ~0.007s for DNS, which is what you should be expecting.

time ping level1techs.com -c 1
PING level1techs.com (172.67.73.46) 56(84) bytes of data.
64 bytes from 172.67.73.46 (172.67.73.46): icmp_seq=1 ttl=56 time=255 ms

--- level1techs.com ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 254.551/254.551/254.551/0.000 ms

real    0m0.262s
user    0m0.002s
sys     0m0.004s
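The DNS share of each run is just the wall-clock time minus the ICMP round trip; plugging in the numbers from the two runs above:

```shell
# DNS overhead (s) = total wall time (s) minus the ping round trip (ms)
dns_overhead() {
    awk -v real="$1" -v rtt_ms="$2" 'BEGIN { printf "%.3f\n", real - rtt_ms / 1000 }'
}
dns_overhead 2.238 201   # first run: ~2 seconds spent resolving
dns_overhead 0.262 255   # repeat run: ~7 milliseconds, answered from cache
```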

Just look at this! Random all over the place!

time getent ahostsv4 level1techs.com
real    0m0.011s
user    0m0.011s
sys     0m0.000s

time getent ahostsv4 craigslist.org
real    0m0.946s
user    0m0.007s
sys     0m0.005s

time getent ahostsv4 youtube.com
real    0m0.554s
user    0m0.005s
sys     0m0.006s

I don’t blame bind, just my connection, but this can contribute to a slow internet experience. I remember when our main DNS failed and everything felt slow and we couldn’t figure out why. All hosts were querying ns1, timing out because of no response, then querying ns2. Apparently one of the scripts we were using to reload the zones broke and left the config unworkable (I don’t remember what line we had to remove, but it was something minor - if only we had used named-checkconf before applying the changes…).

Anyway, I got a working config inside a DNS container; now I need to apply it on my router. I’ll probably end up making one or two more DNS containers anyway, so I can have some redundancy (and use keepalived to fail over automatically, preventing the slowness I mentioned earlier when the main DNS dies).
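For the keepalived part, a minimal sketch of what the VRRP side could look like, with made-up interface names and addresses (two DNS containers sharing a virtual IP that clients point their resolvers at):

```
# /etc/keepalived/keepalived.conf on the primary DNS container (sketch)
vrrp_instance DNS_VIP {
    state MASTER              # the standby node uses "state BACKUP"
    interface eth0
    virtual_router_id 53
    priority 150              # standby node gets a lower priority, e.g. 100
    advert_int 1
    virtual_ipaddress {
        192.168.4.53/24       # clients resolve against this VIP
    }
}
```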

1 Like

I was trying to learn more about iSCSI and how I could implement it in my infrastructure. My general idea was that I could use iSCSI as just a block device and pass it to a VM, but seems like Proxmox is incapable of that. Or maybe I’m just a brainlet (nothing new, lol) and I cannot figure out how.

I was hoping maybe there was a way to tell a VM “use this device for your OS and you manage it” and whenever I wanted to live migrate a VM, tell the hypervisor “pause the VM, transfer data from RAM to another host, detach iSCSI” and on the other host “attach iSCSI, resume VM activity with the RAM contents you received.”

This is in fact how I was doing it, but with NFS. I had an NFS share which I configured at the datacenter level in Proxmox; all VMs had qcow2 disks, and when I live migrated, I didn’t have to copy the disk contents to another location (like you would if you ran LVM/-Thin or local ZFS), just the RAM contents - the disk was already there, mounted in the same directory, waiting for the VM to be resumed.

I was really hoping I could have something similar with iSCSI, because it would make management a bit easier. With iSCSI targets on a zvol, I can just snapshot the ZFS volume for each VM and send the data over to another pool or another NAS. But with NFS, I would have to either snapshot the entire share containing all the qcow2 (or raw disks maybe, since I wouldn’t be using qcow2 for anything because ZFS takes care of snapping), or create a NFS share for each VM which would be nuts.

I wouldn’t really blame proxmox for this; in all fairness, I’m struggling to do this with LXD, which is the reason I tried it on Proxmox. The idea was the same: have an iSCSI target mounted on the host and have each container use its own target. It is technically doable, but in LXD, the concept of pools is what boggles my mind. I have to create a pool for each container, which is dumb. And then, for each pool, I have to use a dir, which makes things overly complicated.

NFS, while having its downsides, is still the easiest and most straightforward method of sharing VM / container storage with other hosts and easily live migrating them. From the perspective of storage resources, it makes way more sense than having 2 different VMs / containers and using things like keepalived or corosync + pacemaker. The only thing coming close to it is diskless-booting VMs with read-only access to the rootfs and basically their own /var and /etc locations. You still waste some space though, unlike just moving a VM over, but it depends on the scenario and requirements; sometimes having another VM always up makes more sense than doing HA (especially if you already have another way to load-balance them and not just have an inactive standby VM).

Back to iSCSI, I found this thread on proxmox forums.

Multiple ways work:
- adding a disk in PVE (host), create LVM on top and use for multiple VMs
- using iSCSI to have the backing device for on VM
- adding iSCSI inside of the VM, completely bypassing PVE

The broken English is a bit hard to grasp; I can only guess that the second option was meant to be exactly what I am trying to do. But I found no way of doing that. When I add iSCSI in datacenter storage, if I check “use LUN directly,” then Proxmox treats the target as its own storage, basically being able to format it with LVM and use it for VMs.

If I don’t check that, I’m still unable to add it to VMs, neither when creating the VMs, nor when they are already created.

I wonder if Ceph has a better handler and if it can do what I’m thinking of. Well, if I were to run Ceph anyway, I’d probably set up the hosts with local storage on them and take a shot at a hyperconverged infrastructure. I think Ceph RBD might be that?


For now, it seems like if I want to be able to take snapshots for each VM or container separately, I need to use NFS. For LXD, I need to have a pool for each container, which makes things a bit annoying. Well, now that I think about it, technically if I used iSCSI, I would’ve had to create the target, then add the initiatornames in the host conf. Not that different than having to create a new zfs mount-point, set nfs share value on it, then add the entry on the lxd host in fstab or something, then create a new pool using the dir driver.

For iSCSI, I’d also have to format the disk and mount it - unless… I think I got an idea… kinda hacky, but doable. In either case, if I create an iscsi target and format the disk on the host and mount it, or if I just mount a nfs share, I can do it without having to create multiple pools in lxd. I just need to create the container, stop it, move stuff to another directory, mount either the iscsi or nfs into the target, then move the data from the container to the freshly mounted point. Ugh, I don’t like this, but it’s what I might end up doing anyway.
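The hack described above could look roughly like this in LXD, with hypothetical container names, device nodes and paths (assuming a single dir-driver pool; the exact storage-pool path varies by install, and the iSCSI disk is assumed already formatted):

```shell
lxc init images:voidlinux/current web1          # create the container, don't start it
lxc stop web1 2>/dev/null || true               # make sure it is not running

ROOTFS=/var/lib/lxd/storage-pools/default/containers/web1
mv "$ROOTFS" "$ROOTFS".orig                     # stash the freshly created rootfs
mkdir "$ROOTFS"
mount /dev/sdb "$ROOTFS"                        # or: mount -t nfs nas:/export/web1 "$ROOTFS"
cp -a "$ROOTFS".orig/. "$ROOTFS"/               # move the data onto the new backing store
rm -rf "$ROOTFS".orig

lxc start web1
```

Ugly, as noted, but it keeps everything inside one pool instead of one pool per container.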

I probably want to experiment with this in VMs before I go live with it. Thankfully, making a VM template and cloning via zfs should be easy. Now’s one time where I wish I had a low-power hypervisor, like a NUC, another odroid h3/+ or a zimaboard. I wonder how qemu/kvm works on aarch64 nowadays, since the lxd --vm option still doesn’t properly work on aarch64. Or maybe I can get away with lxd VMs on my odroid h3+ itself; I have the RAM and probably the CPU for it anyway, just need to install qemu… I was always curious how lxd VMs compare to firecracker in terms of resource consumption. Maybe I’ll take some time to install opennebula in a VM and run firecracker on it, just to see how it goes.

1 Like

Seems like that’s also the case on x86_64, at least on Void. Despite me installing qemu and qemu-user-static, neither of them allows lxd VMs to run.

Instance type "virtual-machine" is not supported on this server: QEMU command not available for CPU architecture

My websearch-fu is not strong tonight; I can’t find the reason this doesn’t work.

1 Like