Token's lvl1 blog- edit -- Token's rantings

marelooke’s mess! I’m the mess not you lolz

I had gotten a really nice Noctua NH-U12S for it (from when I was intending to make a NAS out of this via a Fractal Node 304), but it won’t fit in this current mini-ITX case. Actually, every low-profile cooler I looked at wouldn’t work either. So I took the OEM cooler off my daily-driver desktop (i7-4790), put on the Noctua, and put the OEM cooler on this hypervisor’s i7-4790K.

I don’t see the OEM cooler being a problem; the hypervisor isn’t going to be pegged, and it ran fine on my daily driver here, which I’ve done some light gaming on.

100% this, which is why I’m shocked Proxmox only saw one of the two NICs and is glitchy AF. It will boot, run for a little while, then drop the SSH session and the webUI (port 8006) session, but on the screen I have plugged into it, it’s still running and responsive to commands.

So then I did a vanilla Debian install, and its only gripe was the wifi card’s non-free firmware. But after following the guide to add the Proxmox repos and install Proxmox on top of it: same issues.

Between a rock and a hard place here. If I were better at Linux, I think Proxmox would be the way; there would be a way to get the NICs working right, the Nvidia GPU seen and able to be passed through, and then even the wifi card for a VM. But I’m not above intro level at *nix. I’m floored how well xcp-ng has taken to the hardware, in that it doesn’t sh*t the bed when being interacted with, but yeah, it doesn’t see the Nvidia or wifi card.
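One thing that helped me reason about the missing-NIC problem: check whether the kernel itself even registered both NICs and bound a driver, before blaming the hypervisor layer. A minimal sketch (interface names will differ on your box; it only needs /sys, no extra tools):

```shell
# List every network interface the kernel registered, plus the driver
# bound to it. Virtual interfaces (lo, bridges) report driver=none.
for dev in /sys/class/net/*; do
  name=${dev##*/}
  if [ -e "$dev/device/driver" ]; then
    driver=$(basename "$(readlink -f "$dev/device/driver")")
  else
    driver=none   # no backing PCI/USB device behind this interface
  fi
  echo "$name driver=$driver"
done
```

If only one NIC shows up here, it’s a kernel/driver problem; if both show up but Proxmox only lists one, it’s a Proxmox config issue.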


So for some reason I had put 0.0.0.0 in my DNS settings?

Snort refused to run. Looking at some logs so that I could do better than “snort no work, why?” in the googles, this was my issue:
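For anyone hitting the same wall, here’s a sketch of a quick check for that kind of bogus entry (the function name is made up; the file path is passed in so you can point it anywhere):

```shell
# check_dns: flag unusable 0.0.0.0 nameserver entries in a
# resolv.conf-style file.
check_dns() {
  if grep -q '^nameserver[[:space:]]*0\.0\.0\.0' "$1"; then
    echo "bad DNS entry: 0.0.0.0 is not a reachable resolver"
  else
    echo "ok"
  fi
}

check_dns /etc/resolv.conf
```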

Interestingly this didn’t affect suricata.


I hit the power switch on a surge protector by accident…



What are people’s pros vs cons of having Home Assistant on docker vs VM on Synology?

I went with a VM, as I’ve found Docker difficult to learn, my port space is getting crowded, and I didn’t want to mess with macvlan.

With a VM, yeah, it’s more resources, but I get the full-fat HA build, and IMO it’s easier to update (I still haven’t figured out how to update Docker containers), snapshot/backup, migrate, etc.
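For what it’s worth, the usual Compose-era container update loop is short; here’s a sketch (the stack directory and helper name are made up, and DRY_RUN=1 just prints the commands instead of running them):

```shell
# update_stack: the usual three-step compose update for one stack directory:
# pull newer images, recreate containers, prune the old image layers.
update_stack() {
  stack_dir=$1
  for cmd in "docker compose pull" "docker compose up -d" "docker image prune -f"; do
    if [ "${DRY_RUN:-0}" = "1" ]; then
      echo "(cd $stack_dir && $cmd)"
    else
      (cd "$stack_dir" && $cmd)
    fi
  done
}

DRY_RUN=1 update_stack /srv/homeassistant
```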

I’ve found HA to be messy. It’s nice to be able to snapshot before updates, restore, or just blow it away, build new, and import configs, but I’m curious how others run it here.

Oh also, it seemed much easier (a drop-down menu) to pass through the Z-wave USB stick via the Synology VM GUI than all the hoops and tricks one does for Docker.
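For comparison, the Docker side of that USB passthrough is a compose stanza like this; a sketch, assuming the zwave-js-ui image and that the stick shows up as /dev/ttyACM0 (your device path may differ):

```yaml
services:
  zwavejs:
    image: zwavejs/zwave-js-ui        # assumption: zwave-js-ui driving the stick
    devices:
      - /dev/ttyACM0:/dev/ttyACM0     # the Z-wave USB dongle, host path may differ
    ports:
      - "8091:8091"                   # its web UI
```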


Woot!

So I have Home Assistant installed as a VM; it has its own IP address, and I can interface it with the second NIC later.

Started migrating Z-wave devices over from the malfunctioning Piper. That’s always a chore, but I’m getting there. I have two automations set up now for lighting as well.
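For the curious, an HA lighting automation is only a few lines of YAML; a sketch with made-up entity IDs, not my exact config:

```yaml
automation:
  - alias: "Porch light on at sunset"
    trigger:
      - platform: sun
        event: sunset
    action:
      - service: light.turn_on
        target:
          entity_id: light.porch   # made-up entity for illustration
```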

Next is to find some good tutorials on ‘home alarm’ automations and dashboards, then I need to set up SSL, a domain, port forwarding, etc. so I can start to get push notifications.

Tempted to start a free Tailscale and just be VPN’d in all the time, but I don’t think push notifications work that way.


I run HassOS in a VM as well, for pretty much the same reasons. Considered switching to Docker given how they were hammering Cloudflare DNS, but they mostly sorted that mess out so not touching what ain’t broken :wink:


Well, that makes me feel much more reassured about my reasoning (VM vs. Docker).

For the “well actually…” types: I know there are Docker gurus that can spin it up, get the USB devices (Z-wave, Zigbee dongles, etc.) passed through, use macvlan for a separate IP, iptables to segment the traffic if more than one NIC is available, self-host git for configurations (I listen to the HA podcast and they said it’s JSON now, but I still see YAML, I dunno), and use whatever Docker and git magic is out there to do things similar to snapshots…

and going homelab is to force myself to get better with these things…

but for prod I stuck with what I know how to use to do the above, which IMO is much easier as a VM (with GUI controls).


Wow. So I’ve been living with a really healthy dose of delay with the Piper as the Z-wave controller.

  • open app, wait a few seconds for auth, cloud sync etc
  • click on device to turn on/off
  • wait a few more seconds, device does the thing.

Now with local control, the Aeon Smart Energy Switch G2 toggles before my finger finishes the click of the mouse/tap of the screen. Just amazing.

The Aeon Smart Dimmer Switch is laggy though, which is interesting. I also do not have dimmer controls, just on/off- but then in automations I do have dimmer controls…

With one of the Aeon door switches, I had to import it twice for it to show up correctly.

HA has come a long, long, looooong way, but it’s still super frustrating to do just about anything. IMO it’s still not normie-approved; you still have to be a LVL1’er type to make use of it.


I got some of these:

Connecting them to HA has been the easiest of the Z-wave devices. So excited that I can now use this data to automate things and push notifications if I’m out- self-hosted, as I’ve read the Nest stuff is somewhat of a hot mess.

As I’m installing, this comes out:

Dig this dude’s thoroughness.


Sweet:

Begone with you, 3rd-party hubs required to update sensors.


It’s not a home assistant update if it doesn’t break something.

My Synology integrations stopped working haha.

Oh home assistant, some things just don’t change…

I’ve not got enough justification to use Home Assistant properly quite yet.

Hoping I’ll give it a go once I’m all settled in with the move.

At your skill level I think you will really enjoy it.

I’m already having to debate restoring from backup, as this update seems to break the Synology integration; a simple restart of the service, checking if the creds got dropped, etc., is not fixing it.

Home Assistant is far from a turn-key product with #justworks attributes. But I think you will wrangle it no problem and make it do cool things.

Pretty neat: I don’t even need to SSH in, navigate for logs, and use grep. I just installed the log viewer add-on and am already getting good data:
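For contrast, the manual route it replaces was roughly this; a sketch, with a made-up helper name, and the log path is an assumption:

```shell
# filter_log <logfile> <pattern>: last 20 lines matching the pattern,
# case-insensitively -- i.e. what I used to do over SSH by hand.
filter_log() {
  grep -i "$2" "$1" | tail -n 20
}

filter_log /config/home-assistant.log synology
```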

I’m suspecting the Synology integration is built on Python X while the HA update requires Python Y.

Token rant time: It’s not that I know what I’m looking at here, I totally don’t, but when I’ve debugged multiple issues on other software/apps… man… it’s usually Python rearing its ugly head.

I never got on the Python bandwagon and have grown to hate its update frequency and process; it’s a constant churn of breaking apps and integrations that are not maintained by full-time devs.


This was the fix: delete and re-add the integration.

Pretty lame; seems like something good development could avoid. But then again, it’s just amazing how many people contribute to the project and make this stuff pretty turn-key.

I wasn’t too hot on integrating Synology into HA, but there are two things I like with it so far:

  • Surveillance Station integration- I get camera previews with ease
  • temp and update status- you know what I’m using the BTUs from the NAS for, and why I like to keep tabs on the temps.



Alrighty, so this is an inner battle:

A big reason (for me) for home labbing and buying overkill gear was learning and self-hosting- being self-sufficient and cutting out as many other parties as possible.

A better me would have gone/pushed even further: know how to, and actually, review the code of the open-source stuff I use, actually check the checksums of software, use data logging to try to hunt down supply-chain attacks, network attacks, etc.

But I’m lusting towards Tailscale.

It’s 3rd party and IMO harder to monitor for shenanigans, but it’s SUPER appealing to do a Tailscale router build/VM on my LAN, install the client on phone and laptop, and be set.

I have things like Home Assistant and other services that would require that I do the dance I’ve grown to hate doing. Something like (maybe not the right order, but spitballin’): router >> port forward >> LAN IP of thing >> set up info with domain provider >> some kind of Let’s Encrypt SSL setup >> fail >> bang head on wall >> try HAProxy >> failing continues >> kill liver some >> blitz lots of things, it works, and I’m not sure what part of the blitz made it work >> walk on eggshells to not break anything, etc… And then have the paranoia of yet another port opened on the firewall.

As an alternative, I have an OpenVPN setup, but I don’t like having it on all the time on the phone and tunneling absolutely everything through it.

Enter Tailscale. I like the idea of apps on my phone such as NAS SMB access, WAN-blocked Reolink, Home Assistant, etc. #justworking as if I’m on my LAN (access + notifications) while I’m out, but with other traffic routing normally as if the phone is not on a VPN.

My issue is it’s a 3rd party, and it’s free for just a few boxes… and well, nothing is “free”: the whole ‘if it’s free then you are the product’ thing.

But dang, its tempting.

Is anyone self-hosting something like Tailscale or ZeroTier where their cloud service is not needed? You’d be pulling a kind of inception of hosting it yourself: heck, I have a domain, pfSense, and boxes for VMs; it would be just one more port to open up, but it would then serve lots of clients in a hybrid-VPN type of way…

@oO.o I just noticed I couldn’t access my UniFi webUI that runs in Docker on my Synology- I have the container’s network setting as “host”.

I had to make a Synology firewall entry to allow the webUI ports- makes sense.

But then it highlighted: how the heck am I accessing all of these other Docker container ports that do not have firewall entries? Those containers are set to bridge.

Is this normal? Seems like a flaw to me. I guess this is like port forwarding when set to bridge? As I’ve found, port-forwarded traffic on my pfSense does not get checked by the LAN interface IDS.


More detailed discussion here.

It’s why I separated my firewall from everything else, Docker pulling shit like that (it will add these to the end of your iptables rules, overriding any DENY or other rules that came before them, too).


I have also read how you have to use iptables to segment the use of NICs, otherwise it’s a party. Yeah, I’m not ready for Docker haha; it feels like it’s easier to use VMs and just take the efficiency hit.

I’ve never run containers on a Synology, but it sounds like they’re trying to make it easy and then it doesn’t quite work out every time. Idk. I’ve also never used standalone Docker for anything serious.

On a basic level, you definitely need some NAT/firewall plumbing to get a private host container network out onto the LAN. It sounds like that is automated to varying degrees for different applications.
