Phaselockedloopable- PLL's continued exploration of networking, self-hosting and decoupling from big tech

I don't use Debian-based distros, so no. I created my own scripts.

Yeah, in that case it sounds like you got a bad or mislabeled unit. Go ahead and return it. No harm no foul. Order it from the site with coreboot and whatever else you need.

Saving a buck isn't worth it if you get a mislabeled unit. Which happens A TON on Amazon.


I wrote a SaltStack SLS file to automate pulling the dnf-automatic or yum-cron packages (my code works for both RHEL 7 and 8) and configuring a systemd timer to install security updates automatically for me.

Then about once a month or so I reboot when convenient.
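For context, the security-only behavior comes from dnf-automatic's config file. A minimal sketch of the relevant settings (illustrative values, not the author's actual file):

```ini
# automatic.conf — illustrative sketch, not the author's actual file
[commands]
upgrade_type = security    # only apply updates from security advisories
download_updates = yes
apply_updates = yes        # install them, don't just download

[emitters]
emit_via = stdio           # output lands in the systemd journal
```

Switching upgrade_type to default would pull all updates instead of just security ones.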

Here is a snippet of my stuff

.
├── pillar
│   ├── common
│   │   ├── packages.sls
│   │   └── repositories.sls
│   └── top.sls
└── states
    ├── automatic-updates
    │   ├── dnf-automatic.sls
    │   ├── files
    │   │   ├── automatic.conf
    │   │   ├── dnf-automatic-install.timer
    │   │   └── yum-cron.conf
    │   ├── init.sls
    │   └── yum-cron.sls
    └── top.sls

saltstack/states/automatic-updates/dnf-automatic.sls

# For RedHat family, version 8

# Manage the conf file
/etc/dnf/automatic.conf:
  file.managed:
    - source: salt://{{ slspath }}/files/automatic.conf
    - user: root
    - group: root
    - mode: 0644
    - require:
      - pkg: dnf-automatic

# Manage the systemd timer
/usr/lib/systemd/system/dnf-automatic-install.timer:
  file.managed:
    - source: salt://{{ slspath }}/files/dnf-automatic-install.timer
    - user: root
    - group: root
    - mode: 0644
    - require:
      - pkg: dnf-automatic
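The timer unit that state manages isn't shown; the stock dnf-automatic-install.timer shipped with the package looks roughly like this (the schedule below is illustrative):

```ini
# files/dnf-automatic-install.timer — rough sketch of the shipped unit
[Unit]
Description=dnf-automatic-install timer

[Timer]
OnCalendar=*-*-* 06:00
RandomizedDelaySec=60m     # spread the start time to avoid mirror pile-ups
Persistent=true            # catch up if the box was off at the scheduled time

[Install]
WantedBy=timers.target
```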

saltstack/states/automatic-updates/init.sls

{% if grains['os_family'] == 'RedHat' %}
  {% if grains['osmajorrelease'] == 7 %}
    {% set package_name = 'yum-cron' %}
    {% set service_name = 'yum-cron.service' %}
    {% set state_file = "automatic-updates." + package_name %}
  {% else %}
    {% set package_name = 'dnf-automatic' %}
    {% set service_name = 'dnf-automatic-install.timer' %}
    {% set state_file = "automatic-updates." + package_name %}
  {% endif %}
{% endif %}
include:
    - {{state_file}}

automatic-updates:
  pkg.installed:
    - name: {{package_name}}
  service.running:
    - enable: True
    - name: {{service_name}}
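For RHEL 7, init.sls includes automatic-updates.yum-cron, which isn't shown above. It would mirror the dnf state; a sketch assuming the same files/ layout:

```yaml
# saltstack/states/automatic-updates/yum-cron.sls — sketch, mirroring the dnf state
# For RedHat family, version 7

# Manage the conf file
/etc/yum/yum-cron.conf:
  file.managed:
    - source: salt://{{ slspath }}/files/yum-cron.conf
    - user: root
    - group: root
    - mode: 0644
    - require:
      - pkg: yum-cron
```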

Then in my top state file it's included like so:

  'G@os_family:RedHat':
    - automatic-updates

What this effectively does is automatically pull the packages, configure everything, and end up installing just security updates on its own. It could easily be configured to install all updates instead, but I wouldn't do something so brash. :wink:

And if I ever feel like updating it for the Debian family, I could. Though I only run one Debian server, and that's for my UniFi controller, so I'm indifferent on that.


DONE and DONE … now if I'm in the GUI, Atom is loaded and ready to go. If I'm stuck without the GUI I have NVIM and the wonderful bloat of ZSH.


Thanks for this. When I get more time I’ll look at it. Been very oddly busy this week


Dynamic this is amazing.

Breakage 101

I don't use any Debian-family products, so no hard feelings. Arch and RHEL are the backbone of my infrastructure, with BSD on network devices such as the Protectli.

This is sweet. I should integrate something like this for Arch. Only security gets pushed, and then I wait for stable releases of other stuff once a week. Being mostly rolling has been nicer; I've had fixes for zero-days faster.
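On Arch there's no direct dnf-automatic equivalent, but a timer that just reports pending updates (a sketch, assuming checkupdates from pacman-contrib; unit names here are made up) gets partway there while leaving the actual -Syu manual:

```ini
# /etc/systemd/system/check-updates.timer — hypothetical unit
[Unit]
Description=Daily pending-update report

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target

# /etc/systemd/system/check-updates.service — hypothetical unit
[Unit]
Description=List pending pacman updates

[Service]
Type=oneshot
ExecStart=/usr/bin/checkupdates
```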


Updated network map

@Dynamic_Gravity I’m thinking there are a lot of SPOFs here. What should I try to make redundant?


Ugh, I need to get Grafana monitoring going… for everything I have. I'm just lazy.

This means:
- every Docker instance
- every NGINX-served page
- every NGINX socket proxy
- a nice visualization of Graylog data
- internal network health monitoring (firewall)
- external network monitoring (Linode)
- maybe export Pi-hole data to Grafana and make it prettier
- DoH and DoT termination log visuals…
- etc.
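Most of that list boils down to pointing a scraper at per-service exporters and letting Grafana dashboards sit on top. A Prometheus sketch of the scrape side (targets and ports are placeholders for commonly used exporters, not this setup's actual hosts):

```yaml
# prometheus.yml fragment — hostnames are placeholders
scrape_configs:
  - job_name: docker
    static_configs:
      - targets: ['docker-host:9323']   # Docker engine's built-in metrics endpoint
  - job_name: nginx
    static_configs:
      - targets: ['web:9113']           # nginx-prometheus-exporter
  - job_name: pihole
    static_configs:
      - targets: ['pihole:9617']        # pihole-exporter
```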

Why did I start this. Why do I want nice things. Whiskey is better :joy:


Fantastic net map. I like it.

I made a typo or two

The Graylog group should be MongoDB and Elasticsearch

Nice diagram. I need to make one myself. Did you use the free diagrams.net?


No, I used Lucidchart and ran out of the free edition.

So I cheated and made 6 sectors and lined up the photo/screenshots LOL


I can’t like anything but I would here


Have you checked out diagrams.net? Saw it on a Lawrence Systems vid; it came off as almost too good to be true, but with the limited use I have so far (installed it on Windows) it's pretty neat.

time to test it out ehh?

What limitations? (number of shapes?)

ROFL reached your limit?

I don't recall any particular limitations; it came off with a LibreOffice vibe: free. I think their business model is the web-hosting/cloud-service aspect, if one chooses to have that service.


Ahh, it's Draw.io on Nextcloud

Interesting


fuck

If you've got the money, why not get a stackable switch and make that redundant, lol. But I doubt the switch is that high-risk as a SPOF. I'd rather get 2 more servers and implement HA for the hypervisor. I'm guessing you're running XFCE because of RDP/VNC; you could try JWM with PoorMan'sTilingWM if that's more to your liking. Last time I tried, JWM worked with VNC.

The first thing I would do, TBH, is remove the services from the bare-metal Pis, make an HA LXD cluster, and run the services in containers, so if some Pis go down, your services just get moved to other Pis. It would be the most cost-effective removal of SPOFs. Just add a small NAS (2x 500 GB in RAID1 should be more than enough for everything there on the right side, but considering you already have the SSDs, just buy 1 more 120 GB SSD and do a RAID-Z2 on a separate box) and make it the storage for the containers. Later on, just add a separate NAS and enable replication of the services. I understand you will be moving the SPOF from the Pis to the NAS for a while, but a NAS should (technically) be more reliable than RPis. And the 2nd NAS doesn't have to also be RAID-Z2; you can just put in a RAID mirror with 500 GB SSDs, just enough to replicate what's on the other one.

If you are running Fedora on the Pis, it should be pretty straightforward to run LXD. I'm doing it on Void with no issues (although I haven't had time to set up HA on the cluster; I'll move soon and have to get more RPis or similar after I move).
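The HA LXD cluster suggested above can be bootstrapped non-interactively; a sketch of an lxd init preseed for the first node (server name and address are placeholders):

```yaml
# lxd init --preseed sketch for the first cluster node
config:
  core.https_address: 10.0.0.11:8443
cluster:
  server_name: pi-01
  enabled: true
```

Additional Pis join with a token generated by `lxc cluster add <name>` on the first node; with three or more members the cluster database is replicated, so containers can fail over when a Pi drops out.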