Sysadmin Mega Thread

lol

I've used 1.5T per PV before…

Pretty sure you can go much higher.

depends on your backend storage really and how many paths to storage you have and the number of storage processors on the SAN…

Like if the volumes are on different datastores?

Just a heads up - this is a virtual machine acting as a Samba server. The backend is datastores on a SAN.

If you extend a VG onto multiple SAN-backed PVs and one goes down, the whole VG goes down, but otherwise it should be fine…


(by datastore you mean a VMware volume for keeping VMDK files…?)

It shouldn't matter that much.

I'm overthinking it.
Just size the PVs so you don't have to add more than one or two a year as it grows… imo
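For anyone following along, growing the VG later boils down to a handful of commands; a minimal sketch, assuming the new SAN LUN shows up as /dev/sdX, the VG is named vg_data, and the LV is lv_share (all hypothetical names):

```shell
# All device and VG/LV names below are hypothetical; adjust to your environment.
pvcreate /dev/sdX                              # initialize the new LUN as a PV
vgextend vg_data /dev/sdX                      # add the PV to the volume group
lvextend -l +100%FREE /dev/vg_data/lv_share    # grow the LV into the new free space
resize2fs /dev/vg_data/lv_share                # grow ext4 online (use xfs_growfs for XFS)
```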

Migrated my public IP block in the DC remotely. Thankful for 2 things:

  1. Uplink is plugged into the switch and VLAN’d to the firewall. Would have been more difficult if it was connected directly. Allowed me to backdoor myself through a VM via wireguard on the new IP block before taking the old one down.

  2. When I set up the Edgerouter, I abstracted the public IPs into a group instead of hard-coding them into the firewall rules. Only had to change the WAN interface address, default gateway, and the WAN IP group. Didn't have to touch firewall or NAT rules.
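For anyone who hasn't used the group approach, point 2 looks roughly like this in EdgeOS configure mode; a sketch with a hypothetical group name and RFC 5737 documentation addresses standing in for real public IPs:

```shell
# Hypothetical group name and documentation IPs (RFC 5737); run inside 'configure'.
set firewall group address-group WAN_PUBLIC address 203.0.113.10
set firewall group address-group WAN_PUBLIC address 203.0.113.11

# Rules reference the group, so an IP migration only touches the group definition:
set firewall name WAN_IN rule 20 destination group address-group WAN_PUBLIC

commit; save
```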


9 posts were split to a new topic: Switch Hardware vs Bridged NICs in OpenBSD

My challenge for the day is to migrate data from a smaller volume to a larger one. The volume contains the files and folders shared out by a Samba server hosted on a Linux machine. The migration must move the entire file hierarchy of the share, along with all attributes, to the new volume.

In the past, I've used rsync to do filesystem-level copies when moving data to smaller volumes. However, I've never done this with a Samba share.

Basically, I’m going to mount a second larger volume as /smbshare_new. Next, I need to migrate the data from /smbshare to /smbshare_new. To do this my plan is to run:

 rsync -aHAXxSP /smbshare/ /smbshare_new/

That should get the initial bulk transfer out of the way. When I have my outage,

  1. Stop SMB Service
  2. Run Rsync again to update all changes
  3. diff -r /smbshare /smbshare_new
  4. mv /smbshare /smbshare_old
  5. mv /smbshare_new /smbshare
  6. restart smb service

The only things I'm concerned about are whether the sticky bits will copy correctly, and whether swapping the volumes will create any issues with the SMB service. I've been building my test cases, but was wondering if anyone has done something similar?

Found this comment:

Rsync is a very powerful tool, perfectly capable of doing what you are asking. Simply use the following options: -aAX --numeric-ids, where:

  • -a means “archive”, and it implies several other options;
  • -A means "ACLs" (Access Control Lists), and it is needed to back up the NT security descriptors, which Samba stores as POSIX ACLs;
  • -X means "Extended Attributes", and it copies any additional meta-streams attached to the file;
  • --numeric-ids means don't mangle the UIDs/GIDs attached to the files. NOTE: if your Windows and Linux machines have persistent UIDs/GIDs (e.g. by being joined to an AD/LDAP domain), you can safely skip this option.

I suggest you directly install Cygwin/rsync on the Windows machine, bypassing the Linux mount entirely. Moreover, please consider using rsync via rsnapshot: it is a very good utility with an incremental backup feature.

The SAMBA server is a linux machine with about 10 clients. Only one client is Windows.

Does anyone have any experience with this?

Does anyone use Spiceworks?

Came across Uyuni which is SUSE’s fork of Spacewalk. Setting up a test system to see how it performs, seems pretty robust from what I can tell at an initial glance. Only downside so far is that there isn’t a lot of 3rd party info out there about it so the documentation is probably your best friend.


Finally got all the parts hooked up to use a NetApp DS4243 with my TrueNAS 2U server.

Twas a long journey.


I did not realise there was a ZFS module for Cockpit yet… still in testing, but seems pretty sweet


Wonder if the 45Drives people are helping with that


I saw it on the recent Lawrence Systems video, and pretty sure Tom mentioned that.


anyone set up lancache? i can't tell if ubuntu server is blocking the ports or snapd is doing something weird with docker. but dns doesn't seem to work when i point dns to the lancache dns

yes… i made a how-to
with podman…

how long ago?

i disabled ufw and stopped systemd-resolved listening on port 53 as per common lancache troubleshooting. whenever i disable dnsmasq in dd-wrt and point dns to my ubuntu vm running docker (as specified in my .env file), dns BREAKS - my machines can't resolve hgtv.com or espn.com etc.

said machines can ping the ubuntu vm, and i do not see any ip conflicts

…what gives…? what am i not seeing and missing.

are there commands i can run to see if my lancache containers can do dns lookups to their upstream dns server (8.8.8.8)? is my lancache dns server starved for dns data?

I won't be much help with ubuntu, sorry.

You can try to exec into the container to run commands.
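For the dns-lookup question above, exec'ing in looks something like this; the container name lancache-dns and the VM IP 192.168.1.50 are hypothetical - substitute the real name from `docker ps` and your VM's address:

```shell
# Container name and VM IP are hypothetical; find the real name with 'docker ps'.
docker exec lancache-dns nslookup hgtv.com 8.8.8.8   # can the container reach upstream dns?
docker logs lancache-dns | tail -n 50                # check for bind/startup errors on :53
dig @192.168.1.50 hgtv.com                           # from a client: does the VM answer queries?
```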