
Moving to Linux as a daily driver for work - Multiple SSH Sessions

Hey all :slight_smile:

I have finally moved to Ubuntu as a daily driver at work, and all is great except for handling a large number of SSH sessions. I’m moving from MobaXterm, where I could create groups of switches for each IDF, or for our deployed thin clients.

I came across Terminator and tmux, which help with executing commands in multiple sessions at once :smiley: But I’ve been unable to find a way to manage my sessions in groups so I can easily SSH to multiple clients at one time.

I’ve only been able to find GUI-based management tools, and I’ve become very fond of using the terminal for most things.

I know about using ~/.ssh/config to save them all, but I have over 500 clients I need to access :stuck_out_tongue:

Many Thanks


~/.ssh/config allows Include statements. That means you can write half of your ~/.ssh/config by hand, and also have it pull in an autogenerated config.
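For example, with 500+ clients you could keep the hand-written entries in ~/.ssh/config and generate the rest from plain host lists. A minimal sketch, with hypothetical hostnames and the `admin` user as placeholders (it writes under /tmp so it’s safe to try; in practice you’d point it at something like ~/.ssh/config.d):

```shell
# Generate one ssh_config fragment per group from a plain host list.
# Hostnames, paths, and the "admin" user are placeholders.
mkdir -p /tmp/ssh-demo/config.d
printf '%s\n' idf1-sw01 idf1-sw02 idf1-sw03 > /tmp/ssh-demo/idf1-hosts.txt

# Emit a Host block for every line in the group's host list.
while read -r host; do
  printf 'Host %s\n    HostName %s.example.com\n    User admin\n\n' "$host" "$host"
done < /tmp/ssh-demo/idf1-hosts.txt > /tmp/ssh-demo/config.d/idf1

# Then the top of ~/.ssh/config only needs one line:
#   Include config.d/*
```

With the `Include config.d/*` line in place, `ssh idf1-sw01` picks up the generated entry, and regenerating a group is just re-running the loop against an updated host list.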

tmux has a “synchronize-panes” option that some folks have used to implement something similar to “cluster ssh”, if you want to type into multiple SSH sessions in parallel. Someone has scripted it into tmux-cssh.
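For reference, synchronize-panes is just a window option, and tmux toggles boolean options when no value is given, so a one-line binding in ~/.tmux.conf is enough (the choice of `S` as the key is an assumption, not a tmux default):

```
# ~/.tmux.conf: toggle typing-into-all-panes with prefix + S
bind S set-window-option synchronize-panes
```

With it on, anything typed in one pane is sent to every pane in the window, so each pane can hold an SSH session to a different switch.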

If you haven’t already, you might want to look into Ansible, or just use pexpect, to make simpler request/reply workflows happen unattended.
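As a rough sketch of the Ansible route: an INI inventory with one group per IDF maps directly onto the “groups of switches” idea (hostnames and group names below are placeholders):

```
# hosts.ini
[idf1]
idf1-sw01.example.com
idf1-sw02.example.com

[thin_clients]
tc-101.example.com
```

An ad-hoc command can then hit a whole group at once, e.g. `ansible idf1 -i hosts.ini -m fetch -a "src=/var/log/syslog dest=./logs/"` to pull a log from every host in the group.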


Another thing if you use lots of SSH sessions is to invest some time in a tiling WM like i3, awesome or similar. If most of your work is CLI-based, tiling WMs really help.

I have a personal bias towards i3 myself, because I think it is the only desktop that handles multiple screens on a laptop in a sane manner (every screen is a virtual desktop). But, your pick! :slight_smile:


It took me a little while to configure i3 the way I like it, but I haven’t looked back since! It’s really great at handling multiple terminals and has increased my productivity significantly.

Paired with Rofi, I’m able to launch an SSH session to any server in an instant.
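For what it’s worth, rofi ships an ssh mode that reads entries from ~/.ssh/config, so a single i3 binding covers the “launch a session to any server” part (the key choice here is an assumption):

```
# ~/.config/i3/config
bindsym $mod+s exec rofi -show ssh
```

Type a few characters of the host alias and hit Enter; it opens a terminal running ssh to that host.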

The estate I look after is relatively small compared to OP’s, so I haven’t looked at configuring groups etc.


I can’t recommend parallel SSH for anything. A sysadmin I worked with five years ago used it a lot. I don’t remember the exact program, but it opened a lot of terminal windows and ran them in parallel.

Every once in a while, like every three months, something would go wrong. One of the systems would be overloaded and not connect quickly enough. Or one of them would have a network disconnect or reboot, and he wouldn’t notice right away.

So, say he does a package install or update, but one of the systems doesn’t do it. Now what? Now there’s a weird bug lurking in there, or a security problem everyone thought was fixed.

It’s a much better idea to use a system that handles the grunt work for you. Specify what packages should be installed and let it do the SSH’ing and package operations until it completes. Or what configuration files should be there. Etc.
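In Ansible terms, that declarative approach looks something like this. A minimal sketch, where the group name, package, and filename are placeholders:

```yaml
# patch.yml: state what should be true; ansible does the SSHing
# and reports per-host failures in the play recap.
- hosts: thin_clients
  become: true
  tasks:
    - name: Ensure the patched package is installed
      ansible.builtin.apt:
        name: openssl
        state: latest
```

Run with `ansible-playbook -i hosts.ini patch.yml`; a host that times out or fails shows up in the recap instead of silently missing the update.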


It’s just a simple trade-off and a question of scale.

If you have thousands of systems, then you’re definitely not going to be SSHing into them manually on a regular basis; they might be assembled, put in service, and dismantled a few years later without a human ever running into their hostname in any shape or form.

If you have 2 systems they’re more like pets and less like cattle - you’re likely to ssh in, run scripts to do things, tinker and set things up.

Things will go wrong either way, you need to set a budget for how much time you have, and gauge your options.


Thanks for all the replies :smiley: We already have automated tools for each type of client, but now and then we need to SSH into a few to pull logs for troubleshooting, or to push a building-wide patch.

It’s no problem to add all the clients to ~/.ssh/config, but I just wondered if there was a way to group them so I can SSH in batches if needed - mostly to watch upgrades or logs in the future.
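For the ad-hoc cases, a plain shell loop over per-group host files already covers a lot. A sketch with assumed paths and hostnames, shown as a dry run with echo standing in for the real ssh invocation:

```shell
# Run a command once per host listed in a group file; inside the
# command string the hostname is available as $1.
# Group-file layout and names are assumptions.
for_group() {
  group_file=$1
  cmd=$2
  while read -r host; do
    sh -c "$cmd" for_group "$host"
  done < "$group_file"
}

# Demo group file (one device per line; names are placeholders):
printf '%s\n' idf1-sw01 idf1-sw02 > /tmp/idf1-group

# Dry run - swap echo for the real ssh command when ready:
for_group /tmp/idf1-group 'echo ssh "admin@$1" -- tail -n 50 /var/log/syslog'
```

Keeping one file per group (switches per IDF, thin clients, etc.) gives you the grouping, and the same files can double as input for generating ~/.ssh/config entries.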

What kind of solution are you using to centralize and back up debug logs (e.g. syslog and the like)? I’d imagine it would be easier to query or tail those logs.
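If there isn’t one yet, even plain rsyslog forwarding gets you a central place to tail. A sketch, with the server name as a placeholder:

```
# /etc/rsyslog.d/50-forward.conf on each client
# "@@" forwards over TCP; a single "@" would use UDP.
*.* @@logs.example.com:514
```

Then troubleshooting a building’s worth of devices becomes one tail on the log server instead of a batch of SSH sessions.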

To be fair, we’ve kinda moved far from my original question :stuck_out_tongue:

As I just want to create an organised set of saved SSH clients, with groups/files for each type of device - mostly for the OCD and future ease of use.

Oh, so if I understand you correctly you want to create order in your .ssh/known_hosts file?

Hmm, I can think of only two things. Either split the known hosts file into multiple files, and/or, use git to handle these files. One does not exclude the other.

You can split hosts into different files using commands like these (replace example.com with your own hosts):

ssh-keygen -R example.com                              # Remove example.com from known_hosts
ssh-keyscan -H example.com >> ~/.ssh/known_hosts       # Add example.com to known_hosts
ssh-keygen -R example.com -f ~/.ssh/example_hosts      # Remove example.com from example_hosts
ssh-keyscan -H example.com >> ~/.ssh/example_hosts     # Add example.com to example_hosts

To log in to an SSH server, you can use this command, which looks for the host key in the example_hosts file rather than known_hosts:

ssh -o UserKnownHostsFile=~/.ssh/example_hosts user@example.com

You can also set this in your $HOME/.ssh/config file. Simply edit or add the line that starts with UserKnownHostsFile and watch the magic happen (it takes a space-separated list):

UserKnownHostsFile ~/.ssh/known_hosts ~/.ssh/example_hosts
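Since UserKnownHostsFile is a per-host option, you can also scope it with Host patterns to get one file per device group - a sketch with assumed naming conventions:

```
# ~/.ssh/config
Host idf1-*
    UserKnownHostsFile ~/.ssh/known_hosts.d/idf1

Host tc-*
    UserKnownHostsFile ~/.ssh/known_hosts.d/thinclients
```

That way each group of switches or thin clients keeps its keys in its own file, which is exactly the kind of grouping OP was after.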

As for how to use and combine this with git, have a look at this page. In fact, I know many people who keep their entire home directory as a git repository - or at least the configuration-files portion of it. :slight_smile: Here is a more generic tutorial video as well. It’s really smooth for 98% of your setup, but beware not to store anything sensitive like passwords, authentication keys, or browser caches!
