OpenSSH and Git Commitment | Level One Techs

Help support Level 1 Techs Linux through our Inmotion Link:

This is a companion discussion topic for the original entry at

Awesome video!

One thing I'd like to add, that also came up in the YouTube comments, is that it is better to rebase on pulling from a repo than doing merge commits (at least in my opinion, but others seem to agree).
It helps to keep the log clean and it doesn't really make a difference, since any local commits are only on the local machine and can be moved on top of the remote commits.

Maybe a follow-up video on how to use Git might be a cool idea, too. :smiley:


Is this of any use to me if I'm not a developer?

I'd say no.

Praise Lord Wendell for delivering the golden covfefe nugget manifested in this video :sunglasses:

Also to note Lord Wendell uses vim


As I'm currently turning into some sort of developer (though not a web developer), one vote for the Wendell lecture on git.

I will probably use the parts presented here to create my personal instead-of-the-cloud solution. This is highly useful content (to me). Thanks!

And I'm pretty sure I will understand it, I just need to watch it at 3/4 speed and pause a lot... I think too many M&Ms were used (abused?) in the creation of this video. Or more to the point: for learning something new, the pace of this video is a bit too high.


I'd say YES! I use git as version control for my LaTeX projects, various documents, configs, code, and various other things I'd like to have version control over.


One thing that git is great for, for non developers, is managing config changes in /etc

How does it sound to keep track of every single change to any config in /etc, be able to revert/review/undo, and diff the changes from new and upgraded packages against your existing config?
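As a minimal sketch of that idea (assuming git is installed and you run this as root; the sshd_config path is just an example file to illustrate reverting):

```shell
# One-time setup: turn /etc itself into a repository.
cd /etc
git init
git add .
git commit -m "Baseline snapshot of /etc"

# Later: review and revert changes.
git status                       # which files changed since the last commit
git diff                         # line-by-line differences
git checkout -- ssh/sshd_config  # revert one file (example path) to the last commit
```

From there, committing after each deliberate change gives you a full history to review or roll back.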


I still use SVN for most of my projects. It does what I need in a way I understand.... git is confusing (to me).
I have managed to pull some git repos (mainly from GitHub) to play with the code, but I still don't get how it works... Haven't watched the video yet. I will after work. Maybe Wendell will make me understand.

Wow, I don't know why I haven't thought of this before. It has happened to me so often that I wanted to keep track of /etc so I could revert changes, whether made by me or by an application.


A .git folder at the root of the site. I see what you did there. I applaud you, @wendell

I had the same issue. A few years ago I used ClearCase and SVN, and migrating projects to git was a nightmare, so for big projects that is the only reason to stay with 'the others'. Git may not look intuitive at first, but once you get familiar with it, you will never want to use any other VCS again. I've been using it for about 6 years and it has never disappointed me. :wink:
I'm pretty sure you won't be disappointed if you give it a chance.

P.S. :smiley:


Didn't he say that he explicitly did not do that?

somewhere it is

So for the /etc git setup: you would just initialize a repository in your /etc folder and then push to a git server of your choice, correct?

Not quite, at least to my understanding. You would initialize a bare repo somewhere (it doesn't have to be in /etc), for example in your home directory. You then set the two environment variables as shown in the video to point to /etc and the bare repo. After that, you can commit any files you want to keep track of to the bare repo.

So for example:

$ cd                      # change to the home directory
$ git init --bare etc.git # create a new bare repo called 'etc.git'
$ declare -x GIT_DIR=/home/sanfordvdev/etc.git # point git at the bare repo
$ declare -x GIT_WORK_TREE=/etc                # treat /etc as the work tree
$ git add /etc/apache2
$ git commit -m "Added Apache's configuration files"

Edit: While you could of course initialize a standard repo in /etc as well, I think a bare repo that you keep in your home directory is cleaner.

If you want to use this to push new configuration files to servers that you'd like to configure by pushing commits to them, you can do much the same, but make sure the post-receive hook is active and that the appropriate flags are set to allow pushing to the server.

Those flags are either:

$ git config receive.denyCurrentBranch ignore

or preferably with the shared flag when initializing the repo:

$ git init --bare --shared=group
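For completeness, here is a sketch of what that post-receive hook itself might look like; the deploy path /var/www/example is just a placeholder for whatever directory you actually want the pushed files checked out into:

```shell
#!/bin/sh
# hooks/post-receive inside the bare repo on the server (must be executable).
# After every push, force-check the repo's HEAD out into the work tree.
# /var/www/example is a placeholder; point it at your real deploy directory.
GIT_WORK_TREE=/var/www/example git checkout -f
```

Git runs the hook from inside the bare repo, so GIT_DIR is already set for you; only the work tree needs to be supplied.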

This page here might help you further if you want to make a server host an existing repo:


someone really needs to watch this video

I respectfully disagree with creating a bare repo elsewhere and cherry-picking config files to include in the repo.
From a cleanliness standpoint, having git work in its most standard way is cleaner than having random repos littered about, completely disconnected from their true file source.

For websites, the benefits of a detached repo far outweigh the costs and disorganization, because otherwise you need configuration at the web server level to not serve the .git folder. That would mean that if some poorly intended configuration change were ever made, your entire web source would be exposed, and if Google or some other search engine indexed it, it could take days to get it removed.

On the topic of not adding all configs to the repo: you would be opening an easy attack vector for ruining your machine. If an enterprising scamp happened to get access to the repository, he could simply commit blank configs, and if you aren't careful you could wipe out your /etc. It would take only a few commands to bring the tracked configs back from git, but the ones you didn't track are now gone, and your recovery time goes from seconds of commands and a few minutes of verifying to hours.

I would disagree completely with pushing config changes from git; I would disagree with even doing it for a website. It is very dangerous. You should want to see what is happening and be fully in control of when those changes go live. For any live (as in internet-accessible) system, I would highly suggest having a development and a production environment. Test the changes for at least some level of sanity and security before pushing them to production, even if it is a small home site/service.

Git is not DevOps; it offers exactly zero robustness for dealing with conflicts that can arise from system changes. And while git is handy for tracking config changes, it is only good for a few home machines at best; once you get into servers, virtual machines, and containers, it's time for DevOps tooling to take over.

Sorry for rambling, but closing all possible attack vectors just makes sense, and git alone offers zero authentication or authorization mechanisms.


Thanks for your input!

For me that is a matter of taste. Except for the environment variables, it's basically the same procedure and I don't mind either way. If someone feels better about keeping the repo for /etc in there (as a basic repo), then there is nothing wrong with that.

I don't quite understand how that scenario is supposed to work. The only copies of the repo are on the server and on the administrator's machine; if the attacker has access to either, it's already game over. If he just gets a copy of the files, say from a backup, he doesn't automatically have access to the server's copy of the repo. Those are two independent copies that have to be synchronized with git push or git pull, for example over an SSH connection, and the SSH connection handles all the authentication and encryption.

I agree. I wouldn't personally use it for pushing configurations files to a server either, but my previous post was just to elaborate that you could fairly easily do that if you wanted to. A much better (and more professional) approach would be to use something like Ansible for that.

The authentication is part of SSH and the authorization can be solved with file-level permissions. It's not ideal, but for what it is usually used for, it should be enough. If it's just a simple home server, I think you can get away with it. Of course there is nothing wrong with using better tools, if you know how to use them.

True, I suppose on the placement of the repo, it doesn't really matter.

To elaborate on the security aspects: the only repo that matters is the one that gets pushed to the server, but git itself is wide open. The fact that you connect to the server via SSH means very little on its own. You have a basically wide-open repo, stored in at least two places, that can be altered. Adding files won't cause a merge conflict, and while I don't know for sure, I believe git will quietly overwrite files that aren't part of the receiver's revision.

So someone with access to your machine can sneak a commit in easily enough. At the server they could alter the files directly, but if they tamper with the repo instead, those changes are much harder to reverse and can more easily go unnoticed.

Yes, once they have access to one machine it is game over, but by leaving this vector open you give them more time before they are discovered and an easy way to ruin your day. This scenario seems far-fetched for a home user, but the second you have a service available online, those in search of lulz may pay you a visit.