Come and DevOps With Me

Hey everyone, AdminDev here.

How’s it going?

I hope you are doing well and looking to LEARN.

I’ve been working on a series for my podcast, but a few people have suggested I share it with you here.

Wendell put together an interesting concept for a DevOps workstation, and I was surprised by the responses. I didn’t expect that many people on this side of the fence to be that interested.

This series is going to focus on the culture and toolchain aspect rather than what operating system, system specs, and hardware you should use. The beauty of DevOps is that it is a culture movement first and a technology last. So if Windows fits in your company culture and makes your team into high performers, go for it. If it’s Ubuntu, rock on. In my current and previous organizations where I’ve worked in the DevOps realm, OS X has been the perfect tool for us.

This isn’t about one technology being better than another. The minute you start saying “Ansible is better!” you lose, and you miss the point of DevOps. It isn’t about the tech or the tool; it’s about culture (I know, I know, SAY CULTURE AGAIN, just really hammering this home).

A true champion in the world of DevOps is Nicole Forsgren. She is the CEO and Chief Scientist of DORA (DevOps Research and Assessment) and leads a team that produces the State of DevOps Report every year.

You can find more about that research here.

To truly learn what it means to practice DevOps, I would check out a few of her talks or interviews. She has been on RunAs Radio a few times, and she has an energy that is truly inspiring and captivating.

DevOps started as a movement to unite Development and Operations around the same business goals. Rather than throwing code “over the wall” and letting the admins figure it out, collaboration and communication are key. So, more important than anything, DevOps is about people!

The biggest component of DevOps is YOU. You are the most important piece of the DevOps toolchain. This wiki will have in-depth discussion of what I, along with some friends of mine in the industry, have learned from our experiences moving into a more aggressive role of deploying software and infrastructure. That discussion will include books, study habits, staying focused, tips for working with teams outside your department, what culture can look like, and continuing to learn and grow in your career.

To demonstrate the examples, I am going to use Terraform, AWS, and a few other core technologies for scripting, continuous delivery, and automation. If you’re comfortable with Bash, Ruby, Python, or Go, you should be more than able to handle the following examples.

Why Terraform? It’s an open-source, fast, and powerful Infrastructure as Code tool that can deploy infrastructure to various cloud and hypervisor hosts. I am using AWS because of its free tier.

AWS offers a free tier that you can sign up for here. This account gives you 750 hours of t2.micro usage per month for twelve months.

Note: resources differ in cost, and some (like messaging and some of the databases) do not offer a free tier. This walkthrough will not incur any charges as long as you remember to run terraform destroy after every deployment.

This excerpt is taken from my blog post, but I don’t want to seem like I’m begging for clicks. If you’re interested, DM me on the side or check out my profile for a link to the website.


First, we want to create a non-root user to manage our infrastructure. Head over to IAM (Identity and Access Management).

Select Users > Add User

Create a user name and check both access boxes (programmatic access and AWS Management Console access).

Set a strong password for this user and head over to permissions.

Feel free to grant Admin access, but really the only permission you’ll need for this exercise is full EC2 access; the AmazonEC2FullAccess managed policy covers everything we create here.

Review the user to make sure everything is correct, and click Create User.

Copy down the access and secret keys, or download the CSV file that contains the information. You’re going to need these later. Don’t worry if you clicked away or closed the window; you can create new access keys any time you want.

Next, head over to EC2 and select Key Pairs from the side.

Create a new Key Pair and download the file.

This will allow you to ssh into any instances we create in the future (we’ll go through how to define the key in Terraform).

Now, you’re ready to set up your workstation, development environment, whatever you want to call it.

I’m on Fedora, so installing aws-cli is fairly easy.

sudo dnf install awscli -y

Too easy.

If you’re on Windows, OS X, Ubuntu, or another operating system, the process is fairly painless. You can find out how to install for your operating system at this website.
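If your distribution doesn’t package it, pip is an alternative route (awscli is the package name for the v1 CLI):

pip install --user awscli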

Once you have awscli installed, it’s time to set up your credentials. Open up the terminal and type:

aws configure

Enter your access key, secret key, default region (us-east-1 for these examples), and default output format. The output format controls how CLI results come back (json, text, or table); I always leave it blank, which falls back to json.
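The prompts look roughly like this (the keys shown are AWS’s well-known documentation placeholders, not real credentials):

aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-east-1
Default output format [None]: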

Finally, we’re going to download Terraform. Head over to the download site and select your operating system. Thanks to the beauty of Go, you only have to worry about a single binary. Unzip the file and move it to /usr/bin or, if you’re careful about such things, to /opt or somewhere you’ll remember. If you go the /opt route, I recommend setting up an alias in ~/.bashrc:

alias terraform=/opt/terraform
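Putting the /opt route together, the download-and-move steps might look like this (the exact filename depends on the version you grab):

wget https://releases.hashicorp.com/terraform/0.11.14/terraform_0.11.14_linux_amd64.zip
unzip terraform_0.11.14_linux_amd64.zip
sudo mv terraform /opt/terraform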

If you’re on Windows 10

Check out the Windows Subsystem for Linux here. Otherwise, download the zip file and extract the binary. You can add it to your $PATH by storing it somewhere safe (like C:\Program Files):

Control Panel > System > Advanced System Settings > Environment Variables (under Advanced tab) > System variables > Add the path (C:\Program Files\Terraform for example) > Click OK through all the menus.

Now, close PowerShell if you have it open. Relaunch and type terraform to test if it was successful.

Next, you’ll want to fire up your favorite editor. Sublime Text, Visual Studio Code, and even IntelliJ have Terraform plugins for syntax highlighting and autocompletion.

Create a new file and name it main.tf

Hopefully you have a good editor that recognizes this as a Terraform file. If not, check out one of the editors above and set up the Terraform plugins.

We’re going to build a very, very basic EC2 instance (server) that allows traffic on ports 80 and 22.

The first thing you need to do is add a provider. Terraform works with OpenStack, Azure, AWS, Google Cloud Platform, KVM, and Hyper-V.

provider "aws" {
    region = "us-east-1"
}

This gives us the AWS provider calls with our default region set to N. Virginia (us-east-1).

Next, we’ll want to declare our resources. I use the word “declare” for a reason: Terraform is a declarative IaC language. Terraform checks what already exists against the “desired state” described in your code before creating anything new. This differs from something like Chef or Ansible, where you write out the steps and procedurally generate the infrastructure. If you’ve used Puppet, you’re familiar with how a declarative IaC system works.
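We’ll see this first-hand shortly: once a configuration has been applied, running terraform plan again without editing anything makes Terraform compare the desired state against what actually exists, and it reports something along the lines of:

No changes. Infrastructure is up-to-date.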

The two resources we’re going to define are our instance and our security group to accept traffic.

resource "aws_instance" "serv-1" {
    ami = "ami-0ff8a91507f77f867"
    instance_type = "t2.micro"
    vpc_security_group_ids = ["${aws_security_group.instance.id}"]
    key_name "terraform" (use whatever key pair you created earlier)
    
    tags {
        Name = "TestServer01"
    }
}

The resource type starts with “aws”, which is a good indication of what platform we’re on. I grabbed the AMI from the EC2 Management Console. That image is Amazon Linux with Docker and MySQL repos, as well as Python, awscli, and other tools installed.

Next, we define the instance type. You can go as heavy as you want, but keeping the free tier in mind, I’m rocking t2.micro.

The vpc_security_group_ids value isn’t something we’ve made yet, but since I planned on creating a security group, it was easy enough to reference it now. If you’ve written in any scripting or programming language before, you probably recognize the string interpolation. That variable calls the “aws_security_group” named “instance” and retrieves its “id”; the dot operator separates the objects.

Last, we have our key_name, which is the name of the key pair we created earlier.

The “tags” section is optional, but it’s a good practice to get into. You can name your server whatever you want and add other tags such as who created the server, its purpose, etc.
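If you want more than a name, extra tags are just additional key/value pairs; for example (the tag keys below are made up for illustration):

    tags {
        Name      = "TestServer01"
        CreatedBy = "AdminDev"
        Purpose   = "terraform-demo"
    }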

Next, we’re going to define the security group resource.

resource "aws_security_group" "instance" {
    name = "testrunsec"
    
    ingress {
        to_port = 80
        from_port = 80
        protocol = "tcp"
        cidr_blocks = ["0.0.0.0/0"]
    }
    
    ingress {
        to_port = 22
        from_port = 22
        protocol = "tcp"
        cidr_blocks = ["0.0.0.0/0"]
    }
}

Hopefully you’re getting into the rhythm of how this all works (at least, as far as formatting is concerned).

We’ve defined and named our security group resource and created two inbound rules: port 80 for HTTP and port 22 for SSH access. The cidr_blocks argument accepts an array of allowed addresses.

Hey, what do you call 17.0.0.0/8?

Apple Cidr.

Alright, be serious.
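Joking aside, that array is also where you would tighten things up. In the real world you’d usually lock SSH down to your own address instead of 0.0.0.0/0; a sketch, using a documentation-range placeholder IP:

    ingress {
        from_port   = 22
        to_port     = 22
        protocol    = "tcp"
        cidr_blocks = ["203.0.113.25/32"]
    }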

The last thing to do is create another file and save it as outputs.tf in the same directory as your main.tf file.

In the new file, we want to get our public IP address. This way, whenever we deploy a new instance, we’re not constantly bouncing between our editor, our terminal, and the AWS console.

output "ip_addr" {
    value = "${aws_instance.serv-1.public_ip}"
}
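Any attribute the resource exposes can be output the same way. For example, a second output for the public DNS name (same pattern, different attribute):

output "dns_name" {
    value = "${aws_instance.serv-1.public_dns}"
}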

Now we’re ready to launch our instance!

Navigate to the folder with your main and outputs files, and execute:

terraform init
terraform plan

Make sure you see that two resources are going to be created. If the plan stage fails, check that you’re defining the provider as “aws” and that your aws configure credentials are correct.
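On Terraform of this era, the tail end of the plan looks something like this (exact formatting varies by version):

  + aws_instance.serv-1
  + aws_security_group.instance

Plan: 2 to add, 0 to change, 0 to destroy.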

If all looks good, go ahead and deploy!

terraform apply

You should see your output:

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

Outputs:

ip_addr = x.x.x.x

Where “x.x.x.x” is your public IP address.

Time to ssh in (don’t forget your .pem file):
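Assuming your key pair file is terraform.pem and the AMI’s default ec2-user account (swap in the IP from your output):

chmod 400 terraform.pem
ssh -i terraform.pem ec2-user@x.x.x.x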

Success!

You might have noticed some things change (depending on your editor and environment) while you were running the commands above. A .terraform directory, terraform.tfstate file, and terraform.tfstate.backup file are created.

The directory keeps track of the plugins and modules you’re running with Terraform. The tfstate file keeps track of what you’re running. So, with this single file, you can change and add all kinds of resources, and when you plan and apply, the tfstate file will say “Hey, serv-1 is now serv-test, so we need to swap those out. Oh, by the way, they added an auto-scaling group and load balancer, so throw those instances in with that. You know what? Never mind, delete those instances and create new ones.”

A bit of an exaggeration, but that’s what the state files are for. Since you’re working by yourself, there will rarely be an issue with making changes. However, it’s worth noting that remote state files and build servers are often necessary for teams to prevent conflicts. If I’m making a change and Phil is making a change, without protections in place (like a single state file with locking policies), we can overwrite each other’s changes or, worse, brick the network.
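For the curious, remote state with locking looks something like this in Terraform: an S3 bucket holds the state file and a DynamoDB table holds the lock (the bucket and table names below are hypothetical):

terraform {
    backend "s3" {
        bucket         = "my-team-tfstate"
        key            = "demo/terraform.tfstate"
        region         = "us-east-1"
        dynamodb_table = "terraform-locks"
    }
}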

Speaking of bricking, go ahead and run terraform destroy to kill your beautiful creation (so we don’t incur any charges).

That’s all for today. We’ll explore the scenarios above and set up a build server with a job queue for small teams to work with. I’ll also play around with the idea of using a self-hosted provider instead of a cloud provider to see how that goes. I want to keep this growing to cover Jenkins, GitLab, Git, Terraform, Chef or Ansible, maybe containerization, maybe orchestration, some InfoSec (SonarQube), etc.: the full DevOps workflow.

I had some questions (from my podcast) about resources and made a little video about it. I cover some books on DevOps and a sort of “Teach Yourself Computer Science” path. I only received two e-mails, so it wasn’t like I was getting blown up lol. But I want to give back to the community. Here is the video if anyone is interested.

I’ll add more to this as I learn and grow and find time. Feel free to contribute if you’re a Jenkins master or want to introduce TravisCI to the world. I love Ansible and Chef, but I’m willing to bet a lot of people out there are more knowledgeable about both than I am.


Didn't read it all yet, but I watched the podcast on the way home. :slight_smile:

I have never used the GitLab CI tool either. AFAIK you don't really need any premium package to have that work (at least I have a GitLab CI pipeline on my Community Edition of GitLab).
I looked into it a little bit, but nothing concrete yet. By the looks of it, it does not do a whole lot of testing by itself; instead you tell it to call Maven jobs, for instance (in a YAML file), and the return value (0 or nonzero) determines success or failure. What's a bit tricky, I guess, is that every stage of your CI pipeline runs in its own isolated space, so if you want to use a .jar in your next step, you have to declare it as output in your first step.
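For reference, a minimal .gitlab-ci.yml along those lines might look like this (stage names and paths are made up; the artifacts block is what hands the .jar to later stages):

build:
  stage: build
  script:
    - mvn package
  artifacts:
    paths:
      - target/*.jar

test:
  stage: test
  script:
    - mvn verify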

I'm frankly not doing a whole lot of testing in my private projects, but what I wanted (or still want) to do with it is: when I check something into version control, it should compile every part of the project, move the pieces together into some folder somewhere, and maybe zip that. That's still some time away. The point of this is that I plan to build a website (once I'm done with what I'm currently working on), and on it I want to host the latest snapshot of whatever projects I'm working on or consider worth sharing with the world, and somehow figure out how to do that with stable "release" type builds too. No idea how I'm going to do it with the source code though, because I'm not entirely sure I want to share my tiny little underpowered GitLab box with the world. Though I don't really believe that could break the thing, and it's not like I expect many people to even go there to begin with. It's mostly just going to be for sharing some projects with friends and for when I apply to jobs.


Also, I only found out a few days ago that DevOps can be an actual job. I always thought it would be either some kind of sysadmin OR a programmer, but it's like its own job now too. There are companies specifically searching for DevOps. Which I found really interesting, because at first it was really confusing when you hear people explain it as a "culture" sometimes. Guess it makes sense though, considering a lot of what I've heard from friends about what's annoying in their daily jobs at some companies. A dedicated DevOps employee or team could totally fix a lot of that, because everyone else apparently has no time for it, or does not want to have time for it.


Thanks for the feedback! Appreciate you checking out the show.

I’m interested in trying the full suite of GitLab just to see how it flows with my current workflow.

Looking to add some comments about it in this here wiki :wink:

Yeah, some places call it DevOps Engineer, Infrastructure Engineer, Build Engineer, Automation Engineer, Infrastructure Developer, and Site Reliability Engineer. It’s all the same thing. Hoping this guide can provide some more insight to people interested in those roles by building out a full toolchain.

In my current role we set up the Jenkins jobs, deploy the correct branch for all the environments, get the networking and databases connected, and monitor the environments. We primarily do scripting for automation and we automate I.T. practices.

We don’t administer I.T. for the company, nor do we develop the applications (though we can assist and make recommendations). As someone who loves computer science and infrastructure, I feel it’s the best role for me; eventually I’ll move into SRE and system architecture.

My title is DevOps Engineer but the guys I do my podcast with have different titles from the list above. Our job duties are identical.

@AnotherDev

Current bundle of DevOps books.

Pay $1 or more to unlock:

  • Effective DevOps
  • Moving Hadoop to the Cloud
  • Cloud Foundry: The Definitive Guide
  • Kubernetes: Up and Running
  • Linux Pocket Guide Third Edition

Pay $8 or more to also unlock:

  • Cloud Native Infrastructure
  • Jenkins 2: Up and Running
  • Deploying to OpenShift
  • Database Reliability Engineering
  • Practical Monitoring

Pay $15 or more to also unlock:

  • The Site Reliability Workbook
  • Seeking SRE
  • AWS System Administration
  • Prometheus: Up and Running
  • Designing Distributed Systems

Kubernetes: Up and Running is really good. This was my primary resource for learning K8s and the Kubernetes UI. I purchased this and Docker: Up & Running at the same time. If you’re getting hardcore into orchestration and containerization, they’re hard to pass up.

Pair these with the documentation, because they’re a bit dated at this point. Not so much so that they’re irrelevant, however.

The Linux Pocket Guide’s title says it all :sunglasses: At-a-glance Linux usage: file systems, commands, tools (sed, awk, grep). Definitely worth it in this bundle.

A bit misleading: Jenkins 2: Up and Running expects you to be up and running already. The installation process and most of the setup are skipped, and the author assumes prior knowledge or an already functioning system. Other than that, it’s a great book. It dives into really advanced topics like SonarQube and dedicates two whole chapters to Declarative Pipelines.

The only way to fly :sunglasses:

Haven’t read Practical Monitoring, but monitoring is a staple of DevOps culture and SRE practices.

Prometheus is one of the BEST monitoring tools on the market. This book alone is worth the price of admission.


My $0.02 on the books. Great bundle. They’ve done really well with DevOps and functional programming bundles as of late. Don’t forget to pick your charity and how much you want to send them. You can always give more for the bundle :wink: