Hey everyone, AdminDev here.
How’s it going?
I hope you are doing well and looking to LEARN.
I’ve been working on a series for my podcast, but a few have suggested I share with you.
Wendell put together an interesting concept for a DevOps workstation, and I was surprised by the responses. I didn’t expect that many people on this side of the fence would be that interested.
This series is going to focus on the culture and toolchain aspects rather than what operating system, system specs, and hardware you should use. The beauty of DevOps is that it is a culture movement first and a technology last. So if Windows fits your company culture and makes your team into high performers, go for it. If it’s Ubuntu, rock on. In my current and previous organizations working in the DevOps realm, OS X has been the perfect tool for us.
This isn’t about a technology being better than another. The minute you start saying “Ansible is better!” then you lose, and you miss the point of DevOps. It isn’t about the tech or the tool, it’s about culture (I know, I know, SAY CULTURE AGAIN, just really hammering this home).
A true champion in the world of DevOps is Nicole Forsgren. She is the Chief Scientist and CEO of D.O.R.A. (DevOps Research and Assessment) and leads a team that produces The State of DevOps Report every year.
You can find more about that research here.
To truly learn what it means to practice DevOps, check out a few of her talks or interviews. She has been on RunAs Radio a few times, and her energy is genuinely inspiring and captivating.
DevOps started as a cause to unite Development and Operations to have the same business goals. Rather than throw the code “over the wall” and let the admins figure it out, collaboration and communication are key. So, more important than anything, DevOps is about people!
The biggest component of DevOps is YOU. You are the most important piece of the DevOps toolchain. This wiki will have an in-depth discussion of what I (and some friends of mine in the industry) have learned from our experiences in moving into a more aggressive role of deploying software and infrastructure. This discussion will include books, study habits, staying focused, tips for working with teams outside your department, what culture can look like, and continuing to learn and grow in your career.
To demonstrate the examples, I am going to use Terraform, AWS, and a few other core technologies for Scripting, Continuous Delivery, and automation. If you’re comfortable with Bash, Ruby, Python, or Go you should be more than able to handle the following examples.
Why Terraform? It’s an open source, fast, and powerful Infrastructure as Code tool that can deploy software to various cloud and hypervisor hosts. I am using AWS because it offers a free tier.
AWS offers a free tier that you can sign up for here. This account will give you 750 hours of resources per month for twelve months.
Note, resources differ in cost, and some of them (like messaging and some of the databases) do not offer a free tier. This walkthrough will not incur any charges as long as you remember to run terraform destroy after every deployment.
This excerpt is taken from my blog post, but I don’t want to seem like I’m begging for clicks. If you’re interested, DM me or check out my profile for a link to the website.
First, we want to create a non-root user to manage our infrastructure. Head over to IAM (Identity Access Management).
Select Users > Add User
Create a user name and check both access boxes.
Set a strong password for this user and head over to permissions.
Feel free to grant Admin access, but really the only things you’ll need for this exercise are the following:
Review the user to make sure everything is correct, and click Create User.
Copy down the access and secret keys, or download the CSV file that contains the information. You’re going to need these later. Don’t worry if you clicked off it or closed out of the window, you can create new access keys any time you want.
Next, head over to EC2 and select Key Pairs from the side.
Create a new Key Pair and download the file.
This will allow you to ssh into any instances we create in the future (we’ll go through how to define the key in Terraform).
Now, you’re ready to set up your workstation, development environment, whatever you want to call it.
I’m on Fedora, so installing aws-cli is fairly easy.
sudo dnf install awscli -y
Too easy.
If you’re on Windows, OS X, Ubuntu, or another operating system, the process is fairly painless. You can find out how to install for your operating system at this website.
Once you have awscli installed, it’s time to set up your credentials. Open up the terminal and type:
aws configure
Enter your access key, secret key, default region (I’m using us-east-1 for these examples), and default output. The default output format can be json, text, or table (newer CLI versions also support yaml), but I always leave it blank, which falls back to json.
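Under the hood, aws configure just writes two small INI files in your home directory. Assuming the region above, they end up looking something like this (keys redacted):

```ini
# ~/.aws/credentials
[default]
aws_access_key_id     = AKIA................
aws_secret_access_key = ....................

# ~/.aws/config
[default]
region = us-east-1
output = json
```

Handy to know if you ever need to hand-edit a profile or add a second one.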
Finally, we’re going to download Terraform. Head over to the download site and select your operating system. Thanks to the beauty of Go, you only have to worry about a single binary. Unzip the file and move it to /usr/bin or, if you’re careful about such things, to /opt or somewhere you’ll remember. If you go the /opt route, I recommend setting up an alias in ~/.bashrc:
alias terraform=/opt/terraform
If you’re on Windows 10
Check out the Windows Subsystem for Linux here. Otherwise, download the zip file and extract the binary. You can add it to your $PATH by storing it somewhere safe (Like C:\Program Files):
Control Panel > System > Advanced System Settings > Environment Variables (under Advanced tab) > System variables > Add the path (C:\Program Files\Terraform for example) > Click OK through all the menus.
Now, close PowerShell if you have it open. Relaunch and type terraform to test if it was successful.
Next, you’ll want to fire up your favorite editor. Sublime Text, Visual Studio Code, and even Intellij have Terraform plugins for syntax highlighting and autocompletion.
Create a new file and name it main.tf. Hopefully your editor recognizes this as a Terraform file; if not, check out one of the editors above and set up the Terraform plugin.
We’re going to build a very, very basic ec2 instance (server) that allows traffic to ports 80 and 22.
The first thing you need to do is add a provider. Terraform works with OpenStack, Azure, AWS, Google Cloud Platform, KVM, and Hyper-V.
provider "aws" {
  region = "us-east-1"
}
This gives us the AWS calls with our default region set to N. Virginia (us-east-1).
Next, we’ll want to declare our resources. I use the word “declare” for a reason: Terraform is a declarative IaC language. Terraform checks for existing resources before creating new ones, because it uses the code to describe a “desired state”. This differs from something like Chef or Ansible, where steps execute procedurally, in the order you write them. If you’ve used Puppet, you’re familiar with how a declarative IaC system works.
The two resources we’re going to define are our instance and our security group to accept traffic.
resource "aws_instance" "serv-1" {
  ami                    = "ami-0ff8a91507f77f867"
  instance_type          = "t2.micro"
  vpc_security_group_ids = ["${aws_security_group.instance.id}"]
  key_name               = "terraform" # use whatever key pair name you created earlier

  tags {
    Name = "TestServer01"
  }
}
The resource starts with “aws” which is a good indication as to what platform we’re on. I grabbed the AMI from the EC2 Management Console. That image is Amazon Linux with Docker and MySQL repos, as well as Python, awscli, and other tools installed.
Next, we define the instance type. You can go as heavy as you want, but keeping the free tier in mind, I’m rocking t2.micro.
The vpc_security_group_ids references something we haven’t made yet, but since I planned on creating a security group, it was easy enough to add now. If you’ve written in any scripting or programming language before, you probably recognize the string interpolation. That expression calls the “aws_security_group” named “instance” and retrieves its “id”. The dot operator separates the objects.
Last, we have our key_name, which is the name of the key pair we created earlier.
The “tags” section is optional, but it’s good to get into practice. You can name your server whatever you want, and have other tags such as who created the server, its purpose, etc.
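For instance, a slightly richer tags block inside that same resource might look like this (the extra tag names and values here are entirely hypothetical, just to show the shape):

```hcl
tags {
  Name      = "TestServer01"
  CreatedBy = "AdminDev"               # hypothetical: who spun it up
  Purpose   = "terraform-walkthrough"  # hypothetical: why it exists
}
```

Consistent tags pay off quickly once you have more than a handful of resources to keep track of.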
Next, we’re going to define the security group resource.
resource "aws_security_group" "instance" {
  name = "testrunsec"

  ingress {
    to_port     = 80
    from_port   = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    to_port     = 22
    from_port   = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
Hopefully you’re getting into the rhythm of how this all works (at least, as far as formatting is concerned).
We’ve defined and named our security group resource and created two inbound ports, 80 for http and 22 for ssh access. The cidr_blocks accept an array of acceptable addresses.
Hey, what do you call 17.0.0.0/8?
…
Apple Cidr.
Alright, be serious.
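On that note, 0.0.0.0/0 means the whole internet can reach these ports, which is fine for a throwaway lab box but not much else. If you want to lock ssh down to just your own machine, cidr_blocks takes any list of CIDRs; a sketch, with a placeholder address standing in for your real public IP:

```hcl
ingress {
  to_port     = 22
  from_port   = 22
  protocol    = "tcp"
  cidr_blocks = ["203.0.113.10/32"] # placeholder: your own public IP; /32 = a single address
}
```

Swap that ingress block into the security group above and only that one address can reach port 22.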
The last thing you want to do is create another file and save it as outputs.tf in the same directory as your main.tf file.
In the new file, we want to get our public IP address. This way, whenever we deploy a new instance, we’re not constantly bouncing between our editor, our terminal, and the AWS console.
output "ip_addr" {
  value = "${aws_instance.serv-1.public_ip}"
}
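You can declare as many outputs as you like in the same file. For example, to also grab the instance’s public DNS name:

```hcl
output "dns_name" {
  value = "${aws_instance.serv-1.public_dns}"
}
```

Both values will print at the end of every apply.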
Now we’re ready to launch our instance!
Navigate to the folder with your main and outputs files, and execute:
terraform init
terraform plan
Make sure you see that two resources are going to be created. If the plan stage fails, make sure you’re defining the provider as “aws” and that your aws configure settings are correct.
If all looks good, go ahead and deploy!
terraform apply
You should see your output:
Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
Outputs:
ip_addr = x.x.x.x
Where “x.x.x.x” is your public IP address.
Time to ssh in (don’t forget your .pem file). With the key pair from earlier and an Amazon Linux AMI, the command looks something like ssh -i terraform.pem ec2-user@x.x.x.x, where x.x.x.x is the IP from your output.
Success!
You might have noticed some things change (depending on your editor and environment) while you were running the commands above. A .terraform directory, a terraform.tfstate file, and a terraform.tfstate.backup file are created.
The directory keeps track of the systems and plugins you’re running with Terraform. The tfstate file keeps track of what you’re running. So, with this single file, you can change and add all kinds of resources; when you plan and apply, the tfstate file will say “Hey, serv-1 is now serv-test, so we need to swap those out. Oh, by the way, they added an auto-scaling group and load balancer, so throw those instances in with that. You know what? Never mind, delete those instances and create new ones.”
A bit of an exaggeration, but that’s what the state files are for. Since you’re by yourself, there will rarely be an issue with making changes. However, it’s worth noting that remote state files and build servers are often necessary with teams to prevent conflicts. If I’m making a change and Phil is making a change, without protections in place (like a single state file with locking policies), we can overwrite each other’s changes or, worse, brick the network.
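As a preview of that team setup, Terraform supports remote state backends out of the box. A minimal sketch of an S3 backend with DynamoDB locking, assuming a bucket and lock table you’ve already created (both names below are hypothetical):

```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"    # hypothetical: your pre-created S3 bucket
    key            = "lab/terraform.tfstate" # path to the state file inside the bucket
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"       # hypothetical: table used for state locking
  }
}
```

With this in place, the state lives in S3 and DynamoDB prevents two people from applying at the same time, which is exactly the overwrite scenario described above.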
Speaking of bricking, go ahead and run terraform destroy to kill your beautiful creation (so we don’t incur any charges).
That’s all for today. We’ll explore the scenarios above and make a build server with a job queue for small teams to work with. I’ll also play around with the idea of using a self-hosted provider instead of a cloud provider to see how that goes. I want to keep this growing and cover Jenkins, GitLab, Git, Terraform, Chef or Ansible, maybe containerization, maybe orchestration, some InfoSec (SonarQube), etc. The full DevOps workflow.
I had some questions (from my podcast) about some resources and made a little video about it. I cover some books on DevOps and a sort of “Teach Yourself Computer Science” path. I received two e-mails so it wasn’t like I was getting blown up lol. But, I want to give back to the community. Here is the video if anyone is interested.
I’ll add more to this as I learn and grow and find time. Feel free to contribute if you’re a Jenkins master or want to introduce TravisCI to the world. I love Ansible and Chef, but I’m willing to bet a lot of people out there are more knowledgeable about both than I am.