Virtual Environments for Development?

Sup

I heard someone say (well, read someone type, let's be real) that they use virtualization to install all the development crap they need.

It was Chris (IIRC), hi Chris!

I wanna know how this works, whether everyone does it, and why exactly.

So far, while learning to develop, I've been stuffing my OS with bullshit, and it seems wrong. I might need help understanding this:

should I keep cluttering up bare metal and call it a day, or should I use controlled environments for development?

1 Like

While virtualization isn’t strictly needed, it’s a professional best practice to develop code in a way that’s self-contained, repeatable, and as independent of host state as possible, given your application.

This reduces situations where your app works on your dev box but doesn’t work in production (or on another dev’s box) because all of the components you’ve been “stuffing your OS” with aren’t exact matches.

Tools like Docker and Vagrant are, first and foremost, development tools that can help keep this sane. Similar tools usually exist for other platforms.
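
As a concrete taste of “self-contained and repeatable”, here’s a sketch of running a build inside a pinned container image instead of the host toolchain (the image tag and paths are just examples, assuming a Maven project):

```sh
# Build inside a pinned JDK/Maven image instead of whatever toolchain
# happens to be installed on the host; every machine gets the same result.
docker run --rm -v "$(pwd)":/app -w /app \
  maven:3.9-eclipse-temurin-17 mvn package
```

Throw the container away afterwards and nothing sticks around on your OS.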

3 Likes

Why not both? :wink:

If you have repeatable environments, VMs work really well for this. You can set up Ansible, Kickstart, Chef, Terraform… you get the idea. Just spin up a VM when you need the environment. This could be LAMP, LEMP, MERN, MEAN, MEN, RoR, whatever. You have your “stack” running in the VM and then run your tests/benchmarks from localhost.
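
For instance, a minimal Ansible playbook for a LAMP-ish VM might look like this (the host name and package list are purely illustrative):

```yaml
# lamp.yml - a hypothetical sketch; assumes an inventory entry named "devbox"
- hosts: devbox
  become: true
  tasks:
    - name: Install Apache, MariaDB and PHP
      apt:
        name: [apache2, mariadb-server, php, php-mysql]
        state: present
        update_cache: true
```

Run it against a freshly spun-up VM and you get the same environment every time.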

If you have a good workflow (Jenkins + Git + Terraform, say), you can make changes locally, push to Git, let Jenkins test, and then apply/deploy the changes via Terraform.

Most people use containers for this reason. Their “stack” is in a container, they develop locally, and then “deploy” to the container to make sure it all works. Same concept as a VM, with some pros and cons that VMs don’t have.

If you have services like Elasticsearch, Logstash, Jenkins, whatever, you could probably get away with running those in containers, but I have (always running) VMs for those services.

2 Likes

That dependency you downloaded a long time ago to get that error message to go away and conveniently forgot about lol

3 Likes

If at all possible, use scripted deployments and fully downloaded installers. Don’t use network sources.
If you can’t script it, write it all down in a document.
Make sure to test it from a clean OS install to see if you missed anything.
Once you get a good development environment set up, making it into a virtual appliance is a great idea. If it isn’t open source, licensing can be a little tricky.
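
For example, a scripted setup under those rules is just a shell script pointed at local installers (the filenames here are hypothetical):

```sh
#!/bin/sh
# Hypothetical setup script: installs from pre-downloaded packages only,
# no network sources, so it still works identically years from now.
set -e
sudo dpkg -i ./installers/openjdk-17.deb
sudo dpkg -i ./installers/docker-ce.deb
sudo dpkg -i ./installers/postgresql-16.deb
```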

3 Likes

As someone who does this “orchestration” for a good portion of my job and hobbies… I would say, if you want to get into standardizing your environment, you want to learn docker. You’ll need to learn a few things along the way, like how to build virtual machines, but everything that’s been mentioned so far can feed docker.

6 Likes

thanks for all the replies but wait, there’s more

I’ll get my feet wet with docker, understand a few things before really touching it, but le me see if I got it straight:

I develop on my machine, let’s say some Java application, then wrap it up and all that, and then I set up a VM (or Docker container), put my dependencies in, and see if it works?

like a VM for each project/stack?
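
From what I’ve gathered so far, the “wrap it up” part for Docker would be a Dockerfile along these lines (the jar name and image tag are just my guesses):

```dockerfile
# Hypothetical packaging for a Java app; assumes the build
# already produced target/app.jar on the host.
FROM eclipse-temurin:17-jre
WORKDIR /app
COPY target/app.jar app.jar
CMD ["java", "-jar", "app.jar"]
```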

there were some really nice names I also learnt today; this week I’ll also get to know Terraform, Jenkins and all that

also need to stop being lazy and finally learn my way around git

1 Like

Oh man, the fights this causes!

Some people stick all the things into one Docker container.

Other people put each individual separate thing in its own Docker container and make them talk to each other over network sockets.

So I’ve seen a project that was some kind of web forum in a Docker that contained MySQL (or Maria or whatever they’re calling it) and the web server and the PHP FPM all combined. And the sysadmin was supposed to give it its own volume storage sidecar or whatever they’re calling it these days, to put the database into.

Then I’ve also seen people who put the database in one container, the PHP in a second container and the user data in a third container and hook them together with Kubernetes or Docker Compose. Oh, and the user was expected to provide their own web server / reverse proxy to connect it to the real world. Which may be possible with just a simple Kubernetes config option these days. I can’t remember.

2 Likes

There’s nothing limiting folks from offering two kinds of containers, ones used in prod with storage and certificate handling externalized, and ones used for easy development/single machine home deployments with everything bundled in.

Specifically for development tools cluttering your system, there are things like Bazel where hermetic builds are the norm, and folks often have versions of java/c++/python checked into their repo, or at least stubbed out. Oftentimes the output is a Debian package or a container image, but getting the build cache warmed up takes a while.

2 Likes

IMO a VM for each project / stack seems like the right approach in many scenarios. My industry experience is limited, but that’s the approach at my current job. We have individual VMs for:

  • Ansible / Jenkins (to be fair, not the same stack, but we use them both heavily, in tandem, for CI/CD)
  • Elastic Stack
  • Nagios alerts
  • Graylog

And some more I can’t remember. If you have all of these services in one VM, then (beyond obvious issues such as performance) a single VM crash can take down your entire infrastructure.

2 Likes

Definitely not a very Docker-like thing to do, though. I don’t really understand why you’d put your database into a container together with your application. There are already readily available images for pretty much every notable database out there. If you bundle your database with your application container, you are imo just limiting deployment options for no good reason.

3 Likes

A VM for every project is kind of a slow start. This is where docker excels: you can develop configurations, not projects. Docker supplies the php, nodejs, or java, not the actual code. That’s generally how I would use docker for development. VMs are basically the same: one VM per environment or configuration. Unless you’re working on high-security projects, there’s little benefit in spending all that additional space and resources to partition your projects.
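
Something like this, where the image supplies the runtime and your code is only mounted in, so there’s nothing project-specific to rebuild (the tag and paths are examples):

```sh
# The node image provides the runtime; the project directory is
# bind-mounted, so edits on the host are picked up immediately.
docker run --rm -it -v "$(pwd)":/usr/src/app -w /usr/src/app \
  node:20 node index.js
```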

2 Likes

YES… save yourself the issues… Most jobs are going to in-house VMs or something like Amazon WorkSpaces, and you cannot use virtualization on those services (right now). So don’t count on doing mobile development there.

The last three places I’ve worked coded solely on VMs.

1 Like

I still find it a bit tedious to run a project in docker all the time while developing it. Rebuilding images still takes some time. And in all of those high-level languages you are basically already using a VM anyway (one is quite literally called the Java Virtual Machine). It’s good for testing production deployments locally. But to actually develop with all the time? Not so sure. It would definitely need excellent IDE support for me to do that.

I do like docker, but not that much. It is however very nice for the things that need to live around your project, like databases, caching servers and whatever else you need. You can spin it up really quickly and you won’t gobble up your system with a bajillion databases.
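
E.g. a throwaway database for whatever you’re working on is one command, and it vanishes when you stop it (the image and password are just examples):

```sh
# --rm removes the container on stop; the data disappears with it,
# which is exactly what you want for scratch development databases.
docker run --rm -d --name dev-db -p 5432:5432 \
  -e POSTGRES_PASSWORD=devonly postgres:16
```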

1 Like

I use docker-compose files. This relates to what I said early on: those same docker-compose files are what I deploy to my swarm(s).

Also, FWIW, the docker side is generally the part of the project that I build, so my team has me to orchestrate the projects. It IS a bit of a heavy lift to do by yourself in general.

Another pro-tip: most configurations can be supported with the generic, official docker images, like MySQL or Redis. If you’re deploying something more custom, it’s probably time for a VM or your own Dockerfile.

1 Like

No question about that. It doesn’t change the fact that when you change a line of code you’ll probably have to rebuild the image, unless you’re working with an interpreted language (e.g. JavaScript, PHP, Python) where your code can literally just be mounted into the container. Even then it won’t be a bulletproof production test: bundling up your CSS and JS files can mess up your project, and so can compiling something in release mode.
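
For the mounted-code case, the usual trick is a docker-compose.override.yml, which Compose merges in automatically for local development (the paths here are illustrative):

```yaml
# docker-compose.override.yml sketch: overlay local source on top of the
# code baked into the image. Local dev only; deployments that pass
# explicit -f files won't pick this up.
services:
  app:
    volumes:
      - ./src:/var/www/html
```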

I don’t question that it’s good to test your final program as close to production as you can get (especially if you deploy it into containers). I just don’t think it’s necessary all the time while developing new code.

1 Like

Yeah, docker is best for pre-built binaries, and building those binaries is a good job for CI workflows.

2 Likes

In CI it definitely makes sense, yeah.

1 Like

@anon85095355 As you can see, there are interesting, opposing philosophies and workflows to each method :wink:

BTW, I don’t think anyone here has said anything “wrong”, and they’ve all given “correct” responses. :stuck_out_tongue_winking_eye:

Do what works for your team/organization and what works for you. If they’re all using VMware and KVM or EC2, then look into Chef/Ansible/Puppet. If they’re looking at moving everything to Docker, start working with Dockerfiles, docker-compose, and Docker Swarm.

Or use LXC/LXD and be free of the marketing hype :wink:

1 Like

That seems to be a fairly good option, even more so coz it helps avoid this kind of mess.

thank you all! If there’s anything else to be said I’m all ears; everyone is welcome to bring their opinion

1 Like