Devcontainers
This is a vastly oversimplified and technically inaccurate description of how to use devcontainers to make software development easier. I'm writing it for someone who's never used them before.
Virtual Machines
When you run a virtual machine, you have a host OS (whether that's macOS, Windows or Linux or whatever). Then you download a copy of your virtual OS (let's say Ubuntu Linux), and use some virtualisation software (a hypervisor) to run the virtual OS over the top of the host OS.
Each virtual OS is allocated some disk space, some RAM and some CPU time - and the virtual OS kernel runs separately from the host OS kernel. The virtualisation layer intercepts requests made by the virtual kernel and translates them for the host kernel - for example, when hardware access is required.
But if you want to run four copies of Ubuntu at once, each copy needs its own allocation of disk space, RAM and CPU - so running multiple VMs quickly gets expensive for the host machine.
Docker
Docker uses features of the Linux kernel - namespaces and control groups - to implement "containers" (a concept inspired by FreeBSD jails).
Instead of running a virtual OS on top of the host OS, all containers run on a single host kernel. Each container has its own namespace, keeping its processes, filesystems, networks and users separate from the others. And each container has maximums on how much CPU, RAM and other resources it can consume.
But even when different docker containers appear to be running different operating systems, they all share the same kernel - each container only carries its own userland (libraries, tools and files), not a whole OS. This in turn means that running multiple containers is far less resource intensive than running multiple VMs.
This also means that containers, unlike VMs, have to be some variant of Linux. The macOS and Windows versions of docker actually start a lightweight Linux VM, then run the containers inside it.
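As a concrete illustration, here's how you might cap a container's resources when starting it (a sketch - the image and the limits are just examples):

```bash
# Start an interactive Ubuntu container, capped at 2 CPUs and 512MB of RAM
docker run --cpus 2 --memory 512m -it ubuntu:24.04 bash
```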
Images
There are two key parts to a docker image: the base image itself, plus the `Dockerfile`.
The base image, similar to a VM image, contains the files, executables and components of the operating system to be run (for example, Ubuntu Server, or CentOS).
A `Dockerfile` is a standard text file that describes modifications to be made to the base image. When these modifications are executed (in so-called layers, which are cached to speed up subsequent builds), docker builds a new image, which you can then run (the equivalent of launching a VM). This modified image can then be used as a base image for further modifications - or you can store the new image in a repository, for reuse later on.
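For example, a minimal `Dockerfile` might look like this (a sketch - the package and the application paths are purely illustrative):

```dockerfile
# Start from a known base image
FROM ubuntu:24.04

# Each instruction below creates a cached layer
RUN apt-get update && apt-get install -y --no-install-recommends curl \
    && rm -rf /var/lib/apt/lists/*

# Copy our (hypothetical) application into the image
COPY ./app /opt/app

# The command to run when the container starts
CMD ["/opt/app/start.sh"]
```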
There are two key facts about this process. Firstly, the `Dockerfile` format is standardised, meaning it can be used in any docker environment. Secondly, the images themselves can be pre-built, then downloaded as needed.
Because of this, docker is now used for managing server deployments. Rather than building a virtual machine, then installing components using tools such as Ansible or Puppet - or via a series of bash scripts and CLI commands - you can pre-build a docker image from a known base image plus a `Dockerfile`. Then, every time you want to start a new server, you simply install docker onto that machine (no matter what host OS it is running) and download the image from your repository. And you now have identical execution environments, reproducible every time.
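In practice, that workflow looks something like this (the registry and image names are hypothetical):

```bash
# Build an image from the Dockerfile in the current folder, and tag it
docker build -t registry.example.com/myteam/myapp:1.0 .

# Store it in your repository
docker push registry.example.com/myteam/myapp:1.0

# Then, on any machine with docker installed, download and run it
docker pull registry.example.com/myteam/myapp:1.0
docker run -d registry.example.com/myteam/myapp:1.0
```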
Dev containers
The benefits of this standard execution environment are not just confined to production servers.
When developing software, we often install various tools - whether it is the compiler toolchain, various runtimes or linters and formatters - and different members of the team often end up with differing configurations. Likewise, most software has dependencies on other services - a MySQL or Postgres database, a key-value store, a file-server. Installing these by hand onto development machines, then ensuring that the versions and configurations all align, can be an arduous process.
But containers allow you to specify and build an exact replica of your environment every time.
So the dev container standard defines a specification for development environments. The `.devcontainer/devcontainer.json` file specifies a base image to use (with prebuilt options for most programming tools) - or allows you to specify your own `Dockerfile`. It also provides a standardised mechanism for customising that dev environment: adding additional tools, setting environment variables or installing specific plugins for IDEs and editors. Coupled with a `compose.yml` file (which is part of the standard docker toolchain), you can also run separate containers for your dependent services (such as database servers) - and they will all run together on a private network.
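A small `devcontainer.json` might look something like this (a sketch - the name, service and extension are just examples):

```json
{
  "name": "my-app",
  "dockerComposeFile": "compose.yml",
  "service": "app",
  "workspaceFolder": "/workspace",
  "customizations": {
    "vscode": {
      "extensions": ["dbaeumer.vscode-eslint"]
    }
  }
}
```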
No more messing about with postgres installs and managing different python or ruby versions on a developer laptop.
In other words, this combination of `devcontainer.json`, `Dockerfile` and `compose.yml` can specify everything a developer needs to get a full development environment running on their machine in a single command.
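To continue the sketch above, a matching `compose.yml` (assumed here to live in the same `.devcontainer` folder) might define the app container alongside a postgres database - service names and versions are illustrative:

```yaml
services:
  app:
    build:
      context: ..
      dockerfile: .devcontainer/Dockerfile
    volumes:
      # Mount the project source into the container
      - ..:/workspace
    # Keep the container alive so the IDE can attach to it
    command: sleep infinity

  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: devpassword
```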
Using a dev container
Firstly, note that the dev docker container and the production docker container are separate. The production image should be as slim as possible, with only the essential components for the software to run in a live environment; whereas the development container will be packed full of tools, scripts and other software to make writing the software simple and easy. Normally the dev container is specified in the `.devcontainer` folder, while the production container is specified in the root folder of the project (although this depends on your source code organisation patterns).
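Following that convention, a typical project layout might be (one common pattern; yours may differ):

```
my-project/
├── .devcontainer/
│   ├── devcontainer.json
│   ├── compose.yml
│   └── Dockerfile        # dev image: packed with tools
├── Dockerfile            # production image: slim, runtime only
└── src/
```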
VSCode has a suite of (proprietary) extensions that will automatically notice if a dev container is part of your project source code. It will build the containers specified, then run the VSCode server inside the primary container. This means that, if your environment specifies that everybody should use a particular linter, controlled by a specific VSCode extension, that extension will be installed into the container and VSCode will automatically start using it - no matter what is installed on the host machine.
Alternatively, the devcontainer CLI allows you to control your development environment from a shell. Move to your project source folder, then run `devcontainer up --workspace-folder .` and your development environment will be started for you. As your source folder is mounted inside the container, any edits you make will be reflected there - so when you execute your code (probably by opening a shell inside the container with `devcontainer exec`, although VSCode and other IDEs allow you to connect to the container directly), you can run your tests or execute the application from within the known environment.
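For example (assuming a project with a `.devcontainer` folder; the test command is hypothetical):

```bash
# From the root of your project (the folder containing .devcontainer/)
devcontainer up --workspace-folder .

# Then run commands inside the running container
devcontainer exec --workspace-folder . npm test
```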
In all these cases, the devcontainer maintains a port mapping between the private docker network and the host machine - so if you are running a Rails application on port 3000, it will be available on your host machine at `http://localhost:3000`.
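Port forwarding can be declared in `devcontainer.json` too - for example:

```json
{
  "forwardPorts": [3000]
}
```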
Finally, there are a number of other tools that all use the devcontainer standard. GitHub Codespaces uses a container to allow you to edit and run your code within a browser. And DevPod (devpod.sh) is an open source tool that allows you to run your devcontainer remotely (on a cloud machine) whilst still editing using an IDE installed on your own host machine (it also works with local docker containers). Simply install the tool, tell it where to create its containers (using the local docker plugin, or a cloud-provider-specific plugin), then give it a git URL or local folder and it will connect your IDE to the running container.
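A DevPod session might look something like this (a sketch - the repository is just an example):

```bash
# Tell devpod where to create its containers (here: the local docker daemon)
devpod provider add docker

# Create a workspace from a git URL and connect your IDE to it
devpod up github.com/example-org/example-app
```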