Ken Muse

Fast Start Dev Containers

In last week’s article, Using the Docker Cache, we learned that we can take advantage of the Docker cache to reduce our build times. There’s another advantage to using that cache: we can reduce the time it takes to spin up a dev container!

It’s almost Halloween, so you’re probably thinking “this must be some kind of witchcraft!”. Almost. It’s a configuration setting in the devcontainer.json file. If you’re referencing a Dockerfile and building an image dynamically, you can take advantage of the build.cacheFrom setting. This is passed to the docker build process as the --cache-from parameter. By providing a value, we enable the development environment to use cached layers. For example:

```json
"build": {
    "dockerfile": "Dockerfile",
    "cacheFrom": [
        "type=registry,ref=ghcr.io/myacct/myimage:main",
        "type=local,src=../docker-cache"
    ]
}
```

You might have noticed that we can pass multiple cacheFrom values. This parallels our ability to pass multiple --cache-from values on the command line. Docker will process these in order, trying to resolve the layers from each provided type and stopping at the first match. If there is no match, the caches are ignored. In this case, our dev container first looks at a GitHub registry to resolve myacct/myimage:main. If the cached layers aren’t found there, it then looks to a local folder.
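For comparison, the same ordered fallback can be expressed directly on the command line with a BuildKit build. The image refs and paths here are illustrative:

```shell
# BuildKit tries each --cache-from in order, using the first cache
# that resolves; unresolved caches are simply ignored.
docker buildx build \
  --cache-from type=registry,ref=ghcr.io/myacct/myimage:main \
  --cache-from type=local,src=../docker-cache \
  --tag myacct/myimage:dev \
  .devcontainer
```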

The local folder can be populated by doing a local Docker build (possibly using --cache-from) and configuring the --cache-to value to use a local cache and that path. The remote registry cache is the more interesting aspect. We can configure a GitHub Action to build the image any time a new Dockerfile is pushed. This allows us to build and cache some of the layers we need, relying on the Actions build process to generate the necessary layers. For example, the trigger can look like this:

```yaml
on:
  push:
    branches: [ "main" ]
    paths:
      - .devcontainer/Dockerfile
```

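As a sketch of the local-folder approach described above, a local build can populate the cache directory that the type=local entry reads. The paths and tag here are illustrative:

```shell
# Build locally, writing the cache layers to ../docker-cache so that
# the dev container's type=local,src=../docker-cache entry can reuse them.
docker buildx build \
  --cache-to type=local,dest=../docker-cache,mode=max \
  --tag myacct/myimage:main \
  .devcontainer
```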
This ensures that any time there is a push to main that updates the dev container’s Dockerfile, a new build is triggered. The build and push can use the cache-from/cache-to parameters to store the cache metadata in the registry. For example, we can use an inline cache-to and a registry-based cache-from (implied min mode) to store cache results in the registry:

```yaml
# Build and push Docker image with Buildx (don't push on PR)
- name: Build and push Docker image
  id: build-and-push
  uses: docker/build-push-action@v3
  with:
    context: ${{ github.workspace }}/.devcontainer/
    push: ${{ github.event_name != 'pull_request' }}
    tags: ${{ steps.meta.outputs.tags }}
    labels: ${{ steps.meta.outputs.labels }}
    cache-from: type=registry,ref=${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:main
    cache-to: type=inline
    file: ${{ github.workspace }}/.devcontainer/Dockerfile
    platforms: linux/amd64,linux/arm64
    build-args: |-
```

This means that whenever the Dockerfile changes, the image is automatically rebuilt and made available to the dev containers. In addition, the build cache ensures that layers which haven’t changed are not rebuilt, speeding up the build process. When a dev container starts, it can use the cached layers to minimize the time required to start the container. You get a faster start for your container while still being able to alter the Dockerfile as needed: the performance boost of specifying an image with the configurability of a Dockerfile. Best of both worlds, right?

Try experimenting with these settings to find the right balance for your team. And one more thing: you’ll notice that I threw in the platforms parameter. I always make sure the images I use support both ARM64 and AMD64, which requires setting up QEMU as part of the workflow. The emulator takes substantially longer to build ARM64 images (and uses quite a bit of the runner’s memory), but it allows me to use the image on both Mac and PC. It requires just two lines in the Actions workflow:

```yaml
- name: Setup QEMU
  uses: docker/setup-qemu-action@v2
```
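One assumption worth calling out: the multi-platform build-push-action step also needs a Buildx builder configured in the workflow. The docker/setup-buildx-action handles that, typically placed right after the QEMU step:

```yaml
- name: Setup Docker Buildx
  uses: docker/setup-buildx-action@v2
```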

Hopefully we see native ARM runners in the near future. That will substantially improve the build time and allow us to build a multi-architecture image by running two builds in parallel on two runners. Until then, QEMU provides a nice workaround for creating working, cacheable images.

In the meantime, have fun playing with pre-built images for your dev container!