Sometimes you need Kubernetes to do something a bit out of the ordinary, such as hosting Docker-in-Docker (DinD) containers. This is a common pattern whenever you need to run Docker commands directly from within a container. The approach works perfectly fine for smaller use cases, but if you pull images from Docker Hub, you might find that it suddenly begins to fail. Docker limits unauthenticated users to 100 pulls per IP address every six hours. If you exceed that limit, you will see a 429 Too Many Requests error.
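If you want to confirm that rate limiting is the culprit, Docker Hub reports your remaining pull allowance in response headers on its documented ratelimitpreview/test image. A quick sketch, assuming curl and jq are available:

TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
curl -s --head -H "Authorization: Bearer $TOKEN" \
  https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit

Look for the ratelimit-limit and ratelimit-remaining headers in the output.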
How does this happen? DinD runs Docker as a daemon inside a container. It does not know about Kubernetes or integrate with it. As a result, it cannot access the Kubernetes image cache to reuse images. It also cannot rely on any imagePullSecrets that you might have configured for your Kubernetes cluster.
Creating a secret
First, create a Kubernetes secret that contains the credentials. To create it in the same namespace as the DinD pod, use this command:
kubectl create secret docker-registry image-pull-secret \
  -n $NAMESPACE --docker-username $USER --docker-password $PASSWORD
This command creates a secret, image-pull-secret, that contains the credentials. The secret uses the Kubernetes type kubernetes.io/dockerconfigjson and contains a single data item, .dockerconfigjson. This item is a base64-encoded Docker configuration file. If you want to connect to Docker Hub, use your Docker username. For the password, I recommend creating a personal access token in your account.
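If you want to double-check what the secret contains, you can decode the .dockerconfigjson item (the backslash escapes the dot in the key name; base64 -d assumes GNU coreutils):

kubectl get secret image-pull-secret -n $NAMESPACE \
  -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d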
If you have already performed a local docker login, you can create a generic secret from your local Docker configuration file. On Linux, this file usually exists at ~/.docker/config.json. Create a secret from that file using the following command:
kubectl create secret generic image-pull-secret \
  --from-file=.dockerconfigjson=$HOME/.docker/config.json \
  --type=kubernetes.io/dockerconfigjson
If you want to create a secret for a different registry, add --docker-server (and if needed, --docker-email) to the command. For example, to authenticate with the GitHub Container Registry (ghcr.io), use the following command:
kubectl create secret docker-registry image-pull-secret \
  --docker-server=ghcr.io \
  --docker-username $USER \
  --docker-password $PASSWORD
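If you want to sanity-check the credentials before creating the secret, you can try them with a local docker login first (this assumes the Docker CLI is installed on your workstation):

echo "$PASSWORD" | docker login ghcr.io -u "$USER" --password-stdin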
You can use this kind of secret as an imagePullSecret in a pod spec (see the sketch below). This tells Kubernetes how to authenticate with the registry when pulling images. Unfortunately, this does not make the secret available to the DinD container. To solve that, you need to take a few more steps.
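For reference, here is a minimal sketch of a pod consuming the secret directly as an imagePullSecret (the image name is just a placeholder):

apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  imagePullSecrets:
    - name: image-pull-secret
  containers:
    - name: app
      image: ghcr.io/mycorp/super-secret-image:latest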
Authenticating DinD
The DinD container does not need to consume the credentials directly. Instead, it relies on the client to provide them. The client usually stores those values in its home folder: $HOME/.docker/config.json. You just need to mount the secret into the correct location. To do that, follow these two steps.
First, mount the secret into a volume in the pod, providing the name of the secret, the item key (.dockerconfigjson), and the path (file) where the secret contents should go (config.json):
volumes:
  - name: docker-secret
    secret:
      secretName: image-pull-secret
      items:
        - key: .dockerconfigjson
          path: config.json
Then, mount that secret into the correct location in the container that has the Docker CLI:
volumeMounts:
  - name: docker-secret
    mountPath: /root/.docker/config.json
    subPath: config.json
This configuration reads the volume docker-secret and mounts it into the container at /root/.docker/config.json. The subPath option specifies that only the single file config.json from the volume should be mounted. In this case, the container uses the root user, so the file goes in that user's home directory. If you use a non-root user (such as the runner user in GitHub Actions Runner Controller), adjust the path accordingly. For example, ARC uses /home/runner/.docker/config.json.
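In that case, only the mountPath changes. A sketch of the ARC-style mount:

volumeMounts:
  - name: docker-secret
    mountPath: /home/runner/.docker/config.json
    subPath: config.json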
Putting it all together
Now that you know what you need, you can put it all together. The client container and Docker-in-Docker container rely on a shared volume (and shared docker.sock) to communicate. As a result, this example requires two volumes.
The complete pod spec looks like this:
apiVersion: v1
kind: Pod
metadata:
  name: test-dind
spec:
  containers:
    - name: client
      command: ["/bin/sh", "-c", "sleep 10000"]
      image: docker:cli
      env:
        - name: DOCKER_HOST
          value: unix:///var/run/docker.sock
      volumeMounts:
        - name: dind-sock
          mountPath: /var/run
        - name: docker-secret
          mountPath: /root/.docker/config.json
          subPath: config.json
    - name: dind
      image: docker:dind
      args:
        - dockerd
        - --host=unix:///var/run/docker.sock
        - --group=123
      securityContext:
        privileged: true
      volumeMounts:
        - name: dind-sock
          mountPath: /var/run
  volumes:
    - name: dind-sock
      emptyDir: {}
    - name: docker-secret
      secret:
        secretName: image-pull-secret
        items:
          - key: .dockerconfigjson
            path: config.json
Save this to a file called deploy.yml and run kubectl create -f deploy.yml. This command creates a sample pod with two containers. The first container (client) contains the Docker CLI. The second container (dind) runs the Docker daemon. The client now receives a config.json file containing the credentials, which enables it to authenticate. If you use a Docker Hub account, you can run kubectl exec -it test-dind -c client -- docker info to verify that authentication works. This command outputs a lot of information, ending with something like this:
Name: test-dind
Docker Root Dir: /var/lib/docker
Debug Mode: false
Username: kenmuse
Experimental: false
Insecure Registries:
::1/128
127.0.0.0/8
Live Restore Enabled: false
Product License: Community Engine
Notice the Username field? This field indicates that the CLI found the credential and is using it. You only see this field with Docker Hub credentials. For a more generic test, try pulling an image from the registry using the secret. For example:
kubectl exec -it test-dind -c client -- docker pull ghcr.io/mycorp/super-secret-image:latest
If the CLI has the correct credentials, it should pull the image without any issues. If you instead see an error like this:

Error response from daemon: denied
command terminated with exit code 1

check the secret to make sure you configured it properly (and if it's a PAT, that it has the correct scopes).
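It also helps to confirm that the credentials file actually landed where the CLI expects it:

kubectl exec -it test-dind -c client -- cat /root/.docker/config.json

If that file is missing or empty, revisit the volume and volumeMounts definitions above.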
Congratulations! You now have a working DinD container that can authenticate with Docker Hub (or any other registry) from within Kubernetes. This approach lets you run Docker commands from within a container and gives each pod access to one or more registries.