Throughout this series, we’ve explored the landscape of secret storage – from the simplicity of file systems and environment variables to federated identities and vault systems. Each approach has distinct strengths and weaknesses, and each can be hardened against common attack vectors. But here’s the reality: no single method provides complete protection on its own.
While a single method can be elegant and simple, it often leaves gaps that can be exploited. When you layer multiple security controls, you create defense-in-depth – if an attacker bypasses one protection, they still face additional barriers.
In this post, I’ll show you how to layer secret management strategies to build a security posture that’s both robust and practical. You’ll learn how different approaches complement each other and how to make architectural decisions that balance security with simplicity. The easiest way to understand this is through examples.
Example 1: GitHub Actions runners
With Actions runners, secrets stored in GitHub at the organization or repository level are provided through expressions like ${{ secrets.MY_SECRET }}. These secrets are then injected at runtime.
The secrets are stored in an encrypted backend service and retrieved and decrypted only when needed for a job. Only the secrets for that repository or environment are available to request. If you use this expression to set a step’s environment variable, the secret is made available to that step and any child processes.
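As a minimal sketch, a step might receive a secret this way (MY_SECRET and the script path are placeholders):

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Call an API
        env:
          # The secret is decrypted and injected only for this step and its child processes
          API_TOKEN: ${{ secrets.MY_SECRET }}
        run: ./scripts/call-api.sh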
Environments allow this to be further scoped to restrict which secrets are available. In fact, you can use deployment protection rules to automate validation of a job to ensure only authorized workflows and jobs can access specific secrets, providing an extra layer of protection. This can help protect you from malicious workflows that attempt to serialize and exfiltrate the secrets context by preventing unauthorized jobs from accessing any secrets in the first place.
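Scoping a job to an environment is a one-line change. In this sketch, production is a hypothetical environment with its own secrets and protection rules configured in the repository settings:

jobs:
  deploy:
    runs-on: ubuntu-latest
    # Only secrets defined for this environment are available, and any
    # deployment protection rules must pass before the job can start
    environment: production
    steps:
      - name: Deploy
        env:
          DEPLOY_KEY: ${{ secrets.DEPLOY_KEY }}  # resolved from the environment's secrets
        run: ./scripts/deploy.sh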
Each step in the job is spawned as a new, separate process. The runner controls the environment variables for each step, determining what variables are available to that process and its children. This is why variables in one step aren’t accessible in another step unless explicitly added to $GITHUB_ENV or $GITHUB_OUTPUT. By doing this, the runner limits the exposure of secrets to only the processes that need a particular secret and only for the life of that step.
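You can see this isolation directly. In this sketch, a plain export dies with its step's process, while a value written to $GITHUB_OUTPUT can be passed to a later step explicitly:

steps:
  - name: Produce a value
    id: produce
    run: |
      export LOCAL_VALUE=hello                    # visible only to this step's process tree
      echo "value=$LOCAL_VALUE" >> "$GITHUB_OUTPUT"
  - name: Consume the value
    run: |
      echo "LOCAL_VALUE is '${LOCAL_VALUE:-unset}'"       # prints 'unset' – the export did not survive
      echo "Passed explicitly: ${{ steps.produce.outputs.value }}"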
This layered approach uses an external encrypted vault (GitHub Secrets) combined with process-level isolation (separate steps with controlled environment variables) to minimize the risk of secret leakage. A higher-level process creates child processes that only have limited access to restrict what might be further exposed.
In addition, the runner captures the stdout of each process so that any accidental logging of secrets can be redacted before the output is written to the logs. Even if a secret is inadvertently printed to the console, it isn't exposed in the logs.
Since each process runs under a shared user account that can elevate permissions (via sudo), file system permissions can't reliably isolate secrets between steps – persisting secrets to disk would expose them globally, so the runner avoids it. This approach also allows the use of containers or lower-privilege processes to further isolate the execution environment and limit potential exposure.
Example 2: GitHub Runner with a key vault
This example builds on the previous one by incorporating a key vault service, such as Azure Key Vault or AWS Secrets Manager, to manage and access secrets. In this scenario, the GitHub runner retrieves secrets from the key vault at runtime rather than relying solely on GitHub Secrets. You can still use environments to help isolate which jobs and branches can connect to the key vault, adding additional layers of security.
When a job is triggered, the runner authenticates with the key vault using OIDC. This provides a secure, passwordless authentication mechanism and returns a token that is time-limited (and invalidated when the job ends). Because secrets retrieved this way lack the automatic masking applied to secrets expressions, it's critical to call the add-mask workflow command to register the secret values for redaction in logs.
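As a sketch with Azure Key Vault (the vault and secret names are placeholders, and azure/login is assumed to be configured for OIDC federation):

permissions:
  id-token: write   # allow the job to request an OIDC token
  contents: read

steps:
  - name: Log in with OIDC (no stored password)
    uses: azure/login@v2
    with:
      client-id: ${{ vars.AZURE_CLIENT_ID }}
      tenant-id: ${{ vars.AZURE_TENANT_ID }}
      subscription-id: ${{ vars.AZURE_SUBSCRIPTION_ID }}

  - name: Retrieve and mask the secret
    run: |
      SECRET=$(az keyvault secret show --vault-name my-vault \
        --name api-token --query value --output tsv)
      # Register the value for redaction before anything can log it
      echo "::add-mask::$SECRET"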
This approach can eliminate the secrets context and expressions, but it comes with some tradeoffs. The process that retrieves the secrets must be careful to avoid exposing them. Since all of the secrets are available to that process, you must be especially careful about child and sibling processes that might attempt to access those values.
A common mistake is putting these secrets in $GITHUB_ENV, which makes them available to all subsequent steps and increases the risk of exposure. Instead, it's better to use $GITHUB_OUTPUT to pass secrets directly to the specific steps that need them, minimizing their exposure (making that step's outputs behave much like the secrets context). It's also very important not to put secrets into the vault that aren't used within that job, since they could be exposed if the job is compromised.
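Continuing the earlier sketch, the retrieving step can expose the masked value as an output, and only the step that needs it maps the output into its environment:

  - name: Retrieve, mask, and publish the secret
    id: kv
    run: |
      SECRET=$(az keyvault secret show --vault-name my-vault \
        --name api-token --query value --output tsv)
      echo "::add-mask::$SECRET"
      echo "token=$SECRET" >> "$GITHUB_OUTPUT"

  - name: The only step that uses the token
    env:
      API_TOKEN: ${{ steps.kv.outputs.token }}   # scoped to this step, like a secrets expression
    run: ./scripts/call-api.sh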
To make this more secure, revoke the key vault token immediately after the values are retrieved, minimizing the window of exposure. Running the dependent steps within a single process can further reduce risk by limiting the number of process boundaries the secrets must cross.
Example 3: 1Password CLI plugins
I mentioned in previous posts that I use tools like 1Password to provide access to some of my dev container secrets. Let’s look at how that applies a layered approach, especially within a dev container.
The credentials are secured by a service (the vault) as well as operating system protections. This ensures the secrets are protected and not directly available to other processes on the system unless explicitly retrieved by the user or application with proper authentication.
The op CLI tool reads secrets from the 1Password vault. To make using tools like gh (the GitHub CLI) more seamless and secure, 1Password offers a “plugin” model. Under the covers, it relies on a few useful tricks.
First, they rely on environment variables that reference the secret rather than contain the secret. For example, GH_TOKEN might contain op://MyVault/GitHubRepo/token rather than the actual token. This is similar in some ways to how App Settings (environment variables) in Azure App Services can reference a secret in a Key Vault.
Next, they create an alias. For example, for the GitHub CLI, they create an alias like this: alias gh="op plugin run -- gh". Now, when you run gh, you’re actually invoking op instead. The code reads all of the current environment variables, looks for any that reference op:// URIs, and retrieves those secrets from the vault. It then spawns the actual gh command as a child process, injecting the retrieved secrets into the environment of that process only. That means only that particular child process has access to the secret, not the current shell or any other processes.
You could manually do the same thing this way:
export GH_TOKEN='op://MyVault/GitHubRepo/token'
op run -- gh repo list

It’s very similar to the patterns used by the GitHub runner. By separating the secret retrieval from the actual command execution, you can limit the exposure of the secrets to only the processes that need them at the time they’re needed. This also has the benefit of not making the secrets available across the entire shell session, reducing the risk of accidental exposure.
There are lots of other variations of this approach. The key takeaway is that by carefully controlling when and how secrets are retrieved and injected into processes, you can significantly reduce their exposure and improve your overall security posture. Just remember to also handle any logging and output redaction as needed to avoid accidental exposure.
Example 4: Kubernetes sidecar pattern
Applications in Kubernetes need access to secrets, and one common pattern to manage this securely is the sidecar. The secret itself may be stored in a Kubernetes Secret, and that secret may itself be loaded or synchronized from an external vault system.
This model typically relies on a dedicated sidecar container that holds the necessary mount and permissions to read the secret from the file system. If the file changes – meaning the secret has been updated – the sidecar can process that change without needing to restart the pod. It can then act as a proxy that the primary container uses to access a remote system.
In this approach, the main program has limited access to the secret or the environment where the secret is stored. This reduces the risk of exposure or compromise, since the application container doesn’t have direct access to the secret itself.
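A minimal pod sketch illustrates the shape of this pattern (the image names, Secret name, and mount path are all hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: app-with-secret-sidecar
spec:
  volumes:
    - name: api-creds
      secret:
        secretName: api-credentials   # hypothetical Kubernetes Secret, possibly synced from a vault
  containers:
    - name: app
      image: example.com/app:1.0      # never mounts the secret; calls the sidecar over localhost
    - name: secret-proxy
      image: example.com/secret-proxy:1.0   # hypothetical sidecar that reads the secret and proxies requests
      volumeMounts:
        - name: api-creds
          mountPath: /var/run/secrets/api
          readOnly: true

Because containers in a pod share a network namespace, the app can reach the proxy over localhost while only the sidecar holds the mount, and the sidecar can detect file changes when the Secret is updated without restarting the pod.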
Choosing the right approach
There’s no single best method for storing and accessing secrets. The appropriate choice depends on your security requirements, infrastructure, compliance needs, and operational capabilities. In practice, you’ll typically use a combination of these approaches to balance security and usability. By layering these approaches and following hardening best practices, you can build a defense-in-depth strategy that significantly reduces your risk of credential compromise.
The most secure system is the one that matches your actual security requirements while remaining practical for your team to operate and maintain. Start with the strongest feasible approach and adapt based on your specific constraints and threat model.
