Every application needs secrets – API keys, database passwords, certificates, tokens. These credentials are the keys to your kingdom, and how you store and access them can mean the difference between a secure system and a catastrophic breach. The challenge isn’t just keeping secrets safe at rest; it’s making them available to the right processes at the right time without exposing them to attackers.
This week, I’m publishing a series of posts exploring secrets and how to manage them. In this article, I’ll explore different approaches to storing and protecting secrets, examining their strengths, weaknesses, and how to harden each method against common exploitation techniques.
The common threats
Before we dive in, I’d like to clarify that no method is completely foolproof. Each approach has trade-offs, and the best choice depends on your specific security requirements, infrastructure, and operational capabilities. To weigh those trade-offs, it’s important to recognize several common vectors through which secrets can be compromised regardless of storage method. You must address these in addition to choosing a storage method:
- Writing to logs or messages: If you don’t carefully control what gets logged, secrets (or even personally identifiable information) can easily end up in log files, monitoring systems, or error reports (a redaction-filter sketch appears below).
- Memory: To be used, secrets must be loaded into application memory. Attackers who can access process memory through debugging or memory scraping can extract these secrets. In addition, memory dumps created during crashes can inadvertently expose secrets. This is why it’s important to clear secrets from memory as soon as they’re no longer needed and to keep them secured in protected memory regions when possible.
- In-process or same-user access to resources: Any code running in the same process or under the same user context can potentially access the same protected content. This includes third-party libraries, plugins, or any code that shares the same execution environment. In a container, development environment, or CI/CD pipeline, many (or all) processes – including package installations from NuGet, NPM, Maven, PyPI, and similar – may run under the same user, creating an opportunity for secrets to be accessed by unintended code.
- High-privileged user accounts (root or administrator): Users with elevated privileges can generally bypass access controls and read secrets directly from storage or memory. This is part of why the principle of least privilege is so important.
- Lack of monitoring and auditing: While not a root cause of breaches, without proper logging and monitoring, unauthorized access to secrets can go undetected. Implementing audit trails and alerting on suspicious access patterns is an essential part of securing secrets.
All of these require some amount of balancing security with usability. The goal is to minimize the exposure of secrets to unauthorized processes and users while ensuring that authorized processes can access them when needed.
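To make the logging risk concrete, here’s a minimal sketch of a redaction filter (Python, purely illustrative; the secret values are assumptions for the example) that scrubs known secret values before a record reaches any handler:

```python
import logging

# Values that must never appear in log output (hypothetical examples).
SECRET_VALUES = ["s3cr3t-api-key", "p@ssw0rd"]

class RedactSecretsFilter(logging.Filter):
    """Replace known secret values in a record before any handler sees it."""

    def filter(self, record: logging.LogRecord) -> bool:
        message = record.getMessage()
        for value in SECRET_VALUES:
            message = message.replace(value, "[REDACTED]")
        # Overwrite the record so formatters only ever see the redacted text.
        record.msg, record.args = message, None
        return True

logging.basicConfig(level=logging.INFO)
logging.getLogger().addFilter(RedactSecretsFilter())
logging.info("Connecting with key %s", "s3cr3t-api-key")  # -> "Connecting with key [REDACTED]"
```

In practice you would also redact by pattern (tokens, connection strings) rather than only by known values, and attach the filter to handlers so records from child loggers are covered as well.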
What about Kubernetes secrets?
It’s important to know that when Kubernetes secrets are accessed, they are not encrypted; by default they are only base64-encoded. Anyone with access to the Kubernetes API, access to a container that receives the secret, or administrative access to any environment using a secret can potentially read them. While it is possible to encrypt secrets at rest, they are always provided to workloads unencrypted. Remember that Kubernetes assumes the cluster is the security boundary and that you will understand the implications and how to secure it. Consequently, you must treat Kubernetes secrets with care when they are accessed by operators, applications, and services.
File system storage
Storing secrets in files is the most straightforward approach – configuration files, .env files, or dedicated secret files on disk. It’s simple to implement and easy to work with during development. Kubernetes can also use a version of this approach, mounting secrets as files that can be accessed by containers. This is one of the only ways Kubernetes supports secrets that can be updated without restarting pods. If the secret changes, Kubernetes will update the file with the new value.
For local development, this method is commonly used since it provides a way to separate secrets from code (.NET, in particular, ships with a Secret Manager tool for this purpose).
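As a small sketch of this pattern (the mount path is an assumption for the example), the snippet below re-reads a mounted secret file on every use so a value rotated by Kubernetes is picked up without a restart:

```python
from pathlib import Path

# Assumed path; in Kubernetes this is wherever the Secret volume is mounted.
SECRET_PATH = Path("/var/run/secrets/app/db-password")

def get_db_password() -> str:
    # Read on every call instead of caching at startup, so a rotated
    # value written by the kubelet is picked up automatically.
    return SECRET_PATH.read_text().strip()
```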
Strengths
- Simple to implement and understand
- Works across all platforms and languages
- Can be secured to specific users or processes with file system permissions
- Easy to update without restarting applications
- Supports dynamically changing secrets without restarting the process
Weaknesses
The file system is also the easiest target for attackers. Secrets stored as plain text files can be:
- Accidentally committed to version control
- Exposed through misconfigured web servers or directory listings
- Read by any process or user with appropriate file system access
- Copied and exfiltrated without leaving obvious traces
- Discovered through automated scanning tools, like TruffleHog (which is part of how the Shai-Hulud exploit worked)
- Exposed in backups, logs, or crash dumps
Even with file permissions, the root user (or equivalent administrator) can always read these files. If an attacker gains elevated privileges, your secrets are immediately compromised. This is also true in development and CI/CD systems, where the files may be reachable by third-party code in packages or build components.
Common exploitation techniques
Attackers commonly exploit file-based secrets through:
- Path traversal attacks: Using ../ sequences to access files outside the intended directory (see the sketch after this list)
- Local file inclusion: Leveraging application vulnerabilities to read arbitrary files
- Privilege escalation: Gaining access to bypass file permissions
- Backup exploitation: Accessing secrets from backups and backup systems
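To illustrate defending against the path traversal technique above, here’s a minimal Python sketch (the allowed directory is an assumption; is_relative_to requires Python 3.9+) that resolves a requested file name and refuses anything that escapes the permitted directory:

```python
from pathlib import Path

# Assumed directory the application is allowed to read from.
ALLOWED_DIR = Path("/srv/app/public").resolve()

def safe_read(user_supplied_name: str) -> bytes:
    # Resolve symlinks and ../ components before checking containment.
    candidate = (ALLOWED_DIR / user_supplied_name).resolve()
    if not candidate.is_relative_to(ALLOWED_DIR):
        raise PermissionError(f"refusing to read outside {ALLOWED_DIR}")
    return candidate.read_bytes()

# safe_read("../../etc/myapp/secrets.env") raises instead of leaking the file.
```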
Hardening strategies
If you must use file-based secrets, implement these protections:
- Store secrets in memory-backed volumes (tmpfs, ramfs) that limit disk persistence
- Use restrictive file permissions so only the owning process can read the files (see the sketch after this list)
- Carefully manage child processes that may inherit access
- Avoid secrets in the root of a web application and be cautious with secrets in the same path as the application (for example, use a separate mount or a Kubernetes sidecar)
- Never commit secrets to version control; use .gitignore and secret scanning tools
- Encrypt secrets at rest and decrypt them only when needed
- Avoid using them with systems such as web hosting platforms that don’t support file system permissions
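As a sketch of the restrictive-permissions advice above, this Python snippet writes a secret file that only the owning user can read; the path (on a tmpfs-style mount) is an assumption:

```python
import os

SECRET_PATH = "/run/myapp/api-key"  # assumed location, e.g. on a tmpfs mount

def write_secret(value: str) -> None:
    # O_EXCL fails if the file already exists; mode 0o600 means only the
    # owning user (not group or others) can read or write the file.
    fd = os.open(SECRET_PATH, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
    with os.fdopen(fd, "w") as f:
        f.write(value)
```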
Environment variables
Environment variables provide a way to inject secrets into processes without storing them in files. They’re widely supported and prevent secrets from being accidentally committed to repositories. When properly secured, they can only be accessed by the process that owns them.
Strengths
- Secrets are only available in memory to the owning process
- Supported by virtually every platform and language
- Easy to inject in containerized environments, services, pipelines, and processes
- Can be set at runtime without changing code
- Prevents accidental commits to version control
- Can be combined with other secret management systems
- Not expanded by the shell by default, reducing injection risks
Weaknesses
While better than files, environment variables still have significant risks:
- By default, exposed to any user who can read /proc/<pid>/environ on Linux
- Inherited by child processes, expanding the attack surface
- May be logged by process managers, monitoring tools, or crash handlers
- Visible in container orchestration dashboards
- Can be accessed by debugging tools attached to the process
- Root users can always read process environments
- May be exposed in error messages or stack traces
- Difficult to rotate without restarting the process
Common exploitation techniques
Attackers target environment variables through:
- Process inspection: The process owner or elevated accounts can read /proc/<pid>/environ, which shows the values that existed when the process started, and process monitoring tools may expose those values (see the sketch after this list)
- Process exploitation: Accessing the secrets from a subprocess that inherited the parent’s environment
- Container inspection: Using docker inspect or kubectl describe to view the initial environment variables
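To see why process inspection matters, this Linux-only, purely illustrative snippet parses /proc/self/environ, the same interface an attacker with sufficient access could read for another process:

```python
# Linux only: environment variables exactly as they existed at process start.
raw = open("/proc/self/environ", "rb").read()

for entry in raw.split(b"\0"):
    if entry:
        name, _, value = entry.partition(b"=")
        print(name.decode(), "=", value.decode(errors="replace"))
```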
Hardening strategies
To improve security when using environment variables:
- Isolate the process that needs access to the secrets
- Avoid running untrusted code in the same process or user context
- Avoid running other processes as root or elevated users
- Restrict subprocess access to environment variables by launching processes programmatically or scripting with commands like env -i (see the sketch after this list)
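As a sketch of the last point, here’s the programmatic equivalent of env -i in Python: the child receives only an explicit, minimal environment, so secrets set for the parent never reach it (the allow-listed variables are assumptions):

```python
import subprocess

def run_isolated(cmd: list[str]) -> None:
    # Pass an explicit, minimal environment instead of inheriting the
    # parent's, so secrets set for this process never reach the child.
    minimal_env = {"PATH": "/usr/bin:/bin", "LANG": "C.UTF-8"}
    subprocess.run(cmd, env=minimal_env, check=True)

run_isolated(["printenv"])  # the child sees only PATH and LANG
```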
An example of environment isolation is GitHub runners. Each step in the workflow spawns a completely independent process with a new environment, and the values for that environment are strictly controlled by the runner process. This helps prevent any step from inspecting the environment of another step and ensures a level of control over what environment variables each step can access.
There is a feature in Linux called hidepid that can be set on the /proc filesystem to restrict access to process information. Setting hidepid=2 prevents users from seeing any information about processes they don’t own, including environment variables. This can help protect environment variables, but it has limitations that you must understand. For example, it can prevent monitoring tools from functioning properly.
Encrypted secrets with Hardware Security Modules (HSM)
Hardware Security Modules provide hardware-based protection for cryptographic keys and operations. With HSM-based secret management, secrets are encrypted and the decryption keys never leave the HSM hardware.
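To show the shape of this pattern, here’s a hedged sketch of envelope encryption: data is encrypted locally with a one-time data key, and only the wrapping of that key is delegated to the HSM. The hsm object and its wrap_key/unwrap_key methods are hypothetical stand-ins for whatever PKCS#11 or cloud HSM client you actually use; the AES-GCM piece uses the cryptography package.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_secret(plaintext: bytes, hsm) -> dict:
    # Generate a fresh data key locally and encrypt the payload with it.
    data_key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    ciphertext = AESGCM(data_key).encrypt(nonce, plaintext, None)
    # Ask the HSM to wrap the data key; the wrapping key never leaves
    # the hardware. wrap_key is a hypothetical client method.
    wrapped_key = hsm.wrap_key(data_key)
    return {"nonce": nonce, "ciphertext": ciphertext, "wrapped_key": wrapped_key}

def decrypt_secret(blob: dict, hsm) -> bytes:
    data_key = hsm.unwrap_key(blob["wrapped_key"])  # hypothetical method
    return AESGCM(data_key).decrypt(blob["nonce"], blob["ciphertext"], None)
```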
Strengths
- Keys are protected by hardware security with physical tamper resistance
- Cryptographic operations occur within the HSM, and keys are never exposed
- Supports secure key generation and storage
- Provides strong audit trails for cryptographic operations
- Supports key backup and recovery procedures
Weaknesses
- Cost of dedicated hardware or cloud HSM services
- Requires specialized knowledge
- Latency for cryptographic operations
- Limited number of operations per second
- Still requires protecting access to the HSM itself
- Secrets are decrypted in application memory after retrieval
Common exploitation techniques
Attackers target HSM-protected secrets through:
- Access control bypass: Exploiting misconfigured HSM access policies
- API exploitation: Abusing HSM API access to perform unauthorized operations
Hardening strategies
To maximize HSM security:
- Implement strict access controls
- Use separate HSM instances for different security domains
- Use audit logging for all HSM operations
- Use cloud HSM services (Azure Dedicated HSM, AWS CloudHSM) for managed infrastructure
- Use HSMs to protect the root keys of your vault systems
- Clear application memory containing decrypted secrets immediately after use
HSMs provide the strongest protection for keys themselves, but remember: once secrets are decrypted for use, they’re vulnerable in application memory. HSMs work best as part of a comprehensive secrets management strategy.
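One way to narrow that in-memory exposure window is to hold the decrypted value in a mutable buffer and overwrite it as soon as it’s used. This is a best-effort sketch only (the runtime may hold other copies); get_decrypted_secret and consumer stand in for your own retrieval and usage code:

```python
def use_secret(get_decrypted_secret, consumer) -> None:
    # Hold the decrypted value in a mutable bytearray so it can be
    # overwritten in place rather than lingering as an immutable str/bytes.
    secret = bytearray(get_decrypted_secret())
    try:
        consumer(secret)  # e.g. open a connection with the bytes-like key
    finally:
        # Best-effort zeroing; the runtime may still hold other copies,
        # so this narrows the exposure window rather than eliminating it.
        for i in range(len(secret)):
            secret[i] = 0

use_secret(lambda: b"decrypted-demo-key", lambda s: print(len(s), "bytes used"))
```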
What’s next
We’ve explored the foundation – file systems, environment variables, and HSMs – but there’s a world of sophisticated solutions designed specifically for secrets management. In the upcoming posts, you’ll discover more ways to secure your secrets and how to build a stronger management solution. Stay tuned!
