The pitfalls and alternatives of this common GitOps practice as you move your deployments to production.
GitOps is the practice of representing system configuration in a Git repository and then using Git workflows to manage changes to that configuration and updates to the system.
That process of representing system configuration in a repository is initially straightforward: seeding configuration files, copying declarative resources from product documentation, and maybe even the occasional scripted sequence.
At first, progress feels inevitable, with success just a few commits away. Then you hit the ultimate GitOps foil: secrets.
Why plain secrets are bad
While it may be evident that placing plain credentials in a Git repository is not the best idea, it is still worth spelling out why it is problematic, lest someone feel it may be an acceptable compromise when using a private repository:
The people who work with the GitOps repository may not be the same people authorized to manage the target environments.
Let’s say you are the approver of a pull request and need the resident network expert to review changes related to the firewall. Unaware that credentials are stored in plain text in the repository, you ask the repository manager to add that expert to the list of users. Suddenly, the network expert has access to a customer database full of private data. If that person is not cleared for that level of access, you are looking at all sorts of paperwork and remediation procedures to rotate and deploy new credentials.
Now, let’s assume a better scenario, where you are aware of the credentials in the repository, thus avoiding the accidental disclosure: now you need to go outside the pull request workflow to involve that person in the review process.
When the better scenario is inefficiency and the worse scenario is akin to juggling knives blindfolded, you know it is time to move on to better practices.
A sealed solution
The idea of a sealed secret in GitOps is to encrypt secrets before adding them to the repository, sharing the decryption key only with the environments that need to use those secrets. Typically, these keys are placed on the target system and used by a local agent to decrypt the credentials and place them wherever they need to be in the target environment.
This technique allows people to work with the repository without the risk of accidentally disclosing credentials, and that is an improvement over storing credentials in plain sight.
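The seal/unseal flow can be sketched in a few lines. This is a toy illustration only: the XOR-keystream cipher below stands in for the real asymmetric encryption used by tools such as Bitnami's Sealed Secrets, and every name in it is invented.

```python
import base64
import hashlib
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream from the key and a per-secret nonce."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def seal(plaintext: bytes, key: bytes) -> dict:
    """Encrypt a credential; the returned dict is what gets committed to Git."""
    nonce = secrets.token_bytes(16)
    cipher = bytes(p ^ k for p, k in zip(plaintext, keystream(key, nonce, len(plaintext))))
    return {"nonce": base64.b64encode(nonce).decode(),
            "data": base64.b64encode(cipher).decode()}

def unseal(sealed: dict, key: bytes) -> bytes:
    """Run by the in-cluster agent, which is the only holder of the key."""
    nonce = base64.b64decode(sealed["nonce"])
    cipher = base64.b64decode(sealed["data"])
    return bytes(c ^ k for c, k in zip(cipher, keystream(key, nonce, len(cipher))))

cluster_key = secrets.token_bytes(32)           # lives only on the target cluster
sealed = seal(b"db-password-123", cluster_key)  # safe to commit; the key stays out of Git
assert unseal(sealed, cluster_key) == b"db-password-123"
```

The essential property is the split: the repository only ever holds the output of `seal`, while `unseal` runs inside the target environment.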
That approach is clever, and I was quite fond of it at the beginning of my journey into GitOps. The initial setup is somewhat easy, the repositories are not widely used on a daily basis yet, and it is easy to find others at that same stage of adoption vouching for the approach, so you can also find support in the community.
Once I got past the initial stage and into larger, more permanent deployments, that simplicity gave way to limitations and risks, and I abandoned the practice altogether. In the next sections, I highlight the main reasons you probably should abandon it too.
Reason #1: The keys to which environment?
Let’s say you have a deployment pipeline with a progression of “dev,” “test,” “stage,” and “production” environments. Each environment will be running the same software, but if you use good SecOps practices, they will use different sets of credentials.
Some GitOps repositories are designed to have one subtree per target environment, while others are designed with a single parameterized tree across different environments.
In the case of a folder tree per target environment, the Git repository must have separate locations for the sealed secrets of each environment. When you factor in the ubiquitous presence of Kubernetes clusters in the enterprise, you will be dealing with individual “Secret” resources spread across multiple namespaces, making the sprawl of folders and files unavoidable.
If we look at parameterized trees, the situation becomes a little better, with all secrets concentrated into a single file per environment.
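As a hypothetical illustration (all paths invented), the two layouts might look like this:

```
# One subtree per environment: sealed files multiply
# across environments and namespaces.
envs/
  dev/
    billing/secret-db.sealed.yaml
    inventory/secret-api.sealed.yaml
  production/
    billing/secret-db.sealed.yaml
    inventory/secret-api.sealed.yaml

# Parameterized tree: one sealed values file per environment.
base/
  billing/
  inventory/
values/
  dev.sealed.yaml
  production.sealed.yaml
```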
Regardless of how you design the GitOps repository, using sealed secrets generates more folders and files, which increases the amount of information people need to absorb and the number of artifacts the delivery pipelines need to handle.
Reason #2: The secrets are … right there.
Yes, they are encrypted, and it takes a decryption key to get to the actual credentials, but they are still in the hands of potentially bad actors. Imagine telling someone that their database credentials are visible to the world and then proceeding to assuage their anxiety by explaining how the bad actors still don’t have the key.
You may chime in the comments section and explain the technical reason why this is an unfounded fear, but anyone who works in security will tell you that the psychological aspect is also part of making customers feel safe with their choices.
On the more technical side, I have had people argue that having sealed secrets in the open is no different from using public key pairs to encrypt traffic, but those are asymmetric key pairs where you never have the private key out in public, encrypted or otherwise.
Lastly, while the secrets themselves are encrypted, the metadata around them isn’t. Bad actors can exploit committer information to seed social engineering exploits, infer the rotation policies for the infrastructure components, determine whether secrets are reused across environments, and gather many other clues that can aid attacks against the target environment.
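To make that point concrete, here is a sketch (with an entirely invented manifest) of how much a reader learns from a sealed-secret file without ever touching a decryption key:

```python
import json

# A hypothetical sealed-secret manifest, exactly as it would sit in the repo.
manifest = json.loads("""
{
  "kind": "SealedSecret",
  "metadata": {
    "name": "prod-billing-db",
    "namespace": "billing",
    "annotations": {"example.com/last-rotated": "2023-01-05"}
  },
  "spec": {"encryptedData": {"password": "AgB4kzFakeCiphertext=="}}
}
""")

# None of this requires a decryption key:
visible = {
    "target": f"{manifest['metadata']['namespace']}/{manifest['metadata']['name']}",
    "fields": list(manifest["spec"]["encryptedData"]),
    "rotation_hint": manifest["metadata"]["annotations"]["example.com/last-rotated"],
}
print(visible)
```

The ciphertext is opaque, but the target namespace, the field names, and the rotation cadence are all there for the taking, and `git log` adds committer names and timestamps on top.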
Retreating from all lines of defense against cyber attacks and pinning all hopes on defending a single point of failure is a terrible starting point for a secure system.
Reason #3: The key to secure all keys is still a key.
The lifecycle of secrets in the repository may differ depending on what they are securing. A database credential may expire every 30 days, while a cluster credential may expire every 60 days.
What about the master encryption key for the sealed secrets themselves? Security policies will eventually force you to rotate that master key, which will require a separate process to redistribute the new key securely to all target environments.
But wait! Not having a separate process to distribute keys to target environments was the reason you chose to seal secrets in the Git repository in the first place. One may argue that handling one master secret is better than handling multiple secrets, but the cost of managing one key is virtually the same as managing many, and now you have the added challenge of managing the sealed secrets yourself while still needing a password manager of some sort to secure and distribute the master keys.
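A toy sketch of that rotation fan-out follows; the XOR “encryption” is a stand-in for real crypto, and all paths and keys are invented:

```python
import base64

def toy_seal(plaintext: bytes, key: bytes) -> bytes:
    # Stand-in for real encryption: XOR against a repeating key, base64-encoded.
    return base64.b64encode(bytes(b ^ key[i % len(key)] for i, b in enumerate(plaintext)))

def toy_unseal(sealed: bytes, key: bytes) -> bytes:
    raw = base64.b64decode(sealed)
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(raw))

# Sealed files spread across the repository, all under one master key.
repo = {f"env/{e}/db-credentials.sealed": toy_seal(b"pw-" + e.encode(), b"old-key")
        for e in ("dev", "test", "stage", "production")}

def rotate(repo: dict, old_key: bytes, new_key: bytes) -> dict:
    # Every sealed file must be rewritten: one key change fans out to a commit
    # touching N files, and the new key still needs out-of-band distribution
    # to every cluster before the commit can be deployed.
    return {path: toy_seal(toy_unseal(blob, old_key), new_key)
            for path, blob in repo.items()}

rotated = rotate(repo, b"old-key", b"new-key")
assert toy_unseal(rotated["env/production/db-credentials.sealed"], b"new-key") == b"pw-production"
```

The `rotate` step is exactly the “separate process” the approach was supposed to eliminate.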
Reason #4: There are better solutions.
Git repositories were not designed with key management in mind. They have no support for key rotation, no support for serving secrets as symbolic references, no way to perform usage audits, no support for different levels of access to administrators, and so on.
A key management solution is designed to address all these requirements, reduce the surface area for potential leaks, and offer mitigation paths in case a key is ever compromised.
That reduction of surface area is especially important, because you cannot accidentally disclose or lose a key if it never leaves the system. As one example, in the world of Kubernetes, clusters are invariably colocated in a service plane that contains multiple key management solutions. It is common for IaaS providers to offer backend integration between services where key values never have to leave the environment.
Sticking with the example of Kubernetes, where you are likely to be using Argo CD or Flux for your GitOps practice: neither currently has native integration with key management services, but both expose extension points to achieve that integration. For instance, the Argo CD documentation says, “Argo CD is un-opinionated about how secrets are managed,” but then proceeds to offer a long list of solutions for integrating with dedicated services.
A more promising approach is to get started with the “External Secrets Operator” project, which synchronizes secrets from various key management services into local Secrets in a Kubernetes cluster (credit to my colleague Carlos Santana for the reference).
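The synchronization idea can be sketched minimally, with both the key service and the Kubernetes API stubbed out as plain dictionaries; all names here are hypothetical, not the actual External Secrets Operator API:

```python
# Stubbed backends: in reality these would be a managed key service
# (e.g. a cloud KMS or Vault) and the Kubernetes API server.
key_management_service = {"prod/db/password": "s3cr3t"}  # value never stored in Git
cluster_secrets: dict[str, dict] = {}                    # stand-in for in-cluster Secrets

def sync(remote_ref: str, target_name: str, target_key: str) -> None:
    """Fetch a secret value by reference and materialize it in-cluster only."""
    value = key_management_service[remote_ref]          # fetched at deploy time
    cluster_secrets[target_name] = {target_key: value}  # written as a local Secret

# The Git repository only needs to hold the *reference*, never the value:
sync("prod/db/password", target_name="billing-db", target_key="password")
assert cluster_secrets["billing-db"]["password"] == "s3cr3t"
```

Because only the reference string lives in Git, rotation happens entirely inside the key service, and there is no ciphertext or key material in the repository to leak or to re-seal.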
(Update on 3/16) Thomas Boerger chimed in the comments section about another alternative to sealed secrets: the usage of SOPS with Flux and Argo.
There may be valid reasons to use sealed secrets. Still, I have yet to see one framed in a positive light beyond sealed secrets being “good enough”, which implies they are cheaper to deploy than a proper key management solution. I rarely see the discussion get into the considerations of everything else involved in handling those secrets.
I don’t doubt the engineering prowess of those setting out to mimic a key management service with text files in a code repo, but I question the cost-effectiveness of those approaches. The DIY crowd must resort to a combination of placing files in the Git repo, mapping out key rotation cycles to Git pull requests, and instrumenting continuous deployment pipelines with decryption keys to parse the repo’s contents. And if you are asking how to manage those master keys, you may find yourself in a constant cycle of coming up with creative ways of making Git act as a key management service.
One can always argue in favor of a build-as-we-grow approach, but that would position sealed secrets as a stepping stone towards using a key management service, and that is not the case. Trying to “grow out” of using sealed secrets means changes to the GitOps backend of choice and re-training operations people to completely change how they handle credentials. There is no natural progression, just paying twice for the same results.
Eschew sealed secrets, start your GitOps practice right, and use a managed key service.
(Update on 5/23: If you like this topic, I wrote a new story including a couple of other things to avoid.)