Kubernetes

Manage secrets in K8s using GitOps without compromising security

Rajith Attapattu
CTO @ Randoli

Secrets make developer workflows more flexible and versatile by separating sensitive config details from application infrastructure. But how much separation is ideal?

For starters, there’s a reason they’re called “secrets”. The last thing you want is to discover your groundbreaking service has been leaking sensitive information. Therefore, you want to avoid storing sensitive configs inside deployment code.

On the other hand, the Infrastructure as Code philosophy suggests secrets deserve no different treatment than other data. Everything your app needs to work should be stored in versioned, reusable, and shareable configs. 

In this talk, we tried to find the happy medium with the help of Randoli's Rajith Attapattu and Andre Adam. Here's what we learned about managing secrets effectively in K8s using GitOps without compromising security.

The Realities of Kubernetes Secrets: Where You Stand 

You're probably well aware that storing sensitive data in a cluster is extremely foolhardy. This is true even if you follow best practices for Kubernetes Secrets.

For starters, K8s stores secrets as base64-encoded strings. Base64 is an encoding, not encryption, so it's not truly secure: anyone with access to the object can trivially decode the values and laugh all the way to the bank.

Some alternatives aren't much better. For instance, storing secrets in Git is a common vulnerability that ultimately lengthens your to-do list. If you want it to work, you'll need to add pre-commit safety checks, and even then, the data is still trivially recoverable. It's usually safest to forbid engineering teams from defining secrets in plain text altogether.

K8s supports encrypting secret data at rest, which you should take advantage of. For this to work, you'll need to enable encryption at rest on the API server and define a config that names the resources (such as secrets) to encrypt before they're written to etcd.
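
As a rough sketch, that config is an EncryptionConfiguration passed to the kube-apiserver via --encryption-provider-config; the file path, key material, and provider choice below are placeholders, not a recommendation:

```yaml
# encryption-config.yaml (hypothetical path), referenced by the kube-apiserver
# with --encryption-provider-config=/etc/kubernetes/enc/encryption-config.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets                  # encrypt Secret objects before they hit etcd
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: "<base64-encoded-32-byte-key>"   # placeholder key material
      - identity: {}             # fallback so existing plaintext data can still be read
```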

Unfortunately, encryption at rest has its weaknesses too. Many organizations forget to limit cluster-admin roles properly or restrict who can read secrets, making it pretty simple to peek behind the curtain. If you go this route, it's essential to implement the correct role-based access controls to manage who can access what.
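
As a minimal illustration of that kind of control (all names here are hypothetical), an RBAC Role and RoleBinding can limit secret reads to a single service account in a single namespace:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: secret-reader            # hypothetical name
  namespace: payments            # hypothetical namespace
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list"]       # no watch, create, or delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: secret-reader-binding
  namespace: payments
subjects:
  - kind: ServiceAccount
    name: payments-app           # the only identity allowed to read secrets here
    namespace: payments
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: secret-reader
```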

Breaking down the options for GitOps secret management

The nice thing about the GitOps way is that you still get to be flexible, which enables you to use what suits your organization. Rajith shared two major options: storing the encrypted secret and storing a reference to a secret.

Storing an encrypted secret

This approach is appealingly straightforward. You start by creating a secret that only a few people can decrypt, using an automation tool like Bitnami Sealed Secrets or Mozilla SOPS.

Your automation tool encrypts the secret as you store it in Git. When it's time to decrypt and create a Kubernetes Secret, the tooling steps up to the plate again to handle the details.
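
As a minimal sketch, assuming Bitnami Sealed Secrets: the kubeseal CLI turns a regular Secret manifest into a SealedSecret that is safe to commit, and only the controller running in the cluster can decrypt it. The names and ciphertext below are placeholders:

```yaml
# Produced with something like: kubeseal --format yaml < db-secret.yaml > db-sealedsecret.yaml
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: db-credentials                 # hypothetical name
  namespace: payments
spec:
  encryptedData:
    password: AgBy3i4OJSWK+PiTySY...   # placeholder ciphertext; useless without the controller's private key
  template:
    metadata:
      name: db-credentials             # the Kubernetes Secret the controller creates on decryption
      namespace: payments
```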

Storing a reference to a secret

When storing a reference to a secret, you store the secrets in some kind of backend such as HashiCorp Vault, AWS Secrets Manager, Azure Key Vault, etc.

Next, use a tool like the External Secrets Operator: in your Git repo, you declaratively specify which secrets you need and where they should end up in the cluster. For instance, your developers might ask for a URL here or a password there. The secrets operator pulls the value from the backend store and applies it to the cluster as part of your CD process.
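
For example, with the External Secrets Operator, an ExternalSecret like the following (names and backend paths are illustrative) declares what you need without ever committing the value itself:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-credentials
  namespace: payments
spec:
  refreshInterval: 1h                  # how often to re-sync from the backend
  secretStoreRef:
    name: vault-backend                # a SecretStore configured separately
    kind: SecretStore
  target:
    name: db-credentials               # the Kubernetes Secret the operator will create
  data:
    - secretKey: password
      remoteRef:
        key: apps/payments/database    # path in the external backend
        property: password
```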

Comparing the Two

With the encrypted secret method, it doesn't matter if someone gets into your repo: the decryption key isn't there. Yes, you do need to worry about key rotation and other security practices, but it's a sound technique if you're just getting started.

Storing a reference achieves a few important goals. It lets developers clearly communicate to the platform engineering or SRE teams what to provision in the secret store. The reference option also appeals from the management point of view, since storing secrets in a purpose-built tool grants you audit logs and other features that promote security and scalability.

Again, this is one area where it's OK to explore. Do what suits your organization best from a maintainability and sustainability perspective.

Walkthrough of a GitOps-powered Kubernetes Secrets solution

Next, Andre took us on a tour of a real-world secrets storage architecture. 

One provider, many clusters

There are countless ways to connect secrets backends. Andre's team started with what seemed most familiar: Using a single secrets storage instance for multiple clusters and running it in a VM. 

They also chose to use an external secrets store simply because they had more experience with that option. These early design choices made it easier to manage permissions and kept things robust. And because there were no separate cluster regions or convoluted paths, confusion and errors were reduced.

Many providers, many clusters

Later, the team switched to a markedly different alternative and deployed independent secrets providers within the clusters themselves. This approach was easy to kickstart because the engineers could use their deployment tool and build on previous configs. 

This approach required more external storage, since each cluster needed its own independent store. It also prevented the team from sharing values between clusters, which wasn't an issue for every use case but was something to consider.

Working With Your Secrets Operator 

As mentioned earlier, you need something to connect to your secrets backend and generate the Kubernetes Secrets. This usually means configuring your secrets backend and letting K8s know about it by writing the custom resources defined by your operator's CRDs (CustomResourceDefinitions). The operator handles the rest. After all, you're storing the reference, not the secret itself.
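
With the External Secrets Operator, for instance, that wiring is a SecretStore resource; the server address, mount paths, and role names below are assumptions for illustration:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: vault-backend
  namespace: payments
spec:
  provider:
    vault:
      server: "https://vault.vault.svc:8200"   # in-cluster backend, reached over HTTPS only
      path: "secret"                           # KV mount in the backend
      version: "v2"
      auth:
        kubernetes:
          mountPath: "kubernetes"              # Kubernetes auth method mount in the backend
          role: "payments-role"                # backend role mapped to a policy
          serviceAccountRef:
            name: payments-secrets-sa          # service account the operator authenticates as
```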

Notice something about this methodology? There's only one outbound connection: the one from the in-cluster secrets backend to its external storage.

Within the cluster, the team enacted a zero-trust policy to keep the secrets backend from talking to anything but the operator and restricted it to HTTPS. This config style makes it possible to maintain a high degree of security. 
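
A zero-trust posture like that might be expressed with a NetworkPolicy along these lines; the labels, namespaces, and port are assumptions rather than the team's actual config:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-secrets-backend
  namespace: vault                             # hypothetical namespace for the in-cluster backend
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: vault            # the backend pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: external-secrets
          podSelector:
            matchLabels:
              app.kubernetes.io/name: external-secrets   # only the operator may connect
      ports:
        - protocol: TCP
          port: 8200                           # the backend's HTTPS port
```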

Kubernetes authentication 

Secrets backends can authenticate with K8s in a few different ways. Although Rajith and Andre went with the default Kubernetes authentication, things can quickly get complicated depending on what you choose.

Fortunately, there's a simpler approach, and despite its learning curve, it has big advantages. For instance, it lets you configure things once and make minor adaptations as needed. Here's a five-step breakdown:

  1. Enable K8s authentication in the backend: This is easier with the many-providers, many-clusters approach. You can get by with configuring K8s authentication once instead of repeating the process for each cluster path.
  2. Create a service account with TokenReview API permissions: Your setup will require a token bound to a specific Kubernetes ClusterRole (system:auth-delegator); see the manifest sketch after this list. This lets the backend perform delegated authentication and authorization checks through the TokenReview API. Without this step, your secrets backend can't confirm the validity of the tokens presented to it.
  3. Configure K8s token authentication: You retrieve the service account token and point the backend at the cluster's API server. Co-locating the secrets backend and operator in the cluster comes in handy here. When outbound calls aren't a concern, you only need to reference the cluster's local domain.
  4. Create policies: Policies grant access to the storage paths where your secrets live.
  5. Create an appropriately configured named role: Finally, you create roles bound to the External Secrets Operator's service account in the relevant namespace.
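
For step 2, a minimal manifest sketch might look like this (the service account name and namespace are hypothetical; system:auth-delegator is the built-in ClusterRole that permits delegated TokenReview checks):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: secrets-backend-auth          # hypothetical account the backend uses for token review
  namespace: vault
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: secrets-backend-tokenreview
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator         # allows delegated authn/authz checks via the TokenReview API
subjects:
  - kind: ServiceAccount
    name: secrets-backend-auth
    namespace: vault
```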

These last steps have a broader goal. As Rajith put it, if you have a named secret in a given namespace, someone without access to that namespace shouldn't be able to open a backdoor to its protected values. 

In other words, different roles link to different policies, while policies grant users permissions and are bound to namespace-specific service accounts. In this way, the associated services only get access to the secrets they need to run the application. For instance, a service that uses specific secrets to manage a database connection can't also retrieve unrelated encryption keys (or whatever else), just because it's pulling from the same store.
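
Concretely, building on the SecretStore sketch above, each namespace can get its own store bound to its own backend role and service account (all names hypothetical), so one team's services can't read another team's paths:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: vault-backend
  namespace: billing                           # a different team's namespace
spec:
  provider:
    vault:
      server: "https://vault.vault.svc:8200"
      path: "secret"
      version: "v2"
      auth:
        kubernetes:
          mountPath: "kubernetes"
          role: "billing-role"                 # the backend policy for this role only covers billing's paths
          serviceAccountRef:
            name: billing-secrets-sa           # exists only in the billing namespace
```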

Not all operators allow this type of separation, and it's well worth finding one that does. Sure, using a single instance may seem like the simplest strategy, but it's often easier to have multiple stores with independent backends. 

We advise against overly broadening your attack surface by keeping everything in Git. Instead, try a more measured approach. Commit only the secret stores for necessary applications and the associated namespaced references.

Vital lessons and pointers

Here are some final takeaways for working with secrets.

What to do before you have a secrets backend

If you still haven't fully set up your cluster, you can try using your initialization scripts to inject secrets at runtime. For instance, this might be the easiest (or only) way to bootstrap your backend.

Life when your backend is down

When your backend is down or sealed, your operator won't sync. You'll still be able to use the secrets you've already injected, but you'll have to fix the backend to access new values or changes. 

Use external storage

External storage helps you fix things quickly following failures. If one cluster's external storage goes down, you can point to another—the configs will still be there.

Don't abuse your root token

Only use your root token during the initial config, then disable it. This is not a rule you should break.

Distribute keys intelligently

Think carefully about how you'll distribute keys to teams, especially if they're in different time zones.

Use templates to manage CRDs

Templating common patterns can make path management easier, minimizing the time and effort you devote to config changes.
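
For instance, a small Helm-style template (entirely hypothetical chart and values) can stamp out ExternalSecret resources so a path convention is defined in one place:

```yaml
# templates/external-secret.yaml in a shared chart (hypothetical)
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: {{ .Values.app }}-credentials
  namespace: {{ .Release.Namespace }}
spec:
  secretStoreRef:
    name: vault-backend
    kind: SecretStore
  target:
    name: {{ .Values.app }}-credentials
  data:
    {{- range .Values.secretKeys }}
    - secretKey: {{ . }}
      remoteRef:
        key: apps/{{ $.Release.Namespace }}/{{ $.Values.app }}   # path pattern lives in one place
        property: {{ . }}
    {{- end }}
```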

Keep your toolchain up to date

Having issues getting your secrets operator to sync properly? Consider updating.

Last words

Managing K8s secrets with GitOps doesn't have to mean exposing yourself to hazards or getting stuck in a rut. There are plenty of flexible ways to get the job sorted safely.

Want to hear more? Rajith and Andre also discussed everything from the role of developer trust to the value of using K8s native practices. Get the full details by tuning into the webinar recording.