Securing Kubernetes Secrets with Vault



Kubernetes has become the de facto way of deploying modern applications, and doing so requires maintaining configuration files. One of the biggest challenges is storing the secrets these applications need to run and delivering them securely. At VMware’s Cloud Services Engagement Platform we prioritize security and go the extra mile to make our microservices more secure and less vulnerable.

What is a secret? 

“A Secret is an object that contains a small amount of sensitive data such as a password, a token, or a key. Such information might otherwise be put in a Pod specification or in a container image. Using a Secret means that you don’t need to include confidential data in your application code.” 

A simple example for using a secret looks like this:

apiVersion: v1
kind: Pod
metadata:
 name: mysql-db
spec:
 containers:
 - name: db
   image: mysql
   env:
   - name: DB_USER
     valueFrom:
       secretKeyRef:
         name: db-creds
         key: db_user
   - name: DB_PASSWORD
     valueFrom:
       secretKeyRef:
         name: db-creds
         key: db_password  
   - name: DB_HOST
     valueFrom:
       secretKeyRef:
         name: db-creds
         key: db_host
   command: ["/bin/sh"]
   args: ["-c","mysql -u $DB_USER -p$DB_PASSWORD -h $DB_HOST"]
 
---
apiVersion: v1
kind: Secret
metadata:
  name: db-creds
type: Opaque
data:
  db_user: YWRtaW4=
  db_password: aHVudGVyMg==
  db_host: bXlzcWwtaG9zdA==

By simply mounting this Secret, our application has access to these credentials and can now authenticate against the other components it needs, such as databases, external services, etc.
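It is worth noting that the values under `data:` in a Secret manifest are only base64-encoded, not encrypted; anyone who can read the object can recover the plaintext. A quick sketch in Python (the password is a made-up placeholder):

```python
import base64

# The values under `data:` are base64-encoded, nothing more.
plaintext = "s3cr3t-password"  # made-up placeholder
encoded = base64.b64encode(plaintext.encode()).decode()  # what goes in the manifest
decoded = base64.b64decode(encoded).decode()             # what any reader recovers

print(encoded)  # czNjcjN0LXBhc3N3b3Jk
print(decoded)  # s3cr3t-password
```

Encoding here is purely a transport format, which is exactly why the caveats in the next section matter.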

There are some downsides to secrets 

Kubernetes provides a warning about using secrets: 

Caution: 

“Kubernetes Secrets are, by default, stored unencrypted in the API server’s underlying data store (etcd). Anyone with API access can retrieve or modify a Secret, and so can anyone with access to etcd. Additionally, anyone who is authorized to create a Pod in a namespace can use that access to read any Secret in that namespace” 

This would allow anyone who gains read access to our namespace to read our secret. 

Another issue we face when maintaining our Kubernetes application is that the deployment/configuration files should be stored in a git repository, so that anyone who needs the application can download and install it. This poses a security threat, since our secrets are part of the configuration and would be exposed along with it.

Vault to the rescue 

HashiCorp came up with a solution for storing secrets called Vault. Its goal is to:

“Secure, store and tightly control access to tokens, passwords, certificates, encryption keys for protecting secrets and other sensitive data using a UI, CLI, or HTTP API.” 

Storing our secrets in Vault would give us the security that we’d like. But how would we access them? Kubernetes does not know how to read secrets from Vault.

Solution: 

Vault provides multiple authentication options, such as user/password, token, and Kubernetes authentication.

How does this authentication work? 

Every pod in Kubernetes has an identity for the processes it runs; this identity is provided by a serviceAccount.

Vault Configuration: 

We will use the last of these, Kubernetes authentication, which allows deployments run by a specific service account to perform Vault operations.

Let’s enable Vault’s Kubernetes authentication:

$ vault auth enable -path=kube-policy kubernetes 
 
# Create a policy which gives access to our secret: 
$ vault policy write myappp-policy - << EOF
path "secret/data/top-secret" {
    capabilities = ["read", "list"]
}
EOF

Next we’ll get our cluster and service account information: 

# The serviceAccount’s token is stored in a secret
$ export VAULT_SA_NAME=$(kubectl get sa my-sa \
  --output jsonpath="{.secrets[*]['name']}")

# Read the token
$ export SA_TOKEN=$(kubectl get secret $VAULT_SA_NAME \
  --output 'go-template={{ .data.token }}' | base64 --decode)

# Get the cluster's CA
$ export SA_CA_CRT=$(kubectl config view --raw --minify --flatten \
  --output 'jsonpath={.clusters[].cluster.certificate-authority-data}' | base64 --decode)

$ export K8S_HOST=$(kubectl config view --raw --minify --flatten \
  --output 'jsonpath={.clusters[].cluster.server}')
 
Give Vault our service account’s information:

$ vault write auth/kube-policy/config \
  token_reviewer_jwt="$SA_TOKEN" \
  kubernetes_host="$K8S_HOST" \
  kubernetes_ca_cert="$SA_CA_CRT" \
  issuer="https://kubernetes.default.svc.cluster.local"
 
Finally, bind the policy to the authentication (note the role is written under the kube-policy path we enabled above):

$ vault write auth/kube-policy/role/my-role \
  bound_service_account_names=my-sa \
  bound_service_account_namespaces=app-namespace \
  policies=myappp-policy \
  ttl=24h

Cluster Configuration: 

We need to give our service account permissions to review tokens: 

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tokenreview-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: my-sa
  namespace: default

Let’s take a look at what permissions the serviceAccount was given. The clusterRole system:auth-delegator has permissions for: 

$ kubectl describe clusterrole system:auth-delegator
 
Name:         system:auth-delegator
Labels:       kubernetes.io/bootstrapping=rbac-defaults
Annotations:  rbac.authorization.kubernetes.io/autoupdate: true
PolicyRule:
  Resources                                  Non-Resource URLs  Resource Names  Verbs
  ---------                                  -----------------  --------------  -----
  tokenreviews.authentication.k8s.io         []                 []              [create]
  subjectaccessreviews.authorization.k8s.io  []                 []              [create]

This permission gives the service account the ability to validate the JWT against the Kubernetes API and check that the token has not expired. For Vault to accept a login request:

  • The request needs to be run using the SA’s token 
  • The request needs to come from the specific namespace 
  • The token has to be valid (which means that if I delete the serviceAccount and recreate it, the token used in the past will be invalid)
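The serviceAccount token that Vault submits for review is a JWT, whose claims identify the account. As a purely illustrative sketch, here is how the claims segment of such a token can be decoded (the token below is hand-built and unsigned; real validation is performed by the TokenReview API, not by local decoding):

```python
import base64
import json

def jwt_payload(token: str) -> dict:
    """Decode the claims segment of a JWT (no signature verification)."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore the stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

# A hand-built token for illustration: header.payload.signature,
# with a fake signature that would fail any real verification.
claims = {
    "iss": "https://kubernetes.default.svc.cluster.local",
    "sub": "system:serviceaccount:default:my-sa",
}
header = base64.urlsafe_b64encode(b'{"alg":"RS256"}').rstrip(b"=").decode()
body = base64.urlsafe_b64encode(json.dumps(claims).encode()).rstrip(b"=").decode()
token = f"{header}.{body}.fake-signature"

print(jwt_payload(token)["sub"])  # system:serviceaccount:default:my-sa
```

The `sub` claim is what maps the token to `bound_service_account_names` in the Vault role.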

Let’s test this out:

$ kubectl describe sa my-sa
 
Name:                my-sa
Namespace:           default
Labels:              <none>
Annotations:         <none>
Image pull secrets:  <none>
Mountable secrets:   my-sa-token-tw5kp
Tokens:              my-sa-token-tw5kp
Events:              <none>
 
# We'll then get the token associated with this serviceAccount
$ kubectl describe secret my-sa-token-tw5kp
 
Name:         my-sa-token-tw5kp
Namespace:    default
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: my-sa
              kubernetes.io/service-account.uid: 6a4fa093-ab0d-44db-8c62-64d20433da3c
 
Type:  kubernetes.io/service-account-token
 
Data
====
ca.crt:     1025 bytes
namespace:  7 bytes
token:      eyJhbGciOiJSUzI1NiIsImtp...
 
# Now we can verify the validity of the token by creating a TokenReview object like this tr.yaml
apiVersion: authentication.k8s.io/v1
kind: TokenReview
metadata:
  name: auth-test
spec:
  token: eyJhbGciOiJSUzI1NiIsImtp...
 
# Next we'll apply it and view the output
$ kubectl apply -f tr.yaml -o yaml
apiVersion: authentication.k8s.io/v1
kind: TokenReview
metadata:
  name: auth-test
spec:
  token: eyJhbGciOiJSUzI1NiIsImtp...
status:
  audiences:
  - https://kubernetes.default.svc.cluster.local
  authenticated: true
  user:
    groups:
    - system:serviceaccounts
    - system:serviceaccounts:default
    - system:authenticated
    uid: 6a4fa093-ab0d-44db-8c62-64d20433da3c
    username: system:serviceaccount:default:my-sa

We can clearly see a few things:

  • The token is authenticated
  • Groups the user belongs to
  • The username corresponding to the token 

This is how Vault validates the token, and what enables the added security.

Deployment configuration: 

Example 1:

When the application is our own, we can get the secrets using some simple code (e.g. Python):

import hvac

VAULT_URL = "http://my-vault-address.com:8200"

client = hvac.Client(url=VAULT_URL)

# Read the serviceAccount token from its default mount path
with open('/var/run/secrets/kubernetes.io/serviceaccount/token') as f:
    jwt = f.read()

# Authenticate with the role we bound, against the auth path we enabled
client.auth_kubernetes("my-role", jwt, mount_point="kube-policy")

This uses the mounted Kubernetes token to authenticate with the role “my-role”; we can now read the secrets we need from Vault.
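With an authenticated client, a KV v2 secret can then be read, e.g. via hvac’s `client.secrets.kv.v2.read_secret_version`. One detail worth knowing is the double `data` nesting in the response: the actual key/value pairs live under `["data"]["data"]`. A sketch of that response shape, hardcoded here since the real call needs a running Vault (path and keys are illustrative placeholders):

```python
# Shape of the JSON returned by a KV v2 read such as:
#   client.secrets.kv.v2.read_secret_version(path="myapp/config")
# Hardcoded here for illustration; keys are placeholders.
response = {
    "data": {
        "data": {"db_user": "admin", "db_password": "hunter2"},
        "metadata": {"version": 1},
    }
}

secrets = response["data"]["data"]  # KV v2 nests the payload one level down
print(secrets["db_user"])  # admin
```

Forgetting the second `["data"]` is a common stumbling block when moving from KV v1 to KV v2.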

Example 2:

An application where we don’t control the source code (e.g. an open-source application).

A common Kubernetes pattern is deploying an init container, which lets us run a process before the application’s main process starts. What we will try to achieve is having the secret accessible to our application before it starts running. Our init container will use a vault image (or any container with the Vault CLI) to retrieve the secret from our Vault instance.

As an example, we’ll create a configMap containing an hcl file with instructions on how to connect to Vault, which role to use, and what to do with the secrets. In this case we will write them to a file.

apiVersion: v1 
kind: ConfigMap 
metadata: 
  name: vault-config 
data: 
  vault-config.hcl: | 
    exit_after_auth = true 
    pid_file = "/home/vault/pidfile" 
    auto_auth { 
        method "kubernetes" { 
            mount_path = "auth/kube-policy" 
            config = { 
                role = "my-role" 
            } 
        } 
        sink "file" { 
            config = { 
                path = "/home/vault/.vault-token" 
            } 
        } 
    } 
    template {
        destination = "/etc/secrets/cloudwatch-secrets"
        contents = <<EOT
    {{- with secret "secret/data/myapp/config" }}
    export AWS_ACCESS_KEY_ID='{{ .Data.data.aws_access_key_id }}'
    export AWS_SECRET_ACCESS_KEY='{{ .Data.data.aws_secret_access_key }}'
    {{- end }}
    EOT
    }

In addition, we’ll use a container that needs AWS credentials as environment variables.

apiVersion: v1 
kind: Pod 
metadata: 
  name: my-app 
spec: 
  serviceAccountName: my-sa 
  volumes: 
    - configMap: 
        items: 
          - key: vault-config.hcl 
            path: vault-config.hcl 
        name: vault-config 
      name: config 
    - emptyDir: {} 
      name: shared-data 
  
  initContainers: 
    - args: 
        - agent 
        - -config=/etc/vault/vault-config.hcl
        - -log-level=debug 
      env: 
        - name: VAULT_ADDR 
          value: http://EXTERNAL_VAULT_ADDR:8200 
      image: vault 
      name: vault-agent 
      volumeMounts: 
        - mountPath: /etc/vault 
          name: config 
        - mountPath: /etc/secrets 
          name: shared-data 
  
  containers:
    - image: cloudwatch_exporter-0.11.0
      name: cloudwatch-exporter
      command: ['/bin/bash', '-c', 'source /etc/secrets/cloudwatch-secrets && java -jar /cloudwatch_exporter.jar 9106 /config/config.yml']
      ports:
        - containerPort: 9106
      volumeMounts:
        - mountPath: /etc/secrets
          name: shared-data

Our init container pulls the credentials from Vault using the defined service account’s token, then writes them into a file that will look like this:

export AWS_ACCESS_KEY_ID='my-access-key' 
export AWS_SECRET_ACCESS_KEY='my-secret-key' 

Using the pod’s shared storage, our main container mounts the file containing these environment variables that the init container produced. We can now import these variables with the source command before the main process starts, allowing the application to authenticate.
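The `source` command is what imports the rendered file into the shell’s environment. If a process needed to consume the same file without a shell, the format is simple enough to parse directly; a small sketch (the file contents mirror the example above, and `parse_env_file` is a hypothetical helper):

```python
import shlex

# The file rendered by the init container, mirrored here as a string.
rendered = """\
export AWS_ACCESS_KEY_ID='my-access-key'
export AWS_SECRET_ACCESS_KEY='my-secret-key'
"""

def parse_env_file(text: str) -> dict:
    """Parse `export KEY='value'` lines into a dict (hypothetical helper)."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line.startswith("export "):
            continue
        key, _, value = line[len("export "):].partition("=")
        env[key] = shlex.split(value)[0]  # strips the shell quoting
    return env

print(parse_env_file(rendered)["AWS_ACCESS_KEY_ID"])  # my-access-key
```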

Using this technique we avoided using Kubernetes Secrets altogether but still made the credentials available to our application. This solution is based on a HashiCorp tutorial.

Other solutions

There are several other techniques for getting your secrets into Kubernetes without using Vault. These solutions let you store your secrets encrypted, and by using any of them you can keep your secrets safe.

In conclusion

Security should always be a top priority and keeping credentials as secure as possible is a major part of this. When implementing a solution these goals should be kept in mind.
