In my Kubernetes cluster I have FluxCD managing all components. FluxCD uses SOPS to decrypt all the passwords. This results in a declaration like this:
---
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
metadata:
  name: load-balancer-controller
  namespace: flux-system
spec:
  interval: 1m
  ref:
    branch: main
  url: https://github.com/fantasyaccount/load-balancer-controller.git
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: load-balancer-controller
  namespace: flux-system
spec:
  decryption:
    provider: sops
    secretRef:
      name: sops-gpg
  interval: 1m
  path: "./deployment"
  prune: true
  sourceRef:
    kind: GitRepository
    name: load-balancer-controller
Within the load-balancer-controller repo I can use SOPS-encrypted secrets. That is clear to me.
However, is it possible to use SOPS as well to encrypt the secret token that grants access to the repo itself? I know I can use kubectl create secret ... to add the secret token to Kubernetes, but that is not what I want. I would like to use a SOPS-encrypted token here as well.
The challenge with encrypting the secret for the initial GitRepository is then defining what the cluster provisioning process looks like, as this is a bit of a chicken-and-egg problem.
One way I can see this working is to install Flux with a source that supports contextual authentication, such as Bucket. With that, you could store in an S3 bucket the encrypted Git secret, the GitRepository pointing to your current repository, and the Kustomization that applies it to your cluster.
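As a rough sketch of that layout (the bucket name, region, and path here are assumptions for illustration, not taken from your setup), the bootstrap source and the Kustomization that reconciles it could look like this:

---
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: Bucket
metadata:
  name: flux-bootstrap
  namespace: flux-system
spec:
  interval: 5m
  provider: aws                          # contextual auth via the node's IAM role, no credentials in Git
  bucketName: my-flux-bootstrap-bucket   # hypothetical bucket holding the encrypted Git secret + manifests
  endpoint: s3.amazonaws.com
  region: eu-west-2
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: flux-bootstrap
  namespace: flux-system
spec:
  decryption:
    provider: sops
    secretRef:
      name: sops-gpg
  interval: 5m
  path: "./"
  prune: true
  sourceRef:
    kind: Bucket
    name: flux-bootstrap

The bucket would then contain the SOPS-encrypted Git token secret, the GitRepository pointing at load-balancer-controller, and the Kustomization that applies it.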
Here's more information about the contextual authentication for EKS:
https://fluxcd.io/docs/components/source/buckets/#aws-ec2-example
Just note that with this approach, your cluster deployment pipeline would have to store your GPG key, as you would need to deploy that secret before (or soon after) you install Flux into the cluster.
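For that step, a minimal sketch of pre-creating the decryption key secret (assuming KEY_FP holds your GPG key fingerprint) would be:

gpg --export-secret-keys --armor "${KEY_FP}" |
kubectl create secret generic sops-gpg \
  --namespace=flux-system \
  --from-file=sops.asc=/dev/stdin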
I am using the RabbitMQ Kubernetes operator for a dev instance and it works great. What isn't great is that the credentials generated by the operator are different for everyone on the team (I'm guessing it generates random credentials on init).
Is there a way to provide a secret and have the operator use those credentials in place of the generated ones?
YAML:
apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
  name: rabbitmq-cluster-deployment
  namespace: message-brokers
spec:
  replicas: 1
  service:
    type: LoadBalancer
Ideally, I can just configure some YAML to point to a secret and go from there. But I'm struggling to find the documentation around this piece.
Example Username/Password generated:
user: default_user_wNSgVBIyMIElsGRrpwb
pass: cGvQ6T-5gRt0Rc4C3AdXdXDB43NRS6FJ
I figured it out. Looks like you can just add a secret configured like the example below and it'll work. I figured this out by reverse-engineering what the operator generated, so please chime in if this is bad.
The big thing to remember is the default_user.conf setting. Other than that, it's just a secret.
kind: Secret
apiVersion: v1
metadata:
  name: rabbitmq-cluster-deployment-default-user
  namespace: message-brokers
stringData:
  default_user.conf: |
    default_user = user123
    default_pass = password123
  password: password123
  username: user123
type: Opaque
rabbitmq-cluster-deployment-default-user comes from the RabbitmqCluster metadata.name + -default-user (see YAML in question).
I would like to avoid keeping secrets in Git as a best practice, and store them in AWS SSM instead.
Is there any way to get the value from AWS Systems Manager and use it to create a Kubernetes Secret?
I managed to create the secret by fetching the value from AWS Parameter Store using the following script.
cat <<EOF | ./kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: kiali
  namespace: istio-system
type: Opaque
data:
  passphrase: $(echo -n "$(aws ssm get-parameter --name /dev/${env_name}/kubernetes/kiali_password --with-decryption --region=eu-west-2 --output text --query Parameter.Value)" | base64 -w0)
  username: $(echo -n "admin" | base64 -w0)
EOF
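An equivalent sketch that lets kubectl handle the base64 encoding for you (same parameter path and names as above) would be:

kubectl create secret generic kiali \
  --namespace istio-system \
  --from-literal=username=admin \
  --from-literal=passphrase="$(aws ssm get-parameter \
      --name /dev/${env_name}/kubernetes/kiali_password \
      --with-decryption --region eu-west-2 \
      --output text --query Parameter.Value)"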
For sure, the twelve-factor methodology requires externalizing configuration outside the codebase.
For your question, there is an attempt to integrate AWS Secrets Manager to be used as the single source of truth for secrets.
You just need to deploy the controller:
helm repo add secret-inject https://aws-samples.github.io/aws-secret-sidecar-injector/
helm repo update
helm install secret-inject secret-inject/secret-inject
Then annotate your deployment template with 2 annotations:
template:
  metadata:
    annotations:
      secrets.k8s.aws/sidecarInjectorWebhook: enabled
      secrets.k8s.aws/secret-arn: arn:aws:secretsmanager:us-east-1:123456789012:secret:database-password-hlRvvF
The other steps are explained here.
But I think I have highlighted the most important steps, which clarify the approach.
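For context, here is a minimal sketch of where those annotations sit inside a full Deployment (the app name and image are made up for illustration):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                  # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        secrets.k8s.aws/sidecarInjectorWebhook: enabled
        secrets.k8s.aws/secret-arn: arn:aws:secretsmanager:us-east-1:123456789012:secret:database-password-hlRvvF
    spec:
      containers:
        - name: my-app
          image: my-app:latest  # hypothetical image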
You can use GoDaddy external secrets. Installing it creates a controller, and the controller will sync the AWS secrets at specific intervals. After creating the secrets in AWS SSM and installing GoDaddy external secrets, you have to create an ExternalSecret type as follows:
apiVersion: 'kubernetes-client.io/v1'
kind: ExternalSecret
metadata:
  name: cats-and-dogs
secretDescriptor:
  backendType: secretsManager
  data:
    - key: cats-and-dogs/mysql-password
      name: password
This will create a Kubernetes secret for you. That secret can be exposed to your service as an environment variable or through a volume mount.
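For example, a container could read the generated secret as an env var like this (the key name matches the ExternalSecret above; the env var name is just an example):

env:
  - name: MYSQL_PASSWORD
    valueFrom:
      secretKeyRef:
        name: cats-and-dogs
        key: password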
Use Kubernetes External Secrets. The solution below uses Secrets Manager (not SSM) but serves the purpose.
Deploy using Helm
$ helm repo add external-secrets https://external-secrets.github.io/kubernetes-external-secrets/
$ helm install kubernetes-external-secrets external-secrets/kubernetes-external-secrets
Create a new secret with the required parameters in AWS Secrets Manager.
For example, create a secret named "dev/db-cred" with the values below.
{"username":"user01","password":"pwd#123"}
Secret.YAML:
apiVersion: kubernetes-client.io/v1
kind: ExternalSecret
metadata:
  name: my-kube-secret
  namespace: my-namespace
spec:
  backendType: secretsManager
  region: us-east-1
  dataFrom:
    - dev/db-cred
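Once synced, the controller should produce a plain Kubernetes Secret roughly like the sketch below, with one key per field of the JSON stored in Secrets Manager (values shown as placeholders):

apiVersion: v1
kind: Secret
metadata:
  name: my-kube-secret
  namespace: my-namespace
type: Opaque
data:
  username: <base64 of user01>
  password: <base64 of pwd#123>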
Refer to it in the Helm values file as below:
- name: DB_PASSWORD
  valueFrom:
    secretKeyRef:
      name: my-kube-secret
      key: password
I know you can use ConfigMap properties as environment variables in the pod spec, but can you use environment variables declared in the pod spec inside the ConfigMap?
For example:
I have a secret password which I wish to access in my ConfigMap application.properties. The secret looks like so:
apiVersion: v1
data:
  pw: THV3OE9vcXVpYTll==
kind: Secret
metadata:
  name: foo
  namespace: foo-bar
type: Opaque
so inside the pod spec I reference the secret as an env var. The configMap will be mounted as a volume from within the spec:
env:
  - name: PASSWORD
    valueFrom:
      secretKeyRef:
        name: foo
        key: pw
...
and inside my configMap I can then reference the secret value like so:
apiVersion: v1
kind: ConfigMap
metadata:
  name: application.properties
  namespace: foo-bar
data:
  application.properties: |
    secret.password=$(PASSWORD)
Anything I've found online is just about consuming configMap values as env vars and doesn't mention consuming env vars in configMap values.
Currently this is not a Kubernetes feature.
There is a closed issue requesting this feature, and it's a somewhat controversial topic because the discussion kept going for many months after it was closed:
Reference Secrets from ConfigMap #79224
Referencing the closing comment:
Best practice is to not use secret values in envvars, only as mounted files. if you want to keep all config values in a single object, you can place all the values in a secret object and reference them that way.
Referencing secrets via configmaps is a non-goal... it confuses whether things mounting or injecting the config map are mounting confidential values.
I suggest you read the entire thread to understand the reasons and maybe find another approach for your environment to get these variables.
"OK, but this is Real Life, I need to make this work"
Then I recommend this workaround:
Import Data to Config Map from Kubernetes Secret
It performs the substitution with a shell command in the entrypoint of the container.
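A minimal sketch of that idea, assuming the ConfigMap is mounted as a template at /config-template, the app reads /config/application.properties, and the image ships a shell plus sed (container name and image are hypothetical):

containers:
  - name: app                           # hypothetical container name
    image: my-app:latest                # hypothetical image
    env:
      - name: PASSWORD
        valueFrom:
          secretKeyRef:
            name: foo
            key: pw
    command: ["/bin/sh", "-c"]
    args:
      - |
        # replace the $(PASSWORD) placeholder from the mounted template with the real value
        sed "s|\$(PASSWORD)|${PASSWORD}|g" /config-template/application.properties > /config/application.properties
        exec java -jar /app.jar
    volumeMounts:
      - name: config-template
        mountPath: /config-template
      - name: config
        mountPath: /config
volumes:
  - name: config-template
    configMap:
      name: application.properties
  - name: config
    emptyDir: {}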
I am trying to avoid Kubernetes secrets being viewable by any user.
I tried Sealed Secrets, but that only hides the secrets that are stored in version control.
As soon as I apply the secret, I can see it using the command below:
kubectl get secret mysecret -o yaml
This command still shows the base64-encoded form of the secret.
How do I prevent someone from seeing the secret (even in base64 form) with the simple command above?
You can use HashiCorp Vault or kubernetes-external-secrets (https://github.com/godaddy/kubernetes-external-secrets).
Or if you just want to restrict access, you should create a read-only user and restrict that user's access to the secret using a Role and RoleBinding.
Then if anyone tries to describe the secret, they will get an access denied error.
Sample code:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: test-secrets
  namespace: default
rules:
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
      - list
      - watch
      - create
      - delete
  - apiGroups:
      - ""
    resources:
      - pods/exec
    verbs:
      - create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: test-secrets
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: test-secrets
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: demo
The above role has no access to secrets. Hence the demo user gets access denied.
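You can verify the restriction with impersonation, for example:

kubectl auth can-i get secrets --as demo --namespace default
# prints "no", since the Role above does not grant any access to secrets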
There is no way to accomplish this with Kubernetes' built-in tools alone. You will always have to rely on a third-party tool.
HashiCorp Vault is one very commonly used solution. It is very powerful and supports some very nice features, like dynamic secrets or envelope encryption, but it can also get very complex in terms of configuration. So you need to decide for yourself what kind of solution you need.
I would recommend Sealed Secrets. It encrypts your secrets, and you can push the encrypted secrets safely to your repository. It does not have such a big feature list, but it does exactly what you described.
You can inject HashiCorp Vault secrets into Kubernetes pods via init containers and keep them up to date with a sidecar container.
More details here: https://www.hashicorp.com/blog/injecting-vault-secrets-into-kubernetes-pods-via-a-sidecar/
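With the Vault Agent injector, this mostly comes down to pod annotations, roughly like the sketch below (the Vault role name and secret path are assumptions for illustration):

spec:
  template:
    metadata:
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/role: "my-app"                                          # hypothetical Vault role
        vault.hashicorp.com/agent-inject-secret-db-creds: "secret/data/db-creds"    # hypothetical secret path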
I'd like to deploy pods into my GKE Kubernetes cluster that use images from a private, third-party Docker registry (not GCP's private Docker registry).
How do I provide my GKE Kubernetes cluster with credentials to that private repository so that the images can be pulled when required?
You need to create a secret that holds the credentials needed to pull images from the private registry. This process is explained in the Kubernetes documentation, but it looks like:
kubectl create secret docker-registry regsecret --docker-server=<your-registry-server> --docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email>
Then, once your secret has been created, you need to specify that you want to use it to pull images from the registry when creating the pod's containers, with the imagePullSecrets key containing the name of the secret created above, like:
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
    - name: private-reg-container
      image: <your-private-image>
  imagePullSecrets:
    - name: regsecret
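If you would rather not add imagePullSecrets to every pod spec, you can also attach the secret to the namespace's default service account, for example:

kubectl patch serviceaccount default \
  -p '{"imagePullSecrets": [{"name": "regsecret"}]}'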