How to create kubernetes secret with multiple values for one key? - mongodb

This is how I'm trying to create a secret for my kubernetes mongodb, which gets deployed using the bitnami mongodb helm chart:
apiVersion: v1
kind: Secret
metadata:
  name: mongodb-secret
  namespace: mongodb
  labels:
    app.kubernetes.io/component: mongodb
type: Opaque
data:
  mongodb-root-password: 'encoded value'
  mongodb-passwords: '???'
  mongodb-metrics-password: 'encoded value'
  mongodb-replica-set-key: 'encoded value'
The helm chart values.yaml says:
auth:
  ## MongoDB(®) custom users and databases
  ## ref: https://github.com/bitnami/containers/tree/main/bitnami/mongodb#creating-a-user-and-database-on-first-run
  ## @param auth.usernames List of custom users to be created during the initialization
  ## @param auth.passwords List of passwords for the custom users set at `auth.usernames`
  ## @param auth.databases List of custom databases to be created during the initialization
  ##
  usernames: []
  passwords: []
  databases: []
  ## @param auth.existingSecret Existing secret with MongoDB(®) credentials (keys: `mongodb-passwords`, `mongodb-root-password`, `mongodb-metrics-password`, `mongodb-replica-set-key`)
  ## NOTE: When it's set the previous parameters are ignored.
  ##
  existingSecret: ""
So passwords is an array of strings for each username and each database.
How do I have to implement these multiple passwords in my secret?
The helm template should give me a hint, but I don't understand it: secret.yaml
Or is it a simple string with all passwords separated by , and encoded?

Should be something like:
auth:
  usernames: ["bob", "alice"]
  passwords: ["bobpass", "alicepass"]
  databases: ["bobdb", "alicedb"]
If you want to pass these via the --set flag on the CLI instead, you should be able to use curly braces, as per this comment: https://github.com/helm/helm/issues/1987#issuecomment-280497496 - like:
--set auth.usernames={bob,alice},auth.passwords={bobpass,alicepass},auth.databases={bobdb,alicedb}
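For example, a full install command could look roughly like this (the release name my-release and the bitnami/mongodb chart reference are placeholders; the braces are quoted so the shell doesn't expand them):
helm install my-release bitnami/mongodb \
  --namespace mongodb \
  --set 'auth.usernames={bob,alice}' \
  --set 'auth.passwords={bobpass,alicepass}' \
  --set 'auth.databases={bobdb,alicedb}'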
This would produce a secret like the following, which you can check with the helm template command:
---
# Source: mongodb/templates/secrets.yaml
apiVersion: v1
kind: Secret
metadata:
  name: release-name-mongodb
  namespace: default
  labels:
    app.kubernetes.io/name: mongodb
    helm.sh/chart: mongodb-13.4.4
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: mongodb
type: Opaque
data:
  mongodb-root-password: "Uk1tZThhYzNFZg=="
  mongodb-passwords: "Ym9icGFzcyxhbGljZXBhc3M="
---
You can decode mongodb-passwords using:
echo -n Ym9icGFzcyxhbGljZXBhc3M= | base64 -d
and notice that it decodes to: bobpass,alicepass
Also note that there seems to be an option to set the mongodb.createSecret flag to false and create that secret manually (which may be more secure, depending on the exact workflow).
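If you do create the secret manually, a minimal sketch (assuming the key names the chart expects, with placeholder passwords) is to let kubectl do the base64 encoding for you:
kubectl -n mongodb create secret generic mongodb-secret \
  --from-literal=mongodb-root-password='rootpass' \
  --from-literal=mongodb-passwords='bobpass,alicepass' \
  --from-literal=mongodb-metrics-password='metricspass' \
  --from-literal=mongodb-replica-set-key='replicasetkey'
You would then point auth.existingSecret at mongodb-secret in the chart values.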

Related

Approach for configmap and secret for a yaml file

I have a YAML file which needs to be loaded into my pods. This file will contain both sensitive and non-sensitive data, and it needs to be present at a path which I have set as an env in my containers.
env:
  - name: CONFIG_PATH
    value: /myapp/config/config.yaml
If my understanding is right, a ConfigMap is the right choice, but then I am forced to put sensitive data like the password as plain text in the values.yaml of the Helm chart.
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-config
  labels:
    app: {{ .Release.Name }}-config
data:
  config.yaml: |
    configuration:
      settings:
        Password: "{{.Values.config.password}}"
        Username: myuser
values.yaml:
config:
  password: "mypassword"
I mounted the above ConfigMap as follows:
volumeMounts:
  - name: {{ .Release.Name }}-config
    mountPath: /myapp/config/
So I wanted to try a Secret. If I use a Secret, it is loaded as environment variables inside the pod, but it does not end up in this config.yaml file.
If I convert the above YAML file into a Secret instead of a ConfigMap, do I have to convert the entire config.yaml into a base64-encoded secret? My YAML file has many more entries, so that would look cumbersome, and I don't see it as a solution.
If I use the Secret's stringData field, the content is taken as-is without base64 encoding.
How do I make sure that config.yaml is loaded into the pods without the password being exposed in values.yaml? Is there a way to combine a ConfigMap and a Secret?
I read about projected volumes, but I don't see how they cover merging a ConfigMap and Secrets into a single config.yaml.
Any help would be appreciated.
Kubernetes has no real way to construct files out of several parts. You can embed an entire (small) file in a ConfigMap or a Secret, but you can't ask the cluster to assemble a file out of parts in multiple places.
In Helm, one thing you can do is to put the configuration-file data into a helper template
{{- define "config.yaml" -}}
configuration:
  settings:
    Password: "{{.Values.config.password}}"
    Username: myuser
{{ end -}}
In the ConfigMap you can use this helper template rather than embedding the content directly
apiVersion: v1
kind: ConfigMap
metadata: { ... }
data:
  config.yaml: |
{{ include "config.yaml" . | indent 4 }}
If you move it to a Secret you do in fact need to base64 encode it. But with the helper template that's just a matter of invoking the template and encoding the result.
apiVersion: v1
kind: Secret
metadata: { ... }
data:
  config.yaml: {{ include "config.yaml" . | b64enc }}
If it's possible to set properties in this file directly via environment variables (like Spring properties), or to insert environment-variable references in the file (like a Ruby ERB file), that could let you put the bulk of the file into a ConfigMap but use a Secret for specific values; you would need a little more wiring to also make the environment variables available.
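As a rough sketch of that last idea, the bulk of the file could come from a ConfigMap volume while the password arrives as a Secret-backed environment variable; the names myapp, myapp-secret, and DB_PASSWORD below are made up for illustration, and it assumes the application itself knows how to pick up the variable (or expand a placeholder in the file):
containers:
  - name: myapp
    image: myapp:latest
    env:
      - name: DB_PASSWORD             # hypothetical variable the app reads instead of the file value
        valueFrom:
          secretKeyRef:
            name: myapp-secret        # hypothetical Secret holding only the password
            key: password
    volumeMounts:
      - name: config
        mountPath: /myapp/config/
volumes:
  - name: config
    configMap:
      name: myapp-config              # hypothetical ConfigMap with the non-sensitive parts of config.yaml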
You briefly note a concern around passing the credential as a Helm value. This does in fact require having it in plain text at deploy time, and an operator could helm get values later to retrieve it. If this is a problem, you'll need some other path to inject or retrieve the secret value.

RabbitMQ Kubernetes Operator - Set Username and Password with Secret

I am using the RabbitMQ Kubernetes operator for a dev-instance and it works great. What isn't great is that the credentials generated by the operator are different for everyone on the team (I'm guessing it generates random creds upon init).
Is there a way to provide a secret and have the operator use those credentials in place of the generated ones?
Yaml:
apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
  name: rabbitmq-cluster-deployment
  namespace: message-brokers
spec:
  replicas: 1
  service:
    type: LoadBalancer
Ideally, I could just configure some YAML to point to a secret and go from there, but I'm struggling to find the documentation around this piece.
Example Username/Password generated:
user: default_user_wNSgVBIyMIElsGRrpwb
pass: cGvQ6T-5gRt0Rc4C3AdXdXDB43NRS6FJ
I figured it out. Looks like you can just add a secret configured like the below example and it'll work. I figured this out by reverse engineering what the operator generated. So, please chime in if this is bad.
The big thing to remember is the default_user.conf setting. Other than that, it's just a secret.
kind: Secret
apiVersion: v1
metadata:
  name: rabbitmq-cluster-deployment-default-user
  namespace: message-brokers
stringData:
  default_user.conf: |
    default_user = user123
    default_pass = password123
  password: password123
  username: user123
type: Opaque
rabbitmq-cluster-deployment-default-user comes from the RabbitmqCluster metadata.name + -default-user (see the YAML in the question).
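To double-check the credentials the operator ends up with, you can read them back out of that secret; this is just generic kubectl, nothing operator-specific:
kubectl -n message-brokers get secret rabbitmq-cluster-deployment-default-user \
  -o jsonpath='{.data.username}' | base64 -d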

How to fix Helm namespaces when the namespace was specified in the templates before, but is now set via the helm -n namespace flag

Some time ago we deployed many different releases where we specified the namespaces in the templates themselves, e.g.:
apiVersion: v1
kind: ConfigMap
metadata:
  name: secret-database-config
  namespace: {{ .Release.Name }}
  labels:
    app: secret-database-config
data:
  POSTGRES_HOST: 123
  ...
But we have now realized that this is not the correct approach; instead you should use the -n namespace flag (see here).
In general, templates should not define a namespace. This is because Helm installs objects into the namespace provided with the --namespace flag. By omitting this information, it also provides templates with some flexibility for post-render operations (like helm template | kubectl create --namespace foo -f -)
So if we fix our files
apiVersion: v1
kind: ConfigMap
metadata:
  name: secret-database-config
  labels:
    app: secret-database-config
data:
  POSTGRES_HOST: 123
  ...
and run now:
helm upgrade --install --debug -n myproject123 -f helm/configs/myproject123.yaml myproject123 helm
We get the following errors:
history.go:56: [debug] getting history for release myproject123
Release "myproject123" does not exist. Installing it now.
install.go:173: [debug] Original chart version: ""
install.go:190: [debug] CHART PATH: /Users/myuser/coding/myrepo/helm
Error: rendered manifests contain a resource that already exists. Unable to continue with install: Namespace "myproject123" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "myproject123": current value is "default"
helm.go:81: [debug] Namespace "myproject123" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "myproject123": current value is "default"
rendered manifests contain a resource that already exists. Unable to continue with install
helm.sh/helm/v3/pkg/action.(*Install).Run
/private/tmp/helm-20210310-44407-1006esy/pkg/action/install.go:276
main.runInstall
/private/tmp/helm-20210310-44407-1006esy/cmd/helm/install.go:242
main.newUpgradeCmd.func2
/private/tmp/helm-20210310-44407-1006esy/cmd/helm/upgrade.go:115
github.com/spf13/cobra.(*Command).execute
/Users/brew/Library/Caches/Homebrew/go_mod_cache/pkg/mod/github.com/spf13/cobra@v1.1.1/command.go:850
github.com/spf13/cobra.(*Command).ExecuteC
/Users/brew/Library/Caches/Homebrew/go_mod_cache/pkg/mod/github.com/spf13/cobra@v1.1.1/command.go:958
github.com/spf13/cobra.(*Command).Execute
/Users/brew/Library/Caches/Homebrew/go_mod_cache/pkg/mod/github.com/spf13/cobra@v1.1.1/command.go:895
main.main
/private/tmp/helm-20210310-44407-1006esy/cmd/helm/helm.go:80
runtime.main
/usr/local/Cellar/go/1.16/libexec/src/runtime/proc.go:225
runtime.goexit
/usr/local/Cellar/go/1.16/libexec/src/runtime/asm_amd64.s:1371
make: *** [ns_upgrade] Error 1
Any ideas how this can be fixed?
It is not possible for us to delete everything and then install it again, due to downtime (and the number of projects we have already deployed).
Use {{ .Release.Namespace }} instead of {{ .Release.Name }}. Then you will be able to override the namespace during installation via the CLI.
apiVersion: v1
kind: ConfigMap
metadata:
  name: secret-database-config
  namespace: {{ .Release.Namespace }}
  labels:
    app: secret-database-config
data:
  POSTGRES_HOST: 123
  ...
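If you want to sanity-check which namespace the rendered manifests will carry before actually upgrading, a quick sketch (reusing the release name and chart path from the question) could be:
helm template -n myproject123 -f helm/configs/myproject123.yaml myproject123 helm | grep -i "namespace:"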

How to create a Kubernetes Secret from a parameter stored in AWS Systems Manager

I would like to avoid keeping secrets in Git as a best practice, and store them in AWS SSM instead.
Is there any way to get the value from AWS Systems Manager and use it to create a Kubernetes Secret?
I managed to create the secret by fetching the value from the AWS Parameter Store using the following script:
cat <<EOF | ./kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: kiali
  namespace: istio-system
type: Opaque
data:
  passphrase: $(echo -n "`aws ssm get-parameter --name /dev/${env_name}/kubernetes/kiali_password --with-decryption --region=eu-west-2 --output text --query Parameter.Value`" | base64 -w0)
  username: $(echo -n "admin" | base64 -w0)
EOF
Indeed, the twelve-factor methodology requires externalizing configuration outside the codebase.
For your question, there is an attempt to integrate AWS Secrets Manager to be used as the single source of truth for Secrets.
You just need to deploy the controller:
helm repo add secret-inject https://aws-samples.github.io/aws-secret-sidecar-injector/
helm repo update
helm install secret-inject secret-inject/secret-inject
Then annotate your deployment template with 2 annotations:
template:
  metadata:
    annotations:
      secrets.k8s.aws/sidecarInjectorWebhook: enabled
      secrets.k8s.aws/secret-arn: arn:aws:secretsmanager:us-east-1:123456789012:secret:database-password-hlRvvF
Other steps are explained here, but I think I have highlighted the most important ones, which should clarify the approach.
You can use GoDaddy external-secrets. Installing it creates a controller, and the controller will sync the AWS secrets at specific intervals. After creating the secrets in AWS SSM and installing GoDaddy external-secrets, you have to create an ExternalSecret resource as follows:
apiVersion: 'kubernetes-client.io/v1'
kind: ExternalSecret
metadata:
  name: cats-and-dogs
secretDescriptor:
  backendType: secretsManager
  data:
    - key: cats-and-dogs/mysql-password
      name: password
This will create a Kubernetes Secret for you. That secret can be exposed to your service as an environment variable or through a volume mount.
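For example, consuming it as an environment variable could look roughly like this (MYSQL_PASSWORD is just an illustrative variable name; the secret name cats-and-dogs and key password come from the ExternalSecret above):
env:
  - name: MYSQL_PASSWORD
    valueFrom:
      secretKeyRef:
        name: cats-and-dogs
        key: password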
Use Kubernetes External Secrets. The solution below uses Secrets Manager (not SSM) but serves the purpose.
Deploy using Helm
$ helm repo add external-secrets https://external-secrets.github.io/kubernetes-external-secrets/
$ helm install kubernetes-external-secrets external-secrets/kubernetes-external-secrets
Create a new secret with the required parameters in AWS Secrets Manager:
For example, create a secret named "dev/db-cred" with the values below.
{"username":"user01","password":"pwd#123"}
secret.yaml:
apiVersion: kubernetes-client.io/v1
kind: ExternalSecret
metadata:
  name: my-kube-secret
  namespace: my-namespace
spec:
  backendType: secretsManager
  region: us-east-1
  dataFrom:
    - dev/db-cred
Refer to it in the Helm values file as below:
- name: DB_PASSWORD
  valueFrom:
    secretKeyRef:
      name: my-kube-secret
      key: password

Secret is not decoding properly using Kubernetes Secrets

I am using Kubernetes to deploy my Grafana dashboard and I am trying to use Kubernetes Secrets for saving the Grafana admin-password. Here is my YAML file for the secret:
apiVersion: v1
kind: Secret
metadata:
  name: $APP_INSTANCE_NAME-grafana
  labels:
    app.kubernetes.io/name: $APP_INSTANCE_NAME
    app.kubernetes.io/component: grafana
type: Opaque
data:
  # By default, admin-user is set to `admin`
  admin-user: YWRtaW4=
  admin-password: "$GRAFANA_GENERATED_PASSWORD"
The value for GRAFANA_GENERATED_PASSWORD is base64-encoded and exported like:
export GRAFANA_GENERATED_PASSWORD="$(echo -n $PASSWORD | base64)"
where PASSWORD is a variable which I exported on my machine like:
export PASSWORD=qwerty123
I am trying to pass the value of GRAFANA_GENERATED_PASSWORD into the secret YAML file like:
envsubst '$GRAFANA_GENERATED_PASSWORD' > "grafana_secret.yaml"
The YAML file after substituting the base64-encoded value looks like:
apiVersion: v1
kind: Secret
metadata:
  name: kafka-monitor-grafana
  labels:
    app.kubernetes.io/name: kafka-monitor
    app.kubernetes.io/component: grafana
type: Opaque
data:
  # By default, admin-user is set to `admin`
  admin-user: YWRtaW4=
  admin-password: "cXdlcnR5MTIz"
After deploying all my objects I couldn't log in to my dashboard using the password qwerty123, which is encoded properly.
But when I encode my password like:
export GRAFANA_GENERATED_PASSWORD="$(echo -n 'qwerty123' | base64)"
it works properly and I can log in to my dashboard using the password qwerty123.
It looks like the problem occurs when I encode my password using a variable,
but I have to encode my password using a variable.
As mentioned in Pratheesh's comment, after deploying Grafana for the first time the persistent volume was not deleted/recreated, and the grafana.db file that contains the Grafana dashboard password still keeps the old password.
In order to solve this, the PersistentVolume (PV) needs to be deleted before applying the secret with the new password.
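As a minimal sketch of that cleanup (the PVC name and namespace below are placeholders; use whatever your Grafana deployment actually created):
kubectl -n <namespace> delete pvc <grafana-pvc-name>
# delete the bound PV as well if its reclaim policy is Retain
kubectl delete pv <bound-pv-name>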