Kubernetes Secrets per environment

I'm using a Helm chart to deploy pods to multiple environments, and I would like to have one secret file for each environment (dev, sit, and so on). I have created a secrets.yaml file which references the values.yaml of each environment.
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
type: Opaque
data:
  DB_URL: {{ .Values.secret.db.url }}
And the values.yaml for each environment looks like:
templates:
  env:
    - name: DB_URL
      valueFrom:
        secretKeyRef:
          name: my-secret
          key: DB_URL
secret:
  db:
    url: <base64Encoded_value>
The secret is not getting applied in the environment. What am I doing wrong here?

First of all, is the value actually being resolved from values.yaml by the Secret template? Try running helm template to render the chart locally and inspect the result.
If that renders as expected, it would help to see how you are consuming the secret in your environment.
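One common pitfall worth checking: the data: field of a Secret must contain base64-encoded values. Instead of storing pre-encoded strings in values.yaml, you can keep them plain and let Helm encode them; a minimal sketch (file path and values flag are assumptions, adjust to your chart):

```yaml
# templates/secrets.yaml -- let Helm base64-encode the plain value
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
type: Opaque
data:
  DB_URL: {{ .Values.secret.db.url | b64enc | quote }}
```

You can then render per environment with something like `helm template . -f values-dev.yaml` and confirm the Secret looks right before installing.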

Related

How to get or call secrets from hashicorp vault using Helm

I have a use case: currently I'm using Helm on an on-premise Kubernetes cluster, where all of my environment variables and secrets are stored in Helm itself, but now I want to store them in HashiCorp Vault.
This is totally new to me and I'm having a hard time making it work.
The use case is: how can we use HashiCorp Vault to store the values that Helm currently manages, and once they are stored, how can we read them back using Helm itself?
Any help will be greatly appreciated.
To use external-secrets.io to sync secrets from HashiCorp Vault into Kubernetes Secrets and consume them as environment variables in a pod, you can follow these steps:
Install the external-secrets operator into your Kubernetes cluster (it is distributed as a Helm chart):
helm repo add external-secrets https://charts.external-secrets.io
helm install external-secrets external-secrets/external-secrets -n external-secrets --create-namespace
Create an ExternalSecret that materializes a Kubernetes Secret from Vault:
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: mysecret
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: vault-backend   # a SecretStore configured for your Vault server
    kind: SecretStore
  target:
    name: mysecret        # the Kubernetes Secret to create
  data:
    - secretKey: USERNAME
      remoteRef:
        key: secret/myapp
        property: username
    - secretKey: PASSWORD
      remoteRef:
        key: secret/myapp
        property: password
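In current external-secrets.io releases, the connection to Vault lives in a separate SecretStore resource that the ExternalSecret references by name. A minimal sketch, assuming token auth and a Vault address you would replace with your own:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: vault-backend
spec:
  provider:
    vault:
      server: "https://vault.example.com:8200"  # assumption: your Vault address
      path: "secret"                            # KV mount point
      version: "v2"                             # KV engine version
      auth:
        tokenSecretRef:
          name: vault-token   # a Secret holding a Vault token
          key: token
```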
Create a Kubernetes deployment that consumes the secret:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:latest
          env:
            - name: USERNAME
              valueFrom:
                secretKeyRef:
                  name: mysecret
                  key: USERNAME
            - name: PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysecret
                  key: PASSWORD
Apply the deployment:
kubectl apply -f deployment.yaml
This syncs the secrets from Vault into a Kubernetes Secret and then exposes them as environment variables in the pod. In a Python application, for example, you can read them with os.getenv("USERNAME") and os.getenv("PASSWORD").
Note: the external-secrets operator needs network access to the Vault server and valid Vault credentials (configured in its SecretStore) for the sync to work.
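Inside the container, the synced values are ordinary environment variables. A minimal Python sketch, using the variable names from the Deployment above:

```python
import os

# USERNAME/PASSWORD are injected by the Deployment's env stanza;
# os.getenv returns None when running outside the cluster.
username = os.getenv("USERNAME")
password = os.getenv("PASSWORD")
print(username, password)
```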

Concatenating values from a ConfigMap and a Secret

I have a ConfigMap file:
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    owner: testdb
  name: testdb-configmap
data:
  host: postgres
  port: "5432"
and a Secret file:
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  labels:
    owner: testdb
  name: testdb-secret
  namespace: test
data:
  user: dGVzdA==
  pwd: dGVzdA==
and I want to build an environment variable CONNECTION_STRING as below:
env:
  - name: CONNECTION_STRING
    value: "Host=<host-from-configmap>;Username=<user-from-secret>;Password=<password-from-secret>;Port=<port-from-configmap>;Pooling=False;"
I want to know if this is possible and if yes, then how? I have also looked at using .tpl (named templates) but couldn't figure out a way.
NOTE
Since I don't have access to the image which requires CONNECTION_STRING I have to build it this way. These configmap and secret files are also going to remain like this.
Kubernetes can set environment variables based on other environment variables. This is a core Kubernetes Pod capability, and doesn't depend on anything from Helm.
Your value uses four components, two from the ConfigMap and two from the Secret. You need to declare each of these as separate environment variables, and then declare a main environment variable that concatenates them together.
env:
  - name: TESTDB_HOST
    valueFrom:
      configMapKeyRef:
        name: testdb-configmap # {{ include "chart.name" . }}
        key: host
  - name: TESTDB_PORT
    valueFrom:
      configMapKeyRef:
        name: testdb-configmap
        key: port
  - name: TESTDB_USER
    valueFrom:
      secretKeyRef:
        name: testdb-secret
        key: user
  - name: TESTDB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: testdb-secret
        key: pwd
  - name: CONNECTION_STRING
    value: Host=$(TESTDB_HOST);Username=$(TESTDB_USER);Password=$(TESTDB_PASSWORD);Port=$(TESTDB_PORT);Pooling=False;
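For reference, Kubernetes expands each $(VAR) against entries declared earlier in the same env list. A small Python sketch of the string the container would see, using the values from the manifests above (dGVzdA== decodes to "test"):

```python
# Simulate the $(VAR) expansion Kubernetes performs for CONNECTION_STRING.
env = {
    "TESTDB_HOST": "postgres",  # from testdb-configmap
    "TESTDB_PORT": "5432",      # from testdb-configmap
    "TESTDB_USER": "test",      # dGVzdA== decoded, from testdb-secret
    "TESTDB_PASSWORD": "test",  # dGVzdA== decoded, from testdb-secret
}
conn = ("Host={TESTDB_HOST};Username={TESTDB_USER};"
        "Password={TESTDB_PASSWORD};Port={TESTDB_PORT};Pooling=False;").format(**env)
print(conn)  # Host=postgres;Username=test;Password=test;Port=5432;Pooling=False;
```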
I do not believe what you're asking to do is possible.
Furthermore, do not use ConfigMaps for storing information like this. It's best practice to use Secrets and mount them into your container as files or environment variables.
I would abandon this approach and re-evaluate what you're trying to accomplish.

configmaps are not passing properly to the containers

I have a ConfigMap like below.
apiVersion: v1
data:
  server.properties: |+
    server.hostname=test.com
kind: ConfigMap
metadata:
  name: my-config
And I tried to read this config inside a container.
containers:
  - name: testserver
    env:
      - name: server.hostname
        valueFrom:
          configMapKeyRef:
            name: my-config
            key: server.properties.server.hostname
However, these configs are not passing to the container properly. Do I need do any changes to my configs?
What you have there isn't the right key. ConfigMaps are strictly one level of key/value pairs. The |+ syntax is YAML for a multiline string, but the fact that the data inside that string is itself structured is not something the system knows about. As far as Kubernetes is concerned you have one key there, server.properties, whose string value is opaque.
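A sketch of the usual fix, assuming the application can read the properties file from disk: mount the whole key as a file instead of trying to reference a nested value:

```yaml
# Pod spec fragment: the container reads /etc/config/server.properties
containers:
  - name: testserver
    volumeMounts:
      - name: config
        mountPath: /etc/config
volumes:
  - name: config
    configMap:
      name: my-config   # each data key becomes a file in the mount
```

Alternatively, store server.hostname as its own top-level key in the ConfigMap so configMapKeyRef can address it directly.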

Kubernetes: Define environment variables dependent on other ones using "envFrom"

I have two ConfigMap files. One is supposed to be "secret" values and the other has regular values and should import the secrets.
Here's the sample secret ConfigMap:
kind: ConfigMap
metadata:
  name: secret-cm
data:
  MY_SEKRET: 'SEKRET'
And the regular ConfigMap file:
kind: ConfigMap
metadata:
  name: regular-cm
data:
  SOME_CONFIG: 123
  USING_SEKRET: $(MY_SEKRET)
And my deployment is as follows:
kind: Deployment
spec:
  template:
    spec:
      containers:
        - name: my_container
          envFrom:
            - configMapRef:
                name: secret-cm
            - configMapRef:
                name: regular-cm
I was hoping that my variable USING_SEKRET would be "SEKRET" because of the order the envFrom files are imported but they just appear as "$(MY_SEKRET)" on the Pods.
I've also tried setting the dependent variable as an env directly at the Deployment but it results on the same problem:
kind: Deployment
...
env:
  - name: MY_SEKRET
    # Not the expected result because the variable is openly visible but should be hidden
    value: 'SEKRET'
I was trying to follow the documentation guides, based on the Define an environment dependent variable for a container but I haven't seen examples similar to what I want to do.
Is there a way to do this?
EDIT:
To explain my idea behind this structure, secret-cm whole file will be encrypted at the repository so not all peers will be able to see its contents.
On the other hand, I still want to be able to show everyone where its variables are used, hence the dependency on regular-cm.
With that, authorized peers can run kubectl commands and variable replacements of secret-cm would work properly but for everyone else the file is hidden.
You did not explain why you want to define two ConfigMaps (one getting a value from the other), but I am assuming you want the parameter name defined in the ConfigMap to be independent of the parameter name used by your container in the pod. If that is the case, create your ConfigMap:
kind: ConfigMap
metadata:
  name: secret-cm
data:
  MY_SEKRET: 'SEKRET'
Then in your deployment, reference the value from the ConfigMap:
kind: Deployment
spec:
  template:
    spec:
      containers:
        - name: my_container
          env:
            - name: USING_SEKRET
              valueFrom:
                configMapKeyRef:
                  name: secret-cm
                  key: MY_SEKRET
Now when you access the env variable $USING_SEKRET, it will show the value 'SEKRET'.
In case your requirement is different, ignore this response and provide more details.

How to reference kubernetes secrets in helm chart?

I want to make some deployments in kubernetes using helm charts. Here is a sample override-values yaml that I use:
imageRepository: ""
ocbb:
  imagePullPolicy: IfNotPresent
  TZ: UTC
  logDir: /oms_logs
  tnsAdmin: /oms/ora_k8
  LOG_LEVEL: 3
  wallet:
    client:
    server:
    root:
db:
  deployment:
    imageName: init_db
    imageTag:
  host: 192.168.88.80
  port:
  service:
  alias:
  schemauser: pincloud
  schemapass:
  schematablespace: pincloud
  indextablespace: pincloudx
  nls_lang: AMERICAN_AMERICA.AL32UTF8
  charset: AL32UTF8
  pipelineschemauser: ifwcloud
  pipelineschemapass:
  pipelineschematablespace: ifwcloud
  pipelineindextablespace: ifwcloudx
  pipelinealias:
  queuename:
In this file I have to set some values involving credentials, for example schemapass and pipelineschemapass.
The documentation states I have to generate Kubernetes secrets for these and add the key to my YAML file with the same path hierarchy.
I generated a Kubernetes secret, for example:
kubectl create secret generic schemapass --from-literal=password='pincloud'
Now I don't know how to reference this newly generated secret in my YAML file. Any tip on how to set the schemapass field in the chart to reference the Kubernetes secret?
You cannot use a Kubernetes secret in your values.yaml. In values.yaml you only specify the input parameters for the Helm chart, so it could be the secret name, but not the secret itself (or anything it resolves to).
If you want to use the secret in your container, then you can insert it as an environment variable:
env:
  - name: SECRET_VALUE_ENV
    valueFrom:
      secretKeyRef:
        name: schemapass
        key: password
You can check more in the Hazelcast Enterprise Helm Chart. We do exactly that. You specify the secret name in values.yaml and then the secret is injected into the container using environment variable.
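A minimal sketch of that pattern, with hypothetical key names: values.yaml carries only the name of the pre-created secret, and the template wires it into the container:

```yaml
# values.yaml (hypothetical keys)
db:
  schemapassSecretName: schemapass

# templates/deployment.yaml fragment
env:
  - name: SCHEMAPASS
    valueFrom:
      secretKeyRef:
        name: {{ .Values.db.schemapassSecretName }}
        key: password   # the literal key used in `kubectl create secret`
```

This keeps the credential itself out of values.yaml entirely; only the reference travels with the chart.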
You can reference Kubernetes values, whether secrets or not, in Helm by exposing them to your container as environment variables.
Let your deployment be mongo.yml:
kind: Deployment
...
containers:
  ...
  env:
    - name: DB_URL
      valueFrom:
        configMapKeyRef:
          name: mongo-config
          key: mongo-url
    - name: MONGO_INITDB_ROOT_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mongo-secret
          key: mongo-password
where mongo-secret is:
apiVersion: v1
kind: Secret
metadata:
  name: mongo-secret
type: Opaque
data:
  mongo-user: bW9uZ291c2Vy
  mongo-password: bW9uZ29wYXNzd29yZA==
and mongo-config is:
apiVersion: v1
kind: ConfigMap
metadata:
  name: mongo-config
data:
  mongo-url: mongo-service
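Remember that Secret data values are base64-encoded, while ConfigMap data values are plain strings. A quick Python check of the values above:

```python
import base64

# Decode the Secret's data fields to see the actual credentials.
print(base64.b64decode("bW9uZ291c2Vy").decode())        # mongouser
print(base64.b64decode("bW9uZ29wYXNzd29yZA==").decode())  # mongopassword
```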