I'm on Kubernetes 1.3.5 and we are using Deployments with rolling updates to update the pods in our cluster. However, on a rolling update, a newly added environment variable never gets added to the pods. Is this by design? What are the ways to work around it?
Following are the sample deployment YAML files. The deployment was created with the first version, then we updated the YAML with the newly added env variable NEW_KEY and ran it through a rolling update, but the new env variable does not show up in the pods.
First version YAML:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: APP_NAME-deployment
  labels:
    name: APP_NAME
    environment: DEV
spec:
  revisionHistoryLimit: 2
  strategy:
    type: RollingUpdate
  replicas: 2
  template:
    metadata:
      labels:
        name: APP_NAME
        environment: DEV
    spec:
      containers:
      - name: APP_NAME
        image: repo.app_name:latest
        env:
        - name: NODE_ENV
          value: 'development'
        - name: APP_KEY
          value: '123'
Updated YAML:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: APP_NAME-deployment
  labels:
    name: APP_NAME
    environment: DEV
spec:
  revisionHistoryLimit: 2
  strategy:
    type: RollingUpdate
  replicas: 2
  template:
    metadata:
      labels:
        name: APP_NAME
        environment: DEV
    spec:
      containers:
      - name: APP_NAME
        image: repo.app_name:latest
        env:
        - name: NODE_ENV
          value: 'development'
        - name: APP_KEY
          value: '123'
        - name: NEW_KEY
          value: 'new'
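To confirm whether the new variable actually made it into the live objects, two quick checks can help (the pod name below is a placeholder):

kubectl get deployment APP_NAME-deployment -o yaml | grep -A1 NEW_KEY   # is NEW_KEY in the live Deployment spec?
kubectl exec <pod-name> -- env | grep NEW_KEY                           # does a running pod see it?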
You can store the env variable in either a ConfigMap or a Secret (referenced with secretKeyRef). For a ConfigMap you would do:
env:
- name: SPECIAL_LEVEL_KEY
  valueFrom:
    configMapKeyRef:
      name: node_env
      key: node.dev
Or with a secretKeyRef:
env:
- name: SPECIAL_LEVEL_KEY
  valueFrom:
    secretKeyRef:
      name: node_env
      key: node.dev
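Either way, the referenced object has to exist first. Below is a minimal sketch of a matching ConfigMap; note that object names must be valid DNS subdomains, so a real cluster would reject the underscore in node_env and a name such as node-env (updated in the configMapKeyRef as well) would be needed. The value shown is only an illustrative assumption.

apiVersion: v1
kind: ConfigMap
metadata:
  name: node-env   # "node_env" as written above contains an underscore, which the API rejects
data:
  node.dev: development   # illustrative value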
Related
I want to remove a few environment variables in a container with kustomize. Is that possible? When I patch, it just adds them, as you may know.
If that's not possible, can we replace the environment variable name and the secret name/key pair altogether?
containers:
- name: container1
  env:
  - name: NAMESPACE
    valueFrom:
      secretKeyRef:
        name: x
        key: y
Any help on this will be appreciated! Thanks!
If you're looking to remove that NAMESPACE variable from the manifest, you can use the special $patch: delete directive to do so.
If I start with this Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:
  template:
    spec:
      containers:
      - name: example
        image: docker.io/traefik/whoami:latest
        env:
        - name: ENV_VAR_1
          valueFrom:
            secretKeyRef:
              name: someSecret
              key: someKeyName
        - name: ENV_VAR_2
          value: example-value
If I write in my kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
patches:
- patch: |
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: example
    spec:
      template:
        spec:
          containers:
          - name: example
            env:
            - name: ENV_VAR_1
              $patch: delete
Then the output of kustomize build is:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:
  template:
    spec:
      containers:
      - env:
        - name: ENV_VAR_2
          value: example-value
        image: docker.io/traefik/whoami:latest
        name: example
Using a strategic merge patch like this has an advantage over a JSONPatch-style patch like the one in Nijat's answer, because it doesn't depend on the order in which the environment variables are defined.
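For comparison, a JSONPatch (JSON 6902) version has to address the variable by its position in the env array, so it silently breaks if the list is reordered. A rough sketch, assuming the variable to delete is the first entry:

patches:
- target:
    kind: Deployment
    name: example
  patch: |
    - op: remove
      path: /spec/template/spec/containers/0/env/0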
I am trying to set two env variables of mongo, namely MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD, using a Kubernetes ConfigMap and Secret as follows.
When I don't use the ConfigMap and Secret, i.e. I hardcode the username and password, it works, but when I try to replace them with the ConfigMap and Secret, it says
'Authentication failed.'
My username and password are the same, which is admin.
Here's the YAML definition for these objects. Can someone help me figure out what is wrong?
apiVersion: v1
kind: ConfigMap
metadata:
  name: mongodb-username
data:
  username: admin
---
apiVersion: v1
kind: Secret
metadata:
  name: mongodb-password
data:
  password: YWRtaW4K
type: Opaque
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodbtest
spec:
  # serviceName: mongodbtest
  replicas: 1
  selector:
    matchLabels:
      app: mongodbtest
  template:
    metadata:
      labels:
        app: mongodbtest
        selector: mongodbtest
    spec:
      containers:
      - name: mongodbtest
        image: mongo:3
        # env:
        # - name: MONGO_INITDB_ROOT_USERNAME
        #   value: admin
        # - name: MONGO_INITDB_ROOT_PASSWORD
        #   value: admin
        env:
        - name: MONGO_INITDB_ROOT_USERNAME
          valueFrom:
            configMapKeyRef:
              name: mongodb-username
              key: username
        - name: MONGO_INITDB_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mongodb-password
              key: password
Finally, after hours, I was able to find the solution. It was not something on the Kubernetes side; it was how I did the base64 encoding.
The correct way to encode it is with the following command:
echo -n 'admin' | base64
That was the issue in my case.
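For reference, without -n the shell appends a trailing newline that ends up inside the decoded value, which is what breaks authentication. A quick comparison (the encoded outputs in the comments are what a typical shell produces):

echo 'admin' | base64      # YWRtaW4K  -> decodes to "admin\n"
echo -n 'admin' | base64   # YWRtaW4=  -> decodes to "admin"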
Your deployment YAML is fine; just change spec.containers[0].env to spec.containers[0].envFrom:
spec:
  containers:
  - name: mongodbtest
    image: mongo:3
    envFrom:
    - configMapRef:
        name: mongodb-username
    - secretRef:
        name: mongodb-password
That will put all keys of your secret and configmap as environment variables in the deployment.
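One caveat: envFrom uses the keys themselves as the variable names, so with the ConfigMap and Secret above the container would see username and password rather than MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD. For the mongo image to pick them up, the keys would need to carry the expected names. A sketch of the ConfigMap with the key renamed (the Secret would change analogously):

apiVersion: v1
kind: ConfigMap
metadata:
  name: mongodb-username
data:
  MONGO_INITDB_ROOT_USERNAME: admin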
apiVersion: v1
data:
  MONGO_INITDB_ROOT_USERNAME: root
  MONGO_INITDB_ROOT_PASSWORD: password
kind: ConfigMap
metadata:
  name: mongo-cred
  namespace: default
Inject it into the deployment like this:
envFrom:
- configMapRef:
    name: mongo-cred
The deployment will be something like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodbtest
spec:
  # serviceName: mongodbtest
  replicas: 1
  selector:
    matchLabels:
      app: mongodbtest
  template:
    metadata:
      labels:
        app: mongodbtest
        selector: mongodbtest
    spec:
      containers:
      - name: mongodbtest
        image: mongo:3
        envFrom:
        - configMapRef:
            name: mongo-cred
If you want to save the data in a Secret instead, that is the better practice for sensitive data; Secret values are stored base64-encoded rather than as plain text.
envFrom:
- secretRef:
    name: mongo-cred
You can create the Secret with:
apiVersion: v1
data:
  MONGO_INITDB_ROOT_USERNAME: YWRtaW4=   # 'admin' base64 encoded (no trailing newline)
  MONGO_INITDB_ROOT_PASSWORD: YWRtaW4=
kind: Secret
type: Opaque
metadata:
  name: mongo-cred
  namespace: default
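Equivalently, the same Secret can be created imperatively, which handles the base64 encoding for you (the literal values here are placeholders):

kubectl create secret generic mongo-cred \
  --from-literal=MONGO_INITDB_ROOT_USERNAME=admin \
  --from-literal=MONGO_INITDB_ROOT_PASSWORD=admin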
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: mysql:8.0
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: root
You can use a Secret to store that value and consume the Secret as an env variable in the pod.
https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/#define-container-environment-variables-using-secret-data
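A minimal sketch of that approach, assuming a Secret named mysql-secret with a key called password (stringData lets you write the value in plain text and have the API server encode it):

apiVersion: v1
kind: Secret
metadata:
  name: mysql-secret
type: Opaque
stringData:
  password: root
---
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: mysql:8.0
    env:
    - name: MYSQL_ROOT_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mysql-secret
          key: password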
I have a config yaml file for a kubernetes deployment that looks like this:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: <some_app>
  name: <some_app>
  namespace: dataengineering
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: <some_app>
    spec:
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      containers:
      - image: 127579856528.dkr.ecr.us-west-2.amazonaws.com/dataengineering/<some_app>:latest
        imagePullPolicy: Always
        name: <some_app>
        env:
        - name: ES_HOST
          value: "vpc-some-name-dev-wrfkk5v7kidaro67ozjrv4wdeq.us-west-2.es.amazonaws.com"
        - name: ES_PORT
          value: "443"
        - name: DATALOADER_QUEUE
          valueFrom:
            configMapKeyRef:
              name: <some_name>
              key: DATALOADER_QUEUE
        - name: AWS_DEFAULT_REGION
          value: "us-west-2"
        - name: AWS_ACCESS_KEY_ID
          valueFrom:
            secretKeyRef:
              name: <some_name>
              key: AWS_ACCESS_KEY_ID
        - name: AWS_SECRET_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              name: <some_name>
              key: AWS_SECRET_ACCESS_KEY
...
Currently, this file is in dev/deployment.yaml, but I also want a prod/deployment.yaml. Instead of copying this whole file over, is there a better way to DRY up this file so it can be used for both the dev and prod clusters? The parts of this file that differ are some of the environment variables (I use a different DATALOADER_QUEUE variable for prod and dev, and different AWS keys). What can be done?
I looked into some options like a ConfigMap. How does one do this? What's a mounted volume? I'm reading this: https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#add-configmap-data-to-a-volume but I'm not sure what it is... What is a volume? How do I access the data stored in this "volume"?
Can the image be switched from prod to dev? I know that seems odd...
Something like this would help with the env vars:
envFrom:
- configMapRef:
    name: myapp-config
- secretRef:
    name: myapp-secrets
You can then use different namespaces for dev vs. prod so the references don't have to vary. For handling labels, look at Kustomize overlays and setting labels at the overlay level.
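A minimal overlay layout for that approach might look like the sketch below; the directory names, queue value, and generator usage are illustrative assumptions rather than a prescribed structure.

# base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml

# overlays/dev/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: dev
resources:
- ../../base
configMapGenerator:
- name: myapp-config
  literals:
  - DATALOADER_QUEUE=dev-queue

An overlays/prod/kustomization.yaml would mirror this with the prod namespace and values, and kubectl apply -k overlays/dev (or overlays/prod) builds and applies the variant.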
Below is how I am using Kubernetes on Google.
I have one Node application, let's say Book-portal.
The Node app uses environment variables for configuration.
Step 1: I created a Dockerfile and pushed
gcr.io/<project-id>/book-portal:v1
Step 2: Deployed with the following command:
kubectl run book-portal --image=gcr.io/<project-id>/book-portal:v1 --port=5555 --env ENV_VAR_KEY1=value1 --env ENV_VAR_KEY2=value2 --env ENV_VAR_KEY3=value3
Step 3:
kubectl expose deployment book-portal --type="LoadBalancer"
Step 4: Got the public IP with:
kubectl get services book-portal
Now assume I added new features and new configuration in the next release.
So, to roll out the new version v2:
Step 1: I created a Dockerfile and pushed
gcr.io/<project-id>/book-portal:v2
Step 2: Edited the deployment:
kubectl edit deployment book-portal
---------------yaml---------------
...
spec:
  replicas: 1
  selector:
    matchLabels:
      run: book-portal
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: book-portal
    spec:
      containers:
      - env:
        - name: ENV_VAR_KEY1
          value: value1
        - name: ENV_VAR_KEY2
          value: value2
        - name: ENV_VAR_KEY3
          value: value3
        image: gcr.io/<project-id>/book-portal:v1
        imagePullPolicy: IfNotPresent
        name: book-portal
...
----------------------------------
I am successfully able to change
image: gcr.io/<project-id>/book-portal:v1
to
image: gcr.io/<project-id>/book-portal:v2
But I cannot add/change environment variables:
- env:
  - name: ENV_VAR_KEY1
    value: value1
  - name: ENV_VAR_KEY2
    value: value2
  - name: ENV_VAR_KEY3
    value: value3
  - name: ENV_VAR_KEY4
    value: value4
Can anyone offer guidance on best practices for passing configuration to a Node app on Kubernetes? How should I handle environment variable changes during rolling updates?
I think your best bet is to use ConfigMaps in k8s and then change your pod template to get the env variable values from the ConfigMap; see Consuming ConfigMap in pods.
Edit: I apologize, I put the wrong link here. I have updated it, but for the TL;DR you can do the following.
apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
  namespace: default
data:
  special.how: very
  special.type: charm
And then the pod usage can look like this:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
  - name: test-container
    image: gcr.io/google_containers/busybox
    command: [ "/bin/sh", "-c", "env" ]
    env:
    - name: SPECIAL_LEVEL_KEY
      valueFrom:
        configMapKeyRef:
          name: special-config
          key: special.how
    - name: SPECIAL_TYPE_KEY
      valueFrom:
        configMapKeyRef:
          name: special-config
          key: special.type
  restartPolicy: Never
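For reference, the same ConfigMap can also be created imperatively; a sketch of the equivalent command:

kubectl create configmap special-config \
  --from-literal=special.how=very \
  --from-literal=special.type=charm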