Restart kubernetes deployment after changing configMap - kubernetes

I have a deployment which includes a configMap, persistentVolumeClaim, and a service. I have changed the configMap and re-applied the deployment to my cluster. I understand that this change does not automatically restart the pod in the deployment:
configmap change doesn't reflect automatically on respective pods
Updated configMap.yaml but it's not being applied to Kubernetes pods
I know that I can kubectl delete -f wiki.yaml && kubectl apply -f wiki.yaml. But that destroys the persistent volume which has data I want to survive the restart. How can I restart the pod in a way that keeps the existing volume?
Here's what wiki.yaml looks like:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dot-wiki
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 4Gi
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: wiki-config
data:
  config.json: |
    {
      "farm": true,
      "security_type": "friends",
      "secure_cookie": false,
      "allowed": "*"
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wiki-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wiki
  template:
    metadata:
      labels:
        app: wiki
    spec:
      securityContext:
        runAsUser: 1000
        runAsGroup: 1000
        fsGroup: 1000
      initContainers:
        - name: wiki-config
          image: dobbs/farm:restrict-new-wiki
          securityContext:
            runAsUser: 0
            runAsGroup: 0
            allowPrivilegeEscalation: false
          volumeMounts:
            - name: dot-wiki
              mountPath: /home/node/.wiki
          command: ["chown", "-R", "1000:1000", "/home/node/.wiki"]
      containers:
        - name: farm
          image: dobbs/farm:restrict-new-wiki
          command: [
            "wiki", "--config", "/etc/config/config.json",
            "--admin", "bad password but memorable",
            "--cookieSecret", "any-random-string-will-do-the-trick"]
          ports:
            - containerPort: 3000
          volumeMounts:
            - name: dot-wiki
              mountPath: /home/node/.wiki
            - name: config-templates
              mountPath: /etc/config
      volumes:
        - name: dot-wiki
          persistentVolumeClaim:
            claimName: dot-wiki
        - name: config-templates
          configMap:
            name: wiki-config
---
apiVersion: v1
kind: Service
metadata:
  name: wiki-service
spec:
  ports:
    - name: http
      targetPort: 3000
      port: 80
  selector:
    app: wiki

In addition to kubectl rollout restart deployment, there are some alternative approaches to do this:
1. Restart Pods
kubectl delete pods -l app=wiki
This causes the Pods of your Deployment to be restarted, in which case they read the updated ConfigMap.
2. Version the ConfigMap
Instead of naming your ConfigMap just wiki-config, name it wiki-config-v1. Then when you update your configuration, just create a new ConfigMap named wiki-config-v2.
Now, edit your Deployment specification to reference the wiki-config-v2 ConfigMap instead of wiki-config-v1:
apiVersion: apps/v1
kind: Deployment
# ...
      volumes:
        - name: config-templates
          configMap:
            name: wiki-config-v2
Then, reapply the Deployment:
kubectl apply -f wiki.yaml
Since the Pod template in the Deployment manifest has changed, the reapplication of the Deployment will recreate all the Pods. And the new Pods will use the new version of the ConfigMap.
As an additional advantage of this approach, if you keep the old ConfigMap (wiki-config-v1) around rather than deleting it, you can revert to a previous configuration at any time by just editing the Deployment manifest again.
This approach is described in Chapter 1 of Kubernetes Best Practices (O'Reilly, 2019).
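If you don't want to maintain the version suffix by hand, kustomize (built into kubectl as kubectl apply -k) can generate it for you: its configMapGenerator appends a hash of the content to the ConfigMap name and rewrites the references in the Deployment, so every configuration change rolls the Pods automatically. A minimal sketch, assuming config.json sits next to the manifests and the inline ConfigMap is removed from wiki.yaml:
# kustomization.yaml
resources:
  - wiki.yaml            # the PVC/Deployment/Service above, without the inline ConfigMap
configMapGenerator:
  - name: wiki-config    # generated as wiki-config-<content-hash>; references are updated automatically
    files:
      - config.json
Apply it with kubectl apply -k . instead of kubectl apply -f wiki.yaml.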

For the specific question about restarting containers after the configuration is changed, as of kubectl v1.15 you can do this:
# apply the config changes
kubectl apply -f wiki.yaml
# restart the containers in the deployment
kubectl rollout restart deployment wiki-deployment
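On older kubectl versions that don't have rollout restart, you can get the same effect by patching any annotation into the Pod template, since any change to the template triggers a new rollout; this is roughly what rollout restart does for you. A sketch (the annotation key here is arbitrary):
kubectl patch deployment wiki-deployment \
  -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"restarted-at\":\"$(date -Iseconds)\"}}}}}"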

You shouldn't need to do anything but change your ConfigMap and wait for the changes to be applied. The answer you linked to is wrong: after a ConfigMap change the mounted files are not updated right away, but they are refreshed eventually; it can take a while, something like five minutes.
If that doesn't happen, you can report a bug against that specific version of Kubernetes.
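Note that this automatic refresh only applies to ConfigMaps mounted as volumes (as in the manifest above); environment variables and subPath mounts are not updated in place. One way to check whether the projected file has picked up the change (the path comes from the wiki manifest above):
kubectl exec deploy/wiki-deployment -- cat /etc/config/config.json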

Related

Populating a Containers environment values with mounted configMap in Kubernetes

I'm currently learning Kubernetes and recently learnt about using ConfigMaps for a Container's environment variables.
Let's say I have the following simple ConfigMap:
apiVersion: v1
data:
  MYSQL_ROOT_PASSWORD: password
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: mycm
I know that a container of some deployment can consume this environment variable via:
kubectl set env deployment mydb --from=configmap/mycm
or by specifying it manually in the manifest like so:
containers:
  - env:
      - name: MYSQL_ROOT_PASSWORD
        valueFrom:
          configMapKeyRef:
            key: MYSQL_ROOT_PASSWORD
            name: mycm
However, this isn't what I am after, since I'd have to manually change the environment variables each time the ConfigMap changes.
I am aware that mounting a ConfigMap to the Pod's volume allows for the auto-updating of ConfigMap values. I'm currently trying to find a way to set a Container's environment variables to those stored in the mounted config map.
So far I have the following YAML manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: mydb
  name: mydb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mydb
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: mydb
    spec:
      containers:
        - image: mariadb
          name: mariadb
          resources: {}
          args: ["export MYSQL_ROOT_PASSWORD=$(cat /etc/config/MYSQL_ROOT_PASSWORD)"]
          volumeMounts:
            - name: config-volume
              mountPath: /etc/config
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: temp
      volumes:
        - name: config-volume
          configMap:
            name: mycm
status: {}
I'm attempting to set the MYSQL_ROOT_PASSWORD to some temporary value, and then update it to the mounted value as soon as the container starts via args: ["export MYSQL_ROOT_PASSWORD=$(cat /etc/config/MYSQL_ROOT_PASSWORD)"]
As I somewhat expected, this didn't work, resulting in the following error:
/usr/local/bin/docker-entrypoint.sh: line 539: /export MYSQL_ROOT_PASSWORD=$(cat /etc/config/MYSQL_ROOT_PASSWORD): No such file or directory
I assume this is because the volume is mounted after the entrypoint. I tried adding a readiness probe to wait for the mount but this didn't work either:
readinessProbe:
  exec:
    command: ["sh", "-c", "test -f /etc/config/MYSQL_ROOT_PASSWORD"]
  initialDelaySeconds: 5
  periodSeconds: 5
Is there any easy way to achieve what I'm trying to do, or is it impossible?
So I managed to find a solution, with a lot of inspiration from this answer.
Essentially, what I did was create a sidecar container based on the alpine K8s image that mounts the configmap and constantly watches for any changes, since the K8s API automatically updates the mounted configmap when the configmap is changed. This required the following script, watch_passwd.sh, which makes use of inotifywait to watch for changes and then uses the K8s API to rollout the changes accordingly:
#!/bin/sh
# (Re)create the Secret from the file projected by the ConfigMap volume
update_passwd() {
    kubectl delete secret mysql-root-passwd > /dev/null 2>&1
    kubectl create secret generic mysql-root-passwd --from-file=/etc/config/MYSQL_ROOT_PASSWORD
}

update_passwd
# Block until the mounted file changes, then refresh the Secret and restart the target Deployment
while true
do
    inotifywait -e modify "/etc/config/MYSQL_ROOT_PASSWORD"
    update_passwd
    kubectl rollout restart deployment "$1"
done
The Dockerfile is then:
# alpine/k8s already ships kubectl; inotify-tools provides inotifywait
FROM docker.io/alpine/k8s:1.25.6
RUN apk update && apk add inotify-tools
COPY watch_passwd.sh .
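For reference, a local build of the sidecar image might look like this:
docker build -t mysidecar .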
After building the image (locally in this case) as mysidecar, I create the ServiceAccount, Role, and RoleBinding outlined here, adding rules for deployments so that they can be restarted by the sidecar.
After this, I piece it all together to create the following YAML Manifest (note that imagePullPolicy is set to Never, since I created the image locally):
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: mydb
  name: mydb
spec:
  replicas: 3
  selector:
    matchLabels:
      app: mydb
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: mydb
    spec:
      serviceAccountName: secretmaker
      containers:
        - image: mysidecar
          name: mysidecar
          imagePullPolicy: Never
          command:
            - /bin/sh
            - -c
            - |
              ./watch_passwd.sh $(DEPLOYMENT_NAME)
          env:
            - name: DEPLOYMENT_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.labels['app']
          volumeMounts:
            - name: config-volume
              mountPath: /etc/config
        - image: mariadb
          name: mariadb
          resources: {}
          envFrom:
            - secretRef:
                name: mysql-root-passwd
      volumes:
        - name: config-volume
          configMap:
            name: mycm
status: {}
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: secretmaker
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    app: mydb
  name: secretmaker
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["create", "get", "delete", "list"]
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    app: mydb
  name: secretmaker
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: secretmaker
subjects:
  - kind: ServiceAccount
    name: secretmaker
    namespace: default
---
It all works as expected! Hopefully this is able to help someone out in the future. Also, if anybody comes across this and has a better solution please feel free to let me know :)
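If you want to confirm the watcher is doing its job, the sidecar's logs should show each inotify event, Secret refresh, and rollout it triggers (names taken from the manifest above):
kubectl logs deploy/mydb -c mysidecar --follow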

Kubernetes Persistent Volume Claim doesn't save the data

I made a PersistentVolumeClaim on Kubernetes to save MongoDB data, but after restarting the deployment I found that the data no longer exists, even though my PVC is in the Bound state.
Here is my YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-mongo-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth-mongo
  template:
    metadata:
      labels:
        app: auth-mongo
    spec:
      volumes:
        - name: auth-mongo-data
          persistentVolumeClaim:
            claimName: auth-mongo-pvc
      containers:
        - name: auth-mongo
          image: mongo
          ports:
            - containerPort: 27017
              name: 'auth-mongo-port'
          volumeMounts:
            - name: auth-mongo-data
              mountPath: '/data/db'
---
# Persistent Volume Claim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: auth-mongo-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
I also created a ClusterIP Service for the deployment.
First off, if the PVC status is still Bound and the desired Pod happens to start on another node, it will fail because the PV can't be mounted into that Pod. This happens because of the reclaimPolicy: Retain of the StorageClass (it can also be set directly on the PV as persistentVolumeReclaimPolicy: Retain). To fix this, you have to manually overwrite/delete the claimRef of the PV. Use kubectl patch pv PV_NAME -p '{"spec":{"claimRef": null}}' to do this; afterwards the PV's status should be Available.
To see whether your application writes any data to the desired path, run your application, exec into it (kubectl -n NAMESPACE exec -it POD_NAME -- /bin/sh) and check /data/db. You could also create a file with some random text, restart your application and check again.
I'm fairly certain that if your PV isn't being recreated every time your application starts (which shouldn't be the case, because of Retain), then it's highly likely that your application isn't writing to the path specified. But you could also share your PersistentVolume config with us, as there might be some misconfiguration there as well.
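A few commands that help narrow this down: check which PV the claim is bound to across restarts, what the reclaim policy is, and whether the data directory is actually populated (names taken from the manifest above):
kubectl get pvc auth-mongo-pvc
kubectl get pv                                      # note the RECLAIM POLICY and CLAIM columns
kubectl exec deploy/auth-mongo-depl -- ls /data/db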

Get the Kubernetes uid of the Deployment that created the pod, from within the pod

I want to be able to know the Kubernetes uid of the Deployment that created the pod, from within the pod.
The reason for this is so that the Pod can spawn another Deployment and set the OwnerReference of that Deployment to the original Deployment (so it gets Garbage Collected when the original Deployment is deleted).
Taking inspiration from here, I've tried*:
Using field refs as env vars:
containers:
  - name: test-operator
    env:
      - name: DEPLOYMENT_UID
        valueFrom:
          fieldRef: {fieldPath: metadata.uid}
Using downwardAPI and exposing through files on a volume:
containers:
  - volumeMounts:
      - mountPath: /etc/deployment-info
        name: deployment-info
volumes:
  - name: deployment-info
    downwardAPI:
      items:
        - path: "uid"
          fieldRef: {fieldPath: metadata.uid}
*Both of these are under spec.template.spec of a resource of kind: Deployment.
However for both of these the uid is that of the Pod, not the Deployment. Is what I'm trying to do possible?
The behavior is correct: the Downward API exposes the Pod's own metadata, not that of the Deployment/ReplicaSet that created it.
So I guess the solution is to set the name of the Deployment manually in spec.template.metadata.labels, then use the Downward API to inject that label as an environment variable.
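A minimal sketch of what this answer describes, assuming the Pod template carries an app label whose value is the Deployment's name:
env:
  - name: DEPLOYMENT_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.labels['app']
The Pod can then query the API for the Deployment with that name to obtain its metadata.uid for the OwnerReference.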
I think it's impossible to get the UID of the Deployment itself; you can set any range of runAsUser while creating the Deployment.
Try this command to get the user IDs of the existing Pods:
kubectl get pod -o jsonpath='{range .items[*]}{.metadata.name}{" runAsUser: "}{.spec.containers[*].securityContext.runAsUser}{" fsGroup: "}{.spec.securityContext.fsGroup}{" seLinuxOptions: "}{.spec.securityContext.seLinuxOptions.level}{"\n"}{end}'
It's not exactly what you wanted to get, but it may be a hint for you.
To set the UID while creating the Deployment, see the example below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: toolbox2
  labels:
    app: toolbox2
spec:
  replicas: 3
  selector:
    matchLabels:
      app: toolbox2
  template:
    metadata:
      labels:
        app: toolbox2
    spec:
      securityContext:
        supplementalGroups: [1000620001]
        seLinuxOptions:
          level: s0:c25,c10
      containers:
        - name: net-toolbox
          image: quay.io/wcaban/net-toolbox
          ports:
            - containerPort: 2000
          securityContext:
            runAsUser: 1000620001

kubernetes deployment mounts secret as a folder instead of a file

I am having a config file as a secret in kubernetes and I want to mount it into a specific location inside the container. The problem is that the volume that is created inside the container is a folder instead of a file with the content of the secrets in it. Any way to fix it?
My deployment looks like this:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: jetty
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jetty
  template:
    metadata:
      labels:
        app: jetty
    spec:
      containers:
        - name: jetty
          image: quay.io/user/jetty
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: config-properties
              mountPath: "/opt/jetty/config.properties"
              subPath: config.properties
            - name: secrets-properties
              mountPath: "/opt/jetty/secrets.properties"
            - name: doc-path
              mountPath: /mnt/storage/
          resources:
            limits:
              cpu: '1000m'
              memory: '3000Mi'
            requests:
              cpu: '750m'
              memory: '2500Mi'
      volumes:
        - name: config-properties
          configMap:
            name: jetty-config-properties
        - name: secrets-properties
          secret:
            secretName: jetty-secrets
        - name: doc-path
          persistentVolumeClaim:
            claimName: jetty-docs-pvc
      imagePullSecrets:
        - name: rcc-quay
Secrets vs ConfigMaps
Secrets let you store and manage sensitive information (e.g. passwords, private keys) and ConfigMaps are used for non-sensitive configuration data.
As you can see in the Secrets and ConfigMaps documentation:
A Secret is an object that contains a small amount of sensitive data such as a password, a token, or a key.
A ConfigMap allows you to decouple environment-specific configuration from your container images, so that your applications are easily portable.
Mounting Secret as a file
It is possible to create a Secret and pass it to Pods as one or more files.
I've created a simple example to illustrate how it works.
Below you can see sample Secret manifest file and Deployment that uses this Secret:
NOTE: I used subPath with Secrets and it works as expected.
---
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
data:
  secret.file1: |
    c2VjcmV0RmlsZTEK
  secret.file2: |
    c2VjcmV0RmlsZTIK
---
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  containers:
    - image: nginx
      name: nginx
      volumeMounts:
        - name: secrets-files
          mountPath: "/mnt/secret.file1" # "secret.file1" file will be created in "/mnt" directory
          subPath: secret.file1
        - name: secrets-files
          mountPath: "/mnt/secret.file2" # "secret.file2" file will be created in "/mnt" directory
          subPath: secret.file2
  volumes:
    - name: secrets-files
      secret:
        secretName: my-secret # name of the Secret
Note: Secret should be created before Deployment.
After creating Secret and Deployment, we can see how it works:
$ kubectl get secret,deploy,pod
NAME TYPE DATA AGE
secret/my-secret Opaque 2 76s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx 1/1 1 1 76s
NAME READY STATUS RESTARTS AGE
pod/nginx-7c67965687-ph7b8 1/1 Running 0 76s
$ kubectl exec nginx-7c67965687-ph7b8 -- ls /mnt
secret.file1
secret.file2
$ kubectl exec nginx-7c67965687-ph7b8 -- cat /mnt/secret.file1
secretFile1
$ kubectl exec nginx-7c67965687-ph7b8 -- cat /mnt/secret.file2
secretFile2
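For reference, the data values in the Secret above are just base64-encoded strings, e.g.:
$ echo "secretFile1" | base64
c2VjcmV0RmlsZTEK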
Projected Volume
I think a better way to achieve your goal is to use a projected volume.
A projected volume maps several existing volume sources into the same directory.
The Projected Volumes documentation has a detailed explanation; additionally, I created an example that might help you understand how it works.
Using projected volume I mounted secret.file1, secret.file2 from Secret and config.file1 from ConfigMap as files into the Pod.
---
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
data:
  secret.file1: |
    c2VjcmV0RmlsZTEK
  secret.file2: |
    c2VjcmV0RmlsZTIK
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
data:
  config.file1: |
    configFile1
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: all-in-one
          mountPath: "/config-volume"
          readOnly: true
  volumes:
    - name: all-in-one
      projected:
        sources:
          - secret:
              name: my-secret
              items:
                - key: secret.file1
                  path: secret-dir1/secret.file1
                - key: secret.file2
                  path: secret-dir2/secret.file2
          - configMap:
              name: my-config
              items:
                - key: config.file1
                  path: config-dir1/config.file1
We can check how it works:
$ kubectl exec nginx -- ls /config-volume
config-dir1
secret-dir1
secret-dir2
$ kubectl exec nginx -- cat /config-volume/config-dir1/config.file1
configFile1
$ kubectl exec nginx -- cat /config-volume/secret-dir1/secret.file1
secretFile1
$ kubectl exec nginx -- cat /config-volume/secret-dir2/secret.file2
secretFile2
If this response doesn't answer your question, please provide more details about your Secret and what exactly you want to achieve.

How to run a command on PersistentVolume creation?

I have a StatefulSet which looks like this
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  ...
  volumeClaimTemplates:
    - metadata:
        name: www
      spec:
        resources:
          requests:
            storage: 1Gi
It will create a PersistentVolumeClaim (PVC) and a PersistentVolume (PV) for each Pod it controls.
I want to execute some commands on those PVs before the Pod creation.
I was thinking of creating a Job which mounts those PVs and runs the commands, but how do I know how many PVs were created?
Is there a kubernetes-native solution to trigger some pod execution on PV creation?
The solution is an init container.
You can add one to the Pod template spec of your StatefulSet:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  ...
    spec:
      initContainers:
        - name: init-myapp
          image: ubuntu:latest
          command:
            - bash
            - "-c"
            - "your command"
          volumeMounts:
            - name: yourvolume
              mountPath: /mnt/myvolume
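As for the "how many PVs were created?" part of the question: the StatefulSet creates one PVC per replica, named after the claim template and the StatefulSet with an ordinal suffix (www-web-0, www-web-1, ...), so listing the claims shows exactly what has been provisioned:
kubectl get pvc        # e.g. www-web-0, www-web-1, ...
kubectl get pv         # the volumes bound to those claims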