Kubernetes rolling update in case of secret update

I have a Replication Controller with one replica that uses a Secret. How can I update or recreate its (lone) pod, without downtime, so that it picks up the latest value when the Secret changes?
My current workaround is to increase the number of replicas in the Replication Controller, delete the old pods, and change the replica count back to its original value.
Is there a command or flag to trigger a rolling update while keeping the same container image and tag? When I try to do so, it rejects my attempt with the following message:
error: Specified --image must be distinct from existing container image

A couple of issues, #9043 and #13488, describe the problem reasonably well, and I suspect a rolling-update mechanism for this will arrive eventually (like most things in Kubernetes), though it is unlikely to make 1.3.0. The same issue applies to updating ConfigMaps.
Kubernetes will do a rolling update whenever anything in the deployment pod spec changes (e.g. typically the image, to a new version), so one suggested workaround is to set an environment variable in your deployment pod spec (e.g. RESTART_).
Then, when you've updated your Secret/ConfigMap, bump the env value in your deployment (via kubectl apply, patch, or edit), and Kubernetes will start a rolling update of your deployment.
Example Deployment spec:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test-nginx
spec:
  replicas: 2
  template:
    metadata:
      labels:              # labels assumed here; the pod template needs labels for the Deployment selector
        app: test-nginx
    spec:
      containers:
      - name: nginx
        image: "nginx:stable"
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /etc/nginx/conf.d
          name: config
          readOnly: true
        - mountPath: /etc/nginx/auth
          name: tokens
          readOnly: true
        env:
        - name: RESTART_
          value: "13"
      volumes:
      - name: config
        configMap:
          name: test-nginx-config
      - name: tokens
        secret:
          secretName: test-nginx-tokens
Two tips:
Your environment variable name can't start with an underscore (_), or it silently disappears somehow.
If you use a number for your restart variable, you need to wrap it in quotes.
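For completeness, here is one way to bump that variable from the command line (a sketch; the deployment and container names come from the spec above, and the new value "14" is assumed). Either command changes the pod template, which is what triggers the rolling update:
# Bump the dummy variable; any change to the pod template starts a rollout
kubectl set env deployment/test-nginx RESTART_=14

# Equivalent via a strategic-merge patch
kubectl patch deployment test-nginx \
  -p '{"spec":{"template":{"spec":{"containers":[{"name":"nginx","env":[{"name":"RESTART_","value":"14"}]}]}}}}'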

If I understand correctly, Deployment should be what you want.
Deployment supports rolling update for almost all fields in the pod template.
See http://kubernetes.io/docs/user-guide/deployments/
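Worth noting for readers on newer clusters: since kubectl 1.15 there is a built-in command that restarts a Deployment's pods without a dummy variable or image change (deployment name taken from the example above):
kubectl rollout restart deployment/test-nginx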

Related

K8s configmap for application dynamic configuration

I have a microservice for handling retention policy.
This application has default configuration for retention, e.g.: size for retention, files location etc.
But we also want to create an API for the user to change this configuration with customized values at runtime.
I created a ConfigMap with the default values, and in the application I used the k8s client library to get/update/watch the ConfigMap.
My question is: is it correct to use a ConfigMap for dynamic business configuration, or is it meant for static configuration that the user is not supposed to touch at runtime?
Thanks in advance
There are no rules against it. A lot of software leverages the kube API to implement some kind of logic or state, e.g. leader election. All of those require the app to apply changes to a kube resource. With that in mind, do remember that it always puts some additional load on your API server, and if you're unlucky that might become an issue. About two years ago we experienced API limit exhaustion on one of the managed k8s services because we were running a lot of deployments with rather intensive leader-election logic (2 requests per pod every 5 seconds). The issue is long gone now, but it shows what you have to take into account when designing interactions like this (retries, backoffs, etc.).
Using ConfigMaps is perfectly fine for such use cases. You can use a client library to watch for updates on the given ConfigMap; however, a cleaner solution would be to mount the ConfigMap as a file into the pod and have your configuration read from that file. Since you're mounting the ConfigMap as a volume, changes become visible inside the pod without a restart (unlike environment variables, which only "refresh" once the pod gets recreated). Note that this does not apply to subPath volume mounts, which do not receive ConfigMap updates.
Let's say you have this configMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
  namespace: default
data:
  SPECIAL_LEVEL: very
  SPECIAL_TYPE: charm
And then you mount this configMap as a Volume into your Pod:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
  - name: test-container
    image: registry.k8s.io/busybox
    command: [ "/bin/sh", "-c", "ls /etc/config/" ]
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
  volumes:
  - name: config-volume
    configMap:
      # Provide the name of the ConfigMap containing the files you want
      # to add to the container
      name: special-config
  restartPolicy: Never
When the pod runs, the command ls /etc/config/ produces the output below:
SPECIAL_LEVEL
SPECIAL_TYPE
This way you also reduce "noise" on the API server, since you can simply read the mounted files for configuration updates, as sketched below.
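A rough sketch of that file-based approach, using the mount path from the Pod above (the reload step is a hypothetical application-specific hook):
#!/bin/sh
# Poll the mounted ConfigMap file and react when its contents change.
old_sum=""
while true; do
  new_sum=$(md5sum /etc/config/SPECIAL_LEVEL | cut -d' ' -f1)
  if [ "$new_sum" != "$old_sum" ]; then
    echo "config changed, reloading"
    # reload_app    # hypothetical hook: re-read the file and apply the new settings
    old_sum=$new_sum
  fi
  sleep 10
done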

Kubernetes: Restart pods when config map values change

I have a pod with the following spec:
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - image: busybox
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
    name: busybox
    env:
    - name: WATCH_NAMESPACE
      valueFrom:
        configMapKeyRef:
          name: watch-namespace-config
          key: WATCH_NAMESPACE
  restartPolicy: Always
I also created a ConfigMap
kubectl create configmap watch-namespace-config \
--from-literal=WATCH_NAMESPACE=dev
The pod looks for values in the watch-namespace-config configmap.
When I manually change the ConfigMap values, I want the pod to restart automatically to reflect the change. Is that possible in any way?
This is currently a feature in progress https://github.com/kubernetes/kubernetes/issues/22368
For now, use Reloader - https://github.com/stakater/Reloader
It watches for changes in ConfigMaps and/or Secrets, then performs a rolling upgrade on the relevant DeploymentConfig, Deployment, DaemonSet, StatefulSet and Rollout.
How to use it - https://github.com/stakater/Reloader#how-to-use-reloader
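For example (a sketch based on the Reloader README; double-check the annotation against the current docs), opting a workload in is just an annotation on its metadata:
metadata:
  annotations:
    reloader.stakater.com/auto: "true"   # roll this workload whenever a ConfigMap/Secret it references changes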
As you correctly mentioned, once you update a ConfigMap or Secret, the Deployment/Pod/StatefulSet is not updated.
An alternative solution for this scenario is to use Kustomize.
Kustomize generates a unique name with a content-hash suffix every time you update the ConfigMap/Secret, for example: ConfigMap-xxxxxx.
If you then run:
kubectl kustomize . | kubectl apply -f -
kubectl will "update" the workloads with the new ConfigMap values (the new name changes the pod template, which triggers a rollout).
Working Example(s) using Kustomization:
https://github.com/nirgeier/KubernetesLabs/tree/master/Labs/08-Kustomization
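For illustration, a minimal kustomization.yaml using a configMapGenerator might look like this (a sketch; the resource file name is an assumption, while the ConfigMap name and literal are taken from the question). The generated ConfigMap gets a hash suffix, so every change produces a new name and therefore a rollout:
# kustomization.yaml (sketch)
resources:
  - pod.yaml                        # assumed file containing the workload that references the ConfigMap
configMapGenerator:
  - name: watch-namespace-config
    literals:
      - WATCH_NAMESPACE=dev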

How to mount same volume on to all pods in a kubernetes namespace

We have a namespace in Kubernetes where I would like some secrets (files like jks, properties, ts, etc.) to be made available to all the containers in all the pods (we have one JVM per container and one container per pod in each Deployment).
I have created the secrets using Kustomize and plan to use them as a volume in the spec of each Deployment, and then volumeMount it into the container of that Deployment. I would like this volume to be mounted in every container deployed in our namespace.
I want to know if kustomize (or anything else) can help me mount this volume on all the Deployments in this namespace.
I have tried the following patchesStrategicMerge:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: myNamespace
spec:
  template:
    spec:
      imagePullSecrets:
      - name: pull-secret
      containers:
      - volumeMounts:
        - name: secret-files
          mountPath: "/secrets"
          readOnly: true
      volumes:
      - name: secret-files
        secret:
          secretName: mySecrets
          items:
          - key: key1
            path: ...somePath
          - key: key2
            path: ...somePath
It requires a name in the metadata section, which does not help me since all my Deployments have different names.
Inject Information into Pods Using a PodPreset
You can use a PodPreset object to inject information like secrets, volume mounts, and environment variables etc into pods at creation time.
Update, Feb 2021: the PodPreset feature only made it to alpha and was removed in Kubernetes v1.20. See the release note https://kubernetes.io/docs/setup/release/notes/
The v1alpha1 PodPreset API and admission plugin has been removed with
no built-in replacement. Admission webhooks can be used to modify pods
on creation. (#94090, #deads2k) [SIG API Machinery, Apps, CLI, Cloud
Provider, Scalability and Testing]
PodPreset (https://kubernetes.io/docs/tasks/inject-data-application/podpreset/) is one way to do this, but for it to work all pods in your namespace should match the label you specify in the PodPreset spec.
Another way (which is the most popular) is to use Dynamic Admission Control (https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/) and write a mutating webhook in your cluster that edits your pod spec and adds all the secrets you want to mount. Using this you can also make other changes to your pod spec, such as mounting volumes, adding labels and many more.
Standalone kustomize supports applying a patch to many resources; see the example "Patching multiple resources at once". The kustomize built into kubectl doesn't support this feature.
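A rough sketch of what that looks like (file names are assumptions; a target with only kind: Deployment matches every Deployment in the kustomization):
# kustomization.yaml (sketch)
resources:
  - deployments.yaml                 # assumed file listing the existing Deployments
patches:
  - path: add-secret-volume.yaml     # the strategic-merge patch from the question (it may still need a placeholder metadata.name)
    target:
      kind: Deployment               # no name given, so the patch is applied to all Deployments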
To mount a secret as a volume you need to update the YAML of your pod/deployment manifest files and redeploy them.
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: nginx
    volumeMounts:
    - name: my-secret-volume
      mountPath: /etc/secretpath
  volumes:
  - name: my-secret-volume
    secret:
      secretName: my-secret
kustomize (or anything else) will not mount it for you.

Invalid spec when I run pod.yaml

When I run my Pod I get the error: The Pod "cas-de" is invalid: spec: Forbidden: pod updates may not change fields other than spec.containers[*].image, spec.initContainers[*].image, spec.activeDeadlineSeconds or spec.tolerations (only additions to existing tolerations)
However, I searched on the Kubernetes website and I didn't find anything wrong (I really don't understand where my mistake is).
Is it better to set volumeMounts in a Pod or in a Deployment?
apiVersion: v1
kind: Pod
metadata:
  name: cas-de
  namespace: ds-svc
spec:
  containers:
  - name: ds-mg-cas
    image: "docker-all.xxx.net/library/ds-mg-cas:latest"
    imagePullPolicy: Always
    ports:
    - containerPort: 8443
    - containerPort: 6402
    env:
    - name: JAVA_APP_CONFIGS
      value: "/apps/ds-cas/configs"
    - name: JAVA_EXTRA_PARAMS
      value: "-Djava.security.auth.login.config=./config/jaas.config -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=6402"
    volumeMounts:
    - name: ds-cas-config
      mountPath: "/apps/ds-cas/context"
  volumes:
  - name: ds-cas-config
    hostPath:
      path: "/apps/ds-cas/context"
The YAML template itself is valid. What likely happened is that some forbidden fields were changed and then kubectl apply .... was executed against the existing Pod. This looks like a development workflow issue. The solution is to delete the existing pod with kubectl delete pod cas-de and then run kubectl apply -f file.yaml or kubectl create -f file.yaml.
There are several fields on objects that you simply aren't allowed to change after the object has initially been created. As a specific example, the reference documentation for Containers notes that volumeMounts "cannot be updated". If you hit one of these cases, you need to delete and recreate the object (possibly creating the new one first with a different name).
Is it better to set volumeMounts in a Pod or in a Deployment?
Never use bare Pods; always prefer using one of the Controllers that manages Pods, most often a Deployment.
Changing to a Deployment will actually solve this problem because updating a Deployment's pod spec will go through the sequence of creating a new Pod, waiting for it to become available, and then deleting the old one for you. It never tries to update a Pod in place.
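As an aside, if you do want to keep a bare Pod during development, one command that performs the delete-and-recreate in a single step is kubectl replace with --force (a sketch; the file name matches the answer above):
# Deletes the existing Pod and recreates it from the manifest; note there is a brief gap with no Pod running
kubectl replace --force -f file.yaml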

Kubernetes persistent volume data corrupted after multiple pod deletions

I am struggling with a simple one-replica deployment of the official Event Store image on a Kubernetes cluster. I am using a persistent volume for the data storage.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-eventstore
spec:
  strategy:
    type: Recreate
  replicas: 1
  template:
    metadata:
      labels:
        app: my-eventstore
    spec:
      imagePullSecrets:
      - name: runner-gitlab-account
      containers:
      - name: eventstore
        image: eventstore/eventstore
        env:
        - name: EVENTSTORE_DB
          value: "/usr/data/eventstore/data"
        - name: EVENTSTORE_LOG
          value: "/usr/data/eventstore/log"
        ports:
        - containerPort: 2113
        - containerPort: 2114
        - containerPort: 1111
        - containerPort: 1112
        volumeMounts:
        - name: eventstore-storage
          mountPath: /usr/data/eventstore
      volumes:
      - name: eventstore-storage
        persistentVolumeClaim:
          claimName: eventstore-pv-claim
And this is the yaml for my persistent volume claim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: eventstore-pv-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
The deployments work fine. It's when I tested for durability that I started to encounter a problem. I deleted a pod to force the actual state away from the desired state and see how Kubernetes reacts.
It immediately launched a new pod to replace the deleted one, and the admin UI was still showing the same data. But after deleting a pod for the second time, the new pod did not come up. I got an error message that said "record too large", which indicates corrupted data according to this discussion: https://groups.google.com/forum/#!topic/event-store/gUKLaxZj4gw
I tried again a couple of times. Same result every time: after deleting the pod for the second time the data is corrupted. This has me worried that an actual failure will cause a similar result.
However, when deploying new versions of the image, or scaling the pods in the deployment to zero and back to one, no data corruption occurs. After several tries everything is fine. Which is odd, since that also completely replaces pods (I checked the pod IDs and they changed).
This has me wondering whether deleting a pod using kubectl delete is somehow more forceful in the way the pod is terminated. Do any of you have similar experience? Or insights on if/how delete is different? Thanks in advance for your input.
Regards,
Oskar
I was referred to this pull request on GitHub, which stated that the process was not being killed properly: https://github.com/EventStore/eventstore-docker/pull/52
After building a new image with the Dockerfile from the pull request, I put this image in the deployment. I have been killing pods left and right, and there are no data corruption issues anymore.
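As a general note (not part of the fix above): if an application needs extra time to flush data on shutdown, the pod spec can also grant it a longer termination grace period. A minimal sketch with an illustrative value:
spec:
  terminationGracePeriodSeconds: 120   # illustrative value; the default is 30 seconds
  containers:
  - name: eventstore
    image: eventstore/eventstore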
Hope this helps someone facing the same issue.