Kubernetes: Restart pods when config map values change

I have a pod with the following spec:
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - image: busybox
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
    name: busybox
    env:
    - name: WATCH_NAMESPACE
      valueFrom:
        configMapKeyRef:
          name: watch-namespace-config
          key: WATCH_NAMESPACE
  restartPolicy: Always
I also created a ConfigMap
kubectl create configmap watch-namespace-config \
--from-literal=WATCH_NAMESPACE=dev
The pod looks for values in the watch-namespace-config configmap.
When I manually change the ConfigMap values, I want the pod to restart automatically to reflect the change. Is that possible in any way?

This is currently a feature in progress: https://github.com/kubernetes/kubernetes/issues/22368
For now, use Reloader: https://github.com/stakater/Reloader
It watches for changes in ConfigMaps and/or Secrets, then performs a rolling upgrade on the relevant DeploymentConfig, Deployment, DaemonSet, StatefulSet or Rollout.
How to use it: https://github.com/stakater/Reloader#how-to-use-reloader
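For example, per the Reloader README, annotating a workload opts it in to automatic reloads whenever a ConfigMap or Secret it references changes. A minimal sketch (the Deployment itself is illustrative, built around the question's ConfigMap):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox
  annotations:
    reloader.stakater.com/auto: "true"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      containers:
      - name: busybox
        image: busybox
        command: ["sleep", "3600"]
        env:
        - name: WATCH_NAMESPACE
          valueFrom:
            configMapKeyRef:
              name: watch-namespace-config
              key: WATCH_NAMESPACE
With this annotation in place, editing watch-namespace-config triggers a rolling upgrade of the Deployment.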

As you correctly mentioned, once you update a ConfigMap or Secret, the Deployment/Pod/StatefulSet is not updated.
An optional solution for this scenario is to use Kustomization.
Kustomization generates a unique name every time you update the ConfigMap/Secret by appending a generated hash, for example: ConfigMap-xxxxxx.
If you use:
kubectl kustomize . | kubectl apply -f -
kubectl will "update" the changes with the new ConfigMap values.
Working Example(s) using Kustomization:
https://github.com/nirgeier/KubernetesLabs/tree/master/Labs/08-Kustomization
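A minimal sketch of such a kustomization.yaml for the ConfigMap in the question (pod.yaml is an assumed file name for the Pod manifest above):
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- pod.yaml
configMapGenerator:
- name: watch-namespace-config
  literals:
  - WATCH_NAMESPACE=dev
Each time a literal changes, the generated ConfigMap gets a new hashed name (e.g. watch-namespace-config-xxxxxx) and every reference to it is rewritten to match, which is what forces the pod spec to change.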

Related

Pod identifier number

I'm trying to generate a unique key in the application, using date/time and a number sequence managed by the application. It works fine as long as we have a single instance of the application.
The application is running in a Kubernetes pod with autoscaling configured.
Is there any way to generate or get a unique, numeric identifier per pod and put it in the container environment variables? The identifier doesn't need to be stable, so we'd rather not have to use StatefulSets.
UPDATE
The problem we are having with the UID is its size, given the size of our collections; that's why we're looking for a value about the size of a bigint, or any other similar numeric unique ID that could be used as an alternative to the UID.
...get a unique and numeric identifier per pod and put them in the container environment variables?
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  labels:
    run: busybox
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["ash","-c","echo ${MY_UID} && sleep 3600"]
    env:
    - name: MY_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid
Running kubectl logs <pod> will print the unique ID assigned to the environment variable in your pod.
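For example (a sketch; busybox-pod.yaml is an assumed file name, and note that metadata.uid is a UUID string rather than a plain number):
kubectl apply -f busybox-pod.yaml
kubectl logs busybox
# prints something like 1c5bb471-bd2f-4b43-9b39-7bd1cbc9f6b8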

kubectl apply -f works on PC but not in Gitlab Runner

I am trying to deploy to kubernetes using Gitlab CICD. No matter what I do, kubectl apply -f helloworld-deployment.yml --record in my .gitlab-ci.yml always returns that the deployment is unchanged:
$ kubectl apply -f helloworld-deployment.yml --record
deployment.apps/helloworld-deployment unchanged
Even if I change the tag on the image, or if the deployment doesn't exist at all. However, if I run kubectl apply -f helloworld-deployment.yml --record from my own computer, it works fine: it updates when a tag changes and creates the deployment when none exists. Below is the .gitlab-ci.yml I'm testing with:
image: docker:dind
services:
  - docker:dind

stages:
  - deploy

deploy-prod:
  stage: deploy
  image: google/cloud-sdk
  environment: production
  script:
    - kubectl apply -f helloworld-deployment.yml --record
Below is helloworld-deployment.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: helloworld
  template:
    metadata:
      labels:
        app: helloworld
    spec:
      containers:
      - name: helloworld
        image: registry.gitlab.com/repo/helloworld:test
        imagePullPolicy: Always
        ports:
        - containerPort: 3000
      imagePullSecrets:
      - name: regcred
Update:
This is what I see if I run kubectl rollout history deployments/helloworld-deployment and there is no existing deployment:
Error from server (NotFound): deployments.apps "helloworld-deployment" not found
If the deployment already exists, I see this:
REVISION CHANGE-CAUSE
1 kubectl apply --filename=helloworld-deployment.yml --record=true
With only one revision.
I did notice this time that when I changed the tag, the output from my Gitlab Runner was:
deployment.apps/helloworld-deployment configured
However, there were no new pods. When I ran it from my PC, I did see new pods created.
Update:
Running kubectl get pods in the Gitlab runner shows two pods that are different from the ones I see on my PC.
I definitely only have one kubernetes cluster, but kubectl config view shows some differences (the server url is the same). The output for contexts shows different namespaces. Does this mean I need to set a namespace either in my yml file or pass it in the command? Here is the output from the Gitlab runner:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: URL
  name: gitlab-deploy
contexts:
- context:
    cluster: gitlab-deploy
    namespace: helloworld-16393682-production
    user: gitlab-deploy
  name: gitlab-deploy
current-context: gitlab-deploy
kind: Config
preferences: {}
users:
- name: gitlab-deploy
  user:
    token: [MASKED]
And here is the output from my PC:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: URL
contexts:
- context:
    cluster: do-nyc3-helloworld
    user: do-nyc3-helloworld-admin
  name: do-nyc3-helloworld
current-context: do-nyc3-helloworld
kind: Config
preferences: {}
users:
- name: do-nyc3-helloworld-admin
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - kubernetes
      - cluster
      - kubeconfig
      - exec-credential
      - --version=v1beta1
      - --context=default
      - VALUE
      command: doctl
      env: null
It looks like Gitlab adds their own default for namespace:
<project_name>-<project_id>-<environment>
Because of this, I put this in the metadata section of helloworld-deployment.yml:
namespace: helloworld-16393682-production
And then it worked as expected. It was deploying before, but kubectl get pods didn't show it since that command was using the default namespace.
Since Gitlab uses a custom namespace, you need to add a namespace flag to your command to display your pods:
kubectl get pods -n helloworld-16393682-production
You can set the default namespace for kubectl commands. See here.
You can permanently save the namespace for all subsequent kubectl commands in that context.
In your case it could be:
kubectl config set-context --current --namespace=helloworld-16393682-production
Or if you are using multiple clusters, you can switch between contexts using:
kubectl config use-context gitlab-deploy
In this link you can see a lot of useful commands and configurations.
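To verify which namespace the current context now points at:
kubectl config view --minify | grep namespace: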
I hope it helps! =)

Invalid spec when I run pod.yaml

When I run my Pod I get:
Pod "cas-de" is invalid: spec: Forbidden: pod updates may not change fields other than spec.containers[*].image, spec.initContainers[*].image, spec.activeDeadlineSeconds or spec.tolerations (only additions to existing tolerations)
However, I searched on the kubernetes website and I didn't find anything wrong:
(I really don't understand where is my mistake)
Is it better to set volumeMounts in a Pod or in a Deployment?
apiVersion: v1
kind: Pod
metadata:
  name: cas-de
  namespace: ds-svc
spec:
  containers:
  - name: ds-mg-cas
    image: "docker-all.xxx.net/library/ds-mg-cas:latest"
    imagePullPolicy: Always
    ports:
    - containerPort: 8443
    - containerPort: 6402
    env:
    - name: JAVA_APP_CONFIGS
      value: "/apps/ds-cas/configs"
    - name: JAVA_EXTRA_PARAMS
      value: "-Djava.security.auth.login.config=./config/jaas.config -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=6402"
    volumeMounts:
    - name: ds-cas-config
      mountPath: "/apps/ds-cas/context"
  volumes:
  - name: ds-cas-config
    hostPath:
      path: "/apps/ds-cas/context"
The YAML template is valid. It looks like some forbidden fields were changed and then kubectl apply ... was executed.
Since this looks like a development environment, the solution is to delete the existing pod using kubectl delete pod cas-de and then execute kubectl apply -f file.yaml or kubectl create -f file.yaml.
There are several fields on objects that you simply aren't allowed to change after the object has initially been created. As a specific example, the reference documentation for Containers notes that volumeMounts "cannot be updated". If you hit one of these cases, you need to delete and recreate the object (possibly creating the new one first with a different name).
Is it better to set volumeMounts in a Pod or in a Deployment?
Never use bare Pods; always prefer using one of the Controllers that manages Pods, most often a Deployment.
Changing to a Deployment will actually solve this problem because updating a Deployment's pod spec will go through the sequence of creating a new Pod, waiting for it to become available, and then deleting the old one for you. It never tries to update a Pod in place.
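A minimal sketch of the same spec wrapped in a Deployment (the app label is illustrative; the env section is omitted for brevity):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cas-de
  namespace: ds-svc
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cas-de
  template:
    metadata:
      labels:
        app: cas-de
    spec:
      containers:
      - name: ds-mg-cas
        image: "docker-all.xxx.net/library/ds-mg-cas:latest"
        imagePullPolicy: Always
        ports:
        - containerPort: 8443
        - containerPort: 6402
        volumeMounts:
        - name: ds-cas-config
          mountPath: "/apps/ds-cas/context"
      volumes:
      - name: ds-cas-config
        hostPath:
          path: "/apps/ds-cas/context"
Changing volumeMounts here rolls out a replacement Pod instead of being rejected as a forbidden update.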

Kubectl apply does not update pods or deployments

I'm using a CI to update my kubernetes cluster whenever there's an update to an image. Whenever the image is pushed with the latest tag, the CI runs kubectl apply against the existing deployment, but nothing gets updated.
this is what runs
$ kubectl apply --record --filename /tmp/deployment.yaml
My goal is that when the apply is run, a rolling deployment gets executed.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api
        image: us.gcr.io/joule-eed41/api:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 1337
        args:
        - /bin/sh
        - -c
        - echo running api;npm start
        env:
        - name: NAMESPACE
          valueFrom:
            configMapKeyRef:
              name: config
              key: NAMESPACE
As others suggested, use a specific tag.
Set new image using following command
kubectl set image deployment/deployment_name deployment_name=image_name:image_tag
In your case it would be
kubectl set image deployment/api api=us.gcr.io/joule-eed41/api:0.1
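For instance, in CI you could tag each build with the commit SHA so the pod template always changes (a sketch; CI_COMMIT_SHORT_SHA is GitLab's predefined variable, adjust for your CI system):
docker build -t us.gcr.io/joule-eed41/api:"$CI_COMMIT_SHORT_SHA" .
docker push us.gcr.io/joule-eed41/api:"$CI_COMMIT_SHORT_SHA"
kubectl set image deployment/api api=us.gcr.io/joule-eed41/api:"$CI_COMMIT_SHORT_SHA"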
As @ksholla20 mentioned, using kubectl set image is a good option for many (most?) cases.
But if you can't change the image tag, consider using:
1 ) kubectl rollout restart deployment/<name>
(reference).
2 ) kubectl patch deployment <name> -p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"version\":\"$CURRENT_BUILD_HASH_OR_DATE\"}}}}}" (reference)
(*) Notice that the patch command allows you to change specific properties in the deployment (or any other object) like the label selector, the pod labels, or the value of the NAMESPACE environment variable in your example.
I've run into the same problem and none of the solutions posted so far helped. The solution is easy, but not easy to see or predict. The applied yaml will generate both a deployment and a replicaset the first time it's run. Unfortunately, applying changes to the manifest likely only replaces the replicaset, while the deployment remains unchanged. This is a problem because some changes need to happen at the deployment level, but the old deployment hangs around. For best results, delete the deployment and ensure all previous deployments and replicasets are deleted, then apply the updated manifest.
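A sketch of that clean-up sequence, using the names from the question:
kubectl delete deployment api
kubectl get deployments,replicasets   # confirm nothing stale remains
kubectl apply --record --filename /tmp/deployment.yaml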

Kubernetes rolling update in case of secret update

I have a Replication Controller with one replica using a secret. How can I update or recreate its (lone) pod, without downtime, with the latest secret value when the secret changes?
My current workaround is increasing the number of replicas in the Replication Controller, deleting the old pods, and changing the replica count back to its original value.
Is there a command or flag to induce a rolling update retaining the same container image and tag? When I try to do so, it rejects my attempt with the following message:
error: Specified --image must be distinct from existing container image
A couple of issues, #9043 and #13488, describe the problem reasonably well, and I suspect a rolling-update approach will eventuate shortly (like most things in Kubernetes), though unlikely for 1.3.0. The same issue applies to updating ConfigMaps.
Kubernetes will do a rolling update whenever anything in the deployment pod spec is changed (e.g. typically the image to a new version), so one suggested workaround is to set an env variable in your deployment pod spec (e.g. RESTART_)
Then when you've updated your secret/configmap, bump the env value in your deployment (via kubectl apply, or patch, or edit), and Kubernetes will start a rolling update of your deployment.
Example Deployment spec:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test-nginx
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: test-nginx
    spec:
      containers:
      - name: nginx
        image: "nginx:stable"
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /etc/nginx/conf.d
          name: config
          readOnly: true
        - mountPath: /etc/nginx/auth
          name: tokens
          readOnly: true
        env:
        - name: RESTART_
          value: "13"
      volumes:
      - name: config
        configMap:
          name: test-nginx-config
      - name: tokens
        secret:
          secretName: test-nginx-tokens
Two tips:
- your environment variable name can't start with an _ or it magically disappears somehow.
- if you use a number for your restart variable you need to wrap it in quotes.
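One way to bump the variable without opening an editor is a JSON patch (a sketch; it assumes RESTART_ is the first env entry of the first container, as in the spec above):
kubectl patch deployment test-nginx --type=json \
  -p='[{"op":"replace","path":"/spec/template/spec/containers/0/env/0/value","value":"14"}]'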
If I understand correctly, Deployment should be what you want.
Deployment supports rolling update for almost all fields in the pod template.
See http://kubernetes.io/docs/user-guide/deployments/
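After changing a field in the pod template, you can watch the rolling update progress with, for example:
kubectl apply -f deployment.yaml
kubectl rollout status deployment/<name>
(deployment.yaml is an assumed file name for your Deployment manifest.)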