I'm using CI to update my Kubernetes cluster whenever there's an update to an image. Whenever an image is pushed with the latest tag, the CI runs kubectl apply on the existing deployment, but nothing gets updated.
This is what runs:
$ kubectl apply --record --filename /tmp/deployment.yaml
My goal is that when the apply runs, a rolling deployment gets executed.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api
        image: us.gcr.io/joule-eed41/api:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 1337
        args:
        - /bin/sh
        - -c
        - echo running api;npm start
        env:
        - name: NAMESPACE
          valueFrom:
            configMapKeyRef:
              name: config
              key: NAMESPACE
As others suggested, use a specific tag.
Set the new image using the following command:
kubectl set image deployment/deployment_name container_name=image_name:image_tag
In your case it would be:
kubectl set image deployment/api api=us.gcr.io/joule-eed41/api:0.1
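If your CI can tag each build, a common pattern is to tag with the commit SHA and point the deployment at it; a rough sketch, assuming a GitLab-style $CI_COMMIT_SHA variable (substitute whatever your CI system provides):
$ docker build -t us.gcr.io/joule-eed41/api:$CI_COMMIT_SHA .
$ docker push us.gcr.io/joule-eed41/api:$CI_COMMIT_SHA
$ kubectl set image deployment/api api=us.gcr.io/joule-eed41/api:$CI_COMMIT_SHA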
As @ksholla20 mentioned, using kubectl set image is a good option for many (most?) cases.
But if you can't change the image tag, consider using:
1 ) kubectl rollout restart deployment/<name>
(reference).
2 ) kubectl patch deployment <name> -p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"version\":\"$CURRENT_BUILD_HASH_OR_DATE\"}}}}}" (reference)
(*) Notice that the patch command allows you to change specific properties in the deployment (or any other chosen object), like the label selector, the pod labels, or other properties like the value of the NAMESPACE environment variable in your example.
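For example, a strategic merge patch that repoints the NAMESPACE variable at a different key of the config ConfigMap might look like this; a sketch only, where OTHER_KEY is a hypothetical key name:
kubectl patch deployment api -p '{"spec":{"template":{"spec":{"containers":[{"name":"api","env":[{"name":"NAMESPACE","valueFrom":{"configMapKeyRef":{"name":"config","key":"OTHER_KEY"}}}]}]}}}}'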
I've run into the same problem, and none of the solutions posted so far helped. The solution is easy, but not easy to see or predict. The applied YAML will generate both a Deployment and a ReplicaSet the first time it's run. Unfortunately, applying changes to the manifest may only replace the ReplicaSet, while the Deployment remains unchanged. This is a problem because some changes need to happen at the Deployment level, but the old Deployment hangs around. For best results, delete the Deployment and ensure all previous Deployments and ReplicaSets are deleted, then apply the updated manifest.
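Concretely, the clean-up could look something like this (a sketch reusing the names from the question):
$ kubectl delete deployment api
$ kubectl get deployments,replicasets    # confirm the old objects are gone
$ kubectl apply --filename /tmp/deployment.yaml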
Related
I am trying to update a deployment via the YAML file, similar to this question. I have the following yaml file...
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simple-server-deployment
  labels:
    app: simple-server
spec:
  replicas: 3
  selector:
    matchLabels:
      app: simple-server
  template:
    metadata:
      labels:
        app: simple-server
    spec:
      containers:
      - name: simple-server
        image: nginx
        ports:
        - name: http
          containerPort: 80
I tried changing the code by changing replicas: 3 to replicas: 1. Next I redeployed with kubectl apply -f simple-deployment.yml and got deployment.apps/simple-server-deployment configured. However, when I run kubectl rollout history deployment/simple-server-deployment I only see one entry:
REVISION CHANGE-CAUSE
1 <none>
How do I do the same thing while increasing the revision so it is possible to rollback?
I know this can be done without the YAML but this is just an example case. In the real world I will have far more changes and need to use the YAML.
You can use the --record flag, so in your case the command will look like:
kubectl apply -f simple-deployment.yml --record
However, a few notes.
First, the --record flag is deprecated - you will see the following message when you run kubectl apply with the --record flag:
Flag --record has been deprecated, --record will be removed in the future
There is no replacement for this flag yet, but keep in mind that in the future there probably will be.
Second, not every change will be recorded (even with the --record flag) - I tested your example from the main question and there is no new revision. Why? Because:
@deech this is expected behavior. The Deployment only creates a new revision (i.e. another ReplicaSet) when you update its pod template. Scaling it won't create another revision.
Considering the two points above, you need to think (and probably test) whether the --record flag is suitable for you. Maybe it's better to use some version control system like git, but as I said, it depends on your requirements.
A change to replicas does not create a new history record. You can add --record to your apply command and check the annotation later to see what the last applied spec was.
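The recorded command lands in the kubernetes.io/change-cause annotation, which you can read back directly; a quick sketch:
$ kubectl apply -f simple-deployment.yml --record
$ kubectl get deployment simple-server-deployment -o jsonpath='{.metadata.annotations.kubernetes\.io/change-cause}'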
I tried to automate the rolling update when configmap changes are made. But I am confused about how I can verify whether the rolling update was successful. I found the command
kubectl rollout status deployment test-app -n test
But I guess this is used when we are performing a rollback rather than a rolling update. What's the best way to know whether the rolling update was successful?
I think it is fine;
kubectl rollout status deployments/test-app -n test can be used to verify the rollout of the deployment as well.
As an additional step, you can run
kubectl rollout history deployments/test-app -n test as well.
If you need further clarity, run
kubectl get deployments -o wide and check the READY and IMAGE fields.
ConfigMap generation and rolling update
I tried to automate the rolling update when the configmap changes are made
It is a good practice to create new resources instead of mutating (updating in-place). kubectl kustomize supports this workflow:
The recommended way to change a deployment's configuration is to
create a new configMap with a new name,
patch the deployment, modifying the name value of the appropriate configMapKeyRef field.
You can deploy using Kustomize to automatically create a new ConfigMap every time you want to change content by using configMapGenerator. The old ones can be garbage collected when not used anymore.
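A minimal kustomization.yaml sketch of that workflow (the literal key/value is illustrative):
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
configMapGenerator:
- name: example-configmap-2
  literals:
  - NAMESPACE=production   # illustrative content; any change here yields a new name suffix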
With Kustomize's configMapGenerator you get a generated name.
Example
kind: ConfigMap
metadata:
  name: example-configmap-2-g2hdhfc6tk
and this name gets reflected in your Deployment, which then triggers a new rolling update, but with a new ConfigMap, leaving the old one unchanged.
Deploy both Deployment and ConfigMap using
kubectl apply -k <kustomization_directory>
When handling change this way, you are following the practices called Immutable Infrastructure.
Verify deployment
To verify a successful deployment, you are right; you should use:
kubectl rollout status deployment test-app -n test
When leaving the old ConfigMap unchanged and creating a new ConfigMap for the new ReplicaSet, it is clear which ConfigMap belongs to which ReplicaSet.
A rollback will also be easier to understand, since the old and new ReplicaSets each use their own ConfigMap (on change of content).
Your command is fine for checking whether an update went through.
Now, a ConfigMap change eventually gets applied to the Pod. There is no need to do a rolling update for that. Depending on what you have passed in the ConfigMap, you could probably just have restarted the service and that's it.
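If a restart is all you need, you can trigger one without touching the manifest and watch it with the same status command; a sketch with the names from the question:
$ kubectl rollout restart deployment test-app -n test
$ kubectl rollout status deployment test-app -n test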
What's the best way to know if the rolling update is successful or not?
To check whether your rolling update was executed correctly, your command works fine; you could also check whether your replicas are running.
I tried to automate the rolling update when the configmap changes are made.
You could use Reloader to perform your rolling updates automatically when a configmap/secret changes.
Reloader can watch changes in ConfigMap and Secret and do rolling upgrades on Pods with their associated DeploymentConfigs, Deployments, Daemonsets and Statefulsets.
Let's explore how Reloader works in a practical way, using an nginx deployment as an example.
First install Reloader in your cluster:
kubectl apply -f https://raw.githubusercontent.com/stakater/Reloader/master/deployments/kubernetes/reloader.yaml
You will see a new container named reloader-reloader-...; this container is responsible for watching your deployments and performing the rolling updates when necessary.
Create a configMap with your values. In my case, I'll create a my-config and set a variable called myvar.value with the value hello:
kubectl create configmap my-config --from-literal=myvar.value=hello
Now, let's create a simple deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-example
  labels:
    app: nginx
  annotations:
    configmap.reloader.stakater.com/reload: my-config
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
        env:
        - name: MYVAR
          valueFrom:
            configMapKeyRef:
              name: my-config
              key: myvar.value
In this example, the nginx image is used, getting the value from my configMap and setting it in a variable called MYVAR.
For Reloader to work, you must specify the name of your configMap in the annotations; in the example above it is:
metadata:
  annotations:
    configmap.reloader.stakater.com/reload: my-config
Apply the deployment example with kubectl apply -f mydeployment-example.yaml and check the variable from the new pod.
$ kubectl exec -it $(kubectl get pods -l=app=nginx --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}') env | grep MYVAR
MYVAR=hello
Now let's change the value of the variable:
Edit the configmap with kubectl edit configmap my-config, change the value of myvar.value to hi, save and close.
After saving, Reloader will recreate your container and pick up the new value from the configmap.
To check if the rolling update was executed successfully:
kubectl rollout status deployment deployment-example
Check the new value:
$ kubectl exec -it $(kubectl get pods -l=app=nginx --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}') env | grep MYVAR
MYVAR=hi
That's all!
Check the Reloader github to see more usage options.
I hope it helps!
Mounted ConfigMaps are updated automatically (reference).
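That is, when the ConfigMap is consumed as a volume rather than as environment variables, the kubelet refreshes the mounted files in place (after a sync delay) with no rollout at all; a sketch of the pattern, where config-volume and /etc/config are illustrative names and my-config is reused from the answer above:
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: config-volume        # illustrative name
      mountPath: /etc/config     # illustrative path
  volumes:
  - name: config-volume
    configMap:
      name: my-config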
To check the rollout history, use the steps below.
ubuntu@dlv-k8s-cluster-master:~$ kubectl rollout history deployment busybox
REVISION CHANGE-CAUSE
1 kubectl create --filename=busybox.yaml --record=true
Update the image on the deployment as below:
$ kubectl set image deployment.apps/busybox *=busybox --record
deployment.apps/busybox image updated
Check the new rollout history, which will list the new change cause for the rollout:
$ kubectl rollout history deployment busybox
REVISION CHANGE-CAUSE
1 kubectl create --filename=busybox.yaml --record=true
2 kubectl set image deployment.apps/busybox *=busybox --record=true
To roll back the deployment, use the undo flag with the rollout command:
ubuntu@dlv-k8s-cluster-master:~$ kubectl rollout undo deployment busybox
deployment.apps/busybox rolled back
ubuntu@dlv-k8s-cluster-master:~$ kubectl rollout history deployment busybox
REVISION CHANGE-CAUSE
2 kubectl set image deployment.apps/busybox *=busybox --record=true
3 kubectl create --filename=busybox.yaml --record=true
When I run my Pod I get:
The Pod "cas-de" is invalid: spec: Forbidden: pod updates may not change fields other than spec.containers[*].image, spec.initContainers[*].image, spec.activeDeadlineSeconds or spec.tolerations (only additions to existing tolerations)
However, I searched on the Kubernetes website and I didn't find anything wrong (I really don't understand where my mistake is).
Is it better to set volumeMounts in a Pod or in a Deployment?
apiVersion: v1
kind: Pod
metadata:
  name: cas-de
  namespace: ds-svc
spec:
  containers:
  - name: ds-mg-cas
    image: "docker-all.xxx.net/library/ds-mg-cas:latest"
    imagePullPolicy: Always
    ports:
    - containerPort: 8443
    - containerPort: 6402
    env:
    - name: JAVA_APP_CONFIGS
      value: "/apps/ds-cas/configs"
    - name: JAVA_EXTRA_PARAMS
      value: "-Djava.security.auth.login.config=./config/jaas.config -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=6402"
    volumeMounts:
    - name: ds-cas-config
      mountPath: "/apps/ds-cas/context"
  volumes:
  - name: ds-cas-config
    hostPath:
      path: "/apps/ds-cas/context"
The YAML template is valid. Some of the forbidden fields might have been changed before kubectl apply ... was executed.
This looks more like a development setup. The solution is to delete the existing pod using the kubectl delete pod cas-de command and then execute kubectl apply -f file.yaml or kubectl create -f file.yaml.
There are several fields on objects that you simply aren't allowed to change after the object has initially been created. As a specific example, the reference documentation for Containers notes that volumeMounts "cannot be updated". If you hit one of these cases, you need to delete and recreate the object (possibly creating the new one first with a different name).
Does it better to set volumeMounts in a Pod or in Deployment?
Never use bare Pods; always prefer using one of the Controllers that manages Pods, most often a Deployment.
Changing to a Deployment will actually solve this problem because updating a Deployment's pod spec will go through the sequence of creating a new Pod, waiting for it to become available, and then deleting the old one for you. It never tries to update a Pod in place.
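A sketch of the question's Pod wrapped in a Deployment (the app: ds-mg-cas label is an illustrative choice; the pod spec itself is carried over unchanged):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cas-de
  namespace: ds-svc
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ds-mg-cas
  template:
    metadata:
      labels:
        app: ds-mg-cas
    spec:
      containers:
      - name: ds-mg-cas
        image: "docker-all.xxx.net/library/ds-mg-cas:latest"
        imagePullPolicy: Always
        ports:
        - containerPort: 8443
        - containerPort: 6402
        # env entries from the original Pod go here unchanged
        volumeMounts:
        - name: ds-cas-config
          mountPath: "/apps/ds-cas/context"
      volumes:
      - name: ds-cas-config
        hostPath:
          path: "/apps/ds-cas/context"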
I have defined a Deployment for my app:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myapp-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: 172.20.34.206:5000/myapp_img:2.0
        ports:
        - containerPort: 8080
Now, if I want to update my app's image from 2.0 to 3.0, I do this:
$ kubectl edit deployment/myapp-deployment
vim opens; I change the image version from 2.0 to 3.0 and save.
How can it be automated? Is there a way to do it just running a command? Something like:
$ kubectl edit deployment/myapp-deployment --image=172.20.34.206:5000/myapp_img:3.0
I thought using Kubernetes API REST but I don't understand the documentation.
You could do it via the REST API using the PATCH verb. However, an easier way is to use kubectl patch. The following command updates your app's tag:
kubectl patch deployment myapp-deployment -p \
'{"spec":{"template":{"spec":{"containers":[{"name":"myapp","image":"172.20.34.206:5000/myapp:img:3.0"}]}}}}'
According to the documentation, YAML format should be accepted as well. See Kubernetes issue #458 though (and in particular this comment) which may hint at a problem.
There is a set image command which may be useful in simple cases
Update existing container image(s) of resources.
Possible resources include (case insensitive):
pod (po), replicationcontroller (rc), deployment (deploy), daemonset (ds), job, replicaset (rs)
kubectl set image (-f FILENAME | TYPE NAME) CONTAINER_NAME_1=CONTAINER_IMAGE_1 ... CONTAINER_NAME_N=CONTAINER_IMAGE_N
http://kubernetes.io/docs/user-guide/kubectl/kubectl_set_image/
$ kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
deployment "nginx-deployment" image updated
http://kubernetes.io/docs/user-guide/deployments/
(I would have posted this as a comment if I had enough reputation)
Yes, as per http://kubernetes.io/docs/user-guide/kubectl/kubectl_patch/ both JSON and YAML formats are accepted.
But I see that all the examples there are using JSON format.
Filed https://github.com/kubernetes/kubernetes.github.io/issues/458 to add a YAML format example.
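For reference, the same patch as above expressed in YAML instead of JSON; a sketch (kubectl patch accepts a YAML body in -p as well):
kubectl patch deployment myapp-deployment -p '
spec:
  template:
    spec:
      containers:
      - name: myapp
        image: 172.20.34.206:5000/myapp_img:3.0'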
I have recently built a tool to automate deployment updates when new images are available; it works with Kubernetes and Helm:
https://github.com/rusenask/keel
You only have to label your deployments with a Keel policy like keel.sh/policy=major to enable major version updates; more info in the readme. It works similarly with Helm; no additional CLI/UI is required.
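Attaching the policy is just a label on the deployment; for example, with the myapp-deployment from the question:
kubectl label deployment myapp-deployment keel.sh/policy=major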