How to do a rolling update of a Deployment in Kubernetes?

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: test
        image: xxx:latest
        ports:
        - containerPort: 80
        imagePullPolicy: Always
      imagePullSecrets:
      - name: aaaa
I use the tag "latest". When I update the image, the new image is still tagged "latest". When I run kubectl set image deployments/test test=xxx:latest, nothing happens. What should I do?

A rolling update is always triggered when the PodTemplateSpec under template is changed.
While using the :latest tag is not recommended, it can still work when combined with imagePullPolicy: Always and a label that is changed with every image update. Something like this:
kubectl patch deployment test -p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"$(date +%s)\"}}}}}"

Rolling update depends on docker image tags. If you're using the latest tag in your deployment, you need to cut a new image with a new version.
The Deployment resource can't determine whether the image has changed if you're always using the latest tag. As far as k8s is concerned, you're already running an image with the tag latest, so it has nothing to do.
It's highly recommended not to use latest for deployments for this reason. You'll have a much easier time if you version your docker images correctly.
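As a minimal sketch of what that looks like, the container section of the Deployment at the top would pin an explicit tag instead of :latest (v1.0.1 is just a placeholder for whatever versioning scheme you use):

```yaml
containers:
- name: test
  image: xxx:v1.0.1   # explicit version instead of :latest
  ports:
  - containerPort: 80
```

Changing that tag to v1.0.2 on the next release edits the pod template, which is exactly what triggers the rolling update.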

Related

kubectl rollout restart deployment <deployment-name> doesn't get the latest image

Why doesn't kubectl rollout restart <deployment-name> pick up my latest image? I had rebuilt my image, but it seems that Kubernetes doesn't update my deployment with the latest image.
tl;dr
I just wanted to add an answer here regarding the failure of kubectl rollout restart deployment [my-deployment-name]. My problem was that I changed the image name without running kubectl apply -f [my-deployment-filename].yaml first.
Long Answer
So my earlier image name is microservices/posts which is in my local and looked like this.
# This is a file named `posts-depl.yaml`
apiVersion: apps/v1
kind: Deployment
metadata:
  name: posts-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: posts
  template:
    metadata:
      labels:
        app: posts
    spec:
      containers:
      - name: posts
        image: microservices/posts
However, since I needed to push it to Docker Hub, I rebuilt the image with a new name of [my docker hub username]/microservices_posts and pushed it. Then I updated posts-depl.yaml to look like this.
# Still same file `posts-depl.yaml` but updated
apiVersion: apps/v1
kind: Deployment
metadata:
  name: posts-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: posts
  template:
    metadata:
      labels:
        app: posts
    spec:
      containers:
      - name: posts
        image: [my docker hub username]/microservices_posts # Notice that I only change this part
Apparently, when I ran kubectl rollout restart deployment posts-depl, it didn't update. Then I finally decided to go to Stack Overflow. I thought I had made a mistake somewhere, or had perhaps run into a Kubernetes bug or something.
But it turns out I had to run kubectl apply -f <your deployment filename>.yaml again. Then it ran fine.
Just sharing, might change someone's life. ;)
So to review:
It seems that my existing deployment posts-depl still held the old image name microservices/posts, and since I built a new image named [my docker hub username]/microservices_posts, it didn't know about it. So when I ran kubectl rollout restart deployment <deployment name>, what it did instead was look for the microservices/posts image on my local machine! But since that image hadn't changed, it didn't do a thing!
Hence, what I should have done was re-run kubectl apply -f <my deployment filename>.yaml, which had already been updated with the new image name [my docker hub username]/microservices_posts!
Then, I live happily ever after.
Hope that helps and may you live happily ever after too.

GKE automating deploy of multiple deployments/services with different images

I'm currently looking at GKE and some of the tutorials on google cloud. I was following this one here https://cloud.google.com/solutions/integrating-microservices-with-pubsub#building_images_for_the_app (source code https://github.com/GoogleCloudPlatform/gke-photoalbum-example)
This example has 3 deployments and one service. The example tutorial has you deploy everything via the command line which is fine and all works. I then started to look into how you could automate deployments via cloud build and discovered this:
https://cloud.google.com/build/docs/deploying-builds/deploy-gke#automating_deployments
These docs say you can create a build configuration for a trigger (such as pushing to a particular repo) and it will trigger the build. The sample YAML they show for this is as follows:
# deploy container image to GKE
- name: "gcr.io/cloud-builders/gke-deploy"
  args:
  - run
  - --filename=kubernetes-resource-file
  - --image=gcr.io/project-id/image:tag
  - --location=${_CLOUDSDK_COMPUTE_ZONE}
  - --cluster=${_CLOUDSDK_CONTAINER_CLUSTER}
I understand how the location and cluster parameters can be passed in and these docs also say the following about the resource file (filename parameter) and image parameter:
kubernetes-resource-file is the file path of your Kubernetes configuration file or the directory path containing your Kubernetes resource files.
image is the desired name of the container image, usually the application name.
Relating this back to the demo application repo where all the services are in one repo, I believe I could supply a folder path to the filename parameter such as the config folder from the repo https://github.com/GoogleCloudPlatform/gke-photoalbum-example/tree/master/config
But the trouble here is that those resource files themselves have an image property in them, so I don't know how this relates to the image property of the Cloud Build trigger YAML. I also don't know how you could have multiple image properties in the trigger YAML, where each deployment would have its own container image.
I'm new to GKE and Kubernetes in general, so I'm wondering if I'm misinterpreting what the kubernetes-resource-file should be in this instance.
But is it possible to automate deploying multiple deployments/services in this fashion when they're all bundled into one repo? Or have Google just oversimplified things for this tutorial, the reality being that most services would be in their own repo so as to be built/tested/deployed separately?
Either way, how would the image property relate to the fact that an image is already defined in the deployment yaml? e.g:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    name: photoalbum-app
  name: photoalbum-app
spec:
  replicas: 3
  selector:
    matchLabels:
      name: photoalbum-app
  template:
    metadata:
      labels:
        name: photoalbum-app
    spec:
      containers:
      - name: photoalbum-app
        image: gcr.io/[PROJECT_ID]/photoalbum-app#[DIGEST]
        tty: true
        ports:
        - containerPort: 8080
        env:
        - name: PROJECT_ID
          value: "[PROJECT_ID]"
The command that you use is perfect for testing the deployment of one image. But when you work with Kubernetes (K8s), and GCP's managed version of it (GKE), you usually never do this.
You use YAML files to describe your deployments, services, and all the other K8s objects that you want. When you deploy, you can run something like this:
kubectl apply -f <file.yaml>
If you have several files, you can use a wildcard if you want:
kubectl apply -f config/*.yaml
If you prefer to use only one file, you can separate the objects with ---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-svc
  labels:
    app: nginx
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
  labels:
    app: nginx
spec: ...
...
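If gke-deploy's single --image flag doesn't fit a multi-image repo, one alternative sketch (assuming the generic gcr.io/cloud-builders/kubectl builder, with the same substitution variables as the trigger YAML above) is to apply the whole config directory in one build step:

```yaml
# Cloud Build step sketch: apply every manifest in config/.
- name: "gcr.io/cloud-builders/kubectl"
  args: ["apply", "-f", "config/"]
  env:
  - "CLOUDSDK_COMPUTE_ZONE=${_CLOUDSDK_COMPUTE_ZONE}"
  - "CLOUDSDK_CONTAINER_CLUSTER=${_CLOUDSDK_CONTAINER_CLUSTER}"
```

With this approach the image references stay inside the manifests themselves, so no --image flag is needed at all.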

Image tag in Replication Controller file

I have an image pushed to Google Container Registry, named gcr.io/$(PROJECT_ID)/img-name:46d49ab.
In my replication controller I have:
apiVersion: v1
kind: ReplicationController
metadata:
  name: go-server-rc
spec:
  replicas: 3
  selector:
    name: go-server
    version: v8
  template:
    metadata:
      labels:
        name: go-server
        version: v8
    spec:
      containers:
      - name: go-server
        image: gcr.io/$(PROJECT_ID)/img-name:46d49ab
        ports:
        - containerPort: 5000
This works, but it doesn't when I remove the commit hash tag 46d49ab. I don't want to have to change the tag every single time I commit.
I have also set up a trigger on Google Container Builder to pull the master branch of my repository after every commit, and create an image gcr.io/$(PROJECT_ID)/img-name:$(COMMIT_HASH).
How can I edit my replication controller file to just get the most recent? What workflows do people use?
It's possible to use the latest tag to ensure Kubernetes pulls the image each time it runs. Every time you create a new image, tag it with latest and push it to the container registry. However, I would not recommend this.
You won't know which pods are running which version of your code. I do exactly as you mention in your question. I find it's better to update your deployment object each time your image updates. This will ensure the deployment is in the state you expect and troubleshooting will be clearer when looking at images.
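As a sketch of that update step, assuming the hash comes from your build (the manifest line below is inlined as a stand-in; in a real pipeline you would run the same sed expression over your replication controller file and pipe the result to kubectl apply -f -):

```shell
# COMMIT_HASH would be provided by Container Builder in a real pipeline.
COMMIT_HASH=46d49ab

# Substitute the image tag in the manifest line.
printf 'image: gcr.io/my-project/img-name:oldtag\n' \
  | sed "s|img-name:[A-Za-z0-9._-]*|img-name:${COMMIT_HASH}|"
# -> image: gcr.io/my-project/img-name:46d49ab
```

Because the resulting pod template differs from the running one, re-applying it is what causes new pods to roll out.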

Kubernetes rolling update in case of secret update

I have a Replication Controller with one replica using a secret. How can I update or recreate its (lone) pod, without downtime, with the latest secret value when the secret value is changed?
My current workaround is increasing number of replicas in the Replication Controller, deleting the old pods, and changing the replica count back to its original value.
Is there a command or flag to induce a rolling update retaining the same container image and tag? When I try to do so, it rejects my attempt with the following message:
error: Specified --image must be distinct from existing container image
A couple of issues #9043 and #13488 describe the problem reasonably well, and I suspect a rolling update approach will eventuate shortly (like most things in Kubernetes), though unlikely for 1.3.0. The same issue applies with updating ConfigMaps.
Kubernetes will do a rolling update whenever anything in the Deployment's pod spec is changed (e.g., typically the image being bumped to a new version), so one suggested workaround is to set an env variable in your deployment pod spec (e.g. RESTART_).
Then when you've updated your secret/configmap, bump the env value in your deployment (via kubectl apply, or patch, or edit), and Kubernetes will start a rolling update of your deployment.
Example Deployment spec:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test-nginx
spec:
  replicas: 2
  template:
    metadata:
    spec:
      containers:
      - name: nginx
        image: "nginx:stable"
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /etc/nginx/conf.d
          name: config
          readOnly: true
        - mountPath: /etc/nginx/auth
          name: tokens
          readOnly: true
        env:
        - name: RESTART_
          value: "13"
      volumes:
      - name: config
        configMap:
          name: test-nginx-config
      - name: tokens
        secret:
          secretName: test-nginx-tokens
Two tips:
your environment variable name can't start with an _ or it magically disappears somehow.
if you use a number for your restart variable you need to wrap it in quotes
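The bump itself can be a one-liner; a sketch against the example deployment above (kubectl set env rewrites the variable in the pod template, and that template change is what triggers the rolling update):

```shell
# Set RESTART_ to the current Unix time so each bump is unique.
kubectl set env deployment/test-nginx RESTART_="$(date +%s)"
```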
If I understand correctly, Deployment should be what you want.
Deployment supports rolling update for almost all fields in the pod template.
See http://kubernetes.io/docs/user-guide/deployments/

Upgrade image in a Deployment's pods

I have a Deployment with 3 replicas of a pod running the image 172.20.20.20:5000/my_app, that is located in my private registry.
I want to do a rolling update in the deployment when I push a new latest version of that image to my registry.
I push the new image this way (tag v3.0 to latest):
$ docker tag 172.20.20.20:5000/my_app:3.0 172.20.20.20:5000/my_app
$ docker push 172.20.20.20:5000/my_app
But nothing happens. Pods' images are not upgraded. This is my deployment definition:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myapp-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: app
        image: 172.20.20.20:5000/my_app:latest
        ports:
        - containerPort: 8080
Is there a way to do that automatically? Should I run a command like rolling-update, as with ReplicationControllers?
In order to upgrade a Deployment you have to modify the Deployment resource with the new image. So for example, change 172.20.20.20:5000/my_app:v1 to 172.20.20.20:5000/my_app:v2. Since you're just modifying the image within the Docker registry, Kubernetes doesn't notice the change.
If you (manually) kill the individual Pods, the Deployment will restart them. Since the Deployment image specifies the "latest" tag, Kubernetes will download the latest version (now "v3" in your case) due to the implied "Always" imagePullPolicy.
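A sketch of forcing that restart in one step instead of killing pods by hand: patch a throwaway timestamp annotation into the pod template, since any pod-template change triggers a rolling update (on Kubernetes 1.15+, kubectl rollout restart deployment/myapp-deployment does the same thing):

```shell
# Build a JSON patch whose annotation value changes on every run.
PATCH="{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"redeploy\":\"$(date +%s)\"}}}}}"
kubectl patch deployment myapp-deployment -p "$PATCH"
```

Combined with imagePullPolicy: Always, each restarted pod re-pulls the :latest image from the registry.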