Why doesn't kubectl rollout restart <deployment-name> pick up my latest image? I rebuilt my image, but it seems that Kubernetes doesn't update my deployment with the latest image.
tl;dr
I just wanted to add an answer here regarding the failure of kubectl rollout restart deployment [my-deployment-name]. My problem was that I changed the image name without running kubectl apply -f [my-deployment-filename].yaml first.
Long Answer
My earlier image name was microservices/posts, which only existed locally, and the deployment looked like this.
# This is a file named `posts-depl.yaml`
apiVersion: apps/v1
kind: Deployment
metadata:
  name: posts-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: posts
  template:
    metadata:
      labels:
        app: posts
    spec:
      containers:
        - name: posts
          image: microservices/posts
However, since I needed to push it to Docker Hub, I rebuilt the image under the new name [my docker hub username]/microservices_posts and pushed it. Then I updated posts-depl.yaml to look like this.
# Still the same file `posts-depl.yaml`, but updated
apiVersion: apps/v1
kind: Deployment
metadata:
  name: posts-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: posts
  template:
    metadata:
      labels:
        app: posts
    spec:
      containers:
        - name: posts
          image: [my docker hub username]/microservices_posts # Notice that I only changed this part
However, when I ran kubectl rollout restart deployment posts-depl, it didn't update. I finally decided to go to Stack Overflow, thinking I had made some mistake or maybe hit a Kubernetes bug.
But it turns out I had to run kubectl apply -f <my deployment filename>.yaml again. After that it ran fine.
Just sharing, might change someone's life. ;)
So, to review:
It seems that my existing deployment, posts-depl, was still configured with my earlier image name, microservices/posts, and it knew nothing about the newly built image [my docker hub username]/microservices_posts. So when I ran kubectl rollout restart deployment <deployment name>, it simply restarted the pods with the old microservices/posts image from my local machine, and since that image had not changed, nothing happened!
Hence, what I should have done was re-run kubectl apply -f <my deployment filename>.yaml, which had already been updated with the new image name [my docker hub username]/microservices_posts!
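For reference, here is a minimal sketch of the full sequence that ended up working for me (the image and file names are the placeholders used above):

docker build -t [my docker hub username]/microservices_posts .   # rebuild under the new name
docker push [my docker hub username]/microservices_posts         # push to Docker Hub
kubectl apply -f posts-depl.yaml                                  # tell the Deployment about the new image name
kubectl rollout restart deployment posts-depl                     # only now does a restart pull the new image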
Then, I live happily ever after.
Hope that helps and may you live happily ever after too.
Related
I am trying to update a deployment via the YAML file, similar to this question. I have the following yaml file...
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simple-server-deployment
  labels:
    app: simple-server
spec:
  replicas: 3
  selector:
    matchLabels:
      app: simple-server
  template:
    metadata:
      labels:
        app: simple-server
    spec:
      containers:
        - name: simple-server
          image: nginx
          ports:
            - name: http
              containerPort: 80
I tried changing the code by changing replicas: 3 to replicas: 1. Next I redeployed with kubectl apply -f simple-deployment.yml and got deployment.apps/simple-server-deployment configured. However, when I run kubectl rollout history deployment/simple-server-deployment I only see one entry:
REVISION CHANGE-CAUSE
1 <none>
How do I do the same thing while creating a new revision, so that it is possible to roll back?
I know this can be done without the YAML but this is just an example case. In the real world I will have far more changes and need to use the YAML.
You can use the --record flag, so in your case the command will look like:
kubectl apply -f simple-deployment.yml --record
However, a few notes.
First, the --record flag is deprecated; you will see the following message when you run kubectl apply with the --record flag:
Flag --record has been deprecated, --record will be removed in the future
There is no replacement for this flag yet, but keep in mind that there probably will be one in the future.
Second, not every change will be recorded (even with the --record flag). I tested your example from the main question and there is no new revision. Why? Because:
@deech this is expected behavior. The Deployment only creates a new revision (i.e. another ReplicaSet) when you update its pod template. Scaling it won't create another revision.
Considering the two points above, you need to think about (and probably test) whether the --record flag is suitable for you. Maybe it's better to use a version control system like git, but as I said, it depends on your requirements.
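If you want to avoid the deprecated flag, one alternative (a sketch, not an official replacement) is to set the kubernetes.io/change-cause annotation yourself, which is what --record populates under the hood:

kubectl apply -f simple-deployment.yml
kubectl annotate deployment/simple-server-deployment kubernetes.io/change-cause="scaled replicas from 3 to 1"
kubectl rollout history deployment/simple-server-deployment   # the annotation shows up in the CHANGE-CAUSE column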
A change in replicas does not create a new history record. You can add --record to your apply command and check the annotation later to see what the last applied spec was.
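For example, a quick way to inspect the last applied spec (kubectl apply stores it in the kubectl.kubernetes.io/last-applied-configuration annotation) could look like this:

kubectl get deployment simple-server-deployment -o jsonpath='{.metadata.annotations.kubectl\.kubernetes\.io/last-applied-configuration}'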
I'm using K8s on GCP.
Here is my deployment file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simpleapp-direct
  labels:
    app: simpleapp-direct
    role: backend
    stage: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: simpleapp-direct
      version: v0.0.1
  template:
    metadata:
      labels:
        app: simpleapp-direct
        version: v0.0.1
    spec:
      containers:
        - name: simpleapp-direct
          image: gcr.io/applive/simpleapp-direct:latest
          imagePullPolicy: Always
I first apply the deployment file with the kubectl apply command:
kubectl apply -f deployment.yaml
The pods were created properly.
I was expecting that every time I pushed a new image with the latest tag, the pods would automatically be killed and restarted using the new image.
I tried the rollout command:
kubectl rollout restart deploy simpleapp-direct
The pods restarted as I wanted.
However, I don't want to run this command every time there is a new latest build.
How can I handle this situation?
Thanks a lot
Try using the image digest (hash) instead of a tag in your Pod definition.
Generally: there is no way to automatically restart pods when a new image is ready. It is generally advisable not to use image:latest (or just image_name) in Kubernetes, as it can cause difficulties with rolling back your deployment. You also need to make sure that imagePullPolicy is set to Always. Normally, when you use CI/CD or GitOps, your deployment is updated automatically by these tools once the new image is ready and has passed the tests.
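For illustration, pinning the image by digest instead of a mutable tag might look like this in the container spec (the sha256 value below is only a placeholder):

spec:
  containers:
    - name: simpleapp-direct
      # immutable digest instead of the mutable :latest tag (placeholder digest)
      image: gcr.io/applive/simpleapp-direct@sha256:0000000000000000000000000000000000000000000000000000000000000000
      imagePullPolicy: Always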
When your Docker image is updated, you need to set up a trigger on this update within your CI/CD pipeline to re-run the deployment. I'm not sure about the base system/image where you build your Docker image, but you can add the Kubernetes credentials there and run the above commands just like you do on your local computer.
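As a rough sketch (not a drop-in config), a final step in a Google Cloud Build pipeline could restart the Deployment after the image has been pushed; the zone and cluster name below are placeholders:

steps:
  - name: gcr.io/cloud-builders/kubectl
    args: ["rollout", "restart", "deployment", "simpleapp-direct"]
    env:
      - CLOUDSDK_COMPUTE_ZONE=europe-west1-b      # placeholder zone
      - CLOUDSDK_CONTAINER_CLUSTER=my-cluster     # placeholder cluster name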
When using Google Stackdriver I can use the log query to find the exact log statements I am looking for.
This might look like this:
resource.type="k8s_container"
resource.labels.project_id="my-project"
resource.labels.location="europe-west3-a"
resource.labels.cluster_name="my-cluster"
resource.labels.namespace_name="dev"
resource.labels.pod_name="my-app-pod-7f6cf95b6c-nkkbm"
resource.labels.container_name="container"
However, as you can see from the query argument resource.labels.pod_name="my-app-pod-7f6cf95b6c-nkkbm", I am looking for a pod with the ID 7f6cf95b6c-nkkbm. Because of this, I cannot keep using this Stackdriver view with this exact query once I deploy a new revision of my-app: the pod gets a new ID and the one in the current query becomes invalid.
Now, I don't want to look up the new ID every time I want the current view of my my-app logs. So I tried adding a special label stackdriver: my-app to my Kubernetes YAML file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  template:
    metadata:
      labels:
        stackdriver: my-app <<<
Revisiting my newly deployed Pod, I can confirm that the label stackdriver: my-app is indeed present.
Now I want to add this new label to use as a query argument:
resource.type="k8s_container"
resource.labels.project_id="my-project"
resource.labels.location="europe-west3-a"
resource.labels.cluster_name="my-cluster"
resource.labels.namespace_name="dev"
resource.labels.pod_name="my-app-pod-7f6cf95b6c-nkkbm"
resource.labels.container_name="container"
resource.labels.stackdriver=my-app <<< the kubernetes label
As you can guess, this did not work; otherwise I'd have no reason to write this question. ;)
Any idea how what I am trying to do can be achieved?
Any idea how what I am trying to do can be achieved?
Yes! In fact, I've prepared an example to show you the whole process :)
Let's assume:
You have a GKE cluster named: gke-label
You have a Cloud Operations for GKE enabled (logging)
You have a Deployment named nginx with a following label:
stackdriver: look_here_for_me
deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
      stackdriver: look_here_for_me
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
        stackdriver: look_here_for_me
    spec:
      containers:
        - name: nginx
          image: nginx
You can apply this definition and send some traffic from another pod so that logs are generated. I've done it with:
$ kubectl run -it --rm --image=ubuntu ubuntu -- /bin/bash
$ apt update && apt install -y curl
$ curl NGINX_POD_IP_ADDRESS/NONEXISTING # <-- this path is only for better visibility
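(To find NGINX_POD_IP_ADDRESS, something like the following should work; the label selector matches the Deployment above.)

$ kubectl get pods -l app=nginx -o wide   # the IP column shows the pod address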
After that you can go to:
GCP Cloud Console (Web UI) -> Logging (I used the new version)
With the following query:
resource.type="k8s_container"
resource.labels.cluster_name="gke-label"
-->labels."k8s-pod/stackdriver"="look_here_for_me"
You should be able to see the container logs as well as the label.
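If you prefer the command line, the same label filter should also work with gcloud (a sketch; the cluster name is the one assumed above):

gcloud logging read 'resource.type="k8s_container" AND resource.labels.cluster_name="gke-label" AND labels."k8s-pod/stackdriver"="look_here_for_me"' --limit=10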
I'm hosting an application on the Google Cloud Platform via Kubernetes, and I've managed to set up this continuous deployment pipeline:
Application code is updated
New Docker image is automatically generated
K8s Deployment is automatically updated to use the new image
This works great, except for one issue - the deployment always seems to have only one pod. Because of this, when the next update cycle comes around, the entire application goes down, which is unacceptable.
I've tried modifying the YAML of the deployment to increase the number of replicas, and it works... until the next image update, where it gets reset back to one pod again.
This is the command I use to update the image deployment:
set image deployment foo-server gcp-cd-foo-server-sha256=gcr.io/project-name/gcp-cd-foo-server:$REVISION_ID
You can use this command if you don't want to edit the deployment YAML file:
kubectl scale deployment foo-server --replicas=2
Also, look at the update strategy with the maxUnavailable and maxSurge properties.
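For example, an explicit rolling-update strategy in the Deployment spec (a sketch; the values are only illustrative) might look like:

spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never take a pod down before its replacement is ready
      maxSurge: 1         # allow one extra pod during the update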
In your original deployment.yml file, keep the replicas at 2 or more; otherwise you can't avoid downtime when only one pod is running and you re-deploy/upgrade, etc.
Deployment with 3 replicas (example):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.7.9
          ports:
            - containerPort: 80
Deployment can ensure that only a certain number of Pods may be down while they are being updated. By default, it ensures that at least 25% less than the desired number of Pods are up (25% max unavailable).
Deployment can also ensure that only a certain number of Pods may be created above the desired number of Pods. By default, it ensures that at most 25% more than the desired number of Pods are up (25% max surge).
https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
Never mind, I had just set up my deployments wrong; it had something to do with using the GCP user interface to create the deployments rather than console commands. I created the deployments with kubectl run app --image ... instead and it works now.
I have a Deployment with 3 replicas of a pod running the image 172.20.20.20:5000/my_app, that is located in my private registry.
I want to do a rolling update of the deployment when I push a new latest version of that image to my registry.
I push the new image this way (tag v3.0 to latest):
$ docker tag 172.20.20.20:5000/my_app:3.0 172.20.20.20:5000/my_app
$ docker push 172.20.20.20:5000/my_app
But nothing happens. Pods' images are not upgraded. This is my deployment definition:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myapp-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: app
          image: 172.20.20.20:5000/my_app:latest
          ports:
            - containerPort: 8080
Is there a way to do that automatically? Should I run a command like rolling-update as with ReplicationControllers?
In order to upgrade a Deployment you have to modify the Deployment resource with the new image. So, for example, change 172.20.20.20:5000/my_app:v1 to 172.20.20.20:5000/my_app:v2. Since you're only modifying the image within the Docker registry, Kubernetes doesn't notice the change.
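For example, a tag-based update could be triggered with a single command (the v2 tag is hypothetical; app is the container name from your definition):

kubectl set image deployment/myapp-deployment app=172.20.20.20:5000/my_app:v2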
If you (manually) kill the individual Pods, the Deployment will restart them. Since the Deployment's image specifies the "latest" tag, Kubernetes will download the latest version (now "v3" in your case) due to the implied "Always" imagePullPolicy.
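For instance, deleting the pods behind the Deployment (or, on newer kubectl versions, using rollout restart) should make it re-pull :latest; a sketch using the label from your definition:

kubectl delete pods -l app=myapp                      # the ReplicaSet recreates them and re-pulls :latest
kubectl rollout restart deployment/myapp-deployment   # alternative on kubectl 1.15+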