How to control OpenShift rollout (CI/CD pipeline)?

We've been using a deployment config with an image change trigger, so that when a new image appears in the configured image stream, a new rollout is triggered. This has worked relatively well, but now we have a new challenge: the pipeline should wait for the rollout to complete and reflect success/failure as the job outcome.
What we've figured out so far is to pause rollouts while the image is being built. Once it is built, we resume rollouts, trigger a rollout explicitly, and then wait for it to complete:
# To work well with previously auto triggered deployments, pause rollouts if present
oc rollout pause dc "$SOURCE_NAME" || true
# apply deployment config
# TODO: Pass all the template parameters
oc process -f delivery-automation/openshift/jdk-service-template.yml -p "SOURCE_NAME=$SOURCE_NAME" -p "IMAGE_NAME=$IMAGE_NAME" | oc apply -f - --validate
# tag the correct image version as latest
# https://docs.openshift.com/container-platform/3.4/dev_guide/application_lifecycle/promoting_applications.html
oc tag "$OC_PROJECT_STORAGE/$IMAGE_NAME:$SOURCE_VERSION" "$IMAGE_NAME:latest"
# gogogogo
oc rollout resume dc "$SOURCE_NAME" || true
# new deployment with the latest image
oc rollout latest "$SOURCE_NAME"
# check that the rollout succeeds
oc rollout status "dc/$SOURCE_NAME" --watch
This almost works, except that it still triggers deployments on image change, which then makes oc rollout latest fail because another deployment is already in progress.
EDIT: The command oc rollout resume actually starts a rollout on image change!
We've also tried to modify the trigger so that it doesn't fire automatically, to no avail:
triggers:
- type: ImageChange
  imageChangeParams:
    automatic: false
    containerNames:
    - ${SOURCE_NAME}
    from:
      kind: ImageStreamTag
      name: '${IMAGE_NAME}:latest'
We've also tried to remove the image change trigger altogether, but then the rollout fails because it cannot resolve the actual image.
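For reference, the same toggle can in principle be flipped from the CLI; a minimal sketch (we haven't verified that it avoids the spurious rollout on our OpenShift version):
oc set triggers "dc/$SOURCE_NAME" --manual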

I have a sloppy solution for now and I'll be happy if someone proposes a better alternative:
# reconfigure service and deployment
- oc process -f service-template.yml -p "SOURCE_NAME=$SOURCE_NAME" -p "IMAGE_NAME=$IMAGE_NAME" | oc apply -f - --validate
# tag current version as latest
- oc tag "$OC_PROJECT_STORAGE/$IMAGE_NAME:$SOURCE_VERSION" "$IMAGE_NAME:latest"
# cancel any pending rollout, last one wins!
- oc rollout cancel dc "$SOURCE_NAME" || true
# wait until the cancellation has completed
- oc rollout status "dc/$SOURCE_NAME" --watch || true
# trigger new rollout
- oc rollout latest "$SOURCE_NAME" -o json | jq -r '"Rollout started - " + (.status.latestVersion | @text)'
# follow up on latest rollout
- oc rollout status "dc/$SOURCE_NAME" --watch
This leaves me with some residual cancelled rollouts, but functionally, it is satisfactory.
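If the leftover cancelled deployments ever become a nuisance, pruning them periodically is an option; a sketch (the retention counts are assumptions, adjust to taste):
oc adm prune deployments --keep-complete=5 --keep-failed=1 --confirm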


Why is the k8s rollback (rollout undo) not working?

After a successful
kubectl rollout restart deployment/foo
the
kubectl rollout undo deployment/foo
or
kubectl rollout undo deployment/foo --to-revision=x
have no effect. I mean, the pods are replaced by new ones and a new revision is created, which can be checked with
kubectl rollout history deployment foo
but when I call the service, the rollback has had no effect.
I also tried removing imagePullPolicy: Always, guessing that it was always pulling even during the rollback, without success; one thing is probably not related to the other.
Edited: The test is simple: I change the health check route of the HTTP API to return something different in the JSON, and after the rollback the response doesn't change back.
Edited:
Maybe a typo, but no: I was executing ... undo deployment/foo ..., and have now also tried ... undo deployment foo .... It also reports deployment.apps/foo rolled back, but there are no changes in the live system.
More tests: I changed my API route again to test what would happen if I executed a rollout undo to every previous revision, one by one. I went through the last 10 revisions, and nothing.
To be able to roll back to a previous version, don't forget to append the --record parameter to your kubectl command, for example:
kubectl apply -f DEPLOYMENT.yaml --record
Then you should be able to see the history as you know with:
kubectl rollout history deployment DEPLOYMENT_NAME
And your rollback will work properly:
kubectl rollout undo deployment DEPLOYMENT_NAME --to-revision=CHOOSEN_REVISION_NUMBER
Little example:
consider my nginx deployment manifest "nginx-test.yaml" here:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
let's create it:
❯ kubectl apply -f nginx-test.yaml --record
Flag --record has been deprecated, --record will be removed in the future
deployment.apps/nginx-deployment created
let's check the image of this deployment, as expected from the manifest:
❯ k get pod nginx-deployment-74d589986c-k9whj -o yaml | grep image:
- image: nginx
image: docker.io/library/nginx:latest
now let's modify the image of this deployment to "nginx:1.21.6":
# "nginx=" corresponds to the name of the container inside the pod created by the deployment.
❯ kubectl set image deploy nginx-deployment nginx=nginx:1.21.6
deployment.apps/nginx-deployment image updated
we can optionally check the rollout status:
❯ kubectl rollout status deployment nginx-deployment
deployment "nginx-deployment" successfully rolled out
we can check the rollout history with:
❯ kubectl rollout history deploy nginx-deployment
deployment.apps/nginx-deployment
REVISION CHANGE-CAUSE
1 kubectl apply --filename=nginx-test.yaml --record=true
2 kubectl apply --filename=nginx-test.yaml --record=true
let's check the image of this deployment, as expected:
❯ k get pod nginx-deployment-66dcfc79b5-4pk7w -o yaml | grep image:
- image: nginx:1.21.6
image: docker.io/library/nginx:1.21.6
Oh no, I don't like this image! Let's roll back:
❯ kubectl rollout undo deployment nginx-deployment --to-revision=1
deployment.apps/nginx-deployment rolled back
the replacement pod is creating:
❯ kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/nginx-deployment-66dcfc79b5-4pk7w 1/1 Running 0 3m41s 10.244.3.4 so-cluster-1-worker3 <none> <none>
pod/nginx-deployment-74d589986c-m2htr 0/1 ContainerCreating 0 13s <none> so-cluster-1-worker2 <none> <none>
after a few seconds:
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/nginx-deployment-74d589986c-m2htr 1/1 Running 0 23s 10.244.4.10 so-cluster-1-worker2 <none> <none>
as you can see it worked:
❯ k get pod nginx-deployment-74d589986c-m2htr -o yaml | grep image:
- image: nginx
image: docker.io/library/nginx:latest
let's recheck the history:
❯ kubectl rollout history deploy nginx-deployment
deployment.apps/nginx-deployment
REVISION CHANGE-CAUSE
1 kubectl apply --filename=nginx-test.yaml --record=true
2 kubectl apply --filename=nginx-test.yaml --record=true
you can change the rollout history's CHANGE-CAUSE with the "kubernetes.io/change-cause" annotation:
❯ kubectl annotate deploy nginx-deployment kubernetes.io/change-cause="update image from 1.21.6 to latest" --record
Flag --record has been deprecated, --record will be removed in the future
deployment.apps/nginx-deployment annotated
let's recheck the history:
❯ kubectl rollout history deploy nginx-deployment
deployment.apps/nginx-deployment
REVISION CHANGE-CAUSE
2 kubectl apply --filename=nginx-test.yaml --record=true
3 update image from 1.21.6 to latest
let's describe the deployment:
❯ kubectl describe deploy nginx-deployment
Name: nginx-deployment
Namespace: so-tests
CreationTimestamp: Fri, 06 May 2022 00:56:09 -0300
Labels: app=nginx
Annotations: deployment.kubernetes.io/revision: 3
kubernetes.io/change-cause: update image from 1.21.6 to latest
...
hope this has helped you, bguess.
I experienced a similar situation and it was more an own mistake rather than a configuration issue with my K8s manifests.
In my Docker image creation workflow, I forgot to update the version of my Docker image. I have a GitHub Action workflow that pushes the image to DockerHub. By not updating the image version, I overwrote the current image with the latest changes in the application.
Then my kubectl rollout undo command was pulling the correct image but that image had the most recent changes in the application. In other words, image 1.1 was the same as 1.0. Running undo had no effect on the application state.
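A quick way to catch this kind of mix-up (a sketch; the pod name is a placeholder) is to compare the image digest a pod is actually running:
kubectl get pod <pod-name> -o jsonpath='{.status.containerStatuses[*].imageID}'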
Stupid mistake, but that was my experience, in case it helps someone.

How to remove a label from a kubernetes object just with "kubectl apply -f file.yaml"?

I'm playing around with GitOps and ArgoCD in Red Hat OpenShift. My goal is to switch a worker node to an infra node.
I want to do this with descriptive YAML Files, and NOT manually by using the command line (that's easy with kubectl label node ...)
In order to make the node an infra node, I want to add the label "infra" and remove the label "worker" from it. Before, the object looks like this (irrelevant labels omitted):
apiVersion: v1
kind: Node
metadata:
  labels:
    node-role.kubernetes.io/worker: ""
  name: node6.example.com
spec: {}
After applying a YAML File, it's supposed to look like that:
apiVersion: v1
kind: Node
metadata:
  labels:
    node-role.kubernetes.io/infra: ""
  name: node6.example.com
spec: {}
If I put the latter config in a file and do "kubectl apply -f ", the node has both the infra and worker labels. So adding a label or changing the value of a label is easy, but is there a way to remove a label from an object's metadata by applying a YAML file?
you can delete the label with
kubectl label node node6.example.com node-role.kubernetes.io/infra-
then you can run the kubectl apply again with the new label.
You will be up and running.
I would say it's not possible to do with kubectl apply; at least I tried and couldn't find any information about that.
As @Petr Kotas mentioned you can always use
kubectl label node node6.example.com node-role.kubernetes.io/infra-
But I see you're looking for something else
I want to do this with descriptive YAML Files, and NOT manually by using the command line (that's easy with kubectl label node ...)
So maybe the answer could be to use API clients, for example Python? I have found this example here, made by @Prafull Ladha
As already mentioned, that is the correct kubectl example to delete a label, but there is no mention of removing labels using API clients. If you want to remove a label using the API, then you need to provide a new body with the label name set to None and then patch that body to the node or pod. I am using the Kubernetes Python client API for example purposes:
from pprint import pprint
from kubernetes import client, config

# Load credentials from the local kubeconfig
config.load_kube_config()
client.configuration.debug = True

api_instance = client.CoreV1Api()

# Setting the label's value to None removes it when this body is patched onto the node
body = {
    "metadata": {
        "labels": {
            "label-name": None
        }
    }
}

api_response = api_instance.patch_node("minikube", body)
pprint(api_response)
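The same null-out-the-label trick also works with kubectl patch; a minimal sketch (node and label names are placeholders):
kubectl patch node minikube -p '{"metadata":{"labels":{"label-name":null}}}'
A strategic merge patch deletes a map key whose value is null, which is exactly what the Python body above relies on.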
Try setting the worker label to false:
node-role.kubernetes.io/worker: "false"
Worked for me on OpenShift 4.4.
Edit:
This doesn't work. What happened was:
I applied a YML file containing node-role.kubernetes.io/worker: "false"
An automated process then ran and deleted the node-role.kubernetes.io/worker label from the node (since the label was not specified in the YML the way the process would automatically apply it)
What's funny is that the automated process would not delete the label if it was empty instead of set to false.
I've pretty successfully changed a node label in my Kubernetes cluster (created using kubeadm) using kubectl replace and kubectl apply.
Required: if your node configuration was changed manually using an imperative command like kubectl label, you first need to fix the last-applied-configuration annotation using the following command (replace node2 with your node name):
kubectl get node node2 -o yaml | kubectl apply -f -
Note: it works in the same way with all types of Kubernetes objects (with slightly different consequences; always check the results).
Note 2: the --export argument for kubectl get is deprecated, and everything works well without it, but if you use it the last-applied-configuration annotation appears to be much shorter and easier to read.
Without applying the existing configuration first, the next kubectl apply command will ignore all fields that are not present in the last-applied-configuration annotation.
The following example illustrates that behavior:
kubectl get node node2 -o yaml | grep node-role
{"apiVersion":"v1","kind":"Node","metadata":{"annotations":{"flannel.alpha.coreos.com/backend-data":"{\"VtepMAC\":\"46:c6:d1:f0:6c:0a\"}","flannel.alpha.coreos.com/backend-type":"vxlan","flannel.alpha.coreos.com/kube-subnet-manager":"true","flannel.alpha.coreos.com/public-ip":"10.156.0.11","kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"creationTimestamp":null,
"labels":{
"beta.kubernetes.io/arch":"amd64",
"beta.kubernetes.io/os":"linux",
"kubernetes.io/arch":"amd64",
"kubernetes.io/hostname":"node2",
"kubernetes.io/os":"linux",
"node-role.kubernetes.io/worker":""}, # <--- important line: only worker label is present
"name":"node2","selfLink":"/api/v1/nodes/node2"},"spec":{"podCIDR":"10.244.2.0/24"},"status":{"daemonEndpoints":{"kubeletEndpoint":{"Port":0}},"nodeInfo":{"architecture":"","bootID":"","containerRuntimeVersion":"","kernelVersion":"","kubeProxyVersion":"","kubeletVersion":"","machineID":"","operatingSystem":"","osImage":"","systemUUID":""}}}
node-role.kubernetes.io/santa: ""
node-role.kubernetes.io/worker: ""
Let's check what happens with the node-role.kubernetes.io/santa label if I try to replace worker with infra and remove santa (only worker is present in the annotation):
# kubectl diff is used to compare the current online configuration with the configuration as it would be if applied
kubectl get node node2 -o yaml | sed 's#node-role.kubernetes.io/worker: ""#node-role.kubernetes.io/infra: ""#' | sed 's#node-role.kubernetes.io/santa: ""##'| kubectl diff -f -
diff -u -N /tmp/LIVE-380689040/v1.Node..node2 /tmp/MERGED-682760879/v1.Node..node2
--- /tmp/LIVE-380689040/v1.Node..node2 2020-04-08 17:20:18.108809972 +0000
+++ /tmp/MERGED-682760879/v1.Node..node2 2020-04-08 17:20:18.120809972 +0000
@@ -18,8 +18,8 @@
kubernetes.io/arch: amd64
kubernetes.io/hostname: node2
kubernetes.io/os: linux
+ node-role.kubernetes.io/infra: "" # <-- created as desired
node-role.kubernetes.io/santa: "" # <-- ignored, because the label isn't present in the last-applied-configuration annotation
- node-role.kubernetes.io/worker: "" # <-- removed as desired
name: node2
resourceVersion: "60973814"
selfLink: /api/v1/nodes/node2
exit status 1
After fixing the annotation (by running kubectl get node node2 -o yaml | kubectl apply -f -), kubectl apply works pretty well, replacing and removing labels:
kubectl get node node2 -o yaml | sed 's#node-role.kubernetes.io/worker: ""#node-role.kubernetes.io/infra: ""#' | sed 's#node-role.kubernetes.io/santa: ""##'| kubectl diff -f -
diff -u -N /tmp/LIVE-107488917/v1.Node..node2 /tmp/MERGED-924858096/v1.Node..node2
--- /tmp/LIVE-107488917/v1.Node..node2 2020-04-08 18:01:55.776699954 +0000
+++ /tmp/MERGED-924858096/v1.Node..node2 2020-04-08 18:01:55.792699954 +0000
@@ -18,8 +18,7 @@
kubernetes.io/arch: amd64
kubernetes.io/hostname: node2
kubernetes.io/os: linux
- node-role.kubernetes.io/santa: "" # <-- removed as desired
- node-role.kubernetes.io/worker: "" # <-- removed as desired, literally replaced with the following label
+ node-role.kubernetes.io/infra: "" # <-- created as desired
name: node2
resourceVersion: "60978298"
selfLink: /api/v1/nodes/node2
exit status 1
Here are a few more examples:
# Check the original label (the last filter removes the last-applied-configuration annotation line)
$ kubectl get node node2 -o yaml | grep node-role | grep -v apiVersion
node-role.kubernetes.io/infra: ""
# Replace the label "infra" with "worker" using kubectl replace syntax
$ kubectl get node node2 -o yaml | sed 's#node-role.kubernetes.io/infra: ""#node-role.kubernetes.io/worker: ""#' | kubectl replace -f -
node/node2 replaced
# check the new state of the label
$ kubectl get node node2 -o yaml | grep node-role | grep -v apiVersion
node-role.kubernetes.io/worker: ""
# label replaced -------^^^^^^
# Replace the label "worker" back to "infra" using kubectl apply syntax
$ kubectl get node node2 -o yaml | sed 's#node-role.kubernetes.io/worker: ""#node-role.kubernetes.io/infra: ""#' | kubectl apply -f -
node/node2 configured
# check the new state of the label
$ kubectl get node node2 -o yaml | grep node-role | grep -v apiVersion
node-role.kubernetes.io/infra: ""
# label replaced -------^^^^^
# Remove the label from the node ( for demonstration purpose)
$ kubectl get node node2 -o yaml | sed 's#node-role.kubernetes.io/infra: ""##' | kubectl apply -f -
node/node2 configured
# check the new state of the label
$ kubectl get node node2 -o yaml | grep node-role | grep -v apiVersion
# empty output
# label "infra" has been removed
You may see the following warning when you use kubectl apply -f on the resource created using imperative commands like kubectl create or kubectl expose for the first time:
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
In this case the last-applied-configuration annotation will be created with the content of the file used in the kubectl apply -f filename.yaml command. It may not contain all the parameters and labels that are present in the live object.
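To inspect what kubectl will diff against, you can print the annotation directly; a short sketch:
kubectl apply view-last-applied node/node2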

Adding --record=true on a deployment file

I'm new to Kubernetes and I wanted to know if there is a way I can add '--record=true' inside the deployment yaml file, so I do not have to type it on the command line!
I know it goes like this: kubectl apply -f deployfile.yml --record
I am asking this because we work on a team, and not everyone is using --record=true at the end of the command when deploying files to kubernetes!
Thank you in advance,
As far as I'm aware, there is no equivalent of the --record=true flag that you can add to a manifest.
The command that was used to start the Deployment is stored in the kubernetes.io/change-cause annotation. This is used for the rollout history, which is described here.
First, check the revisions of this Deployment:
kubectl rollout history deployment.v1.apps/nginx-deployment
The output is similar to this:
deployments "nginx-deployment"
REVISION CHANGE-CAUSE
1 kubectl apply --filename=https://k8s.io/examples/controllers/nginx-deployment.yaml --record=true
2 kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 --record=true
3 kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.91 --record=true
CHANGE-CAUSE is copied from the Deployment annotation kubernetes.io/change-cause to its revisions upon creation. You can specify the CHANGE-CAUSE message by:
Annotating the Deployment with kubectl annotate deployment.v1.apps/nginx-deployment kubernetes.io/change-cause="image updated to 1.9.1"
Appending the --record flag to save the kubectl command that is making changes to the resource.
Manually editing the manifest of the resource (see the sketch after this list).
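For the original question, the closest you can get inside the file itself is that last option: set the annotation in the manifest; a minimal sketch (the message text is an assumption):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  annotations:
    kubernetes.io/change-cause: "deployed from deployfile.yml"
Each kubectl apply of a changed manifest will then record that message as the revision's CHANGE-CAUSE.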
To see the details of each revision, run:
kubectl rollout history deployment.v1.apps/nginx-deployment --revision=2
The output is similar to this:
deployments "nginx-deployment" revision 2
Labels: app=nginx
pod-template-hash=1159050644
Annotations: kubernetes.io/change-cause=kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 --record=true
Containers:
nginx:
Image: nginx:1.9.1
Port: 80/TCP
QoS Tier:
cpu: BestEffort
memory: BestEffort
Environment Variables: <none>
No volumes.
For the command history I would use $ history or check the user's .bash_history:
$ tail /home/username/.bash_history
Create an alias in your bashrc or zshrc as below:
alias kubectl='kubectl --record' and then do kubectl apply -f deployfile.yml
or
alias kr='kubectl --record' and kr apply -f deployfile.yml

How to verify the rolling update?

I tried to automate the rolling update when configmap changes are made. But I am confused about how I can verify whether the rolling update was successful. I found the command
kubectl rollout status deployment test-app -n test
But I guess this is used when we are performing a rollback rather than a rolling update. What's the best way to know whether the rolling update was successful?
I think it is fine;
kubectl rollout status deployments/test-app -n test can be used to verify a rolling update as well.
As an additional step, you can run
kubectl rollout history deployments/test-app -n test as well.
If you need further clarity, run
kubectl get deployments -o wide and check the READY and IMAGE fields.
ConfigMap generation and rolling update
I tried to automate the rolling update when the configmap changes are made
It is good practice to create new resources instead of mutating them (updating in place). kubectl kustomize supports this workflow:
The recommended way to change a deployment's configuration is to
create a new configMap with a new name,
patch the deployment, modifying the name value of the appropriate configMapKeyRef field.
You can deploy using Kustomize to automatically create a new ConfigMap every time you want to change content by using configMapGenerator. The old ones can be garbage collected when not used anymore.
With Kustomize's configMapGenerator you get a generated name.
Example
kind: ConfigMap
metadata:
  name: example-configmap-2-g2hdhfc6tk
and this name gets reflected into your Deployment, which then triggers a new rolling update, but with a new ConfigMap, leaving the old one unchanged.
Deploy both Deployment and ConfigMap using
kubectl apply -k <kustomization_directory>
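A minimal kustomization.yaml sketch of such a setup (the file name and the literal key/value are assumptions):
# kustomization.yaml
resources:
- deployment.yaml
configMapGenerator:
- name: example-configmap-2
  literals:
  - mykey=myvalue
Every change to the generated ConfigMap's content produces a new name suffix, and kustomize rewrites the reference in the Deployment accordingly.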
When handling change this way, you are following the practices called Immutable Infrastructure.
Verify deployment
To verify a successful deployment, you are right. You should use:
kubectl rollout status deployment test-app -n test
and when leaving the old ConfigMap unchanged but creating a new ConfigMap for the new ReplicaSet it is clear which ConfigMap belongs to which ReplicaSet.
A rollback will also be easier to understand, since the old and new ReplicaSets each use their own ConfigMap (when the content changes).
Your command is fine to check if an update went through.
Now, a ConfigMap change eventually gets applied to the Pod. There is no need to do a rolling update for that. Depending on what you have passed in the ConfigMap, you probably could have just restarted the service, and that's it.
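If a restart is all you need and your kubectl is 1.15 or newer, a one-line sketch would be:
kubectl rollout restart deployment test-app -n test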
What's the best way to know if the rolling update is successful or not?
To check whether your rolling update was executed correctly, your command works fine; you could also check whether your replicas are running.
I tried to automate the rolling update when the configmap changes are made.
You could use Reloader to perform your rolling updates automatically when a configmap/secret changed.
Reloader can watch changes in ConfigMap and Secret and do rolling upgrades on Pods with their associated DeploymentConfigs, Deployments, Daemonsets and Statefulsets.
Let's explore how Reloader works in a practical way, using an nginx deployment as an example.
First install Reloader in your cluster:
kubectl apply -f https://raw.githubusercontent.com/stakater/Reloader/master/deployments/kubernetes/reloader.yaml
You will see a new container named reloader-reloader-...; this container is responsible for 'watching' your deployments and performing the rolling updates when necessary.
Create a configMap with your values; in my case I'll create my-config and set a variable called myvar.value to the value hello:
kubectl create configmap my-config --from-literal=myvar.value=hello
Now, let's create a simple deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-example
  labels:
    app: nginx
  annotations:
    configmap.reloader.stakater.com/reload: my-config
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
        env:
        - name: MYVAR
          valueFrom:
            configMapKeyRef:
              name: my-config
              key: myvar.value
In this example, the nginx image is used, taking the value from my configMap and setting it in a variable called MYVAR.
For Reloader to work, you must specify the name of your configMap in the annotations; in the example above it is:
metadata:
  annotations:
    configmap.reloader.stakater.com/reload: my-config
Apply the deployment example with kubectl apply -f mydeployment-example.yaml and check the variable from the new pod.
$ kubectl exec -it $(kubectl get pods -l=app=nginx --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}') env | grep MYVAR
MYVAR=hello
Now let's change the value of the variable:
Edit the configmap with kubectl edit configmap my-config, change the value of myvar.value to hi, save and close.
After saving, Reloader will recreate your container and pick up the new value from the configmap.
To check if the rolling update was executed successfully:
kubectl rollout status deployment deployment-example
Check the new value:
$ kubectl exec -it $(kubectl get pods -l=app=nginx --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}') env | grep MYVAR
MYVAR=hi
That's all!
Check the Reloader GitHub page to see more options.
I hope it helps!
Mounted ConfigMaps are updated automatically (reference).
To check the rollout history, use the steps below:
ubuntu@dlv-k8s-cluster-master:~$ kubectl rollout history deployment busybox
REVISION CHANGE-CAUSE
1 kubectl create --filename=busybox.yaml --record=true
Update the image on the deployment as below:
$ kubectl set image deployment.apps/busybox *=busybox --record
deployment.apps/busybox image updated
Check the new rollout history, which will list the new change cause for the rollout:
$ kubectl rollout history deployment busybox
REVISION CHANGE-CAUSE
1 kubectl create --filename=busybox.yaml --record=true
2 kubectl set image deployment.apps/busybox *=busybox --record=true
To roll back the deployment, use the undo flag with the rollout command:
ubuntu@dlv-k8s-cluster-master:~$ kubectl rollout undo deployment busybox
deployment.apps/busybox rolled back
ubuntu@dlv-k8s-cluster-master:~$ kubectl rollout history deployment busybox
REVISION CHANGE-CAUSE
2 kubectl set image deployment.apps/busybox *=busybox --record=true
3 kubectl create --filename=busybox.yaml --record=true

How to re-deploy (rolling update) kubernetes deployment if the files don't change

Say we have this in a deployment.yml
containers:
- name: my_container
  imagePullPolicy: Always
  image: my_image:latest
and so redeployment might take the form of:
kubectl set image deployment/my-deployment my_container=my_image
which I stole from here:
https://stackoverflow.com/a/40368520/1223975
my question is - is this the right way to do a rolling update? Will the above always make sure the deployment gets the new image? My deployment.yml might never change - it might just be my_image:latest forever - so how do I do rolling updates?
I don't expect this to be an accepted answer, but I wanted to record it for the future, as there is a command to do this in Kubernetes 1.15.
PR https://github.com/kubernetes/kubernetes/pull/76062 added a command called kubectl rollout restart. It is part of Kubernetes 1.15. In the future you will be able to do:
kubectl rollout restart deployment/my-deployment
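On older clusters, a rough equivalent is to patch a throwaway annotation into the pod template so that the Deployment rolls; a sketch (the annotation name is an arbitrary choice):
kubectl patch deployment my-deployment -p \
  "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"redeploy-timestamp\":\"$(date +%s)\"}}}}}"
Any change to the pod template, including an annotation, triggers a new rolling update.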