Kubectl update imagePullPolicy - kubernetes

How do we update just the imagePullPolicy for certain deployments using kubectl? The image tag has changed, but we don't require a restart. We need to update the existing deployments with --image-pull-policy set to IfNotPresent.
Note: we don't have the complete YAML or JSON for the deployments, hence the need to do it via kubectl.

Use
kubectl edit deployment <deployment_name> -n <namespace>
Then you will be able to edit the imagePullPolicy in the pod template.
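If you prefer a non-interactive one-liner, kubectl patch can target just that field. This is a minimal sketch, assuming the container you want to change is the first one in the pod template (adjust the index if you have multiple containers):
kubectl patch deployment <deployment_name> -n <namespace> --type='json' \
  -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/imagePullPolicy", "value": "IfNotPresent"}]'
Note that any change to the pod template, including imagePullPolicy, triggers a rolling update of the Deployment's pods.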

Related

kubectl - running rollout restart only when configmap changes

I have a DevOps pipeline divided into three steps:
kubectl apply -f configmap.yml
kubectl apply -f deployment.yml
kubectl rollout restart deployment/test-service
I think that when the configmap.yml changes the rollout restart step is useful. But when only the deployment.yml changes, I'm worried that the "extra" rollout restart step is not useful and should be avoided.
Should I execute the rollout restart only when configmap.yml changes, or should I not worry about it?
This isn't a direct answer, but it ended up being too long for a comment and I think it's relevant. If you were to apply your manifests using kustomize (aka kubectl apply -k), then you get the following behavior:
ConfigMaps are generated with a content-based hash appended to their name
Kustomize substitutes the generated name into your Deployment
This means the Deployment is only modified when the content of the ConfigMap changes, causing an implicit re-deploy of the pods managed by the Deployment.
This largely gets you the behavior you want, but it would require some changes to your deployment pipeline.
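A minimal sketch of what that could look like (file and ConfigMap names here are hypothetical; the Deployment references the ConfigMap by the base name test-config):
# kustomization.yaml
resources:
  - deployment.yml
configMapGenerator:
  - name: test-config
    files:
      - application.properties
Applying with kubectl apply -k . then generates something like test-config-7g2bk9c6f4 and rewrites the reference inside the Deployment, so a content change means a new name and therefore a rollout.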
Best practice is to annotate the deployment's pod template with a hash of the configmap. If the content of the configmap changes, the annotation changes and all pods are replaced in a rolling update. If the configmap doesn't change, nothing happens.
E.g. with helm:
annotations:
  checksum/config: {{ include (print .Template.BasePath "/configmap.yaml") . | sha256sum }}
taken from the Grafana example.
If you're not using helm you can have a script create the hash in your pipeline.
With that in place, the rollout restart step is no longer required: pods restart whenever the configmap and/or the deployment changes, and otherwise nothing happens.
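If you are not using Helm, a rough sketch of that scripted variant could look like this in the pipeline (the deployment name and annotation key are taken from the examples above; everything else is illustrative):
# compute a checksum of the configmap manifest
CONFIG_HASH=$(sha256sum configmap.yml | awk '{print $1}')
kubectl apply -f configmap.yml
kubectl apply -f deployment.yml
# stamp the pod template; when the hash is unchanged this patch is a no-op and nothing restarts
kubectl patch deployment test-service -p \
  "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"checksum/config\":\"${CONFIG_HASH}\"}}}}}"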

Kubernetes pod Rollback and Restart

The Deployment resource object is still not supported in our cluster and not enabled.
We are using a Pod resource object YAML file, something like the below:
apiVersion: v1
kind: Pod
metadata:
  name: sample-test
  namespace: default
spec:
  automountServiceAccountToken: false
  containers:
I have explored the PATCH and PUT REST APIs for the Pod (kubectl patch and replace); they update to the new image version and the pod restarts.
I need help with the following:
When the image version is the same, the patch makes no change and the pod will not restart. How can I achieve a pod restart? Is there an API for this, or any alternative approach? My pod also references a configmap and a secret; after I make changes to the secret, I want to restart the pod so that it picks up the updated value.
Suppose a patch is applied with a new container image and the pod ends up in a failed status; I want to roll back to the previous version. How can I achieve this with a standalone pod, without using a Deployment? Is there any alternative approach?
Your scenarios can be handled like this:
When the image version is the same, it will not update and the pod will not restart. How can I achieve a pod restart? Is there an API for this, or any alternative approach? My pod also references a configmap and secret; after I make changes to the secret, I want to restart the pod so that it picks up the updated value.
Create a new secret/configmap each time and update the pod yaml to use the new configmap/secret rather than the old name.
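For example (names and values here are only illustrative):
# create a versioned secret instead of updating the old one in place
kubectl create secret generic app-secret-v2 --from-literal=password=<new-value>
# in the pod yaml, change the secret reference to the new name, e.g.
#   secretKeyRef:
#     name: app-secret-v2
# then delete and re-apply the pod so it picks up the new reference
kubectl delete pod sample-test
kubectl apply -f pod.yaml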
Suppose a patch is applied with a new container image and it fails (status is Failed); I want to roll back to the previous version. How can I achieve this with a standalone pod without using a Deployment? Is there any alternative approach?
Before you do a Pod update, get the current Pod yaml using kubectl, like this:
kubectl get pod <pod-name> -o yaml -n <namespace>
After getting the yaml, generate the new pod yaml and apply it. In case of failure, clean up the new resources you created (configmaps and secrets) and apply the older version of the pod to achieve a rollback.
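A rough sketch of that flow, using the pod from the question (file names are placeholders, and you may need to strip status and other server-managed fields from the saved yaml before re-applying it):
# save the currently running pod spec as a rollback point
kubectl get pod sample-test -o yaml -n default > pod-backup.yaml
# apply the updated pod spec (new image / new configmap / new secret)
kubectl replace --force -f pod-new.yaml -n default
# if the new pod ends up in a failed state, roll back
kubectl get pod sample-test -n default        # check the STATUS column
kubectl replace --force -f pod-backup.yaml -n default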

How do I undo a kubectl create deploy?

I was setting up an nginx cluster on Google Cloud, and I entered a wrong image name; instead of entering:
kubectl create deploy nginx --image=nginx:1.17.10
I entered:
kubectl create deploy nginx --image=1.17.10
Eventually, after running kubectl get pods, it showed ImagePullBackOff as the status for the pod.
When I tried running the correct create deploy command above, it said "nginx" already exists.
When I tried kubectl delete --all pods, the pod was recreated with a new ID but still had the same status, and it still wouldn't let me run the correct kubectl create deploy command above. Now I'm stuck.
How can I undo it?
You need to delete the deployment:
kubectl delete deploy nginx
Otherwise the Deployment will recreate the pod every time it is deleted.
You can see all your deployments with
kubectl get deploy
Edit the deployment via kubectl edit deployment DEPLOYMENT_NAME and change the image name.
Or
Edit the manifest file with the correct image name and do a kubectl apply -f on the YAML file.
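For this particular case, a minimal sequence would be (image tag taken from the question):
kubectl delete deploy nginx
kubectl create deploy nginx --image=nginx:1.17.10
kubectl get pods   # the new pod should now reach Running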
First of all, your k8s cluster is trying to pull the image 1.17.10 from the public Docker registry. No image exists with that name, which is why you get the error. And when you delete the pods, the Deployment still exists, so they are created again with the same image name. For this reason you need to delete the Deployment rather than the pods; otherwise the Deployment will automatically recreate any deleted pod.
You can check what the error in your deployment was with this command:
kubectl describe deploy nginx
For you the command will be kubectl delete deploy -n <namespace_name> <deployment_name>. As you created your deployment in the default namespace, you don't need to mention the namespace; it will automatically default to it.
You can delete the deployment with this command:
kubectl delete deploy nginx

k3s cleanup of HelmChart?

I have followed the instructions from this blog post to set up a k3s cluster on a couple of Raspberry Pi 4s.
I'm now trying to get my hands dirty with Traefik as the front end, but I'm having issues with the way it has been deployed, as a 'HelmChart' I think.
From the k3s docs
It is also possible to deploy Helm charts. k3s supports a CRD
controller for installing charts. A YAML file specification can look
as following (example taken from
/var/lib/rancher/k3s/server/manifests/traefik.yaml):
So I have been starting up k3s with the --no-deploy traefik option so that I can add it manually with my own settings. I therefore apply a yaml like this:
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: traefik
  namespace: kube-system
spec:
  chart: https://%{KUBERNETES_API}%/static/charts/traefik-1.64.0.tgz
  set:
    rbac.enabled: "true"
    ssl.enabled: "true"
    kubernetes.ingressEndpoint.useDefaultPublishedService: "true"
  dashboard:
    enabled: true
    domain: "traefik.k3s1.local"
But when trying to iterate over settings to get it working as I want, I'm having trouble tearing it down. If I try kubectl delete -f on this yaml it just hangs indefinitely. And I can't seem to find a clean way to delete all the resources manually either.
I've now been resorting to just reinstalling my entire cluster over and over, because I can't seem to clean up properly.
Is there a way to delete all the resources created by a chart like this without the helm cli (which I don't even have)?
Are you sure that kubectl delete -f is hanging?
I had the same issue as you and it seemed like kubectl delete -f was hanging, but it was really just taking a long time.
As far as I can tell, when you issue the kubectl delete -f, a pod in the kube-system namespace with a name of helm-delete-* should spin up and try to delete the resources deployed via Helm. You can get the full name of that pod by running kubectl -n kube-system get pods and finding the one named helm-delete-<name of yaml>-<id>. Then use the pod name to look at the logs using kubectl -n kube-system logs helm-delete-<name of yaml>-<id>.
An example of what I did was:
kubectl delete -f jenkins.yaml # seems to hang
kubectl -n kube-system get pods # look at pods in kube-system namespace
kubectl -n kube-system logs helm-delete-jenkins-wkjct # look at the delete logs
I see two options here:
Use the --now flag to delete the resources in your yaml file with minimal delay.
Use the --grace-period=0 --force flags to force delete the resources (see the example commands below).
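For example, with the traefik.yaml from the question:
kubectl delete -f traefik.yaml --now
# or, if it is still stuck:
kubectl delete -f traefik.yaml --grace-period=0 --force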
There are other options but you'll need Helm CLI for them.
Please let me know if that helped.

Kubernetes rolling deployment using the yaml file

I have deployed an application into Kubernetes using the following command.
kubectl apply -f deployment.yaml -n <NAMESPACE>
I have my deployment content in the deployment yaml file.
This is working fine. Now I have updated a few things in the deployment.yaml file and hence would like to update the deployment.
Option 1:- Delete and deploy again
kubectl delete -f deployment.yaml -n <NAMESPACE>
kubectl apply -f deployment.yaml -n <NAMESPACE>
Option 2:- Use set to update changes
kubectl set image deployment/nginx-deployment nginx=nginx:1.91
I don't want to use this approach, as I keep my deployment.yaml file in GitHub.
Option 3:- Using edit command
kubectl edit deployment/nginx-deployment
I don't want to use the above 3 options.
Is there any way to update the deployment using the file itself?
Like,
kubectl update deployment.yaml -n NAMESPACE
This way, I will make sure that I will always have the latest deployment file in my GitHub repo.
As Daisy Shipton has said, what you want to do can be done with a single command: kubectl apply -f deployment.yaml.
I will also add that I don't think it's right to use Option 2, updating the image used by the Pod with an imperative command. If the source of truth is the Deployment file in your GitHub repo, you should simply update that file, changing the image used by your Pod's container there.
Otherwise, the next time you want to update your Deployment object, if you forget to modify the .yaml file first, applying it will set the Pods back to the previous Nginx image.
So be cautious about using imperative commands to update the specification of any Kubernetes object.
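To illustrate the declarative flow with the names used in this question (a sketch, not the only way to do it): edit deployment.yaml in your GitHub repo, for example bumping the image tag, then re-apply the same file; kubectl works out what changed and rolls it out.
kubectl apply -f deployment.yaml -n <NAMESPACE>
# optionally watch the rolling update complete
kubectl rollout status deployment/nginx-deployment -n <NAMESPACE>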