Kubernetes - update existing configmap from file

1. I created a simple text file on my local machine.
2. I created a ConfigMap out of that test file:
kubectl create configmap test-configm --from-file=test-file.txt
3. I added the volumeMounts and volume to my deployment and verified the file is in my pods.
Now I want to modify test-file.txt on my local machine and then update the ConfigMap I created in step 2 so that all my pods get the new version of that file. How can I accomplish this?
Thanks!

Per https://kubernetes.io/docs/concepts/configuration/configmap/, mounted ConfigMaps are updated automatically (the kubelet refreshes them on its periodic sync, so expect a short delay). You simply have to update the ConfigMap using a dry-run of the imperative command piped into kubectl apply, like this:
kubectl create configmap test-configm --from-file=test-file.txt --dry-run=client -o yaml | kubectl apply -f -
(On kubectl versions older than 1.18, use the plain --dry-run flag instead of --dry-run=client.)
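For reference, the volume wiring described in the question would look roughly like this (a minimal sketch; the deployment name, container image and mount path are hypothetical):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                    # hypothetical deployment name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: nginx            # placeholder image
          volumeMounts:
            - name: config-volume
              mountPath: /etc/config      # hypothetical mount path
      volumes:
        - name: config-volume
          configMap:
            name: test-configm            # the ConfigMap from the question
Once the ConfigMap is updated with the dry-run/apply pipeline above, the kubelet refreshes the mounted file on its next sync, so the new content shows up in the pods after a short delay without a restart.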

Related

Reapply updated configuration to a statefulset, using Helm

I have a rather peculiar use case. Specifically, prior to the deployment of my statefulset I am deploying a ConfigMap which contains an environment variable setting (namely RECREATE_DATADIR) which instructs the pod's container to create a new data structure on the file system.
However, during the typical lifetime of the container the data structure should NOT be recreated. Hence, right after the pod is successfully running, I am changing the ConfigMap and then reapply it. Hence - if the pod ever fails, it won't recreate the data directory structure when it respawns.
How can I achieve this same result using Helm charts?
You can create a Job as part of your Helm chart with the post-install Helm hook. The Job runs under a service account that has ConfigMap edit permissions, uses a kubectl image (bitnami/kubectl, for example), and patches the ConfigMap value to "false" using kubectl commands.
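A minimal sketch of such a hook Job, assuming the ConfigMap is named app-config and a ServiceAccount configmap-patcher with RBAC permission to patch ConfigMaps already exists (both names are hypothetical):
apiVersion: batch/v1
kind: Job
metadata:
  name: disable-recreate-datadir
  annotations:
    "helm.sh/hook": post-install
    "helm.sh/hook-delete-policy": hook-succeeded   # clean up the Job once it succeeds
spec:
  template:
    spec:
      serviceAccountName: configmap-patcher        # needs RBAC to patch ConfigMaps
      restartPolicy: Never
      containers:
        - name: patch
          image: bitnami/kubectl:latest
          command:
            - kubectl
            - patch
            - configmap
            - app-config
            - --type=merge
            - -p
            - '{"data":{"RECREATE_DATADIR":"false"}}'
Because the hook runs only after the release's resources are installed, the pod starts with RECREATE_DATADIR set, and the value is flipped to "false" before any respawn can pick it up.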

How do I undo a kubectl create deploy?

I was setting up an nginx cluster on Google Cloud, and I entered a wrong image name; instead of entering:
kubectl create deploy nginx --image=nginx:1.17.10
I entered:
kubectl create deploy nginx --image=1.17.10
and eventually, after running kubectl get pods, it showed ImagePullBackOff as the status of the pod.
When I tried running the correct create deploy command above, it said "nginx" already exists.
When I tried kubectl delete --all pods, the pod was recreated with a new ID but still had the same status, and I still couldn't run the correct kubectl create deploy command above. Now I'm stuck.
How can I undo it?
You need to delete the deployment:
kubectl delete deploy nginx
Otherwise Kubernetes will recreate the pod every time it is deleted or shut down.
You can see all your deployments with
kubectl get deploy
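The output looks roughly like this (the values are illustrative); a failing image pull shows up as a deployment that never becomes ready:
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   0/1     1            0           5m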
Edit the deployment via kubectl edit deployment DEPLOYMENT_NAME and change the image name.
Or
edit the manifest file, correct the image name, and run kubectl apply -f <manifest>.yaml
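For reference, a sketch of the manifest that kubectl create deploy typically generates; only the image line needs correcting:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.17.10   # was the bare tag 1.17.10 with no image name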
First of all, your k8s cluster is trying to pull the image 1.17.10 from the public Docker registry. Since no image exists with that name, the pull fails. And when you tried to delete your pods, they were recreated with the same image name because your deployment still exists. For this reason you need to delete the deployment rather than the pods; otherwise the deployment will automatically recreate any deleted pod.
You can check what the error in your deployment was with this command:
kubectl describe deploy nginx
For you the command will be kubectl delete deploy -n <Namespace_name> <deployment_name>. As you created your deployment in the default namespace, you don't need to mention the namespace; it will be the default namespace automatically.
you can delete deployment with this command:
kubectl delete deploy nginx

Kubernetes - I changed the deployment name, then redeployed to the environment; how do I clean up the old deployment and pods with the old name?

We had a requirement to change the pod/deployment name. Now when we deploy, we have two deployments, with three pods each, under the old and the new name.
So far I have been deleting the old deployments manually.
Do I need to manually delete the old deployment and pods, or is there a better method?
To delete deployment use
$ kubectl delete deploy/old_deployment_name
This will delete deployment, including pods and rs, if you had them.
And don't make this mistake a second time :) @Kamol is right - the best way of managing resources is to change the config file (e.g. your deployment) and re-apply it with
kubectl apply -f deployment.yaml
I think we can also remove everything if it was installed with the apply command:
kubectl apply -f deployment.yaml
You can delete it with:
kubectl delete -f deployment.yaml
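To illustrate, with a hypothetical rename from web-old to web-new, a deploy-and-cleanup cycle would look like:
kubectl apply -f deployment.yaml   # creates the new web-new deployment
kubectl get deploy                 # both web-old and web-new are listed at this point
kubectl delete deploy web-old      # removes the old deployment, its ReplicaSet and its pods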

What is the proper way of removing a deployment from kubernetes cleanly

I have an example deployment running on a kubernetes cluster that also exposes a service and has a persistent volume bound by persistent volume claim.
I would expect that running:
kubectl delete deployment 'deployment_name'
will delete everything, but after running the above the service and storage still exist, and I still have to manually delete the service and the persistent volume for the persistent volume claim to be released.
Isn't there a single command to remove everything cleanly?
Thank you.
If you are creating the deployment, service and PV in 3 separate YAML files, you will have to remove them one by one.
However, if you have all three in the same YAML file, you can delete them all at once with:
kubectl delete -f file.yaml
If you have defined the deployment, PV, PVC and service in a single file, say file.yaml, then you can delete all of them using a single command:
kubectl delete -f file.yaml
This will delete all the objects defined in that yaml file.
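A minimal sketch of such a combined file (all names and sizes are hypothetical), with the three objects separated by --- so that a single apply or delete handles all of them:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: app
          image: nginx:1.17.10
          volumeMounts:
            - name: data
              mountPath: /usr/share/nginx/html
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: example-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: example-svc
spec:
  selector:
    app: example-app
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
With this layout, kubectl apply -f file.yaml creates all three objects and kubectl delete -f file.yaml removes them again.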

Kubernetes rolling deployment using the yaml file

I have deployed an application into Kubernetes using the following command.
kubectl apply -f deployment.yaml -n <NAMESPACE>
I have my deployment content in the deployment yaml file.
This is working fine. Now, I have updated a few things in the deployment.yaml file and would like to update the deployment.
Option 1:- Delete and deploy again
kubectl delete -f deployment.yaml -n <NAMESPACE>
kubectl apply -f deployment.yaml -n <NAMESPACE>
Option 2:- Use set to update changes
kubectl set image deployment/nginx-deployment nginx=nginx:1.91
I don't want to use this approach as I am keeping my deployment.yaml file in GitHub.
Option 3:- Using edit command
kubectl edit deployment/nginx-deployment
I don't want to use the above 3 options.
Is there any way to update the deployment using the file itself?
Like,
kubectl update deployment.yaml -n NAMESPACE
This way, I will make sure that I will always have the latest deployment file in my GitHub repo.
As @Daisy Shipton has said, what you want to do comes down to a single command: kubectl apply -f deployment.yaml.
I will also add that I don't think it's correct to use Option 2, updating the image used by the Pod with an imperative command. If the source of truth is the Deployment file on your GitHub, you should simply update that file by modifying the image that is used by your Pod's container there.
Otherwise, the next time you want to update your Deployment object, if you forget to modify the .yaml file first, applying it will set the Pods back to the previous nginx image.
So there should certainly be some restraint in using imperative commands to update the specification of any Kubernetes object.
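To illustrate the declarative flow (the deployment and namespace names follow the question; the rollout check is an optional extra step):
# 1. Edit the image tag in deployment.yaml (the file in your GitHub repo)
# 2. Re-apply the same file:
kubectl apply -f deployment.yaml -n <NAMESPACE>
# 3. Optionally watch the rolling update complete:
kubectl rollout status deployment/nginx-deployment -n <NAMESPACE>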