I have a running GKE cluster with CockroachDB active. It's been running for quite a while and I don't want to reinitialize it from scratch; it uses the (almost) standard CockroachDB-supplied YAML file to start. I need to change a switch in the exec line to modify the logging level. Currently it's set to the below, but that is logging all information messages as well as errors:
exec /cockroach/cockroach start --logtostderr --insecure --advertise-host $(hostname -f) --http-host 0.0.0.0 --join cockroachdb-0.cockroachdb,cockroachdb-1.cockroachdb,cockroachdb-2.cockroachdb --cache 25% --max-sql-memory 25%
How do I do this without completely stopping the DB?
Kubernetes allows you to update StatefulSets in a rolling manner, such that only one pod is brought down at a time.
The simplest way to make changes is to run kubectl edit statefulset cockroachdb. This will open up a text editor in which you can make the desired change to the command, then save and exit. After that, Kubernetes should handle replacing the pods one-by-one with new pods that use the new command.
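For example, a minimal sketch of that flow (assuming the standard manifest's app=cockroachdb label, and a CockroachDB version whose --logtostderr flag accepts a severity threshold such as --logtostderr=WARNING; check cockroach start --help for your version):
kubectl edit statefulset cockroachdb
# in the editor, change the exec line, e.g. --logtostderr  ->  --logtostderr=WARNING, then save and exit
# watch the rolling update replace the pods one at a time
kubectl rollout status statefulset cockroachdb
kubectl get pods -l app=cockroachdb -w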
For more information:
https://www.cockroachlabs.com/docs/stable/orchestrate-cockroachdb-with-kubernetes.html#step-10-upgrade-the-cluster
https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#updating-statefulsets
https://kubernetes.io/docs/concepts/cluster-administration/manage-deployment/#in-place-updates-of-resources
Related
I'm running a GKE cluster and there is a deployment that uses an image which I push to Container Registry on GCP. The issue is that even though I build the image and push it with the latest tag, the deployment keeps creating new pods with the old, cached one. Is there a way to update it without re-deploying (i.e. without destroying it first)?
There is a known issue in Kubernetes where, even if you change a ConfigMap, the old config remains; you can either redeploy or work around it with
kubectl patch deployment $deployment -n $ns -p \
"{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}"
Is there something similar for cached images?
I think you're looking for kubectl set or kubectl patch, which are described in the Kubernetes documentation.
To update the image of a deployment you can use kubectl set image:
kubectl set image deployment/name_of_deployment name_of_container=name_of_image:tag
To update the image of a single pod you can use kubectl patch:
kubectl patch pod name_of_pod -p '{"spec":{"containers":[{"name":"name_of_container_from_yaml","image":"name_of_image"}]}}'
You can always use kubectl edit, which lets you directly edit any API resource you can retrieve via the command-line tool.
kubectl edit deployment name_of_deployment
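After kubectl set image or kubectl edit, you can watch the rollout and roll back if the new image misbehaves (using the same placeholder deployment name as above):
kubectl rollout status deployment/name_of_deployment
kubectl rollout undo deployment/name_of_deployment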
Let me know if you have any more questions.
1) You should change the way you think about this. Destroying a pod is not bad; application downtime is what is bad. You should always plan your deployments so that they can tolerate the death of one pod. Use multiple replicas for stateless apps and use clustering for stateful apps. Use Kubernetes rolling updates for any changes to your deployments. Rolling updates have several extremely important settings that directly influence the uptime of your apps; read about them carefully.
2) The reason why Kubernetes launches the old image is that by default it uses imagePullPolicy: IfNotPresent. Use imagePullPolicy: Always and it will always try to pull the latest version on redeploy.
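A minimal sketch of setting that without editing the YAML by hand, with placeholder deployment, container, and image names (changing the pod template also triggers a new rollout):
kubectl patch deployment name_of_deployment -p '{"spec":{"template":{"spec":{"containers":[{"name":"name_of_container","image":"gcr.io/your-project/your-image:latest","imagePullPolicy":"Always"}]}}}}'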
I am debugging certain behavior in my application pods that I am launching on a K8s cluster. To do that, I am increasing logging verbosity by adding the --v=N flag to the kubectl create deployment command.
My question is: how can I configure increased verbosity globally so that all pods start reporting at increased verbosity, including pods in the kube-system namespace?
I would prefer to do it without restarting the K8s cluster, but if there is no other way I can restart.
For your own applications there is nothing global, because verbosity is not something that has a cluster-wide meaning. You would have to add the appropriate config file settings, env vars, or CLI options for whatever software you are using.
For Kubernetes itself, you can turn up the logging on the kubelet command line, but the defaults are already pretty verbose, so I'm not sure you really want to do that unless you're developing changes for Kubernetes.
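If your nodes are ones you can SSH into and were set up with kubeadm, a minimal sketch of turning up kubelet verbosity (the file location and the KUBELET_EXTRA_ARGS convention vary by distro and are assumptions here; managed node pools generally do not allow this):
echo 'KUBELET_EXTRA_ARGS="--v=4"' | sudo tee /etc/default/kubelet
sudo systemctl restart kubelet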
Is there any way I can exec into the container, then edit some code (e.g. add some logging, edit some configuration file, etc.) and restart the container to see what happens?
I tried to search for this but found nothing helpful.
The point is, I want to do a quick debug, not to do a full cluster deployment.
Some programs (e.g. nginx) support configuration reload without restarting their process; with these you can just kubectl exec, change the config, and send a signal to the master process (e.g. kubectl exec <nginx_pod> -- kill -HUP 1). It is a feature of the software though, so many programs will not support it.
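A quick sketch of that nginx case, with placeholder pod and file names (assuming the config lives at the default path inside the container):
kubectl cp ./nginx.conf <nginx_pod>:/etc/nginx/nginx.conf
kubectl exec <nginx_pod> -- nginx -t
kubectl exec <nginx_pod> -- nginx -s reload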
Containers are immutable by design, so they restart with a clean state each time. That said, while there is no simple supported way of doing this, there are hackish ways to achieve it.
One I can think of involves modifying the image on the node that will then restart the container. If you can ssh into the node and access docker directly, you can identify the container with a modified file and commit these changes with docker commit under the same tag. At that point your local container with that tag has your changes baked in so if you restart it (not reschedule, as it could start on different node), it will come up with your changes (assuming you do not use pullPolicy: always).
Again, not the way it's meant to be used, but achievable.
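A rough sketch of that flow, assuming SSH access to the node and a Docker container runtime (placeholders throughout):
# on the node that runs the pod
docker ps | grep <pod-name>                    # find the container ID
docker exec -it <container-id> sh              # make your edits inside the container
docker commit <container-id> <image>:<tag>     # bake the edits into the locally cached image under the same tag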
Any changes to the local container file system will be lost if you restart the pod. You would need to work out whether the application stack you are using can perform an internal restart without actually exiting.
What language/application stack are you using?
You should at least consider a hostPath volume, in order to share local files on your host with your Kubernetes instance and be able to do that kind of test.
After that, it is up to your application running within your pod to detect the file change and restart if needed (i.e. this is not specific to Kubernetes at all).
You could put any configuration in a ConfigMap and then just apply that, obviously assuming that whatever reads the ConfigMap will re-read it.
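A minimal sketch, assuming the app mounts a ConfigMap named app-config as a file and re-reads it (all names are placeholders). Note that ConfigMaps consumed as environment variables are not refreshed; only mounted files are, and even those update only after a short kubelet sync delay:
kubectl create configmap app-config --from-file=config.yaml --dry-run=client -o yaml | kubectl apply -f -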
I have faced the same issue in my container as well.
I have done the steps below in the Kubernetes container, and it worked.
Logged into the pod, e.g.:
kubectl exec --stdin --tty nginx-6799fc88d8-mlvqx -- /bin/bash
Once I was logged in to the application pod, I ran the commands below:
#apt-get update
#apt-get install vim
Now I am able to use the vim editor in the Kubernetes container.
I would like to suspend the main process in a docker container running in a kubernetes pod. I have attempted to do this by running
kubectl exec <pod-name> -c <container-name> kill -STOP 1
but the signal will not stop the container. Investigating other approaches, it looks like docker stop --signal=SIGSTOP or docker pause might work. However, as far as I know, kubectl exec always runs in the context of a container, and these commands would need to be run in the pod outside the context of the container. Does kubectl's interface allow for anything like this? Might I achieve this behavior through a call to the underlying kubernetes API?
You could scale the deployment's replicas to 0, which would set the number of running pods to 0. This isn't quite a pause, but it does stop the deployment until you set the replica count back above 0.
kubectl scale --replicas=0 deployment/<deployment-name> --namespace=<namespace>
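To resume, scale the deployment back up to its previous replica count, for example:
kubectl scale --replicas=1 deployment/<deployment-name> --namespace=<namespace>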
So Kubernetes does not support suspending pods, because that is VM-style behavior; since starting a new one is cheap, it just schedules a new pod in case of failure. In effect, your pods should be stateless, and any application that needs to store state should have a persistent volume mounted inside the pod.
The simple mechanics (and general behavior) of Kubernetes are that if the process inside the container fails, Kubernetes will restart it by creating a new pod.
If you also comment on what you are trying to achieve as an end goal, I think I can help you better.
Is there a way to reload currently running pods created by a ReplicationController so that they pick up newly created services?
Example:
I have running pods created by a ReplicationController config file. I deleted a service called mongo-svc and recreated it using a different port. Is there a way for the pods' environment variables to be updated with the new IP and ports from the new mongo-svc?
You can restart pods by simply deleting them: if they are linked to a ReplicationController, the RC will take care of restarting them
kubectl delete pod <your-pod-name>
If you have a couple of pods, it's easy enough to copy/paste the pod names, but if you have many pods it can become cumbersome.
So another way to delete pods and restart them is to scale the RC down to 0 instances and back up to the number you need.
kubectl scale --replicas=0 rc <your-rc>
kubectl scale --replicas=<n> rc <your-rc>
By the way, you may also want to look at rolling updates to do this in a more production-friendly manner, but that implies updating the RC config.
If you want the same pod to have the new service, the clean answer is no. You could (I strongly suggest not doing this) run kubectl exec <pod-name> -c <container-name> -- export <service env var name>=<service env var value>. But your best bet is to run kubectl delete pod <pod-name> and let your ReplicationController handle the work.
I've run into a similar issue with services being run outside of Kubernetes, say a DB for instance. To address this I've been creating https://github.com/cpg1111/kubongo, which updates the service's endpoint without deleting the pods. The same idea can also be applied to other pods in Kubernetes to automate the service update. Basically it watches a specific service, and when its IP changes for whatever reason it updates all the pods without deleting them. This does use the same code as kubectl exec, however it is automated, sanitizes input, and ensures the export is executed on all pods.
What do you mean by 'reapply'?
The pods to which the services point are generally selected based on labels. In other words, you can add or remove labels from the pods to include or exclude them from a service (see the sketch after the links below).
Read here for more information about defining services: http://kubernetes.io/v1.1/docs/user-guide/services.html#defining-a-service
And here for more information about labels: http://kubernetes.io/v1.1/docs/user-guide/labels.html
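For example, a small sketch with placeholder names, assuming mongo-svc selects pods with the label app=mongo:
kubectl label pod <your-pod-name> app=mongo     # pod now matches the service selector and is added to its endpoints
kubectl label pod <your-pod-name> app-          # removing the label takes the pod back out of the service
kubectl get endpoints mongo-svc                 # verify which pod IPs the service currently targets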
Hope it helps!