How to update a set of pods running in kubernetes? - kubernetes

What is the preferred way of updating a set of pods (e.g. after making code changes and pushing the underlying Docker image to Docker Hub) controlled by a replication controller in a Kubernetes cluster?
I can see 2 ways:
Deleting & re-creating replication controller manually
Using kubectl rolling-update
With rolling-update I have to change the replication controller's name. Since I'm storing the replication controller definition in a YAML file and not generating it manually, having to change the file to push out a code update encourages bad habits like alternating between two names for the replication controller (e.g. controllerA and controllerB) to avoid a name conflict.
What is the better way?

Update: kubectl rolling-update has been deprecated and the replacement command is kubectl rollout. Also note that since I wrote the original answer the Deployment resource has been added and is a better choice than ReplicaSets as the rolling update is performed server side instead of by the client.
You should use kubectl rolling-update. We recently added a feature to do a "simple rolling update" which will update the image in a replication controller without renaming it. It's the last example shown in the kubectl help rolling-update output:
// Update the pods of frontend by just changing the image, and keeping the old name
$ kubectl rolling-update frontend --image=image:v2
This command also supports recovery -- if you cancel your update and restart it later, it will resume from where it left off. Even though it creates a new replication controller behind the scenes, at the end of the update the new replication controller takes the name of the old one, so it appears as a pure update rather than a switch to an entirely new replication controller.
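For reference, the equivalent image-only update with a modern Deployment would look something like this (frontend is a hypothetical Deployment and container name):
$ kubectl set image deployment/frontend frontend=image:v2
# watch the server-side rolling update until it completes
$ kubectl rollout status deployment/frontend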

The best option I've found so far is Skaffold, which automatically builds the image, pushes it to the image registry, and updates the corresponding pods/controllers. It can even watch for code changes and rebuild the image as soon as changes are saved with the skaffold dev command. This only requires adding a simple skaffold.yaml that specifies the image on the registry and the path to the Kubernetes manifests. This workflow is described in detail in the Getting Started guide.
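For illustration, a minimal skaffold.yaml might look like this (the image name and manifest path are hypothetical, and the apiVersion depends on your Skaffold version):
apiVersion: skaffold/v2beta29
kind: Config
build:
  artifacts:
    - image: myregistry/my-app    # image Skaffold builds and pushes
deploy:
  kubectl:
    manifests:
      - k8s/*.yaml                # Kubernetes manifests to apply
With this in place, skaffold run does a one-off build-push-deploy, and skaffold dev keeps rebuilding and redeploying as you save changes.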

The following explanations are from the book Kubernetes in Action.
Deleting & re-creating replication controller manually
Doing a rolling update manually is laborious and error-prone. Depending on the number of replicas, you'd need to run a dozen or more commands in the proper order to perform the update. Luckily, Kubernetes allows you to perform the rolling update with a single command.
Using kubectl rolling-update
Instead of performing rolling updates using ReplicationControllers manually, you can have kubectl perform them. Using kubectl to perform the update makes the process much easier, but this is now an outdated way of updating apps.
The reason an update like this isn't as good as it could be is that it's imperative. The Kubernetes way is to tell it the desired state of the system and have Kubernetes achieve that state on its own, figuring out the best way to do it.
Using Deployments for updating apps declaratively --THE BEST ALTERNATIVE--
A Deployment is a higher-level resource meant for deploying applications and updating them declaratively, instead of doing it through a ReplicationController or a ReplicaSet, which are both considered lower-level concepts.
Using a Deployment instead of the lower-level constructs makes updating an app much easier, because you’re defining the desired state through the single Deployment resource and letting Kubernetes take care of the rest.
One more thing: rolling back a rollout is possible because Deployments keep a revision history.
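To make that concrete, a minimal Deployment sketch (all names and the image tag are hypothetical):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: myrepo/my-app:v2   # bump this tag and re-apply to roll out
Updating is then declarative: edit the tag, run kubectl apply -f deployment.yaml, and roll back with kubectl rollout undo deployment/my-app if needed.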

Related

Why does deleting a kubernetes namespace take so long?

I'm attempting to write some integration tests that set up a deployment and an ingress and then make web requests, effectively curl commands, against the ingress to test its configuration. Backends and services are also created to guarantee that the ingress is correctly routing and proxying to the backends.
However, tearing down the setup to run a new set of tests is slow. By 'teardown' here I mean I simply delete the namespace in which all of these deployments live. This can take quite a while. Why is that? And what are the best ways to quickly tear down such a setup?
Kubernetes works largely through controllers, which loop endlessly looking for small pieces of work to do (like schedule a pod somewhere, unschedule a pod, remove an ingress route, etc); this makes it highly reliable but sometimes comes at the cost of relatively high latency for your operations. Namespace deletions require bringing down all the resources in the namespace, which requires a lot of small steps and therefore can take a while to finish.
There is a --force option for kubectl delete, but it comes with some scary-sounding warnings:
--force=false: If true, immediately remove resources from API and
bypass graceful deletion. Note that immediate deletion of some
resources may result in inconsistency or data loss and requires
confirmation.
So, this probably isn't advisable as a regular thing to do (perhaps someone more familiar with its behavior can add on to this).
Another option is to let the delete proceed asynchronously and just not block your CI jobs on it. The --wait=false flag (by default, set to true) will make sure the request is entered successfully but won't block kubectl from exiting while the delete actually happens. Your namespace will enter the Terminating state and eventually get deleted (unless something prevents it from coming down).
kubectl delete namespace my-test-namespace-1 --wait=false
This does mean that your next CI run may find the namespace is still there. To avoid a conflict, you could use a random suffix or an incrementing counter in the namespace's name.
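A rough sketch of that pattern in a CI script (names are hypothetical):
# create a uniquely named namespace per test run
NS="itest-$(date +%s)-$RANDOM"
kubectl create namespace "$NS"
# ... deploy the ingress/backends into $NS and run the curl tests ...
# fire-and-forget teardown; don't block the job on the controllers
kubectl delete namespace "$NS" --wait=false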

Restart Pod when secrets gets updated

We are using Secrets as environment variables in our pods, but every time we update a Secret we have to redeploy the pods for the changes to take effect. We are looking for a mechanism where pods get restarted automatically whenever a Secret gets updated. Any help on this?
Thanks in advance.
There are many ways to handle this.
First, use a Deployment instead of "naked" Pods that are not managed. The Deployment will create new Pods for you when the Pod template is changed.
Second, managing Secrets can be a bit tricky. It would be great if you can use a setup where you can use the Kustomize SecretGenerator - then each new Secret will get a unique name. In addition, that unique name is reflected in the Deployment automatically - so your pods will automatically be recreated when a Secret is changed - which matches your original problem. When the Secret and Deployment are handled this way, you apply the changes with:
kubectl apply -k <folder>
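For illustration, a minimal kustomization.yaml with a SecretGenerator might look like this (names and values are hypothetical):
# kustomization.yaml
resources:
  - deployment.yaml
secretGenerator:
  - name: app-secret
    literals:
      - password=changeme
Kustomize appends a content hash to the generated Secret's name (e.g. app-secret-7f9c6k2tb4) and rewrites the references in deployment.yaml, so changing a value produces a new name and therefore a new rollout.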
If you mount your secrets into the pod as a volume, they will get updated automatically and you don't have to restart your pod, as mentioned here.
Another approach is Stakater Reloader, which can reload your Deployments when ConfigMaps, Secrets, etc. change.
There are multiple ways of doing this:
Simply restart the pod (see the sketch after this list)
this can be done manually, or,
you could use an operator such as VMware Carvel's kapp-controller (documentation); using kapp-controller you can reload the Secrets/ConfigMaps without needing to restart the pods (it effectively runs helm template <package> on a periodic basis and applies the changes if it finds any differences in the rendered templates); check out my design for reloading the log level without needing to restart the pod.
Using service bindings https://servicebinding.io/
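As a small sketch of the "simply restart" option (my-app is a hypothetical name; kubectl rollout restart requires kubectl 1.15+):
kubectl rollout restart deployment/my-app
And for the Stakater Reloader approach mentioned above, you annotate the Deployment so its pods are restarted automatically when a referenced Secret or ConfigMap changes:
metadata:
  annotations:
    reloader.stakater.com/auto: "true"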

Designing K8s pods and processes for initialization

I have a problem statement where there is a Kubernetes cluster and I have some pods running on it.
Now, I want some functions/processes to run once per deployment, independent of the number of replicas.
These processes use the same image as the one in the deployment YAML.
I cannot use init containers or sidecars, because those would run alongside the main container for each replica.
I tried to create a new image and then a pod out of it, but this pod keeps on running, which is not good for cluster resources, as it should be destroyed after it has done its job. Also, the main container depends on the completion of this process in order to run the "command" part of the K8s spec.
Looking for suggestions on how to tackle this?
Theoretically, you could write an admission controller webhook that intercepts Deployment create/update requests and triggers your functions as you want. If your functions need to be checked, use a ValidatingWebhookConfiguration to validate the process and then deny or accept the request.
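A bare-bones sketch of registering such a webhook (all names are hypothetical, and the webhook service itself still has to be written and served over TLS):
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deployment-init-check
webhooks:
  - name: deployment-init.example.com
    admissionReviewVersions: ["v1"]
    sideEffects: None
    rules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["deployments"]
    clientConfig:
      service:
        name: init-webhook     # in-cluster Service fronting your webhook
        namespace: default
        path: /validate
      # caBundle: <base64-encoded CA cert for the webhook's TLS certificate>
The webhook endpoint then receives an AdmissionReview for every Deployment create/update and can allow or deny it after running your checks.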

Kubernetes rolling update vs set image

After some intense Google and SO searching I couldn't find any document that mentions both rolling update and set image and stresses the difference between the two.
Can anyone shed light? When would I use one rather than the other?
EDIT: It's worth mentioning that I'm already working with Deployments (rather than ReplicationControllers directly) and that I'm using YAML configuration files. It would also be nice to know if there's a way to perform either of those using configuration files rather than direct commands.
In older k8s versions the ReplicationController was the only resource to manage a group of replicated pods. To update the pods of a ReplicationController you use kubectl rolling-update.
Later, k8s introduced the Deployment which manages ReplicaSet resources. The Deployment could be updated via kubectl set image.
Working with Deployment resources (as you already do) is the preferred way. I guess the ReplicationController and its rolling-update command are mainly still there for backward compatibility.
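To make the contrast concrete (my-app and the image tags are hypothetical):
# imperative: change just the image of one container in a Deployment
kubectl set image deployment/my-app my-app=myrepo/my-app:v2
# declarative: bump the image tag in your YAML file and re-apply
kubectl apply -f deployment.yaml
# either way, watch the server-side rolling update
kubectl rollout status deployment/my-app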
UPDATE: As mentioned in the comments:
To update a Deployment I used kubectl patch, as it can also change things like adding new env vars, whereas kubectl set image is rather limited and can only change the image version. Also note that patch can be applied to all k8s resources and is not restricted to Deployments.
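For example, a patch like this can add an env var, which set image cannot (names are hypothetical):
kubectl patch deployment my-app -p '{"spec":{"template":{"spec":{"containers":[{"name":"my-app","env":[{"name":"LOG_LEVEL","value":"debug"}]}]}}}}'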
Later, I shifted my deployment processes to use Helm - a really neat and k8s-native package management tool. I can highly recommend having a look at it.

Is there a way to make kubectl apply restart deployments whose image tag has not changed?

I've got a local deployment system that is mirroring our production system. Both are deployed by calling kubectl apply -f deployments-and-services.yaml
I'm tagging all builds with the current git hash, which means that for clean deploys to GKE all the services have a new Docker image tag, so apply will restart them. Locally on minikube, however, the tag often doesn't change, which means new code is not run. I used to work around this by calling kubectl delete and then kubectl create when deploying to minikube, but as the number of services I'm deploying has increased, that has started to stretch the dev cycle too far.
Ideally, I'd like a better way to tell kubectl apply to restart a deployment rather than just depending on the tag changing.
I'm curious how people have been approaching this problem.
Additionally, I'm building everything with Bazel, which means that I have to be pretty explicit about setting up my build commands. I'm thinking maybe I should switch to just deleting/creating the one service I'm working on and leaving the others running.
But in that case, maybe I should just look at Telepresence and run the service I'm dev'ing on outside of minikube altogether? What are the best practices here?
I'm not entirely sure I understood your question but that may very well be my reading comprehension :)
In any case, here are a few thoughts that popped up while reading this (again, not sure what you're trying to accomplish).
Option 1: maybe what you're looking for is to scale down and back up, i.e. scale your deployment to, say, 0 and then back up. Given you're using a ConfigMap and maybe you only want to update that, the command would be kubectl scale --replicas=0 -f foo.yaml and then back to whatever count you need (see the sketch after this answer)
Option 2: if you want to apply the deployment and not kill any pods, for example, you could use --cascade=false (google it)
Option 3: look up the rollout option to manage deployments, not sure if it works on services though
Finally, and that's only me talking: share some more details, like which version of k8s you are using, and maybe provide an actual use-case example to better describe the issue.
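A rough sketch of options 1 and 2 (my-app and the replica count are hypothetical, and --cascade semantics have changed across kubectl versions, so check your version's help):
# option 1: bounce the deployment by scaling to zero and back
kubectl scale --replicas=0 deployment/my-app
kubectl scale --replicas=3 deployment/my-app
# option 2: delete the Deployment object while leaving its pods running
kubectl delete deployment my-app --cascade=false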
Kubernetes only triggers a deployment when something has changed. If you have the image pull policy set to Always, you can delete your pods to get the new image. If you want Kubernetes to handle the deployment, you can update the YAML file so the pod template contains a constantly changing metadata field (I use seconds since epoch), which will trigger a change. Ideally, you should be tagging your images with unique tags from your CI/CD pipeline, based on the commit reference they were built from; this gets around the issue and allows you to take full advantage of the Kubernetes rollback feature.
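A sketch of that changing-metadata trick (the annotation key and deployment name are hypothetical; newer kubectl versions wrap the same idea in kubectl rollout restart deployment/my-app):
# bump a pod-template annotation so the update is seen as a change
kubectl patch deployment my-app -p \
  "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"deployed-at\":\"$(date +%s)\"}}}}}"
Because this changes the pod template, the Deployment rolls its pods even though the image tag is identical.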