Rolling update using k8s client-go

I'm struggling to find an example of performing a rolling update of a Kubernetes Deployment using client-go. Currently I scale to 0 and then back to 1, but that causes downtime.
I wonder if there is a way to trigger a rolling update. I have nothing to change in the Deployment itself. All I need is to restart a pod so that it consumes an updated ConfigMap.
I have not found a direct way to initiate a rolling update other than editing the Deployment, but that does not work for me.

I ended up just updating the Deployment: I introduced an env var that holds the resourceVersion of the ConfigMap I need to watch, and changing it triggers a rolling update. I have not found a direct way to initiate one.
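For reference, here is a minimal client-go sketch of that approach (assuming a recent, context-aware client-go; the namespace "default", Deployment "web", container "app", ConfigMap "app-config" and env var name are placeholders). Any change under spec.template makes the Deployment controller roll the pods.

// Sketch: trigger a rolling update by stamping the ConfigMap's resourceVersion
// into the Deployment's pod template, as described in the answer above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	// Read the ConfigMap to get its current resourceVersion.
	cm, err := clientset.CoreV1().ConfigMaps("default").Get(ctx, "app-config", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Strategic merge patch: the container is matched by name and the env var
	// is merged in; changing its value rolls the pods.
	patch := fmt.Sprintf(
		`{"spec":{"template":{"spec":{"containers":[{"name":"app","env":[{"name":"CONFIGMAP_VERSION","value":"%s"}]}]}}}}`,
		cm.ResourceVersion)

	_, err = clientset.AppsV1().Deployments("default").Patch(
		ctx, "web", types.StrategicMergePatchType, []byte(patch), metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
}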

Related

How to make changes to a ConfigMap take effect immediately without restarting the pod?

Imagine a MySQL application that uses a ConfigMap to store its my.cnf file. After a modification to that my.cnf file, the changes take about a minute to take effect (to appear inside the MySQL container), which is time-consuming. Is there a way to speed this up?
I tried a pod annotation like "sync-config-map-time": "20220210-010101" and updated the annotation after each ConfigMap modification, but this makes the application pod restart every time. Is there a better solution to this issue?
Posting the answer as a community wiki, feel free to edit and expand.
Unfortunately, this feature will only be available in a future release. You can check the progress on GitHub.
At the moment I can recommend Reloader.
We would like to watch if some change happens in ConfigMap and/or Secret; then perform a rolling upgrade on relevant DeploymentConfig, Deployment, Daemonset, Statefulset and Rollout
Reloader can watch changes in ConfigMaps and Secrets and do rolling upgrades on Pods with their associated DeploymentConfigs, Deployments, Daemonsets, Statefulsets and Rollouts.

Skipping a pod deployment in a StatefulSet

I have a StatefulSet of pods, and due to their stateful nature one of them cannot be recreated: it is in a state error that deleting it wouldn't fix.
Since it's a StatefulSet, Kubernetes blocks the creation of additional pods until it manages to get the stuck one running.
StatefulSets have podManagementPolicy: "Parallel", but it cannot be changed at runtime.
The question is whether there's a way to make Kubernetes skip the stuck one.
I believe you are looking for a workaround for a known issue which is still open:
StatefulSet will continue to wait for the broken Pod to become Ready (which never happens) before it will attempt to revert it back to the working configuration.
In terms of updates, I found this on GitHub, quoted below from the official docs:
The Pods in the StatefulSet are updated in reverse ordinal order. The StatefulSet controller terminates each Pod, and waits for it to transition to Running and Ready prior to updating the next Pod.
Note that, even though the StatefulSet controller will not proceed to update the next Pod until its ordinal successor is Running and Ready, it will restore any Pod that fails during the update to its current version. Pods that have already received the update will be restored to the updated version, and Pods that have not yet received the update will be restored to the previous version. In this way, the controller attempts to continue to keep the application healthy and the update consistent in the presence of intermittent failures.
Read Forced Rollback
When using Rolling Updates with the default Pod Management Policy (OrderedReady), it’s possible to get into a broken state that requires manual intervention to repair.
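As a rough illustration of the manual step in Forced Rollback (in the spirit of the client-go question above): after reverting the StatefulSet template to a known-good spec, the stuck Pod still has to be deleted so the controller recreates it from the reverted template. The namespace and pod name ("web-2") below are placeholders; kubectl delete pod web-2 does the same thing.

// Minimal sketch of the manual intervention: delete the Pod that is stuck with
// the broken configuration after the StatefulSet template has been reverted.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func deleteStuckPod(ctx context.Context, cs kubernetes.Interface, namespace, podName string) error {
	// The StatefulSet controller notices the missing Pod and recreates it
	// using the current (reverted) template.
	return cs.CoreV1().Pods(namespace).Delete(ctx, podName, metav1.DeleteOptions{})
}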

What is a rollout in Kubernetes?

I just started to learn Kubernetes. I know what a rollback is, but I have never heard of a rollout. Is "rollout" related to rollback in any way? Or is "rollout" similar to deploying something?
Rollout simply means a rolling update of the application. A rolling update means that the application is updated gradually, gracefully and with no downtime. So when you push a new version of your application's Docker image and then trigger a rollout of your Deployment, Kubernetes first launches a new pod with the new image while keeping the old version running. When the new pod settles down (passes its readiness probe), Kubernetes kills the old pod and switches the Service endpoints to point to the new version. When you have multiple replicas, this happens gradually until all replicas are replaced with the new version.
This behavior, however, is not the only one possible. You can tune the rolling update settings in your Deployment's spec.strategy settings.
The official docs even have an interactive tutorial on the rolling update feature that explains perfectly how it works: https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/
Rollout is the opposite of rollback. Yes, it means deploying a new application or upgrading an existing application.
Note: some more details on the paragraph you referred to. Let's say we have 5 replicas. On rollout, we can configure how many pods should be upgraded at a time, and what should happen if there is a failure in the new configuration, using maxUnavailable, maxSurge and the readinessProbe. Read up on all of these parameters and tune them accordingly.
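To make those knobs concrete, here is a small sketch using client-go types with example numbers (in a manifest the same fields live under the Deployment's spec.strategy.rollingUpdate):

// Example rolling-update tuning for a 5-replica Deployment; the numbers are
// illustrative only.
package main

import (
	appsv1 "k8s.io/api/apps/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func rollingStrategy() appsv1.DeploymentStrategy {
	maxUnavailable := intstr.FromInt(1) // at most 1 of the 5 replicas may be down during the update
	maxSurge := intstr.FromInt(2)       // up to 2 extra replicas may be created above the desired count
	return appsv1.DeploymentStrategy{
		Type: appsv1.RollingUpdateDeploymentStrategyType,
		RollingUpdate: &appsv1.RollingUpdateDeployment{
			MaxUnavailable: &maxUnavailable,
			MaxSurge:       &maxSurge,
		},
	}
}

The readinessProbe on the pod template is what gates this: a new pod only counts as available once its probe passes, so a broken new version stalls the rollout instead of receiving traffic.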

Kubernetes rolling update vs set image

After some intense Google and SO searching, I couldn't find any document that mentions both rolling update and set image and explains the difference between the two.
Can anyone shed some light? When would I rather use one or the other?
EDIT: It's worth mentioning that I'm already working with Deployments (rather than ReplicationControllers directly) and that I'm using YAML configuration files. It would also be nice to know if there's a way to perform any of those using configuration files rather than direct commands.
In older k8s versions the ReplicationController was the only resource to manage a group of replicated pods. To update the pods of a ReplicationController you use kubectl rolling-update.
Later, k8s introduced the Deployment which manages ReplicaSet resources. The Deployment could be updated via kubectl set image.
Working with Deployment resources (as you already do) is the preferred way. I guess the ReplicationController and its rolling-update command are mainly still there for backward compatibility.
UPDATE: As mentioned in the comments:
To update a Deployment I used kubectl patch, as it can also change other things, like adding new env vars, whereas kubectl set image is rather limited and can only change the image version. Also note that patch can be applied to all k8s resources and is not restricted to Deployments.
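To illustrate that difference with a client-go sketch (the container name "app", the image tag and the env var below are made up): a single strategic merge patch can replace the image and merge in a new env var in one step, which set image alone cannot do.

// One patch, two changes: new image plus a new env var; this triggers a normal
// rolling update of the Deployment.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

func patchImageAndEnv(ctx context.Context, cs kubernetes.Interface, namespace, deployment string) error {
	// Strategic merge: the container is matched by name, the image is replaced
	// and the env var is merged into the existing list.
	patch := []byte(`{"spec":{"template":{"spec":{"containers":[
	  {"name":"app","image":"registry.example.com/app:v2",
	   "env":[{"name":"FEATURE_FLAG","value":"on"}]}]}}}}`)
	_, err := cs.AppsV1().Deployments(namespace).Patch(
		ctx, deployment, types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	return err
}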
Later, I shifted my deployment process to Helm, a really neat and Kubernetes-native package management tool. I can highly recommend having a look at it.

How to update a set of pods running in kubernetes?

What is the preferred way of updating a set of pods (e.g. after making code changes & pushing underlying docker image to docker hub) controlled by a replication controller in kubernetes cluster?
I can see 2 ways:
Deleting & re-creating replication controller manually
Using kubectl rolling-update
With rolling-update I have to change the replication controller name. Since I'm storing the replication controller definition in a YAML file and not generating it manually, having to change the file to push out a code update seems to bring about bad habits like alternating between two names for the replication controller (e.g. controllerA and controllerB) to avoid name conflicts.
What is the better way?
Update: kubectl rolling-update has been deprecated and the replacement command is kubectl rollout. Also note that since I wrote the original answer, the Deployment resource has been added and is a better choice than ReplicaSets, as the rolling update is performed server-side instead of by the client.
You should use kubectl rolling-update. We recently added a feature to do a "simple rolling update" which will update the image in a replication controller without renaming it. It's the last example shown in the kubectl help rolling-update output:
// Update the pods of frontend by just changing the image, and keeping the old name
$ kubectl rolling-update frontend --image=image:v2
This command also supports recovery: if you cancel your update and restart it later, it will resume from where it left off. Even though it creates a new replication controller behind the scenes, at the end of the update the new replication controller takes the name of the old one, so it appears as a pure update rather than a switch to an entirely new replication controller.
The best option I've found so far is Skaffold, which automatically builds the image, pushes it to the image registry and updates the corresponding pods/controllers. It can even watch for code changes and rebuild the image as soon as changes are saved, with the skaffold dev command. This only requires adding a simple skaffold.yaml that specifies the image on the registry and the path to the Kubernetes manifests. This workflow is described in detail in the Getting Started guide.
The following explanations are from the book Kubernetes in Action.
Deleting & re-creating replication controller manually
Doing a rolling update manually is laborious and error-prone. Depending on the number of replicas, you'd need to run a dozen or more commands in the proper order to perform the update process. Luckily, Kubernetes allows you to perform the rolling update with a single command.
Using kubectl rolling-update
Instead of performing rolling updates using ReplicationControllers manually, you can have kubectl perform them. Using kubectl to perform the update makes the process much easier, but this is now an outdated way of updating apps.
The reason an update performed like this isn't as good as it could be is that it's imperative, whereas Kubernetes is about telling it the desired state of the system and having Kubernetes achieve that state on its own, figuring out the best way to do it.
Using Deployments for updating apps declaratively (the best alternative)
A Deployment is a higher-level resource meant for deploying applications and updating them declaratively, instead of doing it through a ReplicationController or a ReplicaSet, which are both considered lower-level concepts.
Using a Deployment instead of the lower-level constructs makes updating an app much easier, because you’re defining the desired state through the single Deployment resource and letting Kubernetes take care of the rest.
One more thing: rolling back a rollout is possible because Deployments keep a revision history of your rollouts.
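To tie this back to the client-go theme of this page, a minimal sketch of the declarative flow (namespace, Deployment and image names are placeholders): fetch the Deployment, set the new desired image, and let the Deployment controller perform the rolling update server-side.

// Declare the new desired state and let Kubernetes reconcile it; the rollout
// (and any later rollback) is handled by the Deployment controller.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func updateImage(ctx context.Context, cs kubernetes.Interface, namespace, name, newImage string) error {
	deploy, err := cs.AppsV1().Deployments(namespace).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	// Only the image of the first container changes; everything else stays as declared.
	deploy.Spec.Template.Spec.Containers[0].Image = newImage
	_, err = cs.AppsV1().Deployments(namespace).Update(ctx, deploy, metav1.UpdateOptions{})
	return err
}

With kubectl, the same declarative flow is editing the manifest and running kubectl apply; kubectl rollout undo then reverts to the previous revision.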