What is a rollout in Kubernetes?

I just started to learn Kubernetes. I know what a rollback is, but I have never heard of a rollout. Is "rollout" related to rollback in any way? Or is "rollout" similar to deploying something?

Rollout simply means a rolling update of the application. A rolling update means the application is updated gradually, gracefully and with no downtime. So when you push a new version of your application's Docker image and then trigger a rollout of your Deployment, Kubernetes first launches a new pod with the new image while keeping the old version running. Once the new pod settles down (passes its readiness probe), Kubernetes kills the old pod and switches the Service endpoints to point to the new version. When you have multiple replicas, this happens gradually until all replicas are replaced with the new version.
This behavior, however, is not the only one possible; you can tune the rolling update behavior via the spec.strategy settings of your Deployment.
The official docs even have an interactive tutorial on the rolling update feature that explains how it works very well: https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/

Rollout is the opposite of rollback. Yes, it means deploying a new application or upgrading an existing one.
Note: some more details on the paragraph you referred to. Let's say we have 5 replicas. On rollout, we can configure how many pods should be upgraded at a time, and what should happen if the new configuration fails, using maxUnavailable, maxSurge and a readinessProbe. Read up on all of these parameters and tune them accordingly; a minimal sketch of where they live is shown below.
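For illustration only, here is a minimal Deployment sketch showing where those settings go. The name, image and probe endpoint are made-up assumptions, not anything from the question.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                       # hypothetical name
spec:
  replicas: 5
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1                    # at most 1 extra pod above the desired count during the rollout
      maxUnavailable: 1              # at most 1 pod may be unavailable at any time
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-registry/my-app:1.0.0   # hypothetical image
          readinessProbe:                   # new pods must pass this before old ones are killed
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 10
With replicas: 5 and these values, the controller keeps at least four pods available and never runs more than six during the update.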

Related

Kubernetes | What's the difference between rollout undo vs deploy to an older version?

I see there are two ways to move back to an older deployment version. One is using the rollout undo command and the other option is to deploy the older version again. Is there any difference between the two, or are they interchangeable?
As I understand it, you're asking about the difference between doing an undo and manually changing the pod definitions back to the exact previous state. If that's the case, read below.
When you do a new deployment, and the hash of your pod template has changed, the Deployment Controller will create a new ReplicaSet (let's call it A) in order to roll out the new version, while at the same time decreasing the replica count of the existing ReplicaSet (let's call it B) - so you have 2 ReplicaSets (A, B). How it does this depends on the rollout strategy you choose (for example rolling update, blue-green deployment and so on).
When you run kubectl rollout undo deploy <your deployment>, the Deployment Controller basically decreases the number of replicas in the newly created ReplicaSet (A) and increases the number of replicas in the old ReplicaSet (B).
But when you, as you said, deploy again to the older version, you basically do a new deployment, so a new ReplicaSet (C) will be created in order to roll out your new version (even if it's not actually a new version), and the replica count of your existing ReplicaSet (A) will be decreased.
So, basically, the difference is in which ReplicaSets get created and scaled.
Read: Deployments for more info
The whole flow is as follows:
The Deployment Controller manages ReplicaSets
The ReplicaSet controller creates or deletes Pod objects (stored in etcd via the API server) to match the desired replica count
The scheduler assigns each new pod to a node
The kubelet creates and terminates the actual containers on its node
All of these components talk to the API server and watch for changes in resource definitions via its watch mechanism
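To make this concrete, one quick way to watch what happens to the ReplicaSets during an undo (the deployment name and label below are hypothetical):
kubectl get rs -l app=my-app       # e.g. two ReplicaSets: the new one (A) and the old one (B)
kubectl rollout undo deploy my-app
kubectl get rs -l app=my-app       # same two ReplicaSets, replica counts swapped
After the undo the same two ReplicaSets are still listed, only scaled differently, whereas rolling out a changed pod template would add another ReplicaSet to the list.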
When you undo the rollout, you are updating the live state in a way that is not reflected in source control. The preferred way is to revert your YAML and apply the previous version - then the running configuration matches what you have tracked.
kubectl rollout history deployment xyz
The REVISION numbers shown there do not reflect an undo as a return to an old revision; the undo simply gets a new revision number.
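As a sketch of that preferred workflow (the commit, file and deployment names are assumptions):
git revert <commit-that-changed-the-deployment>
kubectl apply -f deployment.yaml
kubectl rollout history deployment xyz
Either way a new revision number is created, but with the revert-and-apply route the live object and the repository stay in sync.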
Well, first of all there is no separate "deploy to an older version" operation in Kubernetes; you simply apply the older manifest again. The undo way is how you go back to the previous revision of your deployment, and the command for that is
kubectl rollout undo deployment ABC

How will a scheduled (rolling) restart of a service be affected by an ongoing upgrade (and vice versa)

Due to a memory leak in one of our services I am planning to add a k8s CronJob to schedule a periodic restart of the leaking service. Right now we do not have the resources to look into the mem leak properly, so we need a temporary solution to quickly minimize the issues caused by the leak. It will be a rolling restart, as outlined here:
How to schedule pods restart
I have already tested this in our test cluster, and it seems to work as expected. The service has 2 replicas in test, and 3 in production.
My plan is to schedule the CronJob to run every 2 hours.
I am now wondering: How will the new CronJob behave if it should happen to execute while a service upgrade is already running? We do rolling upgrades to achieve zero downtime, and we sometimes roll out upgrades several times a day. I don't want to limit the people who deploy upgrades by saying "please ensure you never deploy near to 08:00, 10:00, 12:00 etc". That will never work in the long term.
And vice versa, I am also wondering what will happen if an upgrade is started while the CronJob is already running and the pods are restarting.
Does kubernetes have something built-in to handle this kind of conflict?
This answer to the linked question recommends using kubectl rollout restart from a CronJob pod. That command internally works by adding an annotation to the deployment's pod spec; since the pod spec is different, it triggers a new rolling upgrade of the deployment.
Say you're running an ordinary redeployment; that will change the image: setting in the pod spec. At about the same time, the kubectl rollout restart happens, which changes an annotation in the pod spec. The Kubernetes API forces these two changes to be serialized, so the final deployment object will always have both changes in it.
This question then reduces to "what happens if a deployment changes and needs to trigger a redeployment, while a redeployment is already running?" The Deployment documentation covers this case: it will start deploying new pods on the newest version of the pod spec and treat all older ones as "old", so a pod with the intermediate state might only exist for a couple of minutes before getting replaced.
In short: this should work consistently and you shouldn't need to take any special precautions.
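For reference, a CronJob doing this on a 2-hour schedule might look roughly like the sketch below. The names, the image and the ServiceAccount are assumptions, and the ServiceAccount needs RBAC permission to patch the target Deployment (omitted here).
apiVersion: batch/v1
kind: CronJob
metadata:
  name: restart-leaky-service          # hypothetical name
spec:
  schedule: "0 */2 * * *"              # every 2 hours
  concurrencyPolicy: Forbid            # skip a run if the previous restart Job is still running
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: deploy-restarter   # hypothetical SA with patch rights on the Deployment
          restartPolicy: Never
          containers:
            - name: kubectl
              image: bitnami/kubectl:latest      # any image that ships kubectl works
              command: ["kubectl", "rollout", "restart", "deployment/leaky-service"]
Note that concurrencyPolicy: Forbid only prevents two of these CronJob runs from overlapping; an overlap with a regular upgrade is handled by the Deployment controller itself, as explained above.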

Skipping a pod deployment in statefulset

I have a StatefulSet of pods, and due to their stateful nature one of them cannot be recreated because of a state error that deleting it would not fix.
Since it's a StatefulSet, Kubernetes blocks the creation of additional pods until it manages to get the stuck one running.
StatefulSets have podManagementPolicy: "Parallel", but it cannot be changed at runtime.
The question is: is there a way to make Kubernetes skip the stuck one?
I believe you are looking for a workaround for a known issue which is still open:
StatefulSet will continue to wait for the broken Pod to become Ready (which never happens) before it will attempt to revert it back to the working configuration.
In terms of upgrades, I found the following on GitHub, quoted from the official docs:
The Pods in the StatefulSet are updated in reverse ordinal order. The StatefulSet controller terminates each Pod, and waits for it to transition to Running and Ready prior to updating the next Pod.
Note that, even though the StatefulSet controller will not proceed to update the next Pod until its ordinal successor is Running and Ready, it will restore any Pod that fails during the update to its current version. Pods that have already received the update will be restored to the updated version, and Pods that have not yet received the update will be restored to the previous version. In this way, the controller attempts to continue to keep the application healthy and the update consistent in the presence of intermittent failures.
Read Forced Rollback
When using Rolling Updates with the default Pod Management Policy (OrderedReady), it’s possible to get into a broken state that requires manual intervention to repair.
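In practice the manual intervention comes down to reverting the pod template to a known-good configuration and then deleting the stuck pod so the controller can recreate it from the reverted spec. A sketch with a hypothetical StatefulSet called web:
kubectl rollout undo statefulset web   # or patch/edit the template back to the good spec
kubectl delete pod web-0               # the controller recreates it from the reverted template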

Rolling update using k8s client-go

I'm struggling to find an example of performing a rolling update of a kubernetes deployment using client-go. Currently I scale to 0 and then back to 1 but that causes downtime.
I wonder if there is a way to trigger a rolling update. I have nothing to change in the deployment itself. All I need is to restart a pod so that it consumes an updated ConfigMap.
I have not found a direct way to initiate a rolling update other than editing the deployment, but that does not work for me.
I ended up just updating the deployment: I introduced an env var which holds the resourceVersion of the ConfigMap I need to watch, and changing it causes a rolling update. I have not found a more direct way to initiate one; a client-go sketch of the same kind of trick is below.
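As a sketch only: with client-go you can trigger a rolling restart by patching an annotation on the pod template, which is essentially what kubectl rollout restart does. The namespace, deployment name and kubeconfig handling below are assumptions.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the local kubeconfig (in-cluster config would also work).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Changing a pod-template annotation makes the template different, which triggers a rolling update.
	patch := fmt.Sprintf(
		`{"spec":{"template":{"metadata":{"annotations":{"kubectl.kubernetes.io/restartedAt":%q}}}}}`,
		time.Now().Format(time.RFC3339))

	_, err = clientset.AppsV1().Deployments("default").Patch(
		context.TODO(), "my-deployment", types.StrategicMergePatchType,
		[]byte(patch), metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
}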

Is it safe to replace ReplicationController with Deployment

I am practicing a Katacoda k8s lesson with knowledge from Stack Overflow. I tried killing the pods from the command line, and the result is exactly the same as with the simple example: the pod gets recreated a few moments after it dies.
Question:
Can I just simply replace the ReplicationController with Deployment?
Don't use a ReplicationController; those have been superseded by ReplicaSets.
In your case, use a Deployment object to manage the application life cycle. With a Deployment you get Kubernetes' rolling upgrade and rollback features.
A Deployment works one layer above the ReplicaSet and lets you upgrade the app to a new version with zero downtime. A rough conversion sketch follows.
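So yes, you can replace it. As a sketch only (the name and image are assumptions), a ReplicationController manifest maps onto a Deployment almost one-to-one; the main changes are the apiVersion, the kind, and the selector, which becomes matchLabels:
apiVersion: apps/v1
kind: Deployment                 # was: apiVersion: v1, kind: ReplicationController
metadata:
  name: my-app                   # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:                 # a ReplicationController uses a bare selector map here
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-registry/my-app:1.0.0   # hypothetical image
Deleting the old ReplicationController and applying a Deployment like this gives you the same self-healing behavior plus rollout and rollback support.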