I use Google Kubernetes Engine and I intentionally put an error in the code. I was hoping the rolling update would stop when it discovered the status CrashLoopBackOff, but it didn't.
On this page, they say:
The Deployment controller will stop the bad rollout automatically, and
will stop scaling up the new ReplicaSet. This depends on the
rollingUpdate parameters (maxUnavailable specifically) that you have
specified.
But that's not happening. Does it only apply when the status is ImagePullBackOff?
Below is my configuration.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: volume-service
  labels:
    group: volume
    tier: service
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 2
      maxSurge: 2
  template:
    metadata:
      labels:
        group: volume
        tier: service
    spec:
      containers:
      - name: volume-service
        image: gcr.io/example/volume-service:latest
P.S. I have already read about liveness/readiness probes, but I don't think they can stop a rolling update, or can they?
It turns out I just needed to set minReadySeconds, and then the rolling update stops when the new ReplicaSet has a status of CrashLoopBackOff or something like Exited with status code 1. Now the old ReplicaSet stays available and is not updated.
Here is the new config.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: volume-service
  labels:
    group: volume
    tier: service
spec:
  replicas: 4
  minReadySeconds: 60
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 2
      maxSurge: 2
  template:
    metadata:
      labels:
        group: volume
        tier: service
    spec:
      containers:
      - name: volume-service
        image: gcr.io/example/volume-service:latest
Thank you, everyone, for the help!
I agree with #Nicola_Ben; I would also consider changing to the setup below:
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1  # I want at least (4)-[1] = 3 available pods.
      maxSurge: 1        # I want maximum (4)+[1] = 5 total running pods.
Or even change maxSurge to 0.
This will help us expose fewer potentially nonfunctional pods (as we would do in a canary release).
As #Hana_Alaydrus suggested, it's important to set minReadySeconds.
In addition to that, sometimes we need to take more actions after the rollout execution.
(For example, there are cases where the new pods are not functioning properly even though the process running inside the container hasn't crashed.)
A suggestion for a general debug process:
1 ) First of all, pause the rollout with:
kubectl rollout pause deployment <name>
2 ) Debug the relevant pods and decide how to continue (maybe we can continue with the new release, maybe not).
3 ) Resume the rollout with: kubectl rollout resume deployment <name>. Even if we decide to return to the previous release with the undo command (4.B), we first need to resume the rollout.
4.A ) Continue with the new release.
4.B ) Return to the previous release with: kubectl rollout undo deployment <name>.
The explanation you quoted is correct: the new ReplicaSet (the one with the error) will not proceed to completion; its progression stops at the maxSurge+maxUnavailable count. The old ReplicaSet remains present, too.
Here is the example I tried:
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
And these are the results:
NAME                                  READY   STATUS             RESTARTS   AGE
pod/volume-service-6bb8dd677f-2xpwn   0/1     ImagePullBackOff   0          42s
pod/volume-service-6bb8dd677f-gcwj6   0/1     ImagePullBackOff   0          42s
pod/volume-service-c98fd8d-kfff2      1/1     Running            0          59s
pod/volume-service-c98fd8d-wcjkz      1/1     Running            0          28m
pod/volume-service-c98fd8d-xvhbm      1/1     Running            0          28m

NAME                                              DESIRED   CURRENT   READY   AGE
replicaset.extensions/volume-service-6bb8dd677f   2         2         0       26m
replicaset.extensions/volume-service-c98fd8d      3         3         3       28m
My new ReplicaSet starts only 2 new pods (1 slot from maxUnavailable and 1 slot from maxSurge).
The old ReplicaSet keeps running 3 pods (4 - 1 unavailable).
The two parameters you set in the rollingUpdate section are the key point, but you can also play with other factors like readinessProbe, livenessProbe, minReadySeconds, and progressDeadlineSeconds.
Here is the reference for them.
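As a hedged sketch of how those knobs might be combined on the Deployment above (the probe path /healthz, port 8080, and the timing values are assumptions for illustration, not taken from the original config):

```yaml
spec:
  replicas: 4
  minReadySeconds: 60            # a new pod must stay Ready 60s before it counts as available
  progressDeadlineSeconds: 600   # mark the rollout as failed after 10 min without progress
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    spec:
      containers:
      - name: volume-service
        image: gcr.io/example/volume-service:latest
        readinessProbe:          # gates both traffic and availability counting
          httpGet:
            path: /healthz       # assumed health endpoint
            port: 8080           # assumed container port
          initialDelaySeconds: 5
          periodSeconds: 10
```

With this combination, a crashing pod never becomes Ready, so the rollout stalls at the maxSurge+maxUnavailable boundary instead of replacing all old pods.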
Related
I am trying to deploy a PodDisruptionBudget for my deployment, but when I deploy this example
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: example-pdb
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: example-deployment
with this deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-deployment-app
  template:
    metadata:
      labels:
        app: example-deployment-app
    spec:
      ...
I get the response
$ kubectl get pdb
NAME          MIN AVAILABLE   MAX UNAVAILABLE   ALLOWED DISRUPTIONS   AGE
example-pdb   1               N/A               0                     7s
What does it mean for "ALLOWED DISRUPTIONS" to be 0?
As mentioned in Specifying a PodDisruptionBudget:
A PodDisruptionBudget has three fields:
A label selector .spec.selector to specify the set of pods to which it applies. This field is required.
.spec.minAvailable which is a description of the number of pods from that set that must still be available after the eviction, even in
the absence of the evicted pod. minAvailable can be either an
absolute number or a percentage.
.spec.maxUnavailable (available in Kubernetes 1.7 and higher) which is a description of the number of pods from that set that can be
unavailable after the eviction. It can be either an absolute number or
a percentage.
In your case, .spec.minAvailable is set to 1, so 1 Pod must always be available, even during a disruption.
Your Deployment's .spec.replicas is also set to 1, which in combination with .spec.minAvailable: 1 means that no disruptions are allowed for that configuration.
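As a hedged sketch, two ways to make ALLOWED DISRUPTIONS non-zero for this setup (note also that, as shown, the PDB selector app: example-deployment does not match the Deployment's pod label app: example-deployment-app, so the pods would not be counted at all; the sketch assumes the labels are meant to match):

```yaml
# Option 1: run more replicas than minAvailable requires
spec:
  replicas: 2        # 2 healthy - 1 minAvailable = 1 allowed disruption
---
# Option 2: express the budget as maxUnavailable instead of minAvailable
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: example-pdb
spec:
  maxUnavailable: 1  # permits one voluntary eviction once the pod is healthy
  selector:
    matchLabels:
      app: example-deployment-app
```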
Take a look at the official example:
Use kubectl to check that your PDB is created.
Assuming you don't actually have pods matching app: zookeeper in
your namespace, then you'll see something like this:
kubectl get poddisruptionbudgets
NAME     MIN AVAILABLE   MAX UNAVAILABLE   ALLOWED DISRUPTIONS   AGE
zk-pdb   2               N/A               0                     7s
If there are matching pods (say, 3), then you would see something like
this:
kubectl get poddisruptionbudgets
NAME     MIN AVAILABLE   MAX UNAVAILABLE   ALLOWED DISRUPTIONS   AGE
zk-pdb   2               N/A               1                     7s
The non-zero value for ALLOWED DISRUPTIONS means that the disruption
controller has seen the pods, counted the matching pods, and updated
the status of the PDB.
You can get more information about the status of a PDB with this
command:
kubectl get poddisruptionbudgets zk-pdb -o yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  annotations:
    …
  creationTimestamp: "2020-03-04T04:22:56Z"
  generation: 1
  name: zk-pdb
  …
status:
  currentHealthy: 3
  desiredHealthy: 2
  disruptionsAllowed: 1
  expectedPods: 3
  observedGeneration: 1
You can see that when .spec.minAvailable is set to 2 and there are 3 running Pods, then disruptionsAllowed is actually 1 (3 healthy - 2 required = 1). You can check the same for your use case.
Here is what I do:
Deploy a StatefulSet whose pod always exits with an error, to provoke a failing pod in status CrashLoopBackOff: kubectl apply -f error.yaml
Change error.yaml (echo a => echo b) and redeploy the StatefulSet: kubectl apply -f error.yaml
The pod keeps the error status and is not immediately redeployed; instead it waits until the pod is restarted after some time.
Requesting pod status:
$ kubectl get pod errordemo-0
NAME          READY   STATUS             RESTARTS   AGE
errordemo-0   0/1     CrashLoopBackOff   15         59m
error.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: errordemo
  labels:
    app.kubernetes.io/name: errordemo
spec:
  serviceName: errordemo
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: errordemo
  template:
    metadata:
      labels:
        app.kubernetes.io/name: errordemo
    spec:
      containers:
      - name: demox
        image: busybox:1.28.2
        command: ['sh', '-c', 'echo a; sleep 5; exit 1']
      terminationGracePeriodSeconds: 1
Questions
How can I achieve an immediate redeploy even if the pod has an error status?
I found these solutions, but I would like a single command to achieve this (in real life I'm using Helm and I just want to call helm upgrade for my deployments):
Kill the pod before the redeploy
Scale down before the redeploy
Delete the statefulset before the redeploy
Why doesn't Kubernetes redeploy the pod at once?
In my demo example I have to wait until Kubernetes tries to restart the pod after some backoff time.
A pod with no error (e.g. echo a; sleep 10000;) is redeployed immediately. That's why I set terminationGracePeriodSeconds: 1.
But in my real deployments (where I use Helm) I have also encountered cases where the pods are never redeployed. Unfortunately, I cannot reproduce that behaviour in a simple example.
You could set spec.podManagementPolicy: "Parallel"
Parallel pod management tells the StatefulSet controller to launch or terminate all Pods in parallel, and not to wait for Pods to become Running and Ready or completely terminated prior to launching or terminating another Pod.
Remember that the default podManagementPolicy is OrderedReady
OrderedReady pod management is the default for StatefulSets. It tells the StatefulSet controller to respect the ordering guarantees demonstrated above
And if your application requires an ordered update, then there is nothing you can do.
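A minimal sketch of where the field sits, reusing the errordemo StatefulSet from above (the rest of the spec is unchanged):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: errordemo
spec:
  podManagementPolicy: "Parallel"  # launch/terminate pods without waiting for ordered Running/Ready
  serviceName: errordemo
  replicas: 1
  # ...selector and template as before
```

Note that podManagementPolicy affects scaling operations; it does not change the guarantees for rolling updates themselves.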
I have a service and pod in Node.js. Consider a hello world app,
exposed on port 80 over HTTP.
I want to seamlessly restart my service/pod.
The pod/service restart is taking a lot of time, so there is downtime.
I am using kubectl delete, then recreating the resources with kubectl.
How can I avoid the delay and downtime?
With continuous deployments, your previous Pods are terminated and new Pods are created, so downtime of the service is possible.
To avoid this, add a strategy to your deployment spec.
example:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
where maxUnavailable: 0 specifies that no pod may become unavailable during the update: a new pod must be surged up and become available before an old one is taken down.
Extra:
If your service takes some time to become live, you can use a readiness probe in the spec to avoid traffic being routed to the pods before they are ready.
example :
readinessProbe:
  tcpSocket:
    port: 80
  initialDelaySeconds: 15
  periodSeconds: 30
I am currently using Deployments to manage the pods in my K8S cluster.
Some of my deployments require 2 pods/replicas, some require 3, and some require just 1. The issue I'm having is with the single pod/replica case.
My YAML file is :
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: user-management-backend-deployment
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 2
  selector:
    matchLabels:
      name: user-management-backend
  template:
    metadata:
      labels:
        name: user-management-backend
    spec:
      containers:
      - name: user-management-backend
        image: proj_csdp/user-management_backend:3.1.8
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
        livenessProbe:
          httpGet:
            port: 8080
            path: /user_management/health
          initialDelaySeconds: 300
          timeoutSeconds: 30
        readinessProbe:
          httpGet:
            port: 8080
            path: /user_management/health
          initialDelaySeconds: 10
          timeoutSeconds: 5
        volumeMounts:
        - name: nfs
          mountPath: "/vault"
      volumes:
      - name: nfs
        nfs:
          server: kube-nfs
          path: "/kubenfs/vault"
          readOnly: true
I have the old version running fine.
# kubectl get po | grep user-management-backend-deployment
user-management-backend-deployment-3264073543-mrrvl   1/1   Running   0   4d
Now I want to update the image:
# kubectl set image deployment user-management-backend-deployment user-management-backend=proj_csdp/user-management_backend:3.2.0
Now, as per the RollingUpdate design, K8S should bring up the new pod while keeping the old pod working, and only once the new pod is ready to take traffic should the old pod get deleted. But what I see is that the old pod is immediately deleted, the new pod is created, and it then takes time to start taking traffic, meaning that I have to drop traffic.
# kubectl get po | grep user-management-backend-deployment
user-management-backend-deployment-3264073543-l93m9   0/1   ContainerCreating   0   1s
# kubectl get po | grep user-management-backend-deployment
user-management-backend-deployment-3264073543-l93m9   1/1   Running             0   33s
I have used maxSurge: 2 & maxUnavailable: 1, but this does not seem to be working.
Any ideas why this is not working?
It appears to be the maxUnavailable: 1; I was able to trivially reproduce your experience by setting that value, and to trivially achieve the correct experience by setting maxUnavailable: 0.
Here's my "pseudo-proof" of how the scheduler arrived at the behavior you are experiencing:
Because replicas: 1, the desired state for k8s is exactly one Pod in Ready. During a Rolling Update operation, which is the strategy you requested, it will create a new Pod, bringing the total to 2. But you granted k8s permission to leave one Pod in an unavailable state, and you instructed it to keep the desired number of Pods at 1. Thus, it fulfilled all of those constraints: 1 Pod, the desired count, in an unavailable state, permitted by the R-U strategy.
By setting maxUnavailable to zero, you correctly direct k8s to never let any Pod be unavailable, even if that means surging Pods above the replica count for a short time.
With the strategy type set to RollingUpdate, a new pod is created before the old one is deleted, even with a single replica. The strategy type Recreate kills old pods before creating new ones.
https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-update-deployment
As answered already, you can set maxUnavailable to 0 to achieve the desired result. A couple of extra notes:
You should not expect this to work when using a stateful service that mounts a single specific volume that the new pod needs. The volume will still be attached to the soon-to-be-replaced pod, so it won't be able to attach to the new pod.
The documentation notes that you cannot set maxUnavailable to 0 if you have set .spec.strategy.rollingUpdate.maxSurge to 0.
https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#max-unavailable
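Putting those notes together, a minimal sketch of the zero-downtime rolling update block:

```yaml
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 0  # never take a Ready pod down before its replacement is available
    maxSurge: 1        # must be at least 1, since maxUnavailable is 0
```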
So we have a deployment that uses rolling updates. We need it to pause 180 seconds between each pod it brings up. My understanding is that I need to set minReadySeconds: 180, and to set rollingUpdate.maxUnavailable: 1 and rollingUpdate.maxSurge: 1, for the deployment to wait. With those settings it still brings the pods up as fast as it can. What am I missing?
The relevant part of my deployment:
spec:
  minReadySeconds: 180
  replicas: 9
  revisionHistoryLimit: 20
  selector:
    matchLabels:
      deployment: standard
      name: standard-pod
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
Assuming that a pod is ready after a fixed delay is not very idiomatic within an orchestrator like Kubernetes, as something may prevent the pod from starting successfully, or delay the start by another few seconds.
Instead, you could use liveness and readiness probes to make sure that the pod is there and ready to serve traffic before taking down the old pod.
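For example, a readiness probe sketch (the /healthz path, port, and timing values are assumptions for illustration, not taken from the deployment above):

```yaml
readinessProbe:
  httpGet:
    path: /healthz        # assumed health endpoint
    port: 8080            # assumed container port
  initialDelaySeconds: 10
  periodSeconds: 15
  failureThreshold: 3     # pod is marked Unready after 3 consecutive failures
```

Combined with minReadySeconds, the rollout then waits for each new pod to actually pass its health check rather than relying on a fixed delay.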
We updated our cluster to a newer version of Kubernetes and it started working.
Posted on behalf of the question asker.