Here's what I do:
Deploy a StatefulSet whose pod always exits with an error, to provoke a failing pod in status CrashLoopBackOff: kubectl apply -f error.yaml
Change error.yaml (echo a => echo b) and redeploy the StatefulSet: kubectl apply -f error.yaml
The pod keeps the error status and is not redeployed immediately; it is only replaced when Kubernetes restarts it after the back-off delay.
Requesting pod status:
$ kubectl get pod errordemo-0
NAME          READY   STATUS             RESTARTS   AGE
errordemo-0   0/1     CrashLoopBackOff   15         59m
error.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: errordemo
  labels:
    app.kubernetes.io/name: errordemo
spec:
  serviceName: errordemo
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: errordemo
  template:
    metadata:
      labels:
        app.kubernetes.io/name: errordemo
    spec:
      containers:
      - name: demox
        image: busybox:1.28.2
        command: ['sh', '-c', 'echo a; sleep 5; exit 1']
      terminationGracePeriodSeconds: 1
Questions
How can I achieve an immediate redeploy even if the pod has an error status?
I found these solutions, but I would like to have a single command to achieve that (in real life I'm using Helm and I just want to call helm upgrade for my deployments); a combined one-liner is sketched after this list:
Kill the pod before the redeploy
Scale down before the redeploy
Delete the statefulset before the redeploy
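A minimal sketch of such a one-liner, combining the redeploy with the pod kill (it assumes the pod name errordemo-0 from above; with Helm, the apply would be replaced by helm upgrade):
kubectl apply -f error.yaml && kubectl delete pod errordemo-0
The StatefulSet controller then recreates errordemo-0 immediately from the updated spec instead of waiting out the back-off delay.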
Why doesn't kubernetes redeploy the pod at once?
In my demo example I have to wait until Kubernetes tries to restart the pod after the back-off delay.
A pod with no error (e.g. echo a; sleep 10000;) is replaced immediately. That's why I set terminationGracePeriodSeconds: 1.
But in my real deployments (where I use Helm) I have also encountered cases where the pods are never redeployed. Unfortunately I cannot reproduce this behaviour in a simple example.
You could set spec.podManagementPolicy: "Parallel"
Parallel pod management tells the StatefulSet controller to launch or terminate all Pods in parallel, and not to wait for Pods to become Running and Ready or completely terminated prior to launching or terminating another Pod.
Remember that the default podManagementPolicy is OrderedReady
OrderedReady pod management is the default for StatefulSets. It tells the StatefulSet controller to respect the ordering guarantees demonstrated above
And if your application requires ordered updates, then there is nothing you can do.
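For reference, a minimal sketch of where that field goes in the question's manifest (only the podManagementPolicy line is new):
spec:
  podManagementPolicy: "Parallel"
  serviceName: errordemo
  replicas: 1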
Related
I wish to limit the resources used by the pods running in each of my namespaces, and therefore want to use resource quotas.
I am following this tutorial.
It works well, but I wish something a little different.
When trying to schedule a pod which would go over the limit of my quota, I get a 403 error.
What I want is for the request to be accepted, with the pod waiting in a Pending state until one of the other pods ends and frees some resources.
Any advice?
Instead of using a bare pod definition (kind: Pod), use a Deployment.
Why?
Pods in Kubernetes are designed as relatively ephemeral, disposable entities:
You'll rarely create individual Pods directly in Kubernetes—even singleton Pods. This is because Pods are designed as relatively ephemeral, disposable entities. When a Pod gets created (directly by you, or indirectly by a controller), the new Pod is scheduled to run on a Node in your cluster. The Pod remains on that node until the Pod finishes execution, the Pod object is deleted, the Pod is evicted for lack of resources, or the node fails.
Kubernetes assumes that for managing pods you should use workload resources instead of creating pods directly:
Pods are generally not created directly and are created using workload resources. See Working with Pods for more information on how Pods are used with workload resources.
Here are some examples of workload resources that manage one or more Pods:
Deployment
StatefulSet
DaemonSet
By using a Deployment you will get behaviour very similar to the one you want.
Example below:
Let's suppose that I created a pod quota for a custom namespace, set to "2" as in this example.
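For completeness, such a quota might look roughly like this; the name pod-demo matches the error message further down, but this is a sketch, not necessarily the tutorial's exact manifest:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: pod-demo
spec:
  hard:
    pods: "2"
With that quota in place, I have two pods running in this namespace: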
kubectl get pods -n quota-demo
NAME           READY   STATUS    RESTARTS   AGE
quota-demo-1   1/1     Running   0          75s
quota-demo-2   1/1     Running   0          6s
Third pod definition:
apiVersion: v1
kind: Pod
metadata:
  name: quota-demo-3
spec:
  containers:
  - name: quota-demo-3
    image: nginx
    ports:
    - containerPort: 80
Now I will try to apply this third pod in this namespace:
kubectl apply -f pod.yaml -n quota-demo
Error from server (Forbidden): error when creating "pod.yaml": pods "quota-demo-3" is forbidden: exceeded quota: pod-demo, requested: pods=1, used: pods=2, limited: pods=2
The pod is rejected outright, which is not the behaviour we want.
Now I will change the pod definition into a deployment definition:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: quota-demo-3-deployment
  labels:
    app: quota-demo-3
spec:
  selector:
    matchLabels:
      app: quota-demo-3
  template:
    metadata:
      labels:
        app: quota-demo-3
    spec:
      containers:
      - name: quota-demo-3
        image: nginx
        ports:
        - containerPort: 80
I will apply this deployment:
kubectl apply -f deployment-v3.yaml -n quota-demo
deployment.apps/quota-demo-3-deployment created
The deployment is created successfully, but there is no new pod. Let's check this deployment:
kubectl get deploy -n quota-demo
NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
quota-demo-3-deployment   0/1     0            0           12s
We can see that the pod quota is working: the deployment is watching the quota and waiting for the possibility to create a new pod.
Let's now delete one of the pods and check the deployment again:
kubectl delete pod quota-demo-2 -n quota-demo
pod "quota-demo-2" deleted
kubectl get deploy -n quota-demo
NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
quota-demo-3-deployment   1/1     1            1           2m50s
The pod from the deployment is created automatically after the old pod's deletion:
kubectl get pods -n quota-demo
NAME                                       READY   STATUS    RESTARTS   AGE
quota-demo-1                               1/1     Running   0          5m51s
quota-demo-3-deployment-7fd6ddcb69-nfmdj   1/1     Running   0          29s
It works the same way for memory and CPU quotas on a namespace: when the resources are freed, the deployment will automatically create new pods.
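For example, a compute quota of roughly the following shape would make the deployment wait in the same way until its pods' requests and limits fit (the name compute-quota and the values here are illustrative):
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi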
I have created a ReplicaSet with a wrong container image, using the configuration below.
apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
  name: rs-d33393
  namespace: default
spec:
  replicas: 4
  selector:
    matchLabels:
      name: busybox-pod
  template:
    metadata:
      labels:
        name: busybox-pod
    spec:
      containers:
      - command:
        - sh
        - -c
        - echo Hello Kubernetes! && sleep 3600
        image: busyboxXXXXXXX
        name: busybox-container
Pod information:
$ kubectl get pods
NAME              READY   STATUS             RESTARTS   AGE
rs-d33393-5hnfx   0/1     InvalidImageName   0          11m
rs-d33393-5rt5m   0/1     InvalidImageName   0          11m
rs-d33393-ngw78   0/1     InvalidImageName   0          11m
rs-d33393-vnpdh   0/1     InvalidImageName   0          11m
After this, I try to edit the image inside the ReplicaSet using kubectl edit replicasets.extensions rs-d33393 and update the image to busybox.
Now, I am expecting the pods to be recreated with the proper image as part of the ReplicaSet.
That is not what happens.
Can someone please explain, why it is so?
Thanks :)
With ReplicaSets directly you have to kill the old pods yourself, so that the new ones are created with the right image.
If you were using a Deployment (and you should be), changing the image would force the pods to be re-created.
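For example, after fixing the image you could kill all of the ReplicaSet's pods in one go via the selector label from the manifest above; the ReplicaSet then recreates them with the updated image:
kubectl delete pods -l name=busybox-pod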
A ReplicaSet does not support updates. As long as the required number of pods exists matching the selector labels, the ReplicaSet's job is done. You should use a Deployment instead.
https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/
From the docs:
To update Pods to a new spec in a controlled way, use a Deployment, as
ReplicaSets do not support a rolling update directly.
Deployment is a higher-level concept that manages ReplicaSets and provides declarative updates to Pods. Therefore, it is recommended to use Deployments instead of directly using ReplicaSets, unless you don't require updates at all (i.e. one may never need to manipulate ReplicaSet objects when using a Deployment).
It's easy to perform rolling updates and rollbacks when deploying using Deployments.
$ kubectl create deployment busybox --image=busyboxxxxxxx --dry-run -o yaml > busybox.yaml
$ cat busybox.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: busybox
  name: busybox
spec:
  replicas: 1
  selector:
    matchLabels:
      app: busybox
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: busybox
    spec:
      containers:
      - image: busyboxxxxxxx
        name: busyboxxxxxxx
ubuntu@dlv-k8s-cluster-master:~$ kubectl create -f busybox.yaml --record=true
deployment.apps/busybox created
Check rollout history
ubuntu@dlv-k8s-cluster-master:~$ kubectl rollout history deployment busybox
deployment.apps/busybox
REVISION   CHANGE-CAUSE
1          kubectl create --filename=busybox.yaml --record=true
Update image on deployment
ubuntu@dlv-k8s-cluster-master:~$ kubectl set image deployment.apps/busybox *=busybox --record
deployment.apps/busybox image updated
ubuntu@dlv-k8s-cluster-master:~$ kubectl rollout history deployment busybox
deployment.apps/busybox
REVISION   CHANGE-CAUSE
1          kubectl create --filename=busybox.yaml --record=true
2          kubectl set image deployment.apps/busybox *=busybox --record=true
Rollback Deployment
ubuntu@dlv-k8s-cluster-master:~$ kubectl rollout undo deployment busybox
deployment.apps/busybox rolled back
ubuntu@dlv-k8s-cluster-master:~$ kubectl rollout history deployment busybox
deployment.apps/busybox
REVISION   CHANGE-CAUSE
2          kubectl set image deployment.apps/busybox *=busybox --record=true
3          kubectl create --filename=busybox.yaml --record=true
You could use:
kubectl scale rs rs-d33393 --replicas=0
and then:
kubectl scale rs rs-d33393 --replicas=<your number of replicas>
You can also edit the ReplicaSet in place with:
kubectl edit rs rs-d33393
Then edit the image name in the editor, save the change, and exit the editor.
Now, you will need to either delete the ReplicaSet (and re-apply your updated manifest):
kubectl delete rs rs-d33393
or just delete the existing pods:
kubectl delete pod pod_1 pod_2 pod_3 pod_4
The ReplicaSet should then spin up new pods with the new image.
What is the command to delete a replication controller and its pods?
I am taking a course to learn k8s on Pluralsight. I am trying to delete the pods that I have just created using a ReplicationController. Following is my YAML:
apiVersion: v1
kind: ReplicationController
metadata:
  name: hello-rc
spec:
  replicas: 2
  selector:
    app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-ctr
        image: nigelpoulton/pluralsight-docker-ci:latest
        ports:
        - containerPort: 8080
If I do kubectl get pods, the following is how it looks on my Mac:
I have tried the following two commands to delete the pods that were created in the Minikube cluster on my Mac, but they are not working:
kubectl delete pods hello-world
kubectl delete pods hello-rc
Could someone help me understand what I am missing?
You can delete the pods by deleting the ReplicationController that created them:
kubectl delete rc hello-rc
Also, because the created pods are merely managed by the ReplicationController, you can delete only the ReplicationController and leave the pods running:
kubectl delete rc hello-rc --cascade=false
This means the pods are no longer managed. You can create a new ReplicationController with the proper label selector and manage them again.
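A sketch of that round trip, assuming the manifest is saved as hello-rc.yaml (the filename is an assumption):
kubectl delete rc hello-rc --cascade=false   # orphans the pods
kubectl apply -f hello-rc.yaml               # a new RC with selector app: hello-world adopts them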
Also, instead of ReplicationControllers, you can use ReplicaSets. They behave in a similar way, but they have more expressive pod selectors. For example, a single ReplicationController cannot match pods with two different values of a label at the same time, whereas a ReplicaSet can.
The command below is enough:
kubectl delete rc hello-rc
One more thing: ReplicationControllers are now considered legacy; ReplicaSets are preferred.
I have a deployment with a defined number of replicas. I use a readiness probe to communicate whether my Pod is ready or not ready to handle new connections; my Pods toggle between the ready and not-ready states during their lifetime.
I want Kubernetes to scale the deployment up/ down to ensure that there is always the desired number of pods in a ready state.
Example:
If replicas is 4 and there are 4 Pods in ready state, then Kubernetes should keep the current replica count.
If replicas is 4 and there are 2 ready pods and 2 not ready pods, then Kubernetes should add 2 more pods.
How do I make Kubernetes scale my deployment based on the "ready"/ "not ready" status of my Pods?
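For context, the toggling described above comes from a probe of roughly this shape in the pod template (the path and port here are placeholders):
readinessProbe:
  httpGet:
    path: /healthz
    port: 8080
  periodSeconds: 5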
I don't think this is possible. If a pod is not ready, Kubernetes will not make it ready, because readiness is something related to your application. Even if Kubernetes created a new pod, there is no guarantee the new pod would become ready either. So you have to resolve the reasons behind the not-ready status yourself. The only thing Kubernetes does is keep not-ready pods out of Service load balancing, to avoid request failures.
Ensuring you always have 4 pods running can be done by specifying the replicas property in your deployment definition:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 4 # here we define a requirement for 4 replicas
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
Kubernetes will ensure that if any pods crash, replacement pods will be created so that a total of 4 are always available.
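A quick way to observe this self-healing (the pod name suffix will differ in your cluster):
kubectl delete pod nginx-deployment-<pod-suffix>
kubectl get pods -w
The deleted pod disappears and a replacement is created within seconds, keeping the total at 4.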
Note that pods cannot be scheduled on unhealthy nodes: the control plane will only place pods on nodes which are healthy, schedulable, and meet the quota criteria for any additional pods.
Moreover, what you describe is Kubernetes' self-healing (auto-heal) concept, which in basic terms is taken care of for you.
The following is the file used to create the Deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kloud-php7
  namespace: kloud-hosting
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: kloud-php7
    spec:
      containers:
      - name: kloud-php7
        image: 192.168.1.1:5000/kloud-php7
      - name: kloud-nginx
        image: 192.168.1.1:5000/kloud-nginx
        ports:
        - containerPort: 80
The Deployment and the Pod worked fine, but after deleting the Deployment and a generated ReplicaSet, I cannot delete the spawned Pods permanently. New Pods are created whenever the old ones are deleted.
The Kubernetes cluster was created with Kargo and contains 4 nodes running CentOS 7.3, Kubernetes version 1.5.6.
Any idea how to solve this problem?
This is working as intended. The Deployment creates (and recreates) a ReplicaSet and the ReplicaSet creates (and recreates!) Pods. You need to delete the Deployment, not the Pods or the ReplicaSet:
kubectl delete deploy -n kloud-hosting kloud-php7
This is because the ReplicaSet always recreates the pods to match the replica count in the deployment file (say it specifies 3; Kubernetes then always makes sure that 3 pods are up and running), so here we need to delete the ReplicaSet first to get rid of the pods.
List the ReplicaSets with:
kubectl get rs
and delete the ReplicaSet (kubectl delete rs <name>); this will in turn delete its pods.
It could also be that DaemonSets need to be deleted.
For example:
$ kubectl get DaemonSets
NAME                            DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
elasticsearch-operator-sysctl   5         5         5       5            5           <none>          6d
$ kubectl delete daemonsets elasticsearch-operator-sysctl
Now running kubectl get pods should not list the elasticsearch* pods.