Control job deletions - Kubernetes

We have cronjob monitoring in our cluster. If a pod did not appear within 24 hours, it means the cronjob hasn't run and we need to alert. But sometimes, due to garbage collection, the pod is deleted (even though the job completed successfully). How can we keep all pods and avoid garbage collection? I know about finalizers, but it looks like they don't work in this case.

Answer
Posting this as an answer since it explains one reason why this can happen.
Cloud Kubernetes clusters have node autoscaling policies, or node pools can be scaled up/down manually.
A CronJob creates a Job for each run, which in turn creates a corresponding pod. Pods are scheduled onto specific nodes, so if for any reason a node with pods assigned to it is removed due to autoscaling or manual scaling, those pods are gone. The Jobs, however, are preserved, since they are stored in etcd.
There are two fields which control the number of jobs kept in the history:
.spec.successfulJobsHistoryLimit - set to 3 by default
.spec.failedJobsHistoryLimit - set to 1 by default
Setting them to 0 removes jobs right after they finish.
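For example, both fields sit directly under the CronJob spec; a minimal fragment with illustrative values could look like this:
spec:
  schedule: "0 * * * *"            # illustrative schedule
  successfulJobsHistoryLimit: 10   # keep more than the default of 3
  failedJobsHistoryLimit: 5        # keep more than the default of 1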
Jobs History Limits
How it happens in practice
I have a GCP GKE cluster with two nodes:
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
gke-cluster-xxxx Ready <none> 15h v1.21.3-gke.2001
gke-cluster-yyyy Ready <none> 3d20h v1.21.3-gke.2001
cronjob.yaml for testing:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: test-cronjob
spec:
  schedule: "*/2 * * * *"
  successfulJobsHistoryLimit: 5
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: test
            image: busybox
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
Pods created:
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
test-cronjob-27253914-mxnzg 0/1 Completed 0 8m59s 10.24.0.22 gke-cluster-4-xxxx <none> <none>
test-cronjob-27253916-88cjn 0/1 Completed 0 6m59s 10.24.0.25 gke-cluster-4-xxxx <none> <none>
test-cronjob-27253918-hdcg9 0/1 Completed 0 4m59s 10.24.0.29 gke-cluster-4-xxxx <none> <none>
test-cronjob-27253920-shnnp 0/1 Completed 0 2m59s 10.24.1.15 gke-cluster-4-yyyy <none> <none>
test-cronjob-27253922-cw5gp 0/1 Completed 0 59s 10.24.1.18 gke-cluster-4-yyyy <none> <none>
Scaling down one node:
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
gke-cluster-4-xxxx NotReady,SchedulingDisabled <none> 16h v1.21.3-gke.2001
gke-cluster-4-yyyy Ready <none> 3d21h v1.21.3-gke.2001
And getting pods now:
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
test-cronjob-27253920-shnnp 0/1 Completed 0 7m47s 10.24.1.15 gke-cluster-4-yyyy <none> <none>
test-cronjob-27253922-cw5gp 0/1 Completed 0 5m47s 10.24.1.18 gke-cluster-4-yyyy <none> <none>
Previously completed pods on the first node are gone now.
Jobs are still in place:
$ kubectl get jobs
NAME COMPLETIONS DURATION AGE
test-cronjob-27253914 1/1 1s 13m
test-cronjob-27253916 1/1 2s 11m
test-cronjob-27253918 1/1 1s 9m55s
test-cronjob-27253920 1/1 34s 7m55s
test-cronjob-27253922 1/1 2s 5m55s
How it can be solved
Changing the monitoring alert to look for job completion instead is a much more precise method, and it is independent of any cluster node scaling actions.
E.g. I can still retrieve the result from job test-cronjob-27253916 even though its corresponding pod has been deleted:
$ kubectl get job test-cronjob-27253916 -o jsonpath='{.status.succeeded}'
1
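Along the same lines, a monitoring script could list every job together with its completion time instead of looking for pods; a minimal sketch (the jsonpath fields are standard Job status fields, adapt the command to your tooling):
$ kubectl get jobs -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.succeeded}{"\t"}{.status.completionTime}{"\n"}{end}'
# prints one line per job: name, number of succeeded pods, completion timestamp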
Useful links:
Jobs history limits
Garbage collection in kubernetes
TTL controller for finished resources

Related

Rook Ceph broken on Kubernetes?

Using Ceph v1.14.10, Rook v1.3.8 on k8s 1.16 on-premise. After 10 days without any trouble, we decided to drain some nodes; since then, all of the moved pods can't attach to their PVs anymore. It looks like the Ceph cluster is broken:
My ConfigMap rook-ceph-mon-endpoints is referencing 2 missing mon pod IPs:
csi-cluster-config-json: '[{"clusterID":"rook-ceph","monitors":["10.115.0.129:6789","10.115.0.4:6789","10.115.0.132:6789"]}]'
But
kubectl -n rook-ceph get pod -l app=rook-ceph-mon -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
rook-ceph-mon-e-56b849775-4g5wg 1/1 Running 0 6h42m 10.115.0.2 XXXX <none> <none>
rook-ceph-mon-h-fc486fb5c-8mvng 1/1 Running 0 6h42m 10.115.0.134 XXXX <none> <none>
rook-ceph-mon-i-65666fcff4-4ft49 1/1 Running 0 30h 10.115.0.132 XXXX <none> <none>
Is this normal, or must I run some kind of "reconciliation" task to update the CM with the new mon pod IPs?
(could be related to https://github.com/rook/rook/issues/2262)
I had to manually update:
secret rook-ceph-config
cm rook-ceph-mon-endpoints
cm rook-ceph-csi-config
As @travisn said:
The operator owns updating that configmap and secret. It's not expected to update them manually unless there is some disaster recovery situation as described at https://rook.github.io/docs/rook/v1.4/ceph-disaster-recovery.html.
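For reference, a rough sketch of how those objects can be inspected and edited by hand in such a recovery situation (object names as listed above; follow the linked disaster-recovery guide before changing anything):
# inspect the current mon endpoints, CSI config and config secret
kubectl -n rook-ceph get cm rook-ceph-mon-endpoints -o yaml
kubectl -n rook-ceph get cm rook-ceph-csi-config -o yaml
kubectl -n rook-ceph get secret rook-ceph-config -o yaml
# edit them in place so they point at the current mon pod IPs
kubectl -n rook-ceph edit cm rook-ceph-mon-endpoints
kubectl -n rook-ceph edit cm rook-ceph-csi-config
kubectl -n rook-ceph edit secret rook-ceph-config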

How to scale a kubernetes cluster while limiting costs on GCP

We have a GKE cluster set up on google cloud platform.
We have an activity that requires 'bursts' of computing power.
Imagine that we usually do 100 computations an hour on average, then suddenly we need to be able to process 100,000 in less than two minutes. However, most of the time everything is close to idle.
We do not want to pay for idle servers 99% of the time, and we want to scale the cluster depending on actual use (no data persistence is needed; servers can be deleted afterwards). I looked up the Kubernetes documentation on autoscaling: adding more pods with HPA and adding more nodes with the cluster autoscaler.
However, it doesn't seem like any of these solutions would actually reduce our costs or improve performance, because they do not seem to scale past the GCP plan:
Imagine that we have a Google plan with 8 CPUs. My understanding is that if we add more nodes with the cluster autoscaler, then instead of having e.g. 2 nodes using 4 CPUs each we will have 4 nodes using 2 CPUs each, but the total available computing power will still be 8 CPUs.
The same reasoning goes for HPA, with more pods instead of more nodes.
If we have the 8-CPU payment plan but only use 4 of them, my understanding is that we still get billed for 8, so scaling down is not really useful.
What we want is for autoscaling to change our payment plan temporarily (imagine going from n1-standard-8 to n1-standard-16) and get actual new computing power.
I can't believe we are the only ones with this use case, but I cannot find any documentation on this anywhere! Did I misunderstand something?
TL;DR:
Create a small persistent node-pool.
Create a powerful node-pool that can be scaled to zero (and stop incurring charges) while not in use.
Tools used:
GKE’s Cluster Autoscaling, Node selector, Anti-affinity rules and Taints and tolerations.
GKE Pricing:
From GKE Pricing:
Starting June 6, 2020, GKE will charge a cluster management fee of $0.10 per cluster per hour. The following conditions apply to the cluster management fee:
One zonal cluster per billing account is free.
The fee is flat, irrespective of cluster size and topology.
Billing is computed on a per-second basis for each cluster. The total amount is rounded to the nearest cent, at the end of each month.
From Pricing for Worker Nodes:
GKE uses Compute Engine instances for worker nodes in the cluster. You are billed for each of those instances according to Compute Engine's pricing, until the nodes are deleted. Compute Engine resources are billed on a per-second basis with a one-minute minimum usage cost.
Enter Cluster Autoscaler:
automatically resize your GKE cluster’s node pools based on the demands of your workloads. When demand is high, cluster autoscaler adds nodes to the node pool. When demand is low, cluster autoscaler scales back down to a minimum size that you designate. This can increase the availability of your workloads when you need it, while controlling costs.
Cluster Autoscaler cannot scale the entire cluster to zero; at least one node must always be available in the cluster to run system pods.
Since you already have a persistent workload, this won't be a problem. What we will do is create a new node pool:
A node pool is a group of nodes within a cluster that all have the same configuration. Every cluster has at least one default node pool, but you can add other node pools as needed.
For this example I'll create two node pools:
A default node pool with a fixed size of one node with a small instance size (emulating the cluster you already have).
A second node pool with more compute power to run the jobs (I'll call it power-pool).
Choose the machine type with the power you need to run your AI jobs; for this example I'll create an n1-standard-8.
This power-pool will have autoscaling configured to allow a maximum of 4 nodes and a minimum of 0 nodes.
If you'd like to add GPUs, you can check this great guide: Scale to almost zero + GPUs.
Taints and Tolerations:
Only the jobs related to the AI workload will run on the power-pool. For that, use a node selector in the job pods to make sure they run on the power-pool nodes.
Set an anti-affinity rule to ensure that two of your training pods cannot be scheduled on the same node (optimizing the price-performance ratio; this is optional depending on your workload).
Add a taint to the power-pool to prevent other workloads (and system pods) from being scheduled on the autoscalable pool.
Add the tolerations to the AI Jobs to let them run on those nodes.
Reproduction:
Create the Cluster with the persistent default-pool:
PROJECT_ID="YOUR_PROJECT_ID"
GCP_ZONE="CLUSTER_ZONE"
GKE_CLUSTER_NAME="CLUSTER_NAME"
AUTOSCALE_POOL="power-pool"
gcloud container clusters create ${GKE_CLUSTER_NAME} \
  --machine-type="n1-standard-1" \
  --num-nodes=1 \
  --zone=${GCP_ZONE} \
  --project=${PROJECT_ID}
Create the auto-scale pool:
gcloud container node-pools create ${AUTOSCALE_POOL} \
  --cluster=${GKE_CLUSTER_NAME} \
  --machine-type=n1-standard-8 \
  --node-labels=load=on-demand \
  --node-taints=reserved-pool=true:NoSchedule \
  --enable-autoscaling \
  --min-nodes=0 \
  --max-nodes=4 \
  --zone=${GCP_ZONE} \
  --project=${PROJECT_ID}
Note about parameters:
--node-labels=load=on-demand: adds a label to the nodes in the power pool so we can select them in our AI job using a node selector.
--node-taints=reserved-pool=true:NoSchedule: adds a taint to the nodes to prevent any other workload from accidentally being scheduled on this node pool (both can be verified with the quick check below).
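As a quick sanity check (optional), once a node from the power pool has come up you can verify that the label and taint landed on it, for example:
# list only nodes carrying the power-pool label
kubectl get nodes -l load=on-demand
# show the taints on one of those nodes (replace NODE_NAME with a real node name)
kubectl describe node NODE_NAME | grep -i taints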
Here you can see the two pools we created: the static pool with 1 node and the autoscalable pool with 0-4 nodes.
Since we don't have any workload running on the autoscalable node pool, it shows 0 nodes running (and there is no charge while no node is running).
Now we'll create a job that creates 4 parallel pods that run for 5 minutes.
This job will have the following parameters to differentiate it from normal pods:
parallelism: 4: to use all 4 nodes to enhance performance
nodeSelector.load: on-demand: to assign the pods to the nodes with that label.
podAntiAffinity: to declare that we do not want two pods with the same label app: greedy-app running on the same node (optional).
tolerations: to match the toleration to the taint that we attached to the nodes, so these pods are allowed to be scheduled in these nodes.
apiVersion: batch/v1
kind: Job
metadata:
  name: greedy-job
spec:
  parallelism: 4
  template:
    metadata:
      name: greedy-job
      labels:
        app: greedy-app
    spec:
      containers:
      - name: busybox
        image: busybox
        args:
        - sleep
        - "300"
      nodeSelector:
        load: on-demand
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - greedy-app
            topologyKey: "kubernetes.io/hostname"
      tolerations:
      - key: reserved-pool
        operator: Equal
        value: "true"
        effect: NoSchedule
      restartPolicy: OnFailure
Now that our cluster is in standby, we will use the job YAML we just created (I'll call it greedyjob.yaml). This job runs four processes in parallel, and they complete after about 5 minutes.
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
gke-autoscale-to-zero-cl-default-pool-9f6d80d3-x9lb Ready <none> 42m v1.14.10-gke.27
$ kubectl get pods
No resources found in default namespace.
$ kubectl apply -f greedyjob.yaml
job.batch/greedy-job created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
greedy-job-2xbvx 0/1 Pending 0 11s
greedy-job-72j8r 0/1 Pending 0 11s
greedy-job-9dfdt 0/1 Pending 0 11s
greedy-job-wqct9 0/1 Pending 0 11s
Our job was applied, but the pods are Pending; let's see what's going on with them:
$ kubectl describe pod greedy-job-2xbvx
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 28s (x2 over 28s) default-scheduler 0/1 nodes are available: 1 node(s) didn't match node selector.
Normal TriggeredScaleUp 23s cluster-autoscaler pod triggered scale-up: [{https://content.googleapis.com/compute/v1/projects/owilliam/zones/us-central1-b/instanceGroups/gke-autoscale-to-zero-clus-power-pool-564148fd-grp 0->1 (max: 4)}]
The pod can't be scheduled on the current node due to the rules we defined; this triggers a scale-up routine on our power-pool. It is a very dynamic process: after 90 seconds the first node is up and running:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
greedy-job-2xbvx 0/1 Pending 0 93s
greedy-job-72j8r 0/1 ContainerCreating 0 93s
greedy-job-9dfdt 0/1 Pending 0 93s
greedy-job-wqct9 0/1 Pending 0 93s
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
gke-autoscale-to-zero-cl-default-pool-9f6d80d3-x9lb Ready <none> 44m v1.14.10-gke.27
gke-autoscale-to-zero-clus-power-pool-564148fd-qxkw Ready <none> 11s v1.14.10-gke.27
Since we set pod anti-affinity rules, the second pod can't be scheduled on the node that was just brought up, so it triggers the next scale-up; take a look at the events on the pod:
$ k describe pod greedy-job-2xbvx
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal TriggeredScaleUp 2m45s cluster-autoscaler pod triggered scale-up: [{https://content.googleapis.com/compute/v1/projects/owilliam/zones/us-central1-b/instanceGroups/gke-autoscale-to-zero-clus-power-pool-564148fd-grp 0->1 (max: 4)}]
Warning FailedScheduling 93s (x3 over 2m50s) default-scheduler 0/1 nodes are available: 1 node(s) didn't match node selector.
Warning FailedScheduling 79s (x3 over 83s) default-scheduler 0/2 nodes are available: 1 node(s) didn't match node selector, 1 node(s) had taints that the pod didn't tolerate.
Normal TriggeredScaleUp 62s cluster-autoscaler pod triggered scale-up: [{https://content.googleapis.com/compute/v1/projects/owilliam/zones/us-central1-b/instanceGroups/gke-autoscale-to-zero-clus-power-pool-564148fd-grp 1->2 (max: 4)}]
Warning FailedScheduling 3s (x3 over 68s) default-scheduler 0/2 nodes are available: 1 node(s) didn't match node selector, 1 node(s) didn't match pod affinity/anti-affinity, 1 node(s) didn't satisfy existing pods anti-affinity rules.
The same process repeats until all requirements are satisfied:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
greedy-job-2xbvx 0/1 Pending 0 3m39s
greedy-job-72j8r 1/1 Running 0 3m39s
greedy-job-9dfdt 0/1 Pending 0 3m39s
greedy-job-wqct9 1/1 Running 0 3m39s
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
gke-autoscale-to-zero-cl-default-pool-9f6d80d3-x9lb Ready <none> 46m v1.14.10-gke.27
gke-autoscale-to-zero-clus-power-pool-564148fd-qxkw Ready <none> 2m16s v1.14.10-gke.27
gke-autoscale-to-zero-clus-power-pool-564148fd-sf6q Ready <none> 28s v1.14.10-gke.27
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
greedy-job-2xbvx 0/1 Pending 0 5m19s
greedy-job-72j8r 1/1 Running 0 5m19s
greedy-job-9dfdt 1/1 Running 0 5m19s
greedy-job-wqct9 1/1 Running 0 5m19s
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
gke-autoscale-to-zero-cl-default-pool-9f6d80d3-x9lb Ready <none> 48m v1.14.10-gke.27
gke-autoscale-to-zero-clus-power-pool-564148fd-39m2 Ready <none> 63s v1.14.10-gke.27
gke-autoscale-to-zero-clus-power-pool-564148fd-qxkw Ready <none> 4m8s v1.14.10-gke.27
gke-autoscale-to-zero-clus-power-pool-564148fd-sf6q Ready <none> 2m20s v1.14.10-gke.27
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
greedy-job-2xbvx 1/1 Running 0 6m12s
greedy-job-72j8r 1/1 Running 0 6m12s
greedy-job-9dfdt 1/1 Running 0 6m12s
greedy-job-wqct9 1/1 Running 0 6m12s
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
gke-autoscale-to-zero-cl-default-pool-9f6d80d3-x9lb Ready <none> 48m v1.14.10-gke.27
gke-autoscale-to-zero-clus-power-pool-564148fd-39m2 Ready <none> 113s v1.14.10-gke.27
gke-autoscale-to-zero-clus-power-pool-564148fd-ggxv Ready <none> 26s v1.14.10-gke.27
gke-autoscale-to-zero-clus-power-pool-564148fd-qxkw Ready <none> 4m58s v1.14.10-gke.27
gke-autoscale-to-zero-clus-power-pool-564148fd-sf6q Ready <none> 3m10s v1.14.10-gke.27
Here we can see that all nodes are now up and running (and thus being billed by the second).
Now all the pods are running; after a few minutes they complete their tasks:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
greedy-job-2xbvx 1/1 Running 0 7m22s
greedy-job-72j8r 0/1 Completed 0 7m22s
greedy-job-9dfdt 1/1 Running 0 7m22s
greedy-job-wqct9 1/1 Running 0 7m22s
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
greedy-job-2xbvx 0/1 Completed 0 11m
greedy-job-72j8r 0/1 Completed 0 11m
greedy-job-9dfdt 0/1 Completed 0 11m
greedy-job-wqct9 0/1 Completed 0 11m
Once the task is completed, the autoscaler starts downsizing the cluster.
You can learn more about the rules for this process here: GKE Cluster AutoScaler
$ while true; do kubectl get nodes ; sleep 60; done
NAME STATUS ROLES AGE VERSION
gke-autoscale-to-zero-cl-default-pool-9f6d80d3-x9lb Ready <none> 54m v1.14.10-gke.27
gke-autoscale-to-zero-clus-power-pool-564148fd-39m2 Ready <none> 7m26s v1.14.10-gke.27
gke-autoscale-to-zero-clus-power-pool-564148fd-ggxv Ready <none> 5m59s v1.14.10-gke.27
gke-autoscale-to-zero-clus-power-pool-564148fd-qxkw Ready <none> 10m v1.14.10-gke.27
gke-autoscale-to-zero-clus-power-pool-564148fd-sf6q Ready <none> 8m43s v1.14.10-gke.27
NAME STATUS ROLES AGE VERSION
gke-autoscale-to-zero-cl-default-pool-9f6d80d3-x9lb Ready <none> 62m v1.14.10-gke.27
gke-autoscale-to-zero-clus-power-pool-564148fd-39m2 Ready <none> 15m v1.14.10-gke.27
gke-autoscale-to-zero-clus-power-pool-564148fd-ggxv Ready <none> 14m v1.14.10-gke.27
gke-autoscale-to-zero-clus-power-pool-564148fd-qxkw Ready <none> 18m v1.14.10-gke.27
gke-autoscale-to-zero-clus-power-pool-564148fd-sf6q NotReady <none> 16m v1.14.10-gke.27
Once the conditions are met, the autoscaler flags the nodes as NotReady and starts removing them:
NAME STATUS ROLES AGE VERSION
gke-autoscale-to-zero-cl-default-pool-9f6d80d3-x9lb Ready <none> 64m v1.14.10-gke.27
gke-autoscale-to-zero-clus-power-pool-564148fd-39m2 NotReady <none> 17m v1.14.10-gke.27
gke-autoscale-to-zero-clus-power-pool-564148fd-ggxv NotReady <none> 16m v1.14.10-gke.27
gke-autoscale-to-zero-clus-power-pool-564148fd-qxkw Ready <none> 20m v1.14.10-gke.27
NAME STATUS ROLES AGE VERSION
gke-autoscale-to-zero-cl-default-pool-9f6d80d3-x9lb Ready <none> 65m v1.14.10-gke.27
gke-autoscale-to-zero-clus-power-pool-564148fd-39m2 NotReady <none> 18m v1.14.10-gke.27
gke-autoscale-to-zero-clus-power-pool-564148fd-ggxv NotReady <none> 17m v1.14.10-gke.27
gke-autoscale-to-zero-clus-power-pool-564148fd-qxkw NotReady <none> 21m v1.14.10-gke.27
NAME STATUS ROLES AGE VERSION
gke-autoscale-to-zero-cl-default-pool-9f6d80d3-x9lb Ready <none> 66m v1.14.10-gke.27
gke-autoscale-to-zero-clus-power-pool-564148fd-ggxv NotReady <none> 18m v1.14.10-gke.27
NAME STATUS ROLES AGE VERSION
gke-autoscale-to-zero-cl-default-pool-9f6d80d3-x9lb Ready <none> 67m v1.14.10-gke.27
Here is the confirmation that the nodes were removed both from GKE and from Compute Engine (remember that every node is a Virtual Machine billed through Compute Engine). In the original Compute Engine and GKE console screenshots (not reproduced here), the only remaining node in the gke-autoscale-to-zero cluster is the default persistent one; the gke-cluster-1-default-pool instance visible there belongs to another cluster.
Final Thoughts:
When scaling down, the cluster autoscaler respects the scheduling and eviction rules set on Pods; these restrictions can prevent a node from being deleted by the autoscaler if it contains a Pod matching any of the conditions listed in the GKE Cluster Autoscaler documentation linked above.
An application's PodDisruptionBudget can also prevent autoscaling; if deleting nodes would cause the budget to be exceeded, the cluster does not scale down.
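For illustration only, a minimal PodDisruptionBudget sketch (hypothetical name, label matching the job template above) that would keep the autoscaler from evicting too many of those pods at once:
apiVersion: policy/v1beta1   # use policy/v1 on Kubernetes 1.21+
kind: PodDisruptionBudget
metadata:
  name: greedy-app-pdb       # hypothetical name
spec:
  minAvailable: 3            # at most one of the four pods may be disrupted at a time
  selector:
    matchLabels:
      app: greedy-app        # same label as in the job template above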
Note that the process is really fast: in our example it took around 90 seconds to scale up a node and about 5 minutes to finish scaling down a standby node, providing a HUGE improvement to your billing.
Preemptible VMs can reduce your billing even further, but you will have to consider the kind of workload you are running:
Preemptible VMs are Compute Engine VM instances that last a maximum of 24 hours and provide no availability guarantees. Preemptible VMs are priced lower than standard Compute Engine VMs and offer the same machine types and options.
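If that trade-off is acceptable for your jobs, the power pool could be created as preemptible simply by adding the --preemptible flag to the node-pool command shown earlier; a sketch reusing the same variables:
gcloud container node-pools create ${AUTOSCALE_POOL} \
  --cluster=${GKE_CLUSTER_NAME} \
  --machine-type=n1-standard-8 \
  --preemptible \
  --node-labels=load=on-demand \
  --node-taints=reserved-pool=true:NoSchedule \
  --enable-autoscaling \
  --min-nodes=0 \
  --max-nodes=4 \
  --zone=${GCP_ZONE} \
  --project=${PROJECT_ID}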
I know you are still considering the best architecture for your app.
Using App Engine or AI Platform would be optimal solutions as well, but since you are currently running your workload on GKE I wanted to show you an example as requested.
If you have any further questions let me know in the comments.

How to reserve certain worker nodes for a namespace

I would like to reserve some worker nodes for a namespace. I have seen these notes on Stack Overflow and Medium:
How to assign a namespace to certain nodes?
https://medium.com/@alejandro.ramirez.ch/reserving-a-kubernetes-node-for-specific-nodes-e75dc8297076
I understand we can use taints and a nodeSelector to achieve that.
My question is: if people get to know the details of the nodeSelector or taint, how can we prevent them from deploying pods onto these dedicated worker nodes?
Thank you.
To accomplish what you need, you basically have to use taints.
Let's suppose you have a Kubernetes cluster with one Master and 2 Worker nodes:
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
knode01 Ready <none> 8d v1.16.2
knode02 Ready <none> 8d v1.16.2
kubemaster Ready master 8d v1.16.2
As an example, I'll set up knode01 as Prod and knode02 as Dev.
$ kubectl taint nodes knode01 key=prod:NoSchedule
$ kubectl taint nodes knode02 key=dev:NoSchedule
To run a pod on these nodes, we have to specify a toleration in the spec section of your YAML file:
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  tolerations:
  - key: "key"
    operator: "Equal"
    value: "dev"
    effect: "NoSchedule"
This pod (pod1) will always run on knode02 because it's set up as dev. If we want to run it on prod, our tolerations should look like this:
  tolerations:
  - key: "key"
    operator: "Equal"
    value: "prod"
    effect: "NoSchedule"
Since we have only 2 nodes and both are set up to run only prod or dev, if we try to run a pod without specifying tolerations, the pod will stay in a Pending state:
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod0 1/1 Running 0 21m 192.168.25.156 knode01 <none> <none>
pod1 1/1 Running 0 20m 192.168.32.83 knode02 <none> <none>
pod2 1/1 Running 0 18m 192.168.25.157 knode01 <none> <none>
pod3 1/1 Running 0 17m 192.168.32.84 knode02 <none> <none>
shell-demo 0/1 Pending 0 16m <none> <none> <none> <none>
To remove a taint:
$ kubectl taint nodes knode02 key:NoSchedule-
This is how it can be done:
Add a new label, say ns=reserved, to a specific worker node.
Add a taint and tolerations to target specific pods onto this worker node.
Define RBAC roles and role bindings in that namespace to control what other users can do (a rough sketch follows below).
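As a rough sketch of that RBAC piece (names are hypothetical; adjust the namespace, resources and verbs to your needs), a namespaced Role and RoleBinding could look like this:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: reserved-ns-editor        # hypothetical name
  namespace: reserved-ns          # the namespace tied to the dedicated nodes
rules:
- apiGroups: ["", "apps", "batch"]
  resources: ["pods", "deployments", "jobs"]
  verbs: ["get", "list", "watch", "create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: reserved-ns-editor-binding
  namespace: reserved-ns
subjects:
- kind: User
  name: trusted-user              # only this user gets to manage workloads here
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: reserved-ns-editor
  apiGroup: rbac.authorization.k8s.io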

Debugging kubernetes pods in Completed status forever but not ready

Would there be any reason why a pod I'm trying to run on a k8s cluster stays in 'Completed' status forever but is never ready ('Ready 0/1')? The coredns, kube-proxy, etc. pods scheduled on each node in the node pool assigned to the cluster are running successfully, and all the worker nodes seem to be in a healthy state.
This sounds like the pod lifecycle has ended, and the reason is probably that your pod finished the task it was meant for.
Something like the next example will not do anything: it will be created successfully, it will pull the image and start, and then it will be marked as Completed.
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: test-pod
    image: busybox
    resources:
Here is how it will look:
NAMESPACE NAME READY STATUS RESTARTS AGE
default my-pod 0/1 Completed 1 3s
default testapp-1 0/1 Pending 0 92m
default web-0 1/1 Running 0 3h2m
kube-system event-exporter-v0.2.5-7df89f4b8f-mc7kt 2/2 Running 0 7d19h
kube-system fluentd-gcp-scaler-54ccb89d5-9pp8z 1/1 Running 0 7d19h
kube-system fluentd-gcp-v3.1.1-gdr9l 2/2 Running 0 123m
kube-system fluentd-gcp-v3.1.1-pt8zp 2/2 Running 0 3h2m
kube-system fluentd-gcp-v3.1.1-xwhkn 2/2 Running 5 172m
kube-system fluentd-gcp-v3.1.1-zh29w 2/2 Running 0 7d19h
In this case, I recommend you check your YAML: what kind of pod are you running, and what is it meant for?
Also, for testing purposes, you can add a command and args to keep it running:
command: ["/bin/sh","-c"]
args: ["command one; command two && command three"]
something like:
args: ["while true; do echo hello; sleep 10; done"]
The yaml with commands added would look like this:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: test-pod
    image: busybox
    resources:
    command: ["/bin/sh","-c"]
    args: ["while true; do echo hello; sleep 10; done"]
Here is how it will look:
NAMESPACE NAME READY STATUS RESTARTS AGE
default my-pod 1/1 Running 0 8m56s
default testapp-1 0/1 Pending 0 90m
default web-0 1/1 Running 0 3h
kube-system event-exporter-v0.2.5-7df89f4b8f-mc7kt 2/2 Running 0 7d19h
kube-system fluentd-gcp-scaler-54ccb89d5-9pp8z 1/1 Running 0 7d19h
kube-system fluentd-gcp-v3.1.1-gdr9l 2/2 Running 0 122m
Another thing that will help is a kubectl describe pod $POD_NAME, to analyze this further.
This issue has nothing to do with the k8s cluster or its setup; it's mostly about your Dockerfile.
For the pod to be in a Running state, the container needs a long-running executable; usually an ENTRYPOINT that specifies the instructions to run would solve the problem you are facing.
A better approach to resolve or debug this particular issue would be to run the respective docker container on a local machine before deploying it onto the k8s cluster.
The Completed status most likely implies that k8s started your container and the container subsequently exited.
In a Dockerfile, the ENTRYPOINT defines the process with PID 1 that the docker container holds; once that process exits, the container terminates. So whatever the pod is trying to do, you need to check whether the ENTRYPOINT command keeps running and doesn't die.
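For example, a minimal Dockerfile sketch (illustrative only) whose ENTRYPOINT keeps PID 1 alive instead of exiting right away:
FROM busybox
# PID 1 stays inside this loop, so the container (and therefore the pod) keeps Running
ENTRYPOINT ["/bin/sh", "-c", "while true; do echo alive; sleep 10; done"]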
As a side note, Kubernetes will try to restart the pod once it terminates, but after a few tries the pod status may turn into CrashLoopBackOff and you will see a message similar to Back-off restarting failed container.

Kubernetes coredns pods stuck in Pending status. Cannot start the dashboard

I am building a Kubernetes cluster following this tutorial, and I am having trouble accessing the Kubernetes dashboard. I already created another question about it that you can see here, but while digging into my cluster, I think that the problem might be somewhere else, and that's why I am creating a new question.
I start my master by running the following commands:
> kubeadm reset
> kubeadm init --apiserver-advertise-address=[MASTER_IP] > file.txt
> tail -2 file.txt > join.sh # I keep this file for later
> kubectl apply -f https://git.io/weave-kube/
> kubectl -n kube-system get pod
NAME READY STATUS RESTARTS AGE
coredns-fb8b8dccf-kb2zq 0/1 Pending 0 2m46s
coredns-fb8b8dccf-nnc5n 0/1 Pending 0 2m46s
etcd-kubemaster 1/1 Running 0 93s
kube-apiserver-kubemaster 1/1 Running 0 93s
kube-controller-manager-kubemaster 1/1 Running 0 113s
kube-proxy-lxhvs 1/1 Running 0 2m46s
kube-scheduler-kubemaster 1/1 Running 0 93s
Here we can see that I have two coredns pods stuck in Pending state forever, and when I run the command:
> kubectl -n kube-system describe pod coredns-fb8b8dccf-kb2zq
I can see the following Warning in the Events section:
FailedScheduling: 0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
Since it is a Warning and not an Error, and since, as a Kubernetes newbie, taints do not mean much to me, I tried to connect a node to the master (using the previously saved command):
> cat join.sh
kubeadm join [MASTER_IP]:6443 --token [TOKEN] \
--discovery-token-ca-cert-hash sha256:[ANOTHER_TOKEN]
> ssh [USER]@[WORKER_IP] 'bash' < join.sh
This node has joined the cluster.
On the master, I check that the node is connected:
> kubectl get nodes
NAME STATUS ROLES AGE VERSION
kubemaster NotReady master 13m v1.14.1
kubeslave1 NotReady <none> 31s v1.14.1
And I check my pods:
> kubectl -n kube-system get pod
NAME READY STATUS RESTARTS AGE
coredns-fb8b8dccf-kb2zq 0/1 Pending 0 14m
coredns-fb8b8dccf-nnc5n 0/1 Pending 0 14m
etcd-kubemaster 1/1 Running 0 13m
kube-apiserver-kubemaster 1/1 Running 0 13m
kube-controller-manager-kubemaster 1/1 Running 0 13m
kube-proxy-lxhvs 1/1 Running 0 14m
kube-proxy-xllx4 0/1 ContainerCreating 0 2m16s
kube-scheduler-kubemaster 1/1 Running 0 13m
We can see that another kube-proxy pod have been created and is stuck in ContainerCreating status.
And when I do a describe again:
kubectl -n kube-system describe pod kube-proxy-xllx4
I can see multiple identical Warnings in the Events section:
Failed create pod sandbox: rpc error: code = Unknown desc = failed pulling image "k8s.gcr.io/pause:3.1": Get https://k8s.gcr.io/v1/_ping: dial tcp: lookup k8s.gcr.io on [::1]:53: read udp [::1]:43133->[::1]:53: read: connection refused
Here are my repositories:
docker image ls
REPOSITORY TAG
k8s.gcr.io/kube-proxy v1.14.1
k8s.gcr.io/kube-apiserver v1.14.1
k8s.gcr.io/kube-controller-manager v1.14.1
k8s.gcr.io/kube-scheduler v1.14.1
k8s.gcr.io/coredns 1.3.1
k8s.gcr.io/etcd 3.3.10
k8s.gcr.io/pause 3.1
And so, for the dashboard part, I tried to start it with the command
> kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended/kubernetes-dashboard.yaml
But the dashboard pod is stuck in Pending state.
kubectl -n kube-system get pod
NAME READY STATUS RESTARTS AGE
coredns-fb8b8dccf-kb2zq 0/1 Pending 0 40m
coredns-fb8b8dccf-nnc5n 0/1 Pending 0 40m
etcd-kubemaster 1/1 Running 0 38m
kube-apiserver-kubemaster 1/1 Running 0 38m
kube-controller-manager-kubemaster 1/1 Running 0 39m
kube-proxy-lxhvs 1/1 Running 0 40m
kube-proxy-xllx4 0/1 ContainerCreating 0 27m
kube-scheduler-kubemaster 1/1 Running 0 38m
kubernetes-dashboard-5f7b999d65-qn8qn 1/1 Pending 0 8s
So, even though my problem originally was that I cannot access my dashboard, I guess that the real problem is deeper than that.
I know that I just put a lot of information here, but I am a k8s beginner and I am completely lost on this.
I experienced this issue with coredns pods stuck in Pending when setting up my own cluster, which I resolved by adding a pod network.
It looks like, because there is no network add-on installed, the nodes are tainted as not-ready. Installing the add-on removes the taint and the Pods will be able to schedule. In my case, adding flannel fixed the issue.
EDIT: There is a note about this in the official k8s documentation - Create cluster with kubeadm:
The network must be deployed before any applications. Also, CoreDNS
will not start up before a network is installed. kubeadm only
supports Container Network Interface (CNI) based networks (and does
not support kubenet).
Actually it is the opposite of a deep or serious issue; this is a trivial one. Whenever you see a pod stuck in Pending state, it means the scheduler is having a hard time scheduling the pod, mostly because there are not enough resources on the node.
In your case the node has a taint, and your pod doesn't have the matching toleration. What you have to do is describe the node and get the taint:
kubectl describe node | grep -i taints
Note: you might have more than one taint, so you might want to do kubectl describe node NODE since with grep you will only see one taint.
Once you get the taint, which will be something like hello=world:NoSchedule (meaning key=value:effect), you will have to add a tolerations section to your Deployment. This is an example Deployment so you can see how it should look:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 10
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80
          name: http
      tolerations:
      - effect: NoExecute #NoSchedule, PreferNoSchedule
        key: node
        operator: Equal
        value: not-ready
        tolerationSeconds: 3600
As you can see, there is a tolerations section in the YAML. So, if I had a node with the node=not-ready:NoExecute taint, no pod would be able to run on that node unless it had this toleration.
You can also remove the taint if you don't need it. To remove a taint, you would describe the node, get the key of the taint, and do:
kubectl taint node NODE key-
Hope it makes sense. Just add this section to your deployment, and it will work.
Set up the flannel network tool by running the following commands:
$ sysctl net.bridge.bridge-nf-call-iptables=1
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/62e44c867a2846fefb68bd5f178daf4da3095ccb/Documentation/kube-flannel.yml