With nodeSelector we can schedule a particular deployment's replicas onto a certain node pool. But how do we make sure that at least one pod is running on every node (say the node pool has more than one node)?
I need this to ensure my pods are spread across the node pool, so that if a particular node faces an issue (say it gets disconnected from the cluster) my application would still run.
With nodeSelector you can tie Pods to a particular set of nodes, but it doesn't provide any means for spreading the Pods of a Deployment across those nodes.
To spread Pods across the nodes, you can use Pod anti-affinity.
For example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  replicas: 3
  template:
    metadata:
      labels:
        app: my-app
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - my-app
              topologyKey: "kubernetes.io/hostname"
      containers:
      - name: my-app
        image: my-app:1.0.0
This schedules the Pods so that no two Pods of the Deployment are located on the same node, if possible.
For example, if you have 5 nodes and 3 replicas in the Deployment, each Pod should be scheduled to a different node. If you have 5 nodes and 6 replicas, the first 5 Pods should each be scheduled to a different node, and the 6th Pod is scheduled to a node which already has a Pod (because the rule is only preferred and there's no other possibility).
See more examples in the Kubernetes documentation.
Kubernetes has a dedicated resource type called DaemonSet. It will ensure that your pod is running on each node.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ssd-monitor
spec:
  selector:
    matchLabels:
      app: ssd-monitor
  template:
    metadata:
      labels:
        app: ssd-monitor
    spec:
      containers:
      - name: main
        image: luksa/ssd-monitor
You can see two pods running, one on each of the two nodes:
[root@master ~]# kubectl get po -o wide
NAME                READY   STATUS    RESTARTS   AGE     IP           NODE        NOMINATED NODE   READINESS GATES
ssd-monitor-24qd7   1/1     Running   0          2m17s   10.36.0.7    node2.k8s   <none>           <none>
ssd-monitor-w7nxr   1/1     Running   0          2m17s   10.44.0.12   node1.k8s   <none>           <none>
I am containerizing Spring Boot applications on Kubernetes and I want to have a different application property file for each pod replica, i.e. a different config per replica.
Any help on the above would be appreciated.
They're not really replicas if you want a unique configuration for each pod. I think you may be looking for a StatefulSet. Quoting from the docs:
Like a Deployment, a StatefulSet manages Pods that are based on an identical container spec. Unlike a Deployment, a StatefulSet maintains a sticky identity for each of their Pods. These pods are created from the same spec, but are not interchangeable: each has a persistent identifier that it maintains across any rescheduling.
For example, given a StatefulSet like this:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: example
spec:
  selector:
    matchLabels:
      app: example
  serviceName: "example"
  replicas: 3
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      - name: nginx
        image: docker.io/nginxinc/nginx-unprivileged:mainline
        ports:
        - containerPort: 80
          name: http
I end up with:
$ kubectl get pod
NAME        READY   STATUS    RESTARTS   AGE
example-0   1/1     Running   0          34s
example-1   1/1     Running   0          31s
example-2   1/1     Running   0          28s
In each pod, I can look at the value of $HOSTNAME to find my unique name, and I could use that to extract appropriate configuration from a directory path/structured file/etc.
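For instance, a minimal sketch of how the container above could use $HOSTNAME to pick per-pod configuration (the image name and the Spring profile convention are illustrative assumptions, not part of the original setup):
containers:
- name: my-app
  image: registry.example.com/my-app:1.0.0   # hypothetical image
  command: ["sh", "-c"]
  # example-0 activates the "example-0" profile, example-1 activates "example-1",
  # and so on, so each replica can load its own property file.
  args: ["exec java -jar /app.jar --spring.profiles.active=$HOSTNAME"]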
I want to maintain a different configuration for each pod, so I am planning to fetch properties from Spring Cloud Config based on the pod name.
Ex: properties in the cloud config:
PodName1.property1 = "xxx"
PodName2.property1 = "yyy"
The property value will be different for each pod. I am planning to fetch properties from the cloud config based on the container name, e.g. Environment.get("current pod name" + "propertyName").
So I want to set a fixed hostname/pod name.
If the above is not possible, is there any alternative?
You can use StatefulSets if you want fixed pod names for your application.
e.g.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web # this will be used as the prefix in pod names
spec:
  serviceName: "nginx"
  replicas: 2 # specify the number of pods that should be running
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
This template will create 2 nginx pods in the default namespace with the following names:
kubectl get pods -l app=nginx
NAME    READY   STATUS    RESTARTS   AGE
web-0   1/1     Running   0          1m
web-1   1/1     Running   0          1m
A basic example can be found here.
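Note that the serviceName: "nginx" above refers to a headless Service that is expected to exist alongside the StatefulSet; a minimal sketch of such a Service:
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  clusterIP: None   # headless: gives each pod a stable DNS entry such as web-0.nginx
  selector:
    app: nginx
  ports:
  - port: 80
    name: web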
I have a 5-node cluster (1 master / 4 workers). Is it possible to configure a StatefulSet so that I can make a pod (or pods) run on a given node, knowing it has sufficient capacity, rather than the Kubernetes scheduler making this decision?
Let's say my StatefulSet creates 4 pods (replicas: 4) named myapp-0, myapp-1, myapp-2 and myapp-3. Now what I am looking for is:
myapp-0 pod -- gets scheduled on --> worker-1
myapp-1 pod -- gets scheduled on --> worker-2
myapp-2 pod -- gets scheduled on --> worker-3
myapp-3 pod -- gets scheduled on --> worker-4
Please let me know if this can be achieved somehow. Because if I add a toleration to the pods of a StatefulSet, it will be the same for all of them, and all of them will get scheduled onto a single node matching the taint.
Thanks, J
You can delegate responsibility for scheduling arbitrary subsets of pods to your own custom scheduler(s) that run(s) alongside, or instead of, the default Kubernetes scheduler.
You can write your own custom scheduler. A custom scheduler can be written in any language and can be as simple or complex as you need. Below is a very simple example of a custom scheduler written in Bash that assigns a node randomly. Note that you need to run this along with kubectl proxy for it to work.
SERVER='localhost:8001'
while true;
do
    for PODNAME in $(kubectl --server $SERVER get pods -o json | jq '.items[] | select(.spec.schedulerName == "my-scheduler") | select(.spec.nodeName == null) | .metadata.name' | tr -d '"');
    do
        # Pick a random node and bind the pending pod to it via the Binding API.
        NODES=($(kubectl --server $SERVER get nodes -o json | jq '.items[].metadata.name' | tr -d '"'))
        NUMNODES=${#NODES[@]}
        CHOSEN=${NODES[$[$RANDOM % $NUMNODES]]}
        curl --header "Content-Type:application/json" --request POST --data '{"apiVersion":"v1", "kind": "Binding", "metadata": {"name": "'$PODNAME'"}, "target": {"apiVersion": "v1", "kind": "Node", "name": "'$CHOSEN'"}}' http://$SERVER/api/v1/namespaces/default/pods/$PODNAME/binding/
        echo "Assigned $PODNAME to $CHOSEN"
    done
    sleep 1
done
Then, in your StatefulSet configuration file, under the pod template's spec section, you just have to add a schedulerName: my-scheduler line, matching the name the custom scheduler filters on. For example:
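A sketch of the relevant fragment of the StatefulSet (only the schedulerName line is the point here):
spec:
  template:
    spec:
      schedulerName: my-scheduler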
You can also use pod affinity.
Example:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis-cache
spec:
  selector:
    matchLabels:
      app: store
  replicas: 3
  template:
    metadata:
      labels:
        app: store
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - store
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: redis-server
        image: redis:3.2-alpine
The YAML snippet below for the web-server StatefulSet has both podAntiAffinity and podAffinity configured. This informs the scheduler that all its replicas are to be co-located with pods that have the selector label app=store, and it also ensures that no two web-server replicas are co-located on a single node.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web-server
spec:
  selector:
    matchLabels:
      app: web-store
  replicas: 3
  template:
    metadata:
      labels:
        app: web-store
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - web-store
            topologyKey: "kubernetes.io/hostname"
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - store
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: web-app
        image: nginx:1.12-alpine
If we create the above two workloads, our three-node cluster should look like this:
node-1        node-2        node-3
webserver-1   webserver-2   webserver-3
cache-1       cache-2       cache-3
The above example uses the PodAntiAffinity rule with topologyKey: "kubernetes.io/hostname" to deploy the redis cluster so that no two instances are located on the same host.
You can simply define three replicas of a specific pod and define a particular pod configuration file for each, e.g. using nodeName:
There is a nodeName field, which is the simplest form of node selection constraint, but due to its limitations it is typically not used. nodeName is a field of the PodSpec. If it is non-empty, the scheduler ignores the pod and the kubelet running on the named node tries to run the pod. Thus, if nodeName is provided in the PodSpec, it takes precedence over the other methods of node selection.
Here is an example of a pod config file using the nodeName field:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  nodeName: kube-worker-1
More information about the scheduler: custom-scheduler.
Take a look at this article: assigining-pods-kubernetes.
You can use the following KubeMod ModRule:
apiVersion: api.kubemod.io/v1beta1
kind: ModRule
metadata:
  name: statefulset-pod-node-affinity
spec:
  type: Patch
  match:
    # Select pods named myapp-xxx.
    - select: '$.kind'
      matchValue: Pod
    - select: '$.metadata.name'
      matchRegex: myapp-.*
  patch:
    # Patch the selected pods such that their node affinity matches nodes that contain a label with the name of the pod.
    - op: add
      path: /spec/affinity/nodeAffinity/requiredDuringSchedulingIgnoredDuringExecution
      value: |-
        nodeSelectorTerms:
          - matchExpressions:
            - key: accept-pod/{{ .Target.metadata.name }}
              operator: In
              values:
                - 'true'
The above ModRule will monitor for the creation of pods named myapp-* and will inject a nodeAffinity section into their resource manifest before they get deployed. This will instruct the scheduler to schedule the pod to a node which has a label accept-pod/<pod-name> set to true.
Then you can assign future pods to nodes by adding labels to the nodes:
kubectl label node worker-1 accept-pod/myapp-0=true
kubectl label node worker-2 accept-pod/myapp-1=true
kubectl label node worker-3 accept-pod/myapp-2=true
...
After the above ModRule is deployed, creating the StatefulSet will trigger the creation of its pods, which will be intercepted by the ModRule. The ModRule will dynamically inject the nodeAffinity section using the name of the pod.
If, later on, the StatefulSet is deleted, deploying it again will lead to the pods being scheduled on the same exact nodes as they were before.
You can do this using nodeSelector and node affinity (take a look at this guide: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/); either one can be used to run pods on specific nodes. But if a node has taints (restrictions), then you also need to add tolerations for those nodes (more can be found here: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/). Using this approach, you can specify a list of nodes to be used for your pods' scheduling. The catch is that if you specify, for example, 3 nodes and you have 5 pods, then you have no control over how many pods will run on each of those nodes; they get distributed by the kube-scheduler.
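For illustration, a minimal sketch combining nodeSelector with a toleration (the disktype=ssd label and the dedicated=myapp:NoSchedule taint are hypothetical examples, not taken from the question):
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  nodeSelector:
    disktype: ssd          # only nodes carrying this label are candidates
  tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "myapp"
    effect: "NoSchedule"   # allows scheduling onto nodes tainted dedicated=myapp:NoSchedule
  containers:
  - name: nginx
    image: nginx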
Another relevant use case: if you want to run one pod on each of a specified set of nodes, you can create a DaemonSet and select the nodes using nodeSelector, as sketched below.
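A rough sketch of that variant, assuming the target nodes carry a hypothetical label run-monitor=true:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: monitor
spec:
  selector:
    matchLabels:
      app: monitor
  template:
    metadata:
      labels:
        app: monitor
    spec:
      nodeSelector:
        run-monitor: "true"   # exactly one pod per node carrying this label
      containers:
      - name: monitor
        image: nginx   # placeholder image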
You can use podAntiAffinity to distribute replicas to different nodes.
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"
  replicas: 4
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - nginx
            topologyKey: "kubernetes.io/hostname"
This would deploy web-0, web-1, web-2 and web-3 onto four different workers (for example web-0 on worker1, web-1 on worker2, web-2 on worker3 and web-3 on worker4).
Take a look at this guideline: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
However, what you are looking for is the nodeSelector directive, which should be placed in the pod spec.
Assume I have a cluster with 2 nodes and a Pod with 2 replicas. Can I have the guarantee that my 2 replicas are deployed on 2 different nodes, so that when a node is down the application keeps running? By default, does the scheduler work on a best-effort basis to assign the 2 replicas to distinct nodes?
Pod AntiAffinity
Pod anti-affinity can be used to repel the pods from each other, so that no two pods of the Deployment are scheduled on the same node.
Use the following configuration.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - nginx
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: nginx
        image: nginx
This uses the anti-affinity feature, so if you have 2 or more nodes there is a guarantee that no two pods will be scheduled on the same node.
You can use kind: DaemonSet. Here is a link to the Kubernetes DaemonSet documentation.
A DaemonSet ensures that all (or some) Nodes run a copy of a Pod. As nodes are added to the cluster, Pods are added to them. As nodes are removed from the cluster, those Pods are garbage collected. Deleting a DaemonSet will clean up the Pods it created.
Here is a link to the documentation about DaemonSets in OpenShift.
An example might look like the following. This is available on OpenShift >= 3.2. The use case is to run a specific Docker container (veermuchandi/welcome) on all nodes (or on a set of nodes with a specific label).
Enable host port exposure on OpenShift:
$ oc edit scc restricted    # as system:admin user
Change allowHostPorts to true and save.
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: welcome
spec:
  template:
    metadata:
      name: welcome
      labels:
        daemon: welcome
    spec:
      containers:
      - name: c
        image: veermuchandi/welcome
        ports:
        - containerPort: 8080
          hostPort: 8080
          name: serverport
$ oc create -f myDaemonset.yaml #with system:admin user
Source available here
A DaemonSet is not a good option here: it schedules one pod on every node, so if you scale your cluster in the future, the pods scale up to as many as there are nodes. Instead, use pod anti-affinity to schedule no more than one pod on any node.
I have one Kubernetes cluster with 4 worker nodes and one master. I am trying to run 5 nginx pods across all nodes. Currently the scheduler sometimes runs all the pods on one machine and sometimes on different machines.
What happens if my node goes down while all my pods were running on that same node? We need to avoid this.
How can I force the scheduler to run the pods on the nodes in a round-robin fashion, so that if any node goes down, at least one node still has an NGINX pod in the Running state?
Is this possible or not? If possible, how can we achieve this scenario?
Use podAntiAffinity
Reference: Kubernetes in Action, Chapter 16. Advanced scheduling
podAntiAffinity with requiredDuringSchedulingIgnoredDuringExecution can be used to prevent the same pod from being scheduled to the same hostname. If you prefer a more relaxed constraint, use preferredDuringSchedulingIgnoredDuringExecution.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 5
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:   # hard requirement: do not schedule an "nginx" pod onto a node that already runs one
          - topologyKey: kubernetes.io/hostname             # anti-affinity scope is the host
            labelSelector:
              matchLabels:
                app: nginx
      containers:
      - name: nginx
        image: nginx:latest
Kubelet --max-pods
You can specify the maximum number of pods per node in the kubelet configuration, so that in the scenario of node(s) going down, it prevents Kubernetes from saturating the remaining nodes with the pods from the failed node.
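For example, with a kubelet configuration file this is the maxPods field (the value 30 is just an illustration):
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 30   # equivalent to passing --max-pods=30 to the kubelet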
Use Pod Topology Spread Constraints
As of 2021 (v1.19 and up), you can use Pod Topology Spread Constraints (topologySpreadConstraints), which are enabled by default, and I found them more suitable than podAntiAffinity for this case.
The major difference is that anti-affinity can restrict only one pod per node, whereas Pod Topology Spread Constraints can restrict N pods per node.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-example-deployment
spec:
  replicas: 6
  selector:
    matchLabels:
      app: nginx-example
  template:
    metadata:
      labels:
        app: nginx-example
    spec:
      containers:
      - name: nginx
        image: nginx:latest
      # This sets how evenly the pods are spread.
      # For example, if there are 3 nodes available,
      # 2 pods are scheduled on each node.
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: nginx-example
For more details see KEP-895 and an official blog post.
I think the inter-pod anti-affinity feature will help you.
Inter-pod anti-affinity allows you to constrain which nodes your pod is eligible to schedule on based on labels on pods that are already running on the node. Here is an example.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: nginx-service
  name: nginx-service
spec:
  replicas: 3
  selector:
    matchLabels:
      run: nginx-service
  template:
    metadata:
      labels:
        run: nginx-service
        service-type: nginx
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: service-type
                  operator: In
                  values:
                  - nginx
              topologyKey: kubernetes.io/hostname
      containers:
      - name: nginx-service
        image: nginx:latest
Note: I use preferredDuringSchedulingIgnoredDuringExecution here since you have more pods than nodes.
For more detailed information, you can refer to the Inter-pod affinity and anti-affinity (beta feature) section of the following link:
https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
The scheduler should spread your pods if your containers specify resource requests for the amount of memory and CPU they need. See
http://kubernetes.io/docs/user-guide/compute-resources/
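For example, a container section with requests and limits set (the values below are placeholders, not a recommendation); the scheduler then places pods based on the requested capacity that is still free on each node:
containers:
- name: nginx
  image: nginx:latest
  resources:
    requests:
      cpu: "500m"       # scheduler reserves this much CPU on the chosen node
      memory: "256Mi"   # and this much memory
    limits:
      cpu: "1"
      memory: "512Mi"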
We can use taints and tolerations to control whether pods are, or are not, deployed onto a particular node.
Tolerations are applied to pods, and allow (but do not require) the pods to schedule onto nodes with matching taints.
Taints and tolerations work together to ensure that pods are not scheduled onto inappropriate nodes. One or more taints are applied to a node; this marks that the node should not accept any pods that do not tolerate the taints.
A sample Deployment YAML would look like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: nginx-service
  name: nginx-service
spec:
  replicas: 3
  selector:
    matchLabels:
      run: nginx-service
  template:
    metadata:
      labels:
        run: nginx-service
        service-type: nginx
    spec:
      containers:
      - name: nginx-service
        image: nginx:latest
      tolerations:
      - key: "key1"
        operator: "Equal"
        value: "value1"
        effect: "NoSchedule"
You can find more information at https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
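For the toleration above to have any effect, the node needs a matching taint (applied with kubectl taint, or visible on the Node object roughly like this sketch; the node name is hypothetical):
apiVersion: v1
kind: Node
metadata:
  name: worker-1   # hypothetical node name
spec:
  taints:
  - key: "key1"
    value: "value1"
    effect: "NoSchedule"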