Kubernetes ds won't run pod on master node [duplicate] - kubernetes

This question already has an answer here: Scheduler is not scheduling Pod for DaemonSet in Master node (1 answer). Closed 5 years ago.
I am running a cluster with 1 master and 1 node. When I run a DaemonSet, it shows only 1 desired node, while it should show 2. I cannot find any error in describe or in the logs; the DaemonSet simply picks only 1 node to run on. I am using Kubernetes 1.9.1.
Any idea what I could be doing wrong, or how to debug it?
TIA.

This happens if the k8s master node has the node-role.kubernetes.io/master: NoSchedule taint and the DaemonSet has no toleration for it.
In k8s 1.6 or later, a node-role.kubernetes.io/master: NoSchedule toleration is needed to schedule DaemonSet pods on master nodes.
Add the following toleration to the DaemonSet's pod template in the YAML file to make k8s schedule it on the master node too:
...
kind: DaemonSet
spec:
  ...
  template:
    ...
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
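For reference, a complete minimal DaemonSet manifest carrying this toleration could look like the sketch below; the name example-daemonset, its labels and the nginx image are placeholders for illustration, not taken from the original question:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-daemonset            # placeholder name
spec:
  selector:
    matchLabels:
      app: example-daemonset
  template:
    metadata:
      labels:
        app: example-daemonset
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master   # tolerate the master taint
        effect: NoSchedule
      containers:
      - name: example
        image: nginx                          # placeholder image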
Taints of the master node can be checked by:
kubectl describe node <master node>
Tolerations of a pod can be checked by:
kubectl describe pod <pod name>
More information about DaemonSets: https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/

By default, your cluster will not schedule pods on the master for security reasons. If you want to be able to schedule pods on the master, e.g. for a single-machine Kubernetes cluster for development, run:
kubectl taint nodes --all node-role.kubernetes.io/master-
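To confirm the taint is gone afterwards, you can check the Taints line of the node description (the node name is a placeholder); it should report <none>:
kubectl describe node <master-node-name> | grep Taints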

Related

Install Kubernetes-embedded pods run only specific node

I have a Kubernetes cluster running 3 nodes, but I want to run my app on only two of them. So my question is: can I run the other pods (Kubernetes extensions) in the cluster on a single node only?
node = Only Kubernetes pods
node = my app
node = my app
Yes, you can run the application pods on only two nodes and the other "extension" Kubernetes pods on a single node.
By "Kubernetes extension pods" I assume you mean external third-party pods such as the Nginx ingress controller, as opposed to default system pods like kube-proxy and kubelet, which need to run on every available node.
Option 1
You can use node affinity to schedule pods on specific nodes.
apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - node-1
            - node-2
  containers:
  - name: with-node-affinity
    image: nginx
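If you are unsure which hostname values to put in the list, you can show the nodes together with that label (a standard kubectl option, shown here for illustration):
kubectl get nodes -L kubernetes.io/hostname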
Option 2
You can use taints and tolerations to schedule the pods on specific nodes.
Certain kube-system pods like kube-proxy, the CNI pods (Cilium/Flannel) and other DaemonSets must run on every worker node; you cannot stop them. If that is not a concern for you, a node can be tainted NoSchedule with the command below:
kubectl taint nodes <node-name> type=<a_node_label>:NoSchedule
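The workloads that should still be scheduled on that node then need a matching toleration in their pod spec; a minimal sketch, assuming the taint key type and the same placeholder value as above:
tolerations:
- key: "type"
  operator: "Equal"
  value: "<a_node_label>"
  effect: "NoSchedule"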
For further details you can explore https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/

Kubernetes Restrict Node to run labeled pods only

We would like to merge 2 Kubernetes clusters because we need to establish communication between the pods, and it should also be cheaper.
Cluster 1 should stay intact and cluster 2 will be deleted. The pods in cluster 2 have very high resource requirements, and we would like to create a node pool dedicated to these pods.
So the idea is to label the new nodes, and also label the pods that were previously part of cluster 2, to enforce that they run on these nodes.
What I cannot find an answer for is the following question: how can I ensure that no other pod is scheduled to run on the new node pool without having to redeploy all pods and assign labels to them?
There are 2 problems you have to solve:
Stop cluster 1 pods from running on cluster 2 nodes
Stop cluster 2 pods from running on cluster 1 nodes
Given your question, it looks like you can make changes to cluster 2 deployments, but don't want to update existing cluster 1 deployments.
The solution to problem 1 is to use taints and tolerations. You can taint your cluster 2 nodes to stop all pods from being scheduled there then add tolerations to your cluster 2 deployments to allow them to ignore this taint. This means that cluster 1 pods cannot be deployed to cluster 2 nodes and problem 1 is solved.
You add a taint like this:
kubectl taint nodes node1 key1=value1:NoSchedule
and tolerate it in your cluster 2 pod/deployment spec like this:
tolerations:
- key: "key1"
  operator: "Equal"
  value: "value1"
  effect: "NoSchedule"
Problem 2 cannot be solved the same way because you don't want to change deployments for cluster 1 pods. This is a shame because taints are the easiest solution to this. If you could make that change, then you'd simply add a taint to cluster 1 nodes and tolerate it only in cluster 1 deployments.
Given these constraints, the solution is to use node affinity. You'd need to use the requiredDuringSchedulingIgnoredDuringExecution form to ensure that the rules are always followed. The rules themselves can be as simple as a node selector based on labels. A shorter version of the example from the linked docs:
apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: a-node-label-key
            operator: In
            values:
            - a-node-label-value
  containers:
  - name: with-node-affinity
    image: k8s.gcr.io/pause:2.0
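For this rule to match, the cluster 2 nodes must carry the corresponding label, which you can add with a command like the following (the node name is a placeholder; the label comes from the example above):
kubectl label nodes <cluster-2-node-name> a-node-label-key=a-node-label-value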

Kubernetes: Dynamically identify node and taint

I have an application pod which will be deployed on a k8s cluster.
But the Kubernetes scheduler decides which node this pod runs on.
Now I want to dynamically add a NoSchedule taint to the node where my application pod is running, so that no new pods will be scheduled on that node.
I know that we can use kubectl taint node with NoSchedule if I know the node name, but I want to achieve this dynamically based on which node this application pod is running on.
The reason I want to do this is that this is a critical application pod which shouldn't have downtime, and for good reasons I have only 1 pod for this application across the cluster.
Please suggest.
In addition to @Rico's answer:
You can use a feature called node affinity. It is still in beta, but some functionality is already implemented.
You should add a label to your node, for example test-node-affinity: test. Once this is done, you can add a nodeAffinity rule under the affinity field in the PodSpec.
spec:
  ...
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: test-node-affinity
            operator: In
            values:
            - test
This means the pod will look for a node with the key test-node-affinity and the value test, and will be deployed there.
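The label itself can be added with a single command (the node name is a placeholder):
kubectl label node <node-name> test-node-affinity=test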
I recommend reading this blog Taints and tolerations, pod and node affinities demystified by Toader Sebastian.
Also familiarise yourself with Taints and Tolerations from Kubernetes docs.
You can get the node where your pod is running with something like this:
$ kubectl get pod myapp-pod -o=jsonpath='{.spec.nodeName}'
Then you can taint it:
$ kubectl taint nodes <node-name-from-above> key=value:NoSchedule
or the whole thing in one command:
$ kubectl taint nodes $(kubectl get pod myapp-pod -o=jsonpath='{.spec.nodeName}') key=value:NoSchedule
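If you later want to allow scheduling on that node again, the same taint can be removed by appending a dash to the effect:
$ kubectl taint nodes <node-name-from-above> key=value:NoSchedule-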

Argo Workflow distribution on KOPS cluster

Using KOPS tool, I deployed a cluster with:
1 Master
2 slaves
1 Load Balancer
Now I am trying to deploy an Argo Workflow, but I don't know the process. Will it install on a worker node or on the master of the k8s cluster I built? How does it work?
Basically, if anyone can describe the functional flow or steps of deploying an Argo workflow on Kubernetes, it would be nice. First, I need to understand where it is deployed: on the master or on a worker node?
Usually, kops creates a Kubernetes cluster with taints on the master node that prevent regular pods from being scheduled on it.
However, there have been issues with some cluster network implementations, and sometimes you get a cluster without taints on the master.
You can change the taints on the master node by running the following commands:
add taints (no pods on master):
kubectl taint node kube-master node-role.kubernetes.io/master:NoSchedule
remove taints (allow scheduling pods on the master):
kubectl taint nodes --all node-role.kubernetes.io/master-
If you want to know whether the taints are applied to the master node or not, run the following command:
kubectl get node node-master --export -o yaml
Find the spec: section. If the taints are present, you should see something like this:
...
spec:
  externalID: node-master
  podCIDR: 192.168.0.0/24
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
...
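Alternatively, you can print just the taints with a JSONPath query instead of reading through the whole YAML (a standard kubectl feature, shown here for illustration):
kubectl get node node-master -o jsonpath='{.spec.taints}'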

Will (can) Kubernetes run Docker containers on the master node(s)?

Kubernetes has master and minion nodes.
Will (can) Kubernetes run specified Docker containers on the master node(s)?
I guess another way of saying it is: can a master also be a minion?
Thanks for any assistance.
Update 2015-08-06: As of PR #12349 (available in 1.0.3 and will be available in 1.1 when it ships), the master node is now one of the available nodes in the cluster and you can schedule pods onto it just like any other node in the cluster.
A docker container can only be scheduled onto a kubernetes node running a kubelet (what you refer to as a minion). There is nothing preventing you from creating a cluster where the same machine (physical or virtual) runs both the kubernetes master software and a kubelet, but the current cluster provisioning scripts separate the master onto a distinct machine.
This is going to change significantly when Issue #6087 is implemented.
You need to remove the taint from your master node to run containers on it, although this is not recommended.
Run this on your master node:
kubectl taint nodes --all node-role.kubernetes.io/master-
Courtesy of Alex Ellis' blog post.
You can try the following. First, label the node:
kubectl label node [name_of_node] node-short-name=node-1
Then create a YAML file (first.yaml):
apiVersion: v1
kind: Pod
metadata:
  name: nginxtest
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    node-short-name: node-1
Create the pod:
kubectl create -f first.yaml
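You can then verify that the pod landed on the labelled node, since the wide output includes a NODE column:
kubectl get pod nginxtest -o wide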