Argo Workflow distribution on KOPS cluster - kubernetes

Using the kOps tool, I deployed a cluster with:
1 Master
2 worker nodes
1 Load Balancer
Now I am trying to deploy an Argo Workflow, but I don't know the process. Will it install on a worker node or on the master of the k8s cluster I built? How does it work?
Basically, if anyone could describe the functional flow or the steps of deploying an Argo Workflow on Kubernetes, that would be nice. First, I need to understand where it is deployed: on the master or on a worker node?

Usually, kops creates a Kubernetes cluster with a taint on the master node that prevents regular pods from being scheduled on it.
However, there have been issues with some cluster networking implementations, and sometimes you end up with a cluster without taints on the master.
You can change taints on the master node by running the following commands:
Add the taint (no regular pods on the master):
kubectl taint node kube-master node-role.kubernetes.io/master:NoSchedule
Remove the taint (allow pods to be scheduled on the master):
kubectl taint nodes --all node-role.kubernetes.io/master-
If you want to know whether the taints are applied to the master node or not, run the following command:
kubectl get node node-master -o yaml
Find the spec: section. If the taints are present, you should see something like this:
...
spec:
  externalID: node-master
  podCIDR: 192.168.0.0/24
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
...
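As for Argo Workflows itself: it is installed as ordinary pods (a workflow controller and a server) in a namespace of its own, so with the master tainted as above those pods will land on the worker nodes. A rough sketch of the usual install, assuming a recent Argo Workflows release (take the exact manifest URL and version from the Argo documentation; <version> below is a placeholder):
kubectl create namespace argo
kubectl apply -n argo -f https://github.com/argoproj/argo-workflows/releases/download/<version>/install.yaml
kubectl get pods -n argo -o wide
The NODE column of the last command shows which node each Argo pod was actually scheduled on.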

Related

Create same master and worker node in Kubernetes

I am preparing a dev environment and want to create a single host that is both master and worker node for Kubernetes.
How can I achieve my goal?
The difference between a master node and a worker node is that "regular pods cannot be scheduled on a master node because of a taint".
You just need to remove the node-role.kubernetes.io/master:NoSchedule taint so that pods can be scheduled on that (master) node.
Following is the command:
kubectl taint nodes <masternodename> node-role.kubernetes.io/master:NoSchedule-
The master node is responsible for running several Kubernetes processes that are absolutely necessary to run and manage the cluster properly. [1]
The worker nodes are the parts of the Kubernetes cluster which actually execute the containers and applications on them. [1]
Worker nodes are generally more powerful than master nodes because they have to run hundreds of containers. However, master nodes hold more significance because they manage the distribution of workload and the state of the cluster. [1]
By removing the taint you will be able to schedule pods on that node.
You should first check the existing taints by running:
kubectl describe node <nodename> | grep Taints
If the node-role.kubernetes.io/master:NoSchedule taint is present, remove it by running:
kubectl taint node <mastername> node-role.kubernetes.io/master:NoSchedule-
References:
[1] - What is Kubernetes cluster? What are worker and master nodes?
See also:
Creating a cluster with kubeadm,
These four similar questions:
Master tainted - no pods can be deployed
Remove node-role.kubernetes.io/master:NoSchedule taint,
Allow scheduling of pods on Kubernetes master?
Are the master and worker nodes the same node in case of a single node cluster?
Taints and Tolerations.
You have to remove the NoSchedule taint from the MASTER node.
I just spun up a kubeadm node and the taint is on my control-plane, not master.
So I did the following (sydney is the node name):
$ kubectl describe node sydney | grep Taints
Taints: node-role.kubernetes.io/control-plane:NoSchedule
$ kubectl taint nodes sydney node-role.kubernetes.io/control-plane:NoSchedule-
node/sydney untainted
$ kubectl describe node sydney | grep Taints
Taints: <none>

How to simulate nodeNotReady for a node in Kubernetes

My Ceph cluster is running on AWS with a 3 masters, 3 workers configuration. When I run kubectl get nodes it shows me all the nodes in the Ready state.
Is there any way I can manually simulate a NodeNotReady error for a node?
Just stop the kubelet service on the node that you want to see as NodeNotReady.
If you just want NodeNotReady you can delete the CNI you have installed.
Run kubectl get all -n kube-system, find the DaemonSet of your CNI, and delete it, or simply do the reverse of installing it: kubectl delete -f link_to_your_CNI_yaml
You could also try to overwhelm the node with too many pods (resources). You can also share your main goal so we can adjust the answer.
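A rough sketch of the CNI-deletion approach (the DaemonSet name depends on which CNI you installed, e.g. calico-node or kube-flannel-ds, so treat the name below as a placeholder):
kubectl get daemonsets -n kube-system
kubectl delete daemonset <your-cni-daemonset> -n kube-system
Then watch whether the nodes transition to NotReady with kubectl get nodes -w.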
Regarding the answer from P Ekambaram: you could just SSH to a node and then stop the kubelet.
To do that on a kops-provisioned node you can just:
ssh -A admin@Node_PublicDNS_name
sudo systemctl stop kubelet
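You can then watch the node flip to NotReady from your workstation (by default this takes roughly 40 seconds, the controller manager's node-monitor-grace-period):
kubectl get nodes -w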
EDIT:
Another way is to overload the Node, which will cause "System OOM encountered" events and result in the Node entering the NotReady state.
This is just one of the ways to achieve it (a command sketch follows below):
SSH into the Node you want to put into NotReady
Install stress
Run stress: stress --cpu 8 --io 4 --hdd 10 --vm 4 --vm-bytes 1024M --timeout 5m (you can adjust the values, of course)
Wait till the Node crashes.
After you stop the stress, the Node should get back to a healthy state automatically.
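A minimal sketch of those steps, assuming a Debian/Ubuntu-based node image (the SSH user and package manager may differ on your images):
ssh -A admin@Node_PublicDNS_name
sudo apt-get update && sudo apt-get install -y stress
stress --cpu 8 --io 4 --hdd 10 --vm 4 --vm-bytes 1024M --timeout 5m
After a few minutes the node should show up as NotReady, as described above.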
Not sure what the purpose of simulating NotReady is,
but if the purpose is to not schedule any new pods then you can use kubectl cordon NODE_NAME.
This will add the unschedulable taint to the node and prevent new pods from being scheduled there.
If the purpose is to evict existing pods then you can use kubectl drain NODE_NAME.
In general you can play with taints and tolerations to achieve goals related to the above, and you can do much more with them!
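For reference, the cordon/drain/uncordon cycle looks like this (older kubectl versions use --delete-local-data instead of --delete-emptydir-data):
kubectl cordon NODE_NAME
kubectl drain NODE_NAME --ignore-daemonsets --delete-emptydir-data
kubectl uncordon NODE_NAME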
Note that the NotReady status comes from the node.kubernetes.io/not-ready taint (see the Kubernetes taints and tolerations documentation),
which is set automatically:
In version 1.13, the TaintBasedEvictions feature is promoted to beta and enabled by default, hence the taints are automatically added by the NodeController.
Therefore, if you try to set that taint manually with kubectl taint node NODE_NAME node.kubernetes.io/not-ready=:NoExecute, the NodeController will reset it automatically!
So to genuinely see the NotReady status, the approaches above (stopping the kubelet or overloading the node) are the reliable way.
Lastly, if you want to remove the networking on one particular node only, you can taint that node with a custom key like this: kubectl taint node NODE_NAME dedicated/not-ready=:NoExecute
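To undo that custom taint later, the same key and effect with a trailing dash removes it:
kubectl taint node NODE_NAME dedicated/not-ready:NoExecute-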

Stop scheduling pods on kubernetes master

For testing purposes, I enabled pod scheduling on the Kubernetes master node with the following command:
kubectl taint nodes --all node-role.kubernetes.io/master-
Now I have added a worker node to the cluster and I would like to stop scheduling pods on the master. How do I do that?
You simply taint the node again.
kubectl taint nodes master node-role.kubernetes.io/master=:NoSchedule
Taints and tolerations work together to ensure that pods are not scheduled onto inappropriate nodes. One or more taints are applied to a node; this marks that the node should not accept any pods that do not tolerate the taints. Tolerations are applied to pods, and allow (but do not require) the pods to schedule onto nodes with matching taints.
Even with a taint placed on the master node, if you specify a matching toleration for the pod in the PodSpec, the pod will still be able to schedule onto the master node:
tolerations:
- key: node-role.kubernetes.io/master
  effect: NoSchedule
To learn more, see Taints and Tolerations.
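For completeness, a minimal hypothetical Pod manifest carrying that toleration might look like the following (the pod name and image are placeholders, not something from the question):
apiVersion: v1
kind: Pod
metadata:
  name: master-tolerant-demo
spec:
  containers:
  - name: app
    image: nginx
  tolerations:
  - key: node-role.kubernetes.io/master
    effect: NoSchedule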

Kubernetes: Dynamically identify node and taint

I have an application pod which will be deployed on a k8s cluster.
But as the Kubernetes scheduler decides which node this pod runs on,
I now want to dynamically add a taint with NoSchedule to the node where my application pod is running, so that no new pods will be scheduled on this node.
I know that we can use kubectl taint node with NoSchedule if I know the node name, but I want to achieve this dynamically, based on which node this application pod is running on.
The reason I want to do this is that this is a critical application pod which shouldn't have downtime, and for good reasons I have only 1 pod for this application across the cluster.
Please suggest
In addition to @Rico's answer:
You can use a feature called node affinity. This is still in beta, but some functionality is already implemented.
You should add a label to your node, for example test-node-affinity: test. Once this is done you can add nodeAffinity under the affinity field in the PodSpec.
spec:
  ...
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: test-node-affinity
            operator: In
            values:
            - test
This means the Pod will look for a node with the key test-node-affinity and the value test and will be deployed there.
I recommend reading this blog Taints and tolerations, pod and node affinities demystified by Toader Sebastian.
Also familiarise yourself with Taints and Tolerations from Kubernetes docs.
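For reference, the label used in the example above can be applied with (the node name is a placeholder):
kubectl label node <node-name> test-node-affinity=test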
You can get the node where your pod is running with something like this:
$ kubectl get pod myapp-pod -o=jsonpath='{.spec.nodeName}'
Then you can taint it:
$ kubectl taint nodes <node-name-from-above> key=value:NoSchedule
or the whole thing in one command:
$ kubectl taint nodes $(kubectl get pod myapp-pod -o=jsonpath='{.spec.nodeName}') key=value:NoSchedule
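If you want to verify the result, the same jsonpath lookup can be reused to inspect the taints on that node:
$ kubectl describe node $(kubectl get pod myapp-pod -o=jsonpath='{.spec.nodeName}') | grep Taints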

Kubernetes ds won't run pod on master node [duplicate]

This question already has an answer here:
Scheduler is not scheduling Pod for DaemonSet in Master node
(1 answer)
Closed 5 years ago.
I am running a cluster with 1 master and 1 node. Now, when I create a DaemonSet it only shows 1 desired node, while it should be 2. There is no error I could find anywhere in the describe output or logs; the DaemonSet simply chooses only 1 node to run on. I am using Kubernetes 1.9.1.
Any idea what I can be doing wrong? Or how to debug it?
TIA.
This happens if the k8s master node has the node-role.kubernetes.io/master:NoSchedule taint and the DaemonSet carries no toleration for it.
The node-role.kubernetes.io/master:NoSchedule toleration is needed in k8s 1.6 or later to schedule DaemonSet pods on master nodes.
Add the following toleration to the DaemonSet in the YAML file to make k8s schedule DaemonSet pods on the master node too:
...
kind: DaemonSet
spec:
  ...
  template:
    ...
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
Taints of the master node can be checked by:
kubectl describe node <master node>
Tolerations of a pod can be checked by:
kubectl describe pod <pod name>
More info about daemonsets is in https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/.
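Once the toleration is in place, the DaemonSet should report 2 desired pods. A quick check (the DaemonSet name and namespace are placeholders):
kubectl get daemonset <daemonset-name> -n <namespace>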
By default, your cluster will not schedule pods on the master for security reasons. If you want to be able to schedule pods on the master, e.g. for a single-machine Kubernetes cluster for development, run:
kubectl taint nodes --all node-role.kubernetes.io/master-