How to deploy a specific pod to all nodes including the master, but only that specific pod - kubernetes

I have a security pod that needs to run everywhere, including the master. I do not want, however, the master to run any other (non-Kubernetes) pods.
I know I can taint the master node, and I know I can set up affinity for a pod. Yet (unless I am misunderstanding something) that isn't quite what I want.
What I want is to set up affinity in such a way that this security pod runs on every single node, including the master, as part of the same DaemonSet. It is important that I only have a single definition due to how this security pod gets deployed.
Can this be done?
I am running Kubernetes 1.8

I think this is more or less a duplicate of this question.
What you need is a combination of two features:
A DaemonSet will allow you to schedule a Pod to run on every node.
Tolerations in the DaemonSet Pods will allow this workload to run even on nodes that have the master taint.
That way your security pods will run everywhere, even on the master with the taint, because they can tolerate it. I think there is an example directly in the DaemonSet documentation.
But other pods without this toleration will not be scheduled on master because they do not tolerate the taint.
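For illustration, a minimal DaemonSet along those lines might look like the sketch below. The name security-agent and the image are placeholders, and the toleration assumes the standard kubeadm master taint node-role.kubernetes.io/master; on Kubernetes 1.8 you may need apiVersion apps/v1beta2 or extensions/v1beta1 instead of apps/v1.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: security-agent              # hypothetical name
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: security-agent
  template:
    metadata:
      labels:
        app: security-agent
    spec:
      tolerations:
      # tolerate the master taint so the pod is also scheduled on master nodes
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: security-agent
        image: example.com/security-agent:latest   # placeholder image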

Related

Will pods running on a PreferNoSchedule node migrate to an untainted node?

If a single-node Kubernetes cluster is built and runs some number of pods, but that single node carries a PreferNoSchedule taint, it would make sense to migrate these pods and workloads to more suitable, untainted nodes if they are added to the cluster.
Will this happen automatically in >= 1.6 or will it need to be triggered? How is it triggered?
In this scenario, no action is triggered towards the kube-scheduler to reschedule existing pods, even though a new worker is added to the cluster.
For the pods to be moved to the new worker, we need to trigger a new pod scheduling requirement.
A simple solution is to scale each deployment down to 0 and back up to the desired number of pods:
kubectl scale --replicas=<expected_replica_num> deployment <deployment_name>
As far as I know, this doesn't happen automatically with node taints. You can trigger it using kubectl rollout restart deployment/<name>.
I was unable to find sufficient literature on this in the official Kubernetes documentation. The best I could find is kubernetes-sigs/descheduler.
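As a rough illustration with a hypothetical deployment called my-app, the scale-down/scale-up workaround and the newer alternative look like this (kubectl rollout restart requires kubectl 1.15 or later):

kubectl scale --replicas=0 deployment my-app
kubectl scale --replicas=3 deployment my-app   # back to the desired replica count
# or, on newer clusters, restart the pods without scaling to zero:
kubectl rollout restart deployment/my-app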

Difference between daemonsets and deployments

In Kelsey Hightower's Kubernetes Up and Running, he gives two commands:
kubectl get daemonSets --namespace=kube-system kube-proxy
and
kubectl get deployments --namespace=kube-system kube-dns
Why does one use daemonSets and the other deployments?
And what's the difference?
Kubernetes Deployments manage stateless services running on your cluster (as opposed to, for example, StatefulSets, which manage stateful services). Their purpose is to keep a set of identical pods running and upgrade them in a controlled way. For example, you define how many replicas (pods) of your app you want to run in the deployment definition, and Kubernetes will create that many replicas of your application spread over nodes. If you ask for 5 replicas over 3 nodes, then some nodes will have more than one replica of your app running.
DaemonSets manage groups of replicated Pods. However, DaemonSets attempt to adhere to a one-Pod-per-node model, either across the entire cluster or a subset of nodes. A Daemonset will not run more than one replica per node. Another advantage of using a Daemonset is that, if you add a node to the cluster, then the Daemonset will automatically spawn a pod on that node, which a deployment will not do.
DaemonSets are useful for deploying ongoing background tasks that you need to run on all or certain nodes, and which do not require user intervention. Examples of such tasks include storage daemons like ceph, log collection daemons like fluentd, and node monitoring daemons like collectd.
Let's take the example you mentioned in your question: why is kube-dns a deployment and kube-proxy a daemonset?
The reason is that kube-proxy is needed on every node in the cluster to manage iptables rules, so that every node can access every pod no matter on which node it resides. Hence, when we make kube-proxy a daemonset and another node is added to the cluster at a later time, kube-proxy is automatically spawned on that node.
kube-dns's responsibility is to resolve a service name to its IP, and only one replica of kube-dns is enough to do that. Hence we make kube-dns a deployment, because we don't need kube-dns on every node.
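To make the difference concrete, here is a minimal, hypothetical sketch of each (names and images are placeholders): a Deployment fixes a replica count, while a DaemonSet has no replica count at all and simply runs one pod per (matching) node.

# Deployment: you choose the replica count; the scheduler places the pods on any suitable nodes
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-dns                    # hypothetical name
spec:
  replicas: 2                     # a fixed number, independent of how many nodes exist
  selector:
    matchLabels:
      app: my-dns
  template:
    metadata:
      labels:
        app: my-dns
    spec:
      containers:
      - name: dns
        image: example.com/dns:latest      # placeholder image
---
# DaemonSet: no replica count; one pod per node, automatically added/removed as nodes join/leave
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: my-proxy                  # hypothetical name
spec:
  selector:
    matchLabels:
      app: my-proxy
  template:
    metadata:
      labels:
        app: my-proxy
    spec:
      containers:
      - name: proxy
        image: example.com/proxy:latest    # placeholder image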

How do I debug kubernetes scheduling?

I have added podAntiAffinity to my DeploymentConfig template.
However, pods are being scheduled on nodes that I expected would be excluded by the rules.
How can I view logs of the kubernetes scheduler to understand why it chose the node it did for a given pod?
PodAntiAffinity has more to do with other pods than with nodes specifically. That is, PodAntiAffinity specifies which nodes to exclude based on what pods are already scheduled on that node. And even here you can make it a requirement vs. just a preference. To directly pick the node on which a pod is or is not scheduled, you want to use NodeAffinity. See the guide on assigning Pods to nodes in the Kubernetes documentation.
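As a rough illustration: a required inter-pod anti-affinity rule (here against a hypothetical app: my-app label) actually excludes nodes that already run a matching pod, whereas a preferredDuringSchedulingIgnoredDuringExecution rule is only a hint the scheduler may ignore. This fragment goes under the pod template's spec:

affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:  # hard requirement, not a preference
    - labelSelector:
        matchLabels:
          app: my-app                                # hypothetical label of the pods to avoid
      topologyKey: kubernetes.io/hostname            # "spread by node"

In addition, kubectl describe pod <pod-name> shows the scheduler's events (for example FailedScheduling with a reason), which is usually easier than digging into the scheduler logs.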

What's the purpose of Kubernetes DaemonSet when replication controllers have node anti-affinity

DaemonSet is a Kubernetes beta resource that can ensure that exactly one pod is scheduled to a group of nodes. The group of nodes is all nodes by default, but can be limited to a subset using nodeSelector or the Alpha feature of node affinity/anti-affinity.
It seems that DaemonSet functionality can be achieved with replication controllers/replica sets with proper node affinity and anti-affinity.
Am I missing something? If that's correct should DaemonSet be deprecated before it even leaves Beta?
As you said, DaemonSet guarantees one pod per node for a subset of the nodes in the cluster. If you use a ReplicaSet instead, you need to (see the sketch after this answer):
1. use node affinity/anti-affinity and/or a node selector to control the set of nodes to run on (similar to how DaemonSet does it);
2. use inter-pod anti-affinity to spread the pods across the nodes;
3. make sure the number of pods is greater than the number of nodes in the set, so that every node has one pod scheduled.
However, ensuring (3) is a chore as the set of nodes can change over time. With DaemonSet, you don't have to worry about that, nor would you need to create extra, unschedulable pods. On top of that, DaemonSet does not rely on the scheduler to assign its pods, which makes it useful for cluster bootstrap (see How Daemon Pods are scheduled).
See the "Alternative to DaemonSet" section in the DaemonSet doc for more comparisons. DaemonSet is still the easiest way to run a per-node daemon without external tools.

Are there issues with running user pods on a Kubernetes master node?

Many of the run-throughs for deploying Kubernetes master nodes suggest you use --register-schedulable=false to prevent user pods being scheduled to the master node (e.g. https://coreos.com/kubernetes/docs/latest/deploy-master.html). On a very small Kubernetes cluster it seems somewhat wasteful of compute resources to effectively prevent an entire node from being used for pod scheduling unless absolutely essential.
The answer to this question (Will (can) Kubernetes run Docker containers on the master node(s)?) suggests that it is indeed possible to run user pods on a master node - but doesn't address whether there are any issues associated with allowing this.
The only information that I've been able to find to date that suggests there might be issues with allowing this is that pods on master nodes appear to communicate insecurely (see http://kubernetes.io/docs/admin/master-node-communication/ and https://github.com/kubernetes/kubernetes/issues/13598). I assume that this would potentially allow a rogue pod running on a master node to access/hijack Kubernetes functionality not normally accessible to pods on non-master nodes. Probably not a big deal if you are only running pods/containers developed internally - although I guess there's always the possibility of someone hacking access to a pod/container and thereby gaining access to the master node.
Does this sound like a viable potential risk associated with this scenario (allowing user pods to run on a Kubernetes master node)? Are there any other potential issues associated with such a setup?
Running pods on the master node is definitely possible.
The security risk you mention is one issue, but if you configure service accounts, it isn't actually much different for all deployed pods to have secure remote access to the apiserver vs. insecure local access.
Another issue is resource contention. If you run a rogue pod on your master node that disrupts the master components, it can destabilize your entire cluster. Clearly this is a concern for production deployments, but if you are looking to maximize utilization of a small number of nodes in a development / experimentation environment, then it should be fine to run a couple of extra pods on the master.
Finally, you need to make sure the master node has a sufficiently large pod CIDR allocated to it. In some deployments, the master only gets a /30, which isn't going to allow you to run very many pods.
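If you want to check that, the pod CIDR allocated to a node can be read from its spec (the field is only populated when the controller manager assigns node CIDRs), for example:

kubectl get node <node-name> -o jsonpath='{.spec.podCIDR}'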
Kubernetes and some Kubernetes distributions now have what is called a taint.
A taint decides whether the master can run a pod or not.
Although running pods on the master node is not best practice, it is possible to do so.
The Kubernetes documentation explains taints in detail, and I believe this is also related to the scheduler.
In Kubernetes or K3s, we can check whether taints are set on the nodes by describing them:
# kubectl describe nodes | grep Taints
Taints: node.kubernetes.io/unreachable:NoExecute
Taints: node.kubernetes.io/unreachable:NoSchedule
Taints: node.kubernetes.io/unreachable:NoExecute
Taints: <none>
NoSchedule: Pods that do not tolerate this taint are not scheduled on the node.
PreferNoSchedule: Kubernetes avoids scheduling Pods that do not tolerate this taint onto the node.
NoExecute: Pod is evicted from the node if it is already running on the node, and is not scheduled onto the node if it is not yet running on the node.
(source: the Kubernetes documentation on taints and tolerations)
If you want to check a specific node, whether master or agent, just name the node:
# kubectl describe nodes agent3 | grep Taints
Taints: <none>
# kubectl describe nodes master | grep Taints
Taints: <none>
This is how you apply a taint to your nodes:
kubectl taint nodes agent1 key1=value1:NoSchedule
kubectl taint nodes agent2 key1=value1:NoExecute
When your nodes are not running, Kubernetes will automatically add NoSchedule or NoExecute taints to them (as in the node.kubernetes.io/unreachable examples above), so make sure to check the state of your nodes before checking the taints.
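For completeness, a taint is removed by appending a minus sign to the same expression; for example (the exact master taint key depends on your version and distribution, e.g. node-role.kubernetes.io/master on older kubeadm-based clusters):

kubectl taint nodes agent1 key1=value1:NoSchedule-
kubectl taint nodes master node-role.kubernetes.io/master:NoSchedule-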
@Robert has given a clear answer. I'm just trying to explain it in a metaphorical way with a real-world example.
Suppose your company's MANAGER is also a good coder. If he starts coding, your company's MANAGER-type work will stall or become less efficient, because he can only handle one thing efficiently at a time. That puts your entire company at risk.
To operate efficiently, hire more devs to code and don't make your MANAGER code (so that you get the work you are paying him for).