I have Kubernetes running on version 1.5 with two worker nodes and one master node. I would like to deploy fluentd as a DaemonSet onto all nodes except the master (the master node spams warning messages because it can't find the logs). How can I avoid deploying to the master node?
So, to make a pod schedule only on worker nodes and not on the master, add the following to the pod spec:
nodeSelector:
  kubernetes.io/role: node
This will make the pod schedule only on worker nodes. The example above uses the default node label of a kops-provisioned cluster; please verify the key and value if you provisioned the cluster with a different tool.
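For context, here is a minimal sketch of where that selector sits in a fluentd DaemonSet manifest. The image, namespace and apiVersion are assumptions, not from the question (on a 1.5 cluster DaemonSets would still use extensions/v1beta1):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system              # assumed namespace
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      nodeSelector:
        kubernetes.io/role: node      # kops-style worker label; verify on your cluster
      containers:
      - name: fluentd
        image: fluent/fluentd:v1.16   # assumed image and tag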
You can use a label for your slave nodes and use that label in a selector for the daemon set, which will only deploy on the nodes that have that label.
Inversely, you can define a negative selector so the daemon set only goes to nodes that do not have a given label; in your case, the nodes that do not carry the master's label.
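A sketch of such a negative selector using node affinity; treat the node-role.kubernetes.io/master key as an assumption, since the master label differs between distributions and versions:

spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: node-role.kubernetes.io/master
                operator: DoesNotExist    # schedule only on nodes without the master label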
You're looking for the Taints and Tolerations feature. Using it you can declare that a given node is "tainted" in a particular way, preventing pods from being scheduled on that node unless they carry a toleration matching that taint.
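As a sketch, assuming the cluster uses the common default control-plane taint (the exact key varies by installer and version):

# Many installers taint the master like this:
#   node-role.kubernetes.io/master:NoSchedule
# A pod only lands there if its spec carries a matching toleration, e.g.:
tolerations:
- key: node-role.kubernetes.io/master
  effect: NoSchedule
# Leave the toleration out and the DaemonSet skips the tainted master.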
Related
In Kubernetes, is it possible to specify, at the node level, which deployments it should run? That's somewhat different from node/pod affinity, since it would make it possible to create a new node with the specified set of deployments running from the beginning, instead of waiting for the scheduler to place new pods on that node.
This would look like templating a VM in a managed Kubernetes service, where you specify the number of instances and new nodes come up in your cluster already running the set of workloads you defined. Would that be possible, or is it not the right Kubernetes mindset?
In Kubernetes it's always the scheduler that assigns Pods to nodes. You can't somehow manually launch Pods on a node (outside of Kubernetes) and at the same time let them be a part of the Kubernetes cluster. The way to go is to always define your deployment via the Kubernetes API server and then let the scheduler assign the Pods to the available nodes.
However, you can influence how the scheduler assigns Pods to nodes. In case you want to define at the node level which types of Pods can run on a specific node, you can use taints and tolerations: define taints on your nodes and tolerations on your Pods so that only a specific set of Pods can run on a given node.
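A minimal sketch of that setup, assuming a made-up dedicated=logging taint and a matching node label (none of these names come from the question):

# Taint as it would appear on the Node object:
apiVersion: v1
kind: Node
metadata:
  name: worker-1                 # placeholder node name
spec:
  taints:
  - key: dedicated
    value: logging
    effect: NoSchedule
---
# Pod that both tolerates the taint and is steered to that node:
apiVersion: v1
kind: Pod
metadata:
  name: log-collector            # placeholder pod name
spec:
  containers:
  - name: collector
    image: busybox               # placeholder image
  tolerations:
  - key: dedicated
    operator: Equal
    value: logging
    effect: NoSchedule
  nodeSelector:
    dedicated: logging           # assumed node label applied alongside the taint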
What is the easiest way to run a single Pod on every available worker node as part of a StatefulSet, i.e. a one-to-one mapping?
Am I right to say that every Pod will run on a different node by default with a StatefulSet? In that case, is it sufficient to create x pods in the StatefulSet where x worker nodes exist in the cluster?
Thanks.
Use DaemonSet instead.
A DaemonSet ensures that all (or some) Nodes run a copy of a Pod. As nodes are added to the cluster, Pods are added to them. As nodes are removed from the cluster, those Pods are garbage collected. Deleting a DaemonSet will clean up the Pods it created.
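A minimal DaemonSet sketch (the name and image are placeholders, not from the question):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent               # placeholder name
spec:
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      containers:
      - name: agent
        image: busybox           # placeholder image
        command: ["sleep", "infinity"]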
If you really want to use a StatefulSet, you can take a look at features like nodeSelector or affinity and anti-affinity.
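If you do stay with a StatefulSet, a required pod anti-affinity rule keyed on the hostname topology limits it to one replica per node. A sketch, assuming the pods carry a made-up app: my-app label:

spec:
  template:
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: my-app                  # assumed pod label
            topologyKey: kubernetes.io/hostname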
Is there a toleration which lets workloads get deployed to every node in the cluster, including master nodes, regardless of any taints a node has?
This toleration will deploy your workload onto every node in your Kubernetes cluster, including master nodes, regardless of any taints on any node:
tolerations:
- operator: Exists
If you want to deploy to every node, why not use a DaemonSet instead?
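For context, a sketch of where that catch-all toleration sits in a DaemonSet (or Deployment) pod template:

spec:
  template:
    spec:
      tolerations:
      - operator: Exists   # tolerates every taint, so pods also land on masters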
I'd like to prepare multiple YAML files that customize the arguments of flannel (a DaemonSet), and run on each node the flannel pod whose YAML matches the condition expressed by that node's label. Can I label a worker node before it joins the Kubernetes master?
You can specify --node-labels when your kubelet is starting, which will apply the labels to the node, but ONLY during registration.
This will not work if your kubelet is starting up and the node is already a member of the cluster.
Kubelet Docs
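If the node joins via kubeadm, the same kubelet flag can be set through the join configuration. A sketch, with a made-up label name and the discovery details omitted:

apiVersion: kubeadm.k8s.io/v1beta3
kind: JoinConfiguration
nodeRegistration:
  kubeletExtraArgs:
    node-labels: "flannel-profile=custom"   # hypothetical label for illustration
# ... discovery/token settings omitted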
I am new to Kubernetes. I am wondering whether Kubernetes will automatically move pods to another node if a node's resources reach a critical level.
For example, if Pod A, Pod B and Pod C are running on Node A and Pod D is running on Node B, the resource usage on Node A would be high. In this case, will Kubernetes migrate any of the pods running on Node A to Node B?
I have learnt about node affinity and node selectors, which are used to run pods on certain nodes. It would be helpful if Kubernetes offered this feature of automatically migrating pods to another node when resource usage is high.
Does anyone know how we can achieve this in Kubernetes?
Thanks
-S
Yes, Kubernetes can move pods to another node automatically when a node comes under resource pressure: the pod is evicted (killed) and a new pod is started on another node. You would probably want to learn about Quality of Service classes to understand which pods get killed first.
That said, you may want to read about Automatic Horizontal Pod Autoscaling. This may give you more control.
With Horizontal Pod Autoscaling, Kubernetes automatically scales the number of pods in a replication controller, deployment or replica set based on observed CPU utilization (or, with alpha support, on some other, application-provided metrics).
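A sketch of such an autoscaler using the autoscaling/v2 API; the Deployment name, replica bounds and CPU target are made up:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa                # placeholder name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app                  # placeholder Deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70    # scale out when average CPU exceeds 70%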
With an increase in load it makes more sense to spin up a new pod than to move a pod between nodes, which avoids disrupting the processes currently running inside the pod on the busy node.
You can also set a node selector in the deployment to move the pods to a particular node:
https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/
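A sketch of that, using the disktype: ssd label from the linked docs page (the Deployment name and image are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                    # placeholder name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      nodeSelector:
        disktype: ssd             # hypothetical node label, as in the linked docs
      containers:
      - name: app
        image: nginx              # placeholder image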