Kubernetes: (connection-)drain a node with local persistent storage

We use local persistent storage as the storage backend for our SOLR pods. The pods are redundantly scheduled to multiple Kubernetes nodes, so if one of the nodes goes down there are always enough instances on other nodes.
How can we drain these nodes (without "migrating" the SOLR pods to other nodes) when we want to do maintenance on a node? The most important thing for us is that kube-proxy no longer sends new requests to the pods on the node in question, so that after some time we can do the maintenance without interrupting service for in-flight requests.
We tried cordon, but cordon only makes sure no new pods are scheduled to the node. Drain does not seem to work with pods that use local persistent volumes.

You can check out pod anti-affinity.
https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
These constructs allow you to repel or attract pods when certain conditions are met.
In your case, pod anti-affinity with 'requiredDuringSchedulingIgnoredDuringExecution' may be your best bet. I haven't personally used it yet, but I hope it points you in the right direction.
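As a minimal sketch of what that could look like in the SOLR pod template, assuming (purely for this sketch) that the SOLR pods carry a label app: solr:
# Goes under the pod template's spec; app: solr is an assumed label.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
            - key: app
              operator: In
              values:
                - solr
        topologyKey: kubernetes.io/hostname
With this in place, the scheduler refuses to put two pods matching app: solr on the same node (topologyKey: kubernetes.io/hostname makes the node the unit of separation).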

Related

What will happen when a node is almost out of resources when deploying a K8s DaemonSet?

When deploying a Kubernetes DaemonSet, what will happen when a single node (out of a few nodes) is almost out of resources, a pod can't be created, and there are no pods that can be evicted? Though Kubernetes can be horizontally scaled, I believe it is meaningless to scale horizontally as a DaemonSet needs a pod on each node.
Though Kubernetes can be horizontally scaled, I believe it is meaningless to scale horizontally as a DaemonSet needs a pod on each node.
A DaemonSet is a workload type that is mostly for operations workloads, e.g. shipping logs from the node or similar "system services". It is rarely a good fit for workloads that serve your users, but it can be.
what will happen when a single node (out of a few nodes) is almost out of resources, a pod can't be created, and there are no pods that can be evicted?
As I described above, workloads deployed with a DaemonSet are typically operations workloads that have, e.g., an infrastructure role in your cluster. Since these pods may be more critical (or less, depending on what you want), I would use a higher Quality of Service class for them, so that other pods are evicted when resources run low on the node.
See Configure Quality of Service for Pods for how to place your Pods in one of the Quality of Service classes (a minimal example follows this list):
Guaranteed
Burstable
Best Effort
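For illustration, a pod ends up in the Guaranteed class when every one of its containers sets resource limits equal to its requests; the name, image and figures below are made up:
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo                   # hypothetical name
spec:
  containers:
    - name: agent                  # hypothetical container
      image: example/agent:latest  # hypothetical image
      resources:
        requests:                  # requests == limits => Guaranteed QoS
          cpu: "200m"
          memory: "256Mi"
        limits:
          cpu: "200m"
          memory: "256Mi"
If limits are higher than requests (or only some resources are set) the pod is Burstable instead, and with no requests or limits at all it is Best Effort.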
You might also consider using Pod Priority and Preemption.
The question was about DaemonSets, but as a final note: workloads that serve requests from your users are typically deployed as a Deployment, and for those it is very easy to do horizontal scaling using the Horizontal Pod Autoscaler.

How to deploy a specific pod to all nodes including the master, but only that pod

I have a security pod that needs to run everywhere, including the master. I do not want, however, the master to run any other (non-Kubernetes) pods.
I know I can taint the master node, and I know I can set up affinity for a pod. Yet (unless I am misunderstanding something) that isn't quite what I want.
What I want is to set up affinity in a way that this security pod runs on every single node, including the master, as part of the same DaemonSet. It is important that I have only a single definition, because of how this security pod gets deployed.
Can this be done?
I am running Kubernetes 1.8
I think this is more or less a duplicate of this question.
What you need is a combination of two features:
A DaemonSet will allow you to schedule a Pod to run on every node.
Tolerations in the DaemonSet Pods will allow this workload to run even on nodes that have the master taint.
That way your security pods will run everywhere, even on the tainted master, because they can tolerate it. I think there is an example directly in the DaemonSet documentation.
But other pods without this toleration will not be scheduled on the master because they do not tolerate the taint.
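As a rough sketch, a DaemonSet with such a toleration could look like the following. It assumes the master carries the default node-role.kubernetes.io/master:NoSchedule taint that kubeadm applies, and the name and image are made up; on Kubernetes 1.8 the apiVersion may need to be apps/v1beta2 instead of apps/v1:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: security-agent
spec:
  selector:
    matchLabels:
      app: security-agent
  template:
    metadata:
      labels:
        app: security-agent
    spec:
      tolerations:
        # Tolerate the default master taint so the pod is scheduled there too.
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      containers:
        - name: agent
          image: example/security-agent:latest   # hypothetical image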

How to migrate pods automatically to another node in Kubernetes?

I am new to Kubernetes. I am wondering whether Kubernetes will automatically move pods to another node if that node's resources become critical.
For example, say Pod A, Pod B and Pod C are running on Node A and Pod D is running on Node B, and the resource usage on Node A becomes high. In this case, will Kubernetes migrate any of the pods running on Node A to Node B?
I have learnt about node affinity and node selectors, which are used to run pods on certain nodes. It would be helpful if Kubernetes offered a feature to migrate pods to another node automatically when resource usage is high.
Does anyone know how we can achieve this in Kubernetes?
Thanks
-S
Yes, Kubernetes can move pods to another node automatically when resource usage is high. The pod would be killed and a new pod would be started on another node. You would probably want to learn about Quality of Service classes to understand which pod would be killed first.
That said, you may want to read about Automatic Horizontal Pod Autoscaling. This may give you more control.
With Horizontal Pod Autoscaling, Kubernetes automatically scales the number of pods in a replication controller, deployment or replica set based on observed CPU utilization (or, with alpha support, on some other, application-provided metrics).
With an increase in load it makes more sense to spin up a new pod than to move a pod between nodes, to avoid disrupting the processes currently running inside the pod on the busy node.
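As a sketch, assuming a Deployment named web (a made-up name), an autoscaler can be attached with a manifest like this:
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                         # hypothetical Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70  # add replicas when average CPU exceeds 70%
Roughly the same can be done with kubectl autoscale deployment web --min=2 --max=10 --cpu-percent=70.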
You can also set a node selector in the Deployment to move the pods onto specific nodes; see:
https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/
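As a small sketch of that approach (node name and label are made up), label a node and reference the label from the pod spec; in a Deployment the same nodeSelector block goes under spec.template.spec:
# Label the target node first:
#   kubectl label nodes node-b disktype=ssd
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  nodeSelector:
    disktype: ssd        # the pod is only scheduled onto nodes with this label
  containers:
    - name: web
      image: nginx:stable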

Does HorizontalPodAutoscaler make sense when there is only one Deployment on GKE (Google Container Engine) Kubernetes cluster?

I have a "homogeneous" Kubernetes setup. By this I mean that I am only running instances of a single type of pod (an http server) with a load balancer service distributing traffic to them.
By my reasoning, to get the most out of my cluster (edit: to be concrete -- getting the best average response times to http requests) I should have:
At least one pod running on every node: not having a pod running on a node means that I am paying for the node without having it ready to serve a request.
At most one pod running on every node: the pods are threaded HTTP servers that can maximize utilization of a node on their own, so running multiple pods on a node does not gain me anything.
This means that I should have exactly one pod per node. I achieve this using a DaemonSet.
The alternative way is to configure a Deployment and apply a HorizontalPodAutoscaler to it and have Kubernetes handle the number of pods and pod to node mapping. Is there any disadvantage of my approach in comparison to this?
My evaluation is that the HorizontalPodAutoscaler is relevant mainly in heterogeneous situations, where one HorizontalPodAutoscaler can scale up a Deployment at the expense of another Deployment. But since I have only one type of pod, I would have only one Deployment and I would be scaling up that deployment at the expense of itself, which does not make sense.
HorizontalPodAutoscaler is actually a valid solution for your needs. To address your two concerns:
1. At least one pod running on every node
This isn't your real concern. The concern is underutilizing your cluster. However, you can be underutilizing your cluster even if you have a pod running on every node. Consider a three-node cluster:
Scenario A: pod running on each node, 10% CPU usage per node
Scenario B: pod running on only one node, 70% CPU usage
Even though Scenario A has a pod on each node, the cluster is actually less utilized than in Scenario B, where only one node has a pod.
2. At most one pod running on every node
The Kubernetes scheduler tries to spread pods around so that you don't end up with multiple pods of the same type on a single node. Since in your case the other nodes should be empty, the scheduler should have no problems starting the pods on the other nodes. Additionally, if you have the pod request resources equivalent to the node's resources, that will prevent the scheduler from scheduling a new pod on a node that already has one.
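As a purely illustrative sketch: on nodes with roughly 2 CPU and 4 GiB allocatable, a request like the following in the Deployment's container spec leaves no room for a second copy of the pod on the same node:
# Goes under spec.template.spec.containers[] in the Deployment.
resources:
  requests:
    cpu: "1800m"       # close to the assumed ~2 CPU allocatable per node
    memory: "3500Mi"   # close to the assumed ~4Gi allocatable per node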
Now, you can achieve the same effect whether you go with a DaemonSet or an HPA, but I personally would go with the HPA since I think it fits your semantics better, and it would also work much better if you eventually decide to add other types of pods to your cluster.
Using a DaemonSet means that the pod has to run on every node (or some subset). This is a great fit for something like a logger or a metrics collector, which is per-node. But you really just want to use available cluster resources to power your pods as needed, which matches up better with the intent of the HPA.
As an aside, I believe GKE supports cluster autoscaling, so you should never be paying for nodes that aren't needed.

Are there issues with running user pods on a Kubernetes master node?

Many of the run-throughs for deploying Kubernetes master nodes suggest you use --register-schedulable=false to prevent user pods being scheduled to the master node (e.g. https://coreos.com/kubernetes/docs/latest/deploy-master.html). On a very small Kubernetes cluster it seems somewhat wasteful of compute resources to effectively prevent an entire node from being used for pod scheduling unless absolutely essential.
The answer to this question (Will (can) Kubernetes run Docker containers on the master node(s)?) suggests that it is indeed possible to run user pods on a master node - but doesn't address whether there are any issues associated with allowing this.
The only information that I've been able to find to date that suggests there might be issues associated with allowing this is that it appears that pods on master nodes communicate insecurely (see http://kubernetes.io/docs/admin/master-node-communication/ and https://github.com/kubernetes/kubernetes/issues/13598). I assume that this would potentially allow a rogue pod running on a master node to access/hijack Kubernetes functionality not normally accessible to pods on non-master nodes. Probably not a big deal if only running pods/containers developed internally - although I guess there's always the possibility of someone hacking access to a pod/container and thereby gaining access to the master node.
Does this sound like a viable potential risk associated with this scenario (allowing user pods to run on a Kubernetes master node)? Are there any other potential issues associated with such a setup?
Running pods on the master node is definitely possible.
The security risk you mention is one issue, but if you configure service accounts, it isn't actually much different for all deployed pods to have secure remote access to the apiserver vs. insecure local access.
Another issue is resource contention. If you run a rogue pod on your master node that disrupts the master components, it can destabilize your entire cluster. Clearly this is a concern for production deployments, but if you are looking to maximize utilization of a small number of nodes in a development / experimentation environment, then it should be fine to run a couple of extra pods on the master.
Finally, you need to make sure the master node has a sufficiently large pod cidr allocated to it. In some deployments, the master only gets a /30 which isn't going to allow you to run very many pods.
Kubernetes and some Kubernetes distributions now have what is called a taint.
A taint can decide whether the master can run a pod or not.
Although running pods on the master node is not best practice, it is possible to do so.
In Kubernetes, we can read the explanation about taints here, and I believe this is also related to the scheduler.
In Kubernetes or K3s, we can check whether the nodes have taints set by describing the nodes.
# kubectl describe nodes | grep Taints
Taints: node.kubernetes.io/unreachable:NoExecute
Taints: node.kubernetes.io/unreachable:NoSchedule
Taints: node.kubernetes.io/unreachable:NoExecute
Taints: <none>
NoSchedule: Pods that do not tolerate this taint are not scheduled on the node.
PreferNoSchedule: Kubernetes avoids scheduling Pods that do not tolerate this taint onto the node.
NoExecute: Pod is evicted from the node if it is already running on the node, and is not scheduled onto the node if it is not yet running on the node.
If you want to check a specific node, whether master or agent, just name the node:
# kubectl describe nodes agent3 | grep Taints
Taints: <none>
# kubectl describe nodes master | grep Taints
Taints: <none>
This is how you apply a taint to your nodes:
kubectl taint nodes agent1 key1=value1:NoSchedule
kubectl taint nodes agent2 key1=value1:NoExecute
When your nodes are not running, they will automatically show NoSchedule or NoExecute taints, so make sure to check your nodes before checking the taints.
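And if you later want to allow pods back onto a node, the same command with a trailing "-" removes the taint; the master taint key shown below is the kubeadm default and may differ per distribution:
kubectl taint nodes agent1 key1=value1:NoSchedule-
kubectl taint nodes master node-role.kubernetes.io/master:NoSchedule-
After the second command, user pods can be scheduled on the master again (which, as noted above, is not best practice for production clusters).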
@Robert has given a clear answer. I'm just trying to explain it metaphorically with a real-world example.
Suppose your company's MANAGER is also a great coder. If he starts coding, the MANAGER kind of work will stall or become less efficient, because he can only handle one thing efficiently at a time, and that will put your entire company at risk.
To operate efficiently, hire more devs to code and don't make your MANAGER code (so you get the work you are paying him for).