Kubernetes Pods accessible from outside cluster - kubernetes

I have two Kubernetes clusters. I have run an Nginx server pod on one cluster; its pod IP is 10.40.0.1. When I ping 10.40.0.1 from any node of this cluster, the ping works.
When I ping the first cluster's pod from a node of the second cluster, it does not work. How should I set up the pod so that it is also reachable from the second cluster's nodes?
I have deployed Nginx server with the below YAML file.
apiVersion: v1
kind: Pod
metadata:
  name: serverpod
spec:
  containers:
  - name: nginx
    image: nginx:latest
    ports:
    - containerPort: 80
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - Node1
I have tried hostNetwork: true, but it is not working.

You have posted a pod spec with nodeAffinity in your question, which means your pod will always run on Node1.
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - Node1
If you set hostNetwork: true, you can access the pod with curl <IP of Node1, or just Node1 if the name resolves to its IP>. You can also expose the pod via kubectl expose pod serverpod --type NodePort --name serverpod --port 80; in this case you can curl <any node IP>:<nodePort assigned to the service> (check the assigned port with kubectl get svc serverpod) and the request will be routed to your pod by kube-proxy. These methods work out of the box and do not require you to install any load balancer, ingress controller or service mesh.
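If you prefer a declarative route instead of kubectl expose, a Service manifest like the sketch below does the same thing. It assumes the pod has been given the label app: serverpod (the pod spec above has no labels yet, so you would add that under metadata.labels); the explicit nodePort value 31000 is just a choice within the allowed 30000-32767 range:
apiVersion: v1
kind: Service
metadata:
  name: serverpod
spec:
  type: NodePort
  selector:
    app: serverpod        # must match a label on the pod
  ports:
  - port: 80              # cluster-internal Service port
    targetPort: 80        # the containerPort of the Nginx container
    nodePort: 31000       # opened on every node; curl <any node IP>:31000
With this in place, nodes of the second cluster can reach the pod at <any first-cluster node IP>:31000, provided that port is reachable over the network between the two clusters.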

Related

How can I deploy pods in the same node in AWS EKS

I want to deploy pods on the same node in the EKS cluster.
For example, if I deploy with the following command:
kubectl apply -f <deployment.yaml>
Is there a way to assign the deployments to a specific node?
Yes, you can use nodeSelector labels and node affinity.
nodeSelector example:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    disktype: ssd
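For the nodeSelector above to match, the target node has to carry that label; it can be added with (the node name is a placeholder):
kubectl label nodes <your-node-name> disktype=ssd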
Node affinity example:
apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/e2e-az-name
            operator: In
            values:
            - e2e-az1
            - e2e-az2
  containers:                     # any container will do; required for a valid Pod spec
  - name: with-node-affinity
    image: k8s.gcr.io/pause:2.0
Documentation

Can a Pod with an affinity for one node's label, but without a toleration for that node's taint, be scheduled to that node?

Say you have Node1 with taint node1taint:NoSchedule and label node1specialkey=asdf.
And Node2 with no taints.
Then you create PodA with affinity to Node1:
apiVersion: v1
kind: Pod
metadata:
  labels:
    name: PodA
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: node1specialKey
            operator: Exists
  containers:
  - image: busybox
    name: PodA
Which node will the pod be scheduled to? Will the affinity override the taint?
Thanks!
The pod will not be scheduled anywhere: it does not tolerate Node1's taint, and its required affinity rules out Node2.
Here is the missing toleration that would, in combination with the affinity, allow PodA to be scheduled on Node1.
  tolerations:
  - key: "node1taint"
    operator: "Exists"
    effect: "NoSchedule"
A taint is stronger than an affinity: the pod needs the toleration too, because affinity alone does not override a NoSchedule taint.
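Putting the two together, here is a sketch of a full PodA manifest that keeps the taint and label names from the question; the lowercase names and the sleep command are small adjustments so the manifest is actually deployable:
apiVersion: v1
kind: Pod
metadata:
  name: poda                       # pod names must be lowercase DNS-1123 labels
  labels:
    name: PodA
spec:
  tolerations:
  - key: "node1taint"              # tolerate Node1's taint
    operator: "Exists"
    effect: "NoSchedule"
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: node1specialkey   # label keys are case-sensitive, so match node1specialkey exactly
            operator: Exists
  containers:
  - name: poda
    image: busybox
    command: ["sleep", "3600"]     # keep the busybox container running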

Kubernetes pods scheduled to non-tainted node

I have created a GKE Kubernetes cluster with two workloads deployed on it. There is a separate node pool for each workload. The node pool for the celery workload is tainted with celery-node-pool=true.
The pod's spec has the following toleration:
  tolerations:
  - key: "celery-node-pool"
    operator: "Exists"
    effect: "NoSchedule"
Despite the node taint and the toleration, some of the pods from the celery workload are deployed to the non-tainted node. Why is this happening, and am I doing something wrong? What other taints and tolerations should I add to keep the pods on specific nodes?
Using Taints:
Taints allow a node to repel a set of pods. You have not specified the effect in the taint; it should be celery-node-pool=true:NoSchedule. Also, note that a toleration does not pin the pod to the tainted node pool: if you want the other nodes to repel these pods as well, you need to add a different taint to those nodes and not give the pods a toleration for it.
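For example, the other (non-celery) nodes could be given their own taint; the node name and the workload=general key/value are placeholders here:
kubectl taint nodes <other-node-name> workload=general:NoSchedule
Celery pods would then be repelled by those nodes, while the pods that belong there get a toleration for workload=general.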
Using Node Selector:
You can constrain a Pod to only be able to run on particular node(s), or to prefer to run on particular nodes.
You can label the node:
kubectl label nodes kubernetes-foo-node-1.c.a-robinson.internal node-pool=true
Add node selector in the pod spec:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    node-pool: "true"    # label values must be strings, so quote "true"
Using Node Affinity
nodeSelector provides a very simple way to constrain pods to nodes with particular labels. The affinity/anti-affinity feature greatly expands the types of constraints you can express.
apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: node-pool
            operator: In
            values:
            - "true"
  containers:
  - name: with-node-affinity
    image: k8s.gcr.io/pause:2.0
What other taints and tolerations should I add to keep the pods on specific nodes?
You should also add a node selector to pin your pods to the tainted node; otherwise the pod is free to go to a non-tainted node if the scheduler chooses to place it there.
kubectl taint node node01 hostname=node01:NoSchedule
If I taint node01 and want my pods to be placed on it, the toleration needs a node selector as well.
nodeSelector provides a very simple way to constrain (attract) pods to nodes with particular labels.
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  tolerations:
  - key: "hostname"
    operator: "Equal"
    value: "node01"
    effect: "NoSchedule"
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    kubernetes.io/hostname: node01

Is it possible to assign a pod of StatefulSet to a specific node of a Kubernetes cluster?

I have a 5-node cluster (1 master / 4 workers). Is it possible to configure a StatefulSet so that I can make a pod (or pods) run on a given node, knowing it has sufficient capacity, rather than the Kubernetes scheduler making this decision?
Let's say my StatefulSet creates 4 pods (replicas: 4) named myapp-0, myapp-1, myapp-2 and myapp-3. Now what I am looking for is:
myapp-0 pod-- get scheduled over---> worker-1
myapp-1 pod-- get scheduled over---> worker-2
myapp-2 pod-- get scheduled over---> worker-3
myapp-3 pod-- get scheduled over---> worker-4
Please let me know if this can be achieved somehow. If I add a toleration to the pods of a StatefulSet, it will be the same for all the pods, and all of them could get scheduled onto a single node matching the taint.
Thanks, J
You can delegate responsibility for scheduling arbitrary subsets of pods to your own custom scheduler(s) that run(s) alongside, or instead of, the default Kubernetes scheduler.
You can write your own custom scheduler. A custom scheduler can be written in any language and can be as simple or complex as you need. Below is a very simple example of a custom scheduler written in Bash that assigns a node randomly. Note that you need to run this along with kubectl proxy for it to work.
SERVER='localhost:8001'
while true;
do
    # Find all pending pods that request this scheduler (schedulerName == "my-scheduler").
    for PODNAME in $(kubectl --server $SERVER get pods -o json | jq '.items[] | select(.spec.schedulerName == "my-scheduler") | select(.spec.nodeName == null) | .metadata.name' | tr -d '"');
    do
        # Pick a random node and bind the pod to it via the Binding subresource.
        NODES=($(kubectl --server $SERVER get nodes -o json | jq '.items[].metadata.name' | tr -d '"'))
        NUMNODES=${#NODES[@]}
        CHOSEN=${NODES[$[$RANDOM % $NUMNODES]]}
        curl --header "Content-Type:application/json" --request POST --data '{"apiVersion":"v1", "kind": "Binding", "metadata": {"name": "'$PODNAME'"}, "target": {"apiVersion": "v1", "kind": "Node", "name": "'$CHOSEN'"}}' http://$SERVER/api/v1/namespaces/default/pods/$PODNAME/binding/
        echo "Assigned $PODNAME to $CHOSEN"
    done
    sleep 1
done
Then, in your StatefulSet's pod template spec, add a schedulerName: my-scheduler line (it has to match the name the scheduler script filters on).
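A minimal sketch of where that field goes; the myapp name, labels and image are placeholders, not something from your setup:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myapp
spec:
  serviceName: myapp
  replicas: 4
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      schedulerName: my-scheduler   # hand these pods to the custom scheduler above
      containers:
      - name: myapp
        image: nginx:latest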
You can also use pod affinity and anti-affinity:
Example:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis-cache
spec:
  serviceName: "redis-cache"   # StatefulSets require serviceName; assumes a headless service with this name
  selector:
    matchLabels:
      app: store
  replicas: 3
  template:
    metadata:
      labels:
        app: store
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - store
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: redis-server
        image: redis:3.2-alpine
The below YAML snippet of the web-server StatefulSet has podAntiAffinity and podAffinity configured. This informs the scheduler that all its replicas are to be co-located with pods that have the selector label app=store, and it also ensures that no two web-server replicas are co-located on a single node.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web-server
spec:
  serviceName: "web-server"   # StatefulSets require serviceName; assumes a headless service with this name
  selector:
    matchLabels:
      app: web-store
  replicas: 3
  template:
    metadata:
      labels:
        app: web-store
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - web-store
            topologyKey: "kubernetes.io/hostname"
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - store
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: web-app
        image: nginx:1.12-alpine
If we create the above two StatefulSets, our three-node cluster should look like below.
node-1         node-2         node-3
webserver-1    webserver-2    webserver-3
cache-1        cache-2        cache-3
The above example uses a podAntiAffinity rule with topologyKey: "kubernetes.io/hostname" to deploy the redis cache so that no two instances are located on the same host.
You can also simply define the pods individually, each with its own configuration file that pins it to a specific node. There is the nodeName field, which is the simplest form of node selection constraint, but due to its limitations it is typically not used. nodeName is a field of PodSpec: if it is non-empty, the scheduler ignores the pod and the kubelet running on the named node tries to run the pod. Thus, if nodeName is provided in the PodSpec, it takes precedence over the above methods for node selection.
Here is an example of a pod config file using the nodeName field:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  nodeName: kube-worker-1
More information about schedulers: custom-scheduler.
Take a look at this article: assigning-pods-kubernetes.
You can use the following KubeMod ModRule:
apiVersion: api.kubemod.io/v1beta1
kind: ModRule
metadata:
  name: statefulset-pod-node-affinity
spec:
  type: Patch
  match:
    # Select pods named myapp-xxx.
    - select: '$.kind'
      matchValue: Pod
    - select: '$.metadata.name'
      matchRegex: myapp-.*
  patch:
    # Patch the selected pods such that their node affinity matches nodes that contain a label with the name of the pod.
    - op: add
      path: /spec/affinity/nodeAffinity/requiredDuringSchedulingIgnoredDuringExecution
      value: |-
        nodeSelectorTerms:
          - matchExpressions:
              - key: accept-pod/{{ .Target.metadata.name }}
                operator: In
                values:
                  - 'true'
The above ModRule will monitor for the creation of pods named myapp-* and will inject a nodeAffinity section into their resource manifest before they get deployed. This will instruct the scheduler to schedule the pod to a node which has a label accept-pod/<pod-name> set to true.
Then you can assign future pods to nodes by adding labels to the nodes:
kubectl label node worker-1 accept-pod/myapp-0=true
kubectl label node worker-2 accept-pod/myapp-1=true
kubectl label node worker-3 accept-pod/myapp-2=true
...
After the above ModRule is deployed, creating the StatefulSet will trigger the creation of its pods, which will be intercepted by the ModRule. The ModRule will dynamically inject the nodeAffinity section using the name of the pod.
If, later on, the StatefulSet is deleted, deploying it again will lead to the pods being scheduled on the same exact nodes as they were before.
You can do this using nodeSelector and node affinity (take a look at this guide: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/); either one can be used to run pods on specific nodes. But if the node has taints (restrictions), then you need to add tolerations for those nodes (more can be found here: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/). Using this approach, you can specify a list of nodes to be used for your pod's scheduling; the catch is that if you specify, for example, 3 nodes and you have 5 pods, then you have no control over how many pods will run on each of these nodes. They get distributed by the kube-scheduler.
Another relevant use case: if you want to run one pod on each of the specified nodes, you can create a DaemonSet and select the nodes using nodeSelector, as in the sketch below.
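A minimal sketch of that DaemonSet approach, assuming you label the chosen nodes with node-group=myapp (the name, label and image here are placeholders):
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      nodeSelector:
        node-group: myapp     # only nodes carrying this label run a copy of the pod
      containers:
      - name: myapp
        image: nginx:latest
Label the nodes with kubectl label nodes <node-name> node-group=myapp and the DaemonSet will run exactly one pod on each of them.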
You can use podAntiAffinity to distribute replicas to different nodes.
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"
  replicas: 4
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - nginx
            topologyKey: "kubernetes.io/hostname"
This would spread web-0, web-1, web-2 and web-3 across worker1, worker2, worker3 and worker4, one replica per node (the exact pod-to-node mapping is left to the scheduler).
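You can verify the placement once the pods are running; the -o wide output includes the node each pod landed on:
kubectl get pods -l app=nginx -o wide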
Take a look at this guideline: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
However, what you are looking for is the nodeSelector directive, which should be placed in the pod spec.

In k8s, how to let a node choose by itself what kinds of pods it will accept

I want one of my nodes to only accept certain kinds of pods.
So I wonder, is there a way to make one node only accept pods with some specific labels?
You have two options:
Node affinity: a property of Pods that attracts them to a set of nodes.
Taints & tolerations: taints are the opposite of node affinity; they allow a node to repel a set of Pods.
Using Node Affinity
You need to label your nodes:
kubectl label nodes node1 mylabel=specialpods
Then when you launch Pods specify the affinity:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: mylabel
            operator: In
            values:
            - specialpods
  containers:
  - name: nginx-container
    image: nginx
Using Taint & Toleration
Taint & Toleration work together: you taint a node, and then specify the toleration for pod, only those Pods will be scheduled on node whose toleration "matches" taint:
Taint: kubectl taint nodes node1 mytaint=specialpods:NoSchedule
Add toleration in Pod Spec:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  tolerations:
  - key: "mytaint"
    operator: "Equal"
    value: "specialpods"
    effect: "NoSchedule"
  containers:
  - name: nginx-container
    image: nginx
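To double-check the setup, you can inspect the taint on the node and the final pod placement (node1 as in the taint command above):
kubectl describe node node1 | grep -i taints
kubectl get pods -o wide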