Same node affinity on Kubernetes

I have nginx Deployment pods as a front end that communicate with uwsgi Deployment pods as a back end through a ClusterIP Service.
I want each nginx pod to prefer the uwsgi pod running on its own node.
Is it possible to do that with node affinity without naming nodes?

If you want to run an nginx pod on the same node as a uwsgi pod, use pod affinity:
affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: app
          operator: In
          values:
          - uwsgi
      topologyKey: kubernetes.io/hostname
For more details about pod affinity and anti-affinity, see the Kubernetes documentation on assigning Pods to nodes.

Placing pods on particular nodes can be achieved via node affinity. However, if you want Kubernetes to decide the placement for you based on where other pods run, you will have to use inter-pod affinity.
Also, just to verify that you are doing everything the right way, refer to the pod-affinity example; a minimal sketch follows.
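For illustration, a hedged sketch of how the podAffinity block could sit inside the nginx Deployment's pod template; the labels, replica count and image are assumptions:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - uwsgi                             # matches the uwsgi pods' label
            topologyKey: kubernetes.io/hostname     # "same node"
      containers:
      - name: nginx
        image: nginx
With requiredDuringScheduling, an nginx pod will not be scheduled at all on a node that has no uwsgi pod; use preferredDuringSchedulingIgnoredDuringExecution if co-location should be best-effort.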

From what I understand you have nginx pods and uwsgi pods,
and the nginx pods proxy traffic to the uwsgi pods.
You are trying to make each nginx pod proxy traffic to the uwsgi pods on the same node.
The previously posted answers are only partially valid. Let me explain why.
Using podAffinity will indeed schedule nginx and uwsgi pods together, but it won't affect load balancing. The nginx <-> uwsgi load balancing will stay unchanged (effectively random across all uwsgi endpoints).
The easiest thing you can do is to run the nginx container and the uwsgi container in the same pod and make them communicate over localhost. That way you make sure that:
nginx and uwsgi always get scheduled on the same node
the connection over localhost forces traffic to stay inside the pod
A minimal sketch of this approach is shown below. Let me know if it solves your problem, or whether we should try a different approach.
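A hedged sketch of the single-pod (sidecar) layout, assuming hypothetical image names and ports; nginx is assumed to proxy to 127.0.0.1:8000 in its own configuration:
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web
spec:
  containers:
  - name: nginx
    image: nginx                # assumed to proxy_pass to http://127.0.0.1:8000
    ports:
    - containerPort: 80
  - name: uwsgi
    image: myrepo/uwsgi-app     # hypothetical application image
    ports:
    - containerPort: 8000       # reached by nginx over localhost
In practice you would put both containers into the same Deployment pod template; traffic between them then never leaves the pod, let alone the node.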

Related

Ingress gateway on each node

Every node runs the same set of pods. I am using the Istio ingress gateway with a NodePort, and I need traffic that enters the NodePort to be routed to pods without leaving the node.
I am unable to run istio-ingressgateway on each node to do that. Is it possible for each node to route its own traffic?
Bare metal, k8s 1.19.4, Istio 1.8
Issue
As Jonas mentioned in the comments:
The problem is that there is just one istio-ingressgateway pod on node1 and all the traffic from node2 have to come to node1
Solution
You can use kubectl scale to scale your ingress gateway replicas. The command below creates 3 ingress gateway pods instead of just one:
kubectl scale --replicas=3 deployment/istio-ingressgateway -n istio-system
Additionally, you can set this up with the Istio Operator's replicaCount value, as sketched below.
Note that if you are on a cloud provider there might be an HPA configured, and it may immediately revert your manual scaling. There is a GitHub issue about that. You can also set the HPA min and max replicas through Istio.
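A hedged IstioOperator overlay that pins the gateway replica count and the HPA bounds; verify the exact field names against your Istio 1.8 operator API before applying:
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    ingressGateways:
    - name: istio-ingressgateway
      enabled: true
      k8s:
        replicaCount: 3       # desired number of gateway pods
        hpaSpec:
          minReplicas: 3      # keep the HPA from scaling below 3
          maxReplicas: 5
Applied with istioctl install -f <file>, this keeps a one-off kubectl scale from being undone by the HPA.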

Kubernetes local cluster Pod hostPort - application not accessible

I am trying to access a web API deployed into my local Kubernetes cluster running on my laptop (Docker -> Settings -> Enable Kubernetes). Below is my Pod spec YAML.
kind: Pod
apiVersion: v1
metadata:
  name: test-api
  labels:
    app: test-api
spec:
  containers:
  - name: testapicontainer
    image: myprivaterepo/testapi:latest
    ports:
    - name: web
      hostPort: 55555
      containerPort: 80
      protocol: TCP
kubectl get pods shows test-api running. However, when I try to connect to it using http://localhost:55555/testapi/index from my laptop, I do not get a response. But I can access the application from a container in a different pod within the cluster (I did a kubectl exec -it into a different container) using the URL http://<test-api pod cluster IP>/testapi/index. Why can't I access the application using the localhost:hostPort URL?
I'd say that this is strongly not recommended.
According to k8s docs: https://kubernetes.io/docs/concepts/configuration/overview/#services
Don't specify a hostPort for a Pod unless it is absolutely necessary. When you bind a Pod to a hostPort, it limits the number of places the Pod can be scheduled, because each <hostIP, hostPort, protocol> combination must be unique. If you don't specify the hostIP and protocol explicitly, Kubernetes will use 0.0.0.0 as the default hostIP and TCP as the default protocol.
If you only need access to the port for debugging purposes, you can use the apiserver proxy or kubectl port-forward.
If you explicitly need to expose a Pod's port on the node, consider using a NodePort Service before resorting to hostPort.
So... is the hostPort really necessary in your case? Or would a NodePort Service solve it? (A minimal sketch is included below.)
If it is really necessary, then you could try using the IP returned by the command:
kubectl get nodes -o wide
http://ip-from-the-command:55555/testapi/index
Also, another test that may help your troubleshooting is checking whether your app is accessible on the Pod IP.
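For reference, a minimal NodePort Service sketch for this Pod; the nodePort value is an arbitrary example in the default 30000-32767 range:
apiVersion: v1
kind: Service
metadata:
  name: test-api
spec:
  type: NodePort
  selector:
    app: test-api       # matches the label on the test-api Pod
  ports:
  - port: 80            # port exposed inside the cluster
    targetPort: 80      # containerPort of testapicontainer
    nodePort: 30555     # reachable as http://<any-node-ip>:30555/testapi/index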
UPDATE
I've done some tests locally and understood better what the documentation is trying to explain. Let me go through my test:
First I've created a Pod with hostPort: 55555, I've done that with a simple nginx.
Then I've listed my Pods and saw that this one was running on one of my specific Nodes.
Afterwards I tried to access the Pod on port 55555 through my master node IP and another node's IP without success, but when accessing it through the IP of the Node where the Pod was actually running, it worked.
So the "issue" (and actually that's why this approach is not recommended) is that the Pod is accessible only through that specific Node IP. If it restarts and starts on a different Node, the IP will also change.

Is it possible to take pods directly offline in kubernetes loadbalancer

I have an app running on three pods behind a load balancer, all set up with Kubernetes. My problem is that when I take pods down or update them, this results in a couple of 503s before the load balancer notices the pod is unavailable and stops sending traffic to it. Is there any way to inform the load balancer directly that it should stop sending traffic to a pod, so we can avoid the 503s on pod updates?
You need to keep in mind that if the pods are down, the load balancer will still redirect traffic to the designated service ports even though no pod is serving them.
Hence you should use the rolling update mechanism in Kubernetes, which gives zero-downtime deployments (see the Deployments documentation).
Since there are 3 pods running behind a load balancer, I believe you must be using a Deployment/StatefulSet to manage them.
If by updating the pods you mean updating the Docker image version running in them, then you can make use of update strategies in the Deployment to do a rolling update. This will update your pods with zero downtime; a sketch follows below.
Additionally, you can make use of startup, readiness and liveness probes to only direct traffic to a pod when it is ready/live to serve traffic.
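As an illustration, a hedged Deployment snippet combining a rolling update strategy with a readiness probe; the names, image and health endpoint are assumptions:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never remove an old pod before its replacement is Ready
      maxSurge: 1         # allow one extra pod during the rollout
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: myrepo/my-app:v2       # hypothetical new image version
        readinessProbe:
          httpGet:
            path: /healthz            # hypothetical health-check endpoint
            port: 80
With maxUnavailable: 0, old pods are only terminated after new ones pass their readiness probe, which is what removes the 503s during an update.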
You should implement probes for pods. Read Configure Liveness, Readiness and Startup Probes.
There are readinessProbes and livenessProbes which you can make use of. In your case I think a readinessProbe is the right fit: only when the readinessProbe passes will Kubernetes start sending traffic to your Pod.
For example
apiVersion: v1
kind: Pod
metadata:
  name: my-nginx-pod
spec:
  containers:
  - name: my-web-server
    image: nginx
    readinessProbe:
      httpGet:
        path: /login    # must be a path/port your container actually serves
        port: 3000
In the above example, the Pod will only receive traffic once it passes the readinessProbe.
You can find more about probes here https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/

Load balancing in minikube

I am trying to follow this tutorial https://kubernetes.io/docs/tutorials/hello-minikube/#create-a-service
What confuses me is
kubectl expose deployment hello-node --type=LoadBalancer --port=8080
Can someone explain whether this will balance load across the pods on the node? I want to, say, make 5 requests to the service of a deployment with 5 pods and want each pod to handle one request in parallel. How can I make minikube distribute requests equally across the pods on a node?
Edit: There is also --type=NodePort; how does it differ from type LoadBalancer above? Does either of these distribute incoming requests across pods on its own?
A Service is the way to expose your Deployment to external requests. Type LoadBalancer gives the Service an external IP which forwards your requests to the Deployment's pods. Requests are distributed across the pods by default (round robin, based on the docs). If you want different types of load balancing, use Istio or another service mesh.
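For illustration, roughly the Service object that the kubectl expose command above creates; the selector label is an assumption and should match your Deployment's pod labels:
apiVersion: v1
kind: Service
metadata:
  name: hello-node
spec:
  type: LoadBalancer      # use NodePort instead if you only need a port on each node
  selector:
    app: hello-node       # assumed pod label from the hello-node Deployment
  ports:
  - port: 8080
    targetPort: 8080
On minikube there is no cloud load balancer, so the external IP stays pending; minikube service hello-node (or minikube tunnel) gives you a reachable URL, and either way kube-proxy spreads the incoming connections across the 5 pods.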

How to merge ingress-nginx with existing nginx on worker node?

One worker node already has nginx installed and listening on port 80. I want to use ingress-nginx while keeping the existing service on that worker node working. Is there any way to merge ingress-nginx with the existing nginx on the worker node?
I'm working in a bare-metal environment.
Having multiple pods listening on port 80 should not be an issue, as each of them runs in its own network namespace, unless you explicitly run them with hostNetwork: true, which in most cases you should not.
For running nginx-ingress on bare metal you should expose it with a NodePort Service on predefined ports, e.g. 32080 and 32443, which makes your ingress available on all the nodes on those ports; a sketch follows below. Then configure your network so that external 80/443 traffic is directed by your load balancer to the kube nodes on those predefined ports.
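A hedged NodePort Service sketch for the ingress controller; the name, namespace and selector label are assumptions and depend on how ingress-nginx was installed:
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx    # assumed controller pod label
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 32080     # HTTP entry point on every node
  - name: https
    port: 443
    targetPort: 443
    nodePort: 32443     # HTTPS entry point on every node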
The ingress-nginx controller has its own nginx running; it watches resources at the api-server and updates that nginx configuration dynamically, while your existing nginx uses a static configuration, so they can't be merged. You could, however, configure an Ingress resource so that your existing nginx is reached through ingress-nginx.