Each node runs the same pods and all the nodes behave the same. I am using the Istio ingress gateway with a NodePort. I need traffic that enters the NodePort to be routed to pods without leaving the node.
I am unable to run an istio-ingressgateway pod on each node to do that. Is it possible for each node to route its own traffic?
Bare-metal, k8s 1.19.4, Istio 1.8
Issue
As @Jonas mentioned in the comments
The problem is that there is just one istio-ingressgateway pod on node1 and all the traffic from node2 have to come to node1
Solution
You can use kubectl scale to scale your ingress gateway replicas. The command below will create 3 ingress gateway pods instead of just one.
kubectl scale --replicas=3 deployment/istio-ingressgateway -n istio-system
Additionally, you can set this up with the Istio Operator replicaCount value.
Note that if you are on a cloud provider there might be an HPA configured, and it might immediately scale the pods back. There is a GitHub issue about that. You can also set the HPA min and max replicas with Istio.
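A minimal sketch of such an IstioOperator overlay, assuming the default istio-ingressgateway component (the replica numbers are just examples):
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    ingressGateways:
    - name: istio-ingressgateway
      enabled: true
      k8s:
        # fixed replica count; alternatively control the HPA bounds below
        replicaCount: 3
        hpaSpec:
          minReplicas: 3
          maxReplicas: 5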
Related
I'm following this guide to preserve the source IP for a Service of type NodePort.
kubectl create deployment source-ip-app --image=k8s.gcr.io/echoserver:1.4
kubectl expose deployment source-ip-app --name=clusterip --port=80 --target-port=8080
At this point my service is accessible externally via <node-ip>:<node-port>
When I change the service traffic policy,
kubectl patch svc nodeport -p '{"spec":{"externalTrafficPolicy":"Local"}}'
my service is not accessible.
I found a similar issue, but the solution is not very helpful or understandable to me. I saw some GitHub threads which say it has something to do with the hostname override in kube-proxy, but I'm not clear on that either.
I'm using Kubernetes version v1.15.3. kube-proxy is running in iptables mode. I have a single master node and a few worker nodes.
I'm facing the same issue on my minikube too.
Any help would be greatly appreciated.
From the docs here
If there are no local endpoints, packets sent to the node are dropped
So you need to use the correct node IP of the Kubernetes node to access the service. Here, the correct node IP is the IP of the node where the pod is scheduled.
This is not necessary if you can make sure every node (master and workers) has a replica of the pod.
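For example, you can check which node the pod landed on and then hit that node's address (the app=source-ip-app label comes from the deployment created above; the IP and port are placeholders):
kubectl get pods -l app=source-ip-app -o wide   # the NODE column shows where the pod runs
curl http://<that-node-ip>:<node-port>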
I have nginx deployment pods as the front end that communicate with uwsgi deployment pods as the back end through a ClusterIP service.
I want the nginx pod to use, in priority, the uwsgi pod that is running on its node.
Is it possible to do that with node affinity without naming nodes?
If you want to run the nginx pod on the same node as the uwsgi pod, use pod affinity.
podAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
  - labelSelector:
      matchExpressions:
      - key: app
        operator: In
        values:
        - uwsgi
    topologyKey: kubernetes.io/hostname   # required; co-locates the pods on the same node
For more details about pod affinity and anti-affinity click here
The provisioning of pods and their ordering/scheduling on the same node can be achieved via node affinity. However, if you want Kubernetes to decide it for you, you will have to use inter-pod affinity.
Also, just to verify that you are doing everything the right way, please refer to the pod-affinity example.
From what I understand you have nginx pods and uwsgi pods,
and nginx pods proxy traffic to uwsgi pods.
And you are trying to make nginx pods proxy traffic to uwsgi pods that are on the same node.
Previously posted answers are only partially valid. Let me explain why.
Using PodAffinity will indeed schedule the nginx and uwsgi pods together, but it won't affect load balancing. The nginx <-> uwsgi load balancing will stay unchanged (it will be random).
The easiest thing you can do is to run the nginx container and the uwsgi container in the same pod and make them communicate over localhost (see the sketch after the list below). This way you make sure that:
nginx and uwsgi always get scheduled on the same node
connection over localhost forces traffic to stay inside the pod.
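A minimal sketch of such a two-container pod, assuming the uwsgi application listens on 127.0.0.1:8000 and the image name my-uwsgi-app:latest is hypothetical (the nginx config that proxies to localhost is omitted):
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web
spec:
  containers:
  - name: nginx
    image: nginx:1.19            # proxies requests to 127.0.0.1:8000
    ports:
    - containerPort: 80
  - name: uwsgi
    image: my-uwsgi-app:latest   # hypothetical application image
    ports:
    - containerPort: 8000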
Let me know if this approach solves your problem, or if for some reason we should try a different approach.
I am using Traefik 2.1 with Kubernetes CRDs. My setup is very similar to the user guide. In my application, I have defined a livenessProbe and readinessProbe on the deployment. I assumed that Traefik would route requests to the Kubernetes load balancer, and Kubernetes would know if the pod was ready or not. Kubernetes would also restart the container if the livenessProbe failed. Is there a default healthCheck for Kubernetes CRDs? Does Traefik use the load balancer provided by the Kubernetes service, or does it get the IPs of the pods underneath the service and route directly to them? Is it recommended to use a healthCheck with Traefik CRDs? Is there a way to not have to repeat the config for the readinessProbe and the Traefik CRD healthCheck? Thank you
Is there a default healthCheck for Kubernetes CRDs?
No
Does Traefik use the load balancer provided by the Kubernetes service, or does it get the IPs of the pods underneath the service and route directly to them?
No. It gets the IPs directly from the Endpoints object.
Is there a way to not have to repeat the config for the readinessProbe and the Traefik CRD healthCheck?
Traefik will update its configuration when it sees that the Endpoints object does not have IPs, which happens when the liveness/readiness probes fail. So you can configure readiness and liveness probes on your pods and expect Traefik to honour that.
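For example, a probe configuration like this on the pod spec is enough for Traefik to stop routing to a pod once it becomes unready (the /healthz path and port 8080 are assumptions):
readinessProbe:
  httpGet:
    path: /healthz     # assumed health endpoint
    port: 8080
  periodSeconds: 5
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 10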
Is there a way to not have to repeat the config for the readinessProbe and the Traefik CRD healthCheck?
The benefit of the CRD approach is that it is dynamic in nature. Even if you are using the CRD along with the health-check mechanism provided by the CRD, the liveness and readiness probes on the pods are still necessary for Kubernetes to restart the pods and to avoid sending traffic to them from other pods that use the Kubernetes service corresponding to those pods.
I am trying to follow this tutorial https://kubernetes.io/docs/tutorials/hello-minikube/#create-a-service
What confuses me is
kubectl expose deployment hello-node --type=LoadBalancer --port=8080
Can someone explain if this will balance load across the pods on the node? I want to, say, make 5 requests to the service of a deployment with 5 pods and want each pod to handle each request in parallel. How can I make minikube distribute requests equally across the pods on a node?
Edit: There is also --type=NodePort; how does it differ from the LoadBalancer type above? Do either of these distribute incoming requests across pods on their own?
A Service is the way to expose your deployment to external requests. Type LoadBalancer gives the Service an external IP which forwards your requests to the deployment's pods. The Service load-balances across the pods, round robin by default (based on the docs). If you want different types of load balancing, use Istio or another service mesh.
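For example, on minikube you can reach either service type like this (a sketch; hello-node is the service from the tutorial, and the node port is a placeholder):
minikube service hello-node        # opens a URL/tunnel to the LoadBalancer service
# or, for NodePort-style access:
kubectl get svc hello-node         # note the assigned node port
curl http://$(minikube ip):<node-port>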
I created a cluster on Google Kubernetes Engine with the Cluster Autoscaler option enabled.
I want to configure the scaling behaviour, such as --scale-down-delay-after-delete, according to https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md .
But I found no Pod or Deployment in kube-system that is the cluster autoscaler.
Anyone has ideas?
Edit:
I am not talking about the Horizontal Pod Autoscaler.
And I hope I can configure it like this:
$ gcloud container clusters update cluster-1 --enable-autoscaling --scan-interval=5 --scale-down-unneeded-time=3m
ERROR: (gcloud.container.clusters.update) unrecognized arguments:
--scan-interval=5
--scale-down-unneeded-time=3m
It is not possible according to https://github.com/kubernetes/autoscaler/issues/966
Probably because on GKE there is no way to access the autoscaler executable (which is what it appears to be).
You can't even view the logs of the autoscaler on GKE: https://github.com/kubernetes/autoscaler/issues/972
One way is to not enable the GKE autoscaler, and then manually install it on a worker node - per the project's docs:
Users can put it into kube-system namespace (Cluster Autoscaler doesn't scale down node with non-mirrored kube-system pods running on them) and set a priorityClassName: system-cluster-critical property on your pod spec (to prevent your pod from being evicted).
https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler#deployment
I would also think you could annotate the autoscaler pod(s) with the following:
"cluster-autoscaler.kubernetes.io/safe-to-evict": "false"
If I understand correctly, you need this:
Check your deployments name by:
kubectl get deployments
And autoscale it by:
kubectl autoscale deployment your_deployment_name --cpu-percent=100 --min=1 --max=10
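You can then check the resulting HorizontalPodAutoscaler with:
kubectl get hpa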