kubectl: information about the node from inside Docker - Kubernetes

I am running ipyparallel in a Kubernetes cluster. I have several pods running on one node, which is fine. But for my computation I want to help ipyparallel with load balancing by choosing pods evenly across all nodes.
Is there a way to get this information from inside the pods/Docker containers?

You could use a Kubernetes Service, which does round-robin load balancing.
If you need the IP addresses, you can do a DNS A or SRV record lookup and get the IPs of all running instances: http://kubernetes.io/docs/admin/dns/
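As a minimal sketch of that approach, a headless Service (clusterIP: None) makes the cluster DNS return one A record per ready pod; the service name, selector, and port below are placeholders:

apiVersion: v1
kind: Service
metadata:
  name: ipyparallel-engines      # placeholder name
spec:
  clusterIP: None                # headless: DNS returns one A record per ready pod
  selector:
    app: ipyparallel-engine      # placeholder; must match your engine pods
  ports:
    - port: 10000                # placeholder engine port

From inside any pod, nslookup ipyparallel-engines then lists the individual pod IPs, which you can hand to ipyparallel to spread engines across nodes.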

Related

Can a worker node in Kubernetes run two different pods?

In Kubernetes we can run pods on worker nodes, and pods share the node's resources and IP address space. But if we run two different pods on the same worker node, does that mean that both pods will have different IP addresses?
To answer the main question: yes. A node can and does run different pods. Even if you have only one Deployment, you can run
kubectl describe nodes my-node
Or even
kubectl get pods --all-namespaces
to see some of the pods that Kubernetes itself runs on each node for its control plane.
About the second question, it really depends on your deployment. I'd recommend reading about kube-proxy, which is a pod that runs on every node (another example for your first question) and is in charge of the networking layer and communication within the cluster:
https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/
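On most clusters you can see it running once per node; the exact pod names depend on your distribution, but something like this usually works:

kubectl get pods -n kube-system -o wide | grep kube-proxy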
The pods will each have their own IP address on that node, and there are ways to communicate with pods directly:
https://superuser.openstack.org/articles/review-of-pod-to-pod-communications-in-kubernetes/
https://kubernetes.io/docs/concepts/cluster-administration/networking/

EKS-provisioned LoadBalancers reference all nodes in the cluster. If the pods reside on only 1 or 2 nodes, is this efficient as the ELB traffic increases?

In Kubernetes (on AWS EKS), when I create a service of type LoadBalancer, the resulting EC2 LoadBalancer is associated with all nodes (instances) in the EKS cluster, even though the selector in the service only matches pods running on 1 or 2 of these nodes (i.e. a much smaller subset of nodes).
I am keen to understand whether this will be efficient as the volume of traffic increases.
I could not find any advice on this topic and want to know if this is the correct approach.
This can introduce additional SNAT if the request arrives at a node the pods are not running on, and it also does not preserve the source IP of the request. You can change externalTrafficPolicy to Local, which associates only the nodes that run your pods with the LoadBalancer (see the sketch after the links below).
You can get more information from the following links.
Preserve source IP
EKS load balancer support
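A minimal sketch of such a Service (the name, selector, and ports are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: my-app                     # placeholder
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local     # only nodes running ready pods receive traffic; source IP is preserved
  selector:
    app: my-app                    # placeholder
  ports:
    - port: 80
      targetPort: 8080

Note that with Local, traffic is no longer rebalanced across nodes, so uneven pod placement can lead to uneven load.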
On EKS, if you are using the AWS VPC CNI (the default for EKS), then you can use the aws-alb-ingress-controller to create ELBs and ALBs.
While creating the load balancer you can use the annotation below, so that traffic is routed directly to your pods:
alb.ingress.kubernetes.io/target-type: ip
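For example, a rough Ingress sketch using that annotation (the name, backend service, and port are placeholders, and the exact apiVersion and ingress-class annotation depend on your controller version; see the docs linked below):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app                                   # placeholder
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip    # register pod IPs directly as ALB targets
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app                     # placeholder backend service
                port:
                  number: 80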
Reference:
https://github.com/aws/amazon-vpc-cni-k8s
https://github.com/kubernetes-sigs/aws-alb-ingress-controller
https://kubernetes-sigs.github.io/aws-alb-ingress-controller/guide/ingress/annotation/#target-type

Kubernetes: Same node but different IPs

I have created a K8s cluster on GCP, and I deployed an application.
Then I scaled it:
kubectl scale deployment hello-world-rest-api --replicas=3
Now when I run 'kubectl get pods', I see three pods. Their NODE value is the same, which I understand to mean they are all deployed on the same machine. But I observe that the IP value is different for all three.
If NODE is same, then why is IP different?
There are several networks in a k8s cluster. The pods are on the pod network, so every pod deployed on the nodes of a k8s cluster can see every other pod as though they were independent hosts on a network. The pod address space is different from the node address space: each pod running on a node gets a unique address from the pod network, which is separate from the node network. The networking components running on each node (the CNI plugin and kube-proxy) handle the routing and any address translation between the two networks.
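You can see the two address spaces side by side with standard kubectl flags:

kubectl get nodes -o wide    # INTERNAL-IP column: addresses on the node network
kubectl get pods -o wide     # IP column: addresses on the pod network, plus the NODE each pod runs on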

Kubernetes - Any way to load balance requests to a service running on multiple nodes without an external load balancer?

Running and scaling a deployment with multiple pods on a single node works nicely, and exposing the service with type "NodePort" balances requests to the virtual IP nicely between the multiple pods on that individual node.
I've since added an additional node to my cluster, and when exposing the Service using NodePort with pods running across 2 nodes, I of course need to target each host explicitly to hit the endpoints running in different pods on different nodes.
I would like to send requests to a single VIP and load balance across the different nodes. I am running this small cluster on my home network, so my question is: is there any way to send requests to a single VIP and load balance across the nodes/pods without using an external load balancer? E.g., is there some config within Kubernetes to handle this?
I tried using a service of type LoadBalancer (instead of NodePort) but this didn't load balance across nodes.
Take a look at Keepalived in Kubernetes.
The idea is to expose a Virtual IP (VIP) address per service, outside of the kubernetes cluster. keepalived then uses VRRP to sync this "mapping" in the local network. With 2 or more instances of the pod running in the cluster it is possible to provide HA using a single VIP address.
In my view, if all the pods on both nodes are behind the same ClusterIP Service, then requests will be load balanced across the pods on the 2 nodes. A ClusterIP Service works as an internal load balancer.
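A minimal sketch of such a Service (name, selector, and ports are placeholders); requests sent to the service's cluster IP from inside the cluster are spread across all matching pods, regardless of which node they run on:

apiVersion: v1
kind: Service
metadata:
  name: my-app            # placeholder
spec:
  type: ClusterIP
  selector:
    app: my-app           # placeholder; matches pods on any node
  ports:
    - port: 80
      targetPort: 8080

Note that a ClusterIP is only reachable from inside the cluster, so to reach it from outside your home network you would still need a NodePort or a VIP solution like the keepalived approach above.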

Kubernetes Service IP entry in IP tables

Deployed a pod using a replication controller with replicas set to 3. The cluster has 5 nodes. Created a service (type NodePort) for the pod. Now kube-proxy adds entries for the service to the iptables rules on all 5 nodes. Would this not be an overhead if there were 50 nodes in the cluster?
This is not an overhead. Every node needs to be able to communicate with services even if it does not host the pods of that service (i.e. it may host pods that connect to that service).
That said, in some very large clusters poor performance of iptables updates has been reported (mind that this is at a very, very big scale). If that is the case, you might prefer to look into solutions like Linkerd (https://linkerd.io/) or Istio (https://istio.io/).
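If you want to see what kube-proxy actually programs, you can inspect the NAT chains on a node (assuming kube-proxy in its default iptables mode; exact output varies by cluster):

sudo iptables -t nat -L KUBE-SERVICES -n | head   # one matching rule per service IP and port
sudo iptables -t nat -L KUBE-NODEPORTS -n         # the NodePort entries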