How do pods get their IP addresses? - kubernetes

I would like to know how exactly pods get an IP address, and how pods are distributed between the master and agent nodes.
I have 1 master node and 2 agent nodes. My pods are all running well, but I am curious how they get an IP address.
Some pods have a cluster IP, while others have the node's ethernet IP address. I run Nginx and MetalLB for the load balancer; Traefik and Klipper are disabled.
As you can see below, agent-03 has pods running with 2 different IP addresses:
root:/# kubectl get pods -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
ingress nginx-dep-fdcd8sdfs-gj5gff 1/1 Running 0 46h 10.42.0.80 master <none> <none>
ingress nginx-dep-fdcd8sdfs-dn80n 1/1 Running 0 46h 10.42.0.79 master <none> <none>
ingress nginx-doc-7cc85c5899-sdh55 1/1 Running 0 44h 10.42.0.82 master <none> <none>
ingress nginx-doc-7cc85c5899-gjghs 1/1 Running 0 44h 10.42.0.83 master <none> <none>
prometheus prometheus-node-exporter-6tl8t 1/1 Running 0 47h 192.168.1.3 agent-03 <none> <none>
ingress ingress-controller-nginx-ingress-controller-rqs8n 1/1 Running 5 47h 192.168.1.3 agent-03 <none> <none>
prometheus prometheus-kube-prometheus-operator-68fbcb6d67-8qsnf 1/1 Running 1 46h 10.42.2.52 agent-03 <none> <none>
ingress nginx-doc-7cc85c5899-b77j6 1/1 Running 0 43h 10.42.2.57 agent-03 <none> <none>
metallb-system speaker-sk4pz 1/1 Running 1 47h 192.168.1.3 agent-03 <none> <none>
On agent-03, the nginx-doc pod uses a cluster IP while the MetalLB speaker uses the ethernet IP. Does it depend on what service is running in the pod?
ingress nginx-doc-7cc85c5899-b77j6 1/1 Running 0 43h 10.42.2.57 agent-03 <none> <none>
metallb-system speaker-sk4pz 1/1 Running 1 47h 192.168.1.3 agent-03 <none> <none>
I can also see that the master has 2 nginx-doc pods running, which means that when I deploy 3 nginx-doc replicas, one agent will not get any because the master has taken them, and they are not divided equally.
If I have misconfigured something, which part do I need to fix?

Your pods get their IPs from your cluster's internal network plugin; these will mostly be internal (cluster) IPs.
There are different types of network interfaces, and we can use a CNI plugin as per need: https://kubernetes.io/docs/concepts/cluster-administration/networking/
Pods get exposed by a Service. There are different types of Services: ClusterIP, NodePort, and LoadBalancer. https://kubernetes.io/docs/concepts/services-networking/service/
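For example, a minimal sketch of a LoadBalancer Service (the name nginx-lb and the selector app: nginx are placeholders and assume your nginx pods carry that label); MetalLB would then assign it an external IP from its address pool:
apiVersion: v1
kind: Service
metadata:
  name: nginx-lb            # placeholder name
spec:
  type: LoadBalancer        # MetalLB assigns an external IP from its pool
  selector:
    app: nginx              # must match the labels on your nginx pods
  ports:
    - port: 80              # port exposed by the Service
      targetPort: 80        # port the nginx container listens on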
On agent-03, the nginx-doc pod uses a cluster IP while the MetalLB speaker
uses the ethernet IP. Does it depend on what service is running in the pod?
It could be due to the type of Service you are using; because of that, the IP is different and uses the ethernet address.
If your Service type is LoadBalancer using MetalLB, the Service is exposed using an external IP, not the internal IP that pods mostly have.
Run kubectl get svc -n <namespace name> and check.
I can also see that the master has 2 nginx-doc pods running, which means
that when I deploy 3 nginx-doc replicas, one agent will not get any because
the master has taken them, and they are not divided equally.
There is no guarantee of that; Kubernetes assigns pods to nodes based on a scheduling score.
You can read more about scoring here: https://kubernetes.io/docs/concepts/scheduling-eviction/kube-scheduler/
If you want to pin your pod to a specific node, for example a node with a GPU that the workload must run on, you can use node selection (a minimal sketch follows the link below):
https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/
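A minimal sketch of pinning a pod with nodeSelector, assuming a hypothetical node label gpu=true that you would add yourself with kubectl label node <node-name> gpu=true:
apiVersion: v1
kind: Pod
metadata:
  name: gpu-workload        # hypothetical pod name
spec:
  nodeSelector:
    gpu: "true"             # assumes the target node was labelled gpu=true
  containers:
    - name: app
      image: nginx          # placeholder image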

A pod's IP address is provided by the CNI driver from the range that was specified when the cluster was created using --pod-network-cidr, see here.
Some CNI implementations can implement additional behavior.
In your particular case, I believe the pods in question are started with hostNetwork: true in their PodSpec, which gives them access to the host network, so they show the node's ethernet IP.
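A minimal sketch of a pod spec using hostNetwork (DaemonSets such as the MetalLB speaker and the node exporter in your listing are typically deployed this way); the names below are placeholders:
apiVersion: v1
kind: Pod
metadata:
  name: host-network-demo   # placeholder name
spec:
  hostNetwork: true         # pod shares the node's network namespace,
                            # so kubectl get pods -o wide shows the node's IP (e.g. 192.168.1.3)
  containers:
    - name: app
      image: nginx          # placeholder image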

Related

How to communicate with a DaemonSet pod from another pod on another node?

I have a DaemonSet configuration that runs on all nodes.
Every pod listens on port 34567. I want another pod on a different node to communicate with this pod. How can I achieve that?
Find the target pod's IP address as shown below:
controlplane $ k get po -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-fb8b8dccf-42pq8 1/1 Running 1 5m43s 10.88.0.4 node01 <none> <none>
coredns-fb8b8dccf-f9n5x 1/1 Running 1 5m43s 10.88.0.3 node01 <none> <none>
etcd-controlplane 1/1 Running 0 4m38s 172.17.0.23 controlplane <none> <none>
katacoda-cloud-provider-74dc75cf99-2jrpt 1/1 Running 3 5m42s 10.88.0.2 node01 <none> <none>
kube-apiserver-controlplane 1/1 Running 0 4m33s 172.17.0.23 controlplane <none> <none>
kube-controller-manager-controlplane 1/1 Running 0 4m45s 172.17.0.23 controlplane <none> <none>
kube-keepalived-vip-smkdc 1/1 Running 0 5m27s 172.17.0.26 node01 <none> <none>
kube-proxy-8sxkt 1/1 Running 0 5m27s 172.17.0.26 node01 <none> <none>
kube-proxy-jdcqc 1/1 Running 0 5m43s 172.17.0.23 controlplane <none> <none>
kube-scheduler-controlplane 1/1 Running 0 4m47s 172.17.0.23 controlplane <none> <none>
weave-net-8cxqg 2/2 Running 1 5m27s 172.17.0.26 node01 <none> <none>
weave-net-s4tcj 2/2 Running 1 5m43s 172.17.0.23 controlplane <none> <none>
Next "exec" into the originating pod - kube-proxy-8sxkt in my example
kubectl -n kube-system exec -it kube-proxy-8sxkt sh
Next, you will use the destination pod's IP and port (10256 - my example) number to connect. Please note that you may have to install curl/telnet if your originating container's image does not include the application
# curl telnet://172.17.0.23:10256
HTTP/1.1 400 Bad Request
Content-Type: text/plain; charset=utf-8
Connection: close
You can call the pod via its IP.
Note: This IP can only be used in the k8s cluster.
The pod's IP address is an option you can use, but keep in mind that pod IPs can change from time to time due to deployment and scaling changes.
I would suggest exposing the DaemonSet using a Service of type NodePort if you have a fixed number of nodes and not much autoscaling.
If you want to connect to a specific pod, you can use the IP of the node on which that pod is scheduled together with the NodePort service:
<node IP>:<node port>
Read more at : https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
If you don't need a specific pod and any of the DaemonSet's replicas will do, you can use the service name to connect pods with each other:
my-svc.my-namespace.svc.cluster-domain.example
Read more about Service and pod DNS:
https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
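As a sketch, a ClusterIP Service in front of the DaemonSet could look like the following; the name my-daemon-svc and the label app: my-daemon are assumptions and must be adjusted to your actual DaemonSet labels:
apiVersion: v1
kind: Service
metadata:
  name: my-daemon-svc       # hypothetical name; DNS becomes my-daemon-svc.<namespace>.svc.cluster.local
spec:
  type: ClusterIP           # use NodePort instead for <node IP>:<node port> access
  selector:
    app: my-daemon          # must match the DaemonSet pod labels
  ports:
    - port: 34567           # port the DaemonSet pods listen on
      targetPort: 34567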

Redis Sentinel doesn't auto discover new slave instances

I've deployed the redis helm chart on k8s with Sentinel enabled.
I've set up a master-replicas topology with Sentinel, meaning one master and two slaves. Each pod is successfully running both the redis and sentinel containers:
NAME READY STATUS RESTARTS AGE IP NODE
my-redis-pod-0 2/2 Running 0 5d22h 10.244.0.173 node-pool-u
my-redis-pod-1 2/2 Running 0 5d22h 10.244.1.96 node-pool-j
my-redis-pod-2 2/2 Running 0 3d23h 10.244.1.145 node-pool-e
Now, I have a Python script that connects to redis and discovers the master by passing it the pods' IPs:
from redis import StrictRedis
from redis.sentinel import Sentinel

sentinel = Sentinel([('10.244.0.173', 26379),
                     ('10.244.1.96', 26379),
                     ('10.244.1.145', 26379)],
                    sentinel_kwargs={'password': 'redispswd'})
host, port = sentinel.discover_master('mymaster')
redis_client = StrictRedis(
    host=host,
    port=port,
    password='redispswd')
Let's suppose the master is on my-redis-pod-0. When I run kubectl delete pod to simulate a problem that causes the pod to be lost, Sentinel promotes one of the other slaves to master and Kubernetes gives me a new pod with redis and sentinel:
NAME READY STATUS RESTARTS AGE IP NODE
my-redis-pod-0 2/2 Running 0 3m 10.244.0.27 node-pool-u
my-redis-pod-1 2/2 Running 0 5d22h 10.244.1.96 node-pool-j
my-redis-pod-2 2/2 Running 0 3d23h 10.244.1.145 node-pool-e
The question is: how can I tell Sentinel to pick up this new IP automatically (without code changes)?
Thanks!
Instead of using IPs, you may use the DNS entries of a headless service.
A headless service is created by explicitly specifying
clusterIP: None
Then you will be able to use DNS entries like the ones below, where redis-0 will be the master:
#syntax
pod_name.service_name.namespace.svc.cluster.local
#Example
redis-0.redis.redis.svc.cluster.local
redis-1.redis.redis.svc.cluster.local
redis-2.redis.redis.svc.cluster.local
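A minimal sketch of such a headless service, assuming the pods are labelled app: redis and the service is named redis in the redis namespace (to match the example DNS entries above):
apiVersion: v1
kind: Service
metadata:
  name: redis               # service name used in the DNS entries above
  namespace: redis
spec:
  clusterIP: None           # headless: DNS resolves directly to the pod IPs
  selector:
    app: redis              # assumed label on the redis/sentinel pods
  ports:
    - name: redis
      port: 6379
    - name: sentinel
      port: 26379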
References:
What is a headless service, what does it do/accomplish, and what are some legitimate use cases for it?
https://www.containiq.com/post/deploy-redis-cluster-on-kubernetes

Access application running on Kubernetes cluster

I'm currently learning Kubernetes and started to deploy an ELK stack on a minikube cluster (running on a Linux EC2 instance). Though I was able to run all the objects successfully, I'm not able to access any of the tools from my Windows browser. I'm looking for some input on how to access all the exposed ports below from my Windows browser.
Cluster details:
NAME READY STATUS RESTARTS AGE
pod/elasticsearch-deployment-5c7d5cb5fb-g55ft 1/1 Running 0 3m43s
pod/kibana-deployment-76d8744864-ddx4h 1/1 Running 0 3m43s
pod/logstash-deployment-56849fcd7b-bjlzf 1/1 Running 0 3m43s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/elasticsearch-service ClusterIP XX.XX.XX.XX <none> 9200/TCP 3m43s
service/kibana-service ClusterIP XX.XX.XX.XX <none> 5601/TCP 3m43s
service/kubernetes ClusterIP XX.XX.XX.XX <none> 443/TCP 5m15s
service/logstash-service ClusterIP 10.XX.XX.XX <none> 9600/TCP,5044/TCP 3m43s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/elasticsearch-deployment 1/1 1 1 3m43s
deployment.apps/kibana-deployment 1/1 1 1 3m43s
deployment.apps/logstash-deployment 1/1 1 1 3m43s
NAME DESIRED CURRENT READY AGE
replicaset.apps/elasticsearch-deployment-5c7d5cb5fb 1 1 1 3m43s
replicaset.apps/kibana-deployment-76d8744864 1 1 1 3m43s
replicaset.apps/logstash-deployment-56849fcd7b 1 1 1 3m43s
Note: I also tried running all the above services as NodePort, and using the minikube IP I was able to hit the application with curl to check its status, but I'm still not able to access any of it via my browser.
Generally, if you want to expose anything outside the cluster you need to use a Service of type NodePort or LoadBalancer, or use an Ingress. If you check the Minikube documentation, you will find that Minikube supports all those types.
If you are thinking about LoadBalancer, you can use minikube tunnel.
When you are using a cloud environment and non-standard ports, you should check the firewall rules to verify that the port/traffic is open.
Regarding the error from the comment, it seems that you have an issue with Kibana port 5601.
Did you check similar threads like this or this? If that isn't helpful, please provide your Kibana configuration.
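If you go the NodePort route mentioned above, a sketch of a Kibana Service could look like the following; the selector app: kibana is an assumption and must match the pod labels of your kibana-deployment:
apiVersion: v1
kind: Service
metadata:
  name: kibana-service
spec:
  type: NodePort
  selector:
    app: kibana             # assumed label; must match the kibana-deployment pod template
  ports:
    - port: 5601            # Kibana's port
      targetPort: 5601
      nodePort: 30601       # arbitrary port in the 30000-32767 range; reach it via <node/EC2 IP>:30601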
Did you try a normal port-forward instead of minikube ip and expose? Those didn't work for me either.
Something like this may help:
kubectl port-forward deployment/kibana-kibana 5601

LoadBalancer using Metallb on bare metal RPI cluster not working after installation

I'm fiddling around with my RPI cluster that I've set up using kubeadm, and I want to make LoadBalancers work on the cluster.
The IPs for the nodes are static and set to the range of 192.168.1.100-192.168.1.103 for the master and worker nodes.
I've installed MetalLB using the official site docs.
This is my ConfigMap for MetalLB:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.2.240-192.168.2.250
From what I've understood, you're supposed to give the load balancers a range outside of the router's DHCP range? But even changing the address range to something like 192.168.1.200-192.168.1.240 doesn't change the results I'm getting.
The DHCP settings for my router.
K8s node info
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-master Ready master 5d15h v1.17.4 192.168.1.100 <none> Raspbian GNU/Linux 10 (buster) 4.19.97-v7l+ docker://19.3.8
k8s-worker-01 Ready worker 5d15h v1.17.4 192.168.1.101 <none> Raspbian GNU/Linux 10 (buster) 4.19.97-v7+ docker://19.3.8
k8s-worker-02 Ready worker 4d14h v1.17.4 192.168.1.102 <none> Raspbian GNU/Linux 10 (buster) 4.19.97-v7+ docker://19.3.8
k8s-worker-03 Ready worker 4d14h v1.17.4 192.168.1.103 <none> Raspbian GNU/Linux 10 (buster) 4.19.97-v7+ docker://19.3.8
Then I try to set up a small nginx deployment:
kubectl run nginx --image=nginx
kubectl expose deploy nginx --port=80 --type=LoadBalancer
Running kubectl get svc returns:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 5d15h
nginx LoadBalancer 10.96.53.50 <pending> 80:31253/TCP 4s
This is where I'm stuck now. I don't seem to be able to get a LoadBalancer to work with this setup and I'm not really sure where I'm going wrong.
Metallb output
NAME READY STATUS RESTARTS AGE
pod/controller-65895b47d4-q25b8 1/1 Running 0 68m
pod/speaker-bnkpq 1/1 Running 0 68m
pod/speaker-d56zg 1/1 Running 0 68m
pod/speaker-h9vpr 1/1 Running 0 68m
pod/speaker-qsl6f 1/1 Running 0 68m
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/speaker 4 4 4 4 4 beta.kubernetes.io/os=linux 68m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/controller 1/1 1 1 68m
NAME DESIRED CURRENT READY AGE
replicaset.apps/controller-65895b47d4 1 1 1 68m
MetalLB layer2 mode doesn't receive broadcast packets unless promiscuous mode is enabled.
Try below
sudo ifconfig wlan0 promisc
Add a crontab entry to run at each startup so you don't lose this change on reboot:
1. sudo crontab -e
2. Add this line at the end of the file:
@reboot sudo ifconfig wlan0 promisc
In my setup with MetalLB, Cilium and the HAProxy Ingress Controller, I had to change externalTrafficPolicy from Local to Cluster:
kubectl --namespace ingress-controller patch svc haproxy-ingress \
-p '{"spec":{"externalTrafficPolicy":"Cluster"}}'

kubernetes v1.15.0 master is not able to reach pod ip address

The Kubernetes v1.15.0 master is not able to reach the pod IP address. I was able to get this working up to 1.14, but this time it's not working any more. I have been setting up k8s clusters in EC2 using kubeadm.
Please find a log below; any comments?
[ec2-user@ip-172-31-18-31 ~]$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-172-31-16-120.ap-south-1.compute.internal Ready <none> 97m v1.15.0
ip-172-31-18-31.ap-south-1.compute.internal Ready master 116m v1.15.0
[ec2-user@ip-172-31-18-31 ~]$ kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
hello-deploy-7fd5fc7ff-dh9pw 1/1 Running 0 6m32s 10.44.0.3 ip-172-31-16-120.ap-south-1.compute.internal <none> <none>
hello-deploy-7fd5fc7ff-vrxbd 1/1 Running 0 6m32s 10.44.0.4 ip-172-31-16-120.ap-south-1.compute.internal <none> <none>
hello-pod1 1/1 Running 0 22m 10.44.0.1 ip-172-31-16-120.ap-south-1.compute.internal <none> <none>
[ec2-user@ip-172-31-18-31 ~]$ hostname
ip-172-31-18-31.ap-south-1.compute.internal
[ec2-user@ip-172-31-18-31 ~]$ curl http://10.44.0.4
Simply create a Service for your pod to access it within the cluster; the type of Service should be ClusterIP.
Although each Pod has a unique IP address, those IPs are not exposed outside the cluster without a Service. Services allow your applications to receive traffic. Services can be exposed in different ways by specifying a type in the ServiceSpec.
ClusterIP (default) - Exposes the Service on an internal IP in the cluster. This type makes the Service only reachable from within the cluster.
E.g.:
apiVersion: v1
kind: Service
metadata:
  name: test-service
spec:
  selector:
    app: test
  ports:                    # a Service needs at least one port; adjust to your pod's port
    - port: 80
      targetPort: 80
Remember to match the Service's selector to the pod's labels, as illustrated below.
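For illustration, a minimal sketch of a Deployment whose pod labels match the Service selector above (the name and image are placeholders):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deploy         # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test             # must match spec.selector of test-service
  template:
    metadata:
      labels:
        app: test           # the label the Service selects on
    spec:
      containers:
        - name: app
          image: nginx      # placeholder image
          ports:
            - containerPort: 80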