How to know Pod_A's IP address from container_b in Pod_B? - kubernetes

Example:
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
test pod_A 1/1 Running 1 17d 192.168.1.171 10-0-0-171.master
test pod_B 1/1 Running 1 17d 192.168.1.172 10-0-0-172.node
By using:
kubectl exec -it pod_B --namespace=test -- sh
I can get a shell into the container_b that is running in my pod_B.
But, how can I get Pod_A's IP address, "192.168.1.171", when I am in the shell of container_b?
There is a similar question, How to know a Pod's own IP address from a container in the Pod?, and some official documents like Exposing Pod Information to Containers Through Environment Variables and
Exposing Pod Information to Containers Using a DownwardApiVolumeFile. But those do not solve my problem.
-----------UPDATE-----------
Recently, I fixed it by using the K8s apiserver. In container_b of Pod_B, I can get a JSON response from http://k8s_apiserver:port/api/v1/pods, parse it, and extract the IP of Pod_A. BUT, is there an easier way to do this?
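For reference, the apiserver approach can be narrowed to a single pod read instead of listing all pods. A sketch, assuming container_b's service account is allowed to read pods in the test namespace (the token and CA paths are the standard in-pod locations; pod_A is the question's placeholder name):

```shell
# Read one pod object from the API server using the pod's own service-account token
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -s --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -H "Authorization: Bearer $TOKEN" \
  https://kubernetes.default.svc/api/v1/namespaces/test/pods/pod_A \
  | grep '"podIP"'
```

This avoids parsing the full pod list, but it still needs RBAC permission to read pods, which is why a Service (as suggested below) is usually the simpler answer.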

You can use kube-dns for service-discovery between your pods.
When you create a service named service_A for your pod pod_A, the service will get its own IP address, which points to your pod's address. The service will also get a DNS name like service_A.default.svc.cluster.local. When you want to communicate from pod_B to pod_A, you can now use the DNS name and don't have to worry about pod_A's IP address anymore.
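A minimal Service manifest for pod_A might look like this (the app: pod-a label/selector and the ports are assumptions, not from the question; pod_A would need a matching label):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: service-a
  namespace: test
spec:
  selector:
    app: pod-a        # assumed label on pod_A
  ports:
  - port: 80          # port the Service exposes
    targetPort: 8080  # port the container listens on (assumed)
```

pod_B can then reach pod_A at service-a.test.svc.cluster.local without ever knowing its pod IP.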

Related

how to access pods from host?

I'm running colima with kubernetes like:
colima start --kubernetes
I created a few running pods, and I want to access them through the browser.
But I don't know what the colima IP (or kubernetes node IP) is.
help appreciated
You can get the node IP like this:
kubectl get node
NAME STATUS ROLES AGE VERSION
nodeName Ready <none> 15h v1.26.0
Then with the nodeName:
kubectl describe node nodeName
That gives you a description of the node; you should look for this section:
Addresses:
InternalIP: 10.165.39.165
Hostname: master
Ping it to verify the network.
Find your hosts file on your Mac (/etc/hosts) and make an entry like:
10.165.39.165 test.local
This lets you access the cluster with a domain name.
Ping it to verify.
You cannot access a ClusterIP from outside the cluster.
To access your pod you have several possibilities.
If your service is of type ClusterIP, you can create a temporary connection from your host with a port forward.
kubectl port-forward svc/yourservicename localport:podport
(I would recommend this) Create a service of type NodePort.
Then
kubectl get svc -o wide
This shows you the NodePort (in the range 30000-32767).
You can now access the Pod at test.local:nodePort or IPaddress:nodePort.
Note: If you deployed in a namespace other than default, add -n yournamespace in the kubectl commands.
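A NodePort Service can also be declared in a manifest rather than created with kubectl expose. A sketch with assumed names and ports:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: yourservicename
spec:
  type: NodePort
  selector:
    app: yourapp      # assumed pod label
  ports:
  - port: 80          # Service port inside the cluster
    targetPort: 8080  # container port (assumed)
    nodePort: 30080   # optional; must fall in the node-port range
```

If nodePort is omitted, Kubernetes picks a free port from the range for you.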
Update:
If you want to start colima with an IP address, first find one on your local network which is available.
You can get your network settings with:
ifconfig
Find the network; it should be the same as that of your Internet router.
Look for the subnet mask, most likely 255.255.255.0.
The value to pass is then:
--network-address xxx.xxx.xxx.xxx/24
If the subnet mask is 255.255.0.0, use /16 instead; that is unlikely if you are connecting from home, but inside a company it is possible.
Again, check with ping and follow the steps from the beginning to verify the kubernetes node configuration.

Kubernetes DNS name for static Pods

I dug everywhere to see why we don't have DNS resolution for static pods and couldn't find a good answer. This is basic stuff, and I couldn't find a convincing explanation.
For example: you create a static pod, exec into it, and run "nslookup pod-name" or "nslookup 10-44-0-7.default.pod.cluster.local". I know the second form is for Deployments and DaemonSets, which create A records. If static pods don't get one because they are ephemeral, well, a Deployment's pods are ephemeral in the same way. Please tell me if it is possible and how to enable it.
My testing for the failed queries; all are static pods created with "kubectl run busybox-2 --image=busybox --command sleep 1d".
Used this as syntax:
In general a pod has the following DNS resolution: pod-ip-address.my-namespace.pod.cluster-domain.example.
vagrant#kubemaster:~$ kubectl get pods -n default -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
busybox-1 1/1 Running 0 135m 10.36.0.6 kubenode02 <none> <none>
busybox-2 1/1 Running 0 134m 10.44.0.7 kubenode01 <none> <none>
busybox-sleep 1/1 Running 19 (24h ago) 23d 10.44.0.3 kubenode01 <none> <none>
/ # cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local Dlink
options ndots:5
/ # nslookup 10-44-0-7.default.pod.cluster.local
Server: 10.96.0.10
Address: 10.96.0.10:53
Name: 10-44-0-7.default.pod.cluster.local
Address: 10.44.0.7
*** Can't find 10-44-0-7.default.pod.cluster.local: No answer
/ # nslookup 10-44-0-6.default.pod.cluster.local
Server: 10.96.0.10
Address: 10.96.0.10:53
*** Can't find 10-44-0-6.default.pod.cluster.local: No answer
Appreciate the help.
DNS lookups and reverse lookups do not work for pods/pod IPs (by design!).
Why?
I had a similar question. After spending a lot of time exploring, the following are the reasons that convinced me:
Pods and their IPs are ephemeral; even static pods might end up getting a different IP address when they are restarted (recreated).
It would be a huge overhead for coredns (or any DNS server) to keep track of ever-changing pod IP addresses.
For these reasons, it is recommended to access a pod through a service, because the service has a constant IP regardless of how the endpoint IPs change, and for services DNS lookups and reverse lookups work fine.
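As a side note, CoreDNS can answer pod queries of the form dashed-ip.namespace.pod.cluster.local via the pods option of its kubernetes plugin; the plugin defaults to pods disabled, though many kubeadm setups ship a Corefile with pods insecure (which would explain the one successful nslookup above). A sketch of the relevant Corefile fragment, assuming the standard cluster domain:

```
.:53 {
    errors
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure                  # answer <dashed-ip>.<ns>.pod.cluster.local
        fallthrough in-addr.arpa ip6.arpa
    }
    forward . /etc/resolv.conf
    cache 30
}
```

Even when enabled, the record merely echoes the IP already encoded in the query name (no real lookup by pod name happens), which is another reason services remain the recommended way to address pods.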

Public IP address used by a kubernetes pod

I wanted to check the address that a pod uses to connect to a FTP server.
I wanted to test it by running kubectl -n cdol exec -it pod-namer -- curl ipinfo.io/ip but the connection is blocked by an egress policy.
I know the CLUSTER-IP of the pod. It has no EXTERNAL-IP.
I know the CLUSTER-IP of the service that pod uses.
I know the node the pod is running on, so I am able to check the network interfaces in the Instana monitoring tool. But all the IPs are private IPs; how can I know what IP other services see when a pod connects to them?

How to expose service outside k8s cluster?

I have run a Hello World application using the below command.
kubectl run hello-world --replicas=2 --labels="run=load-balancer-example" --image=gcr.io/google-samples/node-hello:1.0 --port=8080
Created a service as below
kubectl expose deployment hello-world --type=NodePort --name=example-service
The pods are running
NAME READY STATUS RESTARTS AGE
hello-world-68ff65cf7-dn22t 1/1 Running 0 2m20s
hello-world-68ff65cf7-llvjt 1/1 Running 0 2m20s
Service:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
example-service NodePort 10.XX.XX.XX <none> 8080:32023/TCP 66s
Here, I am able to test it through curl inside the cluster.
curl http://10.XX.XX.XX:8080
Hello Kubernetes!
How can I access this service outside my cluster? (example, through laptop browser)
You should try
http://IP_OF_KUBERNETES:32023
IP_OF_KUBERNETES can be your master IP or a worker IP.
When you expose a port in kubernetes via NodePort, it exposes that port on every server in the cluster. Imagine you have two worker nodes with IP1 and IP2, and a pod running on IP1 while worker2 runs no pods; you can still access your pod via
http://IP1:32023
http://IP2:32023
You should be able to access it outside the cluster using the NodePort assigned (32023). Paste the following http://<IP>:<Port> in your browser and you will be able to access your app:
http://<MASTER/WORKER_IP>:32023
There are answers already provided, but I felt like this topic needed some consolidation.
This seems to be fairly easy. NodePort, as the name says, exposes your application on a port of each node. So all you have to do is find the IP address of the node the pod is on. You can do that by running:
kubectl get pods -o wide to find the IP or name of the node on which the pod is running, then just follow what the previous answers state: http://<MASTER/WORKER_IP>:PORT
There are more methods:
You can deploy Ingress Controller and configure Ingress so the application will be reachable through the internet.
You can also use kubectl proxy to expose ClusterIP service outside of the cluster. Like in this example with Dashboard.
Another way is to use LoadBalancer type, which requires underlying cloud infrastructure.
If you are using minikube you can try to run minikube service list to check your exposed services and their IP.
You can access your service using the MasterIP or a WorkerIP. If you are planning to use it in production, or in a more reliable way, you should create a service of type LoadBalancer and use the load balancer's IP to access it.
If you are using any cloud environment, make sure the firewall rules allow incoming traffic.
The load balancer will take care of redirecting requests to whichever node the pod is running on. Otherwise you have to manually hit the masterIP or workerIP depending on where the pod is running, and if the pod gets moved to a different node, you have to change the IP you are hitting.
Service Load Balancer
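A LoadBalancer Service for the example above could be sketched like this (it reuses the run=load-balancer-example label and port 8080 from the question; the external port 80 and the service name are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-service-lb
spec:
  type: LoadBalancer
  selector:
    run: load-balancer-example   # label from the question's deployment
  ports:
  - port: 80          # external port on the load balancer (assumed)
    targetPort: 8080  # container port from the question
```

On cloud providers this provisions an external load balancer, and kubectl get svc then shows its EXTERNAL-IP; on bare metal it needs an add-on such as MetalLB.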

For pod-to-pod communication, what IP should be used? The service's ClusterIP or the endpoint

I've deployed the Rancher Helm chart to my Kubernetes cluster and want to access the Rancher API/UI from another pod (i.e. a pod running an ingress-controller).
When I list the services and the endpoints, the IP addresses differ:
$ kubectl get ep | grep rancher
release-name-rancher 10.200.23.13:80 18h
and
$ kubectl get services | grep rancher
release-name-rancher ClusterIP 10.100.200.253 <none> 80/TCP 18h
Within the container of the client (i.e. the ingress controller), I see the service being represented with the service's ClusterIP:
$ env | grep RELEASE_NAME_RANCHER_SERVICE_HOST
RELEASE_NAME_RANCHER_SERVICE_HOST=10.100.200.253
Trying to reach the backend via the IP address in the Env does not work (curl 10.100.200.253 just delivers no response and blocks forever).
Trying to reach the backend via the endpoint address works:
$ curl 10.200.23.13
Found.
I'm quite confused why the endpoint IP address and the ClusterIP address differ and why it is not possible to connect to the ClusterIP address. Any hints to polish my understanding?
In Kubernetes, every Pod and Service gets its own IP address. The kubectl get services IP address is the Kubernetes-internal address of the Service; the get ep address is the address of the Pod behind it. The Service actually acts like a load balancer, and there can be multiple Pods attached to it. The Kubernetes Service documentation goes into a lot of detail about what exactly is happening here.
Kubernetes also provides an internal DNS service that can resolve Service names. You generally shouldn't use any of these IP addresses directly; instead, use the host name release-name-rancher.default.svc.cluster.local (or replace "default" if you're running in some other Kubernetes namespace).
While the ..._SERVICE_HOST environment variable you reference is supported and documented, I'd avoid using it. Of particular note, if you helm install or kubectl apply a large set of resources at once and the Pod gets created before the Service, you'll be in a consistent state except that the Pod won't actually have this environment variable. In a Helm land where Services don't have fixed names, the environment variable name won't be fixed either. Prefer the DNS name.
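For instance, from inside any pod in the cluster (such as the ingress controller above), Rancher can be reached by name rather than by either IP:

```shell
# resolves via the cluster DNS; no hard-coded ClusterIP or endpoint IP needed
curl http://release-name-rancher.default.svc.cluster.local/
```

If the client pod is in the same namespace, the short form curl http://release-name-rancher/ works as well, thanks to the search domains in /etc/resolv.conf.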