I have a deployment pod that needs to grab the IP address of a pod from another deployment and use it as an environment variable. The closest I could find was this: how-to-know-a-pods-own-ip-address-from-inside-a-container-in-the-pod
I know I can grab the IP address of a service using the environment variable $<SVC NAME>_SERVICE_HOST, which is injected into any pod created after that service. Is there a similar way to inject one deployment pod's IP address into another deployment pod after the first is created?
You should consider exposing your target pod through a ClusterIP service and accessing it via the service's cluster DNS FQDN. That way you don't have to worry about which IP your target pod currently has, because the cluster DNS and kube-proxy take care of name resolution and routing for you. You then only need to know the ClusterIP service endpoint and reach your target pod through that.
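For illustration, a minimal ClusterIP Service sketch (the name my-app, the labels, and the ports are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: ClusterIP          # the default; reachable only from inside the cluster
  selector:
    app: my-app            # matches pods labeled app=my-app
  ports:
    - port: 80             # port the service listens on
      targetPort: 8080     # container port on the target pod

A pod created after this service can then reach the target pod at my-app.<namespace>.svc.cluster.local, or via the injected MY_APP_SERVICE_HOST environment variable, without ever knowing the pod's IP.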
The official docs contain a great case study and an interactive tutorial on this subject.
Hope this helps!
There is currently no way to find another pod's IP via DNS or environment variables. For that you need to query the Kubernetes API. You can create a ServiceAccount with pod and deployment list permissions and then use a Kubernetes API client library or kubectl.
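For example, once the ServiceAccount is in place, reading a pod's IP from the API is a one-liner (target-pod is a placeholder name):

kubectl get pod target-pod -o jsonpath='{.status.podIP}'

and a minimal Role granting the needed permissions would look roughly like:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader         # placeholder name
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list"]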
Could you please explain the use of each "Kind" in OpenShift in a few short sentences?
I understand that a Deployment contains data about the image source, pod counts, limits, etc.
With a Route we can define the URL for each deployment, and the same seems possible with an Ingress; what is the difference, and when should we use a Route and when an Ingress?
And what exactly is a Service used for?
Thanks for your help in advance!
Your question cannot be answered in a few words or one-line answers; go through the links and explore more.
Deployment: It is used to declare and modify the desired state of pods. A pod runs one or more containers, and a group of duplicate pods is managed as a ReplicaSet.
Service: A Service provides a stable virtual IP address and DNS name in front of a set of pods. It automatically routes traffic to the matching pods, whose individual addresses may change at any time and are not directly predictable.
Route: Similar to the Kubernetes Ingress resource, OpenShift's Route was developed with a few additional features, including the ability to split traffic between multiple backends.
Ingress: It offers routing rules for controlling external access to the services in a Kubernetes cluster.
Difference between route and ingress?
OpenShift uses HAProxy to get (HTTP) traffic into the cluster. Other Kubernetes distributions use the NGINX Ingress Controller or something similar. You can find more in this doc.
When to use Route and when Ingress: it depends on your requirements. Compare the features of Ingress and Route (the doc linked above includes a comparison) and select according to your needs.
Exact use of service:
Each pod in a Kubernetes cluster has its own unique IP address. However, the IP addresses of the pods in a Deployment change as pods are replaced, so using pod IP addresses directly is unreliable. With a Service you always have a consistent IP address, even as the member pods' addresses change.
A Service also provides load balancing: clients call a single, dependable IP address, and the Service distributes their requests evenly across its pods.
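You can see both behaviors with kubectl (my-service is a placeholder name):

kubectl get endpoints my-service   # lists the current pod IP:port pairs behind the service

The endpoint list changes as pods come and go, while the ClusterIP in front of it stays constant, and kube-proxy spreads incoming requests across those endpoints.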
I have a pod running in a StatefulSet, but it needs to know the hostnames or addresses of all pods running in another StatefulSet in order to communicate with them. The second StatefulSet is created by a separate Helm chart. Can the pod work this out dynamically? Can I inject this information into the pod through an env, similar to setting .Status.ip?
Edit: Each StatefulSet has its own headless service.
As discussed in the comments, the way to go here is to use a Service resource, as this gives you a stable DNS name within the cluster to reach all the pods that are targeted by that service.
The DNS for the service is:
the service's name, if you access it from within the same namespace
<my-service-name>.<namespace-name>.svc.cluster.local, if you access it from another namespace, where cluster.local is the cluster domain, which might differ from cluster to cluster depending on the cluster's configuration
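Since each StatefulSet here has its own headless service, you additionally get a stable per-pod DNS name. A sketch, assuming a StatefulSet named web whose serviceName points at a headless service named web-svc:

apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  clusterIP: None     # headless: DNS returns the pod IPs directly
  selector:
    app: web
  ports:
    - port: 80

Each pod is then resolvable individually as web-0.web-svc.<namespace-name>.svc.cluster.local, web-1.web-svc.<namespace-name>.svc.cluster.local, and so on, which is exactly what you need to address all the pods of the other StatefulSet.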
If you need further configuration options, e.g. when you want to deploy your chart into different cloud environments where the cluster domain might actually differ, you can use kustomize.io to adjust your configuration at apply time.
Everywhere it is mentioned that "a ClusterIP type of service makes a pod accessible within a Kubernetes cluster".
Does that mean that, after adding a ClusterIP service for a pod, the pod can only be reached through the service's cluster IP, and we will no longer be able to connect to the pod using its own IP that existed before adding the service?
Please help me understand; I am learning Kubernetes.
When a service is created with type ClusterIP, that service is accessible only inside the cluster, as service IPs are virtual IPs. Adding a service does not take away the pod's own IP: the pod remains directly reachable from inside the cluster as before.
If you want to access the pod from outside the cluster, you can use a NodePort or LoadBalancer type service, which lets you reach the pod through a node's IP or the load balancer's IP.
The main reason for using services to access pods is that they give you a fixed location (a ClusterIP or a service name). Pods can come and go, but the service IP remains the same.
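A quick way to see both kinds of address (the service name is illustrative):

kubectl get pods -o wide       # each pod's own IP, which changes when a pod is recreated
kubectl get svc my-service     # the stable ClusterIP in front of those pods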
I was wondering how pods are accessed when no service is defined for a specific pod. If it's through environment variables, how does the cluster retrieve these?
Also, when services are defined, where on the master node is it stored?
Kind regards,
Charles
If you define a service for your app, you can access it from outside the cluster using that service.
Services are of several types, including NodePort, where you can access that port on any cluster node and reach the service regardless of the actual location of the pod.
You can access the endpoints or actual pod ports inside the cluster as well, but not from outside.
All of the above uses Kubernetes service discovery.
There are two types of service discovery, though:
Internal service discovery
External service discovery
You cannot "access" a pods container port(s) without a service. Services are objects that define the desired state of an ultimate set of iptable rule(s).
Also, services, like all other objects, are stored in etcd and maintained through your master(s).
You could however manually create an iptable rule forwarding traffic to the local container port that docker has exposed.
Hope this helps! If you still have any questions drop them here.
Just for debugging purposes, you can forward a port from your machine to one in the pod:
kubectl port-forward POD_NAME HOST_PORT:POD_PORT
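For example, assuming a pod named my-pod whose container listens on port 80:

kubectl port-forward my-pod 8080:80
curl http://localhost:8080     # from another terminal on your machine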
If you have to access it from anywhere, you should use services, but you need to have a deployment created first.
Create deployment
kubectl create -f https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/service/networking/run-my-nginx.yaml
Expose the deployment with a NodePort service
kubectl expose deployment my-nginx --type=NodePort --name=nginx-service
Then list the services and get the port of the service
kubectl get services | grep nginx-service
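The PORT(S) column will show a mapping like 80:3XXXX/TCP; using that node port (31080 below is illustrative) and any node's IP, you can then reach the service from outside the cluster:

curl http://<node-ip>:31080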
All cluster data is stored in etcd, which is a distributed key-value store. If etcd goes down, the cluster becomes unstable and no new pods can come up.
Kubernetes has a way to access any pod within the cluster. A Service is a logical way to access a set of pods bound by a selector. An individual pod can still be accessed by its IP irrespective of the service. Further, a service can be created to access the pods from outside the cluster (a NodePort service).
My pod (pod1) can internally connect to another pod using its service, like the following:
pod2-service.namespace.svc.cluster.local
However, I want pod1 to connect to pod2 using a URL like abc.com which is not registered in a DNS. Basically, I want pod1 to resolve abc.com as pod2-service.namespace.svc.cluster.local.
I was looking at hostAliases here:
https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/.
However, it needs an IP. How can I do this in Kubernetes?
I think you can use a fixed IP as the service IP of your pod2, and then use this IP in your hostAliases definition.
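A sketch of that idea, assuming pod2 is exposed by a service named pod2-service and that 10.96.100.100 is a free address inside your cluster's service CIDR:

apiVersion: v1
kind: Service
metadata:
  name: pod2-service
spec:
  clusterIP: 10.96.100.100   # pinned virtual IP; must lie within the service CIDR
  selector:
    app: pod2
  ports:
    - port: 80

And in pod1's spec, map abc.com to that pinned IP:

hostAliases:
  - ip: "10.96.100.100"
    hostnames:
      - "abc.com"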
There are a couple of things:
StatefulSets, where you will always know the pod name and can find it based on its ordinal index.
Using the Pod hostname and subdomain spec fields (only works for standalone pods, afaik).
However, pod-to-pod addressing doesn't seem to be natively supported by Kubernetes for Deployments; my guess is that the rationale here is that pods can constantly change IP addresses and names. You could use the pods' default DNS entries, but again those DNS entries vary with the IP addresses assigned to the pods.
The other solution I can think of for Deployments is to use something like Consul with stub domains; then on each pod you would have to add an initContainer or a Consul agent sidecar to register its IP with the Consul service, and every time a pod restarts it will need to overwrite its DNS registration in Consul.
If you don't want to use stub domains, there's also the option of using Pod DNS Configs.
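For reference, a Pod DNS Config goes in the pod spec; a minimal sketch (the nameserver address and search domain are placeholders for your stub-domain setup):

dnsPolicy: "None"
dnsConfig:
  nameservers:
    - 10.96.0.99             # e.g. the address of your Consul DNS
  searches:
    - service.consul         # extra search domain appended to lookups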
You can get the service IP and append it to /etc/hosts in pod1 before your application code runs:
echo "$(getent hosts pod2-service.namespace.svc.cluster.local | awk '{ print $1 }') abc.com" >> /etc/hosts
Notice: this is pretty hacky, because you have to guarantee that the service IP of pod2 stays fixed. If the service IP changes, pod1 will fail to resolve the host.