Kubernetes: How to allow two pods running in the same or different namespaces to communicate, irrespective of protocol, using a service name?

Allow two pods (say pod A and pod B) running in the same or different namespaces to communicate, irrespective of the protocol (say HTTP, HTTPS, or akka.tcp), with a valid NetworkPolicy applied.
Solutions tried:
Tried applying a network policy to both pods, and also used the service name “my-svc.my-namespace.svc.cluster.local” to make pod B communicate with pod A, which is running the service “my-svc”, but the two pods still failed to communicate.
Also tried adding the IP address and host mapping of pod A into pod B at deployment time; pod B was then able to communicate with pod A, but the inverse communication still failed.
Kindly suggest a way to fix this.

By default, pods can communicate with each other by their IP address, regardless of the namespace they're in.
You can see the IP address of each pod with:
kubectl get pods -o wide --all-namespaces
However, the normal way to communicate within a cluster is through Service resources.
A Service also has an IP address and, additionally, a DNS name. A Service is backed by a set of pods. The Service forwards requests made to it to one of the backing pods.
The fully qualified DNS name of a Service is:
<service-name>.<service-namespace>.svc.cluster.local
This can be resolved to the IP address of the Service from anywhere in the cluster (regardless of namespace).
For example, if you have:
Namespace ns-a: Service svc-a → set of pods A
Namespace ns-b: Service svc-b → set of pods B
Then a pod of set A can reach a pod of set B by making a request to:
svc-b.ns-b.svc.cluster.local
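To verify this from inside the cluster, you can run a lookup and a request from a pod in ns-a (a sketch; the deployment name app-a is a placeholder, and the image must ship nslookup/wget):
kubectl exec -n ns-a deploy/app-a -- nslookup svc-b.ns-b.svc.cluster.local
kubectl exec -n ns-a deploy/app-a -- wget -qO- http://svc-b.ns-b.svc.cluster.local:80/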

You can put the Pods behind Services and use the Service DNS names for communication. A call to service-name reaches Pods in the same namespace; a call to service-name.namespace reaches Pods in a different namespace.
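Note that if a NetworkPolicy selects pod A, as in the original question, the policy itself must also admit the cross-namespace traffic; otherwise the DNS name will resolve but connections will be dropped. A minimal sketch, assuming pod A carries the label app: pod-a and lives in ns-a (the policy name is hypothetical; the kubernetes.io/metadata.name namespace label is set automatically on recent Kubernetes versions, otherwise label ns-b yourself). Because no ports are listed, it admits any protocol, matching the question's requirement:
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-ns-b        # hypothetical name
  namespace: ns-a
spec:
  podSelector:
    matchLabels:
      app: pod-a               # assumed label on pod A
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: ns-b
EOF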

Related

Network policy behavior for multi-node cluster

I have a multi-node cluster setup. There are Kubernetes network policies defined for the pods in the cluster. I can access the services or pods using their clusterIP/podIP only from the node where the pod resides. For services with multiple pods, I cannot access the service from the node at all (I guess the service only works when it happens to direct the traffic to a pod residing on the same node I am calling from).
Is this the expected behavior?
Is it a Kubernetes limitation or a security feature?
For debugging etc., we might need to access the services from the node. How can I achieve it?
No, it is not the expected behavior for Kubernetes. Pods should be accessible from all nodes inside the same cluster through their internal IPs. A ClusterIP Service exposes the Service on a cluster-internal IP, making it reachable from within the cluster; this is the default for all Service types, as stated in the Kubernetes documentation.
Services are not node-specific; they can point to a pod regardless of where it runs in the cluster at any given moment in time. Also make sure that you are using the cluster-internal port when trying to reach the Services. If you can still connect to the pod only from the node where it is running, something may be wrong with your networking; e.g., check whether UDP ports are blocked.
EDIT: Concerning network policies: by default, a pod is non-isolated for both egress and ingress, i.e., if no NetworkPolicy resource is defined for the pod, all traffic is allowed to/from it; this is the so-called default-allow behavior. Basically, without network policies, all pods are allowed to communicate with all other pods/services in the same cluster, as described above.
If one or more NetworkPolicies apply to a particular pod, it will reject all traffic that is not explicitly allowed by those policies (meaning, a NetworkPolicy that both selects the pod and has "Ingress"/"Egress" in its policyTypes); this is the default-deny behavior.
What is more:
Network policies do not conflict; they are additive. If any policy or policies apply to a given pod for a given direction, the connections allowed in that direction from that pod is the union of what the applicable policies allow.
So yes, it is expected behavior for Kubernetes NetworkPolicy: when a pod is isolated for ingress/egress, the only connections allowed into/from the pod are those from the pod's node and those explicitly allowed by the defined NetworkPolicies.
To be compatible with it, Calico network policy follows the same behavior for Kubernetes pods.
A NetworkPolicy is applied to pods within a particular namespace; traffic from the same or a different namespace can be allowed with the help of selectors.
As for node-specific policies: nodes can't be targeted by their Kubernetes identities. Instead, CIDR notation should be used in the form of an ipBlock in the NetworkPolicy, selecting particular IP ranges to allow as ingress sources or egress destinations for the pods.
Whitelisting Calico IP addresses for each node might seem to be a valid option in this case; please have a look at the similar issue described here.
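As an illustration, an ipBlock policy of this kind might look as follows (a sketch; the policy name and the 10.0.0.0/24 range are assumptions to be replaced with your cluster's node/host IP range):
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-node-range   # hypothetical name
spec:
  podSelector: {}               # selects every pod in the namespace
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 10.0.0.0/24       # assumed node/host IP range
EOF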

Could service port be the same in Kubernetes

In Kubernetes, I have an application pod (A-pod), and I create a service (A-service) for this pod, exposing the service's port as 5678.
Now, in one cluster, I have 5 namespaces, and each namespace runs a service (A-service) and a pod (A-pod), so in total there are 5 A-services running.
My question is: because the 5 A-services all use the same port (5678), does this cause a conflict? And how do I access the different services in different namespaces by service name?
No, there is no conflict: Kubernetes assigns each Service a distinct DNS name per namespace. If you have a Service called A-service in a Kubernetes namespace your-ns, the control plane and the DNS Service acting together create a DNS record for A-service.your-ns appropriately. Refer here for more details.
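So, from inside the cluster, each copy is addressed by its own fully qualified name, and the shared port never conflicts because each Service has its own cluster IP (a sketch using the question's placeholder names; ns1 and ns2 stand in for your namespaces, and real Service names must be lowercase):
curl http://A-service.ns1.svc.cluster.local:5678/
curl http://A-service.ns2.svc.cluster.local:5678/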

How to access pods without services in Kubernetes

I was wondering how pods are accessed when no service is defined for that specific pod. If it's through the environment variables, how does the cluster retrieve these?
Also, when services are defined, where on the master node is it stored?
If you define a Service for your app, you can access it from outside the cluster using that Service.
Services come in several types, including NodePort, where you can access the exposed port on any cluster node and reach the Service regardless of the actual location of the pod.
You can access the endpoints, or the actual pod ports, from inside the cluster as well, but not from outside.
All of the above uses Kubernetes service discovery.
There are two types of service discovery, though:
Internal service discovery
External service discovery
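As an illustration of internal discovery: besides DNS, the kubelet injects environment variables for every active Service into containers started after the Service was created, which is how a pod can find a Service without any name lookup (a sketch; my-pod is a placeholder and the printed values are examples for a Service named nginx-service):
kubectl exec my-pod -- env | grep SERVICE
# e.g. NGINX_SERVICE_SERVICE_HOST=10.96.23.5
#      NGINX_SERVICE_SERVICE_PORT=80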
You cannot "access" a pod's container port(s) without a Service. Services are objects that define the desired state of an ultimate set of iptables rule(s).
Also, Services, like all other objects, are stored in etcd and maintained through your master(s).
You could, however, manually create an iptables rule forwarding traffic to the local container port that Docker has exposed.
Hope this helps! If you still have any questions, drop them here.
Just for debugging purposes, you can forward a port from your machine to one in the pod:
kubectl port-forward POD_NAME HOST_PORT:POD_PORT
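For example, forwarding local port 8080 to port 80 of an nginx pod might look like this (the pod name is a placeholder):
kubectl port-forward my-nginx-5b56ccd65f-abcde 8080:80
curl http://localhost:8080/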
If you have to access it from anywhere, you should use Services, but you need to have a Deployment created.
Create the Deployment:
kubectl create -f https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/service/networking/run-my-nginx.yaml
Expose the Deployment with a NodePort service:
kubectl expose deployment my-nginx --type=NodePort --name=nginx-service
Then list the services and get the port of the service
kubectl get services | grep nginx-service
All cluster data is stored in etcd, which is a distributed key-value store. If etcd goes down, the cluster becomes unstable and no new pods can come up.
Kubernetes has a way to access any pod within the cluster. A Service is a logical way to access a set of pods bound by a selector; an individual pod can still be accessed irrespective of any Service. Further, a Service can be created to access the pods from outside the cluster (a NodePort Service).
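For instance, you can look up a pod's IP and hit it directly from another pod, with no Service involved (a sketch; the pod name, IP, and port are placeholders):
kubectl get pod my-pod -o wide            # note the pod IP, e.g. 10.244.1.7
kubectl run tmp --rm -it --image=busybox --restart=Never -- wget -qO- http://10.244.1.7:8080/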

Kubernetes - Getting IPs of pods of a proxy service

I have a proxy service that wraps 3 pods (say pod A, pod B, pod C). Some container inside pod A needs to get virtual IPs of other two pods. How can I do this?
Two options:
Talk to the Kubernetes API to get the endpoints for the Service, either with kubectl get endpoints SVCNAME or by issuing a GET against the /api/v1/namespaces/{namespace}/endpoints/{svcname} path on the apiserver.
Less likely to be of use, but if you create a Service without a cluster IP (a headless Service), the DNS for that Service will return a list of the IP addresses of the backing pods rather than a virtual IP address.
The IPs returned in either case are the IP addresses of all the pods backing the service.
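For example, to pull just the pod IPs out of the endpoints with kubectl (the Service name my-svc is a placeholder):
kubectl get endpoints my-svc -o jsonpath='{.subsets[*].addresses[*].ip}'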

How to discover headless service endpoints

Is there a way to discover all the endpoints of a headless service from outside the cluster?
Preferably using DNS or Static IPs
By watching changes to a list of Endpoints:
GET /api/v1/watch/namespaces/{namespace}/endpoints
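From a machine outside the cluster, one way to reach that watch endpoint is through kubectl proxy, which authenticates for you (a sketch; the namespace is a placeholder and 8001 is the proxy's default port):
kubectl proxy &
curl http://127.0.0.1:8001/api/v1/watch/namespaces/default/endpoints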
Headless Services are a group of Pod IPs. Pod IPs are not (generally) available outside the cluster/cloud-provider.
Are you trying to get external IPs for a headless service or are you within the same network (e.g. in the GCE project) but not in the cluster?
The DNS addon is exactly what you're after. From the docs:
For example, if you have a Service called "my-service" in Kubernetes Namespace "my-ns", a DNS record for "my-service.my-ns" is created. Pods which exist in the "my-ns" Namespace should be able to find it by simply doing a name lookup for "my-service". Pods which exist in other Namespaces must qualify the name as "my-service.my-ns". The result of these name lookups is the cluster IP.
And in the case of a headless service:
DNS is configured to return multiple A records (addresses) for the Service name, which point directly to the Pods backing the Service.
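For reference, a headless Service is simply one whose clusterIP is set to None; a minimal sketch (the name and label are placeholders):
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: my-headless-svc
spec:
  clusterIP: None             # "headless": DNS returns the pod IPs directly
  selector:
    app: my-app               # assumed pod label
  ports:
  - port: 80
EOF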
However, this DNS-based discovery is only available inside the cluster. But KubeDNS is just another pod:
kubectl get po --namespace=kube-system
kubectl describe po kube-dns-pod-name --namespace=kube-system
Which means you can create a service with an externally accessible address to expose this service. Just use a selector matching your kube-dns pod label.
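A sketch of such a Service (the name is hypothetical, and k8s-app: kube-dns is the label used by standard kube-dns deployments; verify it on your cluster first):
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: dns-external          # hypothetical name
  namespace: kube-system
spec:
  type: NodePort              # or LoadBalancer, for an external address
  selector:
    k8s-app: kube-dns         # assumed label; check with kubectl get po --namespace=kube-system --show-labels
  ports:
  - port: 53
    protocol: UDP
EOF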
http://kubernetes.io/v1.1/docs/user-guide/services.html#dns
https://github.com/kubernetes/kubernetes/blob/release-1.1/cluster/addons/dns/README.md