How to enable pod-to-pod communication across namespaces in the same cluster - Kubernetes

I have a Kubernetes cluster with 1 master and 1 worker. A Postgres DB service is running in the namespace "PG", and another service, config-server, is running in the default namespace. I am unable to access Postgres from the config-server service in the default namespace.
Kubernetes version: 1.13
Overlay network: Calico
As per the articles I read, if pods don't have any network policy defined, then a pod can reach pods in any other namespace without restriction. I need help with how to achieve this.

You should be able to reach any pod from another pod in the same cluster.
One quick way to check is to ping the service DNS name from another pod.
Get into the config-server pod and try to run the command below:
ping <postgres-service-name>.<namespace>.svc.cluster.local
You should get a ping response.
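If ping gets no reply in your setup (ICMP is not always handled for service IPs), a DNS lookup plus a TCP check against the Postgres port is a more direct test. A minimal sketch, assuming the service is named postgres, lives in the namespace pg, and listens on 5432:
kubectl exec -it <config-server-pod> -- nslookup postgres.pg.svc.cluster.local
kubectl exec -it <config-server-pod> -- nc -vz postgres.pg.svc.cluster.local 5432
The first command confirms CoreDNS resolves the cross-namespace name; the second confirms the port is reachable (nc has to be present in the image, otherwise use a busybox-based debug pod).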

I was using a Kubernetes cluster with Calico as the overlay network. If there is no network policy created, Kubernetes CoreDNS will resolve the service by default, but you have to use the <service-name>.<namespace> form in the application or in the environment variable where you call the service in another namespace. That allows cross-namespace communication.
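For example, a minimal sketch of wiring this into the config-server Deployment through an environment variable (the service name postgres, the lowercase namespace pg, the database name, and the variable name are all assumptions; adjust to whatever your application actually reads):
containers:
  - name: config-server
    image: config-server:latest
    env:
      - name: DB_URL   # hypothetical variable consumed by the application
        value: jdbc:postgresql://postgres.pg.svc.cluster.local:5432/mydb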

Related

Could service port be the same in Kubernetes

In Kubernetes, I have an application pod (A-pod), and I create a service (A-service) for this pod that exposes port 5678.
Now, in one cluster, I have 5 namespaces, and each namespace runs a service (A-service) and a pod (A-pod), so in total there are 5 A-services running.
My question is: because the 5 A-services use the same port (5678), does that cause a conflict? How do I access the different services in different namespaces by service name?
There is no conflict: each namespace gets its own distinct DNS name for its Service. If you have a Service called A-service in a Kubernetes namespace your-ns, the control plane and the DNS Service acting together create a DNS record for A-service.your-ns. Refer here for more details.
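From any pod in the cluster you can then reach each copy by qualifying the name with its namespace; within the same namespace the short name is enough. A quick illustration (the namespace names team-1 and team-2 are made up):
curl http://A-service.team-1.svc.cluster.local:5678
curl http://A-service.team-2.svc.cluster.local:5678
curl http://A-service:5678    # from a pod in the same namespace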

Can't access a Kubernetes service which has externalTrafficPolicy set to "Local"

I'm following this guide to preserve the source IP for a service of type NodePort.
kubectl create deployment source-ip-app --image=k8s.gcr.io/echoserver:1.4
kubectl expose deployment source-ip-app --name=nodeport --port=80 --target-port=8080 --type=NodePort
At this point my service is accessible externally with nodeip:nodeport
When I change the service traffic policy,
kubectl patch svc nodeport -p '{"spec":{"externalTrafficPolicy":"Local"}}'
my service is not accessible.
I found a similar issue, but the solution is not very helpful or understandable for me. I saw some GitHub threads which say it has something to do with the hostname override in kube-proxy, but I'm not clear on that either.
I'm using Kubernetes version v1.15.3. kube-proxy is running in iptables mode. I have a single master node and a few worker nodes.
I'm facing the same issue in my minikube too.
Any help would be greatly appreciated.
From the docs here:
If there are no local endpoints, packets sent to the node are dropped
So you need to use the correct node IP of the Kubernetes node to access the service. Here the correct node IP is the IP of the node where the pod is scheduled.
This is not necessary if you can make sure every node (master and workers) has a replica of the pod.
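A quick way to find the right node and test it (the deployment and service names come from the question above; the node IP and nodePort you get will differ):
kubectl get pods -l app=source-ip-app -o wide     # the NODE column shows where the pod runs
kubectl get svc nodeport -o jsonpath='{.spec.ports[0].nodePort}'
curl http://<ip-of-that-node>:<nodePort>
Running the app as a DaemonSet, or scaling the Deployment so every node has a replica, is the usual way to satisfy the last condition.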

DNS for Pod in another Kubernetes cluster on GKE

I am trying to connect to the MongoDB replica set that is hosted in another Kubernetes cluster of the same GCP project. I want to use DNS names in the connection string.
I was able to connect to mongodb hosted in the same cluster using this connection string:
mongodb://<pod-name>.<service-name>.<namespace>.svc.cluster.local:27017,<pod-name>.<service-name>.<namespace>.svc.cluster.local:27017/?replicaSet=<rs-name>
So my question is:
Is it possible to use the DNS name to reference the pod in another cluster? I looked through this document and it states:
Any pods created by a Deployment or DaemonSet have the following DNS resolution available:
pod-ip-address.deployment-name.my-namespace.svc.cluster-domain.example.
But I am not sure what the format of the cluster-domain.example part is.
You cannot use Kubernetes Service DNS (CoreDNS) to access a pod from outside the Kubernetes cluster, even from another Kubernetes cluster. You need to expose the MongoDB pods via a LoadBalancer (recommended) or NodePort type service and access them using the LoadBalancer endpoint or NodeIP:NodePort from the other cluster.
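A minimal sketch of such a Service, assuming the MongoDB pods carry the label app: mongodb and listen on 27017 (the name and label are assumptions; match them to your StatefulSet):
apiVersion: v1
kind: Service
metadata:
  name: mongodb-external
spec:
  type: LoadBalancer
  selector:
    app: mongodb           # assumed pod label
  ports:
    - port: 27017
      targetPort: 27017
On GKE this provisions a load balancer whose endpoint you can put in the connection string. Keep in mind that a replica set usually needs one addressable endpoint per member, so in practice you may need one such Service (with a per-pod selector) for each mongod pod.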

Accessing application in kubernetes cluster through ingress

I have a cluster set up locally. I have configured an ingress controller with Traefik v2.2. I have deployed my application and configured the ingress. The ingress service queries the ClusterIP. I have configured my DNS with an A record pointing to the master node.
Now the problem is that I am unable to access the application through the ingress when the A record points to the master node. I have opened a shell in the ingress controller pods on all the nodes and tried to curl the ClusterIP. I cannot get a response from the pod on the master node, but the pods on the worker nodes give me the response I want. I can also access my application when the A record points to any of the worker nodes.
I have tried disabling the firewalld service, but the result is the same.
Did I miss anything while configuring?
Note: I have spun up my cluster with kubeadm.
Thank you.

How to access pods without services in Kubernetes

I was wondering how pods are accessed when no service is defined for that specific pod. If it's through the environment variables, how does the cluster retrieve these?
Also, when services are defined, where on the master node are they stored?
Kind regards,
Charles
If you define a service for your app, you can access it from outside the cluster using that service.
Services are of several types, including NodePort, where you can access that port on any cluster node and you will have access to the service regardless of the actual location of the pod.
You can access the endpoints or actual pod ports inside the cluster as well, but not outside.
All of the above uses Kubernetes service discovery.
There are two types of service discovery though:
Internal service discovery
External service discovery
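To see the actual endpoints (pod IP:port pairs) that a service load-balances to, which is what you can hit directly from inside the cluster (the service name my-app is an assumption):
kubectl get endpoints my-app
kubectl get pods -o wide    # shows the pod IPs behind those endpoints
From another pod inside the cluster you can curl those pod IPs directly; from outside the cluster you need a NodePort or LoadBalancer service (or an Ingress).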
You cannot "access" a pods container port(s) without a service. Services are objects that define the desired state of an ultimate set of iptable rule(s).
Also, services, like all other objects, are stored in etcd and maintained through your master(s).
You could however manually create an iptable rule forwarding traffic to the local container port that docker has exposed.
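A minimal sketch of such a rule, run on the node where the container lives (the container IP 172.17.0.2 and the ports are made up for illustration):
iptables -t nat -A PREROUTING -p tcp --dport 8080 -j DNAT --to-destination 172.17.0.2:80
This forwards traffic arriving on the node's port 8080 to the container's port 80. It is essentially the plumbing kube-proxy maintains for you automatically, so doing it by hand is only worthwhile for debugging.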
Hope this helps! If you still have any questions, drop them here.
Just for debugging purposes, you can forward a port from your machine to one in the pod:
kubectl port-forward POD_NAME HOST_PORT:POD_PORT
If you have to access it from anywhere, you should use services, but you have to have a deployment created.
Create deployment
kubectl create -f https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/service/networking/run-my-nginx.yaml
Expose the deployment with a NodePort service
kubectl expose deployment my-nginx --type=NodePort --name=nginx-service
Then list the services and get the port of the service
kubectl get services | grep nginx-service
All cluster data is stored in etcd, which is a distributed key-value store. If etcd goes down, the cluster becomes unstable and no new pods can come up.
Kubernetes has a way to access any pod within the cluster. A Service is a logical way to access a set of pods bound by a selector. An individual pod can still be accessed irrespective of the service. Furthermore, a service can be created to access the pods from outside the cluster (a NodePort service).
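A minimal sketch of such a NodePort Service, assuming the pods are labelled app: my-app and listen on container port 8080:
apiVersion: v1
kind: Service
metadata:
  name: my-app-nodeport
spec:
  type: NodePort
  selector:
    app: my-app            # assumed pod label
  ports:
    - port: 80             # cluster-internal port of the service
      targetPort: 8080     # container port on the pods
      nodePort: 30080      # optional; omit to let Kubernetes pick one from 30000-32767
The application is then reachable from outside the cluster at any node's IP on port 30080.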