Accessing application in Kubernetes cluster through ingress

I have a cluster set up locally. I have configured an ingress controller with Traefik v2.2, deployed my application, and configured the ingress. The ingress service queries the ClusterIP. I have configured my DNS with an A record for the master node.
Now the problem is that I am unable to access the application through the ingress when the A record points to the master node. I opened a shell in the ingress controller pods on all the nodes and tried to curl the ClusterIP. I get no response from the pod on the master node, but the pods on the worker nodes give me the response I want. I can also access my application when the A record points to any of the worker nodes.
I have tried disabling the firewalld service, but the result is the same.
Did I miss anything while configuring?
Note: I spun up my cluster with kubeadm.
Thank you.
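For reference, the check I ran looked roughly like this (the namespace, pod, and service names are placeholders for my actual ones):
# find the ingress controller pods and the nodes they run on
kubectl get pods -n kube-system -o wide | grep traefik
# open a shell in the pod running on the master node
kubectl exec -n kube-system -it <traefik-pod-on-master> -- sh
# from inside: times out on the master-node pod, succeeds on worker-node pods
curl http://<app-cluster-ip>:80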

Related

Can't access Kubernetes service which has externalTrafficPolicy set to "Local"

I'm following this guide to preserve the source IP for a service of type NodePort.
kubectl create deployment source-ip-app --image=k8s.gcr.io/echoserver:1.4
kubectl expose deployment source-ip-app --name=nodeport --port=80 --target-port=8080 --type=NodePort
At this point my service is accessible externally with nodeip:nodeport
When I change the service traffic policy,
kubectl patch svc nodeport -p '{"spec":{"externalTrafficPolicy":"Local"}}'
my service is not accessible.
I found a similar issue, but the solution there was not very helpful and I did not understand it. I also saw some GitHub threads saying it has something to do with the hostname override in kube-proxy, but I'm not clear on that either.
I'm using Kubernetes version v1.15.3. kube-proxy is running in iptables mode. I have a single master node and a few worker nodes.
I'm facing the same issue in my minikube too.
Any help would be greatly appreciated.
From the docs here:
If there are no local endpoints, packets sent to the node are dropped
So you need to use the correct node IP of the Kubernetes node to access the service. Here the correct node IP is the IP of the node where the pod is scheduled.
This is not necessary if you can make sure every node (master and workers) has a replica of the pod.
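A quick way to verify this (the app label below is the one kubectl create deployment assigns; the service name is from the question):
# find which node the pod landed on, and that node's IP
kubectl get pods -l app=source-ip-app -o wide
NODE_IP=$(kubectl get pod -l app=source-ip-app -o jsonpath='{.items[0].status.hostIP}')
NODE_PORT=$(kubectl get svc nodeport -o jsonpath='{.spec.ports[0].nodePort}')
# with externalTrafficPolicy: Local, only this node's IP will answer
curl http://$NODE_IP:$NODE_PORT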

Kubernetes nginx ingress controller returns 504 error

Our on-premise Kubernetes/Kubespray cluster has suddenly stopped routing traffic between the nginx-ingress and NodePort services. All external requests to the ingress endpoint return a "504 - gateway timeout" error.
How do I diagnose what has broken?
I've confirmed that the containers/pods are running, the node application has started, and if I exec into the pod I can run a local curl command and get a response from the app.
I've checked the logs on the ingress pods: traffic is arriving and nginx is trying to forward it on to the service endpoint/node port, but it is reporting an error.
I've also tried to curl the node directly via the node port, but I get no response.
I've looked at the IPVS configuration and the settings look valid (e.g. there are rules for the node to forward traffic arriving on the node port to the service endpoint address/port).
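The checks above map roughly to commands like these (the namespace, label, and service names vary per install; adjust to yours):
# ingress controller logs
kubectl -n ingress-nginx logs -l app.kubernetes.io/name=ingress-nginx --tail=100
# confirm the backing service actually has endpoints
kubectl get endpoints <app-service>
# hit the node port directly
curl -v http://<node-ip>:<node-port>/
# inspect the IPVS rules on the node
sudo ipvsadm -Ln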
We couldn't resolve this issue and, in the end, the only workaround was to uninstall and reinstall the cluster.
I was getting this because the nginx ingress controller pod was running out of memory; I just increased the memory limit for the pod and it worked.
I was facing a similar issue, and the simple fix was to increase the values of K8S_CPU_LIMIT and K8S_MEMORY_LIMIT for the application pods running on the cluster.
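If you just need to raise the limits on the controller itself, something like this should do it (the namespace, deployment name, and values are assumptions; check yours with kubectl get deploy -A):
kubectl -n ingress-nginx set resources deployment ingress-nginx-controller --limits=cpu=500m,memory=512Mi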

I want to enable pod-to-pod communication across namespaces in the same cluster

I have a Kubernetes cluster with 1 master and 1 worker. I have a Postgres DB service running in the namespace "PG", and another service, config-server, running in the default namespace. I am unable to access Postgres from the config-server service in the default namespace.
Kubernetes version: 1.13
Overlay network: Calico
According to the articles I have read, if pods don't have any network policy defined, then a pod can reach a pod in any other namespace without restriction. I need help in how to achieve this.
You should be able to reach any pod from another pod in the same cluster.
One quick way to check is to resolve the service DNS of the pod from another pod.
Get into the config-server pod and try to run the command below:
ping <postgres-service-name>.<namespace>.svc.cluster.local
The name should resolve; note that a ClusterIP is virtual, so ICMP replies may not come back even when everything is healthy, and a TCP check such as nc -zv <postgres-service-name>.<namespace>.svc.cluster.local 5432 is more reliable.
I was using a Kubernetes cluster with Calico as the overlay network. If no network policy is created, Kubernetes CoreDNS will resolve the service by default, but we have to add the namespace suffix (i.e. use <service>.<namespace>) in the application or in the env variable where you are calling the service in the other namespace. That allows cross-namespace communication.
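A minimal check, assuming the Postgres service is named postgres and lives in a namespace called pg (adjust both names to your setup):
# resolve the namespace-qualified name from a throwaway pod in the default namespace
kubectl run dns-test --rm -it --restart=Never --image=busybox -- nslookup postgres.pg.svc.cluster.local
# in config-server, reference the service the same way, e.g. in a connection string:
# jdbc:postgresql://postgres.pg.svc.cluster.local:5432/<dbname>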

How to access pods without services in Kubernetes

I was wondering how pods are accessed when no service is defined for that specific pod. If it's through the environment variables, how does the cluster retrieve these?
Also, when services are defined, where on the master node are they stored?
Kind regards,
Charles
If you define a service for your app, you can access it from outside the cluster using that service.
Services are of several types, including NodePort, where you can access that port on any cluster node and you will reach the service regardless of the actual location of the pod.
You can access the endpoints, or the actual pod ports, from inside the cluster as well, but not from outside.
All of the above uses Kubernetes service discovery.
There are two types of service discovery, though:
Internal service discovery
External service discovery
You cannot "access" a pods container port(s) without a service. Services are objects that define the desired state of an ultimate set of iptable rule(s).
Also, services, like all other objects, are stored in etcd and maintained through your master(s).
You could however manually create an iptable rule forwarding traffic to the local container port that docker has exposed.
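A rough sketch of such a rule, assuming the container listens on port 80 at pod IP 10.244.1.5 and you want to expose it on the node's port 30080 (all values are illustrative; a service is still the right tool for this):
# on the node, DNAT incoming traffic on 30080 to the pod
sudo iptables -t nat -A PREROUTING -p tcp --dport 30080 -j DNAT --to-destination 10.244.1.5:80
sudo iptables -A FORWARD -p tcp -d 10.244.1.5 --dport 80 -j ACCEPT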
Hope this helps! If you still have any questions, drop them here.
Just for debugging purposes, you can forward a port from your machine to one in the pod:
kubectl port-forward POD_NAME HOST_PORT:POD_PORT
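For example (the pod name and ports are placeholders), forwarding local port 8080 to port 80 in the pod and then hitting it locally:
kubectl port-forward my-nginx-5b56ccd65f-abcde 8080:80
curl http://localhost:8080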
If you have to access it from anywhere, you should use services, but you need to have a deployment created.
Create deployment
kubectl create -f https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/service/networking/run-my-nginx.yaml
Expose the deployment with a NodePort service
kubectl expose deployment my-nginx --type=NodePort --name=nginx-service
Then list the services and get the port of the service
kubectl get services | grep nginx-service
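The NodePort appears in the PORT(S) column (something like 80:3XXXX/TCP); you can then reach the app on that port of any node, for example:
curl http://<any-node-ip>:<node-port>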
All cluster data is stored in etcd, which is a distributed key-value store. If etcd goes down, the cluster becomes unstable and no new pods can come up.
Kubernetes has a way to access any pod within the cluster. A service is a logical way to access a set of pods bound by a selector. An individual pod can still be accessed irrespective of the service, as the example below shows. Furthermore, a service can be created to access the pods from outside the cluster (a NodePort service).
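A minimal sketch of reaching a pod directly by its IP from inside the cluster (the pod name and port are placeholders):
POD_IP=$(kubectl get pod my-nginx-5b56ccd65f-abcde -o jsonpath='{.status.podIP}')
kubectl run pod-ip-test --rm -it --restart=Never --image=busybox -- wget -qO- http://$POD_IP:80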

Accessing services without pods (without Istio Envoy) from outside the cluster through Istio ingress rules in K8s

Steps:
1. I have created 2 namespaces (ns1 and ns2).
2. In ns1, I have deployed a service with the Envoy proxy sidecar enabled (istioctl kube-inject service.yaml).
3. In ns1, I have created Istio ingress rules pointing to the service, and I am able to access it from outside the cluster.
4. In ns2, I haven't deployed any service because it is my shared namespace; instead, I have created a headless service (ExternalName) pointing to the service deployed in the ns1 namespace, roughly as shown below.
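For reference, the ExternalName service in ns2 looked roughly like this (the service names are placeholders for my actual ones; note that an ExternalName service has no selector and just returns a DNS CNAME):
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: shared-svc
  namespace: ns2
spec:
  type: ExternalName
  externalName: my-service.ns1.svc.cluster.local
EOF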
The problem is that I am not able to access the service exposed through ns2 from outside the cluster; it throws a 404 "service not found".
Did I miss anything here? Or is there any other solution to address this?
Thanks,
Nikhil