I am developing microservices using Spring Boot and deploying them with Kubernetes. I have two services, Order and Customer.
The Order service calls the Customer service over HTTP to fetch some data, going through the Kubernetes Service. I tried using both the name of the Customer Service and its IP, but the call throws a timeout exception.
Following is the relevant piece of code.
Customer Service:
I tried the call using both the IP address and the service name, with something like the code below, but it does not work.
It throws the following error. The screenshot shows the error with the service name, but I get the same error with the IP address as well.
It's a single-node Minikube cluster.
What am I doing wrong here?
Please use the link below to troubleshoot Kubernetes Services:
https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/
Use the commands below to check whether you can curl the Kubernetes Service customer-service on port 8080 from a pod inside the same namespace:
kubectl run -it --rm=true --restart=Never curl --image=radial/busyboxplus -- curl http://customer-service:8080
kubectl run -it --rm=true --restart=Never curl --image=radial/busyboxplus -- curl http://10.104.255.198:8080
Also, the order-service Pods should call the Customer Service Pods using customer-service, not through an Ingress. An Ingress is for sending requests to the Customer Service Pods from outside the Kubernetes cluster.
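For reference, here is a minimal sketch of what the customer-service Service could look like; the selector label app: customer and the targetPort 8080 are assumptions and must match your actual Deployment's pod labels and container port:

apiVersion: v1
kind: Service
metadata:
  name: customer-service
spec:
  selector:
    app: customer      # assumption: must match the labels on the Customer pods
  ports:
  - port: 8080         # the port callers use: http://customer-service:8080
    targetPort: 8080   # assumption: the containerPort of the Customer pods

A selector that matches no pods, or a targetPort that differs from the containerPort, is a common cause of exactly this kind of timeout, so compare this against the output of kubectl describe service customer-service.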
You can use the following steps to check (a sketch of each check follows the list):
1. Use the telnet command to check access to the port the Pod maps;
2. Check whether your Service rules are configured correctly; here you can debug by temporarily creating a NodePort-type Service;
3. Check the iptables or ipvs forwarding rules in your underlying communication layer.
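A sketch of those three checks, reusing the service name and ClusterIP from the question above (adjust names and ports to your setup):

# 1. Check access to the port the Pod maps (run from inside the cluster, e.g. the busyboxplus pod above):
telnet 10.104.255.198 8080
# 2. Rule out Service misconfiguration by temporarily switching to type NodePort:
kubectl patch svc customer-service -p '{"spec":{"type":"NodePort"}}'
kubectl get svc customer-service    # note the assigned nodePort, then try NODE_IP:NODE_PORT from outside
# 3. Inspect the forwarding rules programmed by kube-proxy on a node:
iptables-save | grep customer-service    # iptables mode
ipvsadm -Ln                              # ipvs mode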
I've developed a Python script that uses the Python kubernetes-client to harvest Pods' internal IPs.
But when I try to make an HTTP request to these IPs from another pod, I get a Connection refused error.
I spin up a temporary curl container:
kubectl run curl --image=radial/busyboxplus:curl -it --rm
And having the internal IP of one of the pods, I try to make a GET request:
curl http://10.133.0.2/stats
and the response is:
curl: (7) Failed to connect to 10.133.0.2 port 80: Connection refused
Both pods are in the same default namespace and use the same default ServiceAccount.
I know that I can call the Pods through the ClusterIP Service that load-balances them, but that way I only reach a single Pod at random (whichever one the Service forwards the call to), while I have multiple replicas of the same Deployment.
I need to be able to call each Pod of a multi-replica Deployment separately. That's why I'm going for the internal IPs.
I guess you missed the port number here. It should look like this:
curl POD_IP:PORT/stats
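To find the right values for that call, something like the following helps (POD_NAME is a placeholder):

kubectl get pods -o wide    # the IP column shows each pod's internal IP
kubectl get pod POD_NAME -o jsonpath='{.spec.containers[*].ports[*].containerPort}'    # declared container port(s)

If the container listens on, say, 8080 rather than 80, the request becomes curl http://10.133.0.2:8080/stats, which would explain the Connection refused on port 80 seen above.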
I am deploying the Kubernetes UI using this command:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.2.0/aio/deploy/recommended.yaml
And it responds with "Unable to connect to the server: dial tcp 185.199.110.133:443: i/o timeout".
I am behind a proxy; how can I fix it?
None of the Services deployed via the supplied URL have a type specified, which means they use the default Service type, ClusterIP.
Services of type ClusterIP are only accessible from inside your Kubernetes cluster.
If you want the Dashboard to be accessible from outside your cluster, you will need a Service of type NodePort. A NodePort Service assigns a random high-numbered port on all your nodes, on which your application, in this case the k8s dashboard, will be accessible via ${ip-of-any-node}:${assigned-nodeport}.
For more information, please take a look at the official k8s documentation.
If your cluster is behind a proxy, also make sure that you can reach your cluster nodes' external IPs from wherever you are sending the request.
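As for the proxy itself: the timeout to 185.199.110.133:443 means kubectl could not fetch the manifest from raw.githubusercontent.com. One sketch of a workaround, with proxy.example.com:3128 as a placeholder for your actual proxy address, is to export the standard proxy variables before running kubectl:

export HTTPS_PROXY=http://proxy.example.com:3128    # placeholder: your proxy address
export NO_PROXY=localhost,127.0.0.1                 # also add your API server address so kubectl does not proxy cluster traffic
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.2.0/aio/deploy/recommended.yaml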
To find out which port number has been assigned to your NodePort Service, use kubectl describe service ${servicename} or kubectl get service ${servicename} -o yaml.
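Putting it together for the dashboard, a sketch assuming the v2.2.0 manifest's defaults (namespace kubernetes-dashboard and Service kubernetes-dashboard):

kubectl -n kubernetes-dashboard patch service kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'
kubectl -n kubernetes-dashboard get service kubernetes-dashboard    # PORT(S) shows 443:<assigned-nodeport>/TCP

The dashboard should then answer at https://${ip-of-any-node}:${assigned-nodeport}, since it serves HTTPS on port 443.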
I'm following this guide to preserve the source IP for a Service of type NodePort.
kubectl create deployment source-ip-app --image=k8s.gcr.io/echoserver:1.4
kubectl expose deployment source-ip-app --name=clusterip --port=80 --target-port=8080
At this point my service is accessible externally at nodeip:nodeport.
When I change the service traffic policy,
kubectl patch svc nodeport -p '{"spec":{"externalTrafficPolicy":"Local"}}'
my service is not accessible.
I found a similar issue, but the solution is not very helpful, or at least not understandable to me. I saw some GitHub threads saying it has something to do with the hostname override in kube-proxy, but I'm not clear on that either.
I'm using Kubernetes version v1.15.3. kube-proxy is running in iptables mode. I have a single master node and a few worker nodes.
I'm facing the same issue in my Minikube too.
Any help would be greatly appreciated.
From the docs here:
If there are no local endpoints, packets sent to the node are dropped
So you need to use the correct node IP of the Kubernetes node to access the service. Here, the correct node IP is the IP of the node where the pod is scheduled.
This is not necessary if you can make sure that every node (master and workers) has a replica of the pod.
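A sketch of finding the right node, assuming the default label kubectl create deployment applies (app=source-ip-app) and the Service name nodeport from the patch command above:

kubectl get pods -l app=source-ip-app -o wide    # the NODE column shows where each replica runs
kubectl get svc nodeport -o jsonpath='{.spec.ports[0].nodePort}'    # the assigned nodePort
curl http://NODE_IP:NODE_PORT/    # NODE_IP must be the IP of a node from the first command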
kubectl proxy and kubectl port-forwarding look similar and confusing to me, what are their main differences and use cases?
As mentioned in "How kubectl port-forward works?"
kubectl port-forward forwards connections to a local port to a port on a pod.
Compared to kubectl proxy, kubectl port-forward is more generic as it can forward TCP traffic while kubectl proxy can only forward HTTP traffic.
As an example, see "Kubernetes port forwarding simple like never before" from Alex Barashkov:
Port forwarding is mostly used for getting access to internal cluster resources and for debugging.
How does it work?
Generally speaking, using port forwarding you can get any service launched in your cluster onto your 'localhost'.
For example, if you have Redis installed in the cluster on port 6379, then by using a command like this:
kubectl port-forward redis-master-765d459796-258hz 7000:6379
you could forward Redis from the cluster to localhost:7000, access it locally and do whatever you want to do with it.
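For example, with the forward above running, you could talk to Redis locally (this assumes redis-cli is installed on your machine):

redis-cli -h 127.0.0.1 -p 7000 ping    # should return PONG through the tunnel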
For a limited HTTP access, see kubectl proxy, and, as an example, "On Securing the Kubernetes Dashboard" from Joe Beda:
The easiest and most common way to access the cluster is through kubectl proxy. This creates a local web server that securely proxies data to the dashboard through the Kubernetes API server.
As shown in "A Step-By-Step Guide To Install & Use Kubernetes Dashboard" from Awanish:
kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
Accessing the Dashboard using kubectl:
kubectl proxy
It will run a proxy server between your machine and the Kubernetes API server.
Now, to view the dashboard, navigate to the following address in a browser on your master VM:
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/