I am trying to connect to a Kubernetes service from a pod.
I can list the service with kubectl get svc, and I can see that its ClusterIP and port are there, but when the pod tries to connect to it, I get the error
dial tcp 10.0.0.153:xxxx: i/o timeout.
Any idea how to debug that? Or what could be the reason?
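For reference, a rough sketch of the first checks that usually narrow this down (the service name is a placeholder):
kubectl describe svc <service-name>    # confirm the port/targetPort mapping and the label selector
kubectl get endpoints <service-name>   # check whether the service actually has ready endpoints behind it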
I've developed a Python script that uses the Python kubernetes client to harvest Pods' internal IPs.
But when I try to make an HTTP request to these IPs from another pod, I get a Connection refused error.
I spin up a temporary curl container:
kubectl run curl --image=radial/busyboxplus:curl -it --rm
Then, using the internal IP of one of the pods, I try to make a GET request:
curl http://10.133.0.2/stats
and the response is:
curl: (7) Failed to connect to 10.133.0.2 port 80: Connection refused
Both pods are in the same default namespace and use the same default ServiceAccount.
I know that I can call the Pods through the ClusterIP service by which they're load-balanced, but that way I will only reach a single Pod at random (depending on which one the service forwards the call to), when I have multiple replicas of the same Deployment.
I need to be able to call each Pod of a multi-replica Deployment separately. That's why I'm going for the internal IPs.
I guess you missed the port number here.
It should be like this:
curl POD_IP:PORT/stats
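Assuming the containers listen on a non-default port, you can look up both the Pod IPs and the declared container port with kubectl (the pod name is a placeholder):
kubectl get pods -o wide    # the IP column shows each Pod's internal IP
kubectl get pod <pod-name> -o jsonpath='{.spec.containers[*].ports[*].containerPort}'    # the containerPort(s), if declared in the pod spec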
I am deploying the Kubernetes UI using this command:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.2.0/aio/deploy/recommended.yaml
and it responds with "Unable to connect to the server: dial tcp 185.199.110.133:443: i/o timeout".
I am behind a proxy; how can I fix it?
None of the services that you deployed via the supplied URL have a type specified. This means they will use the default service type, which is ClusterIP.
Services of type ClusterIP are only accessible from inside your Kubernetes cluster.
If you want the Dashboard to be accessible from outside your cluster, you will need a Service of type NodePort. A NodePort Service assigns a random high-numbered port on all your nodes, and your application, in this case the k8s dashboard, becomes accessible via ${ip-of-any-node}:${assigned-nodeport}.
For more information, please take a look at the official k8s documentation.
If your cluster is behind a proxy, also make sure that you can reach your cluster nodes' external IPs from wherever you are trying to send the request.
To find out which port number has been assigned to your NodePort service, use kubectl describe service ${servicename} or kubectl get service ${servicename} -o yaml.
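For example, one quick way to switch the dashboard Service to NodePort and read back the assigned port (assuming the default names from the recommended manifest, i.e. the kubernetes-dashboard Service in the kubernetes-dashboard namespace):
kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'
kubectl -n kubernetes-dashboard get svc kubernetes-dashboard    # the PORT(S) column shows the assigned node port, e.g. 443:3xxxx/TCP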
I am working on setting up istio in my kubernetes cluster.
I downloaded istio-1.4.2, installed the demo profile, and did manual sidecar injection.
But when I check sidecar pod logs, I am getting the below error.
2019-12-26T08:54:17.694727Z error k8s.io/client-go#v11.0.1-0.20190409021438-1a26190bd76a+incompatible/tools/cache/reflector.go:98: Failed to list *v1beta1.MutatingWebhookConfiguration: Get https://10.96.0.1:443/apis/admissionregistration.k8s.io/v1beta1/mutatingwebhookconfigurations?fieldSelector=metadata.name%3Distio-sidecar-injector&limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused
It seems to be a networking issue, but could you please let me know what exactly it is trying to do?
Is there a way to get more logs than just 'connection refused'?
How do we verify networking issues between Istio pods? It seems I cannot run wget, curl, tcpdump, netstat, etc. within the Istio sidecar pod to debug further.
All the pods in kube-system namespace are working fine.
Check which port your API server is serving HTTPS traffic on (controlled by the --secure-port flag, default 6443). It may be 6443 instead of 443.
Check the value of server in your kubeconfig and whether you are able to connect to your Kubernetes cluster via kubectl using that kubeconfig.
Another thing to check is whether you have a network policy attached to the namespace which blocks egress traffic.
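A rough sketch of commands covering those checks (the namespace is a placeholder):
kubectl get endpoints kubernetes -n default    # the actual API server address:port that the 10.96.0.1:443 service IP forwards to
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'    # the server URL from your kubeconfig
kubectl get networkpolicy -n <your-namespace>    # any policies that could block egress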
You could also use an ephemeral container to debug issues with the sidecar:
https://kubernetes.io/docs/concepts/workloads/pods/ephemeral-containers/
https://github.com/aylei/kubectl-debug
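For example, assuming your cluster has ephemeral containers enabled and a recent enough kubectl, something along these lines attaches a throwaway debug container to the sidecar's pod (the pod name is a placeholder; nicolaka/netshoot is just one commonly used debugging image):
kubectl debug -it <pod-name> --image=nicolaka/netshoot --target=istio-proxy
# netshoot ships curl, tcpdump, netstat, etc., so from inside it you can test, e.g.:
curl -vk https://10.96.0.1:443/version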
I am running a Kubernetes cluster using kubeadm and VirtualBox. To manage traffic with the outside world, I have an nginx ingress service running as a NodePort with an external IP.
$ kubectl get svc --all-namespaces
NAMESPACE       NAME            TYPE       CLUSTER-IP      EXTERNAL-IP      PORT(S)                                   AGE
ingress-nginx   nginx-ingress   NodePort   10.97.117.136   192.168.290.89   80:31738/TCP,443:32320/TCP,22:31488/TCP   26m
When I run
curl 10.97.117.136:80
from inside the cluster, I get
default backend - 404
However, when I run
curl 192.168.290.89:31738
from outside the cluster, I get
curl: (7) Failed to connect to 192.168.290.89 port 31738: Connection timed out
Does anyone understand this behavior and know how to remedy it?
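In case it helps to narrow this down, two generic checks (the node address is a placeholder):
kubectl get nodes -o wide    # the INTERNAL-IP/EXTERNAL-IP columns show which addresses the NodePort is exposed on
curl <node-ip>:31738         # try the NodePort against a node address, from the machine running VirtualBox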
Before, I could run the command kubectl logs <pod> without issue for many days/versions. However, after I pushed another image and deployed it recently, I got the error below:
Error from server: Get https://aks-agentpool-xxx-0:10250/containerLogs/default/<-pod->/<-service->: dial tcp 10.240.0.4:10250: i/o timeout
I tried to rebuild and redeploy, but it still failed.
Below was the Node info for reference:
Not sure if your issue is caused by the problem described in this troubleshooting guide, but maybe you can give it a try. It says:
Make sure that the default network security group isn't modified and that both port 22 and 9000 are open for connection to the API server.
Check whether the tunnelfront pod is running in the kube-system namespace using the kubectl get pods --namespace kube-system command. If it isn't, force deletion of the pod and it will restart.
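A minimal sketch of those two steps as commands (the pod name is a placeholder):
kubectl get pods --namespace kube-system | grep tunnelfront    # is the tunnelfront pod Running?
kubectl delete pod <tunnelfront-pod-name> --namespace kube-system --force --grace-period=0    # force delete; it will be recreated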