Connection to the Kubernetes container refused from endpoint - kubernetes

I created a sample cluster on GCP for learning and deployed a Hello World container pulled from Docker Hub. I can see the container running in the pod, but when I open the endpoint in Chrome it says "connection refused". My internet connection is working without any issues.
Any suggestions, please?
(Screenshots: container running; endpoint refusing the connection; endpoints listed in the cluster)
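A common cause of "connection refused" on a GKE endpoint is a mismatch between the Service's target port and the port the container actually listens on, or the deployment never being exposed at all. Below is a hedged troubleshooting sketch; `hello-world` and the port numbers are placeholders, not names from the question.

```shell
# Check that the Service exists and has an EXTERNAL-IP assigned
# (service name "hello-world" is a placeholder).
kubectl get svc hello-world -o wide

# Compare TargetPort here with the port the container listens on;
# a mismatch produces exactly a browser "connection refused".
kubectl describe svc hello-world

# If no Service exists yet, expose the deployment externally
# (ports 80/8080 are illustrative only).
kubectl expose deployment hello-world --type=LoadBalancer \
  --port=80 --target-port=8080
```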

Related

Connectivity issues to EKS Fargate pod

I'm running an EKS cluster with several regular EC2 nodes and a single pod running on Fargate (karpenter). My problem is that I can't connect from any of the EC2 nodes to the Fargate pod. Here's what I've tried:
I started an Ubuntu pod on one of the EC2 nodes and ran nslookup against the service on Fargate; it resolves properly:
root@debug:/# nslookup karpenter.karpenter.svc.cluster.local
Server: 172.20.0.10
Address: 172.20.0.10#53
Name: karpenter.karpenter.svc.cluster.local
Address: 172.20.73.25
Sent a curl request to the Fargate service:
root@debug:/# curl -I http://karpenter.karpenter.svc.cluster.local:8080/metrics
curl: (28) Failed to connect to karpenter.karpenter.svc.cluster.local port 8080 after 129842 ms: Connection timed out
I've set up port forwarding directly to the karpenter service, and I'm able to connect just fine.
So it seems the problem is network connectivity from EC2 to Fargate. Any ideas on how else to troubleshoot this?
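One likely explanation, given that DNS resolves but TCP times out: EKS Fargate pods get their ENIs attached to the cluster security group, while self-managed EC2 nodes often use a different security group, so node-to-pod traffic is silently dropped. A hedged sketch of checking and fixing this with the AWS CLI follows; the cluster name, node security group ID, and port are assumptions to substitute with your own values.

```shell
# Look up the cluster security group that Fargate pod ENIs use
# (cluster name "my-cluster" is a placeholder).
CLUSTER_SG=$(aws eks describe-cluster --name my-cluster \
  --query 'cluster.resourcesVpcConfig.clusterSecurityGroupId' \
  --output text)

# Security group attached to the EC2 worker nodes (hypothetical ID).
NODE_SG=sg-0123456789abcdef0

# Allow the node security group to reach port 8080 on Fargate pods.
aws ec2 authorize-security-group-ingress \
  --group-id "$CLUSTER_SG" \
  --protocol tcp --port 8080 \
  --source-group "$NODE_SG"
```

If the rule already exists, compare the two groups' ingress rules with `aws ec2 describe-security-groups` before changing anything.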

Helm: how set Kubernetes cluster Endpoint

I have two containers:
one hosting the cluster (minikube)
one where the deployment is triggered (with helm)
When running helm install I get:
Error: INSTALLATION FAILED: Kubernetes cluster unreachable: Get "http://localhost:8080/version": dial tcp 127.0.0.1:8080: connect: connection refused
This is clear, because my cluster is running on a different host. How/where can I set the Kubernetes cluster's IP address? When I run helm install, my app should be deployed on the remote cluster.
It can be done with
helm --kube-context=<context-name>
The steps to create the context are described here.
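The answer above can be sketched end to end: create a kubeconfig context pointing at the remote cluster, then pass it to Helm. All names, the server address, and certificate paths below are placeholders for illustration, not values from the question.

```shell
# Register the remote cluster in kubeconfig (address and CA path are placeholders).
kubectl config set-cluster remote-minikube \
  --server=https://192.168.49.2:8443 \
  --certificate-authority=/certs/ca.crt

# Register credentials for that cluster (paths are placeholders).
kubectl config set-credentials remote-admin \
  --client-certificate=/certs/client.crt \
  --client-key=/certs/client.key

# Tie cluster and user together in a named context.
kubectl config set-context remote --cluster=remote-minikube --user=remote-admin

# Deploy the chart against the remote cluster instead of localhost.
helm install myapp ./chart --kube-context remote
```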

Cannot get nodes using kubectl get nodes with gcloud shell

My GCP GKE cluster is connected to Rancher (v2.3.3), but it shows as unavailable with the message:
Failed to communicate with API server: Get https://X.x.X.x:443/api/v1/namespaces/kube-system?timeout=30s: waiting for cluster agent to connect
When I try to connect to the GKE cluster via Cloud Shell, I cannot retrieve any info with the command kubectl get nodes.
Any idea why this is happening? All workloads and services are running and green; only some Ingress resources show warnings, with Unhealthy status from the backend services. But first I need to know how to troubleshoot the connectivity problem to the cluster with gcloud or Rancher.
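Before debugging Rancher, it is worth confirming that Cloud Shell itself has valid credentials for the cluster: a stale or missing kubeconfig entry produces exactly this symptom. A hedged sketch, with cluster name, zone, and project as placeholders:

```shell
# Refresh the kubeconfig entry for the cluster in Cloud Shell
# (all three values below are placeholders).
gcloud container clusters get-credentials my-cluster \
  --zone us-central1-a --project my-project

# Verify connectivity; -v=6 prints the API requests kubectl makes,
# which shows exactly which endpoint is failing.
kubectl get nodes -v=6
```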

How do we debug networking issues within istio pods?

I am working on setting up Istio in my Kubernetes cluster.
I downloaded istio-1.4.2, installed the demo profile, and did manual sidecar injection.
But when I check the sidecar pod logs, I get the error below.
2019-12-26T08:54:17.694727Z error k8s.io/client-go#v11.0.1-0.20190409021438-1a26190bd76a+incompatible/tools/cache/reflector.go:98: Failed to list *v1beta1.MutatingWebhookConfiguration: Get https://10.96.0.1:443/apis/admissionregistration.k8s.io/v1beta1/mutatingwebhookconfigurations?fieldSelector=metadata.name%3Distio-sidecar-injector&limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused
It seems to be a networking issue, but could you please let me know what it is trying to do exactly?
Is there a way to get more detail than just 'connection refused'?
How do we verify networking issues between Istio pods? It seems I cannot run wget, curl, tcpdump, netstat, etc. within the Istio sidecar pod to debug further.
All the pods in the kube-system namespace are working fine.
Check which port your API server is serving HTTPS traffic on (controlled by the flag --secure-port int, default: 6443). It may be 6443 instead of 443.
Check the value of server in your kubeconfig, and whether you are able to connect to your Kubernetes cluster via kubectl using that kubeconfig.
Another thing to check is whether you have a network policy attached to the namespace that blocks egress traffic.
You could also use an ephemeral container to debug issues with the sidecar:
https://kubernetes.io/docs/concepts/workloads/pods/ephemeral-containers/
https://github.com/aylei/kubectl-debug
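The ephemeral-container approach linked above addresses the "cannot run curl/tcpdump in the sidecar" complaint directly: `kubectl debug` attaches a throwaway container with its own tooling into the running pod's namespaces. A sketch, assuming a recent kubectl/cluster with ephemeral containers enabled; the pod name, namespace, and debug image are placeholders.

```shell
# Attach an ephemeral container sharing the sidecar's process and
# network namespaces (pod/namespace names are placeholders;
# nicolaka/netshoot is one commonly used tooling image).
kubectl debug -it mypod -n mynamespace \
  --image=nicolaka/netshoot \
  --target=istio-proxy -- sh

# Inside the debug shell, curl/tcpdump/netstat are available, e.g.
# probing the API server address from the error message:
#   curl -vk https://10.96.0.1:443/version
```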

Unable to connect to the server: dial tcp accounts.google.com :443: getsockopt: operation timed out

I'm trying to get the pod list from a gcloud project.
I created the project in GCP using a different laptop.
Now I'm on a different machine, but logged into the same GCP account and using the same project.
When I run the command kubectl get pods I get the below error.
Unable to connect to the server: dial tcp a.b.c.d:443: getsockopt: operation timed out
I tried to add the argument --verbose, but that doesn't seem to be valid.
How can I proceed in resolving this error?
gcloud container clusters get-credentials my-cluster-name will log you into your cluster locally.
From the docs, it "updates a kubeconfig file with appropriate credentials and endpoint information to point kubectl at a specific cluster in Google Kubernetes Engine." - src
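Putting the answer together for the new machine: re-fetch credentials, then verify. The zone is an assumption (use --region for regional clusters), and kubectl's real verbosity flag is -v=<level> rather than --verbose.

```shell
# Re-create the kubeconfig entry on the new machine
# (zone is a placeholder; use --region for regional clusters).
gcloud container clusters get-credentials my-cluster-name \
  --zone us-central1-a

# Verify; -v=6 is kubectl's verbosity flag (there is no --verbose)
# and shows each API request and response code.
kubectl get pods -v=6
```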