Minikube pod cannot access localhost service

I am running a Minikube deployment where a pod (gitlab-runner) is trying to execute a POST against a local GitLab API on my machine, but it is getting the following error:
WARNING: Checking for jobs... failed runner=3bS1tafj status=couldn't execute POST against http://127.0.0.1/api/v4/jobs/request: Post "http://127.0.0.1/api/v4/jobs/request": dial tcp 127.0.0.1:80: connect: connection refused

In Kubernetes, containers within the same pod can communicate with each other over localhost; localhost refers to the network namespace of the pod itself.
You can't use localhost to reach the "outside" world (including your host machine) from a pod in this way.
What can you do?
You can add hostNetwork: true to your pod spec.
hostNetwork controls whether the pod may use the node's network namespace.
When a pod is configured with hostNetwork: true, the applications running in that pod can directly see the network interfaces of the host machine where the pod was started, so 127.0.0.1 refers to the node itself.
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  hostNetwork: true
  containers:
  - name: nginx
    image: nginx

Related

Helm: how to set the Kubernetes cluster endpoint

I have two containers:
one hosting the cluster (minikube)
one where the deployment is triggered (with helm)
When running helm install I get:
Error: INSTALLATION FAILED: Kubernetes cluster unreachable: Get "http://localhost:8080/version": dial tcp 127.0.0.1:8080: connect: connection refused
This is expected, because my cluster is running on a different host. How/where can I set the Kubernetes cluster endpoint so that running helm install deploys my app on the remote cluster?
It can be done with
helm --kube-context=<context-name>
The steps to create the context are described here.
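As a sketch, the context can be defined in the kubeconfig used by the container that runs helm. All names, addresses, and paths below are placeholders (assumptions), not values from the question:

```yaml
# Minimal kubeconfig sketch; server address, names, and credential paths
# are placeholders and must be replaced with your cluster's values.
apiVersion: v1
kind: Config
clusters:
- name: minikube-remote                    # placeholder cluster name
  cluster:
    server: https://192.168.49.2:8443      # address of the host running minikube (placeholder)
    certificate-authority: /path/to/ca.crt
users:
- name: minikube-user                      # placeholder user name
  user:
    client-certificate: /path/to/client.crt
    client-key: /path/to/client.key
contexts:
- name: minikube-remote                    # placeholder context name
  context:
    cluster: minikube-remote
    user: minikube-user
current-context: minikube-remote
```

With such a context in place, helm install --kube-context minikube-remote ... should target the remote cluster instead of localhost:8080.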

Kubernetes pod that runs kubectl get deploy returns connection error

I have a pod that runs kubectl get deploy <Deployment Name>. However, the pod doesn't come up successfully, and returns the message below.
The connection to the server 172.20.0.1:443 was refused - did you specify the right host or port?
Which permissions do I have to grant to the service account?
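Note that "connection refused" usually points at a connectivity problem rather than permissions; a missing permission would normally produce a "Forbidden" error instead. If it does turn out to be permissions, a minimal RBAC sketch for running kubectl get deploy from a pod might look like this (all names and the namespace are placeholders, not taken from the question):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-reader          # placeholder name
  namespace: default               # placeholder namespace
rules:
- apiGroups: ["apps"]              # Deployments live in the "apps" API group
  resources: ["deployments"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-deployments           # placeholder name
  namespace: default
subjects:
- kind: ServiceAccount
  name: my-serviceaccount          # the service account the pod runs as (placeholder)
  namespace: default
roleRef:
  kind: Role
  name: deployment-reader
  apiGroup: rbac.authorization.k8s.io
```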

How do we debug networking issues within istio pods?

I am working on setting up Istio in my Kubernetes cluster.
I downloaded istio-1.4.2, installed the demo profile, and did manual sidecar injection.
But when I check the sidecar pod logs, I am getting the error below.
2019-12-26T08:54:17.694727Z error k8s.io/client-go#v11.0.1-0.20190409021438-1a26190bd76a+incompatible/tools/cache/reflector.go:98: Failed to list *v1beta1.MutatingWebhookConfiguration: Get https://10.96.0.1:443/apis/admissionregistration.k8s.io/v1beta1/mutatingwebhookconfigurations?fieldSelector=metadata.name%3Distio-sidecar-injector&limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused
It seems to be a networking issue, but could you please let me know what it is trying to do exactly?
Is there a way to get more logs than just 'connection refused'?
How do we verify networking issues between Istio pods? It seems I cannot run wget, curl, tcpdump, netstat, etc. within the Istio sidecar container to debug further.
All the pods in the kube-system namespace are working fine.
Check what port your API server is serving HTTPS traffic on (controlled by the --secure-port flag, which defaults to 6443). It may be 6443 instead of 443.
Check the value of server in your kubeconfig, and whether you are able to connect to your Kubernetes cluster via kubectl using that kubeconfig.
Another thing to check is whether you have a network policy attached to the namespace which blocks egress traffic.
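For reference, a deny-all egress policy like the sketch below (a hypothetical example, not taken from the cluster in question) would produce exactly this kind of "connection refused" symptom for every pod in its namespace:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress   # hypothetical policy name
  namespace: default          # assumed namespace, for illustration only
spec:
  podSelector: {}             # empty selector matches every pod in the namespace
  policyTypes:
  - Egress                    # Egress listed with no egress rules = all outbound traffic blocked
```

You can list the policies in effect with kubectl get networkpolicy --all-namespaces to see whether one matches the failing pods.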
And you could use an ephemeral container to debug issues with the sidecar:
https://kubernetes.io/docs/concepts/workloads/pods/ephemeral-containers/
https://github.com/aylei/kubectl-debug
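A sketch of attaching an ephemeral debug container with kubectl debug (the pod name is a placeholder, and this requires a cluster and kubectl version with ephemeral-container support):

```shell
# Attach a throwaway busybox container that targets the istio-proxy
# container, so its processes and network are visible from the shell.
kubectl debug -it my-app-pod --image=busybox:1.36 --target=istio-proxy -- sh

# Inside the debug shell, tools missing from the sidecar image are available,
# e.g. (Envoy's admin interface conventionally listens on localhost:15000):
#   wget -qO- http://localhost:15000/listeners
#   netstat -tln
```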

Istio on minikube - envoy missing listener for inbound application port: 9095

I am following this Istio tutorial (part 3). After creating the minikube local registry, I need to run the following command:
kubectl run hellodemo --image=hellodemo:v1 --port=9095 --image-pull-policy=IfNotPresent
This should run the image together with the istio-proxy sidecar in the pod.
When I run kubectl get pods, I get:
NAME READY STATUS RESTARTS AGE
hellodemo-6d49fc6c51-adsa1 1/2 Running 0 1h
When I run kubectl logs hellodemo-6d49fc6c51-adsa1 istio-proxy:
* failed checking application ports. listeners="0.0.0.0:15090","10.110.201.202:16686","10.96.0.1:443","10.104.103.28:15443","10.104.103.28:15031","10.101.128.212:14268","10.104.103.28:15030","10.111.177.172:443","10.104.103.28:443","10.109.4.23:80","10.111.177.172:15443","10.104.103.28:15020","10.104.103.28:15032","10.105.175.151:15011","10.101.128.212:14267","10.96.0.10:53","10.104.103.28:31400","10.104.103.28:15029","10.98.84.0:443","10.99.194.141:443","10.99.175.237:42422","0.0.0.0:9411","0.0.0.0:3000","0.0.0.0:15010","0.0.0.0:15004","0.0.0.0:8060","0.0.0.0:9901","0.0.0.0:20001","0.0.0.0:8080","0.0.0.0:9091","0.0.0.0:80","0.0.0.0:15014","0.0.0.0:9090","172.17.0.6:15020","0.0.0.0:15001"
* envoy missing listener for inbound application port: 9095
Do you know what problem prevents the istio-proxy container from coming up?
I use istio-1.1.4 on minikube.
I was also having the same problem. I followed the documentation, which said to enable SDS in the gateway. However, I enabled it both in the gateway and at the global scope, and this caused the error above.
I removed the following code from my values.yml file and everything worked:
global:
  sds:
    enabled: true

Can't connect to a kubernetes service, error: time out

I am trying to connect to a Kubernetes service from a pod.
I can list the service using kubectl get svc and I can see the ClusterIP and port are there, but when the pod tries to connect to it, I get the error
dial tcp 10.0.0.153:xxxx: i/o timeout.
Any idea how to debug that, or what the reason could be?
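A few generic checks, as a sketch (the service name, namespace, and port below are placeholders, and the commands need a live cluster):

```shell
# 1. Does the service have endpoints? An empty endpoints list means the
#    selector matches no ready pods, which typically shows up as timeouts.
kubectl get endpoints my-service -n my-namespace

# 2. Compare the service's selector with the labels on the backing pods.
kubectl describe svc my-service -n my-namespace
kubectl get pods -n my-namespace --show-labels

# 3. Test connectivity from inside the cluster with a throwaway pod.
kubectl run tmp-debug --rm -it --image=busybox:1.36 --restart=Never -- \
  wget -qO- --timeout=5 http://my-service.my-namespace.svc.cluster.local:8080
```

If the endpoints list is empty, fix the selector/labels or the pods' readiness; if endpoints exist but step 3 still times out, look at network policies or the CNI plugin.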