minikube service url connection refused - kubernetes

I am a beginner to Kubernetes. I am trying to install minikube because I want to run my application in Kubernetes. I am using Ubuntu 16.04.
I have followed the installation instructions provided here
https://kubernetes.io/docs/setup/learning-environment/minikube/#using-minikube-with-an-http-proxy
Issue1:
After installing kubectl, VirtualBox and minikube I ran the command:
minikube start --vm-driver=virtualbox
It is failing with the following error:
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
E0912 17:39:12.486830 17689 start.go:305] Error restarting
cluster: restarting kube-proxy: waiting for kube-proxy to be
up for configmap update: timed out waiting for the condition
But when I check VirtualBox I see the minikube VM running, and when I run kubectl
kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.10
I see the deployments
kubectl get deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
hello-minikube 1 1 1 1 27m
I exposed the hello-minikube deployment as a service:
kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-minikube LoadBalancer 10.102.236.236 <pending> 8080:31825/TCP 15m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 19h
I got the URL for the service:
minikube service hello-minikube --url
http://192.168.99.100:31825
When I try to curl the URL I get the following error:
curl http://192.168.99.100:31825
curl: (7) Failed to connect to 192.168.99.100 port 31825: Connection refused
1) If the minikube cluster failed while starting, how was kubectl able to connect to minikube to create deployments and services?
2) If the cluster is fine, then why am I getting connection refused?
I was looking at this proxy setting (https://kubernetes.io/docs/setup/learning-environment/minikube/#starting-a-cluster). What is my_proxy in this?
Is this the minikube IP and some port?
I have tried this
Error restarting cluster: restarting kube-proxy: waiting for kube-proxy to be up for configmap update: timed out waiting for the condition
but I do not understand how #3 (set proxy) in the solution should be done. Can someone help me with instructions for the proxy?
Adding the command output which was asked in the comments
kubectl get po -n kube-system
NAME READY STATUS RESTARTS AGE
etcd-minikube 1/1 Running 0 4m
kube-addon-manager-minikube 1/1 Running 0 5m
kube-apiserver-minikube 1/1 Running 0 4m
kube-controller-manager-minikube 1/1 Running 0 6m
kube-dns-86f4d74b45-sdj6p 3/3 Running 0 5m
kube-proxy-7ndvl 1/1 Running 0 5m
kube-scheduler-minikube 1/1 Running 0 5m
kubernetes-dashboard-5498ccf677-4x7sr 1/1 Running 0 5m
storage-provisioner 1/1 Running 0 5m

I deleted minikube, removed all files under ~/.minikube and
reinstalled minikube. Now it is working fine. I did not capture the output
before, but I have attached it to the question now that it is working. Can
you tell me what the output of this command shows?
It will be very difficult or even impossible to tell what exactly was wrong with your Minikube Kubernetes cluster now that it has already been removed and set up again.
Basically, there are a few things that you could have done to properly troubleshoot or debug the issue.
Adding the command output which was asked in the comments
The output you posted is actually only part of what @Eduardo Baitello asked you to do. The kubectl get po -n kube-system command simply shows you a list of Pods in the kube-system namespace. In other words, this is the list of system pods forming your Kubernetes cluster and, as you can imagine, the proper functioning of each of these components is crucial. As you can see in your output, the STATUS of your kube-proxy pod is Running:
kube-proxy-7ndvl 1/1 Running 0 5m
You were also asked by @Eduardo to check its logs. You can do that by issuing:
kubectl logs kube-proxy-7ndvl -n kube-system
It could tell you what was wrong with this particular pod at the time the problem occurred. Additionally, you may use the describe command to see other pod details (sometimes looking at pod events can be very helpful for figuring out what's going on):
kubectl describe pod kube-proxy-7ndvl -n kube-system
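If the kube-proxy pod has restarted in the meantime, the logs of the previous container instance are often the more interesting ones:
kubectl logs kube-proxy-7ndvl -n kube-system --previous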
The suggestion to check this particular Pod status and logs was most probably motivated by this fragment of the error messages shown during your Minikube startup process:
E0912 17:39:12.486830 17689 start.go:305] Error restarting
cluster: restarting kube-proxy: waiting for kube-proxy to be
up for configmap update: timed out waiting for the condition
As you can see, this message clearly suggests that, in short, there is "something wrong" with kube-proxy, so it made a lot of sense to check it first.
There is one more thing you may have not noticed:
kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-minikube LoadBalancer 10.102.236.236 <pending> 8080:31825/TCP 15m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 19h
Your hello-minikube service was not completely ready. In the EXTERNAL-IP column you can see that its state was <pending>. Just as you can use the describe command on Pods, you can use it to get details of a Service. A simple:
kubectl describe service hello-minikube
could tell you quite a lot in such a case.
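Another quick check is whether the Service actually has endpoints behind it; an empty list would mean its selector doesn't match any running Pods:
kubectl get endpoints hello-minikube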
1) If the minikube cluster failed while starting, how was kubectl
able to connect to minikube to create deployments and services? 2) If
the cluster is fine, then why am I getting connection refused?
Remember that a Kubernetes cluster is not a monolithic structure; it consists of many parts that depend on one another. The fact that kubectl worked and you could create a deployment doesn't mean that the whole cluster was working fine; as the error message suggests, one of its components, namely kube-proxy, may not have been functioning properly.
Going back to the beginning of your question...
I have followed the installation instructions provided here
https://kubernetes.io/docs/setup/learning-environment/minikube/#using-minikube-with-an-http-proxy
Issue1: After installing kubectl, VirtualBox and minikube I ran
the command:
minikube start --vm-driver=virtualbox
As far as I understood, you don't use an HTTP proxy, so you didn't follow the instructions from this particular fragment of the docs that you posted, did you?
I have the impression that you are mixing up two concepts: kube-proxy, which is a Kubernetes cluster component deployed as a pod in the kube-system namespace, and the HTTP proxy server mentioned in that fragment of the documentation.
I was looking at this
proxy(https://kubernetes.io/docs/setup/learning-environment/minikube/#starting-a-cluster)
what is my_proxy in this ?
If you don't know what your HTTP proxy address is, most probably you simply don't use one, and if you don't use one to connect to the Internet from your computer, it doesn't apply to your case in any way.
Otherwise you need to set it up for your Minikube by providing additional flags when you start it as follows:
minikube start --docker-env http_proxy=http://$YOURPROXY:PORT \
--docker-env https_proxy=https://$YOURPROXY:PORT
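If you really do sit behind a proxy, you usually also have to exclude the Minikube VM's own address from proxying on your host, otherwise curl to the NodePort URL goes to the proxy instead of the VM. A rough sketch (the lowercase variable name follows the usual convention; adjust to your environment):
export no_proxy=$no_proxy,$(minikube ip)
curl http://$(minikube ip):31825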
If you were able to start your Minikube and it now works properly using only the command:
minikube start --vm-driver=virtualbox
then your issue was caused by something else, and you don't need to provide the above-mentioned flags to tell Minikube which HTTP proxy server you're using.
As far as I understand, currently everything is up and running and you can access the URL returned by the command minikube service hello-minikube --url without any problem, right? You can also run the command kubectl get service hello-minikube and check whether its output differs from what you posted before. As you didn't attach any yaml definition files, it's difficult to tell whether there was something wrong with your service definition. Also note that LoadBalancer is a service type designed to work with external load balancers provided by cloud providers; on Minikube a NodePort service is used instead.
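For example, on Minikube exposing the deployment as a NodePort service is typically all that is needed (a sketch; 8080 is the port the echoserver image used above listens on):
kubectl expose deployment hello-minikube --type=NodePort --port=8080
minikube service hello-minikube --url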

Related

minikube dashboard unable to access it from outside/internet

Here is the output of minikube dashboard
ubuntu@ip-172-31-5-166:~$ minikube dashboard
* Enabling dashboard ...
* Verifying dashboard health ...
* Launching proxy ...
* Verifying proxy health ...
* Opening http://127.0.0.1:45493/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ in your default browser...
- http://127.0.0.1:45493/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
I have enabled port 45493 at the Security Group level and also on the Linux VM. However, when I try to access the Kube dashboard, I have no luck:
wget http://13.211.44.210:45493/
--2020-04-16 05:50:52-- http://13.211.44.210:45493/
Connecting to 13.211.44.210:45493... failed: Connection refused.
However, when I do the below, it works and produces an index.html file with status code 200:
wget http://127.0.0.1:45493/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--2020-04-16 05:52:55-- http://127.0.0.1:45493/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
Connecting to 127.0.0.1:45493... connected.
HTTP request sent, awaiting response... 200 OK
Steps to reproduce at a high level are as below:
1. EC2 Ubuntu of size t2.large
2. Install minikube, minikube start --driver=docker
3. Perform the deployment with kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml
4. kubectl get pods -n kubernetes-dashboard
NAME READY STATUS RESTARTS AGE
dashboard-metrics-scraper-84bfdf55ff-xx8pl 1/1 Running 0 26m
kubernetes-dashboard-bc446cc64-7nl68 1/1 Running 0 26m
5. kubectl get svc -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dashboard-metrics-scraper ClusterIP 10.102.85.110 <none> 8000/TCP 40m
kubernetes-dashboard ClusterIP 10.99.75.241 <none> 80/TCP 40m
My question is: why am I unable to access the dashboard from the internet?
This is by design; minikube is a development tool for local environments.
You can deploy an ingress or loadbalancer service to expose the dashboard, if you really know what you are doing.
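If all you need is to reach the dashboard from your own machine, a simpler approach is to leave it bound to 127.0.0.1 on the EC2 host and tunnel to it over SSH (a sketch; the key file is an assumption, and 45493 is the proxy port from the output above):
ssh -i <your-key.pem> -L 45493:127.0.0.1:45493 ubuntu@13.211.44.210
Then open http://127.0.0.1:45493/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ in your local browser while the tunnel is up.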

How to access a pod in a k8s cluster via URL

I have a service running in a cluster in a namespace:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
amundsen-frontend LoadBalancer 10.100.59.220 a563823867e6f11ea82a90a9c116adac-124ae00284b50400.elb.us-west-2.amazonaws.com 80:31866/TCP 70m
And when I run pods:
kubectl get pods
NAME READY STATUS RESTARTS AGE
amundsen-frontend-595b49d856-mkbjj 1/1 Running 0 74m
amundsen-metadata-5df6c6c8d8-nrk9f 1/1 Running 0 74m
amundsen-search-c8b7cd9f6-mspzr 1/1 Running 0 74m
dsci-amundsen-elasticsearch-client-65f858c656-znjfd 1/1 Running 0 74m
dsci-amundsen-elasticsearch-data-0 1/1 Running 0 74m
dsci-amundsen-elasticsearch-master-0 1/1 Running 0 74m
I'm not really sure what to do here. How do I access the URL? Can I port-forward in development? What do I do in production? The front-end pod is the one I want to access, as is the search pod.
This is what's in my charts.yaml for helm:
frontEnd:
  ##
  ## frontEnd.serviceName -- The frontend service name.
  ##
  serviceName: frontend
  ##
  ## frontEnd.imageVersion -- The frontend version of the metadata container.
  ##
  imageVersion: 2.0.0
  ##
  ## frontEnd.servicePort -- The port the frontend service will be exposed on via the loadbalancer.
  ##
  servicePort: 80
With so little information I don't know if I can solve your problem, but I will try to help you find it.
To start with, it would be helpful to see your service and pod config:
kubectl get svc amundsen-frontend -o yaml
kubectl get pod amundsen-frontend-595b49d856-mkbjj -o yaml
You can try to reach the frontend from another pod; this will help figure out whether the problem is in the pod or in the ingress layer.
To gain shell access inside the search pod's container, run:
kubectl exec -it amundsen-search-c8b7cd9f6-mspzr --container <<name of container>> -- sh
If you have only one container in the pod, you can omit the --container part of the command above.
Once inside, check if you are able to reach the frontend service with curl (the service name resolves through cluster DNS, and 80 is the service port):
curl http://amundsen-frontend:80
If you are able to establish communication, then look for the problem in the ingress layer. You may want to look at your ingress logs to see why it's timing out.
Security groups in AWS are also worth exploring.
Is your ingress configured properly?
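For local development, a port-forward against the Service is also an option (a sketch, assuming the service port is 80 as shown in the chart values):
kubectl port-forward svc/amundsen-frontend 8080:80
Then open http://localhost:8080 while the command is running.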

Logging for Kubernetes Calico NetworkPolicy?

I am new to Kubernetes NetworkPolicy and the Network plugin calico.
I have successfully implemented calico in my Kubernetes cluster:
[root@node1 ~]# kubectl get po --all-namespaces -o wide | grep calico
kube-system calico-kube-controllers-5d8b5bc986-sllmk 1/1 Running
kube-system calico-node-4wk8f 1/1 Running
kube-system calico-node-5kz99 1/1 Running
kube-system calico-node-bfk9w 1/1 Running
kube-system calico-node-f2tb2 1/1 Running
kube-system calico-node-hrcf4 1/1 Running
kube-system calico-node-wvh8d 1/1 Running
I have also configured relevant network policies and they work perfectly fine.
The only thing I am concerned about is logging. I am unable to find any logs that could tell me whether a request is being accepted or blocked.
I've tried checking the logs of the calico-node-* pods but they do not provide any useful logs.
Are there any other logs that I can look at?
You can inspect the calico-node containers' logs across your Kubernetes cluster under the path /var/log/calico; the location can be changed via the --log-dir parameter of the calicoctl node run command, as described in the Calico documentation.
If you want to observe logs from the CNI plugin itself, see the CNI logging page of the documentation.
I found it very helpful to log events from the Calico CNI plugin via the kubelet and then collect them through systemd; you can also specify a value for the log_level parameter.
Kubernetes NetworkPolicy doesn't support logging, but Calico's native NetworkPolicy supports a "log" action that allows you to log packets to the system log.
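A minimal sketch of what that looks like (the policy name and selector are hypothetical; this uses the projectcalico.org/v3 API and is applied with calicoctl apply -f):
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: log-then-deny
  namespace: default
spec:
  selector: app == 'my-app'   # hypothetical label selector
  types:
  - Ingress
  ingress:
  - action: Log               # matching packets are written to the node's syslog
  - action: Deny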
Tigera's (disclaimer: I work for Tigera) commercial product, CNX, which is built on Calico offers additional auditing and compliance features so you might want to check that out.

kubernetes service IPs not reachable

So I've got a Kubernetes cluster up and running using the Kubernetes on CoreOS Manual Installation Guide.
$ kubectl get no
NAME STATUS AGE
coreos-master-1 Ready,SchedulingDisabled 1h
coreos-worker-1 Ready 54m
$ kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health": "true"}
etcd-2 Healthy {"health": "true"}
etcd-1 Healthy {"health": "true"}
$ kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
default curl-2421989462-h0dr7 1/1 Running 1 53m 10.2.26.4 coreos-worker-1
kube-system busybox 1/1 Running 0 55m 10.2.26.3 coreos-worker-1
kube-system kube-apiserver-coreos-master-1 1/1 Running 0 1h 192.168.0.200 coreos-master-1
kube-system kube-controller-manager-coreos-master-1 1/1 Running 0 1h 192.168.0.200 coreos-master-1
kube-system kube-proxy-coreos-master-1 1/1 Running 0 1h 192.168.0.200 coreos-master-1
kube-system kube-proxy-coreos-worker-1 1/1 Running 0 58m 192.168.0.204 coreos-worker-1
kube-system kube-scheduler-coreos-master-1 1/1 Running 0 1h 192.168.0.200 coreos-master-1
$ kubectl get svc --all-namespaces
NAMESPACE NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes 10.3.0.1 <none> 443/TCP 1h
As per the guide, I've set up a service network 10.3.0.0/16 and a pod network 10.2.0.0/16. The pod network seems fine, as the busybox and curl containers get IPs. But the service network has problems. I originally encountered this when deploying kube-dns: the service IP 10.3.0.1 couldn't be reached, so kube-dns couldn't start all its containers and DNS was ultimately not working.
From within the curl pod, I can reproduce the issue:
[ root@curl-2421989462-h0dr7:/ ]$ curl https://10.3.0.1
curl: (7) Failed to connect to 10.3.0.1 port 443: No route to host
[ root@curl-2421989462-h0dr7:/ ]$ ip route
default via 10.2.26.1 dev eth0
10.2.0.0/16 via 10.2.26.1 dev eth0
10.2.26.0/24 dev eth0 src 10.2.26.4
It seems OK that there's only a default route in the container. As I understood it, the request (to the default route) should be intercepted by the kube-proxy on the worker node and forwarded to the proxy on the master node, where the IP is translated via iptables to the master's public IP.
There seems to be a common problem with a bridge/netfilter sysctl setting, but that seems fine in my setup:
core@coreos-worker-1 ~ $ sysctl net.bridge.bridge-nf-call-iptables
net.bridge.bridge-nf-call-iptables = 1
I'm having a really hard time troubleshooting this, as I lack an understanding of what the service IP is used for, how the service network is supposed to work in terms of traffic flow, and how best to debug it.
So here are the questions I have:
What is the 1st IP of the service network (10.3.0.1 in this case) used for?
Is above description of the traffic flow correct? If not, what steps does it take for a container to reach a service IP?
What are the best ways to debug each step in the traffic flow? (I can't get any idea what's wrong from the logs)
Thanks!
The Service network provides fixed IPs for Services. It is not a routable network (so don't expect ip ro to show anything, nor will ping work) but a collection of iptables rules managed by kube-proxy on each node (see iptables -L; iptables -t nat -L on the nodes, not the Pods). These virtual IPs act as a load-balancing proxy for endpoints (kubectl get ep), which are usually ports of Pods (but not always) with a specific set of labels, as defined in the Service.
The first IP on the Service network is for reaching the kube-apiserver itself. It's listening on port 443 (kubectl describe svc kubernetes).
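A quick way to tell whether that virtual IP is being translated at all is to hit it from inside a pod; any HTTP response, even a 401/403, means the kube-proxy/iptables path works, while "No route to host" means it doesn't (a sketch):
curl -k https://10.3.0.1/version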
Troubleshooting is different on each network/cluster setup. I would generally check:
Is kube-proxy running on each node? On some setups it's run via systemd and on others there is a DaemonSet that schedules a Pod on each node. On your setup it is deployed as static Pods created by the kubelets themselves from /etc/kubernetes/manifests/kube-proxy.yaml.
Locate logs for kube-proxy and find clues (can you post some?)
Change kube-proxy into userspace mode. Again, the details depend on your setup. For you it's in the file I mentioned above. Append --proxy-mode=userspace as a parameter on each node
Is the overlay (pod) network functional?
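One more concrete check (a sketch, assuming kube-proxy runs in iptables mode): look for the service IP in the nat table on a node; if no KUBE-SERVICES rule mentions 10.3.0.1, kube-proxy never programmed the rules:
sudo iptables -t nat -L KUBE-SERVICES -n | grep 10.3.0.1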
If you leave comments I will get back to you.
I had this same problem, and the ultimate solution that worked for me was enabling IP forwarding on all nodes in the cluster, which I had neglected to do.
$ sudo sysctl net.ipv4.ip_forward=1
net.ipv4.ip_forward = 1
Service IPs and DNS started working immediately afterwards.
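To make the setting survive reboots you can also persist it (a sketch; the file name is arbitrary):
echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-ip-forward.conf
sudo sysctl --system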
I had the same issue; it turned out to be a configuration issue in kube-proxy.yaml. For the "master" parameter I had the IP address, as in - --master=192.168.3.240, but it actually needs to be a URL, like - --master=https://192.168.3.240
FYI my kube-proxy successfully uses --proxy-mode=iptables (v1.6.x)

Kubernetes dashboard on vSphere, fresh install gives "no route to host"

This was taken from GitHub (issue #24407) to Stack Overflow.
Even with the commit from Friday, May 6th 2016 (commit c11229f) to cluster/vsphere, this error
Error: 'dial tcp 172.17.0.2:9090: no route to host'
Trying to reach: 'http://172.17.0.2:9090/'
remains.
I tried on a fresh install of VMware vSphere ESXi 6.0.0; installed k8s with the standard KUBERNETES_PROVIDER=vsphere cluster/kube-up.sh and the script finished with positive results, this time with "kubernetes-dashboard" enabled from the start:
Cluster validation succeeded
Done, listing cluster services:
Kubernetes master is running at https://192.168.1.36
KubeDNS is running at https://192.168.1.36/api/v1/proxy/namespaces/kube-system/services/kube-dns
kubernetes-dashboard is running at https://192.168.1.36/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
Yet I am still unable to connect to the dashboard from my Mac, getting the infamous "no route to host"...
Am I mistakenly under the impression that a k8s installation should work out of the box on VMware vSphere?
Or is, e.g., the lack of an external IP a probable cause? (If so, I need to find out how to enable one; I am under the impression kube-proxy is taking care of this.)
$ kubectl get svc --namespace=kube-system
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns 10.244.240.240 <none> 53/UDP,53/TCP 2h
kubernetes-dashboard 10.244.240.121 <none> 80/TCP 2h