Here is the output of minikube dashboard:
ubuntu@ip-172-31-5-166:~$ minikube dashboard
* Enabling dashboard ...
* Verifying dashboard health ...
* Launching proxy ...
* Verifying proxy health ...
* Opening http://127.0.0.1:45493/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ in your default browser...
- http://127.0.0.1:45493/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
I have enabled port 45493 at the Security Group level and also on the Linux VM. However, when I try to access the Kube dashboard, I have no luck:
wget http://13.211.44.210:45493/
--2020-04-16 05:50:52-- http://13.211.44.210:45493/
Connecting to 13.211.44.210:45493... failed: Connection refused.
However, when I do the below, it works and produces an index.html file with status code 200:
wget http://127.0.0.1:45493/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--2020-04-16 05:52:55-- http://127.0.0.1:45493/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
Connecting to 127.0.0.1:45493... connected.
HTTP request sent, awaiting response... 200 OK
Steps to reproduce at a high level are as below:
1. EC2 Ubuntu instance of size t2.large
2. Install minikube, then minikube start --driver=docker
3. Perform a deployment, e.g. kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml
4. kubectl get pods -n kubernetes-dashboard
NAME READY STATUS RESTARTS AGE
dashboard-metrics-scraper-84bfdf55ff-xx8pl 1/1 Running 0 26m
kubernetes-dashboard-bc446cc64-7nl68 1/1 Running 0 26m
5. kubectl get svc -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dashboard-metrics-scraper ClusterIP 10.102.85.110 <none> 8000/TCP 40m
kubernetes-dashboard ClusterIP 10.99.75.241 <none> 80/TCP 40m
My question is: why am I unable to access the dashboard over the internet?
This is by design; minikube is a development tool for local environments.
You can deploy an Ingress or a LoadBalancer service to expose the dashboard, if you really know what you are doing.
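If you only need temporary access for testing, a minimal sketch (assuming you open a free port, 8001 here, in the security group; note this exposes the dashboard unauthenticated to anyone who can reach that port) is to run kubectl proxy bound to all interfaces instead of the loopback-only proxy that minikube dashboard starts:
$ kubectl proxy --address='0.0.0.0' --accept-hosts='.*' --port=8001
# then from outside, using the VM's public IP from the question:
$ wget http://13.211.44.210:8001/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/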
I deployed a Kubernetes (v1.17.5) cluster on OpenStack instances using Kubespray. Those instances are CentOS 7.6.1811 qcow2 images imported in Glance.
The install was successful, and I can see my nodes and pods with kubectl commands.
I used the deploy_netchecker option to deploy NetChecker and test the network within my cluster, and set network_plugin="flannel".
I also tried kube_proxy_mode="iptables", but it doesn't seem to affect the result.
That's pretty much all the changes I did in the k8s-cluster.yml file.
All the pods are running, and so are the services:
[centos@cl1-master-0 ~]$ kubectl get svc --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.233.0.1 <none> 443/TCP 46h
default netchecker-service NodePort 10.233.13.213 <none> 8081:31081/TCP 46h
kube-system coredns ClusterIP 10.233.0.3 <none> 53/UDP,53/TCP,9153/TCP 46h
kube-system dashboard-metrics-scraper ClusterIP 10.233.59.12 <none> 8000/TCP 46h
kube-system kubernetes-dashboard ClusterIP 10.233.63.20 <none> 443/TCP 46h
But the netchecker API gives the following answer:
[root@localhost ~]# curl http://X.X.X.X:31081/api/v1/connectivity_check
{"Message":"Connectivity check fails. Reason: there are absent or outdated pods; look up the payload","Absent":["netchecker-agent-hostnet-kk56x","netchecker-agent-hostnet-klldn","netchecker-agent-hostnet-r2vqs","netchecker-agent-hostnet-wqhjs"],"Outdated":["netchecker-agent-4jsgf","netchecker-agent-c9pcf","netchecker-agent-hostnet-jzbfv","netchecker-agent-vxgpf"]}
For an unknown reason, I cannot access the API from a cluster node using localhost, so I used an OpenStack floating IP.
Here are some logs from the agents:
[centos@cl1-master-0 ~]$ sudo vi /var/log/pods/default_netchecker-agent-vjnwl_d8290268-3ea4-4e3c-acb4-295ab162a735/netchecker-agent/0.log
{"log":"I0701 13:04:01.814246 1 agent.go:135] Response status code: 200\n","stream":"stderr","time":"2020-07-01T13:04:01.81437579Z"}
{"log":"I0701 13:04:01.814272 1 agent.go:128] Sleep for 15 second(s)\n","stream":"stderr","time":"2020-07-01T13:04:01.814393199Z"}
{"log":"I0701 13:04:16.817398 1 agent.go:55] Send payload via URL: http://netchecker-service:8081/api/v1/agents/netchecker-agent-vjnwl\n","stream":"stderr","time":"2020-07-01T13:04:16.817786735Z"}
[centos@cl1-master-0 ~]$ sudo vi /var/log/pods/default_netchecker-agent-hostnet-klldn_d5fa6e72-885f-44e1-97a6-880a25e6d6d6/netchecker-agent/0.log
{"log":"E0701 13:05:22.804428 1 agent.go:133] Error while sending info. Details: Post http://netchecker-service:8081/api/v1/agents/netchecker-agent-hostnet-klldn: dial tcp 10.233.13.213:8081: i/o timeout\n","stream":"stderr","time":"2020-07-01T13:05:22.805138032Z"}
{"log":"I0701 13:05:22.804474 1 agent.go:128] Sleep for 15 second(s)\n","stream":"stderr","time":"2020-07-01T13:05:22.805190295Z"}
{"log":"I0701 13:05:37.807140 1 agent.go:55] Send payload via URL: http://netchecker-service:8081/api/v1/agents/netchecker-agent-hostnet-klldn\n","stream":"stderr","time":"2020-07-01T13:05:37.807309111Z"}
Logs from the server do not indicate any error.
I tried to check DNS resolution with the following:
[centos@cl1-master-0 ~]$ kubectl exec -it netchecker-agent-4jsgf -- /bin/sh
/ $ nslookup kubernetes.default
Server: 169.254.25.10
Address 1: 169.254.25.10
nslookup: can't resolve 'kubernetes.default'
[centos@cl1-master-0 ~]$ kubectl exec -it netchecker-agent-4jsgf -- cat /etc/resolv.conf
nameserver 169.254.25.10
search default.svc.cluster.local svc.cluster.local cluster.local openstacklocal
options ndots:5
169.254.25.10 is the IP of nodelocaldns, but it doesn't seem to forward queries to the deployed coredns service.
When I use nslookup netchecker-service.default.svc.cluster.local 10.233.0.3, querying the coredns IP directly, I get a correct answer.
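For comparison, the two queries side by side (a sketch run from the same agent pod, using the IPs shown above):
/ $ nslookup netchecker-service.default.svc.cluster.local 169.254.25.10   # via nodelocaldns: fails
/ $ nslookup netchecker-service.default.svc.cluster.local 10.233.0.3      # via coredns: resolves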
What can be wrong with my configuration?
Thanks in advance
UPDATE: The Flannel plugin has a known issue whose fix has to be applied on all nodes of the cluster. Once done, the pods successfully report back to the netchecker server.
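The update does not name the exact Flannel issue; one commonly cited problem matching these symptoms (host-network agents timing out against the service IP) is the vxlan checksum-offload kernel bug, whose usual per-node workaround is a sketch like:
$ sudo ethtool -K flannel.1 tx-checksum-ip-generic off   # assumption: the vxlan offload bug; run on every node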
I have a very simple Spring Boot service deployed on Minikube on Windows 10.
C:\Software\Kubernetes>kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
myspringbootserver 1/1 1 1 68m
C:\Software\Kubernetes>kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 49d
myspringbootserver NodePort 10.110.179.207 <none> 9080:30451/TCP 6m50s
C:\Software\Kubernetes>minikube service myspringbootserver --url
http://192.168.99.101:30451
But when I try to hit the service from my Chrome browser with the URL
http://192.168.99.101:30451/MySpringBootServer/heartbeat
I am getting a connection refused exception. Not sure what is going wrong. Could anyone help to resolve it, please?
Can you curl or wget using the IP address of the pod?
For example: kubectl exec -it <podname> -- curl http://<podip>:9080/MySpringBootServer/heartbeat
If not, ensure the path is correct.
If yes, make sure the pod exists as an endpoint of the service:
kubectl get endpoints myspringbootserver
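If the pod is correctly selected by the service, the output should look something like this (the address shown is illustrative):
NAME                 ENDPOINTS         AGE
myspringbootserver   172.17.0.4:9080   7m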
There is a good debugging document regarding services here:
https://kubernetes.io/docs/tasks/debug-application-cluster/debug-application/#debugging-services
I'm learning about Kubernetes and ingress controllers, but I'm stuck on this error when I try to apply the Kong ingress manifest...
NAME READY STATUS RESTARTS AGE
ingress-kong-7dd57556c5-bh687 0/2 Init:0/1 0 29s
kong-migrations-gzlqj 0/1 Init:0/1 0 28s
postgres-0 0/1 Pending 0 28s
Is it possible to run this ingress on my home server without minikube? If so, how?
Note: I have an FQDN pointing to my home server.
I guess you ran the manifest from GitHub.
Issues with Pods
I have reproduced your case. As you have 3 pods, you have used the option with a DB.
If you describe the pods using
$ kubectl describe pod <podname> -n kong
you will receive an error in the output:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 7s (x4 over 17s) default-scheduler pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
You can also check the job in the kong namespace.
It works correctly on a fresh Minikube cluster, so I guess you might have applied some changes to the StorageClass.
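To confirm, you can check the PersistentVolumeClaims in the same namespace (the claim name below is illustrative; it depends on the manifest you applied). A Pending claim matches the unbound-PVC scheduling error above:
$ kubectl get pvc -n kong
NAME              STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-postgres-0   Pending                                                     5m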
Is it possible to run this ingress on my home server without minikube? If so, how?
You have to use Kubernetes to do it. Since Minikube supports LoadBalancer services, you can use it at home.
You can check this thread about FQDN. As mentioned:
The host machine should be able to resolve the name of that FQDN. You might add a record into the /etc/hosts at the Mac host to achieve that:
10.0.0.2 mydb.mytestdomain
But in your case it should be the IP address of the LoadBalancer, kong-proxy.
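For example, once you have the kong-proxy LoadBalancer IP (obtained below), the entry would look like this, with a hypothetical FQDN:
10.110.218.74 myhome.mytestdomain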
Obtain LoadBalancer IP in Minikube
If you deploy everything correctly, you can check your services:
$ kubectl get svc -n kong
You will see the kong-proxy service with type LoadBalancer and a <pending> EXTERNAL-IP.
To obtain the external IP you have to use minikube tunnel.
Please note that you need to keep $ sudo minikube tunnel running in one console the whole time.
Before Minikube tunnel
$ kubectl get svc -n kong
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kong-proxy LoadBalancer 10.110.218.74 <pending> 80:31881/TCP,443:31319/TCP 103m
kong-validation-webhook ClusterIP 10.108.204.137 <none> 443/TCP 103m
postgres ClusterIP 10.105.9.54 <none> 5432/TCP 103m
After
$ kubectl get svc -n kong
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kong-proxy LoadBalancer 10.110.218.74 10.110.218.74 80:31881/TCP,443:31319/TCP 104m
kong-validation-webhook ClusterIP 10.108.204.137 <none> 443/TCP 104m
postgres ClusterIP 10.105.9.54 <none> 5432/TCP 104m
Testing Kong
Here you can find how to get started with Kong; it will show you how to create an Ingress. Later, as I mentioned, you have to edit the Ingress and add a rule (host), similar to the K8s docs.
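A minimal sketch of such an Ingress (the host, service name, and port are hypothetical placeholders for your own deployment; the apiVersion matches this era of Kubernetes):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: demo
  annotations:
    kubernetes.io/ingress.class: kong
spec:
  rules:
  - host: myhome.mytestdomain
    http:
      paths:
      - path: /
        backend:
          serviceName: echo
          servicePort: 80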
Having a Kubernetes service (of type ClusterIP) connected to a set of pods, but none of them currently ready - what will happen to the request?
Will it:
fail eagerly
timeout
wait until a ready pod is available (or forever, whichever is earlier)
something else?
It will time out.
Kube-proxy pulls the IP addresses of healthy pods and sets them as the endpoints of the service (backends). Also, note that all kube-proxy does is rewrite the iptables rules when you create, delete, or modify a service.
So, when you send a request within your network and there is no one to reply, your request will time out.
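You can observe this yourself: a service whose pods are all unready has no endpoints, so the iptables rules have nothing to forward to (a sketch; the service name is illustrative):
$ kubectl get endpoints my-service
NAME         ENDPOINTS   AGE
my-service   <none>      1m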
Deployed nginx service
[node1 ~]$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2h
my-nginx ClusterIP 10.100.1.134 <none> 80/TCP 9s
$ curl 10.100.1.134
curl: (7) Failed connect to 10.100.1.134:80; Connection refused
Deployed nginx deployment
$ kubectl create -f nginx-depl.yaml
$ kubectl get po
NAME READY STATUS RESTARTS AGE
my-nginx-f9945ffdd-2f77f 1/1 Running 0 1m
my-nginx-f9945ffdd-rk68v 1/1 Running 0 1m
$ curl 10.100.1.134
Welcome to nginx!
Most likely you would get a Connection refused error.
I have created a sample Node.js app and the other required files (deployment.yml, service.yml), but I am not able to access the external IP of the service.
# kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.7.240.1 <none> 443/TCP 23h
node-api LoadBalancer 10.7.254.32 35.193.227.250 8000:30164/TCP 4m37s
# kubectl get pods
NAME READY STATUS RESTARTS AGE
node-api-6b9c8b4479-nclgl 1/1 Running 0 5m55s
# kubectl describe svc node-api
Name: node-api
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=node-api
Type: LoadBalancer
IP: 10.7.254.32
LoadBalancer Ingress: 35.193.227.250
Port: <unset> 8000/TCP
TargetPort: 8000/TCP
NodePort: <unset> 30164/TCP
Endpoints: 10.4.0.12:8000
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 6m19s service-controller Ensuring load balancer
Normal EnsuredLoadBalancer 5m25s service-controller Ensured load balancer
When I try to curl the external IP, it gives connection refused:
curl 35.193.227.250:8000
curl: (7) Failed to connect to 35.193.227.250 port 8000: Connection refused
I have also exposed port 8000 in the Dockerfile. Let me know if I am missing anything.
Looking at your description in this thread, it seems everything is fine.
Here is what you can try:
SSH to the GKE node where the pod is running. You can get the node name by running the same command you used with the -o wide flag:
$ kubectl get pods -o wide
After SSHing in, try to curl the pod IP as well as the service (cluster) IP to see whether you get a response.
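For example, using the pod and service IPs from the outputs above:
$ curl http://10.4.0.12:8000     # pod IP and targetPort
$ curl http://10.7.254.32:8000   # service ClusterIP and port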
Try to exec into the pod:
$ kubectl exec -it <pod-name> -- /bin/bash
After that, curl localhost to see whether you get a response:
$ curl localhost
So if you get a response when trying the above troubleshooting steps, then it could be an underlying issue with GKE. You can file a defect report here.
If you do not get any response while trying the above steps, it is possible that you have misconfigured the cluster somewhere.
This seems to me a good starting point for troubleshooting your use case.