I have a service running in a cluster in a namespace:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
amundsen-frontend LoadBalancer 10.100.59.220 a563823867e6f11ea82a90a9c116adac-124ae00284b50400.elb.us-west-2.amazonaws.com 80:31866/TCP 70m
And when I list the pods:
kubectl get pods
NAME READY STATUS RESTARTS AGE
amundsen-frontend-595b49d856-mkbjj 1/1 Running 0 74m
amundsen-metadata-5df6c6c8d8-nrk9f 1/1 Running 0 74m
amundsen-search-c8b7cd9f6-mspzr 1/1 Running 0 74m
dsci-amundsen-elasticsearch-client-65f858c656-znjfd 1/1 Running 0 74m
dsci-amundsen-elasticsearch-data-0 1/1 Running 0 74m
dsci-amundsen-elasticsearch-master-0 1/1 Running 0 74m
I'm not really sure what to do here. How do I access the URL? Can I port-forward in development? What do I do in production? The front-end pod is one I want to access, and so is the search pod.
This is what's in my charts.yaml for helm:
frontEnd:
  ##
  ## frontEnd.serviceName -- The frontend service name.
  ##
  serviceName: frontend
  ##
  ## frontEnd.imageVersion -- The frontend version of the metadata container.
  ##
  imageVersion: 2.0.0
  ##
  ## frontEnd.servicePort -- The port the frontend service will be exposed on via the loadbalancer.
  ##
  servicePort: 80
With so little information I don't know if I can solve your problem, but I'll try to help you find it.
To start with, it would be helpful to see your service and pod config:
kubectl get svc amundsen-frontend -o yaml
kubectl get pod amundsen-frontend-595b49d856-mkbjj -o yaml
You can try to reach the frontend from another pod; this will help figure out whether the problem is in the pod or in the ingress layer.
To gain shell access inside the search pod's container, run:
kubectl exec -it amundsen-search-c8b7cd9f6-mspzr --container <<name of container>> -- sh
If you have only one container in the pod, you can omit the container part from the command above.
Once inside, check whether you are able to reach the frontend with curl. Note that pod names are not resolvable through cluster DNS, but the service name is; also, inside the cluster the service listens on its service port (80), not the NodePort (31866):
curl amundsen-frontend
curl amundsen-frontend:80
If you are able to establish communication, then look for the problem in the ingress layer. You may want to look at your ingress logs to see why it's timing out.
Network security groups in AWS are also worth exploring.
Is your ingress configured properly?
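To answer the development question: yes, you can port-forward the frontend service to your local machine instead of going through the load balancer. A minimal sketch, assuming you substitute the namespace the chart was installed into:
# forward local port 8080 to the frontend service's port 80
kubectl port-forward svc/amundsen-frontend 8080:80 -n <your-namespace>
# then open http://localhost:8080 in a browser
For production you would keep the LoadBalancer (or put an Ingress in front of it) and point a DNS record at the ELB hostname shown in the EXTERNAL-IP column.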
Here is the output of minikube dashboard
ubuntu@ip-172-31-5-166:~$ minikube dashboard
* Enabling dashboard ...
* Verifying dashboard health ...
* Launching proxy ...
* Verifying proxy health ...
* Opening http://127.0.0.1:45493/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ in your default browser...
- http://127.0.0.1:45493/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
I have enabled port 45493 at the Security Group level and also on the Linux VM. However, when I try to access the Kube dashboard, I have no luck:
wget http://13.211.44.210:45493/
--2020-04-16 05:50:52-- http://13.211.44.210:45493/
Connecting to 13.211.44.210:45493... failed: Connection refused.
However, when I do the below, it works and produces index.html file with status code 200
wget http://127.0.0.1:45493/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--2020-04-16 05:52:55-- http://127.0.0.1:45493/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
Connecting to 127.0.0.1:45493... connected.
HTTP request sent, awaiting response... 200 OK
Steps to reproduce at a high level are as below:
1. An EC2 Ubuntu instance of size t2.large
2. Install minikube and run minikube start --driver=docker
3. Deploy the dashboard: kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml
4. kubectl get pods -n kubernetes-dashboard
NAME READY STATUS RESTARTS AGE
dashboard-metrics-scraper-84bfdf55ff-xx8pl 1/1 Running 0 26m
kubernetes-dashboard-bc446cc64-7nl68 1/1 Running 0 26m
5. kubectl get svc -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dashboard-metrics-scraper ClusterIP 10.102.85.110 <none> 8000/TCP 40m
kubernetes-dashboard ClusterIP 10.99.75.241 <none> 80/TCP 40m
My question is: why am I unable to access it over the internet?
This is by design; minikube is a development tool for local environments. The proxy that minikube dashboard launches binds to 127.0.0.1 only, which is why the connection from the public IP is refused.
You can deploy an ingress or a LoadBalancer/NodePort service to expose the dashboard, if you really know what you are doing.
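A minimal sketch of one way to do that, based on the service names from your output (with the docker driver the minikube node IP is only reachable from inside the VM, so an SSH tunnel is usually the safer route):
# switch the dashboard service from ClusterIP to NodePort
kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec": {"type": "NodePort"}}'
# look up the node port that was allocated
kubectl -n kubernetes-dashboard get svc kubernetes-dashboard
# from inside the VM, reach it via the minikube node IP and that port
curl http://$(minikube ip):<allocated-node-port>
Alternatively, tunnel the proxy port from your workstation (e.g. ssh -L 45493:127.0.0.1:45493 ubuntu@<vm-public-ip>) and browse to the 127.0.0.1 URL that minikube printed.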
I am a beginner to Kubernetes. I am trying to install minikube because I want to run my application in Kubernetes. I am using Ubuntu 16.04.
I have followed the installation instructions provided here
https://kubernetes.io/docs/setup/learning-environment/minikube/#using-minikube-with-an-http-proxy
Issue 1:
After installing kubectl, VirtualBox and minikube, I ran the command
minikube start --vm-driver=virtualbox
It fails with the following error:
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
E0912 17:39:12.486830 17689 start.go:305] Error restarting
cluster: restarting kube-proxy: waiting for kube-proxy to be
up for configmap update: timed out waiting for the condition
But when I checked VirtualBox I saw the minikube VM running, and when I run kubectl:
kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.10
I see the deployments
kubectl get deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
hello-minikube 1 1 1 1 27m
I exposed the hello-minikube deployment as a service:
kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-minikube LoadBalancer 10.102.236.236 <pending> 8080:31825/TCP 15m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 19h
I got the URL for the service:
minikube service hello-minikube --url
http://192.168.99.100:31825
When I try to curl the URL I get the following error:
curl http://192.168.99.100:31825
curl: (7) Failed to connect to 192.168.99.100 port 31825: Connection refused
1) If the minikube cluster failed while starting, how was kubectl able to connect to minikube to create deployments and services?
2) If the cluster is fine, then why am I getting connection refused?
I was looking at this proxy (https://kubernetes.io/docs/setup/learning-environment/minikube/#starting-a-cluster). What is my_proxy in this?
Is it the minikube IP and some port?
I have tried this:
Error restarting cluster: restarting kube-proxy: waiting for kube-proxy to be up for configmap update: timed out waiting for the condition
but I do not understand how step 3 (set proxy) of the solution is done. Can someone help me with instructions for the proxy?
Adding the command output which was asked for in the comments
kubectl get po -n kube-system
NAME READY STATUS RESTARTS AGE
etcd-minikube 1/1 Running 0 4m
kube-addon-manager-minikube 1/1 Running 0 5m
kube-apiserver-minikube 1/1 Running 0 4m
kube-controller-manager-minikube 1/1 Running 0 6m
kube-dns-86f4d74b45-sdj6p 3/3 Running 0 5m
kube-proxy-7ndvl 1/1 Running 0 5m
kube-scheduler-minikube 1/1 Running 0 5m
kubernetes-dashboard-5498ccf677-4x7sr 1/1 Running 0 5m
storage-provisioner 1/1 Running 0 5m
I deleted minikube and removed all files under ~/.minikube and
reinstalled minikube. Now it is working fine. I did not get the output
before, but I have attached it to the question now that it is working.
Can you tell me what the output of this command shows?
It will be very difficult or even impossible to tell what exactly was wrong with your Minikube Kubernetes cluster now that it has already been removed and set up again.
Basically, there were a few things that you could have done to properly troubleshoot or debug your issue.
Adding the command output which was asked for in the comments
The output you posted is actually only part of the task that @Eduardo Baitello asked you to do. The kubectl get po -n kube-system command simply shows you a list of Pods in the kube-system namespace. In other words, this is the list of system pods forming your Kubernetes cluster and, as you can imagine, the proper functioning of each of these components is crucial. As you can see in your output, the STATUS of your kube-proxy pod was Running:
kube-proxy-7ndvl 1/1 Running 0 5m
You were also asked in @Eduardo's comment to check its logs. You can do that by issuing:
kubectl logs kube-proxy-7ndvl -n kube-system
It could tell you what was wrong with this particular pod at the time the problem occurred. Additionally, you may use the describe command to see other pod details (sometimes looking at pod events can be very helpful in figuring out what's going on):
kubectl describe pod kube-proxy-7ndvl -n kube-system
The suggestion to check this particular Pod status and logs was most probably motivated by this fragment of the error messages shown during your Minikube startup process:
E0912 17:39:12.486830 17689 start.go:305] Error restarting
cluster: restarting kube-proxy: waiting for kube-proxy to be
up for configmap update: timed out waiting for the condition
As you can see, this message clearly suggests that there is, in short, "something wrong" with kube-proxy, so it made a lot of sense to check it first.
There is one more thing you may not have noticed:
kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-minikube LoadBalancer 10.102.236.236 <pending> 8080:31825/TCP 15m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 19h
Your hello-minikube service was not completely ready. In the EXTERNAL-IP column you can see that its state was <pending>. Just as you can use the describe command on Pods, you can use it to get the details of a service. A simple:
kubectl describe service hello-minikube
could tell you quite a lot in such a case.
1) If the minikube cluster failed while starting, how was kubectl
able to connect to minikube to do deployments and services? 2) If the
cluster is fine, then why am I getting connection refused?
Remember that a Kubernetes cluster is not a monolithic structure; it consists of many parts that depend on one another. The fact that kubectl worked and you could create a deployment doesn't mean that the whole cluster was working fine, and as you can see, the error message suggested that one of its components, namely kube-proxy, might not have been functioning properly.
Going back to the beginning of your question...
I have followed the installation instructions provided here
https://kubernetes.io/docs/setup/learning-environment/minikube/#using-minikube-with-an-http-proxy
Issue 1: After installing kubectl, VirtualBox and minikube, I ran
the command
minikube start --vm-driver=virtualbox
As far as I understand, you don't use an http proxy, so you didn't follow the instructions from this particular fragment of the docs that you posted, did you?
I have the impression that you are mixing two concepts: kube-proxy, which is a Kubernetes cluster component deployed as a pod in the kube-system namespace, and the http proxy server mentioned in that fragment of the documentation.
I was looking at this proxy
(https://kubernetes.io/docs/setup/learning-environment/minikube/#starting-a-cluster).
What is my_proxy in this?
If you don't know what your http proxy address is, most probably you simply don't use one, and if you don't use one to connect to the Internet from your computer, this doesn't apply to your case in any way.
Otherwise you need to set it up for your Minikube by providing additional flags when you start it, as follows:
minikube start --docker-env http_proxy=http://$YOURPROXY:PORT \
--docker-env https_proxy=https://$YOURPROXY:PORT
If you were able to start your Minikube and now it works properly only using the command:
minikube start --vm-driver=virtualbox
your issue was caused by something else and you don't need to provide the above-mentioned flags to tell Minikube which http proxy server you're using.
As far as I understand, currently everything is up and running and you can access the URL returned by the command minikube service hello-minikube --url without any problem, right? You can also run kubectl get service hello-minikube and check whether its output differs from what you posted before. As you didn't attach any yaml definition files, it's difficult to tell whether anything was wrong with your service definition. Also note that LoadBalancer is a service type designed to work with external load balancers provided by cloud providers; minikube falls back to NodePort instead, which is why EXTERNAL-IP stays <pending>.
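If the service misbehaves again, checking whether it has any endpoints is a quick first step (a sketch using the service name from your output):
# a healthy service lists at least one pod IP:port under ENDPOINTS
kubectl get endpoints hello-minikube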
The Kubernetes v1.15.0 master is not able to reach the pod IP address. I was able to get this working up to 1.14, but this time it's not working any more. I have been setting up k8s clusters in EC2 using kubeadm.
Please find a log below; any comments?
[ec2-user@ip-172-31-18-31 ~]$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-172-31-16-120.ap-south-1.compute.internal Ready <none> 97m v1.15.0
ip-172-31-18-31.ap-south-1.compute.internal Ready master 116m v1.15.0
[ec2-user@ip-172-31-18-31 ~]$ kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
hello-deploy-7fd5fc7ff-dh9pw 1/1 Running 0 6m32s 10.44.0.3 ip-172-31-16-120.ap-south-1.compute.internal <none> <none>
hello-deploy-7fd5fc7ff-vrxbd 1/1 Running 0 6m32s 10.44.0.4 ip-172-31-16-120.ap-south-1.compute.internal <none> <none>
hello-pod1 1/1 Running 0 22m 10.44.0.1 ip-172-31-16-120.ap-south-1.compute.internal <none> <none>
[ec2-user@ip-172-31-18-31 ~]$ hostname
ip-172-31-18-31.ap-south-1.compute.internal
[ec2-user@ip-172-31-18-31 ~]$ curl http://10.44.0.4
Simply create a service for your pod to access it within the cluster; the type of the service should be ClusterIP.
Although each Pod has a unique IP address, those IPs are not exposed outside the cluster without a Service. Services allow your applications to receive traffic. Services can be exposed in different ways by specifying a type in the ServiceSpec.
ClusterIP (default) - Exposes the Service on an internal IP in the cluster. This type makes the Service only reachable from within the cluster.
E.g.:
apiVersion: v1
kind: Service
metadata:
  name: test-service
spec:
  type: ClusterIP
  selector:
    app: test
  ports:
    - port: 80        # port the service exposes inside the cluster
      targetPort: 80  # port the container listens on (assumed for this example)
Remember that the service's selector must match the pod's labels, and the targetPort must match a port the container actually listens on. For illustration, a matching pod is sketched below.
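A minimal sketch, assuming an nginx container that listens on port 80 (the image and names are assumptions for the example, not part of the original setup):
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
  labels:
    app: test              # must match the service's spec.selector
spec:
  containers:
    - name: app
      image: nginx         # assumed image for the example
      ports:
        - containerPort: 80  # matches the service's targetPort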
Given a Kubernetes service (of type ClusterIP) connected to a set of pods, none of which are currently ready - what will happen to a request made to the service?
Will it:
fail eagerly
timeout
wait until a ready pod is available (or forever, whichever is earlier)
something else?
It will time out.
Kube-proxy picks up the IP addresses of healthy (ready) pods from the service's endpoints and uses them as the service's backends. Also, note that all kube-proxy does is rewrite the iptables rules when you create, delete or modify a service or its endpoints change.
So, when you send a request within your network and there is no one to reply, your request will time out.
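You can observe this state directly: while no backing pod is ready, the service's endpoint list stays empty. A small sketch, with my-service as a placeholder name:
# ENDPOINTS shows <none> until at least one matching pod passes its readiness check
kubectl get endpoints my-service
kubectl describe service my-service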
Deployed nginx service
[node1 ~]$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2h
my-nginx ClusterIP 10.100.1.134 <none> 80/TCP 9s
$ curl 10.100.1.134
curl: (7) Failed connect to 10.100.1.134:80; Connection refused
Deployed nginx deployment
$ kubectl create -f nginx-depl.yaml
$ kubectl get po
NAME READY STATUS RESTARTS AGE
my-nginx-f9945ffdd-2f77f 1/1 Running 0 1m
my-nginx-f9945ffdd-rk68v 1/1 Running 0 1m
$ curl 10.100.1.134
Welcome to nginx!
Most likely you got the Connection refused error because the service had no endpoints yet: the service was created before the deployment, so at the moment of the first curl there was no ready pod behind it. Once the nginx pods were Running and ready, the same curl succeeded.
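A quick way to confirm this explanation is to watch the service's endpoints while the deployment comes up (a sketch):
# empty ENDPOINTS explains the refused connection; pod IPs appear once the pods are ready
kubectl get endpoints my-nginx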
So I've got a Kubernetes cluster up and running using the Kubernetes on CoreOS Manual Installation Guide.
$ kubectl get no
NAME STATUS AGE
coreos-master-1 Ready,SchedulingDisabled 1h
coreos-worker-1 Ready 54m
$ kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health": "true"}
etcd-2 Healthy {"health": "true"}
etcd-1 Healthy {"health": "true"}
$ kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
default curl-2421989462-h0dr7 1/1 Running 1 53m 10.2.26.4 coreos-worker-1
kube-system busybox 1/1 Running 0 55m 10.2.26.3 coreos-worker-1
kube-system kube-apiserver-coreos-master-1 1/1 Running 0 1h 192.168.0.200 coreos-master-1
kube-system kube-controller-manager-coreos-master-1 1/1 Running 0 1h 192.168.0.200 coreos-master-1
kube-system kube-proxy-coreos-master-1 1/1 Running 0 1h 192.168.0.200 coreos-master-1
kube-system kube-proxy-coreos-worker-1 1/1 Running 0 58m 192.168.0.204 coreos-worker-1
kube-system kube-scheduler-coreos-master-1 1/1 Running 0 1h 192.168.0.200 coreos-master-1
$ kubectl get svc --all-namespaces
NAMESPACE NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes 10.3.0.1 <none> 443/TCP 1h
As with the guide, I've set up a service network 10.3.0.0/16 and a pod network 10.2.0.0/16. The pod network seems fine, as the busybox and curl containers get IPs. But the service network has problems. Originally I encountered this when deploying kube-dns: the service IP 10.3.0.1 couldn't be reached, so kube-dns couldn't start all containers and DNS was ultimately not working.
From within the curl pod, I can reproduce the issue:
[ root@curl-2421989462-h0dr7:/ ]$ curl https://10.3.0.1
curl: (7) Failed to connect to 10.3.0.1 port 443: No route to host
[ root@curl-2421989462-h0dr7:/ ]$ ip route
default via 10.2.26.1 dev eth0
10.2.0.0/16 via 10.2.26.1 dev eth0
10.2.26.0/24 dev eth0 src 10.2.26.4
It seems OK that there's only a default route in the container. As I understood it, the request (to the default route) should be intercepted by the kube-proxy on the worker node and forwarded to the proxy on the master node, where the IP is translated via iptables to the master's public IP.
There seems to be a common problem with a bridge/netfilter sysctl setting, but that seems fine in my setup:
core@coreos-worker-1 ~ $ sysctl net.bridge.bridge-nf-call-iptables
net.bridge.bridge-nf-call-iptables = 1
I'm having a really hard time troubleshooting this, as I lack an understanding of what the service IP is used for, how the service network is supposed to work in terms of traffic flow, and how best to debug it.
So here are the questions I have:
What is the 1st IP of the service network (10.3.0.1 in this case) used for?
Is above description of the traffic flow correct? If not, what steps does it take for a container to reach a service IP?
What are the best ways to debug each step in the traffic flow? (I can't get any idea what's wrong from the logs)
Thanks!
The service network provides fixed IPs for Services. It is not a routable network (so don't expect ip ro to show anything, nor will ping work) but a collection of iptables rules managed by kube-proxy on each node (see iptables -L; iptables -t nat -L on the nodes, not in Pods). These virtual IPs act as a load-balancing proxy for endpoints (kubectl get ep), which are usually ports of Pods (but not always) with a specific set of labels, as defined in the Service.
The first IP on the Service network is for reaching the kube-apiserver itself. It's listening on port 443 (kubectl describe svc kubernetes).
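A quick check from inside a pod (a sketch): any HTTP response at all, even a 401/403, proves the service IP itself is reachable:
# -k skips certificate verification; we only care about connectivity here
curl -k https://10.3.0.1:443/version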
Troubleshooting is different on each network/cluster setup. I would generally check:
Is kube-proxy running on each node? On some setups it's run via systemd and on others there is a DaemonSet that schedules a Pod on each node. On your setup it is deployed as static Pods created by the kubelets themselves from /etc/kubernetes/manifests/kube-proxy.yaml
Locate the logs for kube-proxy and look for clues (can you post some?) - see the command sketch after this list
Change kube-proxy into userspace mode. Again, the details depend on your setup. For you it's in the file I mentioned above. Append --proxy-mode=userspace as a parameter on each node
Is the overlay (pod) network functional?
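A few of those checks as concrete commands, reusing the pod names and IPs from your own output (a sketch; adjust to your cluster):
# is kube-proxy running on every node?
kubectl get pods --all-namespaces -o wide | grep kube-proxy
# pull the worker's kube-proxy logs for clues
kubectl logs kube-proxy-coreos-worker-1 -n kube-system
# test the overlay network: ping the busybox pod from the curl pod
kubectl exec curl-2421989462-h0dr7 -- ping -c 1 10.2.26.3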
If you leave comments, I will get back to you.
I had this same problem, and the ultimate solution that worked for me was enabling IP forwarding on all nodes in the cluster, which I had neglected to do.
$ sudo sysctl net.ipv4.ip_forward=1
net.ipv4.ip_forward = 1
Service IPs and DNS started working immediately afterwards.
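To make that setting survive a reboot you can persist it (a sketch; the drop-in file name is arbitrary):
# write the setting to a sysctl drop-in and reload all settings
echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-ip-forward.conf
sudo sysctl --system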
I had the same issue; it turned out to be a configuration issue in kube-proxy.yaml. For the "master" parameter I had the IP address, as in - --master=192.168.3.240, but it actually needs to be a URL, like - --master=https://192.168.3.240
FYI, my kube-proxy successfully uses --proxy-mode=iptables (v1.6.x)