Cannot access the proxy of a Kubernetes pod

I created a Kubernetes cluster on my Debian 9 machine using kind, which apparently works because I can run kubectl cluster-info and get valid output.
Now I wanted to fool around with the tutorial on the Learn Kubernetes Basics site.
I have already deployed the app
kubectl create deployment kubernetes-bootcamp --image=gcr.io/google-samples/kubernetes-bootcamp:v1
and started the kubectl proxy.
Output of kubectl get deployments
NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
kubernetes-bootcamp   1/1     1            1           17m
My problem now is: when I try to see the output of the application using curl I get
Error trying to reach service: 'dial tcp 10.244.0.5:80: connect: connection refused'
My commands
export POD_NAME=$(kubectl get pods -o go-template --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}')
curl http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME/proxy/
For the sake of completeness I can run curl http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME/ and I get valid output.

The steps in this tutorial module assume an environment in which you are working on one of the cluster nodes, and the command checks connectivity to the service locally on that node.
In your case, however, Kubernetes is running inside Docker containers (kind), so the curl command is most likely being run from the host that is serving those containers.
It might be possible to use docker exec to get inside a kind node and run the curl command from there.
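If you want to try that route, here is a rough sketch, assuming the default kind node container name kind-control-plane and the pod IP from the error message (the bootcamp app itself listens on 8080):
docker ps    # find the kind node container, typically kind-control-plane
docker exec -it kind-control-plane curl http://10.244.0.5:8080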
Hope this helps.

I'm also following the tutorial using kind, and I got it to work by forwarding the port:
kubectl port-forward $POD_NAME 8001:8001

Try adding :8080 after the $POD_NAME:
curl http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME:8080/proxy/

Related

Execute a command on Kubernetes node from the master

I would like to execute a command on a node from the master. For example, let's say I have a worker node: kubenode01.
Now a pod (pod-test) is running on this node. Using "kubectl get pods --output=wide" on the master shows that the pod is running on this node.
Trying to execute a command on that pod from the master results in an error, e.g.:
kubectl exec -ti pod-test -- cat /etc/resolv.conf
The result is:
Error from server: error dialing backend: dial tcp 10.0.22.131:10250: i/o timeout
Any idea?
Thanks in advance
You can execute kubectl commands from anywhere as long as your kubeconfig is configured to point to the right cluster URL (kube-apiserver), with the right credentials and the firewall allows connecting to the kube-apiserver port.
In your case, I'd check if your 10.0.22.131:10250 is the real IP:PORT for your kube-apiserver and that you can access it.
Note that kubectl exec -ti pod-test -- cat /etc/resolv.conf runs on the Pod and not on the Node. If you'd like to run it on the Node, simply use SSH.
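To confirm which kube-apiserver endpoint your kubeconfig actually points at (so you can compare it with the address in the error), something like this should work:
kubectl cluster-info
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'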
Update:
There are two other alternatives here:
You can create a pod (or debug pod) with a nodeSelector that specifically makes that pod run on the specific node.
If you are trying to debug something on a pod already running on a specific node, you can also try creating an ephemeral debug container.
On newer versions of Kubernetes you can use kubectl debug to run a debug pod directly on a specific node.
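A minimal sketch of those two approaches, using the node name kubenode01 from the question (the pod name and the busybox image are just placeholders):
# Throwaway pod pinned to the node via a nodeSelector override
kubectl run node-debug --image=busybox --restart=Never \
  --overrides='{"apiVersion":"v1","spec":{"nodeSelector":{"kubernetes.io/hostname":"kubenode01"}}}' \
  -- sleep 3600
# On newer clusters, kubectl debug can target the node directly
kubectl debug node/kubenode01 -it --image=busybox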
✌️

How to access port-forwarded services on GKE

I'm new to GKE/GCP and this is my first project.
I'm setting up Istio using the https://istio.io/docs/setup/kubernetes/quick-start-gke-dm/ tutorial.
I've exposed grafana as shown in the post using:
kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=grafana -o jsonpath='{.items[0].metadata.name}') 3000:3000 &
curl http://localhost:3000/dashboard/db/istio-dashboard
gives me the HTML page in the terminal. To access it from the browser I'm using the master IP I get after executing kubectl cluster-info.
http://{master-ip}:3000/dashboard/db/istio-dashboard is not accessible.
How do I access services using port-forward on gke?
First grab the name of the Pod
$ kubectl get pod
and then use the port-forward command.
$ kubectl port-forward <pod-name> 3000:3000
It worked for me; I found it on a nice website that also explains in detail how to do it. Hope it can be useful.
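One thing to keep in mind: kubectl port-forward binds to 127.0.0.1 by default, so the forwarded port is only reachable as http://localhost:3000 on the machine running the command, not via the master IP. If you really need to reach it from another host, newer kubectl versions let you bind to other interfaces (use with care):
kubectl -n istio-system port-forward <pod-name> 3000:3000 --address 0.0.0.0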
What (exact) HTTP page is returned by the curl command? Both of these docs [1] & [2] suggest using the URL (with localhost) in the browser after setting up a tunnel to Grafana: http://localhost:3000/dashboard/db/istio-dashboard
Alternatively, have you tried with istio-ingressgateway IP address?
[1] https://github.com/GoogleCloudPlatform/gke-istio-telemetry-demo#view-grafana-ui
[2] https://istio.io/docs/setup/kubernetes/quick-start-gke-dm/#grafana

How to debug kubectl apply for kube-flannel.yml?

I'm trying to create a kubernetes cluster following the document at: https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/
First I installed kubeadm with a Docker image on CoreOS (1520.9.0) inside VirtualBox with Vagrant:
docker run -it \
-v /etc:/rootfs/etc \
-v /opt:/rootfs/opt \
-v /usr/bin:/rootfs/usr/bin \
-e K8S_VERSION=v1.8.4 \
-e CNI_RELEASE=v0.6.0 \
xakra/kubeadm-installer:0.4.7 coreos
This was my kubeadm init:
kubeadm init --pod-network-cidr=10.244.0.0/16
When I run the command:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
It returns:
clusterrole "flannel" configured
clusterrolebinding "flannel" configured
serviceaccount "flannel" configured
configmap "kube-flannel-cfg" configured
daemonset "kube-flannel-ds" configured
But if I check "kubectl get pods --all-namespaces"
It returns:
NAMESPACE     NAME                              READY   STATUS             RESTARTS   AGE
kube-system   etcd-coreos1                      1/1     Running            0          18m
kube-system   kube-apiserver-coreos1            1/1     Running            0          18m
kube-system   kube-controller-manager-coreos1   0/1     CrashLoopBackOff   8          19m
kube-system   kube-scheduler-coreos1            1/1     Running            0          18m
With journalctl -f -u kubelet I can see this error: Unable to update cni config: No networks found in /etc/cni/net.d
I suspect that something was wrong with the command kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
Is there a way to know why this command doesn't work? Can I get some logs from anywhere?
Just tonight I used kubespray to provision a Vagrant cluster, on CoreOS, using flannel (vxlan), and I was also mystified about how flannel could be a Pod inside Kubernetes.
It turns out, as seen here, that they are using the flannel-cni image from quay.io to write out the CNI files, using a flannel side-car plus hostDir volume mounts; it outputs cni-conf.json (which configures CNI to use flannel) and then net-conf.json (which configures the subnet and backend used by flannel).
I hope the jinja2 mustache syntax doesn't obfuscate the answer, but I found it very interesting to see how the Kubernetes folks chose to do it "for real", to compare and contrast against the example DaemonSet given in the flannel-cni README. I guess that's the long way of saying: try the descriptors in the flannel-cni README, and if they don't work, see whether they differ in some way from the known-working kubespray setup.
Update: as a concrete example, observe that the Documentation yaml doesn't include the --iface= switch, and if your Vagrant setup is using both NAT and "private_network" then it likely means flannel is binding to eth0 (the NAT one) and not eth1 with its more static IP. I saw that caveat mentioned in the docs, but can't immediately recall where in order to cite it.
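If that is the situation, a hedged sketch of the fix (assuming eth1 is the "private_network" interface) is to add the --iface flag to the flanneld container args in the manifest before applying it:
# Fetch the manifest, add "- --iface=eth1" under the flanneld container's args
# (next to --ip-masq and --kube-subnet-mgr), then apply it
curl -sLO https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml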
Update 2:
Is there a way to know why this command doesn't work? Can I get some logs from anywhere?
One may almost always access the logs of a Pod (even a statically defined one such as kube-controller-manager-coreos1) in the same manner: kubectl --namespace=kube-system logs kube-controller-manager-coreos1. In a CrashLoopBackOff situation, adding -p (for "previous") will show the logs from the most recent crash, but only for a while, not indefinitely. Occasionally kubectl --namespace=kube-system describe pod kube-controller-manager-coreos1 will show helpful information, either in the Events section at the bottom or in the Status block near the top if it was Terminated for cause.
In the case of a very bad failure, such as the apiserver failing to come up (and thus kubectl logs won't do anything), ssh to the Node and use a mixture of journalctl -u kubelet.service --no-pager --lines=150 and docker logs ${the_sha_or_name} to try to see any error text. You will almost certainly need docker ps -a in the latter case to find the exited container's sha or name, and that same "only for a while" caveat applies there too, as dead containers are pruned after some time.
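Put together, a typical debugging pass for the crashing controller-manager from the question looks roughly like this:
kubectl --namespace=kube-system describe pod kube-controller-manager-coreos1
kubectl --namespace=kube-system logs kube-controller-manager-coreos1 -p
# on the node itself, if kubectl can't reach the apiserver:
journalctl -u kubelet.service --no-pager --lines=150
docker ps -a                  # find the exited container's sha or name
docker logs ${the_sha_or_name}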
In the case of vagrant, one can ssh into the VM in one of several ways:
vagrant ssh coreos1
vagrant ssh-config > ssh-config && ssh -F ssh-config coreos1
or, if it has a "private_network" address such as 192.168.99.101, you can usually ssh -i ~/.vagrant.d/insecure_private_key core@192.168.99.101, but one of the first two is almost always more convenient.

IBM Cloud Private console not coming up after installation

I have installed IBM Cloud Private with 3 nodes. Master, proxy, worker, and management are configured on all the nodes. I also added the vSphere cloud provider configuration in config.yaml before the installation.
The installation was successful and I got the URL for the console, http://proxy_vip:8443, but I cannot access the console. Port 8443 is not listening.
When I checked the pod status I got the output below.
I found this issue while running kubectl -s 127.0.0.1:8888 -n kube-system get pods. The other pods are running.
Try deleting the pod using kubectl delete pod icp-router -n kube-system. It should reinitialize the pod.
The admin console will be available at https://master_ip:8443/console. If the port isn't listening, then you can confirm the health of the icp-router pod(s):
kubectl -n kube-system get pods -o wide | grep icp-router
The output will show you the pod which is used to serve access to the web console. If it's not running or in a bad state, then your web console may not be accessible. If you can post logs from the container, then it may provide more insight into what's going on within your cluster:
kubectl -n kube-system logs icp-router-[XXXXX]
After an ICP 2.1.0 installation, if the pods are in CrashLoopBackOff and the kubectl logs or docker logs command shows an 'Illegal instruction (core dumped)' error, you need to check your CPU information with cat /proc/cpuinfo. Ensure your CPU has the 'sse4_2' flag.
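A quick way to check for that flag:
grep -c sse4_2 /proc/cpuinfo    # a non-zero count means the CPU supports SSE4.2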

Service discovery on Kubernetes

I have kubeDNS set up on a bare metal kubernetes cluster. I thought that would allow me to access services as described here (http:// for those who don't want to follow the link), but when I run
curl https://monitoring-influxdb:8083
I get the error
curl: (6) Could not resolve host: monitoring-influxdb
This is true when I run curl on a service name in any namespace. Is this an error with my kubeDNS setup, or are there different steps I need to take in order to achieve this? I get the expected output when I run the test at the end of this article.
For reference:
kubeDNS controller yaml files
kubeDNS service yaml file
kubelet flags
output of kubectl get svc in default and kube-system namespaces
The service discovery that you're trying to use is documented at https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/, and is for communications within one Pod talking to an existing Service, not for nodes (or the master) speaking to Kubernetes services.
You will want to leverage the DNS for the service in form of <servicename>.<namespace> or <servicename>.<namespace>.svc.cluster.local. To see this in operation, kick up an interactive pod with busybox (or use an existing pod of your own) with something like:
kubectl run -i --tty alpine-interactive --image=alpine --restart=Never
and within the shell that is provided there, run an nslookup command. From your example, I'm guessing you're trying to access InfluxDB from https://github.com/kubernetes/heapster/tree/master/deploy/kube-config/influxdb; in that case it will be installed into the kube-system namespace, and the service name you'd use from another Pod inside the cluster would be:
monitoring-influxdb.kube-system.svc.cluster.local
For example:
kubectl run -i --tty alpine --image=alpine --restart=Never
If you don't see a command prompt, try pressing enter.
/ # nslookup monitoring-influxdb.kube-system.svc.cluster.local
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: monitoring-influxdb.kube-system.svc.cluster.local
Address 1: 10.102.27.233 monitoring-influxdb.kube-system.svc.cluster.local
As @Michael Hausenblas pointed out in the comments, curl http://monitoring-influxdb:8086 needs to be run from within a pod. Doing that provided the expected results.
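For completeness, a rough sketch of doing that with a throwaway pod (alpine doesn't ship curl, so it has to be installed first; the 8086 port and the InfluxDB /ping health endpoint are assumptions based on the comment):
kubectl run -i --tty curl-test --image=alpine --restart=Never -- sh
/ # apk add --no-cache curl
/ # curl -i http://monitoring-influxdb.kube-system.svc.cluster.local:8086/ping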