I would like to execute a command on a node from the master. For example, let's say I have a worker node, kubenode01, and a pod (pod-test) running on it; kubectl get pods --output=wide on the master confirms the pod is on that node.
Trying to execute a command in that pod from the master results in an error, e.g.:
kubectl exec -ti pod-test -- cat /etc/resolv.conf
The result is:
Error from server: error dialing backend: dial tcp 10.0.22.131:10250: i/o timeout
Any idea?
Thanks in advance
You can execute kubectl commands from anywhere, as long as your kubeconfig points to the right cluster URL (kube-apiserver) with the right credentials, and the firewall allows connections to the kube-apiserver port.
In your case, I'd check that 10.0.22.131 really is your node's IP, and that port 10250, the kubelet port the apiserver dials for kubectl exec, is reachable from the master.
Note that kubectl exec -ti pod-test -- cat /etc/resolv.conf runs inside the Pod, not on the Node. If you'd like to run a command on the Node itself, simply use SSH.
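A quick way to check that from the master (a sketch; the IP 10.0.22.131 and port 10250 are taken from your error message):
# confirm which IP the cluster has recorded for the node
kubectl get nodes -o wide
# test whether the kubelet port on that node is reachable from the master
nc -vz 10.0.22.131 10250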
Update:
There are two other alternatives here:
You can create a pod (or debug pod) with a nodeSelector that makes it run on that specific node.
If you are trying to debug something in a pod already running on a specific node, you can also try creating an ephemeral debug container.
On newer versions of Kubernetes you can use a debug pod to run something on a specific node; see the examples below.
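Minimal sketches of all three options (pod-test and kubenode01 come from the question; the busybox image, the sleep command, and the --target container name are assumptions to adjust for your setup):
# (1) a throwaway pod pinned to the node via a nodeSelector
kubectl run node-debug --image=busybox --restart=Never \
  --overrides='{"apiVersion":"v1","spec":{"nodeSelector":{"kubernetes.io/hostname":"kubenode01"}}}' \
  -- sleep 3600
# (2) an ephemeral debug container attached to the running pod (Kubernetes 1.23+)
kubectl debug -it pod-test --image=busybox --target=pod-test
# (3) a debug pod on the node itself; the node's filesystem is mounted at /host
kubectl debug node/kubenode01 -it --image=busybox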
✌️
Related
I have a question.
How can I run a netstat command in pod A from pod B? Pods A and B are in different namespaces. Pod A establishes a connection with a server outside the cluster, and pod B contains a script that converts netstat results into SNMP traps. I can't modify pod A's image to include anything; pod B is my own.
Thanks.
If there are no network policies in place, then something like this should work:
kubectl exec -it <pod B> -- sh
> ssh user@podA.podAnamespace "your command"
Note that an ssh client must be installed in pod B, and pod A must be running an SSH server with a user you can connect as.
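Also note that bare pod names usually don't resolve in cluster DNS; pods only get A records of the form <pod-ip-with-dashes>.<namespace>.pod.cluster.local, so a sketch of the call could look like this (the IP 10.244.1.23 and namespace ns-a are hypothetical):
# assuming pod A runs an SSH server and has IP 10.244.1.23 in namespace ns-a
ssh user@10-244-1-23.ns-a.pod.cluster.local "netstat -tn"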
When I ran the command below, I got the following message:
bistel@BISTelResearchDev-DN03:~$ kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?
While in the master node, I get the information as below:
bistel@BISTelResearchDev-NN:/etc/kubernetes$ kubectl get nodes
NAME                     STATUS     ROLES    AGE   VERSION
bistelresearchdev-dn03   NotReady   <none>   62s   v1.19.3
bistelresearchdev-nn     Ready      master   57m   v1.19.3
bistel@BISTelResearchDev-NN:/etc/kubernetes$
bistelresearchdev-dn03 is the worker node. The message "The connection to the server localhost:8080 was refused - did you specify the right host or port?" appears whenever I run any kubectl command on it.
I googled a lot, but nothing I tried worked for me.
Thanks,
Out of the box, kubectl is only configured on the master node of the cluster, so getting this error on the worker is expected rather than a problem in itself.
The actual issue I can see is that the node is in NotReady status. For that, you can check the following:
Check that the kubelet is running on node bistelresearchdev-dn03 with systemctl status kubelet
Check that a network (CNI) plugin is installed on your cluster
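Concretely, something like this (a sketch; the node name comes from your output):
# on the worker node: is the kubelet up, and what is it logging?
systemctl status kubelet
journalctl -u kubelet --no-pager | tail -n 50
# on the master: the node conditions usually name the missing piece,
# e.g. "network plugin is not ready: cni config uninitialized"
kubectl describe node bistelresearchdev-dn03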
The first computer you ran on is missing the kubeconfig file.
Normally kubectl expects to find it at
~/.kube/config
If you take the one from the master node and copy it onto your machine, your kubectl will find it and be able to use it.
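With kubeadm, for example, the admin kubeconfig lives at /etc/kubernetes/admin.conf on the master, so a sketch of the copy would be (hostnames taken from the question; you may need root on the master to read the file):
# on the machine where kubectl fails
mkdir -p ~/.kube
scp bistel@BISTelResearchDev-NN:/etc/kubernetes/admin.conf ~/.kube/config
kubectl get nodes   # should now reach the cluster instead of localhost:8080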
I created a Kubernetes cluster on my Debian 9 machine using kind.
This apparently works, because I can run kubectl cluster-info and get valid output.
Now I wanted to fool around with the tutorial on Learn Kubernetes Basics site.
I have already deployed the app
kubectl create deployment kubernetes-bootcamp --image=gcr.io/google-samples/kubernetes-bootcamp:v1
and started the kubectl proxy.
Output of kubectl get deployments
NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
kubernetes-bootcamp   1/1     1            1           17m
My problem now is: when I try to see the output of the application using curl I get
Error trying to reach service: 'dial tcp 10.244.0.5:80: connect: connection refused'
My commands
export POD_NAME=$(kubectl get pods -o go-template --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}')
curl http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME/proxy/
For the sake of completeness I can run curl http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME/ and I get valid output.
The steps in this tutorial module assume an environment where you are working on one of the cluster nodes,
and the command checks connectivity to the service locally from that node.
In your case, however, since you are running Kubernetes in a Docker-based (kind) cluster, the curl command is most likely run from the host that serves the Docker containers in which Kubernetes runs.
It might be possible to use docker exec to get inside the kind node and run the curl command from there.
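For example (a sketch; kind-control-plane is the default node container name for a single-node kind cluster, and the pod IP comes from your error message):
# list the kind node containers
docker ps
# get a shell inside the node and retry the check from there
docker exec -it kind-control-plane bash
# inside the node, try the pod IP directly (the bootcamp app listens on 8080)
curl http://10.244.0.5:8080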
Hope this helps.
I'm also following the tutorial using kind, and I got it to work by forwarding the port:
kubectl port-forward $POD_NAME 8001:8001
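If that mapping doesn't respond, the bootcamp container itself listens on 8080, so forwarding that port directly is another option (a sketch):
kubectl port-forward $POD_NAME 8080:8080
# in a second terminal
curl http://localhost:8080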
Try adding :8080 after $POD_NAME:
curl http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME:8080/proxy/
I'm trying to deploy Kubernetes with Calico (IPIP) using kubeadm. After the deployment is done, I'm deploying Calico using these manifests:
kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
Before applying them, I'm editing CALICO_IPV4POOL_CIDR and setting it to 10.250.0.0/17, as well as running kubeadm init --pod-cidr 10.250.0.0/17.
After a few seconds, the CoreDNS pods (for example, one getting address 10.250.2.2) start restarting with the error 10.250.2.2:8080: connection refused.
Now a bit of digging:
from any node in the cluster, ping 10.250.2.2 works and reaches the pod (tcpdump in the pod's network namespace shows it)
from a different pod (on a different node), curl 10.250.2.2:8080 works fine
from any node, curl 10.250.2.2:8080 fails with connection refused
Since it's a CoreDNS pod, it listens on port 53 over both UDP and TCP, so I've tried netcat from the nodes:
nc 10.250.2.2 53 - connection refused
nc -u 10.250.2.2 53 - works
Now I've run tcpdump on each interface on the source node for port 8080, and the curl to the CoreDNS pod doesn't even seem to leave the node... so, iptables?
I've also tried Weave, Canal, and Flannel; all seem to have the same issue.
I've run out of ideas by now... any pointers, please?
This seems to be a problem with the Calico implementation; CoreDNS Pods are sensitive to the CNI network Pods functioning successfully.
For a proper CNI network plugin installation, you have to pass the --pod-network-cidr flag to the kubeadm init command and afterwards apply the same value to the CALICO_IPV4POOL_CIDR parameter inside calico.yaml.
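Kept in sync, the two settings from the question would look like this (a sketch using the asker's 10.250.0.0/17):
# at cluster creation time
kubeadm init --pod-network-cidr=10.250.0.0/17
# and in calico.yaml, before applying it:
#   - name: CALICO_IPV4POOL_CIDR
#     value: "10.250.0.0/17"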
Moreover, for a successful Pod network installation, you have to apply some RBAC rules to grant sufficient permissions in line with general cluster security restrictions, as described in the official Kubernetes documentation:
For Calico to work correctly, you need to pass --pod-network-cidr=192.168.0.0/16 to kubeadm init or update the calico.yml file to match your Pod network. Note that Calico works on amd64 only.
kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
In your case, I would switch to a newer Calico version, at least beyond the v3.3 used in the example.
If you find that the Pod network plugin installation ran properly, please update the question with your current environment setup and the versions and health statuses of your Kubernetes components.
I have kubeDNS set up on a bare-metal Kubernetes cluster. I thought that would allow me to access services as described here (http:// for those who don't want to follow the link), but when I run
curl https://monitoring-influxdb:8083
I get the error
curl: (6) Could not resolve host: monitoring-influxdb
This is true when I run curl on a service name in any namespace. Is this an error with my kubeDNS setup, or are there different steps I need to take to achieve this? I get the expected output when I run the test at the end of this article.
For reference:
kubeDNS controller yaml files
kubeDNS service yaml file
kubelet flags
output of kubectl get svc in default and kube-system namespaces
The service discovery that you're trying to use is documented at https://kubernetes.io/docs/concepts/services-networking/dns-pod-service, and is for communication within the cluster, from one pod talking to an existing service, not from the nodes (or the master) to Kubernetes services.
You will want to use the DNS name for the service, in the form <servicename>.<namespace> or <servicename>.<namespace>.svc.cluster.local. To see this in operation, spin up an interactive alpine pod (or use an existing pod of your own) with something like:
kubectl run -i --tty alpine-interactive --image=alpine --restart=Never
and within the shell it provides, run an nslookup command. From your example, I'm guessing you're trying to access InfluxDB from https://github.com/kubernetes/heapster/tree/master/deploy/kube-config/influxdb; in that case it will be installed in the kube-system namespace, and the service name you'd use from another Pod inside the cluster would be:
monitoring-influxdb.kube-system.svc.cluster.local
For example:
kubectl run -i --tty alpine --image=alpine --restart=Never
If you don't see a command prompt, try pressing enter.
/ # nslookup monitoring-influxdb.kube-system.svc.cluster.local
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: monitoring-influxdb.kube-system.svc.cluster.local
Address 1: 10.102.27.233 monitoring-influxdb.kube-system.svc.cluster.local
As @Michael Hausenblas pointed out in the comments, curl http://monitoring-influxdb:8086 needs to be run from within a pod. Doing that produced the expected results.
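For a quick one-shot check from inside the cluster, something like this also works (a sketch; curlimages/curl is just a convenient image with curl preinstalled, and /ping is InfluxDB's health endpoint):
kubectl run -i --rm curl-test --image=curlimages/curl --restart=Never --command -- \
  curl -sv http://monitoring-influxdb.kube-system:8086/ping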