I cannot load the node information on Kubernetes

When I run the command below, I get the following message:
bistel@BISTelResearchDev-DN03:~$ kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?
On the master node, I get the information below:
bistel@BISTelResearchDev-NN:/etc/kubernetes$ kubectl get nodes
NAME                     STATUS     ROLES    AGE   VERSION
bistelresearchdev-dn03   NotReady   <none>   62s   v1.19.3
bistelresearchdev-nn     Ready      master   57m   v1.19.3
bistel@BISTelResearchDev-NN:/etc/kubernetes$
bistelresearchdev-dn03 is the worker node, and the same message appears whenever I run any kubectl command on it: The connection to the server localhost:8080 was refused - did you specify the right host or port?
I have googled a lot, but none of the suggested fixes worked for me.
Thanks,

Out of the box, kubectl is only configured on the master node of the cluster, so getting this error on the worker node is expected and is not an issue by itself.
The real issue here is that the node is in NotReady status. To troubleshoot that, check the following (example commands below):
Check that kubelet is running on node bistelresearchdev-dn03 with systemctl status kubelet.
Check that a network (CNI) plugin is installed on your cluster.
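A minimal set of checks, assuming a kubeadm-style cluster where the CNI plugin runs as pods in kube-system (pod names vary by plugin):
# on the worker node: is kubelet healthy?
systemctl status kubelet
journalctl -u kubelet --no-pager | tail -n 50
# on the master node: are the system/CNI pods running, and what does the node report?
kubectl get pods -n kube-system -o wide
kubectl describe node bistelresearchdev-dn03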

The first machine you ran the command on is missing the kubeconfig file.
Normally kubectl expects to find it at
~/.kube/config
If you copy the one from the master node onto your machine, kubectl will see it and be able to use it.
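A minimal sketch of doing that, assuming a kubeadm setup where the admin kubeconfig sits at /etc/kubernetes/admin.conf on the master (the path and hostname are assumptions):
# run on the worker node
mkdir -p $HOME/.kube
scp bistel@BISTelResearchDev-NN:/etc/kubernetes/admin.conf $HOME/.kube/config
kubectl get nodes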

Related

Kubernetes - Failed to apply a YAML from a raw URL, Unable to connect to the server: dial tcp: lookup raw.githubusercontent.com on: server misbehaving

I'm Anddiy, and I'm working with a Kubernetes cluster deployed by Rancher.
It's important to say that my machines don't have direct access to the internet; I'm using a proxy for downloads and the like, so I configured RKE2 with this proxy during the installation steps.
I have a machine running RKE2 that hosts my Rancher, and from the Rancher UI I created my Kubernetes cluster, shown here:
[15:19] root@vmrmmstnodehom01 [~]:# kubectl get nodes
NAME               STATUS   ROLES                      AGE     VERSION
vmrmmstnodehom01   Ready    controlplane               5d16h   v1.24.8
vmrmwrknodehom01   Ready    controlplane,etcd,worker   5d20h   v1.24.8
vmrmwrknodehom02   Ready    worker                     5d19h   v1.24.8
vmrmwrknodehom03   Ready    worker                     5d19h   v1.24.8
vmrmwrknodehom04   Ready    worker                     5d19h   v1.24.8
My cluster is a clean cluster, with no applications installed on it at the moment.
I tried to install the Longhorn application using this command (taken from the Longhorn documentation):
kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v1.4.0/deploy/longhorn.yaml
But when I tried that, this error message was displayed:
Unable to connect to the server: dial tcp: lookup raw.githubusercontent.com on 10.129.251.125:53: server misbehaving
I checked whether it is my proxy that cannot connect to this URL, but my machine connected to it successfully; I tested that with curl -v against the Longhorn URL.
I don't know whether the Kubernetes API has picked up the proxy configuration from my RKE2/Rancher setup, so I don't know if I need to set the proxy manually somewhere internally; I really don't know what is happening here.
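One hedged workaround, assuming the proxy is exported as https_proxy in your shell: curl honors that variable, while the DNS lookup kubectl performs for a remote manifest URL does not go through the proxy. Downloading the manifest first and applying the local file sidesteps the lookup entirely:
# fetch the manifest through the proxy, then apply the local copy
curl -fL -o longhorn.yaml https://raw.githubusercontent.com/longhorn/longhorn/v1.4.0/deploy/longhorn.yaml
kubectl apply -f longhorn.yaml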

Execute a command on a Kubernetes node from the master

I would like to execute a command on a node from the master. For example, let's say I have a worker node: kubenode01.
Now a pod (pod-test) is running on this node. Using "kubectl get pods --output=wide" on the master shows that the pod is running on this node.
Trying to execute a command on that pod from the master results in an error, e.g.:
kubectl exec -ti pod-test -- cat /etc/resolv.conf
The result is:
Error from server: error dialing backend: dial tcp 10.0.22.131:10250: i/o timeout
Any idea?
Thanks in advance
You can execute kubectl commands from anywhere as long as your kubeconfig points to the right cluster URL (kube-apiserver), carries the right credentials, and the firewall allows connecting to the kube-apiserver port.
In your case, 10.0.22.131:10250 is the kubelet port on the node: kubectl exec works by having the kube-apiserver dial the kubelet on the target node, so check that the API server can actually reach that IP and port (and that no firewall blocks 10250).
Note that kubectl exec -ti pod-test -- cat /etc/resolv.conf runs inside the Pod and not on the Node. If you'd like to run a command on the Node itself, simply use SSH.
Update:
There are a couple of other alternatives here:
You can create a pod (or debug pod) with a nodeSelector that makes it run on that specific node.
If you are trying to debug something in a pod already running on a specific node, you can also try creating an ephemeral debug container.
On newer versions of Kubernetes you can use a debug pod to run something on a specific node (see the sketch below).
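A minimal sketch of that last option, assuming a recent kubectl and the node name kubenode01 from the question:
# start an interactive debug pod pinned to the node; the node's filesystem is mounted at /host
kubectl debug node/kubenode01 -it --image=busybox
chroot /host cat /etc/resolv.conf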
✌️

CoreDNS only works on one host in the Kubernetes cluster

I have a Kubernetes cluster with 3 nodes:
[root@ops001 ~]# kubectl get nodes
NAME            STATUS   ROLES    AGE    VERSION
azshara-k8s01   Ready    <none>   143d   v1.15.2
azshara-k8s02   Ready    <none>   143d   v1.15.2
azshara-k8s03   Ready    <none>   143d   v1.15.2
After I deployed some pods, I found that only one node, azshara-k8s03, could resolve DNS; the other two nodes could not. This is /etc/resolv.conf on the azshara-k8s03 host node:
options timeout:2 attempts:3 rotate single-request-reopen
; generated by /usr/sbin/dhclient-script
nameserver 100.100.2.136
nameserver 100.100.2.138
This is /etc/resolv.conf on the other 2 nodes:
nameserver 114.114.114.114
Should I keep them the same? What should I do to make DNS work correctly on all 3 nodes?
Did you check whether 114.114.114.114 is actually reachable from your nodes? If not, change it to something that is ;-]
Also check which resolv.conf your kubelets actually use: it is often something other than /etc/resolv.conf. Run ps ax | grep kubelet, check the value of the --resolv-conf flag, and see whether the DNS servers in that file work correctly.
Update:
What names are failing to resolve on the 2 problematic nodes? Are these public names or internal only? If they are internal only, then 114.114.114.114 will not know about them. 100.100.2.136 and 100.100.2.138 are not reachable for me: are they your internal DNS servers? If so, try changing /etc/resolv.conf on the 2 nodes that don't work so it matches the one that works.
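A hedged way to run those checks, assuming the kubelet was started with a --resolv-conf flag and using one of the nameservers from the question as an example:
# which resolv.conf does the kubelet hand to pods?
ps ax | grep kubelet | tr ' ' '\n' | grep -- --resolv-conf
# test a nameserver from that file directly
nslookup kubernetes.io 100.100.2.136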
First step: your CoreDNS is listening on the port you specified. You can log in to a pod and use the telnet command to make sure the exposed DNS port is reachable (here I am using Alpine; on CentOS use yum, on Ubuntu or Debian use apt-get):
apk add busybox-extras
telnet <your coredns server ip> <your coredns listening port>
Second step: log in to pods on each host machine and make sure the port is reachable from each of them. If the telnet connection fails, you should fix your cluster network first.
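As a quick end-to-end check you can also resolve a well-known in-cluster name from a throwaway pod; this is a sketch, assuming the default kube-dns service name and a busybox image whose nslookup works:
# find the CoreDNS ClusterIP and port
kubectl get svc -n kube-system kube-dns
# run a one-off pod and test resolution
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- nslookup kubernetes.default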

Unable to get the Kubernetes dashboard

I've installed a new cluster (version 1.13.5 of kubectl, kubelet, and kubeadm), then I installed Flannel and added a worker node.
Now I'm trying to add the Kubernetes dashboard to my cluster, but after I run
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
I have this situation:
kubernetes-dashboard-**** 0/1 CrashLoopBackOff 1 8s
Then if I get the log, I can see this:
Error while initializing connection to Kubernetes apiserver...
Where am I going wrong?
It seems that the problem was on the worker: when I scheduled the dashboard on the master, the pod started.
Maybe the Kubernetes dashboard has to be installed on the master, or there is something wrong with Flannel and the master-to-node communication.
Check whether the api-server pod is running and whether KubeDNS is working correctly.
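A couple of hedged commands for those checks, assuming a kubeadm-style cluster where the control-plane components and the dashboard run in kube-system:
kubectl get pods -n kube-system -o wide                  # kube-apiserver and coredns/kube-dns pods should be Running
kubectl -n kube-system logs <kubernetes-dashboard-pod>   # full error behind the CrashLoopBackOff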

Minikube got stuck when creating container

I recently started learning Kubernetes by using Minikube locally on my Mac. Previously, I was able to start a local Kubernetes cluster with Minikube 0.10.0, create a deployment, and view the Kubernetes dashboard.
Yesterday I tried to delete the cluster and re-did everything from scratch. However, I found I cannot get the assets deployed and cannot view the dashboard. From what I saw, everything seemed to get stuck during container creation.
After I ran minikube start, it reported
Starting local Kubernetes cluster...
Kubectl is now configured to use the cluster.
When I ran kubectl get pods --all-namespaces, it reported (pay attention to the STATUS column):
kubectl get pods --all-namespaces
NAMESPACE     NAME                          READY   STATUS              RESTARTS   AGE
kube-system   kube-addon-manager-minikube   0/1     ContainerCreating   0          51s
docker ps showed nothing:
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
minikube status tells me the VM and cluster are running:
minikubeVM: Running
localkube: Running
When I tried to create a deployment and an autoscaler, I was told they were created successfully:
kubectl create -f configs
deployment "hello-minikube" created
horizontalpodautoscaler "hello-minikube-autoscaler" created
$ kubectl get pods --all-namespaces
NAMESPACE     NAME                             READY   STATUS              RESTARTS   AGE
default       hello-minikube-661011369-1pgey   0/1     ContainerCreating   0          1m
default       hello-minikube-661011369-91iyw   0/1     ContainerCreating   0          1m
kube-system   kube-addon-manager-minikube      0/1     ContainerCreating   0          21m
When exposing the service, it said:
$ kubectl expose deployment hello-minikube --type=NodePort
service "hello-minikube" exposed
$ kubectl get service
NAME             CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
hello-minikube   10.0.0.32    <nodes>       8080/TCP   6s
kubernetes       10.0.0.1     <none>        443/TCP    22m
When I tried to access the service, I was told:
curl $(minikube service hello-minikube --url)
Waiting, endpoint for service is not ready yet...
docker ps still showed nothing. It looked to me like everything got stuck when creating containers. I tried some other ways to work around this issue:
Upgraded to minikube 0.11.0
Used the xhyve driver instead of the VirtualBox driver
Deleted everything cached, like ~/.minikube, ~/.kube, and the cluster, and re-tried
None of them worked for me.
Kubernetes is still new to me and I would like to know:
How can I troubleshoot this kind of issue?
What could be the cause of this issue?
Any help is appreciated. Thanks.
It turned out to be a network problem in my case.
The pod status was "ContainerCreating", and I found that during container creation the Docker image is pulled from gcr.io, which is inaccessible in China (blocked by the GFW). The previous time it worked because I happened to be connected to it via a VPN.
I haven't tried Minikube, but I use Kubernetes. With the information provided it is difficult to say the cause of the issue. Your Minikube has no problem creating resources, but ContainerCreating points to a problem with the Docker daemon, improper communication between the kube-apiserver and the Docker daemon, or some problem with the kubelet.
You can try the following command:
kubectl describe po POD_NAME
This will give you the pod's events. Maybe this will provide a path to the root cause of the issue.
You may also check the kubelet logs for events.
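A minimal sketch of both checks, using the addon-manager pod from the output above and assuming a systemd-managed kubelet (on Minikube you may need to run minikube ssh first):
kubectl describe po kube-addon-manager-minikube -n kube-system   # read the Events section at the bottom
journalctl -u kubelet --no-pager | tail -n 50                    # recent kubelet events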
I had this problem on Windows, but it was related to an NTLM proxy. I deleted the minikube VM then recreated it with the correct proxy settings for my CNTLM installation:
minikube start \
--docker-env http_proxy=http://10.0.2.2:3128 \
--docker-env https_proxy=http://10.0.2.2:3128 \
--docker-env no_proxy=localhost,127.0.0.1,::1,192.168.99.100
See https://blog.alexellis.io/minikube-behind-proxy/
The horizontalpodautoscaler (HPA) requires Heapster to work, so you'll need to run Heapster in Minikube. You can always debug these kinds of issues with minikube logs or interactively through the dashboard opened by minikube dashboard.
You can find the steps to run Heapster and Grafana at https://github.com/kubernetes/heapster
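On Minikube specifically, Heapster was also available as an addon in versions of that era; this is a sketch assuming your Minikube still ships it:
minikube addons list               # check whether a heapster addon is available
minikube addons enable heapster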
For me, it takes several minutes before I see the ContainerCreating problem. After executing the following command:
systemctl status kube-controller-manager.service
I get this error:
Sync "default/redis-master-2229813293" failed with unable to create pods: No API token found for service account "default", retry after the token is automatically created and added to the service account.
There are two ways to solve this:
Set up the service account with a token.
Remove ServiceAccount from the KUBE_ADMISSION_CONTROL setting of the api-server (see the sketch below).
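A hedged sketch of that second option, assuming an installation where the api-server flags live in /etc/kubernetes/apiserver (the file path and the exact flag contents are assumptions):
# /etc/kubernetes/apiserver -- drop ServiceAccount from the admission plugins
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,LimitRanger,ResourceQuota"
# then restart the api-server so the change takes effect
systemctl restart kube-apiserver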