kubectl get nodes returns "the server doesn't have a resource type "nodes"" - kubernetes

I installed Kubernetes and ran kubeadm init, and kubeadm join from the worker too. But when I run kubectl get nodes it gives the following response:
the server doesn't have a resource type "nodes"
What might be the problem here? I could not see anything in /var/log/messages.
Any hints here?

In my case, I wanted to see the description of my pods.
When I used kubectl describe postgres-deployment-866647ff76-72kwf, it said: error: the server doesn't have a resource type "postgres-deployment-866647ff76-72kwf".
I corrected it by adding pod before the pod name, as follows:
kubectl describe pod postgres-deployment-866647ff76-72kwf
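In other words, kubectl describe (and get, logs, delete, and so on) always needs a resource type. The type/name slash form works too, and so does describing the owning Deployment (assumed here to be named postgres-deployment):
# The slash form is equivalent
kubectl describe pod/postgres-deployment-866647ff76-72kwf
# Other kinds work the same way, e.g. the (assumed) owning Deployment
kubectl describe deployment postgres-deployment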

It looks to me like the authentication credentials were not set correctly. Did you copy the kubeconfig file /etc/kubernetes/admin.conf to ~/.kube/config? If you used kubeadm, the API server should be configured to run on port 6443, not 8080. Could you also check that the KUBECONFIG variable is not set?
It would also help to increase the verbosity level using the flag --v=99. Moreover, are you accessing the cluster from the same machine where the Kubernetes master components are installed, or from outside?
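For reference, the usual post-kubeadm-init steps for a regular user look like this (these are the commands kubeadm itself prints after init):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config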

I got this message when I was trying to play around with Docker Desktop. I had previously been doing a few experiments with Google Cloud and had run some kubectl commands for that. The result was that my ~/.kube/config file still had stale config related to a now non-existent GCP cluster, and my default k8s context was set to that.
Try the following:
# Find what current contexts you have
kubectl config view
I get:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://kubernetes.docker.internal:6443
  name: docker-desktop
contexts:
- context:
    cluster: docker-desktop
    user: docker-desktop
  name: docker-desktop
current-context: docker-desktop
kind: Config
preferences: {}
users:
- name: docker-desktop
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
So only one context now. If you have more than one context here, check that the one you expect is set as current-context. If not, change it with:
# Get rid of old contexts that you don't use
kubectl config delete-context some-old-context
# Selecting the context that I have auth for
kubectl config use-context docker-desktop
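After switching, it is worth confirming that the active context is the one you expect and that the cluster actually answers:
kubectl config current-context
kubectl get nodes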

Related

Kubectl commands cannot be executed from another VM

I'm having an issue when executing "kubectl" commands. My cluster consists of one master and one worker node. The kubectl commands can be executed from the master server without any issue. But I also have another VM which I use as a jump server to log in to the master and worker nodes. I need to execute the kubectl commands from that jump server. I created the .kube directory and copied the kubeconfig file from the master node to the jump server, and I also set the context correctly. But the kubectl commands hang when executed from the jump server and give a timeout error.
Below is the information.
kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://10.240.0.30:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
ubuntu@ansible:~$ kubectl config use-context kubernetes-admin@kubernetes
Switched to context "kubernetes-admin@kubernetes".
ubuntu@ansible:~$ kubectl get pods
Unable to connect to the server: dial tcp 10.240.0.30:6443: i/o timeout
ubuntu@ansible:~$ kubectl config current-context
kubernetes-admin@kubernetes
Everything seems to be OK to me, and I am wondering why the kubectl commands hang when executing from the jump server.
I troubleshot the issue by verifying whether the jump VM can telnet to the Kubernetes master node by executing the below.
telnet <ip-address-of-the-kubernetes-master-node> 6443
Since the error was "Connection Timed Out", I had to add a firewall rule to the Kubernetes master node, as below. Note: in my case I'm using GCP.
gcloud compute firewall-rules create allow-kubernetes-apiserver \
  --allow tcp:22,tcp:6443,icmp \
  --network kubernetes \
  --source-ranges 0.0.0.0/0
Then I was able to telnet to the master node without any issue. If you still can't connect to the master node, change the internal IP in the kubeconfig file under the .kube directory to the public IP address of the master node.
Then switch to the context using the command below.
kubectl config use-context <context-name>
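If editing the kubeconfig by hand feels error-prone, the same change can be made with kubectl itself; the public IP below is a placeholder:
# Point the existing cluster entry at the master's public IP (placeholder address)
kubectl config set-cluster kubernetes --server=https://<master-public-ip>:6443
# Keep using the kubeadm-generated context
kubectl config use-context kubernetes-admin@kubernetes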

Kubectl config within AKS

I have a cluster in Azure (AKS), in this cluster, I have 2 pools: a System one and a User one to run apps.
When executing the kubectl get pod command inside a pod, I get the error:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
The pod is running on the system node (it doesn't work on the user node either) in one of my own namespaces (let's call it cron).
But when running the same command in a pod belonging to the kube-system namespace on the system node, it works fine.
It looks to be linked to the kubectl configuration (kubeconfig), but I don't get how it works in the kube-system namespace and not in the cron one.
What did I miss in AKS that prevents running kubectl commands in a pod that doesn't belong to the kube-system namespace?
Edit1:
Environment variables are different, especially those linked to Kubernetes.
In the pod running under kube-system I get:
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_SERVICE_PORT=443
KUBE_DNS_PORT=udp://10.0.0.10:53
KUBE_DNS_PORT_53_TCP_ADDR=10.0.0.10
KUBERNETES_DASHBOARD_PORT_443_TCP_PORT=443
KUBERNETES_DASHBOARD_SERVICE_PORT=443
KUBE_DNS_SERVICE_HOST=10.0.0.10
KUBERNETES_PORT_443_TCP=tcp://*****.hcp.japaneast.azmk8s.io:443
KUBE_DNS_PORT_53_TCP_PORT=53
KUBE_DNS_PORT_53_UDP=udp://10.0.0.10:53
KUBE_DNS_PORT_53_UDP_PROTO=udp
KUBE_DNS_SERVICE_PORT_DNS=53
KUBE_DNS_PORT_53_TCP_PROTO=tcp
KUBE_DNS_PORT_53_UDP_ADDR=10.0.0.10
KUBERNETES_DASHBOARD_PORT_443_TCP_ADDR=10.0.207.97
KUBE_DNS_SERVICE_PORT_DNS_TCP=53
KUBERNETES_DASHBOARD_PORT_443_TCP=tcp://10.0.207.97:443
KUBERNETES_DASHBOARD_PORT=tcp://10.0.207.97:443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBE_DNS_PORT_53_UDP_PORT=53
KUBERNETES_PORT_443_TCP_ADDR=****.hcp.japaneast.azmk8s.io
KUBERNETES_SERVICE_HOST=*****.hcp.japaneast.azmk8s.io
KUBERNETES_PORT=tcp://*****.hcp.japaneast.azmk8s.io:443
KUBERNETES_PORT_443_TCP_PORT=443
KUBE_DNS_PORT_53_TCP=tcp://10.0.0.10:53
KUBERNETES_DASHBOARD_PORT_443_TCP_PROTO=tcp
KUBE_DNS_SERVICE_PORT=53
KUBERNETES_DASHBOARD_SERVICE_HOST=10.0.207.97
In my own namespace (cron):
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT_443_TCP=tcp://10.0.0.1:443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_ADDR=10.0.0.1
KUBERNETES_SERVICE_HOST=10.0.0.1
KUBERNETES_PORT=tcp://10.0.0.1:443
KUBERNETES_PORT_443_TCP_PORT=443
About namespaces: I labeled my own namespace with the same labels as kube-system, but no luck.
About the config: kubectl config view comes back empty in both pods (when run inside the pod):
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
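For what it's worth, kubectl running inside a pod with an empty kubeconfig can only reach the API server if it falls back to the in-cluster service account credentials. A minimal sketch of doing that explicitly, assuming the default token mount is present and the service account has RBAC permission to list pods:
# Run inside the pod; uses the mounted service account instead of a kubeconfig
APISERVER=https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT_HTTPS
SA_DIR=/var/run/secrets/kubernetes.io/serviceaccount
kubectl get pods \
  --server="$APISERVER" \
  --token="$(cat $SA_DIR/token)" \
  --certificate-authority="$SA_DIR/ca.crt"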

Kubernetes RBAC default user

I'm currently reading up on RBAC and am using Docker for Desktop with its local Kubernetes cluster enabled.
If I run kubectl auth can-i get pods, which user, group, or service account is used by default?
Is it the same call as:
kubectl auth can-i get pods --as docker-for-desktop --as-group system:serviceaccounts ?
kubectl config view shows:
contexts:
- context:
    cluster: docker-for-desktop-cluster
    namespace: default
    user: docker-for-desktop
  name: docker-for-desktop
...
users:
- name: docker-for-desktop
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
But simply calling kubectl auth can-i get pods --as docker-for-desktop returns NO.
Thanks,
Kim
To answer your question:
If I run kubectl auth can-i get pods, which user, group, or service account is used by default?
As you can read on Configure Service Accounts for Pods:
When you (a human) access the cluster (for example, using kubectl), you are authenticated by the apiserver as a particular User Account (currently this is usually admin, unless your cluster administrator has customized your cluster). Processes in containers inside pods can also contact the apiserver. When they do, they are authenticated as a particular Service Account (for example, default).
You can use kubectl get serviceaccount to see what service accounts are set up in the cluster.
Try checking which contexts you have available and switching to whichever one you need:
kubectl config get-contexts
kubectl config use-context docker-for-desktop
If you are experiencing an issue with a missing Role, please check Referring to Resources to set them up correctly for docker-for-desktop.
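If the impersonated user simply has no RBAC bindings yet, a minimal sketch of granting it read access to pods (names here are examples) looks like:
# Create a Role that can read pods in the default namespace
kubectl create role pod-reader --verb=get,list,watch --resource=pods
# Bind it to the impersonated user
kubectl create rolebinding pod-reader-binding --role=pod-reader --user=docker-for-desktop
# Re-check; this should now return yes
kubectl auth can-i get pods --as docker-for-desktop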

Error while running kubectl commands

I have recently installed minikube and kubectl. However, when I run kubectl get pods or any other kubectl command, I get the error
Unable to connect to the server: unexpected EOF
Does anyone know how to fix this? I am using Ubuntu Server 16.04. Thanks in advance.
The following steps can be used for further debugging.
Check the minikube local cluster status using the minikube status command.
$: minikube status
minikube: Running
cluster: Running
kubectl: Correctly Configured: pointing to minikube-vm at 172.0.x.y
If there is a problem with the kubectl configuration, then configure it using the kubectl config use-context minikube command.
$: kubectl config use-context minikube
Switched to context "minikube".
Check the cluster status using the kubectl cluster-info command.
$: kubectl cluster-info
Kubernetes master is running at ...
Heapster is running at ...
KubeDNS is running at ...
...
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Note: it can even be due to a very simple reason: internet speed (it happened to me just now).
I had the same problem too. I solved it after changing the server address to localhost:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /var/lib/minikube/certs/ca.crt
    server: https://localhost:8443 # check it
  name: m01
...
users:
- name: m01
  user:
    client-certificate: /var/lib/minikube/certs/apiserver.crt
    client-key: /var/lib/minikube/certs/apiserver.key
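If hand-editing the file feels fragile, minikube can usually regenerate the correct endpoint itself; a small sketch, assuming a reasonably recent minikube:
# Re-point the kubeconfig entry at the current minikube API server address
minikube update-context
# Then verify connectivity
kubectl cluster-info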
I think your Kubernetes master is not set up properly. You can check that by verifying that the following services on the master node are active and running.
etcd2.service
kube-apiserver.service Kubernetes API Server
kube-controller-manager.service Kubernetes Controller Manager
kube-scheduler.service Kubernetes Scheduler
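If these components run as systemd services, as the unit names above suggest, their state and recent logs can be checked with something like:
# Check that the API server unit is active (unit names can differ per install)
systemctl status kube-apiserver.service
# Look at its recent log output
journalctl -u kube-apiserver.service --since "10 minutes ago"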

How to get Kubernetes cluster name from K8s API

As stated in the title, is it possible to find out a K8s cluster name from the API? I looked around the API and could not find it.
kubectl config current-context does the trick (it outputs a little bit more, like the project name, region, etc., but it should give you the answer you need).
Unfortunately a cluster doesn't know its own name, or anything else that would uniquely identify it (K8s issue #44954). I wanted to know for helm issue #2055.
Update:
A common workaround is to create a ConfigMap containing the cluster name and read that when required (#2055 comment 1244537799).
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-info
  namespace: kube-system
data:
  cluster-name: foo
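The value can then be read back wherever the cluster name is needed, for example:
kubectl -n kube-system get configmap cluster-info -o jsonpath='{.data.cluster-name}'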
There is no way to get the name via the K8s API. But here is a one-liner in case the name you have in your .kube/config file is enough for you (if you downloaded it from your cloud provider, the names should match):
kubectl config view --minify -o jsonpath='{.clusters[].name}'
Note 1: The --minify flag is key here so that it outputs the name of your current context only. There are other similar answers posted here, but without --minify you will be listing the other contexts in your config, which might confuse you.
Note 2: The name in your .kube/config might not reflect the name in your cloud provider. If the file was autogenerated by the cloud provider the names should match; if you configured it manually you could have typed any name just for the local config.
Note 3: Do not rely on kubectl config current-context; this returns just the name of the context, not the name of the cluster.
I don't believe there is a k8s cluster name. This command could provide some nice information:
kubectl cluster-info
The question is not really well described. However, if this question is related to Google Container Engine then, as coreypobrien mentioned, the name of the cluster is stored in the custom metadata of the nodes. From inside a node, run the following command and the output will be the name of the cluster:
curl http://metadata/computeMetadata/v1/instance/attributes/cluster-name -H "Metadata-Flavor: Google"
If you specify your use case, I might be able to extend my answer to cover it.
The Kubernetes API doesn't know much about the GKE cluster name, but you can easily get the cluster name from the Google metadata server like this:
kubectl run curl --rm --restart=Never -it --image=appropriate/curl -- -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/attributes/cluster-name
It is the same as getting the current config, but the command below gives clearer output:
kubectl config view
This command will check all possible clusters; as you know, KUBECONFIG may have multiple contexts:
kubectl config view -o jsonpath='{"Cluster name\tServer\n"}{range .clusters[*]}{.name}{"\t"}{.cluster.server}{"\n"}{end}'
And you will get output like
Cluster name Server
kubernetes https://localhost:6443
At least for kubespray clusters, the following works for me:
kubectl config current-context | cut -d '@' -f2
For clusters that were installed using kubeadm, the configuration stored in the kubeadm-config configmap has the cluster name used when installing the cluster.
$ kubectl -n kube-system get configmap kubeadm-config -o yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubeadm-config
  namespace: kube-system
data:
  ClusterConfiguration: |
    clusterName: NAME_OF_CLUSTER
For clusters that are using CoreDNS for their DNS, the "cluster name" from kubeadm is also used as the domain suffix.
$ kubectl -n kube-system get configmap coredns -o yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        kubernetes NAME_OF_CLUSTER.local in-addr.arpa ip6.arpa {
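A one-liner to pull just the name out of the kubeadm ConfigMap shown above (kubeadm-managed clusters only):
kubectl -n kube-system get configmap kubeadm-config \
  -o jsonpath='{.data.ClusterConfiguration}' | grep clusterName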
Well, this returns precisely one thing: a cluster name.
K8s:
kubectl config view -o jsonpath='{.clusters[].name}{"\n"}'
Openshift:
oc config view -o jsonpath='{.clusters[].name}{"\n"}'
$ kubectl config get-clusters --> gets you the list of existing clusters
Using the Python k8s client. But this won't work with an in-cluster kubeconfig.
from kubernetes import config
cluster_context = config.kube_config.list_kube_config_contexts()
print (cluster_context)
([{'context': {'cluster': 'k01.test.use1.aws.platform.gov', 'user': 'k01-test'}, 'name': 'k01.test.use1.aws.platform.gov'}], {'context': {'cluster': 'k01.test.use1.aws.platform.gov', 'user': 'k01-test'}, 'name': 'k01.test.use1.aws.platform.gov'})
cluster_name = cluster_context[1]['context']['cluster']
print (cluster_name)
k01.test.use1.aws.platform.gov
Using kubectl command:
$ kubectl config get-clusters
NAME
kubernetes
kubectl config get-clusters
kubectl config get-contexts
There is a great tool called kubectx https://github.com/ahmetb/kubectx.
kubectx - lists all previously added clusters and highlights the currently used one. This is only one word to type instead of kubectl config current-context.
kubectx <cluster> - switches to a chosen cluster.
Moreover, this tool also comes with kubens, which does exactly the same for namespaces:
kubens - lists all namespaces and shows the current one,
kubens <namespace> - switches to a chosen namespace.