Kubectl commands not having the right permissions to deploy pods after certificate renewal - kubernetes

After certificate renewal in our Kubernetes cluster, we are getting a permission error while running some kubectl commands. For example, we get this error when running "kubectl get deployments" and "kubectl get pv":
"Error from server (Forbidden): pods "<>" is forbidden: User "system:node:<>" cannot create resource "pods/exec" in API group "" in the namespace "......"
However, we are able to run commands such as "kubectl get nodes" and "kubectl get pods" without any issues.
During the cert renewal process, we ran the command below and manually updated kubelet.conf in /etc/kubernetes and the config file in the /root/.kube directory.
sudo kubeadm alpha kubeconfig user --org system:nodes --client-name system:node:$(hostname) > kubelet.conf
Are there any other files where we need to update the new certificate details? Kindly help us with the remediation process/steps as soon as possible.
Our Prod Kubernetes Cluster Information:
Kubernetes Cluster Version - 1.13.x
Master Node - 1
Worker Nodes - 11
The below-mentioned certificates were recently renewed:
apiserver.crt
apiserver-kubelet-client.crt
front-proxy-client.crt
apiserver-etcd-client.crt
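(For reference, whether a given certificate file was actually renewed can be confirmed by printing its validity dates; the kubeadm default path below is an assumption and may differ on your cluster.)
sudo openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -dates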

Apparently, you are using the kubelet certificate instead of the kubectl (admin) certificate.
Try running this command and see if it works:
sudo kubectl get pv --kubeconfig /etc/kubernetes/admin.conf
If it does, do:
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get pv
kubectl get deploy
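To confirm which identity a given kubeconfig presents, you can decode its embedded client certificate and inspect the subject. A minimal sketch, assuming the certificate is embedded as client-certificate-data rather than referenced by file path:
# The admin kubeconfig typically shows O=system:masters; the kubelet one shows O=system:nodes.
sudo grep 'client-certificate-data' /etc/kubernetes/kubelet.conf | awk '{print $2}' | base64 -d | openssl x509 -noout -subject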


Kubernetes Deployment: Error: failed to create deployment:

Environment Details:
Kubernetes version: `v1.20.2`
Master Node: `Bare Metal/Host OS: CentOS 7`
Worker Node: `VM/Host OS: CentOS 7`
I have installed and configured the Kubernetes cluster: the master node on a bare-metal server and the worker node on a Windows Server 2012 Hyper-V VM. Both master and worker nodes run the same Kubernetes version (v1.20.2) and CentOS 7. I successfully joined the worker node to the master; below is the get nodes output.
$ kubectl get nodes
NAME               STATUS   ROLES                  AGE    VERSION
k8s-worker-node1   Ready    <none>                 2d2h   v1.20.2
master-node        Ready    control-plane,master   3d4h   v1.20.2
While creating a deployment on the worker node, I get the error message below. On the worker node, I issued the following command.
$ kubectl create deployment nginx-depl --image=nginx
The error message is:
error: failed to create deployment: Post "http://localhost:8080/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create": dial tcp: lookup localhost on 8.8.8.8:53: no such host
Please help me resolve this issue, as I am not able to understand what the problem is.
Maybe you have to run minikube start first. I'm learning, and between one class and another I forgot to run this command. I hope I have helped someone.
This worked for me.
It seems that you are issuing the kubectl create deployment command on the worker node. This won't work because kubectl communicates with the kube-apiserver for cluster operations. Since the apiserver does not run on the worker node, executing the command there raises an error.
Instead, execute the same kubectl command on the master node as a non-root user, after running the following additional commands:
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ kubectl create deployment nginx-depl --image=nginx
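If you do need to run kubectl from the worker node, the same principle applies: kubectl only needs a kubeconfig that points at the master's apiserver. A rough sketch, assuming you copy the admin kubeconfig over from the master (the hostname master-node is a placeholder):
$ mkdir -p $HOME/.kube
$ scp root@master-node:/etc/kubernetes/admin.conf $HOME/.kube/config
$ kubectl get nodes        # now talks to the apiserver address recorded in that kubeconfig
$ kubectl create deployment nginx-depl --image=nginx
Keep in mind this places full cluster-admin credentials on the worker, so only do it where that is acceptable.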

kubectl get cs prompts Error from server (Forbidden)

When I run kubectl get cs on CentOS 7, I get the error message below.
No resources found.
Error from server (Forbidden): componentstatuses is forbidden:
User "system:node:<server-name>" cannot list componentstatuses at the cluster scope
I can confirm the API server is running with kubectl cluster-info:
Kubernetes master is running at https://<server-IP>:6443
KubeDNS is running at https://<server-IP>:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Also, I have the following in ~/.bash_profile:
export http_proxy=http://<proxy-server-IP>:3128
export https_proxy=http://<proxy-server-IP>:3128
export no_proxy=$no_proxy,127.0.0.1,localhost,<server-IP>,<server-name>
export KUBECONFIG=/etc/kubernetes/kubelet.conf
Not only does kubectl get cs yield the error message; kubectl apply -f kubernetes-dashboard.yaml yields a similar one:
Error from server (Forbidden): error when retrieving current configuration of:
Resource: "/v1, Resource=secrets", GroupVersionKind: "/v1, Kind=Secret"
Name: "kubernetes-dashboard-certs", Namespace: "kube-system"
Object: &{map["kind":"Secret" "metadata":map["labels":map["k8s-app":"kubernetes-dashboard"] "name":"kubernetes-dashboard-certs" "namespace":"kube-system" "annotations":map["kubectl.kubernetes.io/last-applied-configuration":""]] "type":"Opaque" "apiVersion":"v1"]}
from server for: "https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml":
secrets "kubernetes-dashboard-certs" is forbidden:
User "system:node:<server-name>" cannot get secrets in the namespace "kube-system":
no path found to object
export KUBECONFIG=/etc/kubernetes/kubelet.conf
is completely incorrect; you are, as the error message is cheerfully trying to inform you, attempting to perform cluster operations as a Node, not as one of the Users or ServiceAccounts. RBAC is almost explicitly designed to stop you from doing exactly what you are currently doing. You would never want a Node to be able to read sensitive credentials or create arbitrary Pods at cluster scope.
If you want to be all cavalier about it, then ssh into a master Node and use the cluster-admin credentials usually found in /etc/kubernetes/admin.conf (or a similar file, depending on how your cluster was provisioned). If you don't already have a cluster-admin credential, then create an X.509 certificate that is signed by a CA that the apiserver trusts, with an Organization (O= in X.509 parlance) of cluster-admin, and then create yourself a ServiceAccount (or whatever) with a ClusterRoleBinding of cluster-admin and go from there.
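As a rough illustration of the second route, here is a minimal sketch of signing a client certificate with the cluster CA and binding it to cluster-admin. The user name my-admin, the group name, the validity period, and the kubeadm PKI paths are all assumptions; adapt them to your cluster.
# Generate a key and a CSR whose subject carries the user (CN) and group (O).
openssl genrsa -out my-admin.key 2048
openssl req -new -key my-admin.key -subj "/CN=my-admin/O=my-admin-group" -out my-admin.csr
# Sign it with the cluster CA that the apiserver trusts (kubeadm default paths assumed).
sudo openssl x509 -req -in my-admin.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -days 365 -out my-admin.crt
# Grant that identity cluster-admin via a ClusterRoleBinding.
kubectl create clusterrolebinding my-admin-binding --clusterrole=cluster-admin --user=my-admin
# Add the credential and a context to your kubeconfig.
kubectl config set-credentials my-admin --client-certificate=my-admin.crt --client-key=my-admin.key --embed-certs=true
kubectl config set-context my-admin@kubernetes --cluster=kubernetes --user=my-admin
Note that creating the ClusterRoleBinding itself requires working cluster-admin access (for example, admin.conf on the master), so you bootstrap the new credential from there.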
Try the below snippets
1) sudo su
2) kubectl get cs
After reinstalling CentOS 7 and following the steps below, I was able to bring up the master properly:
install docker-ce and add proxy
install kubeadm, kubectl, kubelet
disable firewalld and turn off swap
export no_proxy in .bash_profile
export no_proxy=$no_proxy,127.0.0.1,localhost,<master-server-name>,<master-server-ip>,10.96.0.0/12,10.244.0.0/16
kubeadm init
kubeadm init --apiserver-advertise-address=<master-server-ip> --pod-network-cidr=10.244.0.0/16
mkdir -p $HOME/.kube
\cp -f /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
test with kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
There is no need to manually install etcd or export KUBECONFIG.
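Beyond kubectl get cs, the usual read-only queries are a quick sanity check with the same copied admin config (output will of course vary by cluster):
kubectl get nodes
kubectl get pods --all-namespaces
kubectl cluster-info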

Cannot connect to the Kubernetes API on AWS VMs

I have deployed Kubernetes by following the Kubernetes official page.
I can see that Kubernetes is deployed, because at the end I got this:
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join 172.16.32.101:6443 --token ma1d4q.qemewtyhkjhe1u9f --discovery-token-ca-cert-hash sha256:408b1fdf7a5ea5f282741db91ebc5aa2823802056ea9da843b8ff52b1daff240
When I do kubectl get pods, it throws this error:
# kubectl get pods
The connection to the server 127.0.0.1:6553 was refused - did you specify the right host or port?
When I check cluster-info, it says the following:
kubectl cluster-info
Kubernetes master is running at https://127.0.0.1:6553
But when I look at the config, it shows the following:
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRFNE1EWXlNREEyTURJd04xb1hEVEk0TURZeE56QTJNREl3TjFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBT0ZXCkxWQkJoWmZCQms4bXJrV0w2MmFGd2U0cUYvZkRHekJidnE5TFpGN3M4UW1rdDJVUlo5YmtSdWxzSlBrUXV1U2IKKy93YWRoM054S0JTQUkrTDIrUXdyaDVLSy9lU0pvbjl5TXJlWnhmRFdPTno2Y3c4K2txdnh5akVsRUdvSEhPYQpjZHpuZnJHSXVZS3lwcm1GOEIybys0VW9ldytWVUsxRG5Ra3ZwSUZmZ1VjVWF4UjVMYTVzY2ZLNFpweTU2UE4wCjh1ZjdHSkhJZFhNdXlVZVpFT3Z3ay9uUTM3S1NlWHVhcUlsWlFqcHQvN0RrUmFZeGdTWlBqSHd5c0tQOHMzU20KZHJoeEtyS0RPYU1Wczd5a2xSYjhzQjZOWDB6UitrTzhRNGJOUytOYVBwbXFhb3hac1lGTmhCeGJUM3BvUXhkQwpldmQyTmVlYndSWGJPV3hSVzNjQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFDTFBlT0s5MUdsdFJJTjdmMWZsNmlINTg0c1UKUWhBdW1xRTJkUHFNT0ZWWkxjbDVJZlhIQ1dGeE13OEQwOG1NNTZGUTNEVDd4bi9lNk5aK1l2M1JrK2hBdmdaQgpaQk9XT3k4UFJDOVQ2S1NrYjdGTDRSWDBEamdSeE5WTFIvUHd1TUczK3V2ZFhjeXhTYVJJRUtrLzYxZjJsTGZmCjNhcTdrWkV3a05pWXMwOVh0YVZGQ21UaTd0M0xrc1NsbDZXM0NTdVlEYlRQSzJBWjUzUVhhMmxYUlZVZkhCMFEKMHVOQWE3UUtscE9GdTF2UDBDRU1GMzc4MklDa1kzMDBHZlFEWFhiODA5MXhmcytxUjFQbEhJSHZKOGRqV29jNApvdTJ1b2dHc2tGTDhGdXVFTTRYRjhSV0grZXpJRkRjV1JsZnJmVHErZ2s2aUs4dGpSTUNVc2lFNEI5QT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
server: https://172.16.32.101:6443
Even telnet shows that there is a process listening on 6443 but not on 6553.
How can I change the port, and how can I fix the issue?
Any help would be of great use.
Thanks in advance.
It looks like your latest kubectl config interferes with the previous clusters' configurations.
It is possible to have settings for several different clusters in one .kube/config or in separate files.
But in some cases, you may want to manage only the cluster you've just created.
Note: After tearing down the existing cluster using kubeadm reset and initializing a fresh cluster using kubeadm init, new certificates will be generated. To operate the new cluster, you have to update the kubectl configuration or replace it with the new one.
To clean up old kubectl configurations and apply the last one, run the following commands:
rm -rf $HOME/.kube
unset KUBECONFIG
# Check if you have KUBECONFIG configured in profile dot files and comment or remove it.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
This gives you an up-to-date configuration for the last cluster you've created using the kubeadm tool.
Note: You should copy the kubectl configuration for all user accounts that you are going to use to manage the cluster.
Here are some examples of how to manage the config file using the command line.
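For instance, kubectl has a built-in config subcommand for inspecting and switching between contexts. A small sketch (the context name below is the kubeadm default and is only a placeholder):
# List the contexts kubectl currently knows about, and show the active one's settings.
kubectl config get-contexts
kubectl config view --minify
# Switch to a different context.
kubectl config use-context kubernetes-admin@kubernetes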
I figured out the issue: it was caused by the firewall on the machine. I could join nodes to the cluster once I allowed traffic via port 6443. I didn't fix the issue with this post, but for beginners, use this K8s on AWS guide for a better idea.
Thanks for the help guys...!!!

I cannot run kubectl get nodes as root. Why?

On my master node
root@k8smaster:~# kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?
root@k8smaster:~# exit
logout
yoda@k8smaster:~/bin$ kubectl get nodes
NAME        STATUS   ROLES    AGE   VERSION
k8smaster   Ready    master   5d    v1.9.2
k8sworker   Ready    <none>   51s   v1.9.2
Why do I need to run kubectl as my own user?
What Michael said is exactly accurate; kubectl looks in the current user's home directory, which for yoda will likely be /home/yoda but for root is almost certainly /root.
You can very quickly test this theory by re-running your kubectl command with an explicit --kubeconfig ~yoda/.kube/config:
kubectl --kubeconfig ~yoda/.kube/config get nodes
You can also export the shell variable KUBECONFIG to avoid having to constantly include that long --kubeconfig syntax:
export KUBECONFIG=~yoda/.kube/config
kubectl get nodes
Ensure you don't put any characters between the ~ and yoda or it will look for a yoda directory inside the current user's home directory.
kubectl needs kubeconfig at $HOME/.kube/config by default.
Kubeadm puts the original kubeconfig in /etc/kubernetes/admin.conf.
Any user (including root) can do the following to get kubeconfig in the current user's home directory at $HOME/.kube/config:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run this:
export KUBECONFIG=/etc/kubernetes/admin.conf
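To make that persist across root shells, one option is to append the export to root's shell profile; a small sketch, assuming root uses bash and /root/.bashrc:
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> /root/.bashrc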

Why does kubectl have different behavior with sudo?

Running kubectl get pods with sudo:
sudo kubectl get pods
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Running as a normal user:
kubectl get pods
No resources found.
By default, kubectl looks in ~/.kube/config (or the file pointed to by $KUBECONFIG) to determine which server to connect to. Your home directory and environment are different when running commands as root. When no connection info is found, kubectl defaults to localhost:8080.
You would have run these commands as the normal user:
sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
which would have copied the config file into your normal user's home directory; that is why you are able to connect as the normal user but not via sudo.
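If you really do need to run kubectl under sudo, you can pass the config explicitly instead of relying on root's environment. A small sketch, assuming the config was copied to $HOME/admin.conf as above ($HOME is expanded by your own shell before sudo runs, so it still points at your home directory):
sudo kubectl --kubeconfig $HOME/admin.conf get pods
# Or, with a reasonably recent sudo and a permitting policy, keep KUBECONFIG across the sudo boundary:
sudo --preserve-env=KUBECONFIG kubectl get pods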