Kubernetes master node's date was updated. Certificates have now expired? - kubernetes

I had an incorrect date set (off by a few years). I've corrected it, but now I'm getting errors from some pods:
kube-controller-manager-master
E0803 01:06:31.311871 1 leaderelection.go:234] error retrieving resource lock kube-system/kube-controller-manager: Get https://192.168.0.33:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: x509: certificate has expired or is not yet valid
kube-scheduler-master
E0803 01:06:24.507668 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:130: Failed to list *v1.StorageClass: Get https://192.168.0.33:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: x509: certificate has expired or is not yet valid
E0803 01:06:24.511785 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:130: Failed to list *v1beta1.ReplicaSet: Get https://192.168.0.33:6443/apis/extensions/v1beta1/replicasets?limit=500&resourceVersion=0: x509: certificate has expired or is not yet valid
E0803 01:06:24.532539 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:130: Failed to list *v1.Node: Get https://192.168.0.33:6443/api/v1/nodes?limit=500&resourceVersion=0: x509: certificate has expired or is not yet valid
E0803 01:06:24.543719 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:130: Failed to list *v1beta1.PodDisruptionBudget: Get https://192.168.0.33:6443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: x509: certificate has expired or is not yet valid
E0803 01:06:24.547678 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:130: Failed to list *v1.ReplicationController: Get https://192.168.0.33:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: x509: certificate has expired or is not yet valid
E0803 01:06:24.554880 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:130: Failed to list *v1.PersistentVolume: Get https://192.168.0.33:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0: x509: certificate has expired or is not yet valid
E0803 01:06:24.559708 1 reflector.go:205] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:176: Failed to list *v1.Pod: Get https://192.168.0.33:6443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: x509: certificate has expired or is not yet valid
How could I fix this?

One solution is to delete the whole cluster and start again, which is what I ended up having to do...
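A less destructive option, if the cluster was set up with kubeadm, is to renew the certificates in place. A rough sketch (the exact subcommand depends on the kubeadm version, and the manifest shuffle to restart the static control-plane pods assumes the default /etc/kubernetes layout):

sudo kubeadm certs renew all            # kubeadm v1.20+
# sudo kubeadm alpha certs renew all    # older kubeadm releases

# Move the static pod manifests away and back so the control-plane
# pods restart and pick up the renewed certificates:
sudo mkdir -p /tmp/k8s-manifests
sudo mv /etc/kubernetes/manifests/*.yaml /tmp/k8s-manifests/
sleep 20
sudo mv /tmp/k8s-manifests/*.yaml /etc/kubernetes/manifests/

# admin.conf embeds a client certificate that is also renewed; refresh
# the local kubeconfig copy:
sudo cp /etc/kubernetes/admin.conf ~/.kube/config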

Related

"pkg/mod/k8s.io/client-go#v0.18.5/tools/cache/reflector.go:125: Failed to list *v1.Service: Unauthorized"

I have set up a private cluster on GKE with k8s version 1.18.12-gke.1206, and access to the cluster endpoint is set to "Public endpoint access enabled, authorized networks disabled". I'm running an ingress-nginx controller (https://kubernetes.github.io/ingress-nginx) on this cluster, which uses a ConfigMap to store configuration. But somehow any request coming to this controller gives an Unauthorized error, with logs such as:
2021-02-23 11:24:59.435 IST "pkg/mod/k8s.io/client-go#v0.18.5/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Unauthorized"
2021-02-23 11:24:45.072 IST "error retrieving resource lock sb-system/ingress-controller-leader-nginx: Unauthorized"
2021-02-23 11:24:40.727 IST "pkg/mod/k8s.io/client-go#v0.18.5/tools/cache/reflector.go:125: Failed to list *v1.ConfigMap: Unauthorized"
2021-02-23 11:24:40.132 IST "pkg/mod/k8s.io/client-go#v0.18.5/tools/cache/reflector.go:125: Failed to list *v1beta1.Ingress: Unauthorized"
2021-02-23 11:24:37.318 IST "pkg/mod/k8s.io/client-go#v0.18.5/tools/cache/reflector.go:125: Failed to list *v1.Pod: Unauthorized"
2021-02-23 11:24:37.038 IST "pkg/mod/k8s.io/client-go#v0.18.5/tools/cache/reflector.go:125: Failed to list *v1.Service: Unauthorized"
2021-02-23 11:24:29.891 IST "error retrieving resource lock sb-system/ingress-controller-leader-nginx: Unauthorized"
2021-02-23 11:24:26.263 IST "pkg/mod/k8s.io/client-go#v0.18.5/tools/cache/reflector.go:125: Failed to list *v1.Secret: Unauthorized"
2021-02-23 11:24:18.259 IST "error retrieving resource lock sb-system/ingress-controller-leader-nginx: Unauthorized"
2021-02-23 11:24:09.907 IST "error retrieving resource lock sb-system/ingress-controller-leader-nginx: Unauthorized"
2021-02-23 11:24:06.612 IST "pkg/mod/k8s.io/client-go#v0.18.5/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Unauthorized"
2021-02-23 11:24:02.078 IST "error retrieving resource lock sb-system/ingress-controller-leader-nginx: Unauthorized"
We tried to follow the steps mentioned here, and we are getting:
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   233  100   233    0     0  17282      0 --:--:-- --:--:-- --:--:-- 17923
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {},
  "code": 403
}
at the last step, which is:
kubectl exec test -- curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt -H "Authorization: Bearer $TOKEN_VALUE" https://10.0.0.1
I'm new to GCP and K8s and can't figure out what I'm doing wrong.
Did you check whether automountServiceAccountToken had been set to false on your ServiceAccount? If so, setting it to true might help:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-service-account
automountServiceAccountToken: false # set this to true
https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#use-the-default-service-account-to-access-the-api-server
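You can confirm the effective value on the ServiceAccount with something like this (name as in the snippet above; note that a pod spec can also override this field):
kubectl get serviceaccount my-service-account -o jsonpath='{.automountServiceAccountToken}'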

Error: getsockopt: connection refused - Kubernetes apiserver

I'm facing an unexpected issue today. Earlier today I noticed that "kubectl get pods" was returning "Unable to connect to the server: EOF". Upon further investigation I found that the kubelet is unable to connect to the apiserver at 127.0.0.1:443. I have been unable to resolve this problem; any assistance would be highly appreciated. Below are the logs I found.
Nov 20 18:13:30 ip-172-31-152-166.us-west-2.compute.internal kubelet[6398]: E1120 18:13:30.362106 6398 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:481: Failed to list *v1.Node: Get https://127.0.0.1/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-152-166.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: getsockopt: connection refused
Nov 20 18:13:30 ip-172-31-152-166.us-west-2.compute.internal kubelet[6398]: E1120 18:13:30.362928 6398 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://127.0.0.1/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-152-166.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: getsockopt: connection refused
Nov 20 18:13:30 ip-172-31-152-166.us-west-2.compute.internal kubelet[6398]: E1120 18:13:30.363719 6398 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:472: Failed to list *v1.Service: Get https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: getsockopt: connection refused
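Since the kubelet is being refused on 127.0.0.1:443, a reasonable first check is whether the apiserver is running at all and what it is listening on. A hedged diagnostic sketch (docker as the container runtime and the 443/6443 ports are assumptions about this particular setup):

sudo docker ps | grep kube-apiserver      # is the apiserver container up at all?
sudo ss -tlnp | grep -E ':(443|6443)'     # is anything listening on the expected port?
sudo docker logs <apiserver-container-id> | tail -n 50   # the apiserver's own errors, if it is crash-looping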

Kubernetes Authentication issue

I started looking at different ways of using authentication on Kubernetes. Of course, I started with the simplest option, a static password file. Basically, I created a file named users.csv with the following content:
mauro,maurosil,maurosil123,group_mauro
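For reference, the static password file format the apiserver expects is, as far as I know, password,user,uid with an optional fourth column of group names, so the line above would be parsed as:

password: mauro
user:     maurosil
uid:      maurosil123
group:    group_mauro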
When I start minikube using this file, it hangs at the cluster components (starting cluster components). The command I use is:
minikube --extra-config=apiserver.Authentication.PasswordFile.BasicAuthFile=~/temp/users.csv start
After a while (~ 10 minutes), the minikube start command fails with the following error message:
E0523 10:23:57.391692 30932 util.go:151] Error uploading error message: : Post https://clouderrorreporting.googleapis.com/v1beta1/projects/k8s-minikube/events:report?key=AIzaSyACUwzG0dEPcl-eOgpDKnyKoUFgHdfoFuA: x509: certificate signed by unknown authority
I can see that there are several errors in the log (minikube logs):
May 23 09:47:32 minikube kubelet[3301]: E0523 09:47:32.473157 3301 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.99.100:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0: dial tcp 192.168.99.100:8443: getsockopt: connection refused
May 23 09:47:33 minikube kubelet[3301]: E0523 09:47:33.414460 3301 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://192.168.99.100:8443/api/v1/nodes?fieldSelector=metadata.name%3Dminikube&limit=500&resourceVersion=0: dial tcp 192.168.99.100:8443: getsockopt: connection refused
May 23 09:47:33 minikube kubelet[3301]: E0523 09:47:33.470604 3301 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://192.168.99.100:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.99.100:8443: getsockopt: connection refused
May 23 09:47:33 minikube kubelet[3301]: E0523 09:47:33.474548 3301 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.99.100:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0: dial tcp 192.168.99.100:8443: getsockopt: connection refused
May 23 09:47:34 minikube kubelet[3301]: I0523 09:47:34.086654 3301 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
May 23 09:47:34 minikube kubelet[3301]: I0523 09:47:34.090697 3301 kubelet_node_status.go:82] Attempting to register node minikube
May 23 09:47:34 minikube kubelet[3301]: E0523 09:47:34.091108 3301 kubelet_node_status.go:106] Unable to register node "minikube" with API server: Post https://192.168.99.100:8443/api/v1/nodes: dial tcp 192.168.99.100:8443: getsockopt: connection refused
May 23 09:47:34 minikube kubelet[3301]: E0523 09:47:34.370484 3301 event.go:209] Unable to write event: 'Patch https://192.168.99.100:8443/api/v1/namespaces/default/events/minikube.15313c5b8cf5913c: dial tcp 192.168.99.100:8443: getsockopt: connection refused' (may retry after sleeping)
May 23 09:47:34 minikube kubelet[3301]: E0523 09:47:34.419833 3301 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://192.168.99.100:8443/api/v1/nodes?fieldSelector=metadata.name%3Dminikube&limit=500&resourceVersion=0: dial tcp 192.168.99.100:8443: getsockopt: connection refused
May 23 09:47:34 minikube kubelet[3301]: E0523 09:47:34.472826 3301 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://192.168.99.100:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.99.100:8443: getsockopt: connection refused
May 23 09:47:34 minikube kubelet[3301]: E0523 09:47:34.479619 3301 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.99.100:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0: dial tcp 192.168.99.100:8443: getsockopt: connection refused
I also logged into the minikube VM (minikube ssh) and noticed that the apiserver docker container is down. Looking at the logs of this container, I see the following error:
error: unknown flag: --Authentication.PasswordFile.BasicAuthFile
Therefore, I changed my command to something like:
minikube start --extra-config=apiserver.basic-auth-file=~/temp/users.csv
It failed again, but now the container shows a different error. The error is no longer about an invalid flag; instead, it complains that the file is not found (no such file or directory). I also tried specifying a file on the minikube VM (/var/lib/localkube), but I had the same issue.
The minikube version is:
minikube version: v0.26.0
When I start minikube without considering the authentication, it works fine. Are there any other steps that I need to do?
Mauro
You will need to mount the file into the docker container that runs the apiserver. Please see a hack that worked: https://github.com/kubernetes/minikube/issues/1898#issuecomment-402714802
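One concrete way to get the file into the VM without editing the container is minikube's file sync: anything placed under ~/.minikube/files is copied into the VM's filesystem at startup, at the mirrored path, so the apiserver can be pointed at the in-VM copy. A sketch, assuming that sync behaviour:

mkdir -p ~/.minikube/files/etc/kubernetes
cp ~/temp/users.csv ~/.minikube/files/etc/kubernetes/users.csv
minikube start --extra-config=apiserver.basic-auth-file=/etc/kubernetes/users.csv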

Deploy customized kube-scheduler

I deploy kube-scheduler using https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/ .
I followed the steps exactly; however, the pod is not scheduled by "my-scheduler" and stays Pending instead.
The log of "my-scheduler" pod is
E0207 20:35:43.079477 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:serviceaccount:kube-system:default" cannot list poddisruptionbudgets.policy at the cluster scope
E0207 20:35:43.080416 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:default" cannot list services at the cluster scope
E0207 20:35:43.081490 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:serviceaccount:kube-system:default" cannot list persistentvolumes at the cluster scope
E0207 20:35:43.082515 1 reflector.go:205] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:593: Failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:default" cannot list pods at the cluster scope
E0207 20:35:43.083566 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.Node: nodes is forbidden: User "system:serviceaccount:kube-system:default" cannot list nodes at the cluster scope
E0207 20:35:43.084795 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:serviceaccount:kube-system:default" cannot list replicationcontrollers at the cluster scope
E0207 20:35:44.077899 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:serviceaccount:kube-system:default" cannot list persistentvolumeclaims at the cluster scope
E0207 20:35:44.078410 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1beta1.ReplicaSet: replicasets.extensions is forbidden: User "system:serviceaccount:kube-system:default" cannot list replicasets.extensions at the cluster scope
E0207 20:35:44.079496 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1beta1.StatefulSet: statefulsets.apps is forbidden: User "system:serviceaccount:kube-system:default" cannot list statefulsets.apps at the cluster scope
E0207 20:35:44.080585 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:serviceaccount:kube-system:default" cannot list poddisruptionbudgets.policy at the cluster scope
E0207 20:35:44.081675 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:default" cannot list services at the cluster scope
E0207 20:35:44.082726 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:serviceaccount:kube-system:default" cannot list persistentvolumes at the cluster scope
E0207 20:35:44.083811 1 reflector.go:205] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:593: Failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:default" cannot list pods at the cluster scope
E0207 20:35:44.084887 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.Node: nodes is forbidden: User "system:serviceaccount:kube-system:default" cannot list nodes at the cluster scope
E0207 20:35:44.085921 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:serviceaccount:kube-system:default" cannot list replicationcontrollers at the cluster scope
It seems it does not have permission to access the resources. I tried configuring RBAC as the link says, but it does not help.
Please help if you have ever tried this.
I don't know why the new scheduler uses "system:serviceaccount:kube-system:default" instead of "system:kube-system".
The quick solution is:
kubectl create clusterrolebinding --user system:serviceaccount:kube-system:default kube-system-cluster-admin --clusterrole cluster-admin
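If cluster-admin is broader than you want, binding the built-in scheduler role should be sufficient (a sketch, assuming your cluster has the default system:kube-scheduler ClusterRole):
kubectl create clusterrolebinding kube-system-default-as-scheduler --clusterrole system:kube-scheduler --user system:serviceaccount:kube-system:default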

Creating custom scheduler doesn't work

When I follow these instructions to create a custom scheduler, the pods assigned to my-scheduler (pod annotation-second-scheduler in the example) keep status Pending and are never scheduled.
I think this is because the kube-scheduler cannot access the master from within the pod. I don't know how to get this working. How can the master be accessed from within a pod? I tried running kubectl proxy -p 8001 in the pod, but this doesn't work.
There are a few issues with the instructions mentioned in https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/ for local clusters created using the instructions in https://blog.tekspace.io/setup-kubernetes-cluster-with-ubuntu-16-04/
These errors were reported from the custom scheduler container (kubectl logs command):
E0628 21:05:29.128618 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:serviceaccount:kube-system:my-scheduler" cannot list persistentvolumeclaims at the cluster scope: clusterrole.rbac.authorization.k8s.io "kube-scheduler" not found
E0628 21:05:29.129945 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:my-scheduler" cannot list services at the cluster scope: clusterrole.rbac.authorization.k8s.io "kube-scheduler" not found
E0628 21:05:29.132968 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:serviceaccount:kube-system:my-scheduler" cannot list storageclasses.storage.k8s.io at the cluster scope: clusterrole.rbac.authorization.k8s.io "kube-scheduler" not found
E0628 21:05:29.151367 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:serviceaccount:kube-system:my-scheduler" cannot list persistentvolumes at the cluster scope: clusterrole.rbac.authorization.k8s.io "kube-scheduler" not found
E0628 21:05:29.152097 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1beta1.ReplicaSet: replicasets.extensions is forbidden: User "system:serviceaccount:kube-system:my-scheduler" cannot list replicasets.extensions at the cluster scope: clusterrole.rbac.authorization.k8s.io "kube-scheduler" not found
E0628 21:05:29.153187 1 reflector.go:205] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:594: Failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:my-scheduler" cannot list pods at the cluster scope: clusterrole.rbac.authorization.k8s.io "kube-scheduler" not found
E0628 21:05:29.153201 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.Node: nodes is forbidden: User "system:serviceaccount:kube-system:my-scheduler" cannot list nodes at the cluster scope: clusterrole.rbac.authorization.k8s.io "kube-scheduler" not found
E0628 21:05:29.153300 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:serviceaccount:kube-system:my-scheduler" cannot list replicationcontrollers at the cluster scope: clusterrole.rbac.authorization.k8s.io "kube-scheduler" not found
E0628 21:05:29.153338 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:serviceaccount:kube-system:my-scheduler" cannot list poddisruptionbudgets.policy at the cluster scope: clusterrole.rbac.authorization.k8s.io "kube-scheduler" not found
E0628 21:05:29.153757 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1beta1.StatefulSet: statefulsets.apps is forbidden: User "system:serviceaccount:kube-system:my-scheduler" cannot list statefulsets.apps at the cluster scope: clusterrole.rbac.authorization.k8s.io "kube-scheduler" not found
E0628 21:05:30.147954 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:serviceaccount:kube-system:my-scheduler" cannot list storageclasses.storage.k8s.io at the cluster scope: clusterrole.rbac.authorization.k8s.io "kube-scheduler" not found
E0628 21:05:30.149547 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:serviceaccount:kube-system:my-scheduler" cannot list persistentvolumeclaims at the cluster scope: clusterrole.rbac.authorization.k8s.io "kube-scheduler" not found
E0628 21:05:30.149562 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:my-scheduler" cannot list services at the cluster scope: clusterrole.rbac.authorization.k8s.io "kube-scheduler" not found
The issue is in the my-scheduler.yaml file: in the roleRef, change the name field from kube-scheduler to system:kube-scheduler. Verify this using the following command before changing the YAML file:
kubectl get clusterrole --all-namespaces | grep -i kube
It should list system:kube-scheduler instead of kube-scheduler only.
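For reference, the corrected stanza in my-scheduler.yaml would look roughly like this (a sketch of just the roleRef; field names follow the rbac.authorization.k8s.io/v1 API):

roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-scheduler # was: kube-scheduler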
Then, it might print these errors in the custom scheduler container:
E0628 21:22:39.937271 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:serviceaccount:kube-system:my-scheduler" cannot list storageclasses.storage.k8s.io at the cluster scope
E0628 21:22:40.940461 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:serviceaccount:kube-system:my-scheduler" cannot list storageclasses.storage.k8s.io at the cluster scope
E0628 21:22:41.943323 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:serviceaccount:kube-system:my-scheduler" cannot list storageclasses.storage.k8s.io at the cluster scope
E0628 21:22:42.946263 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:serviceaccount:kube-system:my-scheduler" cannot list storageclasses.storage.k8s.io at the cluster scope
In this case, please append these lines:
- apiGroups:
  - storage.k8s.io
  resources:
  - storageclasses
  verbs:
  - watch
  - list
  - get
to the end of the rules in the output of this command (it opens the resource for you to edit):
kubectl edit clusterrole system:kube-scheduler
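You can then verify the grant immediately, without waiting for the scheduler to retry, using impersonation (service account name taken from the logs above):
kubectl auth can-i list storageclasses --as=system:serviceaccount:kube-system:my-scheduler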
From the user guide section on accessing the cluster API from a pod at kubernetes.io:
When accessing the API from a pod, locating and authenticating to the apiserver are somewhat different.

The recommended way to locate the apiserver within the pod is with the kubernetes DNS name, which resolves to a Service IP which in turn will be routed to an apiserver.

The recommended way to authenticate to the apiserver is with a service account credential. By default, a pod is associated with a service account, and a credential (token) for that service account is placed into the filesystem tree of each container in that pod, at /var/run/secrets/kubernetes.io/serviceaccount/token.

If available, a certificate bundle is placed into the filesystem tree of each container at /var/run/secrets/kubernetes.io/serviceaccount/ca.crt, and should be used to verify the serving certificate of the apiserver.

Finally, the default namespace to be used for namespaced API operations is placed in a file at /var/run/secrets/kubernetes.io/serviceaccount/namespace in each container.
From within a pod, the recommended ways to connect to the API are:

- Run a kubectl proxy as one of the containers in the pod, or as a background process within a container. This proxies the Kubernetes API to the localhost interface of the pod, so that other processes in any container of the pod can access it. See this example of using kubectl proxy in a pod.
- Use the Go client library, and create a client using the client.NewInCluster() factory. This handles locating and authenticating to the apiserver.

In each case, the credentials of the pod are used to communicate securely with the apiserver.
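As a concrete illustration of the credential paths quoted above, an API request can be made from inside a pod with curl alone (a sketch; kubernetes.default.svc is the standard in-cluster DNS name for the apiserver):

TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
NS=$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
     -H "Authorization: Bearer $TOKEN" \
     https://kubernetes.default.svc/api/v1/namespaces/$NS/pods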