Friends, I'm new to Kubernetes and recently installed it manually by following a tutorial. When I execute the command kubectl exec -it -n kube-system coredns-867b8c5ddf-8xfz6 -- sh, I get the error "x509: certificate signed by unknown authority", and kubectl logs reports the same error. However, kubectl get nodes and kubectl get pods return node and pod information normally. This is the step where I configure RBAC authorization to allow the kube-apiserver to access the kubelet API on each worker node:
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
    verbs:
      - "*"
EOF
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF
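If it helps to sanity-check that step, the two objects created by the heredocs above can be inspected afterwards with plain kubectl:
kubectl get clusterrole system:kube-apiserver-to-kubelet --kubeconfig admin.kubeconfig
kubectl describe clusterrolebinding system:kube-apiserver --kubeconfig admin.kubeconfig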
This is the content of admin.kubeconfig:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0t******tLQo=
    server: https://127.0.0.1:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: admin
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: admin
  user:
    client-certificate-data: LS0t******Cg==
    client-key-data: LS0tL******LQo=
The content of ~/.kube/config is the same as admin.kubeconfig. I checked and confirmed that my certificates have not expired. The dashboard's token authentication also seems to be affected by this problem and fails. My system is CentOS 7.7 and the Kubernetes component version is 1.22.4. I hope to get some help.
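One detail that may be worth checking (an assumption about this setup, not a confirmed diagnosis): kubectl exec and kubectl logs are proxied by the kube-apiserver to the kubelet on port 10250, while kubectl get only talks to the apiserver, which would explain why only exec/logs fail. If the apiserver was started with --kubelet-certificate-authority, the kubelet's serving certificate must be signed by that CA. A rough check from the control plane node:
# <worker-node-ip> is a placeholder; compare the issuer with the CA passed to kube-apiserver.
openssl s_client -connect <worker-node-ip>:10250 </dev/null 2>/dev/null | openssl x509 -noout -issuer -dates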
I used the file below to create a service account:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa-reader
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  name: reader-cr
rules:
- apiGroups:
  - ""
  resources:
  - '*'
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  resources:
  - '*'
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - apps
  resources:
  - '*'
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-only-rb
subjects:
- kind: ServiceAccount
  name: sa-reader
roleRef:
  kind: ClusterRole
  name: reader-cr
  apiGroup: rbac.authorization.k8s.io
The kubeconfig I created looks something like this:
apiVersion: v1
kind: Config
preferences: {}
clusters:
- name: qa
  cluster:
    certificate-authority-data: ca
    server: https://<server>:443
users:
- name: sa-reader
  user:
    as-user-extra: {}
    token: <token>
contexts:
- name: qa
  context:
    cluster: qa
    user: sa-reader
    namespace: default
current-context: qa
With this kubeconfig file, I am able to access resources in the default namespace but not in any other namespace. How can I access resources in other namespaces as well?
You can operate on a namespace explicitly by using the -n (--namespace) option to kubectl:
$ kubectl -n my-other-namespace get pod
Or by changing your default namespace with the kubectl config command:
$ kubectl config set-context --current --namespace my-other-namespace
With the above command, all future invocations of kubectl will assume the my-other-namespace namespace.
An empty namespace in metadata defaults to namespace: default, so your RoleBinding is only applied to the default namespace.
See ObjectMeta.
I suspect (!) you need to apply the RoleBinding to each of the namespaces in which you want the service account to be permitted, as sketched below.
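For example, a per-namespace binding could look roughly like this (my-other-namespace is a placeholder; note that a ServiceAccount subject needs its own namespace spelled out):
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-only-rb
  namespace: my-other-namespace   # repeat one binding per target namespace
subjects:
- kind: ServiceAccount
  name: sa-reader
  namespace: default              # where the ServiceAccount itself lives
roleRef:
  kind: ClusterRole
  name: reader-cr
  apiGroup: rbac.authorization.k8s.io
Alternatively, a single ClusterRoleBinding to reader-cr would grant the same read access across all namespaces.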
I have this strange situation; how can I solve this problem?
ubuntu@anth-mgt-wksadmin:~$ kubectl get nodes
error: the server doesn't have a resource type "nodes"
ubuntu@anth-mgt-wksadmin:~$ kubectl cluster-info
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
error: the server doesn't have a resource type "services"
ubuntu@anth-mgt-wksadmin:~$ kubectl cluster-info dump
Error from server (Forbidden): users "xxx@xxx.it" is forbidden: User "system:serviceaccount:gke-connect:connect-agent-sa" cannot impersonate resource "users" in API group "" at the cluster scope
I think the problem was caused by applying the following manifest while looking for a way to connect the admin cluster to the Cloud Console, but how can I roll it back?
USER_ACCOUNT=foo@example.com
cat <<EOF > /tmp/impersonate.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: gateway-impersonate
rules:
- apiGroups:
  - ""
  resourceNames:
  - ${USER_ACCOUNT}
  resources:
  - users
  verbs:
  - impersonate
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: gateway-impersonate
roleRef:
  kind: ClusterRole
  name: gateway-impersonate
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: connect-agent-sa
  namespace: gke-connect
EOF
# Apply impersonation policy to the cluster.
kubectl apply -f /tmp/impersonate.yaml
I have copied the admin.conf file from one admin cluster node to the admin workstation and renamed it to kubeconfig:
root@anth-admin-host1:~# cat /etc/kubernetes/admin.conf
apiVersion: v1
clusters:
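As for the rollback question: since the objects were created with kubectl apply, one plausible way back (a suggestion, not a verified fix) is to delete them again using a kubeconfig that still has admin rights, such as the copied admin.conf:
kubectl --kubeconfig kubeconfig delete clusterrolebinding gateway-impersonate
kubectl --kubeconfig kubeconfig delete clusterrole gateway-impersonate
# or, if the manifest is still present:
kubectl --kubeconfig kubeconfig delete -f /tmp/impersonate.yaml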
I added a new user named "hello" to a kind cluster with client-certificate-data and client-key-data. When I switch to its context and run the command:
kubectl get ns development-hello
I get:
Error from server (Forbidden): namespaces "development-hello" is forbidden: User "hello" cannot get resource "namespaces" in API group "" in the namespace "development-hello"
I do not have a ClusterRoleBinding for this user.
Here is a snapshot from kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://127.0.0.1:33445
  name: kind-kind
contexts:
- context:
    cluster: kind-kind
    user: hello
  name: hello-kind-kind
- context:
    cluster: kind-kind
    user: kind-kind
  name: kind-kind
current-context: hello-kind-kind
kind: Config
preferences: {}
users:
- name: hello
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
- name: kind-kind
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
A ClusterRole and RoleBinding need to be created for the hello user by using the admin account.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ns-role
rules:
- apiGroups: [""]
  resources: ["namespaces"]
  verbs: ["get", "watch", "list", "create", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ns-rolebinding
  namespace: development-hello
subjects:
- kind: User
  name: hello
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: ns-role
  apiGroup: rbac.authorization.k8s.io
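A quick way to verify the result from the admin context (a sketch; ns-role.yaml is a hypothetical file name for the manifests above):
kubectl --context kind-kind apply -f ns-role.yaml
kubectl --context kind-kind auth can-i get namespaces --as hello -n development-hello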
The kubeconfig with the admin account can be retrieved from the kind control-plane node using the command below:
docker exec -it <kind-control-plane-node-name> cat /etc/kubernetes/admin.conf
This is how I solved it in my case:
export KUBECONFIG=/etc/kubernetes/admin.conf
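If the kind CLI is available on the workstation, the same admin kubeconfig can also be exported without exec'ing into the node container; a small sketch (the cluster name is a placeholder):
kind get kubeconfig --name <cluster-name> > admin.kubeconfig
# or merge it into the default ~/.kube/config:
kind export kubeconfig --name <cluster-name>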
I am experimenting with service accounts and user accounts.
I am able to create a CA / key pair for user accounts so that the user can be verified by the API server, but I am failing to do the same for service accounts.
I have created a kubeconfig file:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: ca.crt
    server: https://ip:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    namespace: namespace-test
    user: test
  name: test-kubernetes
current-context: "test-kubernetes"
kind: Config
preferences: {}
users:
- name: test
  user:
    client-certificate: test.crt
    client-key: test.key
When I use this kubeconfig file, based on the RBAC rules, I can reach the API server:
$ kubectl --kubeconfig /tmp/rbac-test/test.kubeconfig get pods
No resources found in namespace-test namespace.
Here is a sample of the file with which I create the namespace, service account, etc.:
apiVersion: v1
kind: Namespace
metadata:
  name: namespace-test
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    user: test
  name: test
  namespace: namespace-test
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    user: test
  name: role-test
  namespace: namespace-test
rules:
- apiGroups: [""]
  resources: ["*"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    user: test
  name: rolebinding-test
  namespace: namespace-test
subjects:
- kind: User
  name: test
roleRef:
  kind: Role
  name: role-test
  apiGroup: rbac.authorization.k8s.io
When I change the subject from a user to a service account, I lose access to the namespace:
subjects:
- kind: ServiceAccount
Then I try to get the pods and I get forbidden:
$ kubectl --kubeconfig /tmp/rbac-test/test.kubeconfig get pods
Error from server (Forbidden): pods is forbidden: User "test" cannot list resource "pods" in API group "" in the namespace "namespace-test"
But when I check if the service account can fetch the pods it is valid:
$ kubectl auth can-i get pods --as system:serviceaccount:namespace-test:test -n namespace-test
yes
Is there any way to retrieve or create CAs for service accounts? I want to be able to connect from outside the cluster through the API server while using a service account rather than a normal user.
The reason I want to use a service account and not a user is to be able to use the Dashboard with different users via token verification.
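As a hedged illustration of the usual approach (not taken from the question): service accounts do not authenticate with client certificates at all; they authenticate with bearer tokens, so an out-of-cluster kubeconfig for a service account carries a token instead of client-certificate/client-key entries. A ServiceAccount subject in a RoleBinding also normally needs both name and namespace. A minimal sketch, assuming Kubernetes 1.24+ where kubectl create token is available:
# Issue a short-lived token for the "test" service account (kubectl 1.24+).
TOKEN=$(kubectl -n namespace-test create token test)
# Reuse the existing cluster entry; add token-based credentials and a context.
kubectl config set-credentials test-sa --token="$TOKEN"
kubectl config set-context test-sa-context --cluster=kubernetes --namespace=namespace-test --user=test-sa
kubectl --context test-sa-context get pods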
I am having issues with service accounts. I created a service account and then created .key and .crt using this guide:
https://docs.bitnami.com/kubernetes/how-to/configure-rbac-in-your-kubernetes-cluster/
I used cluster_ca.key and cluster_ca.crt from the KOPS_STATE_STORE bucket (since I used kops to create the cluster) to create the user ca.crt and ca.key. Then I got the token from the secret.
I set the context like this:
kubectl config set-cluster ${K8S_CLUSTER_NAME} --server="${K8S_URL}" --embed-certs=true --certificate-authority=./ca.crt
kubectl config set-credentials gitlab-telematics-${CI_COMMIT_REF_NAME} --token="${K8S_TOKEN}"
kubectl config set-context telematics-dev-context --cluster=${K8S_CLUSTER_NAME} --user=gitlab-telematics-${CI_COMMIT_REF_NAME}
kubectl config use-context telematics-dev-context
When I do the deployment using that service account token I get the following error:
error: unable to recognize "deployment.yml": Get https://<CLUSTER_ADDRESS>/api?timeout=32s: x509: certificate signed by unknown authority
The Service Account, Role and RoleBinding YAML:
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: gitlab-telematics-dev
  namespace: telematics-dev
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: telematics-dev-full-access
  namespace: telematics-dev
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["deployments", "replicasets", "pods", "services"]
  verbs: ["*"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: telematics-dev-view
  namespace: telematics-dev
subjects:
- kind: ServiceAccount
  name: gitlab-telematics-dev
  namespace: telematics-dev
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: telematics-dev-full-access
The generated kubeconfig looks fine to me:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <REDACTED>
    server: https://<CLUSTER_ADDRESS>
  name: <CLUSTER_NAME>
contexts:
- context:
    cluster: <CLUSTER_NAME>
    user: gitlab-telematics-dev
  name: telematics-dev-context
current-context: telematics-dev-context
kind: Config
preferences: {}
users:
- name: gitlab-telematics-dev
  user:
    token: <REDACTED>
I managed to solve this. Sorry for the late answer. Posting this in case someone else is facing the same issue.
The following line is not needed:
kubectl config set-cluster ${K8S_CLUSTER_NAME} --server="${K8S_URL}" --embed-certs=true --certificate-authority=./ca.crt
Since we are authenticating with a token, only the token needs to be configured.
It is hard to help you with this case. I reproduced this on my test cluster and I can't come up with any advice other than following the step-by-step tutorial by Bitnami and double-checking the names. I was able to successfully create the user gitlab-telematics-dev, list pods, and then create a deployment in the telematics-dev namespace using just your manifests and the linked tutorial, so the problem is not in the config or the names in the Roles, etc. It seems to me that you must have missed something in the process.
What I can advise is to first try the commands as the created user. Once you are able to list pods and create a deployment as gitlab-telematics-dev, your pipeline's deployment should work as well.
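One way to run that kind of check without switching kubeconfigs is service-account impersonation with kubectl auth can-i (a sketch using the names from the manifests above):
kubectl auth can-i list pods --as=system:serviceaccount:telematics-dev:gitlab-telematics-dev -n telematics-dev
kubectl auth can-i create deployments --as=system:serviceaccount:telematics-dev:gitlab-telematics-dev -n telematics-dev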