Namespaces is forbidden error for a new user - kubernetes

I added a new user named "hello" to a kind cluster, with client-certificate-data and client-key-data. When I switch to its context and run the command:
kubectl get ns development-hello
I get:
Error from server (Forbidden): namespaces "development-hello" is forbidden: User "hello" cannot get resource "namespaces" in API group "" in the namespace "development-hello"
I do not have clusterrolebinding for this user.
Here is a snapshot from kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://127.0.0.1:33445
  name: kind-kind
contexts:
- context:
    cluster: kind-kind
    user: hello
  name: hello-kind-kind
- context:
    cluster: kind-kind
    user: kind-kind
  name: kind-kind
current-context: hello-kind-kind
kind: Config
preferences: {}
users:
- name: hello
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
- name: kind-kind
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

A ClusterRole and RoleBinding need to be created for the hello user by using the admin account.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ns-role
rules:
- apiGroups: [""]
  resources: ["namespaces"]
  verbs: ["get", "watch", "list", "create", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ns-rolebinding
  namespace: development-hello
subjects:
- kind: User
  name: hello
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: ns-role
  apiGroup: rbac.authorization.k8s.io
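Once the manifests are applied from the admin context, the grant can be verified with impersonation; the file name below is a placeholder:

```shell
# Apply the ClusterRole and RoleBinding (file name is illustrative).
kubectl apply -f ns-role.yaml
# Ask the API server, as admin, whether "hello" can now get the namespace.
kubectl auth can-i get namespaces --as hello -n development-hello
```

`kubectl auth can-i` prints `yes` or `no`, so it is a quick way to test RBAC changes without switching contexts.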
The kubeconfig file with the admin account can be retrieved from the kind control-plane container using the command below:
docker exec -it <kind-control-plane-node-name> cat /etc/kubernetes/admin.conf

This is how I solved it in my case:
export KUBECONFIG=/etc/kubernetes/admin.conf
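As an alternative to copying admin.conf out of the node, the kind CLI can export the admin kubeconfig itself; this sketch assumes the default cluster name `kind` and an illustrative output path:

```shell
# Write the admin kubeconfig for the "kind" cluster to a file and use it.
kind get kubeconfig --name kind > /tmp/kind-admin.kubeconfig
export KUBECONFIG=/tmp/kind-admin.kubeconfig
kubectl config current-context
```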

Related

kubernetes oidc login ignores groups

I am using Keycloak to authenticate with Kubernetes via kube-oidc-proxy and oidc-login.
I have created a client in Keycloak and a mapper with the following configuration.
The kube-oidc-proxy is running with this configuration:
command: ["kube-oidc-proxy"]
args:
- "--secure-port=443"
- "--tls-cert-file=/etc/oidc/tls/crt.pem"
- "--tls-private-key-file=/etc/oidc/tls/key.pem"
- "--oidc-client-id=$(OIDC_CLIENT_ID)"
- "--oidc-issuer-url=$(OIDC_ISSUER_URL)"
- "--oidc-username-claim=$(OIDC_USERNAME_CLAIM)"
- "--oidc-signing-algs=$(OIDC_SIGNING_ALGS)"
- "--oidc-username-prefix='oidcuser:'"
- "--oidc-groups-claim=groups"
- "--oidc-groups-prefix='oidcgroup:'"
- "--v=10"
And the kube config has this configuration:
apiVersion: v1
clusters:
- cluster:
    server: <KUBE_OIDC_PROXY_URL>
  name: default
contexts:
- context:
    cluster: default
    namespace: default
    user: oidc
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: oidc
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - oidc-login
      - get-token
      - -v10
      - --oidc-issuer-url=<ISSUER_URL>
      - --oidc-client-id=kube-oidc-proxy
      - --oidc-client-secret=<CLIENT_SECRET>
      - --oidc-extra-scope=email
      - --grant-type=authcode
      command: kubectl
      env: null
      provideClusterInfo: false
I can successfully get the user info, with groups, in the JWT token as shown below:
"name": "Test Uset",
"groups": [
  "KubernetesAdmins"
],
"preferred_username": "test-user",
"given_name": "Test",
"family_name": "Uset",
"email": "testuser@test.com"
And I have created the following ClusterRoleBinding:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: oidc-admin-group
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: oidcgroup:KubernetesAdmins
But I still get forbidden error as follows:
Error from server (Forbidden): pods is forbidden: User "'oidcuser:'ecc4d1ac-68d7-4158-8a58-40b469776c07" cannot list resource "pods" in API group "" in the namespace "default"
Any ideas on how to solve this issue? Thanks in advance.
I figured it out.
Removing the single quotes from the user and group prefixes, so they become:
"--oidc-username-prefix=oidcuser:"
"--oidc-groups-prefix=oidcgroup:"
This solved the issue.
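After restarting the proxy with the corrected flags, the group mapping can be double-checked from the cluster side by impersonating the prefixed user and group; the user ID below is the one from the error message above:

```shell
# Should print "yes" once the prefixed group matches the ClusterRoleBinding.
kubectl auth can-i list pods \
  --as 'oidcuser:ecc4d1ac-68d7-4158-8a58-40b469776c07' \
  --as-group 'oidcgroup:KubernetesAdmins'
```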

Created kubeconfig file but only able to access default namespace

I used below file to create service account
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa-reader
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  name: reader-cr
rules:
- apiGroups:
  - ""
  resources:
  - '*'
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  resources:
  - '*'
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - apps
  resources:
  - '*'
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-only-rb
subjects:
- kind: ServiceAccount
  name: sa-reader
roleRef:
  kind: ClusterRole
  name: reader-cr
  apiGroup: rbac.authorization.k8s.io
The kubeconfig I created is similar to the following:
apiVersion: v1
kind: Config
preferences: {}
clusters:
- name: qa
  cluster:
    certificate-authority-data: ca
    server: https://<server>:443
users:
- name: sa-reader
  user:
    as-user-extra: {}
    token: <token>
contexts:
- name: qa
  context:
    cluster: qa
    user: sa-reader
    namespace: default
current-context: qa
With this kubeconfig file, I am able to access resources in the default namespace but not any other namespace. How to access resources in other namespaces as well?
You can operate on a namespace explicitly by using the -n (--namespace) option to kubectl:
$ kubectl -n my-other-namespace get pod
Or by changing your default namespace with the kubectl config command:
$ kubectl config set-context --current --namespace my-other-namespace
With the above command, all future invocations of kubectl will assume the my-other-namespace namespace.
An empty namespace in metadata defaults to namespace: default, so your RoleBinding applies only to the default namespace.
See ObjectMeta.
I suspect (!) you need to apply the RoleBinding in each of the namespaces in which you want the ServiceAccount to be permitted.
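One way to do that is to stamp out one RoleBinding per namespace from a small loop. This sketch (the namespace list is an example) writes the manifests to a file that can then be fed to `kubectl apply -f rolebindings.yaml`:

```shell
# Emit one RoleBinding per target namespace into rolebindings.yaml.
# The namespace list here is illustrative; adjust it to your cluster.
: > rolebindings.yaml
for ns in default qa staging; do
cat >> rolebindings.yaml <<EOF
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-only-rb
  namespace: ${ns}
subjects:
- kind: ServiceAccount
  name: sa-reader
  namespace: default
roleRef:
  kind: ClusterRole
  name: reader-cr
  apiGroup: rbac.authorization.k8s.io
EOF
done
```

Note that the ServiceAccount subject keeps `namespace: default`, since the account itself lives there; only the binding moves between namespaces.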

kubectl error: x509: certificate signed by unknown authority

Friends, I'm new to Kubernetes and recently installed it manually by following a tutorial. When I execute the command kubectl exec -it -n kube-system coredns-867b8c5ddf-8xfz6 -- sh, an error occurs: "x509: certificate signed by unknown authority". The kubectl logs command reports the same error, but kubectl get nodes and kubectl get pods return information normally. This is the step where I configured RBAC authorization to allow the kube-apiserver to access the kubelet API on each worker node:
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
- apiGroups:
  - ""
  resources:
  - nodes/proxy
  - nodes/stats
  - nodes/log
  - nodes/spec
  - nodes/metrics
  verbs:
  - "*"
EOF
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kubernetes
EOF
This is admin.kubeconfig content:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0t******tLQo=
    server: https://127.0.0.1:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: admin
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: admin
  user:
    client-certificate-data: LS0t******Cg==
    client-key-data: LS0tL******LQo=
The content of ~/.kube/config is the same as admin.kubeconfig. I checked and confirmed that my certificate has not expired. The dashboard's token authentication also seems to be affected by this problem and cannot pass. My system is CentOS 7.7 and the Kubernetes component version is 1.22.4. I hope to get help.
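A note on where this error likely comes from: kubectl exec and kubectl logs are proxied by the kube-apiserver to the kubelet's HTTPS port (10250), so "certificate signed by unknown authority" at that point usually means the apiserver does not trust the certificate the kubelet is serving, rather than a problem with admin.kubeconfig. One way to inspect the kubelet certificate (the node address is a placeholder):

```shell
# Print the subject and issuer of the kubelet's serving certificate.
openssl s_client -connect <worker-node-ip>:10250 </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer
# If the issuer is not the cluster CA, either re-issue the kubelet serving
# certificates from that CA, or start kube-apiserver with
# --kubelet-certificate-authority pointing at the CA that did sign them.
```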

How to create a client certificate and client key for a Service Account on k8s

I am experimenting with service accounts and user accounts.
I am able to create a CA/key pair for user accounts in order to verify the user through the API server, but I am failing to do the same for service accounts.
I have created a kubeconfig file:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: ca.crt
    server: https://ip:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    namespace: namespace-test
    user: test
  name: test-kubernetes
current-context: "test-kubernetes"
kind: Config
preferences: {}
users:
- name: test
  user:
    client-certificate: test.crt
    client-key: test.key
When I use this kubeconfig file, and based on the RBAC rules, I can reach the API server:
$ kubectl --kubeconfig /tmp/rbac-test/test.kubeconfig get pods
No resources found in namespace-test namespace.
Here is a sample of the file that creates the namespace, service account, and RBAC objects:
apiVersion: v1
kind: Namespace
metadata:
  name: namespace-test
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    user: test
  name: test
  namespace: namespace-test
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    user: test
  name: role-test
  namespace: namespace-test
rules:
- apiGroups: [""]
  resources: ["*"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    user: test
  name: rolebinding-test
  namespace: namespace-test
subjects:
- kind: User
  name: test
roleRef:
  kind: Role
  name: role-test
  apiGroup: rbac.authorization.k8s.io
When I change the subject to a service account, I lose access to the namespace:
subjects:
- kind: ServiceAccount
Then I try to get the pods and I get Forbidden:
$ kubectl --kubeconfig /tmp/rbac-test/test.kubeconfig get pods
Error from server (Forbidden): pods is forbidden: User "test" cannot list resource "pods" in API group "" in the namespace "namespace-test"
But when I check whether the service account itself can fetch the pods, it is allowed:
$ kubectl auth can-i get pods --as system:serviceaccount:namespace-test:test -n namespace-test
yes
Is there any way to retrieve or create client certificates for service accounts? I want to be able to connect from outside the cluster through the API server while using a service account rather than a normal user.
The reason I want a service account instead of a user is to be able to use the Dashboard as different users with token verification.
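For context (not part of the original question): the client certificate still authenticates you as User "test", so once the RoleBinding subject is changed to a ServiceAccount, the certificate identity no longer matches any subject, which explains the Forbidden error. Service accounts authenticate with bearer tokens rather than client certificates, so instead of minting a CA/key pair you can put a ServiceAccount token into the kubeconfig. On Kubernetes 1.24+ a short-lived token can be requested with kubectl create token; the context and credential names below are illustrative:

```shell
# Request a token for the ServiceAccount and add a kubeconfig user for it.
TOKEN=$(kubectl create token test -n namespace-test)
kubectl config set-credentials test-sa --token="$TOKEN"
kubectl config set-context test-sa-ctx --cluster=kubernetes \
  --namespace=namespace-test --user=test-sa
kubectl --context test-sa-ctx get pods
```

For this to work, the RoleBinding subject must name the account fully: kind: ServiceAccount, name: test, namespace: namespace-test.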

kubectl - error: You must be logged in to the server on bare-metal

I created the csr and approved it -
$ kubectl get csr
NAME        AGE   REQUESTOR          CONDITION
parth-csr   28m   kubernetes-admin   Approved,Issued
Created the certificate using kubectl only with username parth and group devs
Issuer: CN=kubernetes
Validity
    Not Before: Dec 16 18:51:00 2019 GMT
    Not After : Dec 15 18:51:00 2020 GMT
Subject: O=devs, CN=parth
Subject Public Key Info:
    Public Key Algorithm: rsaEncryption
        Public-Key: (2048 bit)
        Modulus:
Here, I want to do the authentication on the basis of group - devs.
Clusterrole.yaml is as follows -
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: devs
rules:
- apiGroups: [""]
  resources: ["nodes", "pods", "secrets", "pods/log", "configmaps", "services", "endpoints", "deployments", "jobs", "crontabs"]
  verbs: ["get", "watch", "list"]
Clusterrolebinding.yaml as
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: devs-clusterrolebinding
subjects:
- kind: Group
  name: devs # Name is case sensitive
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: devs
  apiGroup: rbac.authorization.k8s.io
Kubeconfig file is as follows -
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: XXXXXXXXXXXXX
    server: https://XX.XX.XX.XX:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: parth
  name: dev
current-context: "dev"
kind: Config
preferences: {}
users:
- name: parth
  user:
    client-certificate: /etc/kubernetes/access-credentials/parth/parth.crt
    client-key: /etc/kubernetes/access-credentials/parth/parth.key
Since I want to authenticate on the basis of the group only, I get the following error -
$ kubectl get nodes
error: You must be logged in to the server (Unauthorized)
I am running k8s on bare-metal.
Group based auth reference from offical docs - https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding
I see you have given permissions to the group, not to the user. In that case you can impersonate the group to test the binding (kubectl requires --as together with --as-group):
kubectl get nodes --as parth --as-group devs
After manually signing the certificate using apiserver ca, it got fixed.
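That manual signing step can be sketched with openssl. The paths and validity periods here are illustrative (on a kubeadm cluster the real CA pair lives under /etc/kubernetes/pki/), and a throwaway CA stands in for the cluster CA:

```shell
# Create a CA (stand-in for the cluster CA), then a user key and CSR with
# O=devs (the group) and CN=parth (the user), and sign the CSR with the CA.
openssl req -x509 -new -nodes -newkey rsa:2048 -keyout ca.key -out ca.crt \
  -subj "/CN=kubernetes" -days 365
openssl req -new -nodes -newkey rsa:2048 -keyout parth.key -out parth.csr \
  -subj "/O=devs/CN=parth"
openssl x509 -req -in parth.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out parth.crt -days 365
# Confirm the subject carries the group (O) and user (CN) the RBAC rules expect.
openssl x509 -noout -subject -in parth.crt
```

The API server maps the certificate's O field to the group and CN to the user name, which is why signing with the apiserver CA made the group-based ClusterRoleBinding take effect.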