Kubernetes log: User "system:serviceaccount:default:default" cannot get services in the namespace

Forbidden!Configured service account doesn't have access. Service account may have been revoked. User "system:serviceaccount:default:default" cannot get services in the namespace "mycomp-services-process"
For the above issue, I created the "mycomp-services-process" namespace and checked again.
But it then shows a message like this:
Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. User "system:serviceaccount:mycomp-services-process:default" cannot get services in the namespace "mycomp-services-process"

Creating a namespace won't, of course, solve the issue, as that is not the problem at all.
In the first error, the issue is that the default service account in the default namespace cannot get services because it does not have access to list/get services. So what you need to do is assign a role to that account using a ClusterRoleBinding.
Following the principle of least privilege, you can first create a role which has access to list services:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: service-reader # ClusterRoles are cluster-scoped, so no namespace is set
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["services"]
  verbs: ["get", "watch", "list"]
The above snippet creates a ClusterRole which can list, get, and watch services. (You will have to save the spec to a YAML file and apply it.)
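For example, assuming you saved the snippet as service-reader.yaml (the filename is just an illustration):
kubectl apply -f service-reader.yaml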
Now we can use this clusterrole to create a clusterrolebinding:
kubectl create clusterrolebinding service-reader-pod \
  --clusterrole=service-reader \
  --serviceaccount=default:default
In the above command, service-reader-pod is the name of the ClusterRoleBinding, and it binds the service-reader ClusterRole to the default service account in the default namespace. Similar steps can be followed for the second error you are facing; see the sketch below.
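For the second error, a sketch of the analogous binding for the default service account in the mycomp-services-process namespace (the binding name here is arbitrary):
kubectl create clusterrolebinding service-reader-mycomp \
  --clusterrole=service-reader \
  --serviceaccount=mycomp-services-process:default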
In this case I created a ClusterRole and ClusterRoleBinding, but you might want to create a Role and RoleBinding instead; a sketch follows. You can check the documentation in detail here.
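A minimal namespaced sketch, assuming access is only needed within mycomp-services-process (the names service-reader and service-reader-binding are illustrative):
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: mycomp-services-process
  name: service-reader
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["services"]
  verbs: ["get", "watch", "list"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: service-reader-binding
  namespace: mycomp-services-process
subjects:
- kind: ServiceAccount
  name: default
  namespace: mycomp-services-process
roleRef:
  kind: Role
  name: service-reader
  apiGroup: rbac.authorization.k8s.io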

This is only for non-production clusters.
You should bind the service account system:serviceaccount:default:default (which is the default account bound to pods) to the cluster-admin role. Just create a YAML file (named, for example, fabric8-rbac.yaml) with the following contents:
# NOTE: The service account `default:default` already exists in the k8s cluster.
# You can create a new account like this:
#---
#apiVersion: v1
#kind: ServiceAccount
#metadata:
#  name: <new-account-name>
#  namespace: <namespace>
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: fabric8-rbac
subjects:
- kind: ServiceAccount
  # Must match the ServiceAccount's `metadata.name`
  name: default
  # Must match the ServiceAccount's `metadata.namespace`
  namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
Then, apply it by running kubectl apply -f fabric8-rbac.yaml.
If you want to unbind it, just run kubectl delete -f fabric8-rbac.yaml.
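To verify the binding took effect, kubectl can impersonate the service account:
kubectl auth can-i get services \
  --as=system:serviceaccount:default:default \
  -n mycomp-services-process
This should print yes once the ClusterRoleBinding is in place.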

Just to add: this can also occur when you are redeploying an existing application to the wrong, similar-looking Kubernetes cluster.
Make sure the Kubernetes cluster you're deploying to is the correct one; see the check below.
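A quick way to confirm which cluster kubectl is pointing at:
kubectl config current-context
kubectl config get-contexts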

Related

Forbidden after enabling Google Cloud Groups RBAC in GKE

We are enabling Google Cloud Groups RBAC in our existing GKE clusters.
For that, we first created all the groups in Workspace, along with the required "gke-security-groups@ourdomain.com" group, according to the documentation.
Those groups are created in Workspace through an integration with Active Directory for Single Sign-On.
All groups are members of "gke-security-groups@ourdomain" as stated in the documentation, and all groups can view members.
The cluster was updated to enable the Google Cloud Groups RBAC flag, and we specified the value "gke-security-groups@ourdomain.com".
We then added one of the groups (let's call it group_a@ourdomain.com) to IAM and assigned a custom role which only grants:
"container.apiServices.get",
"container.apiServices.list",
"container.clusters.getCredentials",
"container.clusters.get",
"container.clusters.list",
This is just the minimum for the user to be able to log into the Kubernetes cluster and, from there, apply Kubernetes RBAC.
In Kubernetes, we applied a Role, which allows listing pods in a specific namespace, and a RoleBinding that specifies the group we just added to IAM.
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: test-role
  namespace: custom-namespace
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: test-rolebinding
  namespace: custom-namespace
roleRef:
  kind: Role
  name: test-role
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: Group
  name: group_a@ourdomain.com
Everything looks good until now. But when trying to list the pods of this namespace with a user that belongs to the group "group_a@ourdomain.com", we get:
Error from server (Forbidden): pods is forbidden: User
"my-user@ourdomain.com" cannot list resource "pods" in API group ""
in the namespace "custom-namespace": requires one of ["container.pods.list"]
permission(s).
Of course, if I add container.pods.list to the role assigned to group_a@ourdomain, I can list pods, but it opens up all namespaces, as this permission in GCloud is global.
What am I missing here?
Not sure if this is relevant, but our organisation in gcloud is called, for example, "my-company.io", while the groups for SSO are named "...@groups.my-company.io", and the gke-security-groups group was also created in the "groups.my-company.io" domain.
Also, if instead of a Group in the RoleBinding, I specify the user directly, it works.
It turned out to be an issue with case-sensitive strings, and nothing related to the actual rules defined in the RBAC objects, which were working as expected.
The names of the groups were created in Azure AD in camel case. These group names were then shown in Google Workspace all lowercase.
Example in Azure AD:
thisIsOneGroup@groups.mycompany.com
Example configured in the RBACs as shown in Google Workspace:
thisisonegroup@groups.mycompany.com
We copied the names from the Google Workspace UI all lowercase, put them in the bindings, and that caused the issue. GKE's Kubernetes RBAC is case sensitive, so the name configured in the binding did not match the email configured in Google Workspace.
After changing the RBAC bindings to have the same format, everything worked as expected.
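If you hit something similar, one way to double-check what is actually stored in the binding (names here match the example above) is:
kubectl get rolebinding test-rolebinding -n custom-namespace -o yaml
Then compare subjects[].name character by character against the group email as defined in the identity provider.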
Looks like you are trying to grant access to deployments in the extensions and apps API groups. That requires you to specify the extensions and apps API groups in your role rules:
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - '*'
- apiGroups:
  - extensions
  - apps
  resources:
  - deployments
  - replicasets
  verbs:
  - '*'
I can also recommend recreating the role and role bindings; a sketch of that step follows. You can use this thread as a reference as well: RBAC issue : Error from server (Forbidden).
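A minimal sketch of the recreate step, assuming the Role and RoleBinding live in manifest files (the filenames are illustrative):
kubectl delete -f test-role.yaml -f test-rolebinding.yaml
kubectl apply -f test-role.yaml -f test-rolebinding.yaml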
Edited 01/26/22:
Can you please confirm that you provided the credentials or configuration file (manifest, YAML)? As you may know, this information is provided by Kubernetes via the default service account. You can verify your permissions by running:
$ kubectl auth can-i get pods
The account type you need to use here is a service account. To create a new service account with a wider set of permissions, the following is a YAML example:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-read-role
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: pod-read-sa
  namespace: default
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-read-rolebinding
  namespace: default
subjects:
- kind: ServiceAccount
  name: pod-read-sa
  # A ServiceAccount subject requires a namespace
  namespace: default
roleRef:
  kind: Role
  name: pod-read-role
  # roleRef.apiGroup must be rbac.authorization.k8s.io, not ""
  apiGroup: rbac.authorization.k8s.io
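A possible way to apply and verify this setup (the filename is illustrative; kubectl auth can-i supports impersonation via --as):
kubectl apply -f pod-read.yaml
kubectl auth can-i list pods -n default \
  --as=system:serviceaccount:default:pod-read-sa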
Please use the following thread as a reference.

Inadvertently deleted admin clusterrole and can't access cluster resources

I deleted my cluster-admin role via kubectl using:
kubectl delete clusterrole cluster-admin
Not sure what I expected, but now I don't have access to the cluster from my account. Any attempt to get or change resources using kubectl returns a 403, Forbidden.
Is there anything I can do to revert this change without blowing away the cluster and creating a new one? I have a managed cluster on Digital Ocean.
Not sure what I expected, but now I don't have access to the cluster from my account.
If none of the kubectl commands work, unfortunately you will not be able to create a new cluster role that way; the problem is that you can't do anything without an admin role. You can try creating the cluster-admin role directly through the API (not using kubectl), as sketched below; if that doesn't help, you will have to recreate the cluster.
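A rough sketch of the direct-API route, assuming you still have some credential the API server accepts (for example a token from your provider's control panel; the server address and token are placeholders):
curl -k -X POST \
  -H "Authorization: Bearer <token>" \
  -H "Content-Type: application/yaml" \
  --data-binary @cluster-admin.yaml \
  https://<apiserver>:6443/apis/rbac.authorization.k8s.io/v1/clusterroles
Here cluster-admin.yaml would contain a cluster-admin ClusterRole manifest like the one in the next answer.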
Try applying this YAML to create the new ClusterRole:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: cluster-admin
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - '*'
- nonResourceURLs:
  - '*'
  verbs:
  - '*'
Then apply the YAML file:
kubectl apply -f <filename>.yaml

What's the difference between "kubectl auth reconcile" and "kubectl apply" for working with RBAC?

I have an ordinary YAML file defining an ordinary Role resource; the YAML should reflect my resource's desired state.
To get the new Role into the cluster I usually run kubectl apply -f my-new-role.yaml,
but now I see this (recommended!?) alternative: kubectl auth reconcile -f my-new-role.yaml.
OK, there may be RBAC relationships, i.e. bindings, but shouldn't an apply do the same thing?
Is there ever a case where one would update (cluster) roles but not want their related (cluster) bindings updated?
The kubectl auth reconcile command-line utility was added in Kubernetes v1.8.
Properly applying RBAC permissions is a complex task because you need to compute logical covers operations between rule sets.
As you can see in the CHANGELOG-1.8.md:
Added RBAC reconcile commands with kubectl auth reconcile -f FILE. When passed a file which contains RBAC roles, rolebindings, clusterroles, or clusterrolebindings, this command computes covers and adds the missing rules. The logic required to properly apply RBAC permissions is more complicated than a JSON merge because you have to compute logical covers operations between rule sets. This means that we cannot use kubectl apply to update RBAC roles without risking breaking old clients, such as controllers.
The kubectl auth reconcile command will ignore any resources that are not Role, RoleBinding, ClusterRole, and ClusterRoleBinding objects, so you can safely run reconcile on the full set of manifests (see: Use 'kubectl auth reconcile' before 'kubectl apply')
I've created an example to demonstrate how useful the kubectl auth reconcile command is.
I have a simple RoleBinding that references the secret-reader Role, and I want to change the binding's roleRef (i.e., change the Role that this binding refers to):
NOTE: A binding to a different role is a fundamentally different binding (see: A binding to a different role is a fundamentally different binding).
# BEFORE
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: secret-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: secret-reader
subjects:
- kind: ServiceAccount
  name: service-account-1
  namespace: default

# AFTER
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: secret-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: secret-creator
subjects:
- kind: ServiceAccount
  name: service-account-1
  namespace: default
As we know, roleRef is immutable, so it is not possible to update this secret-admin RoleBinding using kubectl apply:
$ kubectl apply -f secret-admin.yml
The RoleBinding "secret-admin" is invalid: roleRef: Invalid value: rbac.RoleRef{APIGroup:"rbac.authorization.k8s.io", Kind:"Role", Name:"secret-creator"}: cannot change roleRef
Instead, we can use kubectl auth reconcile. If a RoleBinding is updated to a new roleRef, the kubectl auth reconcile command handles deleting and recreating the related objects for us:
$ kubectl auth reconcile -f secret-admin.yml
rolebinding.rbac.authorization.k8s.io/secret-admin reconciled
reconciliation required recreate
Additionally, you can use the --remove-extra-permissions and --remove-extra-subjects options; for example:
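A sketch of reconciling while also pruning subjects and permissions that are not in the file:
kubectl auth reconcile -f secret-admin.yml \
  --remove-extra-subjects \
  --remove-extra-permissions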
Finally, we can check if everything has been successfully updated:
$ kubectl describe rolebinding secret-admin
Name:         secret-admin
Labels:       <none>
Annotations:  <none>
Role:
  Kind:  Role
  Name:  secret-creator
Subjects:
  Kind            Name               Namespace
  ----            ----               ---------
  ServiceAccount  service-account-1  default

Kubernetes OIDC: No valid group mapping

I have the problem that I can log on to my dashboard via OIDC, but the OIDC group information is not mapped correctly, so I cannot access the corresponding resources.
Basic setup
K8s version: 1.19.0
K8s setup: 1 master + 2 worker nodes
Based on Debian 10 VMs
CNI: Calico
Louketo Proxy as OIDC proxy
OIDC: Keycloak Server (Keycloak X [Quarkus])
Configurations
I have configured the K8s apiserver with these parameters.
kube-apiserver.yaml
- --oidc-issuer-url=https://test.test.com/auth/realms/Test
- --oidc-client-id=test
- --oidc-username-claim=preferred_username
- --oidc-username-prefix="oidc:"
- --oidc-groups-claim=groups
- --oidc-groups-prefix="oidc:"
ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: "test-cluster-admin"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: "Test"
I used the following Louketo parameters.
Louketo Proxy
/usr/bin/louketo-proxy --discovery-url=$OIDC_DISCOVERY_URL --client-id=$OIDC_CLIENT_ID --client-secret=$OIDC_CLIENT_SECRET -listen=$OIDC_LISTEN_URL --encryption-key=$OIDC_ENCRYPTION_KEY --redirection-url=$OIDC_REDIRECTION_KEY --enable-refresh-tokens=true --upstream-url=$OIDC_UPSTREAM_URL --enable-metrics
I get the following error message inside the dashboard.
K8s error
replicasets.apps is forbidden: User "\"oidc:\"<user_name>" cannot list resource "replicasets" in API group "apps" in the namespace "default"
I hope you can help me with this problem. I have already tried most of the guides on the internet, but haven't found a solution yet.
PS: I have done the corresponding group mapping in the Keycloak server and also validated that the group entry is transferred.
If you are facing the same challenge as I was, and you want to integrate Keycloak into your K8s cluster, share the dashboard, and connect it to Keycloak, you can find my configuration below. Within my cluster I use the Louketo Proxy as the interface between Kubernetes and Keycloak. The corresponding deployment configuration is not included in this post.
Keycloak
I want to start with the configuration of Keycloak. In the first step I created a corresponding client with the appropriate settings.
After that I created the two mappers: group membership and audience (the latter needed by the Louketo proxy).
The exact settings of the mappers can be taken from the two images:
Group membership mapping
Audience mapping
Kubernetes
In the second step I had to update the API server manifest and create the RoleBinding and ClusterRoleBinding within the Kubernetes cluster.
API server manifest (default path: /etc/kubernetes/manifests/kube-apiserver.yaml)
- --oidc-issuer-url=https://test.test.com/auth/realms/Test
- --oidc-client-id=test
- --oidc-username-claim=preferred_username
- --oidc-username-prefix="oidc:"
- --oidc-groups-claim=groups
- --oidc-groups-prefix="oidc:"
RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: "test"
  namespace: "kubernetes-dashboard"
subjects:
- kind: User
  name: "\"oidc:\"Test"
  namespace: "kube-system"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: "test"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: "\"oidc:\"Test"
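To sanity-check the group mapping, kubectl's impersonation flags can emulate the user and groups the API server sees (a rough example; substitute your own user and group names):
kubectl auth can-i list replicasets.apps -n default \
  --as='"oidc:"<user_name>' \
  --as-group='"oidc:"Test'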
I hope I can help the community with this configuration. If you have any questions, feel free to ask me.
This is a community wiki answer aimed at approaching the issue from the Kubernetes side. Anyone familiar with a possible Keycloak group/role mapping solution, feel free to edit it.
The error you see means that the service account for OIDC doesn't have the proper privileges to list replicasets in the default namespace. The easiest way out of it would be to simply set up the ServiceAccount, ClusterRole and ClusterRoleBinding from scratch and make sure they have the proper privileges. For example, you can create a ClusterRoleBinding to the admin ClusterRole by executing:
kubectl create clusterrolebinding OIDCrolebinding --clusterrole=admin --group=system:serviceaccounts:OIDC
The same can be done for the ClusterRole:
kubectl create clusterrole OIDC --verb=get,list,watch --resource=replicasets --namespace=default
More examples of how to use kubectl create in this scenario can be found here.
Here you can find a whole official guide regarding the RBAC Authorization.
EDIT:
Also, please check whether your ClusterRoleBinding for the "\"oidc:\"<user_name>" user is in the "default" namespace.

Kubernetes RBAC permissions - unknown 'clusterrole' flag when attempting to grant permissions?

I am using the Mirantis kubeadm-dind-cluster repository (https://github.com/Mirantis/kubeadm-dind-cluster) as my Kubernetes install; I came across this error when attempting to run a container:
panic: customresourcedefinitions.apiextensions.k8s.io is forbidden: User "system:serviceaccount:default:default" cannot create customresourcedefinitions.apiextensions.k8s.io at the cluster scope
So I attempted to add cluster-admin permissions to my account:
kubectl create clusterrolebinding serviceaccounts-cluster-admin --clusterrole=cluster-admin --group=system:serviceaccounts
And get the following error:
Error: unknown flag: --clusterrole
Why is this? How do I fix it or work around it? I'm not sure how to convert the command into a YAML file for "kubectl create -f", but it seems like that might be the way to go.
All three nodes are on version 1.8.6.
What version of kubectl are you using? Be sure you are using a version that includes the kubectl create clusterrolebinding command (you can check with kubectl version).
If your version of kubectl does not support that command, you can try creating it directly via a yaml file (though I'm not sure whether 1.5.x kubectl was happy submitting versions of API objects it didn't know about):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: serviceaccounts-cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts
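If your kubectl supports it, you can then apply the manifest (the filename is illustrative):
kubectl create -f serviceaccounts-cluster-admin.yaml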