RBAC Error in Kubernetes

I have deployed Kubernetes v1.8 in my workplace. I created roles for admin and view access to namespaces 3 months ago. In the initial phase, RBAC worked as per the access given to the users. Now RBAC is no longer being enforced: everyone who has access to the cluster has cluster-admin access.
Can you suggest what errors or changes might have caused this?

Ensure the RBAC authorization mode is still being used (i.e., --authorization-mode=…,RBAC is among the API server arguments).
If it is, then check for a ClusterRoleBinding that grants the cluster-admin role to all authenticated users:
kubectl get clusterrolebindings -o yaml | grep -C 20 system:authenticated
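If the flag was switched (for example to AlwaysAllow) during an upgrade or config change, every authenticated user effectively becomes cluster-admin. A minimal check, assuming a static-pod control plane (the manifest path may differ on your installation):

grep authorization-mode /etc/kubernetes/manifests/kube-apiserver.yaml
# healthy output looks something like: --authorization-mode=Node,RBAC

For the second case, a binding that would produce exactly this symptom looks like the following sketch (the metadata name is hypothetical; the dangerous part is the cluster-admin roleRef combined with the system:authenticated group subject):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: permissive-binding        # hypothetical name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:authenticated

If you find such a binding, deleting it (kubectl delete clusterrolebinding permissive-binding) restores normal RBAC enforcement.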

Related

Grant privileges to specific namespaces for every user

I have a bunch of users. Every user should be able to create/change/delete resources in namespaces named like *-stage. Namespaces can be added or removed dynamically. I can create a ServiceAccount in every namespace and grant it privileges.
I created a pod in k8s, installed kubectl in it, and ssh into it. So every user has access to this pod and can use kubectl. I know that I can mount ServiceAccount secrets into the pod. But since I have a different ServiceAccount for every namespace, I don't know how to grant privileges to all *-stage namespaces for every user. I don't want to create a cluster-admin ClusterRoleBinding for the ServiceAccount, because users should only be able to modify *-stage namespaces. Can you help me please?
I am posting a community wiki answer based on OP's solution for better visibility:
Actually, I have already solved the problem. I created a ["*"] role in every *-stage namespace and bound it to a ServiceAccount. Then I mounted that ServiceAccount into the kubectl pod which is available over ssh. So every user has unlimited access to the *-stage namespaces.
Also, I am adding links to the official docs on ServiceAccounts and role-based access control as a supplement, and a sketch of the manifests below.
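For completeness, a minimal sketch of what such a per-namespace role and binding could look like (all names and the foo-stage namespace are placeholders; the ["*"] wildcards are what grant the unrestricted access described above):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: stage-full-access             # hypothetical name
  namespace: foo-stage                # repeated for each *-stage namespace
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: stage-full-access-binding     # hypothetical name
  namespace: foo-stage
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: stage-full-access
subjects:
- kind: ServiceAccount
  name: kubectl-pod-sa                # hypothetical; the ServiceAccount mounted into the kubectl pod
  namespace: default

Note that a RoleBinding may reference a ServiceAccount from another namespace, so a single ServiceAccount mounted into the shared kubectl pod can be bound in every *-stage namespace this way.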

GCP K8s (kubectl) error message (Required "container.leases.get" permission)

I am getting an error message after running some kubectl commands (via the GCP command line, gcloud). I have a K8s cluster created in GKE.
Example:
kubectl describe node gke_k8s_cluster_name
Error from server (Forbidden): leases.coordination.k8s.io "gke_k8s_cluster_name" is forbidden: User "MY_SERVICE_ACCOUNT" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease": Required "container.leases.get" permission.
The point is that the "container.leases.get" permission is not listed in IAM (either as a custom permission or in a regular role).
How could I grant that permission to the service account in GCP?
thanks,
Jose
You may need to grant additional permissions to yourself on both the GCP IAM and the GKE RBAC side, for example:
PROJECT_ID=$(gcloud config get-value core/project)
USER_ID=$(gcloud config get-value core/account)
# IAM side: grant the Kubernetes Engine Admin role on the project
gcloud projects add-iam-policy-binding ${PROJECT_ID} --member=user:${USER_ID} --role=roles/container.admin
# RBAC side: grant cluster-admin inside the cluster
kubectl create clusterrolebinding cluster-admin-binding --clusterrole cluster-admin --user ${USER_ID}
See also GCP IAM & GKE RBAC integration.
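Once those bindings are in place, you can verify that the originally failing permission check now passes:

kubectl auth can-i get leases.coordination.k8s.io -n kube-node-lease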

How do permissions in a GCloud IAM role get implemented in a kubernetes cluster?

I am running a Kubernetes application on GKE. In the GCP IAM console, I can see several built-in roles, e.g. Kubernetes Engine Admin. Each role has an ID and permissions associated with it; for example, Kubernetes Engine Admin has ID roles/container.admin and ~300 permissions, each something like container.apiServices.create.
In the Kubernetes cluster, I can run:
kubectl get clusterrole | grep -v system: # exclude system roles
This returns the following:
NAME                                     AGE
admin                                    35d
cloud-provider                           35d
cluster-admin                            35d
cluster-autoscaler                       35d
edit                                     35d
gce:beta:kubelet-certificate-bootstrap   35d
gce:beta:kubelet-certificate-rotation    35d
gce:cloud-provider                       35d
kubelet-api-admin                        35d
view                                     35d
I do not see any roles in this table that reflect the roles in GCP IAM.
That being the case, how are the GCP IAM roles implemented/enforced in a cluster? Does Kubernetes talk to GCP, in addition to using RBAC, when doing permissions checks?
The RBAC system lets you exercise fine-grained control over how users access the API resources running on your cluster. You can use RBAC to dynamically configure permissions for your cluster's users and define the kinds of resources they can interact with.
Moreover, GKE also uses Cloud Identity and Access Management (IAM) to control access to your cluster.
Hope this helps!
Permissions on GKE are additive across IAM and RBAC, so be careful with that: if you grant cluster-admin-level permissions in IAM, for example, you will have no way to grant fewer permissions through RBAC.
If you want to use RBAC, you will need to give the user the lowest IAM permission that fits your use case, and then manage their permissions granularly through RBAC.
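As an illustration of that pattern (the project ID, user e-mail, and namespace below are placeholders), you could grant only a minimal IAM role so the user can fetch cluster credentials, and then scope their real permissions per namespace with RBAC:

# IAM side: lowest-privilege role that still allows getting cluster credentials
gcloud projects add-iam-policy-binding my-project \
  --member=user:jane@example.com --role=roles/container.clusterViewer
# RBAC side: grant the built-in edit role, but only within the dev namespace
kubectl create rolebinding jane-edit --clusterrole=edit \
  --user=jane@example.com --namespace=dev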

How to restrict default Service account from creating/deleting kubernetes resources

I am using Google cloud's GKE for my kubernetes operations.
I am trying to restrict access for the users that access the clusters using the command line. I have applied IAM roles in Google Cloud and given the view role to the service accounts and users. It all works fine through the API or with "--as" in kubectl commands, but when someone runs kubectl create for an object without specifying "--as", the object still gets created with the "default" service account of that particular namespace.
To overcome this problem we gave restricted access to the "default" service account, but we were still able to create objects.
$ kubectl auth can-i create deploy --as default -n test-rbac
no
$ kubectl run nginx-test-24 -n test-rbac --image=nginx
deployment.apps "nginx-test-24" created
$ kubectl describe rolebinding default-view -n test-rbac
Name:         default-view
Labels:       <none>
Annotations:  <none>
Role:
  Kind:  ClusterRole
  Name:  view
Subjects:
  Kind            Name     Namespace
  ----            ----     ---------
  ServiceAccount  default  test-rbac
I expect users who access the cluster through the CLI to be unable to create objects if they don't have permissions; even if they don't use the "--as" flag, they should be restricted.
Please take into account that you first need to review the prerequisites for using RBAC in GKE.
Also, please note that IAM roles apply to the entire Google Cloud project and all clusters within that project, while RBAC enables fine-grained authorization at the namespace level. So, with GKE, these approaches to authorization work in parallel.
For more references, please take a look at this document: RBAC in GKE.
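One testing detail worth noting: "--as default" impersonates a plain user named "default", not the ServiceAccount. To check what the default ServiceAccount itself can do, impersonate it by its full subject name, for example:

kubectl auth can-i create deployments -n test-rbac --as=system:serviceaccount:test-rbac:default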
For all the haters of this question, I wish you could've tried pointing to this:
There is a file at:
~/.config/gcloud/configurations/config_default
In it there is an option under the [container] section:
use_application_default_credentials
Set it to true.
Here you go, you learnt something new. Enjoy. I wish you could have tried helping instead of down-voting.
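For reference, the relevant section of that file would look like this:

[container]
use_application_default_credentials = true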

Kubernetes RBAC authentication for default user

I am using kops in AWS to create my Kubernetes cluster.
I have created a cluster with RBAC enabled via --authorization=RBAC as described here.
I am trying to use the default service account token to interact with the cluster and getting this error:
Error from server (Forbidden): User "system:serviceaccount:default:default" cannot list pods in the namespace "default". (get pods)
Am I missing a role or binding somewhere?
I think it is not a good idea to give the cluster-admin role to the default service account in the default namespace.
If you give cluster-admin access to the default service account in the default namespace, every app (pod) deployed to the cluster in the default namespace will be able to manipulate the cluster (delete system pods/deployments or do other bad things).
By default, the cluster-admin ClusterRole is given to the default service account in the kube-system namespace.
You can use that one for interacting with the cluster.
Try granting it the cluster-admin role:
kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=default:default
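If you would rather avoid handing out cluster-admin (see the warning above), a more restrictive sketch is to bind a narrower built-in role in just one namespace; for example, read-only access via the view ClusterRole, which covers the failing "list pods" call (the binding name here is arbitrary):

kubectl create rolebinding default-sa-view --clusterrole=view \
  --serviceaccount=default:default --namespace=default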