Kubernetes cannot create resource "namespaces" in API group "" at the cluster scope even after creating rolebindings - kubernetes

I'm running a pipeline that creates a Kubernetes namespace, but when I run it I get:
Error from server (Forbidden): namespaces is forbidden: User "system:serviceaccount:gitlab-runner:default" cannot create resource "namespaces" in API group "" at the cluster scope
I created a ClusterRole and a ClusterRoleBinding to allow the default service account in the gitlab-runner namespace to create namespaces:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: modify-namespace
rules:
- apiGroups: [""]
  resources:
  - namespace
  verbs:
  - create
and:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: modify-namespace-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: modify-namespace
subjects:
- kind: ServiceAccount
  name: default
  namespace: gitlab-runner
But that gives me the same error.
What am I doing wrong?

[""] in clusterrole manifest it should be just "".
because [""] will be array where apiGroups expects a string.
under resources it should be namespaces not namespace because :
kubectl api-resources | grep 'namespace\|NAME'
NAME SHORTNAMES APIVERSION NAMESPACED KIND
namespaces ns v1 false Namespace
So the ClusterRole manifest should be as follows:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: modify-namespace
rules:
- apiGroups: [""]
  resources:
  - namespaces
  verbs:
  - create
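Once the fixed ClusterRole is applied, you can verify the permission without re-running the pipeline by impersonating the service account (the file name here is just an example):
kubectl apply -f modify-namespace.yaml
kubectl auth can-i create namespaces --as=system:serviceaccount:gitlab-runner:default
The second command should print yes.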

I had this issue below:
Namespaces is forbidden: User "system:serviceaccount:openshift-operators:minio-operator" cannot create resource "namespaces" in API group "" at the cluster scope
It got solved with the YAML below:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-role-cesar-3
rules:
- apiGroups: [""]
  resources:
  - namespaces
  verbs:
  - create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: role-binding-cesar-3
subjects:
- kind: ServiceAccount
  name: minio-operator
  namespace: openshift-operators
roleRef:
  kind: ClusterRole
  name: cluster-role-cesar-3
  apiGroup: rbac.authorization.k8s.io
Note that ClusterRoleBindings are cluster-scoped, so they take no metadata.namespace; the namespace only appears on the ServiceAccount subject.
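As a quick check after applying both objects (assuming they are saved together in a file, e.g. minio-rbac.yaml):
kubectl apply -f minio-rbac.yaml
kubectl auth can-i create namespaces --as=system:serviceaccount:openshift-operators:minio-operator
This should print yes.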

Related

Restricted user in K8s needs CRD access

In my scenario a user has access to four namespaces only; he switches between namespaces using the contexts below. How can I give him access to CRDs along with his existing access to the four namespaces?
CURRENT   NAME                      CLUSTER   AUTHINFO                       NAMESPACE
*         dev-crd-ns-user           dev       dev-crd-ns-user                dev-crd-ns
          dev-mon-fe-ns-user        dev       dev-mon-fe-ns-user             dev-mon-fe-ns
          dev-strimzi-operator-ns   dev       dev-strimzi-operator-ns-user   dev-strimzi-operator-ns
          dev-titan-ns-1            dev       dev-titan-ns-1-user            dev-titan-ns-1
hifi#101common:/root$ kubectl get secret
NAME                          TYPE                                  DATA   AGE
default-token-mh7xq           kubernetes.io/service-account-token   3      8d
dev-crd-ns-user-token-zd6xt   kubernetes.io/service-account-token   3      8d
exfo#cmme101common:/root$ kubectl get crd
Error from server (Forbidden): customresourcedefinitions.apiextensions.k8s.io is forbidden: User "system:serviceaccount:dev-crd-ns:dev-crd-ns-user" cannot list resource "customresourcedefinitions" in API group "apiextensions.k8s.io" at the cluster scope
I tried the two options below. Option 2 is the recommended approach, but neither worked; both still fail with the same "cluster scope" Forbidden error shown above.
Option 1: Adding CRD to existing role
role
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: dev-ns-user-full-access
  namespace: dev-crd-ns
rules:
- apiGroups:
  - ""
  - extensions
  - apps
  - networking.k8s.io
  - apiextensions.k8s.io
  resources:
  - '*'
  - customresourcedefinitions
  verbs:
  - '*'
- apiGroups:
  - batch
  resources:
  - jobs
  - cronjobs
  verbs:
  - '*'
role binding
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-crd-ns-user-view
  namespace: dev-crd-ns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: dev-crd-ns-user-full-access
subjects:
- kind: ServiceAccount
  name: dev-crd-ns-user
  namespace: dev-crd-ns
Option 2: Adding CRD access as a new Role in the "dev-crd-ns" namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev-crd-ns
  name: crd-admin
rules:
- apiGroups: ["apiextensions.k8s.io"]
  resources: ["customresourcedefinitions"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: crd-admin
  namespace: dev-crd-ns
subjects:
- kind: ServiceAccount
  name: dev-crd-ns-user
  namespace: dev-crd-ns
roleRef:
  kind: Role
  name: crd-admin
  apiGroup: rbac.authorization.k8s.io
CustomResourceDefinitions are cluster-scoped, so a namespaced Role can never grant list on them at the cluster scope (which is why both options above fail). You need a ClusterRole and a ClusterRoleBinding for each service account like dev-crd-ns-user.
For dev-crd-ns-user:
Create a ClusterRole:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: crd-admin
rules:
- apiGroups: ["apiextensions.k8s.io"]
  resources: ["customresourcedefinitions"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
$ kubectl apply -f crd-admin-role.yaml
Then bind it to the service account with a ClusterRoleBinding:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: crd-admin
subjects:
- kind: ServiceAccount
  name: dev-crd-ns-user
  namespace: dev-crd-ns
roleRef:
  kind: ClusterRole
  name: crd-admin
  apiGroup: rbac.authorization.k8s.io
$ kubectl apply -f crd-admin-role-binding.yaml
Now the SA dev-crd-ns-user has full access to customresourcedefinitions.
Follow similar steps for the rest of the service accounts.
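You can confirm the grant without switching contexts by impersonating the service account:
kubectl auth can-i list customresourcedefinitions --as=system:serviceaccount:dev-crd-ns:dev-crd-ns-user
This should print yes once the ClusterRoleBinding is in place.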

Forbidden error while describe/scale deployment by user system:node:ip.xx

I'm trying to execute kubectl commands from inside a container (name: autodeploy).
I have configured a ClusterRole, a ServiceAccount and a ClusterRoleBinding, but I get a Forbidden error when performing describe and scale actions on Deployments.
Error from server (Forbidden): deployments.apps "test-deployment" is
forbidden: User "system:node:ip-xx-xx-xx-xx.ec2.internal" cannot get
resource "deployments" in API group "apps" in the namespace "abc"
The autodeploy container is also in the same namespace abc.
ClusterRole:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: autodeploy
rules:
- apiGroups: ["*"]
  resources: ["deployments", "deployments/scale", "pods"]
  verbs: ["get", "list", "update"]
ServiceAccount:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: autodeploy
  namespace: abc
ClusterRoleBinding:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: autodeploy
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: autodeploy
subjects:
- kind: ServiceAccount
  name: autodeploy
  namespace: abc
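Note the identity in the error: it is system:node:ip-xx-xx-xx-xx.ec2.internal, not system:serviceaccount:abc:autodeploy. The requests are not being authenticated with the service account token at all; kubectl inside the container is most likely picking up the node's credentials (for example via a host-mounted kubeconfig). A minimal sketch of attaching the service account to the pod, with placeholder image details:
apiVersion: v1
kind: Pod
metadata:
  name: autodeploy
  namespace: abc
spec:
  serviceAccountName: autodeploy     # use this SA's token instead of node credentials
  containers:
  - name: autodeploy
    image: example/autodeploy:latest # placeholder image
With serviceAccountName set and no KUBECONFIG overriding it, kubectl in the container uses the mounted token and the ClusterRoleBinding above takes effect.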

Kubernetes cannot list pods as user

I am seeing this error for user jenkins when deploying.
Error: pods is forbidden: User "system:serviceaccount:ci:jenkins" cannot list pods in the namespace "kube-system"
I have created a definition for the service account:
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: jenkins
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get","list","watch"]
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins
subjects:
- kind: ServiceAccount
  name: jenkins
I have created a ClusterRoleBinding
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: jenkins
  namespace: kube-system
Any advice?
Without a namespace in the ServiceAccount manifest you automatically create it in the default namespace. The same goes for the Role, which is always a single-namespace resource.
What you need to do is create a ClusterRole with the correct permissions (basically just change your Role into a ClusterRole) and then set the correct namespace either on the ServiceAccount resource or in the ClusterRoleBinding subject.
You can also skip creating the Role and RoleBinding, as the ClusterRole and ClusterRoleBinding make them redundant.
--
With that said, it's always good practice to create a specific ServiceAccount and RoleBinding per namespace when it comes to deploys, so that you don't accidentally create an admin account which is used in a remote CI tool like... Jenkins ;)
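Putting that advice together, a minimal sketch might look like the following; the ci namespace matches the error message, and the rules are trimmed to read-only pod access as an illustration:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
  namespace: ci
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: jenkins
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: jenkins
subjects:
- kind: ServiceAccount
  name: jenkins
  namespace: ci
This grants pod reads cluster-wide (which covers kube-system); to follow the per-namespace advice above, replace the ClusterRoleBinding with a RoleBinding to the ClusterRole in each namespace Jenkins actually needs.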

Kubernetes RBAC rules for PersistentVolume

I'm trying to create an RBAC Role / rules for a service that needs a persistent volume, and it's still failing with a Forbidden error.
Here is my role config:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: logdrop-user-full-access
  namespace: logdrop
rules:
- apiGroups: ["", "extensions", "apps", "autoscaling"]
  resources: ["*"]
  verbs: ["*"]
- apiGroups: ["batch"]
  resources:
  - jobs
  - cronjobs
  verbs: ["*"]
And this is my cut-down PersistentVolume manifest:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: logdrop-pv
  namespace: logdrop
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    namespace: logdrop
    name: logdrop-pvc
  hostPath:
    path: /efs/logdrop/logdrop-pv
When I try to apply it I get a forbidden error.
$ kubectl --kubeconfig ~/logdrop/kubeconfig-logdrop.yml apply -f pv-test.yml
Error from server (Forbidden): error when retrieving current configuration of:
Resource: "/v1, Resource=persistentvolumes", GroupVersionKind: "/v1, Kind=PersistentVolume"
Name: "logdrop-pv", Namespace: ""
Object: &{map["apiVersion":"v1" "kind":"PersistentVolume" "metadata":map["annotations":map["kubectl.kubernetes.io/last-applied-configuration":""] "name":"logdrop-pv"] "spec":map["accessModes":["ReadWriteMany"] "capacity":map["storage":"10Gi"] "claimRef":map["name":"logdrop-pvc" "namespace":"logdrop"] "hostPath":map["path":"/efs/logdrop/logdrop-pv"] "persistentVolumeReclaimPolicy":"Retain"]]}
from server for: "pv-test.yml": persistentvolumes "logdrop-pv" is forbidden: User "system:serviceaccount:logdrop:logdrop-user" cannot get resource "persistentvolumes" in API group "" at the cluster scope
On the last line it specifically says resource "persistentvolumes" in API group "" - that's what I have allowed in the rules!
I can create the PV with admin credentials from the same yaml file and I can create any other resources (pods, services, etc) with the logdrop permissions. Just the PersistentVolume doesn't work for some reason. Any idea why?
I'm using Kubernetes 1.15.0.
Update:
This is my role binding as requested:
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: logdrop-user-view
  namespace: logdrop
subjects:
- kind: ServiceAccount
  name: logdrop-user
  namespace: logdrop
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: logdrop-user-full-access
It's not a ClusterRoleBinding as my intention is to give the user access only to one namespace (logdrop), not to all namespaces across the cluster.
PVs, namespaces, nodes and storage classes are cluster-scoped objects. To be able to list/watch those objects, you need to create a ClusterRole and bind it to the ServiceAccount via a ClusterRoleBinding. As an example:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: <name of your cluster role>
rules:
- apiGroups: [""]
  resources:
  - nodes
  - persistentvolumes
  - namespaces
  verbs: ["list", "watch"]
- apiGroups: ["storage.k8s.io"]
  resources:
  - storageclasses
  verbs: ["list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: <name of your cluster role binding>
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: <name of your cluster role which should be matched with the previous one>
subjects:
- kind: ServiceAccount
  name: <service account name>
  namespace: <service account namespace>
I see a potential problem here.
PersistentVolumes are cluster-scoped resources. They are expected to be provisioned by the administrator without any namespace.
PersistentVolumeClaims, however, can be created by users within a particular namespace, as they are namespaced resources.
That's why it works when you use admin credentials but returns an error with logdrop.
Please let me know if that makes sense.
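To illustrate the split: an admin creates the PV, and the logdrop user then creates the namespaced claim that binds to it. A minimal PVC sketch matching the claimRef above (storage class details omitted):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: logdrop-pvc
  namespace: logdrop
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  volumeName: logdrop-pv  # bind directly to the pre-provisioned PV
Creating this PVC only requires namespaced permissions, which the existing logdrop Role already grants.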
The new Role needs to be granted to a user, or group of users, with a RoleBinding, e.g.:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: logdrop-rolebinding
  namespace: logdrop
subjects:
- kind: User
  name: logdrop-user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: logdrop-user-full-access
  apiGroup: rbac.authorization.k8s.io

Can I connect one service account to multiple namespaces in Kubernetes?

I have a couple of namespaces, say NS1 and NS2. I have service accounts created in them: sa1 in NS1 and sa2 in NS2. I have created Roles and RoleBindings for sa1 to do stuff within NS1 and sa2 within NS2.
What I want is give sa1 certain access within NS2 (say only Pod Reader role).
I am wondering if that's possible or not?
You can simply reference a ServiceAccount from another namespace in the RoleBinding:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: ns2
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-from-ns1
  namespace: ns2
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-reader
subjects:
- kind: ServiceAccount
  name: ns1-service-account
  namespace: ns1
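The subject's namespace (ns1) is what makes the cross-namespace grant work, while the RoleBinding's own namespace (ns2) is where the permissions apply. Assuming the asker's sa1 corresponds to ns1-service-account here, the grant can be verified with:
kubectl auth can-i list pods -n ns2 --as=system:serviceaccount:ns1:ns1-service-account
This should print yes, while the same check against a namespace with no such binding prints no.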