Unsupported value: "rbac.authorization.k8s.io" - kubernetes

When I try to run
kubectl create -f cloudflare-argo-rolebinding.yml
with this RoleBinding:
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cloudflare-argo-rolebinding
  namespace: default
subjects:
- kind: ServiceAccount
  name: cloudflare-argo
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: cloudflare-argo-role
  apiGroup: rbac.authorization.k8s.io
I get this error:
The RoleBinding "cloudflare-argo-rolebinding" is invalid: subjects[0].apiGroup: Unsupported value: "rbac.authorization.k8s.io": supported values: ""
Any idea? I'm on DigitalOcean using their new Kubernetes service, if that helps.

I think the problem is that you are using the wrong apiGroup.
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cloudflare-argo-rolebinding
  namespace: default
subjects:
- kind: ServiceAccount
  name: cloudflare-argo
  # apiGroup is "" (core/v1) for ServiceAccount subjects
  apiGroup: ""
roleRef:
  kind: Role
  name: cloudflare-argo-role
  apiGroup: rbac.authorization.k8s.io

ServiceAccount subjects are in the core v1 API, whose apiGroup is "".
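Once the corrected binding applies cleanly, you can check what the ServiceAccount is actually allowed to do by impersonating it (a sketch; it assumes the default namespace used in the manifest above):

```shell
# Impersonate the ServiceAccount and list the permissions it holds
kubectl auth can-i --list --as=system:serviceaccount:default:cloudflare-argo -n default
```

The output should include the verbs and resources granted by cloudflare-argo-role.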


kubectl reports namespace value required in manifest file

What is the issue with my manifest file?
cat manifests/node-exporter-clusterRoleBinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-exporter
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: node-exporter
subjects:
- kind: ServiceAccount
  name: node-exporter
cat manifests/kube-state-metrics-clusterRoleBinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kube-state-metrics
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kube-state-metrics
subjects:
- kind: ServiceAccount
  name: kube-state-metrics
If I apply these manifests with a namespace, I get errors like the ones below:
kubectl apply -f manifests/node-exporter-clusterRoleBinding.yaml -n test-monitoring
The ClusterRoleBinding "node-exporter" is invalid: subjects[0].namespace: Required value
kubectl apply -f manifests/kube-state-metrics-clusterRoleBinding.yaml -n test-monitoring
The ClusterRoleBinding "kube-state-metrics" is invalid: subjects[0].namespace: Required value
Is there any issue with the manifest files?
As a ServiceAccount is a namespaced object, you need to explicitly set the namespace of the ServiceAccount in the subject:
subjects:
- kind: ServiceAccount
  name: kube-state-metrics
  namespace: <namespace-of-kube-state-metrics-serviceaccount>
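For the manifests above, a fixed node-exporter binding would look like this (a sketch; it assumes the ServiceAccount lives in the test-monitoring namespace the question uses):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-exporter
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: node-exporter
subjects:
- kind: ServiceAccount
  name: node-exporter
  # required: a ServiceAccount subject must carry its namespace
  namespace: test-monitoring
```

Note that the -n flag on kubectl apply does not fill this in for you: a ClusterRoleBinding is cluster-scoped, so the subject namespace has to be stated in the manifest itself.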

Allow everyone in RoleBinding for a namespace

I am trying to create a public namespace public-ns which should be accessible to all users and groups. I have defined a RoleBinding as follows, which allows two groups and two users to access the namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: allow-everyone
  namespace: public-ns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pods-services
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: user-one
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: user-two
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: group-one
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: group-two
Now, I want to allow access to this namespace for all groups. I tried '*' and any as shown below; neither worked.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: allow-everyone
  namespace: public-ns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pods-services
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: any ## tried '*' as well
Can anyone suggest how I can give everyone permissions for this specific namespace? If this is not possible, suggestions for alternatives would be great.
Note: OIDC is enabled on K8s with Keycloak.
Thanks in advance.
I think you should use the special group system:authenticated:
https://kubernetes.io/docs/reference/access-authn-authz/authentication/#authentication-strategies
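A sketch of such a binding, reusing the pods-services Role from the question, that grants access to every authenticated user:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: allow-everyone
  namespace: public-ns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pods-services
subjects:
# system:authenticated is a built-in group containing all authenticated users
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:authenticated
```

RBAC has no '*' wildcard for subject names, which is why the any / '*' attempts fail; the built-in system:authenticated group is the supported way to address everyone who has logged in (via Keycloak OIDC in your case).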

How can I add a ServiceAccount to an existing ClusterRoleBinding using kubectl patch?

This is my existing ClusterRoleBinding:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: example-role
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: test-role
subjects:
- kind: ServiceAccount
  name: test-sa
  namespace: ns1
I am planning to add the same ServiceAccount (test-sa) in another namespace (e.g. ns2) and bind it to my ClusterRole test-role.
What I have tried:
subjects:
- kind: ServiceAccount
  name: test-sa
  namespace: ns2
I tried to apply the YAML above like this:
kubectl patch clusterrolebinding <clusterrolebinding-name> --type="strategic" --patch "$(cat role.yaml)"
Result:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: example-role
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: test-role
subjects:
- kind: ServiceAccount
  name: test-sa
  namespace: ns2
It adds the ClusterRoleBinding with the ServiceAccount in the new namespace, but my existing binding in namespace ns1 gets removed. Is there any way to merge the new changes instead of replacing them? I am trying to do this in an automated way (e.g. a bash script that edits this ClusterRoleBinding), which is why I chose kubectl patch.
You can try the command below; it worked for me:
kubectl patch clusterrolebinding example-role --type='json' -p='[{"op": "add", "path": "/subjects/1", "value": {"kind": "ServiceAccount", "name": "test-sa","namespace": "ns2" } }]'
op - the add operation
/subjects/1 - insert at index 1 of the subjects array (the second position, after the existing ns1 entry)
Resulting subjects:
subjects:
- kind: ServiceAccount
  name: test-sa
  namespace: ns1
- kind: ServiceAccount
  name: test-sa
  namespace: ns2
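If you don't want to hard-code the array index in a script, RFC 6902 JSON Patch also accepts - as the index, which appends at the end of the array regardless of how many subjects already exist:

```shell
# Append a subject at the end of the subjects array ("-" = last position)
kubectl patch clusterrolebinding example-role --type='json' \
  -p='[{"op": "add", "path": "/subjects/-", "value": {"kind": "ServiceAccount", "name": "test-sa", "namespace": "ns2"}}]'
```

This is safer for automation, since a fixed index like /subjects/1 fails or lands in the wrong spot if the array length changes between runs.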

How to add a new ClusterRoleBinding with Kustomize in k8s without removing existing bindings?

When I type kubectl edit clusterrolebinding foo-role, I can see something like:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: foo-role
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit
subjects:
- kind: ServiceAccount
  name: foo-user
  namespace: ns1
- kind: ServiceAccount
  name: foo-user
  namespace: ns2
I can grant access for namespace ns3 by appending the following subject to the file above:
- kind: ServiceAccount
  name: foo-user
  namespace: ns3
However, I want to use Kustomize to add new bindings instead of manually modifying the above file.
I tried to apply the .yaml file below:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: foo-role
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/foo-role
  uid: 64a4a787-d5ab-4c83-be2b-476c1bcb6c96
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit
subjects:
- kind: ServiceAccount
  name: foo-user
  namespace: ns3
It did bind foo-user in namespace ns3, but it removed the existing subjects for ns1 and ns2.
Is there a way to add a new ClusterRoleBinding with Kustomize without removing existing ones?
Give them different names in the metadata. You didn't make a new one; you just overwrote the same one.
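For example, a separate ClusterRoleBinding under a new name (foo-role-ns3 here is a made-up example) can reference the same ClusterRole, leaving the original foo-role binding untouched:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  # a distinct name, so apply creates a new object instead of replacing foo-role
  name: foo-role-ns3
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit
subjects:
- kind: ServiceAccount
  name: foo-user
  namespace: ns3
```

Any number of bindings can point at the same ClusterRole via roleRef, so adding one binding per namespace composes cleanly with Kustomize.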

Question about using apiGroup with RoleRef when creating ClusterRoleBinding

Here is a ClusterRoleBinding syntax that I found for creating a cluster role binding. Why does apiGroup need to be specified when referring to the role in roleRef? I have seen a similar example in the Kubernetes docs. What is the possible explanation?
Example 1
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: read-secrets-global
subjects:
- kind: Group
  name: manager # Name is case sensitive
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: secret-reader
  apiGroup: rbac.authorization.k8s.io
Example 2
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
  name: traefik-ingress-controller
  namespace: kube-system
Every Kubernetes resource is identified by a group, version, and name, so the apiGroup field is needed to identify the group.
For example, if you create your own CustomResourceDefinition (CRD), you need to set these fields.
Below is the example from the sample-controller project:
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: foos.samplecontroller.k8s.io
spec:
  group: samplecontroller.k8s.io
  version: v1alpha1
  names:
    kind: Foo
    plural: foos
  scope: Namespaced
(The ServiceAccount resource is in the core group, so I think its apiGroup field can be omitted.)