Patch namespace and change resource kind for a role binding - kubernetes

Using kustomize, I'd like to set the namespace field for all my objects.
Here is my kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
patchesJson6902:
- patch: |-
    - op: replace
      path: /kind
      value: RoleBinding
  target:
    group: rbac.authorization.k8s.io
    kind: ClusterRoleBinding
    name: manager-rolebinding
    version: v1
resources:
- role_binding.yaml
namespace: <NAMESPACE>
Here is my resource file: role_binding.yaml:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: manager-rolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: manager-role
subjects:
- kind: ServiceAccount
  name: controller-manager
  namespace: system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: controller-manager
  namespace: system
spec:
  selector:
    matchLabels:
      control-plane: controller-manager
  replicas: 1
  template:
    metadata:
      labels:
        control-plane: controller-manager
    spec:
      containers:
      - command:
        - /manager
        args:
        - --enable-leader-election
        image: controller:latest
        name: manager
And kustomize output:
$ kustomize build
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: manager-rolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: manager-role
subjects:
- kind: ServiceAccount
  name: controller-manager
  namespace: system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: controller-manager
  namespace: <NAMESPACE>
spec:
  replicas: 1
  selector:
    matchLabels:
      control-plane: controller-manager
  template:
    metadata:
      labels:
        control-plane: controller-manager
    spec:
      containers:
      - args:
        - --enable-leader-election
        command:
        - /manager
        image: controller:latest
        name: manager
How can I patch the namespace field in the RoleBinding and set it to <NAMESPACE>? In the above example, it works perfectly for the Deployment resource but not for the RoleBinding.

Here is a solution which solves the issue, using kustomize-v4.0.5:
cat <<EOF > kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
patchesJson6902:
- patch: |-
    - op: replace
      path: /kind
      value: RoleBinding
    - op: add
      path: /metadata/namespace
      value: <NAMESPACE>
  target:
    group: rbac.authorization.k8s.io
    kind: ClusterRoleBinding
    name: manager-rolebinding
    version: v1
resources:
- role_binding.yaml
- service_account.yaml
namespace: <NAMESPACE>
EOF
cat <<EOF > role_binding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: manager-rolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: manager-role
subjects:
- kind: ServiceAccount
  name: controller-manager
  namespace: system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: controller-manager
  namespace: system
spec:
  selector:
    matchLabels:
      control-plane: controller-manager
  replicas: 1
  template:
    metadata:
      labels:
        control-plane: controller-manager
    spec:
      containers:
      - command:
        - /manager
        args:
        - --enable-leader-election
        image: controller:latest
        name: manager
EOF
cat <<EOF > service_account.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: controller-manager
  namespace: system
EOF
Adding the ServiceAccount resource, and the namespace field in the RoleBinding resource, allows kustomize to correctly set the subject field in the RoleBinding.
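With that kustomization, the RoleBinding portion of the build output should look roughly like this (a sketch, assuming kustomize v4.0.5 behaves as described; the Deployment part is unchanged from the earlier output):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: manager-rolebinding
  namespace: <NAMESPACE>
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: manager-role
subjects:
- kind: ServiceAccount
  name: controller-manager
  namespace: <NAMESPACE>
```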

Looking directly at the code:
// roleBindingHack is a hack for implementing the namespace transform
// for RoleBinding and ClusterRoleBinding resource types.
// RoleBinding and ClusterRoleBinding have namespace set on
// elements of the "subjects" field if and only if the subject elements
// "name" is "default". Otherwise the namespace is not set.
//
// Example:
//
// kind: RoleBinding
// subjects:
// - name: "default" # this will have the namespace set
//   ...
// - name: "something-else" # this will not have the namespace set
//   ...
The ServiceAccount and the subject reference in the ClusterRoleBinding need to have "default" as the namespace, or otherwise it won't be replaced.
Check the example below:
$ cat <<EOF > my-resources.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-service-account
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: my-cluster-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: my-clusterrole
subjects:
- kind: ServiceAccount
  name: my-service-account
  namespace: default
EOF
$ cat <<EOF > kustomization.yaml
namespace: foo-namespace
namePrefix: foo-prefix-
resources:
- my-resources.yaml
EOF
$ kustomize build
apiVersion: v1
kind: ServiceAccount
metadata:
  name: foo-prefix-my-service-account
  namespace: foo-namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: foo-prefix-my-cluster-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: my-clusterrole
subjects:
- kind: ServiceAccount
  name: foo-prefix-my-service-account
  namespace: foo-namespace

Related

Add object to an array in yaml via Kustomize

How can I add an object to an array via Kustomize? As a result, I would like to have two ServiceAccounts added to subjects, like so:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: name
  namespace: test1
- kind: ServiceAccount
  name: name
  namespace: test2
I'm trying with that patch:
- op: add
  path: "/subjects/0"
  value:
    kind: ServiceAccount
    name: name
    namespace: test1
And another patch for second environment:
- op: add
  path: "/subjects/1"
  value:
    kind: ServiceAccount
    name: name
    namespace: test2
But the result has duplicated subjects, which is of course wrong:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: name
  namespace: test1 # the same...
- kind: ServiceAccount
  name: name
  namespace: test1 # ...as here
What would be a proper way to add it?
If I start with a ClusterRoleBinding that looks like this in crb.yaml:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects: []
And I create a kustomization.yaml file like this:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- crb.yaml
patches:
- target:
    kind: ClusterRoleBinding
    name: binding
  patch: |
    - op: add
      path: /subjects/0
      value:
        kind: ServiceAccount
        name: name
        namespace: test1
- target:
    kind: ClusterRoleBinding
    name: binding
  patch: |
    - op: add
      path: /subjects/1
      value:
        kind: ServiceAccount
        name: name
        namespace: test2
Then I get as output:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: name
  namespace: test1
- kind: ServiceAccount
  name: name
  namespace: test2
Which is, I think, what you're looking for. Does this help? Note that instead of explicitly setting an index in the path, like:
path: /subjects/0
We can instead specify:
path: /subjects/-
Which means "append to the list", and in this case will generate the same output.
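For example, the two patches above could be collapsed into a single patch using the append form (a sketch reusing the same example names; in practice you may still want separate patches per environment):

```yaml
patches:
- target:
    kind: ClusterRoleBinding
    name: binding
  patch: |
    - op: add
      path: /subjects/-
      value:
        kind: ServiceAccount
        name: name
        namespace: test1
    - op: add
      path: /subjects/-
      value:
        kind: ServiceAccount
        name: name
        namespace: test2
```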

kubernetes ServiceAccount Role Verification failed

Question:
Create a service account named dev-sa in the default namespace; dev-sa can create the below components in the dev namespace:
Deployment
StatefulSet
DaemonSet
result:
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: default
  name: dev-sa
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: dev
  name: sa-role
rules:
- apiGroups: [""]
  resources: ["deployment","statefulset","daemonset"]
  verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: sa-rolebinding
  namespace: dev
subjects:
- kind: ServiceAccount
  name: dev-sa
  namespace: default
roleRef:
  kind: Role
  name: sa-role
  apiGroup: rbac.authorization.k8s.io
Validation:
kubectl auth can-i create deployment -n dev \
--as=system:serviceaccount:default:dev-sa
no
This is an exam question, but I can't pass it.
Can you tell me where the mistake is? Thanks!
In the Role, use * for the apiGroups, and add an "s" to each resource name.
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: default
  name: dev-sa
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: dev
  name: sa-role
rules:
- apiGroups: ["*"]
  resources: ["deployments", "statefulsets", "daemonsets"]
  verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: sa-rolebinding
  namespace: dev
subjects:
- kind: ServiceAccount
  name: dev-sa
  namespace: default
roleRef:
  kind: Role
  name: sa-role
  apiGroup: rbac.authorization.k8s.io
First, the apiGroup of Deployment, DaemonSet, and StatefulSet is apps, not core. So for the apiGroups value, put "apps" instead of "" (an empty string represents the core group).
Second, remember: resources are always defined as the plural of the kind. So for the resources values, you should always use plural names, e.g. deployments instead of deployment.
So, your file should be something like this:
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: default
  name: dev-sa
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: dev
  name: sa-role
rules:
- apiGroups: ["apps"]
  resources: ["deployments","statefulsets","daemonsets"]
  verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: sa-rolebinding
  namespace: dev
subjects:
- kind: ServiceAccount
  name: dev-sa
  namespace: default
roleRef:
  kind: Role
  name: sa-role
  apiGroup: rbac.authorization.k8s.io
For apiGroups values, be sure to check the docs.
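If you are unsure of a resource's plural name or API group, the API server can list them directly (a sketch; the output below is abridged and will vary by cluster version):

```shell
$ kubectl api-resources --api-group=apps
NAME           SHORTNAMES   APIVERSION   NAMESPACED   KIND
daemonsets     ds           apps/v1      true         DaemonSet
deployments    deploy       apps/v1      true         Deployment
statefulsets   sts          apps/v1      true         StatefulSet
```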
I suggest you read this article about Users and Permissions in Kubernetes.

kubernetes role and serviceaccount grant checking

I created a ServiceAccount, a Role, and a RoleBinding with all grants for pods, but when I check with the auth command it says 'no'. Any suggestions? I have shared all the YAMLs here.
λ kl get sa -n power -oyaml
apiVersion: v1
items:
- apiVersion: v1
  kind: ServiceAccount
  metadata:
    creationTimestamp: "2022-04-04T00:31:09Z"
    name: default
    namespace: power
    resourceVersion: "26979"
    uid: a0f88ce5-e036-4631-aa6d-a0b14583f6a7
  secrets:
  - name: default-token-dnmfs
- apiVersion: v1
  kind: ServiceAccount
  metadata:
    creationTimestamp: "2022-04-04T00:48:48Z"
    name: ngn-sa
    namespace: power
    resourceVersion: "28498"
    uid: 3aec6beb-e3c3-4e0c-a15d-fc9b0ac5d1b8
  secrets:
  - name: ngn-sa-token-j7xrv
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
λ kl get role ngn-ro -n power -oyaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  creationTimestamp: "2022-04-04T00:49:47Z"
  name: ngn-ro
  namespace: power
  resourceVersion: "28583"
  uid: 2facc03d-50a4-44be-bd38-dd6797615e70
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - '*'
λ kl get rolebinding ngn-rb -n power -oyaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  creationTimestamp: "2022-04-04T00:50:30Z"
  name: ngn-rb
  namespace: power
  resourceVersion: "28651"
  uid: cd08f258-24b7-4331-9f5c-cec47d9e4fa3
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ngn-ro
subjects:
- kind: ServiceAccount
  name: ngn-sa
  namespace: power
λ kl auth can-i create pods --as ngn-sa -n power
no
I am expecting a 'yes' here, but it is not giving me the correct auth value. Please suggest what I am missing here.
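One thing worth checking (an assumption based on the command shown): `--as ngn-sa` impersonates a plain user named ngn-sa, not the ServiceAccount. Service accounts are impersonated with the full `system:serviceaccount:<namespace>:<name>` form, as in the validation step of the previous question:

```shell
$ kubectl auth can-i create pods -n power \
    --as=system:serviceaccount:power:ngn-sa
```

Given the Role and RoleBinding shown above, this form of the check should return 'yes'.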

Kubernetes RBAC configuration w/ nodejs client

My design is: EventController lives in "default" namespace and starts Jobs in "gamespace" namespace.
I'm getting this error trying to create a Job with the Node.js kubernetes client:
jobs.batch is forbidden: User "system:serviceaccount:default:event-manager" cannot create resource "jobs" in API group "batch" in the namespace "gamespace"
from this line of code:
const job = await batchV1Api.createNamespacedJob('gamespace', kubeSpec.job)
kubeSpec.job is:
{
  apiVersion: 'batch/v1',
  kind: 'Job',
  metadata: {
    name: 'event-60da4bee95e237001d65e355',
    namespace: 'gamespace',
    labels: {
      tier: 'event-servers',
    }
  },
  spec: {
    backoffLimit: 1,
    activeDeadlineSeconds: 14400,
    ttlSecondsAfterFinished: 86400,
    template: { spec: [Object] }
  }
}
And here's my RBAC configuration:
apiVersion: v1
kind: Namespace
metadata:
  name: gamespace
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: event-manager
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: event-manager-role
rules:
- apiGroups: ['', 'batch'] # '' means "core"
  resources: ['jobs', 'services']
  verbs: ['get', 'list', 'watch', 'create', 'update', 'patch', 'delete']
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: event-manager-clusterrole-binding
  # Specifies the namespace the ClusterRole can be used in.
  namespace: gamespace
subjects:
- kind: ServiceAccount
  name: event-manager
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: event-manager-role
The container making the function call is configured like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: eventcontroller-deployment
  labels:
    app: eventcontroller
spec:
  selector:
    matchLabels:
      app: eventcontroller
  replicas: 1
  template:
    metadata:
      labels:
        app: eventcontroller
    spec:
      # see accounts-namespaces.yaml
      serviceAccountName: 'event-manager'
      imagePullSecrets:
      - name: registry-credentials
      containers:
      - name: eventcontroller
        image: eventcontroller
        imagePullPolicy: Always
        envFrom:
        - configMapRef:
            name: eventcontroller-config
        ports:
        - containerPort: 3003
I'm not sure if I'm using the client incorrectly (why is the namespace needed in both the spec and the function call?), or if I've configured RBAC incorrectly.
Any suggestions?
Thank you!
I can't comment, so I will share some ideas about what the issue could be and how I would debug the problem further.
Your Deployment has no namespace defined; could it be that the Pod is running in a different namespace (!= gamespace), while your ServiceAccount binding only applies to gamespace?
A RoleBinding grants permissions within a specific namespace.
If this is not the error, you might want to start by using a service account that grants all permissions, to rule out other errors.
Here an Example Manifest
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: null
  name: all-permissions
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: service-account-all-permissions
  namespace: gamespace
Set the serviceAccountName to 'service-account-all-permissions' in your Deployment and see if you still get a permission error from the Kubernetes API.
Source.
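Either way, the original error message already names the effective subject (system:serviceaccount:default:event-manager), so the grant can be checked directly with impersonation, without going through the Node.js client:

```shell
$ kubectl auth can-i create jobs.batch -n gamespace \
    --as=system:serviceaccount:default:event-manager
```

If this returns 'no', the problem is in the RBAC configuration rather than in the client code.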

Kubernetes - User "system:serviceaccount:management:gitlab-admin" cannot get resource "serviceaccounts" in API group "" in the namespace "services"

I am getting this error -
Error: rendered manifests contain a resource that already exists. Unable to continue with install: could not get information about the resource: serviceaccounts "simpleapi" is forbidden: User "system:serviceaccount:management:gitlab-admin" cannot get resource "serviceaccounts" in API group "" in the namespace "services"
apiVersion: v1
kind: ServiceAccount
metadata:
  name: gitlab
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: gitlab-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: gitlab
  namespace: kube-system
- kind: ServiceAccount
  name: gitlab
  namespace: services
I am using this for RBAC as cluster-admin. Why am I getting this? I also tried the below, but still got the same issue. Can someone explain what I am doing wrong here?
apiVersion: rbac.authorization.k8s.io/v1
kind: "ClusterRole"
metadata:
  name: gitlab-admin
  labels:
    app: gitlab-admin
rules:
- apiGroups: ["*"] # also tested with ""
  resources: ["replicasets", "pods", "pods/exec", "secrets", "configmaps",
    "services", "deployments", "ingresses", "horizontalpodautoscalers",
    "serviceaccounts"]
  verbs: ["get", "list", "watch", "create", "patch", "delete", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: "ClusterRoleBinding"
metadata:
  name: gitlab-admin-global
  labels:
    app: gitlab-admin
roleRef:
  apiGroup: "rbac.authorization.k8s.io"
  kind: "ClusterRole"
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: gitlab-admin
  namespace: management
- kind: ServiceAccount
  name: gitlab-admin
  namespace: services
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: gitlab-admin
  namespace: management
  labels:
    app: gitlab-admin
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: gitlab-admin
  namespace: services
  labels:
    app: gitlab-admin
So here is what happened: I needed to run this from inside the namespace, i.e.
I changed the config to run from the management namespace itself:
kubectl config set-context --current --namespace=management
And then
kubectl apply -f gitlab-admin.yaml
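To confirm the binding took effect, the failing permission from the error message can be re-checked with impersonation (a sketch):

```shell
$ kubectl auth can-i get serviceaccounts -n services \
    --as=system:serviceaccount:management:gitlab-admin
```

With the cluster-admin binding applied, this should return 'yes'.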