Kubernetes RBAC configuration with the Node.js client

My design is: EventController lives in "default" namespace and starts Jobs in "gamespace" namespace.
I'm getting this error trying to create a Job with the Node.js kubernetes client:
jobs.batch is forbidden: User "system:serviceaccount:default:event-manager" cannot create resource "jobs" in API group "batch" in the namespace "gamespace"
from this line of code:
const job = await batchV1Api.createNamespacedJob('gamespace', kubeSpec.job)
kubeSpec.job is:
{
  apiVersion: 'batch/v1',
  kind: 'Job',
  metadata: {
    name: 'event-60da4bee95e237001d65e355',
    namespace: 'gamespace',
    labels: {
      tier: 'event-servers',
    }
  },
  spec: {
    backoffLimit: 1,
    activeDeadlineSeconds: 14400,
    ttlSecondsAfterFinished: 86400,
    template: { spec: [Object] }
  }
}
And here's my RBAC configuration:
apiVersion: v1
kind: Namespace
metadata:
  name: gamespace
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: event-manager
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: event-manager-role
rules:
  - apiGroups: ['', 'batch'] # '' means "core"
    resources: ['jobs', 'services']
    verbs: ['get', 'list', 'watch', 'create', 'update', 'patch', 'delete']
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: event-manager-clusterrole-binding
  # Specifies the namespace the ClusterRole can be used in.
  namespace: gamespace
subjects:
  - kind: ServiceAccount
    name: event-manager
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: event-manager-role
The container making the function call is configured like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: eventcontroller-deployment
  labels:
    app: eventcontroller
spec:
  selector:
    matchLabels:
      app: eventcontroller
  replicas: 1
  template:
    metadata:
      labels:
        app: eventcontroller
    spec:
      # see accounts-namespaces.yaml
      serviceAccountName: 'event-manager'
      imagePullSecrets:
        - name: registry-credentials
      containers:
        - name: eventcontroller
          image: eventcontroller
          imagePullPolicy: Always
          envFrom:
            - configMapRef:
                name: eventcontroller-config
          ports:
            - containerPort: 3003
I'm not sure if I'm using the client incorrectly (why is the namespace needed in both the spec and the function call?), or if I've configured RBAC incorrectly.
Any suggestions?
Thank you!

I can't comment, so I'll share some ideas about what the issue could be and how I would debug it further.
Your Deployment has no namespace defined. Could it be that the pod is running in a different namespace (!= gamespace), while your service account's RoleBinding only applies in gamespace? A RoleBinding grants permissions within a specific namespace.
If this is not the error, you might want to try a service account that grants all permissions at first, to rule out other errors.
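Before reaching for cluster-admin, you can also ask the API server directly what it will decide. A quick sanity check with kubectl (the label selector below reuses the app: eventcontroller label from your Deployment):
# Which namespace is the pod actually in, and which service account does it use?
kubectl get pods -A -l app=eventcontroller \
  -o custom-columns=NAMESPACE:.metadata.namespace,SERVICEACCOUNT:.spec.serviceAccountName
# Can that service account create Jobs in gamespace?
kubectl auth can-i create jobs.batch -n gamespace \
  --as=system:serviceaccount:default:event-manager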
Here is an example manifest:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: all-permissions
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: service-account-all-permissions
    namespace: gamespace
Set the service account to 'service-account-all-permissions' in your Deployment and see if you still get a permission error from the Kubernetes API.

Related

kubernetes ServiceAccount Role Verification failed

Question:
Create a service account named dev-sa in the default namespace; dev-sa can create the components below in the dev namespace:
Deployment
StatefulSet
DaemonSet
My result:
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: default
  name: dev-sa
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: dev
  name: sa-role
rules:
  - apiGroups: [""]
    resources: ["deployment", "statefulset", "daemonset"]
    verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: sa-rolebinding
  namespace: dev
subjects:
  - kind: ServiceAccount
    name: dev-sa
    namespace: default
roleRef:
  kind: Role
  name: sa-role
  apiGroup: rbac.authorization.k8s.io
Validation:
kubectl auth can-i create deployment -n dev \
  --as=system:serviceaccount:default:dev-sa
no
This is an exam question, but I can't get it to pass.
Can you tell me where the mistake is? Thanks!
In the Role, use * for the API group, and add an "s" to each resource name.
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: default
  name: dev-sa
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: dev
  name: sa-role
rules:
  - apiGroups: ["*"]
    resources: ["deployments", "statefulsets", "daemonsets"]
    verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: sa-rolebinding
  namespace: dev
subjects:
  - kind: ServiceAccount
    name: dev-sa
    namespace: default
roleRef:
  kind: Role
  name: sa-role
  apiGroup: rbac.authorization.k8s.io
First, the apiGroup of Deployment, DaemonSet, and StatefulSet is apps, not core. So for the apiGroups value, put "apps" instead of "" (an empty string represents the core group).
Second, remember: resources are always defined as the plural of the kind. So for the resources values you should always use plural names, e.g. deployments instead of deployment.
So, your file should be something like this:
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: default
  name: dev-sa
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: dev
  name: sa-role
rules:
  - apiGroups: ["apps"]
    resources: ["deployments", "statefulsets", "daemonsets"]
    verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: sa-rolebinding
  namespace: dev
subjects:
  - kind: ServiceAccount
    name: dev-sa
    namespace: default
roleRef:
  kind: Role
  name: sa-role
  apiGroup: rbac.authorization.k8s.io
For the apiGroups values, be sure to check the docs.
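If you're ever unsure which group and plural resource name a kind uses, kubectl api-resources prints both (output abbreviated; exact columns vary slightly by kubectl version):
kubectl api-resources | grep -iE 'deployments|statefulsets|daemonsets'
# NAME           SHORTNAMES   APIVERSION   NAMESPACED   KIND
# daemonsets     ds           apps/v1      true         DaemonSet
# deployments    deploy       apps/v1      true         Deployment
# statefulsets   sts          apps/v1      true         StatefulSet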
I suggest you read this article about Users and Permissions in Kubernetes.

Grant a pod access to create new Namespaces

I'm trying to provision ephemeral environments via automation, leveraging Kubernetes namespaces. My automation workers deployed in Kubernetes must be able to create namespaces. So far my experimentation with this has led me nowhere. Which binding do I need to attach to the service account to allow it to control namespaces? Or is my approach wrong?
My code so far:
deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-deployer
  namespace: tooling
  labels:
    app: k8s-deployer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: k8s-deployer
  template:
    metadata:
      name: k8s-deployer
      labels:
        app: k8s-deployer
    spec:
      serviceAccountName: k8s-deployer
      containers: ...
rbac.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: k8s-deployer
  namespace: tooling
---
# this lets me view namespaces, but not write
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: administer-cluster
subjects:
  - kind: ServiceAccount
    name: k8s-deployer
    namespace: tooling
roleRef:
  kind: ClusterRole
  name: admin
  apiGroup: rbac.authorization.k8s.io
To give a pod control over something in Kubernetes you need at least four things:
Create or select an existing Role/ClusterRole (you picked the built-in admin ClusterRole via your administer-cluster binding; admin does not allow writing cluster-scoped resources such as namespaces).
Create or select an existing ServiceAccount (you created k8s-deployer in the namespace tooling).
Tie the two together with a RoleBinding/ClusterRoleBinding.
Assign the ServiceAccount to a pod.
Here's an example that can manage namespaces:
# Create a service account
apiVersion: v1
kind: ServiceAccount
metadata:
  name: k8s-deployer
  namespace: tooling
---
# Create a cluster role that is allowed to perform
# ["get", "list", "create", "delete", "patch"] over ["namespaces"]
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: k8s-deployer
rules:
  - apiGroups: [""]
    resources: ["namespaces"]
    verbs: ["get", "list", "create", "delete", "patch"]
---
# Associate the cluster role with the service account
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: k8s-deployer
  # make sure NOT to mention 'namespace' here or
  # the permissions will only have effect in the
  # given namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: k8s-deployer
subjects:
  - kind: ServiceAccount
    name: k8s-deployer
    namespace: tooling
After that you need to reference the service account name in the pod spec, as you already did. There is more info about RBAC in the documentation.
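For completeness, here is a minimal sketch of what the worker itself might run once the RBAC above is in place, using the pre-1.0 @kubernetes/client-node call style (the same as createNamespacedJob in the first question; the namespace name here is purely illustrative):
const k8s = require('@kubernetes/client-node');

async function createEphemeralNamespace(name) {
  const kc = new k8s.KubeConfig();
  // Inside the cluster this picks up the pod's mounted service account token.
  kc.loadFromCluster();
  const coreApi = kc.makeApiClient(k8s.CoreV1Api);
  // Namespaces are cluster-scoped, so there is no namespace argument here.
  await coreApi.createNamespace({ metadata: { name } });
}

createEphemeralNamespace('ephemeral-env-123').catch(console.error);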

Patch namespace and change resource kind for a role binding

Using kustomize, I'd like to set the namespace field for all my objects.
Here is my kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
patchesJson6902:
  - patch: |-
      - op: replace
        path: /kind
        value: RoleBinding
    target:
      group: rbac.authorization.k8s.io
      kind: ClusterRoleBinding
      name: manager-rolebinding
      version: v1
resources:
  - role_binding.yaml
namespace: <NAMESPACE>
Here is my resource file: role_binding.yaml:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: manager-rolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: manager-role
subjects:
  - kind: ServiceAccount
    name: controller-manager
    namespace: system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: controller-manager
  namespace: system
spec:
  selector:
    matchLabels:
      control-plane: controller-manager
  replicas: 1
  template:
    metadata:
      labels:
        control-plane: controller-manager
    spec:
      containers:
        - command:
            - /manager
          args:
            - --enable-leader-election
          image: controller:latest
          name: manager
And kustomize output:
$ kustomize build
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: manager-rolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: manager-role
subjects:
  - kind: ServiceAccount
    name: controller-manager
    namespace: system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: controller-manager
  namespace: <NAMESPACE>
spec:
  replicas: 1
  selector:
    matchLabels:
      control-plane: controller-manager
  template:
    metadata:
      labels:
        control-plane: controller-manager
    spec:
      containers:
        - args:
            - --enable-leader-election
          command:
            - /manager
          image: controller:latest
          name: manager
How can I patch the namespace field in the RoleBinding and set it to <NAMESPACE>? In the example above it works perfectly for the Deployment resource, but not for the RoleBinding.
Here is a solution which solves the issue, using kustomize v4.0.5:
cat <<EOF > kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
patchesJson6902:
  - patch: |-
      - op: replace
        path: /kind
        value: RoleBinding
      - op: add
        path: /metadata/namespace
        value:
          <NAMESPACE>
    target:
      group: rbac.authorization.k8s.io
      kind: ClusterRoleBinding
      name: manager-rolebinding
      version: v1
resources:
  - role_binding.yaml
  - service_account.yaml
namespace: <NAMESPACE>
EOF
cat <<EOF > role_binding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: manager-rolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: manager-role
subjects:
  - kind: ServiceAccount
    name: controller-manager
    namespace: system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: controller-manager
  namespace: system
spec:
  selector:
    matchLabels:
      control-plane: controller-manager
  replicas: 1
  template:
    metadata:
      labels:
        control-plane: controller-manager
    spec:
      containers:
        - command:
            - /manager
          args:
            - --enable-leader-election
          image: controller:latest
          name: manager
EOF
cat <<EOF > service_account.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: controller-manager
  namespace: system
EOF
Adding the ServiceAccount resource, and the namespace field in the ClusterRoleBinding patch, allows kustomize to correctly set the subject fields in the RoleBinding.
Looking directly at the code:
// roleBindingHack is a hack for implementing the namespace transform
// for RoleBinding and ClusterRoleBinding resource types.
// RoleBinding and ClusterRoleBinding have namespace set on
// elements of the "subjects" field if and only if the subject elements
// "name" is "default". Otherwise the namespace is not set.
//
// Example:
//
// kind: RoleBinding
// subjects:
// - name: "default" # this will have the namespace set
// ...
// - name: "something-else" # this will not have the namespace set
// ...
The ServiceAccount and the subject reference in the ClusterRoleBinding need to have "default" as their namespace, or otherwise it won't be replaced.
Check the example below:
$ cat <<EOF > my-resources.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-service-account
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: my-cluster-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: my-clusterrole
subjects:
  - kind: ServiceAccount
    name: my-service-account
    namespace: default
EOF
$ cat <<EOF > kustomization.yaml
namespace: foo-namespace
namePrefix: foo-prefix-
resources:
  - my-resources.yaml
EOF
$ kustomize build
apiVersion: v1
kind: ServiceAccount
metadata:
  name: foo-prefix-my-service-account
  namespace: foo-namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: foo-prefix-my-cluster-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: my-clusterrole
subjects:
  - kind: ServiceAccount
    name: foo-prefix-my-service-account
    namespace: foo-namespace
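Once the build output looks right, it can be piped straight to the cluster:
kustomize build . | kubectl apply -f -
# or, using the kustomize built into kubectl:
kubectl apply -k .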

Kubernetes - User "system:serviceaccount:management:gitlab-admin" cannot get resource "serviceaccounts" in API group "" in the namespace "services"

I am getting this error -
Error: rendered manifests contain a resource that already exists. Unable to continue with install: could not get information about the resource: serviceaccounts "simpleapi" is forbidden: User "system:serviceaccount:management:gitlab-admin" cannot get resource "serviceaccounts" in API group "" in the namespace "services"
apiVersion: v1
kind: ServiceAccount
metadata:
  name: gitlab
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: gitlab-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: gitlab
    namespace: kube-system
  - kind: ServiceAccount
    name: gitlab
    namespace: services
I am using this for RBAC as cluster-admin. Why am I getting this? I also tried the below, but still got the same issue. Can someone explain what I am doing wrong here?
apiVersion: rbac.authorization.k8s.io/v1
kind: "ClusterRole"
metadata:
  name: gitlab-admin
  labels:
    app: gitlab-admin
rules:
  - apiGroups: ["*"] # also tested with ""
    resources:
      [
        "replicasets",
        "pods",
        "pods/exec",
        "secrets",
        "configmaps",
        "services",
        "deployments",
        "ingresses",
        "horizontalpodautoscalers",
        "serviceaccounts",
      ]
    verbs: ["get", "list", "watch", "create", "patch", "delete", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: "ClusterRoleBinding"
metadata:
  name: gitlab-admin-global
  labels:
    app: gitlab-admin
roleRef:
  apiGroup: "rbac.authorization.k8s.io"
  kind: "ClusterRole"
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: gitlab-admin
    namespace: management
  - kind: ServiceAccount
    name: gitlab-admin
    namespace: services
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: gitlab-admin
  namespace: management
  labels:
    app: gitlab-admin
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: gitlab-admin
  namespace: services
  labels:
    app: gitlab-admin
So here is what happened: I needed to run this from inside the namespace, i.e. I changed my kubectl context to the management namespace itself.
kubectl config set-context --current --namespace=management
And then
kubectl apply -f gitlab-admin.yaml
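You can confirm the binding took effect without re-running the Helm install by impersonating the service account:
kubectl auth can-i get serviceaccounts -n services \
  --as=system:serviceaccount:management:gitlab-admin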

Forbidden: "system:serviceaccount:default:default" cannot create resource. How to add permissions?

When I try to create a resource from a Node.js application via an HTTP request, I get this error:
{
  kind: 'Status',
  apiVersion: 'v1',
  metadata: {},
  status: 'Failure',
  message: 'prometheusrules.monitoring.coreos.com is forbidden: User ' +
    '"system:serviceaccount:default:default" cannot create resource ' +
    '"prometheusrules" in API group "monitoring.coreos.com" in the ' +
    'namespace "default"',
  reason: 'Forbidden',
  details: { group: 'monitoring.coreos.com', kind: 'prometheusrules' },
  code: 403
}
How do I add permissions to system:serviceaccount:default:default?
I have tried with the following ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: sla-manager-service-role
  labels:
    app: sla-manager-app
rules:
  - apiGroups: [""] # "" indicates the core API group
    resources: ["services", "pods"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
But it is not working. The service for my Node.js application looks like this:
apiVersion: v1
kind: Service
metadata:
  name: sla-manager-service
  labels:
    app: sla-manager-app
    monitoring: "true"
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/path: /metrics
    prometheus.io/port: "6400"
spec:
  selector:
    app: issue-manager-app
  ports:
    - protocol: TCP
      name: http
      port: 80
      targetPort: 6400
You need a Role to define the permissions.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: sla-manager-service-role
  namespace: default
  labels:
    app: sla-manager-app
rules:
  - apiGroups: ["monitoring.coreos.com"] # the group that owns prometheusrules
    resources: ["prometheusrules"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
Then assign the above Role to the service account using a RoleBinding. This will give the permissions to the service account.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: role-binding
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: sla-manager-service-role
subjects:
  - kind: ServiceAccount
    name: default
    namespace: default
Verify the service account's permissions using the command below:
kubectl auth can-i create prometheusrules --as=system:serviceaccount:default:default -n default
Your Node.js application is using the default service account, which does not have any create permission; that is causing this issue. To solve it, create another service account with the necessary permissions and set that service account in your pod spec.
For example, let's create a cluster-admin service account, which has all permissions. You can create your own based on your requirements.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: node-app
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: node-app
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: node-app
    namespace: kube-system
Now set this service account in the pod spec of your deployment.yaml (note that serviceAccountName sits at the pod spec level, not inside the container).
For example:
spec:
  containers:
    - image: nginx
      name: nginx
      volumeMounts:
        - mountPath: /var/run/secrets/tokens
          name: vault-token
  serviceAccountName: node-app
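Since node-app is bound to cluster-admin, an impersonation check should answer yes for anything, which is a quick way to confirm the binding was applied:
kubectl auth can-i '*' '*' --as=system:serviceaccount:kube-system:node-app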