Kubernetes RBAC role verbs to exec into a pod

In my 1.9 cluster I created this deployment role for the dev user. Deployment works as expected. Now I want to give the developer exec and logs access. What role do I need to add to exec into a pod?
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: deployment-manager
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["deployments", "replicasets", "pods"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
Error message:
kubectl exec nginx -it -- sh
Error from server (Forbidden): pods "nginx" is forbidden: User "dev" cannot create pods/exec in the namespace "dev"

The RBAC docs say that
Most resources are represented by a string representation of their name, such as “pods”, just as it appears in the URL for the relevant API endpoint. However, some Kubernetes APIs involve a “subresource”, such as the logs for a pod. [...] To represent this in an RBAC role, use a slash to delimit the resource and subresource.
To allow a subject to read both pods and pod logs, and be able to exec into the pod, you would write:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: pod-and-pod-logs-reader
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create"]
Some client libraries may do an HTTP GET to negotiate a websocket first, which would require the "get" verb. kubectl sends an HTTP POST instead, which is why it requires the "create" verb in that case.
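The Role only defines the permissions; it still has to be bound to the user. A minimal RoleBinding sketch, assuming the user "dev" and namespace "dev" from the error message above (the binding name is illustrative, and the Role would need to exist in the same namespace you bind it in):
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-and-pod-logs-reader-binding   # illustrative name
  namespace: dev                          # namespace from the error message
subjects:
- kind: User
  name: dev                               # user from the error message
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-and-pod-logs-reader
  apiGroup: rbac.authorization.k8s.io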

Related

setting up build pod: Timed out while waiting for ServiceAccount/<service_account_name> to be present in the cluster

I am using Helm charts to deploy GitLab Runner into a Kubernetes cluster. I want the pods that are created when the runner is triggered to use a custom service account instead of the default one. I created a Role and a ClusterRole and set up the role bindings.
However, I am getting the following error when running a CI job
From Gitlab CI
Running with gitlab-runner 15.0.0 (cetx4b)
on initial-runner -P-d1RhT
Preparing the "kubernetes" executor
00:00
Using Kubernetes namespace: namespace_test
Using Kubernetes executor with image registry.gitlab.com/docker-images/ubuntu-base:latest ...
Using attach strategy to execute scripts...
Preparing environment
00:05
ERROR: Job failed (system failure): prepare environment: setting up build pod: Timed out while waiting for ServiceAccount/gitlab-runner to be present in the cluster. Check https://docs.gitlab.com/runner/shells/index.html#shell-profile-loading for more information
List roles and service accounts:
# get rolebindings & clusterrolebindings
kubectl get rolebindings,clusterrolebindings -n namespace_test | grep gitlab-runner
# output
# rolebinding.rbac.authorization.k8s.io/gitlab-runner          Role/gitlab-runner
# clusterrolebinding.rbac.authorization.k8s.io/gitlab-runner   ClusterRole/gitlab-runner
---
# get serviceaccounts
kubectl get serviceaccounts -n namespace_test
# output
# NAME                   SECRETS   AGE
# default                1         6h50m
# gitlab-runner          1         24m
# kubernetes-dashboard   1         6h50m
# mysql                  2         6h49m
Helm values:
runners:
  concurrent: 8
  name: initial-runner
  config: |
    [[runners]]
      [runners.kubernetes]
        namespace = "namespace_test"
        image = "registry.gitlab.com/docker-images/ubuntu-base:latest"
        service_account = "gitlab-runner"
  tags: base
rbac:
  create: false
  serviceAccountName: gitlab-runner
Any ideas on how to solve this issue?
In my case, I forgot to give the "gitlab-runner" cluster role the right permissions on the "serviceaccounts" resource.
Ensure the role that is attached to your Gitlab runner has the following specification:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: gitlab-runner
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["list", "get", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create"]
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get"]
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["list", "get", "create", "delete", "update"]
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["list", "get", "create", "delete", "update"]
- apiGroups: [""]
  resources: ["pods/attach"]
  verbs: ["list", "get", "create", "delete", "update"]
- apiGroups: [""]
  resources: ["serviceaccounts"]
  verbs: ["list", "get", "create", "delete", "update"]

What kubernetes permissions does GitLab runner kubernetes executor need?

I've installed GitLab Runner on a Kubernetes cluster under the namespace gitlab-runner, like so:
# cat <<EOF | kubectl create -f -
{
  "apiVersion": "v1",
  "kind": "Namespace",
  "metadata": {
    "name": "gitlab-runner",
    "labels": {
      "name": "gitlab-runner"
    }
  }
}
EOF
# helm repo add gitlab https://charts.gitlab.io
# cat <<EOF | helm install --namespace gitlab-runner gitlab-runner -f - gitlab/gitlab-runner
gitlabUrl: https://gitlab.mycompany.com
runnerRegistrationToken: "c................Z"
EOF
The GitLab Runner properly registers with the GitLab project, but all jobs fail.
A quick look into the GitLab Runner logs tells me that the service account used by the runner lacks the proper permissions:
# kubectl logs --namespace gitlabrunner gitlab-runner-gitlab-runner-xxxxxxxxx
ERROR: Job failed (system failure): pods is forbidden: User "system:serviceaccount:gitlabrunner:default" cannot create resource "pods" in API group "" in the namespace "gitlab-runner" duration=42.095493ms job=37482 project=yyy runner=xxxxxxx
What permission does the gitlab runner kubernetes executor need?
I couldn't find a list of required permissions in the GitLab Runner documentation, but I added permissions one by one and compiled the list required for basic functioning.
The GitLab Runner will use the service account system:serviceaccount:gitlab-runner:default, so we need to create a role and bind it to that service account.
# cat <<EOF | kubectl create -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: gitlab-runner
  namespace: gitlab-runner
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["list", "get", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create"]
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get"]
EOF
# kubectl create rolebinding --namespace=gitlab-runner gitlab-runner-binding --role=gitlab-runner --serviceaccount=gitlab-runner:default
With that role bound to the service account, GitLab Runner is able to create, execute, and delete pods, and also access their logs.
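To sanity-check the binding before running a job, you can impersonate the service account with kubectl auth can-i; this is just a generic verification step, not something the runner itself requires (the --subresource flag needs a reasonably recent kubectl):
# both should print "yes" once the Role and RoleBinding are in place
kubectl auth can-i create pods --namespace gitlab-runner --as system:serviceaccount:gitlab-runner:default
kubectl auth can-i create pods --subresource=exec --namespace gitlab-runner --as system:serviceaccount:gitlab-runner:default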
Unfortunately I couldn't find this in the official docs either, just as @RubenLaguna stated. However, the default values.yaml of the GitLab Runner Helm chart lets you define these RBAC rules nicely and lists some examples, which I started with.
In my case I had to add a few and went with the following:
rbac:
  create: true
  rules:
  - apiGroups: [""]
    resources: ["pods", "secrets", "configmaps"]
    verbs: ["get", "list", "watch", "create", "patch", "delete", "update"]
  - apiGroups: [""]
    resources: ["pods/exec", "pods/attach"]
    verbs: ["create", "patch", "delete"]
  - apiGroups: [""]
    resources: ["pods/log"]
    verbs: ["get"]
If you installed GitLab Runner using the Helm chart, the runner does not use the service account created by the chart; instead it uses the default service account. There is a bug related to this: https://gitlab.com/gitlab-org/charts/gitlab-runner/-/issues/353. To work around this problem we created a ClusterRoleBinding as below.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: gitlab-runner-role-default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: <helm-created-cluster-role-here>
subjects:
- kind: ServiceAccount
  name: default
  namespace: gitlab-runner

RBAC role to manage single pod with dynamic name

I need to grant access to one deployment and all pods of this deployment using RBAC.
I've managed to configure a Role and RoleBinding for the deployment, and it's working fine:
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: <my-namespace>
  name: <deployment>-manager-role
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["deployments"]
  resourceNames: ["<deployment>"]
  verbs: ["get", "list", "watch", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: <deployment>-manager-binding
  namespace: <my-namespace>
subjects:
- kind: User
  name: <username>
  apiGroup: ""
roleRef:
  kind: Role
  name: <deployment>-manager-role
  apiGroup: ""
Using this role, the user can access, update, and patch the deployment. The deployment creates pods with dynamic names (like <deployment>-5594cbfcf4-v4xx8). I tried to allow this user to access these pods (get, list, watch, read logs, exec, delete) using the deployment name and using the deployment name plus a wildcard character *:
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: <my-namespace>
  name: <deployment>-pods-manager-role
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["pods"]
  resourceNames: ["<deployment>*"]
  verbs: ["get", "list", "watch", "update", "patch", "exec", "delete"]
I also updated the role binding. But when I try to get the pod:
kubectl --context=<username>-ctx -n <namespace> get pods <deployment>-5594cbfcf4-v4xx8
I'm getting error:
Error from server (Forbidden): pods "<deployment>-5594cbfcf4-v4xx8" is forbidden: User "<username>" cannot get resource "pods" in API group "" in the namespace "<namespace>"
If I add <deployment>-5594cbfcf4-v4xx8 to the list of resourceNames, user can access this pod.
Is it possible to grant access to the specific pods based on deployment name?
In Kubernetes, pods are treated as ephemeral "cattle"; they come and go. You shouldn't try to manage RBAC per pod.
In your use case there is unfortunately no way to grant a role over a set of pods matching a certain name, because the resourceNames field doesn't support patterns such as prefixes or suffixes. Don't get confused: a single asterisk character ('*') has the special meaning "all", but it's not a pattern, so 'my-app-*' in resourceNames will not work. Tickets were opened for this feature, but it wasn't implemented:
https://github.com/kubernetes/kubernetes/issues/56582
There was also a request to be able to manage RBAC via labels, but that feature isn't implemented either:
https://github.com/kubernetes/kubernetes/issues/44703
Therefore, you probably need to change your model and grant users a role to manage all pods in a certain namespace; your deployment should be the only "source of pods" in that namespace. That way, you will not need to specify any resource names.
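As a sketch, a namespace-scoped role without resourceNames could look like the following (the role name and verb list are illustrative; note that exec is granted through the pods/exec subresource rather than an "exec" verb):
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: <my-namespace>
  name: <deployment>-pods-manager-role
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list", "watch", "delete"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create"]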

Kubernetes Dashboard v1.8.3 deployment

I am just trying to deploy kubernetes Dashboard in a namespace called "test".
https://raw.githubusercontent.com/kubernetes/dashboard/v1.8.3/src/deploy/recommended/kubernetes-dashboard.yaml
I just replaced the namespace kube-system with test in the above YAML file and executed it as below.
kubectl apply -f kubernetes-dashboard.yaml -n test
But it is still trying to do something with the kube-system namespace and I am getting the error below.
Image:
gcr.io/google_containers/kubernetes-dashboard-amd64:v1.8.3
Error:
2018/05/31 16:56:55 Starting overwatch
2018/05/31 16:56:55 Using in-cluster config to connect to apiserver
2018/05/31 16:56:55 Using service account token for csrf signing
2018/05/31 16:56:55 No request provided. Skipping authorization
2018/05/31 16:56:55 Successful initial request to the apiserver, version: v1.10.2
2018/05/31 16:56:55 Generating JWE encryption key
2018/05/31 16:56:55 New synchronizer has been registered: kubernetes-dashboard-key-holder-kube-system. Starting
2018/05/31 16:56:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system
2018/05/31 16:56:55 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: unexpected object: &Secret{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string][]byte{},Type:,StringData:map[string]string{},}
2018/05/31 16:56:57 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.
2018/05/31 16:56:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system
2018/05/31 16:56:57 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout
2018/05/31 16:56:59 Storing encryption key in a secret
panic: secrets is forbidden: User "system:serviceaccount:test:dashboard" cannot create secrets in the namespace "kube-system"
goroutine 1 [running]:
github.com/kubernetes/dashboard/src/app/backend/auth/jwe.(*rsaKeyHolder).init(0xc420254e00)
/home/travis/build/kubernetes/dashboard/.tmp/backend/src/github.com/kubernetes/dashboard/src/app/backend/auth/jwe/keyholder.go:131 +0x2d3
github.com/kubernetes/dashboard/src/app/backend/auth/jwe.NewRSAKeyHolder(0x1a7ee00, 0xc42037a5a0, 0xc42037a5a0, 0x127b962)
/home/travis/build/kubernetes/dashboard/.tmp/backend/src/github.com/kubernetes/dashboard/src/app/backend/auth/jwe/keyholder.go:170 +0x83
main.initAuthManager(0x1a7e300, 0xc4201e2240, 0xc42066dc68, 0x1)
/home/travis/build/kubernetes/dashboard/.tmp/backend/src/github.com/kubernetes/dashboard/src/app/backend/dashboard.go:183 +0x12f
main.main()
/home/travis/build/kubernetes/dashboard/.tmp/backend/src/github.com/kubernetes/dashboard/src/app/backend/dashboard.go:101 +0x28c
I created the Secret, RoleBinding, ServiceAccount, Deployment, Service & Ingress in the namespace "test". I removed the namespace from the YAML file and supplied it through -n "test" while creating.
That happened because you created the ServiceAccount in a different namespace, namely test, but as the error says, it needs to be deployed in kube-system in order to be able to function.
You can find a nice walkthrough and possibly some clarifications here
However, if you still want to deploy on a different namespace, you would have to add the following role and rolebinding to your cluster:
# ------------------- Dashboard Role & Role Binding ------------------- #
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
# Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
# Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
# Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder"]
  verbs: ["get", "update", "delete"]
# Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
# Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: test
---
I am afraid there is no other way around it; you have to allow the service account to create secrets in the kube-system namespace.
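You can confirm the permission is in place by impersonating the service account; the account name system:serviceaccount:test:dashboard is taken from the panic message above:
# should print "yes" once the Role and RoleBinding above are applied
kubectl auth can-i create secrets --namespace kube-system --as system:serviceaccount:test:dashboard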

Kubernetes cluster role error: at least one verb must be specified

I have the following ClusterRole:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: test
rules:
- apiGroups: [""]
  resources: ["crontabs"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
with the following error on create
jonathan@sc-test-cluster:~$ kubectl create clusterrole role.yml
error: at least one verb must be specified
You either create it from a file using -f, or build it by specifying options to kubectl create clusterrole (see also the docs), but not both. Try the following:
$ kubectl create -f role.yml
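Alternatively, kubectl can build the ClusterRole imperatively from flags; a rough equivalent of the YAML above would be the command below (depending on the kubectl version it may warn or refuse if the crontabs resource type is not known to the cluster):
kubectl create clusterrole test \
  --verb=get,list,watch,create,update,patch,delete \
  --resource=crontabs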