Kubernetes Dashboard v1.8.3 deployment

I am trying to deploy the Kubernetes Dashboard in a namespace called "test".
https://raw.githubusercontent.com/kubernetes/dashboard/v1.8.3/src/deploy/recommended/kubernetes-dashboard.yaml
I replaced the namespace kube-system with test in the above YAML file and applied it as below:
kubectl apply -f kubernetes-dashboard.yaml -n test
But it is still trying to work with the kube-system namespace and fails with the error below.
Image:
gcr.io/google_containers/kubernetes-dashboard-amd64:v1.8.3
Error:
2018/05/31 16:56:55 Starting overwatch
2018/05/31 16:56:55 Using in-cluster config to connect to apiserver
2018/05/31 16:56:55 Using service account token for csrf signing
2018/05/31 16:56:55 No request provided. Skipping authorization
2018/05/31 16:56:55 Successful initial request to the apiserver, version: v1.10.2
2018/05/31 16:56:55 Generating JWE encryption key
2018/05/31 16:56:55 New synchronizer has been registered: kubernetes-dashboard-key-holder-kube-system. Starting
2018/05/31 16:56:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system
2018/05/31 16:56:55 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: unexpected object: &Secret{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string][]byte{},Type:,StringData:map[string]string{},}
2018/05/31 16:56:57 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.
2018/05/31 16:56:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system
2018/05/31 16:56:57 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout
2018/05/31 16:56:59 Storing encryption key in a secret
panic: secrets is forbidden: User "system:serviceaccount:test:dashboard" cannot create secrets in the namespace "kube-system"
goroutine 1 [running]:
github.com/kubernetes/dashboard/src/app/backend/auth/jwe.(*rsaKeyHolder).init(0xc420254e00)
/home/travis/build/kubernetes/dashboard/.tmp/backend/src/github.com/kubernetes/dashboard/src/app/backend/auth/jwe/keyholder.go:131 +0x2d3
github.com/kubernetes/dashboard/src/app/backend/auth/jwe.NewRSAKeyHolder(0x1a7ee00, 0xc42037a5a0, 0xc42037a5a0, 0x127b962)
/home/travis/build/kubernetes/dashboard/.tmp/backend/src/github.com/kubernetes/dashboard/src/app/backend/auth/jwe/keyholder.go:170 +0x83
main.initAuthManager(0x1a7e300, 0xc4201e2240, 0xc42066dc68, 0x1)
/home/travis/build/kubernetes/dashboard/.tmp/backend/src/github.com/kubernetes/dashboard/src/app/backend/dashboard.go:183 +0x12f
main.main()
/home/travis/build/kubernetes/dashboard/.tmp/backend/src/github.com/kubernetes/dashboard/src/app/backend/dashboard.go:101 +0x28c
I created the Secret, RoleBinding, ServiceAccount, Deployment, Service and Ingress in the namespace "test". I removed the namespace from the YAML file and supplied it through -n test while creating the resources.

That happened because you created the ServiceAccount in a different namespace (test), but, as the error says, the Dashboard expects to store its secrets in kube-system in order to function.
You can find a nice walkthrough and possibly some clarifications here
However, if you still want to deploy it in a different namespace, you have to add the following Role and RoleBinding to your cluster:
# ------------------- Dashboard Role & Role Binding ------------------- #
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: test
---
I am afraid there is no other way around it; you have to allow the service account to create secrets in the kube-system namespace.
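Once that Role and RoleBinding are applied, you can verify the grant with kubectl auth can-i, impersonating the Dashboard's service account (the test:dashboard name below is taken from the error message above):
kubectl auth can-i create secrets --as=system:serviceaccount:test:dashboard -n kube-system
It should answer yes once the binding is in place, and no before.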

Related

Kubernetes Role should grant access to all resources but it ignores some resources

The role namespace-limited should have full access to all resources (of the specified API groups) inside of a namespace. My Role manifest looks like this:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: namespace-limited
  namespace: restricted-xample
rules:
- apiGroups:
  - core
  - apps
  - batch
  - networking.k8s.io
  resources: ["*"] # asterisk to grant access to all resources of the specified API groups
  verbs: ["*"]
I associated the Role with a ServiceAccount using a RoleBinding, but unfortunately this ServiceAccount has no access to Pod, Service, Secret, ConfigMap and Endpoints resources. These resources are all part of the core API group. All the other common workloads work, though. Why is that?
The core group, also referred to as the legacy group, is at the REST path /api/v1 and uses apiVersion: v1
You need to use "" (an empty string) for the core API group.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: restricted-xample
  name: namespace-limited
rules:
- apiGroups: ["", "apps", "batch", "networking.k8s.io"] # "" indicates the core API group
  resources: ["*"]
  verbs: ["*"]
To test the permissions of the service account, use the commands below:
kubectl auth can-i get pods --as=system:serviceaccount:restricted-xample:default -n restricted-xample
kubectl auth can-i get secrets --as=system:serviceaccount:restricted-xample:default -n restricted-xample
kubectl auth can-i get configmaps --as=system:serviceaccount:restricted-xample:default -n restricted-xample
kubectl auth can-i get endpoints --as=system:serviceaccount:restricted-xample:default -n restricted-xample
Just figured out that it works when I omit the core keyword, as in this example. The following Role manifest works:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: namespace-limited
  namespace: restricted-xample
rules:
- apiGroups: ["", "apps", "batch", "networking.k8s.io"]
  resources: ["*"]
  verbs: ["*"]
But why it does not work when I specify core as the API group is a mystery to me.
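For what it's worth, you can confirm that the core group's name is the empty string, and that there is no API group literally called core, by listing the resources that belong to it:
kubectl api-resources --api-group="" -o name
Pods, services, secrets, configmaps and endpoints all show up in that list, which is why "" is the value the Role needs.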

What kubernetes permissions does GitLab runner kubernetes executor need?

I've installed the GitLab runner on a Kubernetes cluster under a namespace gitlab-runner, like so:
# cat <<EOF | kubectl create -f -
{
  "apiVersion": "v1",
  "kind": "Namespace",
  "metadata": {
    "name": "gitlab-runner",
    "labels": {
      "name": "gitlab-runner"
    }
  }
}
EOF
# helm repo add gitlab https://charts.gitlab.io
# cat <<EOF | helm install --namespace gitlab-runner gitlab-runner -f - gitlab/gitlab-runner
gitlabUrl: https://gitlab.mycompany.com
runnerRegistrationToken: "c................Z"
EOF
The GitLab runner properly registers with the GitLab project but all jobs fail.
A quick look into the GitLab runner logs tells me that the service account used by the GitLab runner lacks the proper permissions:
# kubectl logs --namespace gitlabrunner gitlab-runner-gitlab-runner-xxxxxxxxx
ERROR: Job failed (system failure): pods is forbidden: User "system:serviceaccount:gitlabrunner:default" cannot create resource "pods" in API group "" in the namespace "gitlab-runner" duration=42.095493ms job=37482 project=yyy runner=xxxxxxx
What permission does the gitlab runner kubernetes executor need?
I couldn't find a list of permissions in the GitLab runner documentation, but I tried adding permissions one by one and compiled the list of permissions required for basic functioning.
The GitLab runner will use the service account system:serviceaccount:gitlab-runner:default, so we need to create a Role and assign it to that service account.
# cat <<EOF | kubectl create -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: gitlab-runner
  namespace: gitlab-runner
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["list", "get", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create"]
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get"]
EOF
# kubectl create rolebinding --namespace=gitlab-runner gitlab-runner-binding --role=gitlab-runner --serviceaccount=gitlab-runner:default
With that Role assigned to the service account, the GitLab runner will be able to create, exec into, and delete pods, and also access their logs.
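Before running a job, you can double-check the binding with kubectl auth can-i, impersonating the runner's service account (a quick sanity check using the names from above):
kubectl auth can-i create pods --as=system:serviceaccount:gitlab-runner:default -n gitlab-runner
kubectl auth can-i create pods --subresource=exec --as=system:serviceaccount:gitlab-runner:default -n gitlab-runner
Both should answer yes once the Role and RoleBinding exist.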
Unfortunately, I couldn't find this in the official docs either, just like @RubenLaguna stated. However, the default values.yaml of the Kubernetes GitLab runner Helm chart lets you define these RBAC rules nicely and lists some examples, which I started with.
In my case I had to add a few and went with the following:
rbac:
  create: true
  rules:
  - apiGroups: [""]
    resources: ["pods", "secrets", "configmaps"]
    verbs: ["get", "list", "watch", "create", "patch", "delete", "update"]
  - apiGroups: [""]
    resources: ["pods/exec", "pods/attach"]
    verbs: ["create", "patch", "delete"]
  - apiGroups: [""]
    resources: ["pods/log"]
    verbs: ["get"]
If you installed the GitLab runners using the Helm chart, the runners do not use the service account created by the chart; instead they use the default service account. There is a bug related to this (https://gitlab.com/gitlab-org/charts/gitlab-runner/-/issues/353). To work around this problem, we created a ClusterRoleBinding as below.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: gitlab-runner-role-default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: <helm-created-cluster-role-here>
subjects:
- kind: ServiceAccount
  name: default
  namespace: gitlab-runner

RBAC role to manage single pod with dynamic name

I need to grant access to one deployment and all pods of this deployment using RBAC.
I've managed to configure a Role and RoleBinding for the deployment, and it's working fine:
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: <my-namespace>
  name: <deployment>-manager-role
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["deployments"]
  resourceNames: ["<deployment>"]
  verbs: ["get", "list", "watch", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: <deployment>-manager-binding
  namespace: <my-namespace>
subjects:
- kind: User
  name: <username>
  apiGroup: ""
roleRef:
  kind: Role
  name: <deployment>-manager-role
  apiGroup: ""
Using this Role, the user can access, update and patch the deployment. The deployment creates pods with dynamic names (like <deployment>-5594cbfcf4-v4xx8). I tried to allow this user to access these pods (get, list, watch, read logs, exec, delete) using the deployment name and using the deployment name plus the wildcard character *:
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: <my-namespace>
  name: <deployment>-pods-manager-role
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["pods"]
  resourceNames: ["<deployment>*"]
  verbs: ["get", "list", "watch", "update", "patch", "exec", "delete"]
I also updated the role binding. But when I try to get the pod:
kubectl --context=<username>-ctx -n <namespace> get pods <deployment>-5594cbfcf4-v4xx8
I'm getting an error:
Error from server (Forbidden): pods "<deployment>-5594cbfcf4-v4xx8" is forbidden: User "<username>" cannot get resource "pods" in API group "" in the namespace "<namespace>"
If I add <deployment>-5594cbfcf4-v4xx8 to the list of resourceNames, the user can access this pod.
Is it possible to grant access to the specific pods based on deployment name?
In Kubernetes, pods are considered ephemeral "cattle"; they come and go. You shouldn't try to manage RBAC per pod.
In your use case, there is unfortunately no way to grant a role over a set of pods matching a certain name, because the resourceNames field doesn't support patterns like prefixes or suffixes. Don't get confused: a single asterisk character ("*") has a special meaning of "all", but it's not a pattern, so my-app-* in resourceNames will not work. There were tickets opened for this feature, but it wasn't implemented:
https://github.com/kubernetes/kubernetes/issues/56582
There was also a request to be able to manage RBAC based on labels, but that feature isn't implemented either:
https://github.com/kubernetes/kubernetes/issues/44703
Therefore, you probably need to change your model to grant roles to users to manage all pods in a certain namespace. Your deployment should be the only "source of pods" in that namespace. That way, you will not need to specify any resource names.
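A minimal sketch of that model (names and API version follow the manifests above and are only illustrative): the pods rule simply drops resourceNames, so it covers whatever pods the deployment creates.
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: <my-namespace>
  name: <deployment>-pods-manager-role
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list", "watch", "delete"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create"]
Bound to <username> with a RoleBinding as before, this grants those verbs over every pod in the namespace, which is acceptable precisely because the deployment is the only source of pods there.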

RBAC For kubernetes Dashboard

I have a user "A" and namespaces X, Y and Z. I have created an RBAC Role and RoleBinding for user "A", who has access to namespace "X".
I wanted to give user "A" access to the Kubernetes Dashboard (which has a Role and RoleBinding in kube-system). But when I give him access to the Dashboard, user "A" is able to see all the namespaces.
I want him to see only namespace X, to which he has access.
How can I go about this?
What's the version of your Dashboard? As far as I know, from 1.7 on, the Dashboard has used a more secure setup: by default it has only the minimal set of privileges required to make the Dashboard work.
Anyway, you can check the privileges of the ServiceAccount used by the Dashboard and make sure it has only the minimal privileges, like this:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
Then, create an RBAC rule to give user A full privileges in namespace X:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: user-A-admin
  namespace: X
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: A
Make sure user A doesn't have any other RBAC rules.
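You can then verify the separation with impersonation (run these as a cluster admin; user name A and namespaces X and Y are the ones from the question):
kubectl auth can-i list pods --as=A -n X
kubectl auth can-i list pods --as=A -n Y
The first should answer yes and the second no.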

kubernetes RBAC role verbs to exec to pod

In my 1.9 cluster I created this deployment Role for the dev user. Deployment works as expected. Now I want to give the developer exec and logs access. What rule do I need to add to be able to exec into a pod?
kind: Role
metadata:
  name: deployment-manager
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["deployments", "replicasets", "pods"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
Error message:
kubectl exec nginx -it -- sh
Error from server (Forbidden): pods "nginx" is forbidden: User "dev" cannot create pods/exec in the namespace "dev"
Thanks
SR
The RBAC docs say that
Most resources are represented by a string representation of their name, such as “pods”, just as it appears in the URL for the relevant API endpoint. However, some Kubernetes APIs involve a “subresource”, such as the logs for a pod. [...] To represent this in an RBAC role, use a slash to delimit the resource and subresource.
To allow a subject to read both pods and pod logs, and be able to exec into the pod, you would write:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: pod-and-pod-logs-reader
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create"]
Some client libraries may do an HTTP GET to negotiate a WebSocket first, which would require the "get" verb. kubectl sends an HTTP POST instead, which is why it requires the "create" verb in that case.
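So if a developer's tooling negotiates the WebSocket with a GET, a variant of the exec rule would simply add that verb; this is only a sketch and is not needed for plain kubectl:
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create", "get"]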