Setting up Helm RBAC Per Namespace - kubernetes

I'm following the official Helm documentation for "Deploy Tiller in a namespace, restricted to deploying resources only in that namespace". Here is my bash script:
Namespace="$1"
kubectl create namespace $Namespace
kubectl create serviceaccount "tiller-$Namespace" --namespace $Namespace
kubectl create role "tiller-role-$Namespace" \
--namespace $Namespace \
--verb=* \
--resource=*.,*.apps,*.batch,*.extensions
kubectl create rolebinding "tiller-rolebinding-$Namespace" \
--namespace $Namespace \
--role="tiller-role-$Namespace" \
--serviceaccount="$Namespace:tiller-$Namespace"
helm init \
--service-account "tiller-$Namespace" \
--tiller-namespace $Namespace \
--override 'spec.template.spec.containers[0].command'='{/tiller,--storage=secret}' \
--upgrade \
--wait
Running helm upgrade gives me the following error:
Error: UPGRADE FAILED: configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot list configmaps in the namespace "kube-system"
Is there a bug in the official documentation? Have I read it wrong?

I'm not sure about the correct syntax of the --resource flag in your script, in particular whether asterisk symbols ("*") are allowed there; see this issue reported on GitHub:
$ kubectl create role "tiller-role-$Namespace" \
--namespace $Namespace \
--verb=* \
--resource=*.,*.apps,*.batch,*.extensions
the server doesn't have a resource type "*"
You can inspect the role object that was actually created in your cluster:
kubectl get role tiller-role-$Namespace -n $Namespace -o yaml
Otherwise, try creating the role for Tiller from a YAML file, as shown in the documentation:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tiller-manager
  namespace: tiller-world
rules:
- apiGroups: ["", "batch", "extensions", "apps"]
  resources: ["*"]
  verbs: ["*"]
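The same page of the documentation pairs this Role with a RoleBinding for the Tiller service account; a sketch following that example (it assumes a service account named tiller in the tiller-world namespace, as the Helm docs use):
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tiller-binding
  namespace: tiller-world
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: tiller-manager
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: tiller-world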
Moreover, keep in mind that if you have installed Tiller in a namespace other than the default one (kube-system), you need to specify the namespace where Tiller resides whenever you invoke a Helm command:
$ helm --tiller-namespace $Namespace version
Client: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
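The same applies to helm upgrade: the error you got mentions Tiller's default service account in kube-system, which suggests the upgrade was run without --tiller-namespace. A sketch (the release and chart names here are placeholders):
$ helm upgrade --tiller-namespace $Namespace my-release ./my-chart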

Related

letsencrypt kubernetes: How can i include ClusterIssuer in cert-manager using helm chart instead of deploying it as a separate manifest?

I would like to add SSL support to my web app (WordPress) deployed on Kubernetes. For that I deployed cert-manager using Helm as follows:
helm upgrade \
cert-manager \
--namespace cert-manager \
--version v1.9.1 \
--set installCRDs=true \
--set ingressShim.defaultIssuerName=letsencrypt-prod \
--set ingressShim.defaultIssuerKind=ClusterIssuer \
--set ingressShim.defaultIssuerGroup=cert-manager.io \
--create-namespace \
jetstack/cert-manager --install
Then I deployed WordPress using Helm as well, with a values.yml that looks like:
# Change default svc type
service:
  type: ClusterIP
# Ingress resource
ingress:
  enabled: true
  hostname: app.benighil-mohamed.com
  path: /
  annotations:
    #kubernetes.io/ingress.class: azure/application-gateway
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: letsencrypt-prod
  extraTls:
  - hosts:
    - "{{ .Values.ingress.hostname }}" # ie: app.benighil-mohamed.com
    secretName: "{{ .Release.Name }}-{{ .Values.ingress.hostname }}" # ie: wp-app.benighil-mohamed.com
However, when I check the certificates and certificaterequests I get the following:
vscode ➜ /workspaces/flux/ingress $ kubectl get certificate -n app -owide
NAME                               READY   SECRET                             ISSUER             STATUS                                         AGE
wp-benighil.benighil-mohamed.com   False   wp-benighil.benighil-mohamed.com   letsencrypt-prod   Issuing certificate as Secret does not exist   25m

vscode ➜ /workspaces/flux/ingress $ kubectl get certificaterequests -n app -owide
NAME                                     APPROVED   DENIED   READY   ISSUER             REQUESTOR                                         STATUS                                                                                              AGE
wp-benighil.benighil-mohamed.com-45d6s   True                False   letsencrypt-prod   system:serviceaccount:cert-manager:cert-manager   Referenced "ClusterIssuer" not found: clusterissuer.cert-manager.io "letsencrypt-prod" not found   27m
Any ideas, please?
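For reference, the chart values above assume a ClusterIssuer named letsencrypt-prod already exists cluster-wide; the CertificateRequest status says exactly that it cannot find one. A minimal sketch of such a manifest (the email address and solver settings are placeholders, not taken from the question):
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # Let's Encrypt production endpoint
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com        # placeholder contact address
    privateKeySecretRef:
      name: letsencrypt-prod        # secret storing the ACME account key
    solvers:
    - http01:
        ingress:
          class: nginx              # matches the ingress class used above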

how to move prometheus adapter to another namespace?

Right now I have Prometheus and the Prometheus adapter in different namespaces. I tried to change the adapter's YAML, but I was not successful:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "2"
  creationTimestamp: "2020-01-30T08:49:05Z"
  generation: 2
  labels:
    app: prometheus-adapter
    chart: prometheus-adapter-2.0.1
    heritage: Tiller
    release: prometheus-adapter
  name: prometheus-adapter
  namespace: my-custom-namespace
  resourceVersion: "18513075"
  selfLink: /apis/apps/v1/namespaces/my-custom-namespace/deployments/prometheus-adapter
...
But I see this error:
the namespace of the object (my-custom-namespace) does not match the namespace on the request (default)
How do I fix it?
You cannot edit an existing resource to change its namespace. You need to delete the existing deployment first and then recreate it in the other namespace.
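With plain kubectl that would look roughly like this (assuming, as the error suggests, that the live object is in default; the manifest file name is a placeholder):
# delete the deployment from the namespace it currently lives in
kubectl delete deployment prometheus-adapter -n default

# recreate it with metadata.namespace set to the target namespace
kubectl apply -f prometheus-adapter.yaml -n my-custom-namespace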
Edit:
With Helm 2 you need to delete the release first (helm delete --purge release-name) and then deploy it to the other namespace: helm install stable/prometheus-adapter --namespace namespace-name
With Helm 3, if the --namespace flag is not honored (as in early releases), you need to delete the existing deployment and then redeploy it after switching your kubectl context to the target namespace, as in the example below deploying the metrics server.
$ helm install metricserver stable/metrics-server
Error: the namespace from the provided object "kube-system" does not match the namespace "default". You must pass '--namespace=kube-system' to perform this operation.
$ helm install metricserver stable/metrics-server --namespace=kube-system
Error: the namespace from the provided object "kube-system" does not match the namespace "default". You must pass '--namespace=kube-system' to perform this operation.
$ kubectl config set-context kube-system --cluster=kubernetes --user=kubernetes-admin --namespace=kube-system
Context "kube-system" created.
$ kubectl config use-context kube-system
Switched to context "kube-system".
$ kubectl config get-contexts
CURRENT   NAME                          CLUSTER      AUTHINFO           NAMESPACE
*         kube-system                   kubernetes   kubernetes-admin   kube-system
          kubernetes-admin@kubernetes   kubernetes   kubernetes-admin
          metallb                       kubernetes   kubernetes-admin   metallb
          nfstorage                     kubernetes   kubernetes-admin   nfstorage
$ helm install metricserver stable/metrics-server
NAME: metricserver
LAST DEPLOYED: 2019-05-26 14:37:45.582245559 -0700 PDT m=+2.942929639
NAMESPACE: kube-system
STATUS: deployed
For Helm 2 you can install the chart in any namespace you want by using:
helm install stable/prometheus-adapter --name my-release --namespace foo
Keep in mind that you need to remove the previous release first, which can be done with helm delete --purge my-release.
Also, there is a really nice article regarding the changes in Helm 3: Breaking Changes in Helm 3 (and How to Fix Them).

Cannot list resource "configmaps" in API group when deploying Weaviate k8s setup on GCP

When running (on GCP):
$ helm upgrade \
--values ./values.yaml \
--install \
--namespace "weaviate" \
"weaviate" \
weaviate.tgz
It returns:
UPGRADE FAILED
Error: configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
Error: UPGRADE FAILED: configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
UPDATE, based on the solution below:
$ vim rbac-config.yaml
Add to the file:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
Run:
$ kubectl create -f rbac-config.yaml
$ helm init --service-account tiller --upgrade
Note: based on Helm v2.
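To verify that Tiller now runs under the dedicated service account, you can inspect its deployment (tiller-deploy is the name helm init creates by default):
$ kubectl -n kube-system get deploy tiller-deploy \
  -o jsonpath='{.spec.template.spec.serviceAccountName}'
tiller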
tl;dr: Setup Helm with the appropriate authorization settings for your cluster, see https://v2.helm.sh/docs/using_helm/#role-based-access-control
Long Answer
Your experience is not specific to the Weaviate Helm chart; rather, it looks like Helm is not set up according to the cluster's authorization settings. Other Helm commands should fail with the same or a similar error.
The following error
Error: configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
means that the default service account in the kube-system namespace is lacking permissions. I assume you have installed Helm/Tiller in the kube-system namespace, as this is the default if no other arguments are specified on helm init. Since you haven't created a specific service account for Tiller to use, it falls back to the default service account.
Since you mention that you are running on GCP, I assume this means you are using GKE. GKE has RBAC authorization enabled by default. In an RBAC setting no one has any rights by default; all rights need to be granted explicitly.
The Helm docs list several options for making Helm/Tiller work in an RBAC-enabled setting. If the cluster has the sole purpose of running Weaviate, you can choose the simplest option: a service account with the cluster-admin role. The process described there essentially creates a dedicated service account for Tiller and binds it to the existing cluster-admin ClusterRole via a ClusterRoleBinding. Note that this effectively makes Helm/Tiller an admin of the entire cluster.
If you are running a multi-tenant cluster and/or want to limit Tiller's permissions to a specific namespace, you need to choose one of the alternatives.
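One of those alternatives is the namespace-restricted setup from the first question on this page; a sketch with placeholder names, assuming a Role like tiller-manager already exists in the target namespace:
# dedicated service account for Tiller in its own namespace
kubectl create serviceaccount tiller --namespace tiller-world

# bind a namespace-scoped Role instead of cluster-admin
kubectl create rolebinding tiller-binding \
  --namespace tiller-world \
  --role=tiller-manager \
  --serviceaccount=tiller-world:tiller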

how to control access for pods/exec only in kubernetes rbac without pods create binded?

I checked the Kubernetes docs and found that the pods/exec resource has no verbs listed, so I don't know how to control access to it on its own. I created a pod; someone else needs to access it using 'exec', but must not be able to create anything in my cluster.
How can I implement this?
Since pods/exec is a subresource of pods, if you want to exec into a pod, you first need to get the pod. Here is my role definition:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create"]
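To actually grant this role to someone you still need a RoleBinding; a sketch (the user name is a placeholder):
kubectl create rolebinding pod-reader-binding \
  --namespace default \
  --role=pod-reader \
  --user=jane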
Maybe you can try this kubectl plugin: https://github.com/zhangweiqaz/go_pod

$ kubectl go -h
kubectl exec in pod with username. For example:

  kubectl go pod_name

Usage:
  go [flags]

Flags:
  -c, --containerName string   containerName
  -h, --help                   help for go
  -n, --namespace string       namespace
  -u, --username string        username, this user must exist in image, default: dev

RBAC - Limit access for one service account

I want to limit the permissions of the following service account, which I created as follows:
kubectl create serviceaccount alice --namespace default
secret=$(kubectl get sa alice -o json | jq -r .secrets[].name)
kubectl get secret $secret -o json | jq -r '.data["ca.crt"]' | base64 -d > ca.crt
user_token=$(kubectl get secret $secret -o json | jq -r '.data["token"]' | base64 -d)
c=`kubectl config current-context`
name=`kubectl config get-contexts $c | awk '{print $3}' | tail -n 1`
endpoint=`kubectl config view -o jsonpath="{.clusters[?(@.name == \"$name\")].cluster.server}"`
kubectl config set-cluster cluster-staging \
--embed-certs=true \
--server=$endpoint \
--certificate-authority=./ca.crt
kubectl config set-credentials alice-staging --token=$user_token
kubectl config set-context alice-staging \
--cluster=cluster-staging \
--user=alice-staging \
--namespace=default
kubectl config get-contexts
#kubectl config use-context alice-staging
This account has permission to see everything:
kubectl --context=alice-staging get pods --all-namespaces
I tried to limit it with the following, but it still has all the permissions:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: no-access
rules:
- apiGroups: [""]
  resources: [""]
  verbs: [""]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: no-access-role
subjects:
- kind: ServiceAccount
  name: alice
  namespace: default
roleRef:
  kind: ClusterRole
  name: no-access
  apiGroup: rbac.authorization.k8s.io
The idea is to limit access to a single namespace so I can distribute tokens to users, but I can't get it to work... I think it may be due to inherited permissions, but I cannot disable those for a single service account.
Using: GKE, container-vm
THX!
Note that service accounts are not meant for users, but for processes running inside pods (https://kubernetes.io/docs/admin/service-accounts-admin/).
In Create user in Kubernetes for kubectl you can find how to create a user account for your K8s cluster.
Moreover, I advise you to check whether RBAC is actually enabled in your cluster, which could explain why a user can do more operations than expected. Keep in mind also that RBAC permissions are purely additive: binding an empty role never takes away permissions granted by other bindings.
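A quick way to check both, sketched with the service account from the question:
# if this prints the rbac.authorization.k8s.io group, the RBAC API is available
kubectl api-versions | grep rbac.authorization.k8s.io

# impersonate the service account to see what it is actually allowed to do
kubectl auth can-i list pods --all-namespaces \
  --as=system:serviceaccount:default:alice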