ClusterRole exists and cannot be imported into the current release? - kubernetes

I am trying to install the same chart two times in the same cluster in two different namespaces. However I am getting this error:
Error: rendered manifests contain a resource that already exists. Unable to continue with install: ClusterRole "nfs-provisioner" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "namespace2": current value is "namespace1"
As I understand it, cluster roles are supposed to be independent of the namespace, so I find this contradictory. We are using Helm 3.

I decided to provide a Community Wiki answer that may help other people facing a similar issue.
I assume you want to install the same chart multiple times but get the following error:
Error: rendered manifests contain a resource that already exists. Unable to continue with install: ClusterRole "<CLUSTERROLE_NAME>" in namespace "" exists and cannot be imported into the current release: ...
First, it's important to decide if we really need ClusterRole instead of Role.
As we can find in the Role and ClusterRole documentation:
If you want to define a role within a namespace, use a Role; if you want to define a role cluster-wide, use a ClusterRole.
Second, we can use a variable for the ClusterRole name instead of hard-coding it in the template.
For example, instead of:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: clusterrole-1
...
Try to use something like:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: {{ .Values.clusterrole.name }}
...
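With a value like that, you can install the chart twice with distinct ClusterRole names, for example (the release names and chart path here are illustrative):
$ helm install nfs-1 ./chart -n namespace1 --set clusterrole.name=nfs-provisioner-ns1
$ helm install nfs-2 ./chart -n namespace2 --set clusterrole.name=nfs-provisioner-ns2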
Third, we can use the lookup function and the if control structure to skip creating resources if they already exist.
Take a look at a simple example:
$ cat clusterrole-demo/values.yaml
clusterrole:
  name: clusterrole-1

$ cat clusterrole-demo/templates/clusterrole.yaml
{{- if not (lookup "rbac.authorization.k8s.io/v1" "ClusterRole" "" .Values.clusterrole.name) }}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: {{ .Values.clusterrole.name }}
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - list
  - watch
{{- end }}
In the example above, if ClusterRole clusterrole-1 already exists, it won't be created.
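One caveat worth noting: during helm template or helm install --dry-run, Helm does not contact the API server, so lookup returns an empty result and the guard above behaves as if the ClusterRole did not exist. It only takes effect during a real install or upgrade.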

ClusterRole sets permissions across your whole Kubernetes cluster, not for a particular namespace. I think you are confusing it with Role. You can find further information on the differences between ClusterRole and Role here: Role and ClusterRole.
A Role always sets permissions within a particular namespace; when you create a Role, you have to specify the namespace it belongs in.
ClusterRole, by contrast, is a non-namespaced resource. The resources have different names (Role and ClusterRole) because a Kubernetes object always has to be either namespaced or not namespaced; it can't be both.
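You can see this from the API itself; ClusterRole only appears among the non-namespaced resources (a quick check, output omitted):
$ kubectl api-resources --namespaced=false | grep clusterroles
$ kubectl api-resources --namespaced=true | grep -w roles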

Related

Forbidden after enabling Google Cloud Groups RBAC in GKE

We are enabling Google Cloud Groups RBAC in our existing GKE clusters.
For that, we first created all the groups in Workspace, and also the required "gke-security-groups@ourdomain.com" according to the documentation.
Those groups are created in Workspace with an integration with Active Directory for Single Sign On.
All groups are members of "gke-security-groups@ourdomain" as stated by the documentation, and all groups can view members.
The cluster was updated to enable the flag for Google Cloud Groups RBAC, and we specified the value "gke-security-groups@ourdomain.com".
We then added one of the groups (let's call it group_a@ourdomain.com) to IAM and assigned it a custom role which only grants:
"container.apiServices.get",
"container.apiServices.list",
"container.clusters.getCredentials",
"container.clusters.get",
"container.clusters.list",
This is just the minimum the user needs to be able to log into the Kubernetes cluster, so that Kubernetes RBAC can take over from there.
In Kubernetes, we applied a Role, which allows listing pods in a specific namespace, and a RoleBinding that references the group we just added to IAM.
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: test-role
  namespace: custom-namespace
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: test-rolebinding
  namespace: custom-namespace
roleRef:
  kind: Role
  name: test-role
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: Group
  name: group_a@ourdomain.com
Everything looks good until now. But when trying to list the pods of this namespace with a user that belongs to the group group_a@ourdomain.com, we get:
Error from server (Forbidden): pods is forbidden: User
"my-user@ourdomain.com" cannot list resource "pods" in API group ""
in the namespace "custom-namespace": requires one of ["container.pods.list"]
permission(s).
Of course, if I add container.pods.list to the role assigned to group_a@ourdomain, I can list pods, but it opens access to all namespaces, as this permission in GCloud is global.
What am I missing here?
Not sure if this is relevant, but our organisation in gcloud is called, for example, "my-company.io", while the groups for SSO are named "...@groups.my-company.io", and the gke-security-groups group was also created with the "groups.my-company.io" domain.
Also, if instead of a Group in the RoleBinding, I specify the user directly, it works.
It turned out to be an issue with case-sensitive strings, and nothing related to the actual rules defined in the RBAC objects, which were working as expected.
The group names were created in Azure AD in camel case. These group names were then shown in Google Workspace all in lowercase.
Example in Azure AD:
thisIsOneGroup@groups.mycompany.com
Example configured in the RBACs as shown in Google Workspace:
thisisonegroup@groups.mycompany.com
We copied the names from the Google Workspace UI, all lowercase, and put them in the bindings, and that caused the issue. GKE is case sensitive, so the name configured in the binding did not match the email configured in Google Workspace.
After changing the RBAC bindings to have the same format, everything worked as expected.
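As a side note, one way to catch this kind of mismatch is to test the binding with impersonation before rolling it out (a sketch using the names above; running it requires impersonation rights on your own account):
$ kubectl auth can-i list pods --namespace custom-namespace \
    --as my-user@ourdomain.com \
    --as-group thisIsOneGroup@groups.mycompany.com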
Looks like you are trying to grant access to deployments in the extensions and apps API groups. That requires you to specify the extensions and apps API groups in your role rules:
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - '*'
- apiGroups:
  - extensions
  - apps
  resources:
  - deployments
  - replicasets
  verbs:
  - '*'
I can also recommend recreating the Role and RoleBinding. You can visit the following thread as a reference too: RBAC issue : Error from server (Forbidden).
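For example, recreating the objects from the question might look like this (the manifest file names are hypothetical):
$ kubectl delete rolebinding test-rolebinding -n custom-namespace
$ kubectl delete role test-role -n custom-namespace
$ kubectl apply -f test-role.yaml -f test-rolebinding.yaml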
Edit (01/26/22):
Can you please confirm that you provided the credentials or configuration file (manifest, YAML)? As you may know, this information is provided by Kubernetes via the default service account. You can verify your permissions by running:
$ kubectl auth can-i get pods
The account type you need to use for this is a "service account". To create a new service account with a wider set of permissions, here is a YAML example:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-read-role
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: pod-read-sa
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-read-rolebinding
  namespace: default
subjects:
- kind: ServiceAccount
  name: pod-read-sa
  apiGroup: "" # service accounts live in the core API group
roleRef:
  kind: Role
  name: pod-read-role
  apiGroup: rbac.authorization.k8s.io # roleRef must use the RBAC API group
Please use the following thread as a reference.

What's the difference between "kubectl auth reconcile" and "kubectl apply" for working with RBAC?

I have an ordinary YAML file defining an ordinary Role resource; the YAML should reflect my resource's desired state.
To get a new Role into the cluster I usually run kubectl apply -f my-new-role.yaml,
but now I see this (recommended!?) alternative: kubectl auth reconcile -f my-new-role.yaml.
OK, there may be RBAC relationships, i.e. bindings, but shouldn't an apply do the same thing?
Is there ever a case where one would update (cluster) roles but not want their related (cluster) bindings updated?
The kubectl auth reconcile command-line utility was added in Kubernetes v1.8.
Properly applying RBAC permissions is a complex task because you need to compute logical covers operations between rule sets.
As you can see in the CHANGELOG-1.8.md:
Added RBAC reconcile commands with kubectl auth reconcile -f FILE. When passed a file which contains RBAC roles, rolebindings, clusterroles, or clusterrolebindings, this command computes covers and adds the missing rules. The logic required to properly apply RBAC permissions is more complicated than a JSON merge because you have to compute logical covers operations between rule sets. This means that we cannot use kubectl apply to update RBAC roles without risking breaking old clients, such as controllers.
The kubectl auth reconcile command will ignore any resources that are not Role, RoleBinding, ClusterRole, and ClusterRoleBinding objects, so you can safely run reconcile on the full set of manifests (see: Use 'kubectl auth reconcile' before 'kubectl apply').
I've created an example to demonstrate how useful the kubectl auth reconcile command is.
I have a simple RoleBinding named secret-admin that refers to the secret-reader Role, and I want to change its roleRef (the Role that this binding refers to):
NOTE: A binding to a different role is a fundamentally different binding (see: A binding to a different role is a fundamentally different binding).
# BEFORE
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: secret-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: secret-reader
subjects:
- kind: ServiceAccount
  name: service-account-1
  namespace: default

# AFTER
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: secret-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: secret-creator
subjects:
- kind: ServiceAccount
  name: service-account-1
  namespace: default
As we know, roleRef is immutable, so it is not possible to update this secret-admin RoleBinding using kubectl apply:
$ kubectl apply -f secret-admin.yml
The RoleBinding "secret-admin" is invalid: roleRef: Invalid value: rbac.RoleRef{APIGroup:"rbac.authorization.k8s.io", Kind:"Role", Name:"secret-creator"}: cannot change roleRef
Instead, we can use kubectl auth reconcile. When a RoleBinding needs a new roleRef, kubectl auth reconcile handles deleting and recreating the related objects for us:
$ kubectl auth reconcile -f secret-admin.yml
rolebinding.rbac.authorization.k8s.io/secret-admin reconciled
reconciliation required recreate
Additionally, you can use the --remove-extra-permissions and --remove-extra-subjects options.
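For example, to make the live binding match the file exactly, removing any subjects or permissions that exist in the cluster but not in the manifest:
$ kubectl auth reconcile -f secret-admin.yml --remove-extra-subjects --remove-extra-permissions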
Finally, we can check if everything has been successfully updated:
$ kubectl describe rolebinding secret-admin
Name:         secret-admin
Labels:       <none>
Annotations:  <none>
Role:
  Kind:  Role
  Name:  secret-creator
Subjects:
  Kind            Name               Namespace
  ----            ----               ---------
  ServiceAccount  service-account-1  default

Setup a Kubernetes Namespace and Roles Using Helm

I am trying to set up Kubernetes for my company. In that process I am trying to learn Helm.
One of the tasks I have is to set up automation that takes a supplied namespace name parameter, creates a namespace, and sets up the correct permissions in that namespace for the deployment user account.
I can do this simply with a script that uses kubectl apply like this:
kubectl create namespace $namespaceName
kubectl create rolebinding deployer-edit --clusterrole edit --user deployer --namespace $namespaceName
But I am wondering if I should set up things like this using Helm charts. As I look at Helm charts, it seems that everything is a deployment. I am not sure that this fits the model of "deploying" things. It is more just a general setup of a namespace that will then allow deployments into it. But I want to try it out as a Helm chart if it is possible.
How can I create a Kubernetes namespace and rolebinding using Helm?
A Namespace is a Kubernetes object and it can be described in YAML, so Helm can create one. @mdaniel's answer describes the syntax for doing it for a single Namespace and the corresponding RoleBinding.
There is a chicken-and-egg problem if you are trying to use this syntax to create the Helm installation namespace, though. In Helm 3, metadata about the installation is stored in Kubernetes objects, usually in the same namespace you're installing into:
helm install release-name ./a-chart-that-creates-a-namespace --namespace ns
If the namespace doesn't already exist, then Helm can't retrieve the installation metadata; or, if it does, then the declaration of the Namespace object in the chart will conflict with an existing object in the cluster. You can create other objects this way (like RoleBindings) but Namespaces themselves are a problem.
But! You can create other namespaces safely. You can also use Helm's templating constructs to create multiple objects based on what's present in the .Values configuration. So if your values.yaml file (possibly environment-specific) has
namespaces: [service-a, service-b]
clusterRole: edit
user: deploy
Then you can write a template file like:
{{- $top := . }}
{{- range $namespace := .Values.namespaces -}}
---
apiVersion: v1
kind: Namespace
metadata:
  name: {{ $namespace }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: {{ $namespace }}
  name: deployer-edit
roleRef:
  apiGroup: rbac.authorization.k8s.io # roleRef must use the RBAC API group
  kind: ClusterRole
  name: {{ $top.Values.clusterRole }}
subjects:
- apiGroup: rbac.authorization.k8s.io # likewise required for User subjects
  kind: User
  name: {{ $top.Values.user }}
{{ end -}}
This will create two YAML documents for each item in .Values.namespaces. Since the range looping construct overwrites the . special variable, we save its value in a $top local variable before we start, and then use $top.Values where we'd otherwise need to reference .Values. We also need to make sure to explicitly name the metadata: { namespace: } of each object we create, since we're not using the default installation namespace.
You need to make sure the helm install --namespace name isn't any of the namespaces you're managing with this chart.
This would let you have a single chart that manages all of the per-service namespaces. If you need to change the set of services, you can just update the chart values and helm upgrade. The one other caution is that this will happily delete namespaces with no warning if you remove a value from the .Values.namespaces list, and it will take everything in that namespace with it (notably, any PersistentVolumeClaims that hold data you might need).
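A usage sketch, assuming the chart lives in ./namespaces-chart and is installed into its own dedicated namespace (all names here are illustrative):
$ helm install namespaces ./namespaces-chart --namespace helm-infra --create-namespace
# later, after editing the namespaces list in values.yaml:
$ helm upgrade namespaces ./namespaces-chart --namespace helm-infra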
Almost any chart for software that needs to interact with Kubernetes itself will include RBAC resources, so it is for sure not just Deployments:
# templates/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: {{ .Release.Namespace }}
  name: {{ .Values.bindingName }}
roleRef:
  apiGroup: rbac.authorization.k8s.io # roleRef must use the RBAC API group
  kind: ClusterRole
  name: {{ .Values.clusterRole }}
subjects:
- apiGroup: rbac.authorization.k8s.io # likewise required for User subjects
  kind: User
  name: {{ .Values.user }}
then a values.yaml isn't strictly required, but helps folks know what values could be provided:
# values.yaml
bindingName: deployment-edit
clusterRole: edit
user: deployer
Helm v3 has --create-namespace, which will create the provided --namespace if it doesn't already exist; that isn't very declarative, but it does achieve the end result just like the kubectl version.
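For example:
$ helm install release-name ./chart --namespace target-ns --create-namespace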
It's also theoretically possible to have the chart create the Namespace, but I would not bet that helm uninstall the-namespaced-rolebinding will do the right thing, since the order of item removal matters a lot:
# templates/00namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: {{ .Values.theNamespace }}
and then run helm --namespace kube-system ... or any NS other than the real one, since it doesn't yet exist

How to secure kubernetes secrets?

I am trying to prevent Kubernetes secrets from being viewable by any user.
I tried Sealed Secrets, but that only hides secrets for storage in version control.
As soon as I apply the secret, I can see it with the command below:
kubectl get secret mysecret -o yaml
This command still shows the secret in base64-encoded form.
How do I prevent someone from seeing the secret (even in base64 form) with the above simple command?
You can use HashiCorp Vault or kubernetes-external-secrets (https://github.com/godaddy/kubernetes-external-secrets).
Or, if you just want to restrict access, you should create a read-only user and limit what that user can see through a Role and RoleBinding.
Then if anyone tries to describe a secret, they will get an access denied error.
Sample code:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: test-secrets
  namespace: default
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - list
  - watch
  - create
  - delete
- apiGroups:
  - ""
  resources:
  - pods/exec
  verbs:
  - create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: test-secrets
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: test-secrets
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: demo
The above role has no access to secrets. Hence the demo user gets access denied.
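You can confirm the restriction with impersonation (requires impersonation rights on your own account):
$ kubectl auth can-i list pods -n default --as demo
yes
$ kubectl auth can-i get secrets -n default --as demo
no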
There is no way to accomplish this with Kubernetes' built-in tools alone; you will always have to rely on a third-party tool.
HashiCorp's Vault is a very widely used solution, which is very powerful and supports some very nice features, like Dynamic Secrets or Envelope Encryption. But it can also get very complex in terms of configuration, so you need to decide for yourself what kind of solution you need.
I would recommend using Sealed-Secrets. It encrypts your secrets so that you can push the encrypted versions safely to your repository. It does not have such a big feature list, but it does exactly what you described.
You can inject HashiCorp Vault secrets into Kubernetes pods via init containers and keep them up to date with a sidecar container.
More details here: https://www.hashicorp.com/blog/injecting-vault-secrets-into-kubernetes-pods-via-a-sidecar/
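As a rough sketch, the Vault Agent injector is driven by pod annotations along these lines (the role name and secret path are illustrative and depend entirely on your Vault setup):
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  annotations:
    # Enable the injector's mutating webhook for this pod
    vault.hashicorp.com/agent-inject: "true"
    # Vault role to authenticate as (illustrative)
    vault.hashicorp.com/role: "my-app-role"
    # Render this Vault path into /vault/secrets/config (path is illustrative)
    vault.hashicorp.com/agent-inject-secret-config: "secret/data/my-app/config"
spec:
  serviceAccountName: my-app
  containers:
  - name: app
    image: my-app:1.0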

Kubernetes log, User "system:serviceaccount:default:default" cannot get services in the namespace

Forbidden!Configured service account doesn't have access. Service account may have been revoked. User "system:serviceaccount:default:default" cannot get services in the namespace "mycomp-services-process"
For the above issue I created the "mycomp-services-process" namespace and checked again.
But it shows the same kind of message:
Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. User "system:serviceaccount:mycomp-services-process:default" cannot get services in the namespace "mycomp-services-process"
Creating a namespace won't, of course, solve the issue, as that is not the problem at all.
In the first error, the issue is that the default service account in the default namespace cannot get services, because it does not have access to list/get services. So what you need to do is assign a role to that account using a ClusterRoleBinding.
Following the principle of least privilege, you can first create a ClusterRole which has access to list services:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  # Note: ClusterRoles are cluster-scoped, so no namespace is set here.
  name: service-reader
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["services"]
  verbs: ["get", "watch", "list"]
What the above snippet does is create a ClusterRole which can list, get, and watch services. (You will have to create a YAML file and apply the above spec.)
Now we can use this ClusterRole to create a ClusterRoleBinding:
kubectl create clusterrolebinding service-reader-pod \
--clusterrole=service-reader \
--serviceaccount=default:default
In the above command, service-reader-pod is the name of the ClusterRoleBinding, and it assigns the service-reader ClusterRole to the default service account in the default namespace. Similar steps can be followed for the second error you are facing.
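For example, a sketch for the second error, binding the same ClusterRole to the other namespace's default service account:
kubectl create clusterrolebinding service-reader-mycomp \
  --clusterrole=service-reader \
  --serviceaccount=mycomp-services-process:default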
In this case I created a ClusterRole and ClusterRoleBinding, but you might want to create a Role and RoleBinding instead. You can check the documentation in detail here.
This is only for non-prod clusters!
You should bind the service account system:serviceaccount:default:default (which is the default account bound to pods) to the cluster-admin role. Just create a YAML file (named, for example, fabric8-rbac.yaml) with the following contents:
# NOTE: The service account `default:default` already exists in the cluster.
# You can create a new account like this:
#---
#apiVersion: v1
#kind: ServiceAccount
#metadata:
#  name: <new-account-name>
#  namespace: <namespace>
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: fabric8-rbac
subjects:
- kind: ServiceAccount
  # Reference to the `metadata.name` above
  name: default
  # Reference to the `metadata.namespace` above
  namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
Then apply it by running kubectl apply -f fabric8-rbac.yaml.
If you want to unbind it, just run kubectl delete -f fabric8-rbac.yaml.
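You can then verify that the binding took effect; kubectl auth can-i prints yes or no:
$ kubectl auth can-i get services --as system:serviceaccount:default:default -n mycomp-services-process
yes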
Just to add: this can also occur when you are redeploying an existing application to the wrong, but similar, Kubernetes cluster.
Make sure the cluster you're deploying to is the correct one.