Helm install fails with "rendered manifests contain a resource that already exists" (ClusterRole) - kubernetes

I'm creating a Helm chart that can be installed in multiple namespaces. Among other resources it includes a ClusterRole.
Once the chart is installed correctly in one namespace, I try to install it in another one, but it fails, complaining about the already existing ClusterRole:
Error: rendered manifests contain a resource that already exists. Unable to continue with install: existing resource conflict: kind: ClusterRole, namespace: , name: config-reader
helm.go:76: [debug] existing resource conflict: kind: ClusterRole, namespace: , name: config-reader
What's the workaround here? Is there a way to force Helm to ignore these existing resources?

According to the documentation
ClusterRole, by contrast, is a non-namespaced resource.
If you want to define a role within a namespace, use a Role; if you want to define a role cluster-wide, use a ClusterRole.
So you can either use a templated (variable) name for the ClusterRole, or use the lookup function to check whether the resource already exists.
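For instance, a minimal sketch of the templated-name approach, assuming the ClusterRole is defined in the chart's own templates (the rules below are illustrative, not taken from the chart in question):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  # Suffixing the cluster-scoped name with the release namespace gives each
  # install its own ClusterRole, so a second install no longer conflicts.
  name: config-reader-{{ .Release.Namespace }}
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "list", "watch"]
Any ClusterRoleBinding that references it must use the same templated name in its roleRef.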

Related

Deploying helm chart in Kubernetes with Jenkins

I installed my jenkins using this guide:
https://www.jenkins.io/doc/book/installing/kubernetes/#install-jenkins-with-helm-v3
I also created the service account according to the article:
kubectl apply -f jenkins-sa.yaml
My pipeline is in this GitHub repo:
https://github.com/joedayz/node-k8s-cicd/blob/main/Jenkinsfile
But when I execute my pipeline, I get this error:
helm upgrade --install --wait --set 'image.tag=22' node-app-chart ./k8s/node-app-chart
Release "node-app-chart" does not exist. Installing it now.
Error: rendered manifests contain a resource that already exists.
Unable to continue with install: could not get information about the
resource ServiceAccount "node-app-chart" in namespace "jenkins":
serviceaccounts "node-app-chart" is forbidden: User
"system:serviceaccount:jenkins:jenkins" cannot get resource
"serviceaccounts" in API group "" in the namespace "jenkins"
script returned exit code 1
I am using Minikube, Helm v3, Docker Hub, and GitHub. The repo is public.
Any idea?
Thanks in advance.
"could not get information" ... "permission denied". Your error message suggests your jenkins ServiceAccount does not have privileges getting resources within its own namespace.
And given the name of the ServiceAccount jenkins tries to get is some "node-app-something", we should probably assume that beyond getting objects, Jenkins would eventually need to create them as well.
Anyway, you would want to create a RoleBinding, in your jenkins namespace:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jenkins-admin
  namespace: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin
subjects:
# the ServiceAccount from the error: system:serviceaccount:jenkins:jenkins
- kind: ServiceAccount
  name: jenkins
  namespace: jenkins
Using the admin ClusterRole is probably not the best pick. Still, if you are managing ServiceAccounts for your app, your pipeline may eventually need to create RoleBindings as well, granting that ServiceAccount privileges over your cluster. If you need to create RBAC configurations, admin would work for sure.
If you do not want Jenkins to manage privileges delegations within its own namespace, you may instead use the edit ClusterRole.
If you do not want Jenkins to create objects within its own namespace, you could go with the view ClusterRole.
Note that Jenkins may need edit privileges, assuming you did set up the Jenkins Kubernetes cluster integration (and dynamic agent provisioning to run your pipelines).
If you go with admin, maybe you should consider deploying your applications into a separate namespace: don't grant Jenkins admin privileges over its own namespace; rather, grant those privileges over whichever namespace should host your Jenkins-managed applications.
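A minimal sketch of that variant, assuming the applications live in a hypothetical webapps namespace (the namespace name is illustrative):
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jenkins-admin
  # created in the application namespace rather than in jenkins
  namespace: webapps
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin
subjects:
# the Jenkins ServiceAccount still lives in the jenkins namespace
- kind: ServiceAccount
  name: jenkins
  namespace: jenkins
This keeps Jenkins restricted in its own namespace while giving it full control over the namespace that hosts the deployed applications.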
I got it to work!
In values.yaml:
serviceAccount:
  # Specifies whether a service account should be created
  create: false
Don't create the service account, because the chart is trying to create a service account with the name of the chart.
In the Jenkinsfile, change the command to:
sh "helm upgrade --install --namespace jenkins ${HELM_APP_NAME} --set
image.tag=${BUILD_NUMBER} ./${HELM_CHART_DIRECTORY}"
Thanks for your time.

Cannot deploy virtual-server on Minikube

I am just exploring and want to deploy my k8dash with Helm, but I got a weird error, even though I have been able to deploy it on AWS EKS.
I am running on Minikube v1.23.2.
My helm version is v3.6.2
Kubernetes kubectl version is v1.22.3
Basically, if I run helm template, the VirtualServer looks like this:
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: k8dash
  namespace: k8dash
spec:
  host: namahost.com
  routes:
  - action:
      pass: RELEASE-NAME
    path: /
  upstreams:
  - name: RELEASE-NAME
    port: 80
    service: RELEASE-NAME
and I got this error:
Error: unable to build Kubernetes objects from release manifest: unable to recognize "": no matches for kind "VirtualServer" in version "k8s.nginx.org/v1"
It's weird: deploying this one on AWS EKS works just fine, but locally I get this error, and I could not find any clue while Googling. Does it have something to do with my tool versions?
You have to install additional CRDs, as both VirtualServer and VirtualServerRoute are not out-of-the-box Kubernetes kinds but NGINX custom resources.
CustomResourceDefinitions:
The CustomResourceDefinition API resource allows you to define custom
resources. Defining a CRD object creates a new custom resource with a
name and schema that you specify. The Kubernetes API serves and
handles the storage of your custom resource. The name of a CRD object
must be a valid DNS subdomain name.
This frees you from writing your own API server to handle the custom
resource, but the generic nature of the implementation means you have
less flexibility than with API server aggregation.
Nginx Create Custom Resources
Note: By default, it is required to create custom resource definitions
for VirtualServer, VirtualServerRoute, TransportServer and Policy.
Otherwise, the Ingress Controller pods will not become Ready. If you’d
like to disable that requirement, configure -enable-custom-resources
command-line argument to false and skip this section.
Create custom resource definitions for VirtualServer and VirtualServerRoute, TransportServer and Policy resources.
You can find crds under https://github.com/nginxinc/kubernetes-ingress/tree/master/deployments/common/crds:
$ git clone https://github.com/nginxinc/kubernetes-ingress/
$ cd kubernetes-ingress/deployments
$ git checkout v2.0.3   # or the latest tag, as you wish
$ kubectl apply -f common/crds/k8s.nginx.org_virtualservers.yaml
$ kubectl apply -f common/crds/k8s.nginx.org_virtualserverroutes.yaml
$ kubectl apply -f common/crds/k8s.nginx.org_transportservers.yaml
$ kubectl apply -f common/crds/k8s.nginx.org_policies.yaml
After applying them successfully, you will be able to create both VirtualServer and VirtualServerRoute resources.
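As a quick sanity check (a suggestion beyond the original answer), you can confirm the CRDs are registered before re-running the Helm install; the list should include virtualservers.k8s.nginx.org and virtualserverroutes.k8s.nginx.org:
$ kubectl get crd | grep k8s.nginx.org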

install dapr helm chart on a second namespace while already installed on another namespace in same cluster

I am trying to install a second dapr helm chart on namespace "test" while it is already installed on namespace "dev" in same cluster.
helm upgrade -i --namespace $NAMESPACE \
dapr-uat dapr/dapr
The already installed release exists with the following name:
NAME   NAMESPACE   REVISION   UPDATED                                STATUS     CHART        APP VERSION
dapr   dev         1          2021-10-06 21:16:27.244997 +0100 +01   deployed   dapr-1.4.2   1.4.2
I get the following error
Error: rendered manifests contain a resource that already exists. Unable to continue with install: ClusterRole "dapr-operator-admin" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "dapr-uat": current value is "dapr"; annotation validation error: key "meta.helm.sh/release-namespace" must equal "test": current value is "dev"
I tried specifying a different version for the installation, but with no success:
helm upgrade -i --namespace $NAMESPACE \
dapr-uat dapr/dapr \
--version 1.4.0
I am starting to think the current chart does not allow for multiple instances (development and testing) on the same cluster.
Has anyone faced the same issue ?
thank you,
The existing dapr chart applies cluster-wide resources whose names are given with no consideration of the namespace name. So, when trying to install a second configuration, a cluster-wide resource name conflict occurs with the pre-existing cluster-wide resource:
Error: UPGRADE FAILED: rendered manifests contain a resource that already exists. Unable to continue with update: ClusterRole "dapr-operator-admin" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "dapr-uat": current value is "dapr-dev"; annotation validation error: key "meta.helm.sh/release-namespace" must equal "uat": current value is "dev"
I had to edit the chart:
git clone https://github.com/dapr/dapr.git
I edited the RBAC resources in the subchart dapr_rbac so that the resource name now includes the namespace name, in dapr_rbac/templates/ClusterRoleBinding.yaml.
Previous file:
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: dapr-operator
...
The edit consists of changing the metadata name on all resources:
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: dapr-operator-{{ .Release.Namespace }}
...
The same logic has been applied to the MutatingWebhookConfiguration in the subchart dapr_sidecar_injector, in the file dapr_sidecar_injector/templates/dapr_sidecar_injector_webhook_config.yaml.
For full edits, please see forked repo in :
https://github.com/redaER7/dapr/tree/DEV/charts/dapr
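With those edits in place, a rough sketch of installing both environments from the locally edited chart (assuming the chart sources sit under charts/dapr in the clone):
# each release now renders cluster-scoped names suffixed with its own
# namespace, so the two installs no longer collide
helm upgrade -i --namespace dev dapr-dev ./dapr/charts/dapr
helm upgrade -i --namespace uat dapr-uat ./dapr/charts/dapr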

kubed syncing secret to more than one namespace

I have kubed running in Kubernetes for syncing a secret to multiple namespaces.
With
annotations:
  kubed.appscode.com/sync: "cert-manager-tls=dev"
I was able to sync the secret to the dev namespace. Now I want to copy the same secret to more than one namespace. I tried the following:
1.
annotations:
  kubed.appscode.com/sync: "cert-manager-tls=dev,cert-manager-tls=dev2"
2.
annotations:
  kubed.appscode.com/sync: "cert-manager-tls=dev,dev2"
Neither of these worked at all.
3.
annotations:
  kubed.appscode.com/sync: "cert-manager-tls=dev"
  kubed.appscode.com/sync: "cert-manager-tls=dev2"
This worked for namespace dev2, but not for namespace dev.
How can I get this working for two or more namespaces?
You may try kubed.appscode.com/sync: "" according to https://appscode.com/products/kubed/0.6.0-rc.0/guides/config-syncer/intra-cluster/
Say, you are using some Docker private registry. You want to keep its image pull secret synchronized across all namespaces of a Kubernetes cluster. Kubed can do that for you. If a ConfigMap or a Secret has the annotation kubed.appscode.com/sync: "", Kubed will create a copy of that ConfigMap/Secret in all existing namespaces. Kubed will also create this ConfigMap/Secret, when you create a new namespace.
Generally, to replicate the secret to multiple (but not all) namespaces, you would need to add a label to the destination namespaces:
metadata:
  labels:
    cert-manager-tls: dev
So, the label is used by kubed to identify the destination namespaces.
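Applied to the question above, a minimal sketch (reusing the cert-manager-tls=dev pair from the annotation) would be to keep a single sync annotation on the Secret:
metadata:
  annotations:
    kubed.appscode.com/sync: "cert-manager-tls=dev"
and then label every namespace that should receive a copy:
kubectl label namespace dev cert-manager-tls=dev
kubectl label namespace dev2 cert-manager-tls=dev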
You can see examples here:
https://appscode.com/products/kubed/v0.11.0/guides/config-syncer/intra-cluster/#namespace-selector
However, I can see that there is a typo in the explanation. It says to add an annotation; this should be a label (as the following code also shows).
Use case: let's imagine we want to synchronize an image pull secret that is managed in kube-system to other namespaces. (Pull secrets are namespace-specific.)
Option 1 is to sync the secret by default to ALL namespaces. So you need to add this annotation to the secret:
annotations:
  kubed.appscode.com/sync: ""
Option 2 is to sync the secret to one or more (!!) specific namespaces. In this case you need to add a custom value (it is up to you which value you use):
annotations:
  kubed.appscode.com/sync: "pullsecret=bitbucket-dev"
For option 1 you don't need to do anything else on the namespace side, it is simply copied to all of them.
For option 2 you need to label all namespaces where this secret should be available with the key/value pair you defined in the annotation:
metadata:
  labels:
    pullsecret: bitbucket-dev
You can label multiple namespaces with this label. To each of them the secret is copied from kube-system.
Edit: TechnoCowboy is correct. I clarified my answer to avoid any confusion.

ClusterRole exists and cannot be imported into the current release?

I am trying to install the same chart twice in the same cluster, in two different namespaces. However, I am getting this error:
Error: rendered manifests contain a resource that already exists. Unable to continue with install: ClusterRole "nfs-provisioner" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "namespace2": current value is "namespace1"
As I understood it, cluster roles are supposed to be independent of the namespace, so I found this contradictory. We are using Helm 3.
I decided to provide a Community Wiki answer that may help other people facing a similar issue.
I assume you want to install the same chart multiple times but get the following error:
Error: rendered manifests contain a resource that already exists. Unable to continue with install: ClusterRole "<CLUSTERROLE_NAME>" in namespace "" exists and cannot be imported into the current release: ...
First, it's important to decide if we really need ClusterRole instead of Role.
As we can find in the Role and ClusterRole documentation:
If you want to define a role within a namespace, use a Role; if you want to define a role cluster-wide, use a ClusterRole.
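If the permissions are only needed inside each release's own namespace, a minimal sketch of the namespaced alternative could look like this (the name and rules are illustrative):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  # a Role is namespaced, so each release renders its own copy
  name: {{ .Release.Name }}-reader
  namespace: {{ .Release.Namespace }}
rules:
- apiGroups: [""]
  resources: ["configmaps", "persistentvolumeclaims"]
  verbs: ["get", "list", "watch"]
Because a Role is namespaced, every install creates its own copy and there is nothing cluster-wide left to conflict with.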
Second, we can use a variable name for the ClusterRole instead of hard-coding the name in the template.
For example, instead of:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: clusterrole-1
...
Try to use something like:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: {{ .Values.clusterrole.name }}
...
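With a templated name, each install can then pass a distinct value, for example (release and namespace names are illustrative):
helm install release-1 ./clusterrole-demo -n namespace1 --set clusterrole.name=clusterrole-1
helm install release-2 ./clusterrole-demo -n namespace2 --set clusterrole.name=clusterrole-2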
Third, we can use the lookup function and the if control structure to skip creating resources if they already exist.
Take a look at a simple example:
$ cat clusterrole-demo/values.yaml
clusterrole:
  name: clusterrole-1
$ cat clusterrole-demo/templates/clusterrole.yaml
{{- if not (lookup "rbac.authorization.k8s.io/v1" "ClusterRole" "" .Values.clusterrole.name) }}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: {{ .Values.clusterrole.name }}
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - list
  - watch
{{- end }}
In the example above, if ClusterRole clusterrole-1 already exists, it won't be created.
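A quick usage sketch of that guard (release and namespace names are illustrative):
# first install creates clusterrole-1
helm install release-a ./clusterrole-demo -n namespace1
# second install: lookup finds clusterrole-1, so the ClusterRole is skipped
helm install release-b ./clusterrole-demo -n namespace2
Keep in mind that the ClusterRole is then owned only by the first release, so uninstalling that release deletes it for the others, and that lookup returns an empty result when Helm has no cluster connection (for example during helm template).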
A ClusterRole sets permissions across your Kubernetes cluster, not for a particular namespace. I think you are confusing it with a Role. You can find further information about the differences between ClusterRole and Role here: Role and ClusterRole.
A Role always sets permissions within a particular namespace; when you create a Role, you have to specify the namespace it belongs in.
ClusterRole, by contrast, is a non-namespaced resource. The resources have different names (Role and ClusterRole) because a Kubernetes object always has to be either namespaced or not namespaced; it can't be both.