Installing a Helm chart under a custom namespace - Kubernetes

I am trying to install some Helm charts into custom namespaces on a MicroK8s cluster, but I get the following errors. What am I doing wrong? The cluster is fresh, and none of these namespaces or resources exist.
> helm install loki grafana/loki-stack -f values/loki-stack.yaml --namespace loki --create-namespace
W0902 08:08:35.878632 1610435 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Error: rendered manifests contain a resource that already exists. Unable to continue with install:
PodSecurityPolicy "loki-grafana" in namespace "" exists and cannot be imported into the current release:
invalid ownership metadata; annotation validation error:
key "meta.helm.sh/release-namespace" must equal "loki": current value is "default"
> helm install traefik traefik/traefik -f values/traefik.yml --namespace traefik --create-namespace
Error: rendered manifests contain a resource that already exists. Unable to continue with install:
ClusterRole "traefik" in namespace "" exists and cannot be imported into the current release:
invalid ownership metadata; annotation validation error:
key "meta.helm.sh/release-namespace" must equal "traefik": current value is "default"

The resources rendered by the chart already exist and are owned by a release in a different namespace: the meta.helm.sh/release-namespace annotation on them is "default", so a previous install in the default namespace created them.
Please check with helm list -n default (or helm list --all-namespaces).
So before you install, you need to uninstall the old release first:
helm uninstall loki -n default
helm install loki grafana/loki-stack -f values/loki-stack.yaml --namespace loki --create-namespace
Or you can try to upgrade it in place in its existing namespace:
helm upgrade loki grafana/loki-stack -f values/loki-stack.yaml --namespace default
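If helm list shows no release anywhere, the cluster-scoped objects are likely orphaned leftovers. A sketch of cleaning them up manually, using the resource names from the errors above (verify what actually exists before deleting anything):
helm list --all-namespaces
kubectl delete podsecuritypolicy loki-grafana
kubectl delete clusterrole traefik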

Related

Ways to change the annotation 'meta.helm.sh/release-name'

I am using Helm to deploy a namespace resource, but each time Helm automatically adds the annotation:
meta.helm.sh/release-name=my-pr-111
which creates trouble for subsequent Helm deployments using the same namespace.
For example, I first deployed the namespace using the command:
helm upgrade my-pr-111 ./path/helm/chart --install --atomic
The namespace is then deployed with the annotation:
meta.helm.sh/release-name=my-pr-111
Then, if I do another deployment for the same namespace using the command:
helm upgrade my-pr-222 ./path/helm/chart --install --atomic
I get the following error:
Unable to continue with install: Namespace "my-space" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "my-pr-222": current value is "my-pr-111"
Does anyone know how to handle this situation?
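Since the ownership is tracked through annotations, one possible workaround (a sketch, assuming the namespace is named my-space as in the error) is to re-point or remove the annotation with kubectl before the second deployment; try it on a test cluster first:
kubectl annotate namespace my-space meta.helm.sh/release-name=my-pr-222 --overwrite
# or remove the annotation entirely (the trailing dash deletes it):
kubectl annotate namespace my-space meta.helm.sh/release-name-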

helm upgrade mongodb fails with error "unable to build kubernetes objects"

During project initialization I run the command:
helm upgrade mongodb mongodb/mongodb --install --set replicaSet.enabled=true
which fails with the error:
Release "mongodb" does not exist. Installing it now.
Error: unable to build kubernetes objects from release manifest: [resource mapping not found for name: "mongodb-arbiter" namespace: "" from "": no matches for kind "PodDisruptionBudget" in version "policy/v1beta1"
ensure CRDs are installed first, resource mapping not found for name: "mongodb-secondary" namespace: "" from "": no matches for kind "PodDisruptionBudget" in version "policy/v1beta1"
ensure CRDs are installed first]
Can you please suggest what to do?
The version of Kubernetes you are using is too new for your Helm chart.
In recent Kubernetes versions, PodDisruptionBudget lives in policy/v1, not in policy/v1beta1 where your chart is looking for it (policy/v1beta1 was removed in v1.25).
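One likely fix, sketched under the assumption that the chart maintainers have since published a release that renders policy/v1: refresh the repo and install the newest chart version.
helm repo update
helm upgrade mongodb mongodb/mongodb --install --set replicaSet.enabled=true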

Install the dapr Helm chart in a second namespace while it is already installed in another namespace in the same cluster

I am trying to install a second dapr Helm chart in namespace "test" while it is already installed in namespace "dev" in the same cluster.
helm upgrade -i --namespace $NAMESPACE \
dapr-uat dapr/dapr
An already-installed release exists with the following name:
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
dapr dev 1 2021-10-06 21:16:27.244997 +0100 +01 deployed dapr-1.4.2 1.4.2
I get the following error
Error: rendered manifests contain a resource that already exists. Unable to continue with install: ClusterRole "dapr-operator-admin" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "dapr-uat": current value is "dapr"; annotation validation error: key "meta.helm.sh/release-namespace" must equal "test": current value is "dev"
I tried specifying a different version for the installation, but with no success:
helm upgrade -i --namespace $NAMESPACE \
dapr-uat dapr/dapr \
--version 1.4.0
I am starting to think the current chart does not allow for multiple instances (development and testing) on the same cluster.
Has anyone faced the same issue?
Thank you.
The existing dapr chart creates cluster-wide resources whose names are generated without taking the namespace into account. So, when you try to install a second instance, a name conflict occurs with the pre-existing cluster-wide resource:
Error: UPGRADE FAILED: rendered manifests contain a resource that already exists. Unable to continue with update: ClusterRole "dapr-operator-admin" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "dapr-uat": current value is "dapr-dev"; annotation validation error: key "meta.helm.sh/release-namespace" must equal "uat": current value is "dev"
I had to edit the chart:
git clone https://github.com/dapr/dapr.git
I edited the RBAC resources in the dapr_rbac subchart so that resource names now include the namespace, in dapr_rbac/templates/ClusterRoleBinding.yaml.
Previous file:
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: dapr-operator
...
The edit changes the metadata name on all resources:
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: dapr-operator-{{ .Release.Namespace }}
...
The same logic was applied to the MutatingWebhookConfiguration in the dapr_sidecar_injector subchart, in dapr_sidecar_injector/templates/dapr_sidecar_injector_webhook_config.yaml.
For the full edits, please see the forked repo at:
https://github.com/redaER7/dapr/tree/DEV/charts/dapr
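After the edits, the install can be run from the local chart. A sketch, assuming the chart path from the forked repo layout above and the "test" namespace from the question:
helm upgrade -i --namespace test dapr-uat ./dapr/charts/dapr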

Helm: Conditional deployment of a dependent chart, only install if it has not been installed before

When installing a Helm chart, a condition for a dependency works well, as in the following Chart.yaml file. But it does not allow applying the condition based on an existing Kubernetes resource.
# Chart.yaml
apiVersion: v1
name: my-chart
version: 0.3.1
appVersion: 0.4.5
description: A helm chart with dependency
dependencies:
  - name: metrics-server
    version: 2.5.0
    repository: https://artifacts.myserver.com/v1/helm
    condition: metrics-server.enabled
I did a local install of the chart (my-chart) in one namespace (default); when I then try to install the same chart in another namespace (pb), I get the following error, which says the resource already exists. This resource, "system:metrics-server-aggregated-reader", was installed cluster-wide by the dependency (metrics-server) during the first install. Here are the steps to reproduce:
user@hostname$ helm install my-chart -n default --set metrics-server.enabled=true ./my-chart
NAME: my-chart
LAST DEPLOYED: Wed Nov 25 16:22:52 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
My Cluster
user@hostname$ helm install my-chart -n pb --set metrics-server.enabled=true ./my-chart
Error: rendered manifests contain a resource that already exists. Unable to continue with install: ClusterRole "system:metrics-server-aggregated-reader" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "pb": current value is "default"
There is a way to modify the templates inside the metrics-server chart to conditionally generate the manifest files, as described in Helm Conditional Templates. But to do this I would have to modify and maintain the metrics-server chart in an internal artifact repository, which would keep me from using the most recent charts.
I am looking for an approach that queries the existing Kubernetes resource, "system:metrics-server-aggregated-reader", and only installs the dependency chart if that resource does not exist.
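Chart.yaml conditions can only be driven by values, not by cluster state (Helm 3 templates have a lookup function, but it is not available in Chart.yaml conditions). One workaround, sketched here and not a built-in Helm feature, is to probe the cluster with kubectl first and set the condition value accordingly:
# Enable the metrics-server dependency only if its ClusterRole is absent.
if kubectl get clusterrole system:metrics-server-aggregated-reader >/dev/null 2>&1; then
  helm install my-chart -n pb --set metrics-server.enabled=false ./my-chart
else
  helm install my-chart -n pb --set metrics-server.enabled=true ./my-chart
fi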

Deploying a Helm release forcefully when same-name Deployments, Services, etc. are running in the same namespace

How do I deploy a Helm release for the first time when a Deployment, Service, etc. with the same name is already running?
Is there any way to import the running config that is not currently managed by Helm?
Or is deleting the same-name objects the only way to deploy the Helm release the first time? (I don't want to change the release names because that would break communication between the microservices.)
Deleting the objects would cause downtime, and I want to avoid that.
The error I get while deploying with the same name:
Error: rendered manifests contain a resource that already exists. Unable to continue with install: Service "abc" in namespace "default" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "abc"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "default"
Is there any other approach?
Thanks
Addressing the error message and part of the question:
How to deploy the helm release for the first time when there's already the deployment, svc, etc. running with the same name.
You can't deploy resources with Helm that weren't created by Helm; it will give you exactly the message you've encountered. However, you can annotate and label the existing resources that were not added by Helm to "import" them and let Helm act on them. Please try it in a test environment first, as it could redeploy some resources.
There is already a similar answer on how to annotate resources:
Stackoverflow.com: Answers: Use Helm 3 for existing resources deployed with kubectl
Also see this Helm 3 feature, "Adopt resources into release with correct instance and managed-by labels":
Helm will no longer error when attempting to create a resource that already exists in the target cluster if the existing resource has the correct meta.helm.sh/release-name and meta.helm.sh/release-namespace annotations, and matches the label selector app.kubernetes.io/managed-by=Helm. This facilitates zero-downtime migrations to Helm 3 for managing existing deployments, and allows Helm to "adopt" existing resources that it previously created.
In order to allow an existing resource to be adopted by Helm, add release metadata and the managed-by label:
KIND=deployment
NAME=my-app-staging
RELEASE=staging
NAMESPACE=default
kubectl annotate $KIND $NAME meta.helm.sh/release-name=$RELEASE
kubectl annotate $KIND $NAME meta.helm.sh/release-namespace=$NAMESPACE
kubectl label $KIND $NAME app.kubernetes.io/managed-by=Helm
Assume the following situation:
A Deployment created outside of Helm (example below).
A Helm chart with the equivalent templated Deployment in templates/ (example below).
Creating the Deployment below without Helm:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
Assuming the above file is used with kubectl apply and also resides (templated) in the templates/ directory of your chart, you will get the following error when you try to run $ helm install release_name .:
Error: rendered manifests contain a resource that already exists. Unable to continue with install: Deployment "nginx" in namespace "default" exists and cannot be imported into the current release: ...
By running the script mentioned in the answer I linked, you can annotate and label your resources so that Helm does not produce the error message above.
After that you can run $ helm install release_name . and provision your resources with the desired changes.
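For the nginx example above, the adoption commands would look like this (a sketch; release_name and the default namespace are taken from the error message above):
kubectl annotate deployment nginx meta.helm.sh/release-name=release_name
kubectl annotate deployment nginx meta.helm.sh/release-namespace=default
kubectl label deployment nginx app.kubernetes.io/managed-by=Helm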
Additional resources:
Jacky-jiang.medium.com: Import existing resources in Helm3
A nice one-liner to annotate all resources in a Helm release so they are adopted by the new release:
x=`mktemp` && helm -n $NAMESPACE get manifest $RELEASE >$x && kubectl annotate -f $x --overwrite "meta.helm.sh/release-name"=$NEW_RELEASE && rm -rf "$x"
Or, if you also moved the release to a new namespace:
x=`mktemp` && helm -n $NAMESPACE get manifest $RELEASE >$x && kubectl annotate -f $x --overwrite "meta.helm.sh/release-name"=$NEW_RELEASE "meta.helm.sh/release-namespace"=$NEW_NAMESPACE && rm -rf "$x"
A more common approach is to use the combination of the two labels:
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
As can be seen in various Helm chart providers (for example the Bitnami charts, ExternalDNS, the NGINX Ingress Controller, and more).
Read more in the Kubernetes Recommended Labels and the Helm standard labels sections.
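For illustration, a minimal sketch of how these two labels typically appear in a chart template's metadata (the template file and the my-app suffix are hypothetical):
# templates/deployment.yaml
metadata:
  name: {{ .Release.Name }}-my-app
  labels:
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}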