Helm Grafana/Loki chart installation error: rendered manifests contain a resource that already exists

I need to install Grafana Loki with Prometheus in my Kubernetes cluster, and I used Helm to do it. Below is the command I executed:
helm upgrade --install loki grafana/loki-stack \
  --set grafana.enabled=true \
  --set prometheus.enabled=true \
  --set prometheus.alertmanager.persistentVolume.enabled=false \
  --set prometheus.server.persistentVolume.enabled=false \
  --set loki.persistence.enabled=true \
  --set loki.persistence.storageClassName=standard \
  --set loki.persistence.size=5Gi \
  -n monitoring --create-namespace
I followed the official Grafana documentation for this. But when I execute the above Helm command, I get the error below. (I'm new to Helm.)
Release "loki" does not exist. Installing it now.
W0307 16:54:55.764184 1474330 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Error: rendered manifests contain a resource that already exists. Unable to continue with install: PodSecurityPolicy "loki-grafana" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "loki": current value is "loki-grafana"
I don't see any Grafana chart installed.
helm list -A
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
cert-manager cert-manager 1 2021-11-26 13:07:26.103036078 +0000 UTC deployed cert-manager-v0.16.1 v0.16.1
ingress-nginx ingress-basic 1 2021-11-18 12:23:28.476712359 +0000 UTC deployed ingress-nginx-4.0.8 1.0.5

Well, I was able to get past my issue. The problem was the PodSecurityPolicy: I deleted the existing Grafana PodSecurityPolicy and the install worked.
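For reference, that cleanup looks like this (the name loki-grafana comes from the error message above; PodSecurityPolicies are cluster-scoped, so there is no namespace to pass):

# Inspect the conflicting resource and its Helm ownership metadata
kubectl get podsecuritypolicy loki-grafana -o yaml

# Delete it so the new release can create its own copy
kubectl delete podsecuritypolicy loki-grafana

Then re-run the helm upgrade --install command above.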

To list releases across all namespaces, use the --all-namespaces flag with helm ls.
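For example:

helm ls --all-namespaces
# or the short form
helm ls -A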

Problem is here:
rendered manifests contain a resource that already exists. Unable to continue with install: PodSecurityPolicy "loki-grafana" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "loki": current value is "loki-grafana"
Deleting the PodSecurityPolicy is one solution, but a better option is to change the annotation meta.helm.sh/release-name from loki-grafana to loki so Helm can adopt the existing resource.
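A sketch of that fix with kubectl (note: when adopting an existing resource, Helm 3 also checks the meta.helm.sh/release-namespace annotation and the app.kubernetes.io/managed-by label, so you may need to set all three):

kubectl annotate podsecuritypolicy loki-grafana meta.helm.sh/release-name=loki --overwrite
kubectl annotate podsecuritypolicy loki-grafana meta.helm.sh/release-namespace=monitoring --overwrite
kubectl label podsecuritypolicy loki-grafana app.kubernetes.io/managed-by=Helm --overwrite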
Additionally, I can see you are using a deprecated API:
policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
To address it, see the Kubernetes deprecated API migration guide:
The policy/v1beta1 API version of PodDisruptionBudget will no longer be served in v1.25.
Migrate manifests and API clients to use the policy/v1 API version, available since v1.21.
All existing persisted objects are accessible via the new API
Notable changes in policy/v1:
- an empty spec.selector ({}) written to a policy/v1 PodDisruptionBudget selects all pods in the namespace (in policy/v1beta1 an empty spec.selector selected no pods). An unset spec.selector selects no pods in either API version.
PodSecurityPolicy
PodSecurityPolicy in the policy/v1beta1 API version will no longer be served in v1.25, and the PodSecurityPolicy admission controller will be removed.
PodSecurityPolicy replacements are still under discussion, but current use can be migrated to 3rd-party admission webhooks now.
See also this documentation for more information about Dynamic Admission Control.
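As a quick check before migrating, you can list the PodSecurityPolicies that currently exist in your cluster:

kubectl get podsecuritypolicies
# short name
kubectl get psp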

Related

NGINX Controller Upgrade Using Helm

I installed NGINX Controller 2 years ago using Helm 2 in our AKS clusters, and it pulled the image from quay.io at the time:
quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.27.0
We are now looking to upgrade our NGINX ingress controllers, and in our new clusters I see the image repo is gcr.io:
k8s.gcr.io/ingress-nginx/controller:v1.20.0@sha256:8xxxxxxxxxxxxxxxxxxxxxxxxxxxx3
In our old cluster (with the controller from quay.io), I ran the following command using Helm 3 to upgrade the Kubernetes NGINX controller, to no avail:
helm upgrade awesome-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx -f nginx-reuse-values-file.yaml
Error: UPGRADE FAILED: current release manifest contains removed kubernetes api(s) for this kubernetes version and it is therefore unable to build the kubernetes objects for performing the diff. error from kubernetes: unable to recognize "": no matches for kind "Deployment" in version "extensions/v1beta1"
The K8s version is 1.20.9.
The current quay.io NGINX ingress controller manifest shows the following version:
apiVersion: apps/v1
Well, I figured it out:
https://github.com/helm/helm-mapkubeapis
The Helm mapkubeapis plugin for the win. As the error message in my original post indicates, I had to update the deprecated APIs recorded in the release. After mapping them to the APIs supported by my K8s version, helm upgrade ran successfully.
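A sketch of the plugin workflow, using the release name and namespace from the command above:

helm plugin install https://github.com/helm/helm-mapkubeapis

# Rewrite the stored release manifest, mapping removed APIs
# (e.g. extensions/v1beta1 Deployment -> apps/v1) to supported ones
helm mapkubeapis awesome-nginx --namespace ingress-nginx

# The upgrade can now build the diff against valid APIs
helm upgrade awesome-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx -f nginx-reuse-values-file.yaml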

How to install istio mutating webhook and istiod first ahead of other pods in Helm?

I am trying to use Helm 3 to install Kubeflow 1.3 with Istio 1.9 on Kubernetes 1.16. Kubeflow does not provide an official Helm chart, so I wrote one myself.
But Helm does not guarantee order: pods of other Deployments and StatefulSets can come up before the Istio mutating webhook and istiod are up. For example, if pod A comes up earlier without an istio-proxy sidecar and pod B comes up later with one, they cannot communicate with each other.
Are there any simple best practices so this works as expected each time I deploy? That is to say, how can I make sure my installation with Helm is atomic?
Thank you in advance.
UPDATE:
I tried three approaches:
- marking resources as pre-install, post-install, etc. hooks
- using subcharts
- decoupling the one chart into several charts
I adopted the third; a sketch follows this paragraph. The issue with the first is that Helm hooks are designed for Jobs: a resource can be marked as a hook, but it is then not deleted by helm uninstall, and a resource cannot carry two hooks at the same time (the annotation keys conflict). The issue with the second is that Helm installs subcharts and the parent chart at the same time, and calls the hooks of subcharts and the parent chart at the same time as well.
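A sketch of the decoupled installs (the chart paths and release names here are hypothetical; --wait makes Helm block until a chart's resources are ready before you start the next install):

# Step 1: install Istio (istiod and the mutating webhook) and wait for readiness
helm upgrade --install istio ./charts/istio -n istio-system --create-namespace --wait --timeout 10m

# Step 2: only then install the application chart, so its pods are
# created after the sidecar injector is serving
helm upgrade --install kubeflow ./charts/kubeflow -n kubeflow --create-namespace --wait --timeout 10m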
Helm does not guarantee order.
Not completely. Helm collects all of the resources in a given chart and its dependencies, groups them by resource type, and then installs them in the following order:
Namespace
NetworkPolicy
ResourceQuota
LimitRange
PodSecurityPolicy
PodDisruptionBudget
ServiceAccount
Secret
SecretList
ConfigMap
StorageClass
PersistentVolume
PersistentVolumeClaim
CustomResourceDefinition
ClusterRole
ClusterRoleList
ClusterRoleBinding
ClusterRoleBindingList
Role
RoleList
RoleBinding
RoleBindingList
Service
DaemonSet
Pod
ReplicationController
ReplicaSet
Deployment
HorizontalPodAutoscaler
StatefulSet
Job
CronJob
Ingress
APIService
Additionally:
That is to say, make sure my installation with Helm is atomic
you should know that:
Helm does not wait until all of the resources are running before it exits.
You generally have no control over the order when using Helm. You can use init containers to check that a pod's dependencies are available before its main containers start (see the sketch below). Another workaround is to add a health/readiness check so a pod keeps restarting until everything it depends on is up.
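A minimal sketch of the init-container approach (the names and the DNS check are assumptions; it only verifies that the istiod Service exists, not that the webhook is fully serving):

apiVersion: v1
kind: Pod
metadata:
  name: example-app
spec:
  initContainers:
    # Block the main container until the istiod Service resolves in cluster DNS
    - name: wait-for-istiod
      image: busybox:1.36
      command: ['sh', '-c', 'until nslookup istiod.istio-system.svc.cluster.local; do echo waiting for istiod; sleep 5; done']
  containers:
    - name: app
      image: nginx:1.25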
See also:
- this article about checking your Helm deployments
- the question Helm Subchart order of execution in an umbrella chart, which has a good explanation
- this related question
- this related topic on GitHub

How do I tell helm to create its internal secrets in a namespace

When trying to run helm install to deploy an application to a private K8S cluster, I get the following error:
helm install myapp ./myapp
Error: create: failed to create: secrets is forbidden: User "u-user1"
cannot create resource "secrets" in API group "" in the namespace "default"
exit status 1
I know that this is happening because Helm creates secrets behind the scenes to hold information it needs for managing the deployment. See Handling Secrets:
As of Helm v3, the release definition is stored as a Kubernetes Secret resource by default, as opposed to a ConfigMap.
The problem is that helm is trying to create the secrets in the default namespace, and I'm working in a private cloud and not allowed to create resources in the default namespace.
How can I tell helm to use a namespace when creating the internal secrets that it needs to use?
Searching for a solution
A search on the helm site found:
https://helm.sh/docs/faq/, which says:
In Helm 3, information about a particular release is now stored in the same namespace as the release itself
But I've set the deployment to be in the desired namespace. My myapp/templates/deployment.yaml file has:
apiVersion: apps/v1
kind: Deployment
metadata:
name: myapp
namespace: myapp-namespace
So I'm not sure how to tell Helm to create its internal secrets in this myapp-namespace.
Other Searches
Helm Charts create secrets in different namespace - asks a different question, about how to create user-defined secrets in different namespaces.
Helm upgrade is creating multiple secrets - Different question, and no answer (yet).
Secret management in Helm Charts - is asking a different question.
Update 1)
When searching for a solution I tried adding the --namespace myapp-namespace argument to the helm install command (see below).
helm install --namespace myapp-namespace myapp ./myapp
Error: create: failed to create: secrets is forbidden: User "u-user1"
cannot create resource "secrets" in API group "" in the namespace "myapp-namespace"
exit status 1
Notice that the namespace is now myapp-namespace, so I believe that helm is now creating the internal secrets in my desired namespace, so I think this answers my original question.
I think I now have a permissions issue that I need to ask the K8S admins to address.
You must use the --namespace option to tell helm install which namespace you are using. The syntax you specified is correct.
helm install --namespace myapp-namespace myapp ./myapp
You could also put --namespace at the end of the command as below:
helm install myapp ./myapp --namespace myapp-namespace
With this syntax, helm will create the internal secrets in the namespace you've specified.
Doing this will prevent the default namespace from being polluted.
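If the target namespace might not exist yet, Helm 3.2+ can create it for you as part of the install (the same --create-namespace flag used in the Loki command at the top of this page):

helm install myapp ./myapp --namespace myapp-namespace --create-namespace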
You then need to pass the namespace (or query all of them) to see the install:
helm list --namespace myapp-namespace
helm list --all-namespaces
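You can also confirm where Helm stored the release data by looking for its internal Secret directly; in Helm 3 these are named sh.helm.release.v1.&lt;release&gt;.v&lt;revision&gt; and carry an owner=helm label (output shown is illustrative):

kubectl get secrets -n myapp-namespace -l owner=helm
# NAME                          TYPE                 DATA   AGE
# sh.helm.release.v1.myapp.v1   helm.sh/release.v1   1      1m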

Failed to create NodePort error after deploying ingress

I have an Ingress defined as in this screenshot:
(screenshot of the Ingress manifest, not reproduced here)
The 2 replicas of the Ingress server are not spinning up due to the Failed to create NodePort error. Please advise.
It's just like the error says: you are missing the NodePortPods CRD. It looks like that CRD existed at some point in time, but I don't see it in the repo anymore. You didn't specify how you deployed the ingress operator, but you can make sure you install the latest version:
helm repo add appscode https://charts.appscode.com/stable/
helm repo update
helm search repo appscode/voyager --version v13.0.0
# Generate the template to check, or use helm install
helm template voyager-operator appscode/voyager --version v13.0.0 --namespace kube-system --no-hooks --set cloudProvider=baremetal  # use the right cloud provider here
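Once the operator is installed, you can verify that its CRDs were registered (grepping for the API group name, which for Voyager is voyager.appscode.com):

kubectl get crds | grep voyager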

cannot helm install rabbitmq servers (helm 2.16.9): namespaces "rabbit" is forbidden

helm install --name my-rabbitserver stable/rabbitmq --namespace rabbit
Error: release my-rabbitserver failed: namespaces "rabbit" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "namespaces" in API group "" in the namespace "rabbit"
I have tried with (and without) a rabbit namespace created before the install attempt.
I am using Helm 2.16.9, so I need to qualify the name of my installation with --name.
I am running this against a Google Cloud Kubernetes cluster.
It looks as though the Helm Tiller pod did not have sufficient privileges.
I found this similar issue:
https://support.sumologic.com/hc/en-us/articles/360037704393-Kubernetes-Helm-install-fails-with-Error-namespaces-sumologic-is-forbidden-User-system-serviceaccount-kube-system-default-cannot-get-resource-namespaces-in-API-group-in-the-namespace-sumologic-
Basically, I had to stop the Tiller deployment, set up a Tiller ServiceAccount YAML that gives Tiller access from kube-system, apply it, and then execute helm init again with the new service account.
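The commonly used commands for that setup look like this (note: cluster-admin is very broad; scope the role down where possible):

# Create a service account for Tiller and bind it to cluster-admin
kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller

# Re-initialize Helm 2 so Tiller runs under that account
helm init --service-account tiller --upgrade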
The Helm RabbitMQ installs then appear to work as advertised.
I thought Helm was supposed to make life easier, but it still has its own limitations and needs additional YAML to get it working as advertised.