Helm installation throws an error for cluster-wide objects - kubernetes

I have a problem with a Helm chart that I would like to use to deploy multiple instances of my app in many namespaces. Let's say they would be ns1, ns2 and ns3.
I run helm upgrade with the --install option and it goes well for ns1, but when I want to run it a second time for ns2 I get an error:
Error: rendered manifests contain a resource that already exists. Unable to continue with install: PodSecurityPolicy "my-psp" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "ns2": current value is "ns1"
I found many topics about this problem, but every time the only answer is to delete the old object and install it again with Helm. I don't want to do that - I would like to get two or more instances of my app that use k8s objects common to many namespaces.
What can I do in this situation? I know I could change the names of those objects with every deployment, but that would be really messy. My second idea is to move those objects into another chart and deploy it just once, but sadly that would be a ton of work, so I would like to avoid it. Is it possible to ignore this error somehow and still make this work?

Found the solution. The easiest way is to add a lookup block to your templates:
{{- if not (lookup "policy/v1beta1" "PodSecurityPolicy" "" "my-psp") }}
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: my-psp
  ...
{{- end }}
With this config the object will only be created if an object with the same name does not already exist.
It may not be a perfect solution, but if you know what you're doing it can save you a lot of time.

Related

How to run multiple IngressController with same IngressClass?

Is it possible to run multiple IngressControllers in the same Namespace with the same IngressClass?
I have multiple IngressControllers with different LoadBalancer IP addresses and would like to continue with this setup.
I upgraded the first IngressController to the latest version.
Updating the second/third/.. IngressController fails because of:
rendered manifests contain a resource that already exists. Unable to continue with update: IngressClass "nginx" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "nginx-ingress-lb-02": current value is "nginx-ingress-lb-01"
Any idea how to fix this?
The issue you mention here is mainly with Helm preventing you from overwriting resources - your IngressClass - that belong to another Helm release.
One way to work around this may be to use Helm's --dry-run option: once you have the list of objects written to a file, remove the IngressClass, then apply that file.
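A rough sketch of that approach, here using helm template, which renders only the manifests (similar to --dry-run, but without the release header, so the output can be applied directly); the repo, release, and namespace names are placeholders:
# Render the chart's manifests to a file without installing anything
helm template nginx-ingress-lb-02 ingress-nginx/ingress-nginx --namespace ingress-02 > rendered.yaml
# Manually remove the IngressClass document from rendered.yaml, then apply the rest
kubectl apply --namespace ingress-02 -f rendered.yaml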
Another way may be to patch the chart deploying your controller. As a contributor to the Traefik Helm chart, I know that we install IngressClasses named after the Traefik deployment we operate. The chart you're using for Nginx apparently does not implement support for that scenario, which doesn't mean it can't work.
Now, answering your first question - is it possible to run multiple IngressControllers in the same Namespace with the same IngressClass: yes.
You may have several Ingress Controllers, one watching for Ingresses in namespace A and another in namespace B, both Ingresses referencing the same class. Deploying those controllers into the same namespace is possible, although isolating your controllers into their own namespaces (and implementing NetworkPolicies) would help in distinguishing who's who.
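As an illustration of the namespace-scoped setup, a hypothetical values override could look like the sketch below. It assumes your controller chart exposes a scope option (the ingress-nginx chart has controller.scope); check your chart's values.yaml before relying on these keys:
controller:
  scope:
    enabled: true        # watch Ingresses only in the namespace below
    namespace: team-a    # example namespace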
An option that works for me, when deploying multiple ingress controllers with Helm, is setting controller.ingressClassResource.enabled: false in every Helm deployment except the first.
The comments in the default values.yaml aren't entirely clear on this, but after studying the chart I found that controller.ingressClassResource.enabled is only evaluated to determine whether or not to create the IngressClass, not whether or not to attach the controller.ingressClassResource.controllerValue to the controller. (This is true at least for helm-chart-4.0.13).
So, for the first Helm deployment, if you don't override any of the default controller.ingressClassResource settings, the following values will be used to create an IngressClass and attach the controllerValue to the controller:
controller:
  ingressClassResource:
    name: nginx
    enabled: true
    default: false
    controllerValue: "k8s.io/ingress-nginx"
For all other controllers that you want to run with the same IngressClass, simply set controller.ingressClassResource.enabled: false, to prevent Helm from trying (and failing) to create the IngressClass again.
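For instance, the values override for the second and later releases could be as small as this (a sketch, assuming the same ingress-nginx chart):
controller:
  ingressClassResource:
    enabled: false   # reuse the IngressClass created by the first release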

Rendered manifests contain a resource that already exists. Could not get information about the resource: resource name may not be empty

I installed Helm 3 on my Windows laptop, where I have kubeconfig configured as well. But when I try to install my local Helm chart, I'm getting the error below.
Error: rendered manifests contain a resource that already exists. Unable to continue with install: could not get information about the resource: resource name may not be empty
I tried helm ls --all --all-namespaces but I don't see anything. Please help me!
I think you have to check whether you have left any resource without a - name: field.
I had the same issue. In values.yaml I had
name:
and in deployment.yaml I tried to access this "name" via {{ .Values.name }}. I found out that {{ .Values.name }} didn't work for me at all. I had to use the built-in object {{ .Chart.Name }} in deployment.yaml instead. Ref: https://helm.sh/docs/chart_template_guide/builtin_objects/
If you want to access the "name", you can put it into values.yaml, for example like this:
something:
  name:
and then access it from deployment.yaml (for example) like {{ .Values.something.name }}
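Putting the two pieces together, a minimal sketch might look like this (the value my-app and the metadata field are just examples):
# values.yaml
something:
  name: my-app

# deployment.yaml (excerpt)
metadata:
  name: {{ .Values.something.name }}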
Had the same error message. I solved the problem by running helm lint on the folder of a dependency chart that I had just added. That pointed me to a bad assignment of values.
Beware: helm lint on the parent folder didn't highlight any problem in the dependency folders.
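In practice that means linting each dependency chart's folder explicitly (charts/<dependency> below is a placeholder for your sub-chart's directory):
helm lint .                     # may pass even though a dependency is broken
helm lint charts/<dependency>   # lints the sub-chart on its own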
I suppose there is already the same resource existing in the namespace where you are trying to install, or your Helm chart is trying to create the same resource twice.
Try creating a new namespace and running helm install there; if you still face the issue, then there is definitely some issue with your helm install.
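For example (namespace, release, and chart names are placeholders):
kubectl create namespace helm-debug
helm install my-release ./my-chart --namespace helm-debug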
I faced the same error; the fix was to correct the sub-chart name in the values.yaml file of the main chart.
Your best bet would be to run helm template . in the chart directory and verify that the name and namespace fields are not empty. This was the case for me, at least.
Most likely, one of the deployments you removed left behind a ClusterRole.
Check if you have one with kubectl get clusterrole
Once you find it, you can delete it with kubectl delete clusterrole <clusterrolename>

Helm sub-chart used by multiple instances of the parent

I have followed the Helm Subchart documentation to create a parent chart with a sub-chart.
When I install the parent chart, the sub-chart comes along for the ride. Great!
However, if I try to install another instance of the parent chart with a different name, I get an error saying the sub-chart already exists.
Error: rendered manifests contain a resource that already exists. Unable to continue with install: existing resource conflict: kind: Service, namespace: my-namespace, name: my-subcharts-service
I had expected the sub-chart to be installed if it did not already exist, and if it did exist I expected everything to be OK, i.e. I thought it would work like a package management system such as pip/yum/npm.
Is there a way to share a sub chart (or other helm construct) between multiple instances of a parent chart?

Error: selector does not match template labels

My task is to add a label named "app" to all deployments, daemonsets, and cronjobs so that it's easier to query our apps across the stack in our monitoring tools. This way, we can build dashboards that use a single selector, namely app.
To avoid downtime I've decided to resolve this issue in the following steps:
Add labels to dev, test & stage environments.
Add labels to prod env's.
Deploy (1)
Deploy (2)
Delete old labels & update the services of dev to use the new labels. Then test & deploy. (currently on this step)
Repeat (5) for stage.
Repeat (5) for prod.
When using $ kubectl apply to update the resources to which I've added the "app" label (or replaced the "service" label with the "app" label), I run into the following error:
Error from server (Invalid): error when applying patch:
{longAssPatchWhichIWon'tIncludeButYaGetThePoint} to: &{0xc421b02f00
0xc420803650 default provisioning
manifests/prod/provisioning-deployment.yaml 0xc42000c6f8 3942200
false} for: "manifests/prod/provisioning-deployment.yaml":
Deployment.apps "provisioning" is invalid: spec.template.metadata.labels:
Invalid value: map[string]string{"app":"provisioning", "component":"marketplace"}:
selector does not match template labels
I need some insights on why it's throwing this error.
It seems you are in trouble. Check this section: Label selector updates
Note: In API version apps/v1, a Deployment’s label selector is immutable after it gets created.
So, this line says you cannot update the selector once the Deployment is created. The selector cannot be changed in any API version except apps/v1beta1 and extensions/v1beta1. Ref: TestDeploymentSelectorImmutability.
One possible workaround might be to keep the old labels and add the new labels alongside them. This way, you don't have to update the selector: the Deployment will keep selecting pods using the old labels, but your dashboard can select using the new ones. This might not meet your requirement, but I don't see any better way.
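A sketch of what that can look like, assuming the old selector label was service (label names and values here are examples, not taken from the question):
spec:
  selector:
    matchLabels:
      service: provisioning       # old, immutable selector stays untouched
  template:
    metadata:
      labels:
        service: provisioning     # still matches the selector
        app: provisioning         # new label, used only by the dashboards
        component: marketplace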
This error is hard to read but it means that the labels specified in spec.template.metadata.labels of your Deployment definition do not match those of spec.selector.matchLabels within the same definition. Upload your YAML if you require further assistance.
Best!
There are a few ways to resolve this from what I can tell. One way is to delete the Deployment and re-apply it with a key/value that works for your deployment:
spec:
  selector:
    matchLabels:
      app: app_name
  template:
    metadata:
      labels:
        app: app_name
        # -- whatever else --
This obviously incurs downtime but should be permanent. Your other option is to edit the deployment selector:
kubectl -n namespace edit deployment app-deployment
Then run your apply command again. This may or may not be permanent as I don't know what changed the selector to begin with.
If your Deployment doesn't even exist yet for you to be modifying it, this error might be legitimate: you may have a pod with the same name in the same namespace.
Someone came to me with this issue, and it turned out that they had typed "matadata" instead of "metadata", so as far as Kubernetes was concerned the label wasn't defined, which led to this error message.

How to make an environment variable different across two pods of the same deployment in kubernetes?

Based on this it is possible to create environment variables that are the same across all the pods of the deployment that you define.
Is there a way to instruct Kubernetes deployment to create pods that have different environment variables?
Use case:
Let's say that I have a monitoring container and I want to create 4 replicas of it. This container has a service that sends e-mails if an environment variable tells it to. E.g., if the env var IS_MASTER is true, then the service proceeds to send those e-mails.
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  ...
  replicas: 4
  ...
  template:
    ...
    spec:
      containers:
        - env:
            - name: IS_MASTER
              value: <------------- True only in one of the replicas
(In my case I'm using Helm, but the same thing can be done without Helm as well)
What you are looking for is, as far as I know, more like an anti-pattern than impossible.
From what I understand, you seem to be looking to deploy a scalable/HA monitoring platform that wouldn't mail X times on alerts, so you can either make a sidecar container that will talk to its siblings and "elect" the master-mailer (a StatefulSet will make it easier in this case), or just separate the mailer from the monitoring and make them talk to each other through a Service. That would allow you to load-balance both monitoring and mailing separately.
monitoring-1 \ / mailer-1
monitoring-2 --- > mailer.svc -- mailer-2
monitoring-3 / \ mailer-3
Any mailing request will be handled by one and only one mailer from the pool, but that's assuming your Monitoring Pods aren't all triggered together on alerts... If that's not the case, then regardless of your "master" election for the mailer, you will have to tackle that first.
And by tackling that first I mean adding master-election logic to your monitoring platform to orchestrate master fail-overs on events. There are a few ways to do so, but it really depends on what your monitoring platform is and can do...
Although, if your replicas are just there to extend compute power somehow and your master is expected to be static, then simply use a StatefulSet and add a one-liner at runtime doing if hostname == $statefulset-name-0 then MASTER, but I feel like it's not the best idea.
By definition, each pod in a deployment is identical to its other replicas. This is not possible in the yaml definition.
One possible solution would be to override the pod command and have it resolve the value of the variable, set it (export IS_MASTER=${resolved_value}) and then trigger the default entrypoint for the container.
It means you'll have to figure out a logic to implement this (i.e. how does the pod know it should be IS_MASTER=true?). This is an implementation detail that can be done with a DB or other shared common resource used as a flag or semaphore.
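A sketch of that idea, here combining the command override with the hostname check suggested in the other answers (the container name, image, StatefulSet name "monitoring", and entrypoint path are placeholders):
containers:
  - name: monitor
    image: my-monitoring-image:latest
    command: ["/bin/sh", "-c"]
    args:
      - |
        # pod ordinal 0 of a StatefulSet named "monitoring" becomes the master
        if [ "$(hostname)" = "monitoring-0" ]; then
          export IS_MASTER=true
        else
          export IS_MASTER=false
        fi
        exec /usr/local/bin/monitor   # hand off to the image's normal entrypoint (example path)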
All the Pod replicas in the deployment will have the same environment variables and no unique value to identify a particular Pod. Creating multiple Deployments is a better solution.
Not sure why, since the OP asks about only one Deployment. One solution is to use a StatefulSet. The pod names would be like web-0, web-1, web-2 and so on. In the code, check the host name; if it is web-0 then send emails, or else do something else.
It's a dirty solution, but I can't think of a better solution than creating multiple deployments.
One other solution is to use the same Helm chart for both cases and run one Helm deployment for each case. You can overwrite env variables with Helm (using --set foo.deployment.isFirst="0" or "1").
Please note that Helm/K8s will not allow you to POST the very same configuration twice.
So you will have to conditionally apply some Kubernetes-specific configuration (Secrets, ConfigMaps, etc.) on the first deployment only.
{{- if eq .Values.foo.deployment.isFirst "1" }}
...
...
{{- end }}
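Roughly, the two releases could then be installed like this (release and chart names are placeholders; the value path follows the answer above):
helm upgrade --install monitor-master ./my-chart --set foo.deployment.isFirst="1"
helm upgrade --install monitor-worker ./my-chart --set foo.deployment.isFirst="0"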