Helm Chart A is a library chart and defines a template function f1
Chart B adds A as a dependency
Chart C adds both A and B as dependencies.
The requirement is that app B can be deployed by itself, while app C needs B to be deployed as well.
Now A.tgz is present in the C/charts directory as well as in B/charts.
In this case I have noticed that the template function f1 from C/charts/A.tgz is the one that gets executed.
Are there any best practices for this situation?
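For reference, a minimal sketch of the dependency layout described above (versions and repository paths are assumptions):

```yaml
# B/Chart.yaml -- B depends on the library chart A
apiVersion: v2
name: B
version: 0.1.0
dependencies:
  - name: A
    version: 1.0.0
    repository: "file://../A"  # assumed local path
---
# C/Chart.yaml -- C depends on both A and B, so A.tgz ends up in
# C/charts directly and again inside B.tgz
apiVersion: v2
name: C
version: 0.1.0
dependencies:
  - name: A
    version: 1.0.0
    repository: "file://../A"
  - name: B
    version: 0.1.0
    repository: "file://../B"
```

As long as both charts pin the same version of A, it should not matter which copy of f1 gets rendered; the common practice is to keep the library-chart version pinned consistently in every consumer.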
I have an application "A" which requires a Postgres database. Once I deploy the Helm chart, it deploys the dependent child chart, postgres.
helm install application_A -n mynamespace ./applicationA-0.1.0.tgz
Now I have another application "B" which also requires a Postgres database. I wish to deploy application B in the same namespace, but it should not deploy a new Postgres pod, since one is already available from application A's deployment.
helm install application_B -n mynamespace ./applicationB-0.1.0.tgz
It fails with the following error:
Error: INSTALLATION FAILED: rendered manifests contain a resource that already exists
I want Helm to recognize that application B's dependency is already deployed at the desired version, and hence automatically skip deploying the dependent chart.
I am aware of conditional deployment of subcharts. That requires me to find out what is already available, using a shell script to toggle the condition when deploying application B.
Is there any way in Helm to automatically avoid deploying a subchart if it is already deployed?
Helm version: v3.8.0
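For context, the conditional-subchart mechanism referred to above looks roughly like this (chart name, version, and repository are illustrative):

```yaml
# applicationB/Chart.yaml -- the postgres subchart is guarded by a condition
apiVersion: v2
name: applicationB
version: 0.1.0
dependencies:
  - name: postgresql
    version: "11.1.0"
    repository: "https://charts.bitnami.com/bitnami"
    condition: postgresql.enabled
```

The subchart is then skipped with something like helm install application_B -n mynamespace ./applicationB-0.1.0.tgz --set postgresql.enabled=false; the pain point is that deciding when to pass this flag currently requires an external script.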
Ideally there should not be any relationship between two Helm release installations; each installation always tries to install its own components. Here it is failing because of the hardcoded resource name. For your use case, you can use Helm's built-in lookup function to check whether the resource exists and skip creating it if it does.
link:
https://helm.sh/docs/chart_template_guide/functions_and_pipelines/#using-the-lookup-function
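A sketch of that pattern in a template, assuming the clashing resource is a StatefulSet named postgres (kind, name, and spec are placeholders):

```yaml
# templates/postgres.yaml -- render the StatefulSet only if it doesn't exist yet
{{- if not (lookup "apps/v1" "StatefulSet" .Release.Namespace "postgres") }}
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:14  # placeholder image
{{- end }}
```

Keep in mind that lookup returns an empty result during helm template and --dry-run, since those do not contact the cluster.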
I have multiple projects in Rundeck (version: Rundeck 2.11.4-1):
Project A -> JOB A
Project B -> JOB B
Project C -> JOB C, which will call JOB A and JOB B.
Projects A, B, and C each have a different resources XML, and hence different values for the same properties.
So when I run JOB C from Project C, it looks up the resources XML of Project C.
What I am looking for is how to ensure JOB A uses Project A's resources XML and JOB B uses Project B's resources XML, even when they are actually called from Project C.
I replicated your scenario, and the solution is to update to the latest Rundeck version (3.4.7 at this moment) to use the "Use references job's nodes" capability in the job reference step; this option isn't available on 2.11.x.
In that way, you can achieve your goal.
Alternatively (and as a "dirty" solution), you can call JOB A and JOB B individually via the Rundeck API from JOB C using an inline-script step.
Due to the big gap between 2.11.4 and 3.4.7, I recommend you create a fresh 3.4.7 instance and import all projects, keys, and nodes. Take a look at this.
Adding the keys of the nodes used in JOB A and JOB B to Project C may solve this issue.
First save the keys using the "Key Storage Option", then under Project Settings > Default Node Executor > SSH Key Storage Path, browse to those keys and save.
It may be because the nodes of JOB A and JOB B are not accessible by Project C.
I am using the Helm built-in object 'Release.IsUpgrade' to ensure an init container is only run at upgrade.
I want to only run the init-container when upgrading from a specific Chart version.
Is it possible to get the 'from' chart version in a helm upgrade?
It doesn't look like this information is published either in the .Release object or through information available to a hook job.
You probably want a pre-upgrade hook and not an init container. If you have multiple replicas on your deployments, the init container will run on all of them; even if you have just one, if the node it's on fails and is replaced, the replacement will re-run the init container. A pre-upgrade hook will run just once, regardless of how the corresponding deployments are configured.
That hook will be a separate pod (and will require writing code), so within that you can do whatever you want. You can give it read access to the Kubernetes API to get the definition of an existing deployment, for example, and then look at its labels or container image tag to find out what version of the chart/application is running now. (There are standard labels that can help with this.) You could also make the upgrade step just look for its own outputs: if object X is supposed to exist, create it if it's not there, without focusing on specific versions.
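As a sketch, a pre-upgrade hook Job along those lines might look like this (image, script, and label lookup are placeholders):

```yaml
# templates/pre-upgrade-job.yaml -- runs once per upgrade, before the release is updated
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name }}-pre-upgrade
  annotations:
    "helm.sh/hook": pre-upgrade
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: check-version
          image: myorg/upgrade-check:latest  # placeholder image
          command: ["/bin/sh", "-c"]
          args:
            - |
              # e.g. read the currently running version from the standard
              # app.kubernetes.io/version label via the Kubernetes API
              # (needs a ServiceAccount with read access to Deployments)
              ./run-upgrade-steps.sh
```

The app.kubernetes.io/version label is one of the standard recommended labels mentioned above; the hook-delete-policy annotation cleans up the Job after it succeeds.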
I have just started experimenting with Helm, the Kubernetes package manager.
But chart vs template topic seems a bit confusing to me.
I understand that with a template I create the Kubernetes YAML that defines the objects to be installed.
However, the same is true for charts as well, except that a chart is an abstraction over the YAMLs. And ./charts contains standalone charts, while ./templates is valid only for the base chart. So I know that. But when should I include another chart, and when should I just create a template?
Looking at different kinds of charts on the web, I still don't know which to use.
Say I have a project called MyApp, which has one component named MyServer that communicates with MySQL.
So I created a chart and put MyServer in it as a template:
./MyApp/templates/MyServer.yaml
What should I do with MySql?
I have seen both solutions in different projects. One just creates another template:
./MyApp/templates/MySQL.yaml
In another project I saw a chart for MySQL from a chart repository:
./MyApp/charts/mysql-version.tgz
On top of that, I have seen a big-data project (HDFS, Kafka, ZooKeeper, ELK, Oracle DB, etc.) where one component was included as a chart in ./charts while another was created as a template in ./templates.
This whole decision between chart and template seems random and confusing to me.
Could you explain it please when to use which?
A chart is a collection of templates, plus a little extra information like the metadata in the Chart.yaml file and the default values.yaml. In your example, MyApp is itself a chart.
For well-known dependencies (particularly things in the Helm charts repository and especially the stable charts) you're probably better off using the external chart; declare the dependency in your requirements.yaml or (Helm v3) Chart.yaml file and run helm dependency update. This lets you import the chart with two lines, rather than reproducing the StatefulSet, PersistentVolumeClaim, etc. that are included in the chart.
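As an illustration with the MySQL example from the question, the two-line dependency declaration (Helm v3 syntax; version and repository URL are illustrative) would be:

```yaml
# MyApp/Chart.yaml
apiVersion: v2
name: MyApp
version: 0.1.0
dependencies:
  - name: mysql
    version: "9.4.1"
    repository: "https://charts.bitnami.com/bitnami"
```

Running helm dependency update then fetches the packaged chart into MyApp/charts/, which is where the mysql-version.tgz files you saw come from.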
So, is it possible to share the same pod among Helm packages with a common reference? Example:
Scenario:
Package A
...
- requirements.yml
require: C
Package B
...
- requirements.yml
require: C
When I run:
helm install A
helm install B
the pods for projects A and B should use the same C pod.
Is it possible? Is there any documentation to help me with that?
PS: The C package in my case is a broker, but both the A and B packages can be deployed separately.
This should work fine with Helm. A little bit of background here: objects are created/updated in that order.
When you update an object, i.e. kubectl apply on a Pod/Deployment/Service/etc., if the object already exists with the same spec it won't be changed, so you'll end up with the same object in the end.
Also, Kubernetes objects with the same name use the idempotency principle:
All objects will have a unique name to allow idempotent creation and retrieval
In your example:
helm install stable/packageA => which also installs PackageC
helm install stable/packageB => will update PackageC, but it's already present and won't change.
You have to make sure that the dependencies for PackageA and PackageB specify exactly the same version of PackageC.
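A sketch of that pinning (names, versions, and repository are assumptions):

```yaml
# packageA/requirements.yaml
dependencies:
  - name: packageC
    version: 1.2.3
    repository: "https://example.com/charts"
---
# packageB/requirements.yaml -- must pin the exact same PackageC version
dependencies:
  - name: packageC
    version: 1.2.3
    repository: "https://example.com/charts"
```

If the two pins diverge, the second install would try to change the shared PackageC resources rather than leaving them untouched.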