How to structure Helm chart with different environments? - kubernetes

I plan to use Helm for deployment. I have three applications/pods, p1, p2, p3, and each of these has two environments, dev and prod; in each environment there is a configmap.yml and a deployment.yml.
I plan on using Helm; however, how should I structure these? Do I need three Helm charts, one per application, or is it possible to pack everything into one chart, given these constraints?
I thought of the following structure.
+-- charts
|   \-- my-chart
|       +-- Chart.yaml            # Helm chart metadata
|       +-- templates
|       |   \-- p1
|       |       +-- configmap1.yml
|       |       +-- dep1.yaml
|       |   (similarly for p2, p3)
|       +-- values.yaml           # default values
|       +-- values.dev.p1.yaml    # development override values
|       +-- values.dev.p2.yaml
|       +-- values.dev.p3.yaml
|       +-- values.prod.p1.yaml   # production override values
|       +-- values.prod.p2.yaml
|       +-- values.prod.p3.yaml
Now if I want to deploy p1 in prod, I would simply run
helm install -f values.prod.p1.yaml helm-app
Would this work? Is this the general convention?

You can use a single Helm chart to manage all the Deployments and ConfigMaps.
Create a tpl (template) for the Deployment and Service, so that this single template is used to generate the multiple Deployment YAML configs.
That way you get three Deployment YAML files as output while maintaining a single template file.
You can follow the same approach for the ConfigMaps and keep everything in a single Helm chart, if that works for you.
For the different environments you can manage different values in separate values files, such as values.dev.p1.yaml and values.prod.p1.yaml:
helm install -f values.prod.p1.yaml helm-app
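A minimal sketch of such a shared template, assuming the apps are listed under an `apps` key in values.yaml (the key names and images here are hypothetical, not from the question):

```yaml
# values.yaml (assumed layout)
apps:
  - name: p1
    image: repo/p1:1.0
  - name: p2
    image: repo/p2:1.0

# templates/deployment.yaml -- one template emits one Deployment per app
{{- range .Values.apps }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .name }}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {{ .name }}
  template:
    metadata:
      labels:
        app: {{ .name }}
    spec:
      containers:
        - name: {{ .name }}
          image: {{ .image }}
{{- end }}
```

A per-environment values file then only needs to override the `apps` entries that differ.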


get name of all resources from helm release

I want to get the names of all resources from a Helm release, but I can't figure out how and haven't found anything. My attempt:
helm get all RELEASE_NAME --template '{{range .items}}{{.metadata.name}}{{end}}'
helm get all has a somewhat constrained set of options for its --template option; it gets passed in a single .Release object, and the set of created Kubernetes objects is stored as text in a .Release.Manifest field.
helm get all RELEASE_NAME --template '{{ .Release.Manifest }}'
# returns the manifest as a string
There's a dedicated helm get manifest subcommand that returns the manifest as YAML.
Once you have the YAML, you need to extract the resource names from it. One approach is to use a tool like yq that can do generic queries over YAML:
helm get manifest RELEASE_NAME | yq eval '.metadata.name' -
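If yq isn't available, a rough awk sketch over the manifest can pull out kind/name pairs instead; this assumes top-level `kind:` lines and `metadata.name` indented by two spaces, which is how Helm renders its manifests ("my-release" is a placeholder release name):

```shell
# Print KIND/NAME for each object in the release manifest.
helm get manifest my-release | awk '
  /^kind:/            { kind = $2 }                    # remember the current document kind
  /^  name:/ && kind  { print kind "/" $2; kind = "" } # first metadata.name after it
'
```

This is only a heuristic; yq does a real YAML parse and is the safer choice when it is installed.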

helm template output showing values not being resolved

I'm new to helm charts and K8s, so forgive me. I'm working on a project that deploys an application project with several apps as part of it. The previous dev that put the charts together was using a "find-and-replace" technique to fill in values for things like the image repository, tags, etc. This is making our CICD pipeline development tricky and not scalable. I'm trying to update the charts to use variables and values.yml files. Most of it seems to be working; values are getting passed down to the templates except for one part, and I can't figure out why. It's a large project, so I won't copy all the chart files. I'll try to lay out the important parts:
Folder structure:
helm
  project1
    dev
      charts
        app1
          templates
            *template files
          Chart.yaml
          values.yaml
        app2
          *same subfolders
        app3
          *same subfolders
      Chart.yml
      values.yml
Base values.yml:
artifactory_base_url: company.repo.io/repo_folder
imageversions:
  app1_tag: 6.1.2-alpine-edge
  app2_tag: 8.1.0.0-edge
  app3_tag: 8.1.0.0-alpine-edge
  app4_tag: 10.1.1-alpine-edge
  initcontainer: latest
App values.yml file:
app:
  image:
    repository: "{{ .Values.artifactory_base_url }}/pingaccess"
    tag: "{{ .Values.pa_tag }}"
deployment.yml template file:
containers:
  - name: {{ .Chart.Name }}
    image: "{{ .Values.app.image }}"
I'm running the following helm template command to confirm that I'm getting the proper output for at least the app1 part before actually trying to deploy to the k8s cluster.
helm template app1 --set date="$EPOCHSECONDS" --set namespace='porject_namespace' --values helm/project1/dev/values.yaml helm/project1/dev/charts/app1
Most of the resulting yaml looks great, and it looks like the values I have defined in the base values.yml file are getting passed through in other areas like this example:
initContainers:
  - name: appinitcontainer
    image: "company.repo.io/repo_folder/initcontainer:latest"
But there is one portion, populated from the deployment.yml template file, that is still showing the curly braces for variables:
containers:
  - name: app1
    image: "map[repository:{{ .Values.image_repo_base_url }}/app1 tag:{{ .Values.app1_tag }}]"
    imagePullPolicy: Always
I've tried making variations in all the files mentioned above to remove quotes, use single quotes, etc. In those attempts I usually get a variation of the following errors:
"error converting yaml to json. did not find expected key"
"error mapping values"
I haven't been able to find a solution. I'm assuming that the output of the helm template command should not contain any braces like that; all variables and values should be resolved. I'm hoping somebody can provide some tips on things I might be missing.
You're hitting two issues here. First, .Values.app.image is a map containing the two keys repository and tag; that's why you get the weird map[repository:... tag:...] syntax in the output. Second, string values in values.yaml aren't reinterpreted for Helm template syntax; that's why the {{ ... }} markup gets passed through to the output.
This in turn means you need to do two things. To resolve the map, construct the string from the contents of the dictionary; and to resolve the templating markup inside the string values, use Helm's tpl function.
{{- $repository := tpl .Values.app.image.repository . }}
{{- $tag := tpl .Values.app.image.tag . }}
image: "{{ $repository }}:{{ $tag }}"
(You may find it useful to separate "repository", "registry" or "image", and "tag" into three separate parts, since probably all of your images are coming from the same repository; that would let you configure the repository in one place and customize the image name per component. The bitnami/postgresql chart is one example of this setup.)
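As a sketch of what that split could look like (the key names and values here are illustrative, not taken from the original chart; the bitnami charts use a similar registry/repository/tag layout):

```yaml
# values.yaml
image:
  registry: company.repo.io
  repository: repo_folder/app1
  tag: 6.1.2-alpine-edge

# templates/deployment.yaml would then assemble the reference itself:
#   image: "{{ .Values.image.registry }}/{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```

With this layout none of the values need template markup, so `tpl` is no longer required for the image reference.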

GitHub Actions: stored .env file content in GitHub secrets; in the pipeline I want to write the secret content to a .env file

I stored the production .env file content in GitHub secrets (in a single variable), and I want to create the .env file in the pipeline and write the secret content into it.
I tried the following methods:
...
...
    env:
      ENV_CONTENT: ${{ secrets.ENV_DEV }}
...
    run: |
      touch .env
      echo $ENV_CONTENT
      echo $ENV_CONTENT >> .env
      cat .env
...
    run: |
      echo ${{ secrets.ENV_DEV }} >> .env
      cat .env
...
Output: the variables in the .env file are not getting defined.
> demo#1.0.0 deploy:dev /home/runner/work/lvld-api/lvld-api
> NODE_ENV=dev serverless deploy --stage dev
Serverless: Deprecation warning: Detected ".env" files. Note that Framework now supports loading variables from those files when "useDotenv: true" is set (and that will be the default from next major release)
More Info: https://www.serverless.com/framework/docs/deprecations/#LOAD_VARIABLES_FROM_ENV_FILES
Serverless: DOTENV: Loading environment variables from .env:
Serverless: - STAGE
Serverless Warning --------------------------------------
A valid environment variable to satisfy the declaration 'env:REGION' could not be found.
Serverless Warning --------------------------------------
Main.yml: https://drive.google.com/file/d/1PK4SlyXkC7xRn_eM2SQO1rkWjoJaOYaZ/view?usp=sharing
GithubAction Log: https://drive.google.com/file/d/1YvBfdle1GpomJpyuqneQt0PK-OYShXZC/view?usp=sharing

Helm pass values to the templates for multiple environment

I am a newbie with Helm and struggling to configure deployment.yaml. My chart tree structure looks like below. How should I pass the values for dev and prod to deployment.yaml?
For example, if I would like to use a different replica count for prod, should I add another value such as below, or should deployment.yaml always stay as it is, with multiple values.yaml files as shown below?
replicas: {{ .Values.replicaCount .values.dev.replicacount }}
Or use only below tag is enough. Let's say if git branch equals to master then use below command
helm install . -f values.production.yaml
If git branch equal to development then use following command
helm install . -f values.dev.yaml
+-- charts
|   \-- my-chart
|       +-- Chart.yaml         # Helm chart metadata
|       +-- templates
|       |   \-- ...
|       +-- values.yaml        # default values
|       +-- values.dev.yaml    # development override values
|       +-- values.prod.yaml   # production override values
You should have a values.yaml file per environment.
That means that in your templates/deployment.yaml you'll have
replicas: {{ .Values.replicaCount }}
And then, for each environment you'll have a specific values.yaml. Like:
+-- values.yaml # default values
+-- values.dev.yaml # development override values
+-- values.prod.yaml # production override values
It really depends on the differences between your environments.
As described in the Helm docs, there are three potential sources of values:
A chart's values.yaml file
A values file supplied with -f on helm install or helm upgrade
The values passed to a --set or --set-string flag
At a high level, you might want to consider the two approaches below:
If your environments have major differences between them, then passing a different values.yaml file is an option and worth the extra maintenance.
On the other hand, if the differences are only in a few fields, consider using just one base values.yaml file with default values and overriding just the specific fields with the --set flag.
(*) In your specific case, you divided the dev/prod configuration into different files, which is a good practice; but there are cases where the difference is only one or two URLs (and maybe some secrets that you'll want to pass as inline values anyway), so you can save yourself the extra maintenance.
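On the command line, the two approaches look roughly like this (release and chart names here are placeholders, not from the question):

```shell
# Option 1: layer a per-environment values file over the chart defaults
helm install my-app ./my-chart -f values.prod.yaml

# Option 2: one shared values file, overriding individual fields inline
helm install my-app ./my-chart --set replicaCount=5 --set image.tag=1.2.3
```

Later sources win: values from -f files override the chart's values.yaml, and --set overrides both.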

Override values of subcharts in Helm

We have created a common Helm chart.
Using the common chart, we have derived a HelloWorld Helm chart.
Charts
  Common
    templates
      _deployment.yaml
      _configmap.yaml
      _service.yaml
    Chart.yaml
  HelloWorld
    templates
      deployment.yaml
      configmap.yaml
      service.yaml
    Chart.yaml
    values.yaml
    values-dev.yaml
We want to override values specified in the subchart's values.yaml using values-dev.yaml. We understand we can override the values from the subchart, and that works.
However, we want to override the values at the chart level instead of the app level. Below is the structure.
Charts
  Common
    templates
      _deployment.yaml
      _configmap.yaml
      _service.yaml
    Chart.yaml
  HelloWorld1
    templates
      deployment.yaml
      configmap.yaml
      service.yaml
    Chart.yaml
    values-HelloWorld1.yaml
    values-dev.yaml
  HelloWorld2
    templates
      deployment.yaml
      configmap.yaml
      service.yaml
    Chart.yaml
    values-HelloWorld2.yaml
    values-qa.yaml
  values.yaml
Is it possible to override the values from values.yaml?
I'm not 100% sure what you're asking, but in general you can override subchart values at any point by putting them under a key matching the chart's name. So something like:
Common:
  foo: bar
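A slightly fuller sketch, assuming a parent chart that declares the HelloWorld charts as dependencies (all keys and values here are illustrative):

```yaml
# parent chart's values.yaml
global:              # visible to the parent and every subchart as .Values.global
  environment: dev

HelloWorld1:         # overrides keys in HelloWorld1's own values.yaml
  replicaCount: 2

HelloWorld2:
  replicaCount: 1
```

The `global:` block is the usual way to share one setting across all subcharts, while per-chart keys like `HelloWorld1:` override that subchart's defaults; an environment file such as values-dev.yaml passed with -f can override either.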