Glob with helm and ArgoCD isn't working as expected - kubernetes

I am testing this piece of code with Helm/ArgoCD/Kubernetes. I have the chart in a repo; it has a foo directory with some files in it, and I want to create a ConfigMap from the files in that foo directory.
configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: conf
data:
{{ (.Files.Glob "foo/*").AsConfig | indent 2 }}
When I test this on a Linux server (helm template ./), it finds the files properly and renders the correct manifests. But when I try it from my Mac, it does not find any files, so the ConfigMap has no data in it. I have pushed the chart along with the foo directory to a Git repo and configured ArgoCD to deploy it to a k8s cluster. ArgoCD also fails to find the files and thus incorrectly creates a ConfigMap with no data.
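For reference, a quick way to compare what each machine renders for just this one template, run from the chart root (--show-only is a standard helm template flag):
helm template . --show-only templates/configmap.yaml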
I have been going through the Helm documentation, ChatGPT, and so on, with no luck finding an answer.
What am I doing wrong here?
Here is the directory layout:
mychart/
|-- Chart.yaml
|-- values.yaml
|-- templates/
| |-- configmap.yaml
|-- foo/
| |-- random.jks
| |-- random.xml

Related

Include configmap with non-managed helm chart

I was wondering if it is possible to include a ConfigMap, with its own values.yml file, alongside a Helm chart repository that I am not managing locally. That way, I can uninstall the resource along with the chart release.
Example:
I am using New Relic's Helm chart repository and installing the Helm charts using their repo name. I want to include a ConfigMap used for infrastructure settings in the same Helm deployment, without having to use a kubectl apply to add it independently.
I also want to avoid having to manage the chart repo locally, as I am pinning the version and other values separately from the helm upgrade --install --set triggers.
What you could do is use Kustomize. Let me show you an example from my Prometheus installation.
I'm using the kube-prometheus-stack Helm chart, but I add some more custom resources, like a SecretProviderClass.
kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
helmCharts:
  - name: kube-prometheus-stack
    repo: https://prometheus-community.github.io/helm-charts
    version: 39.11.0
    releaseName: prometheus
    namespace: prometheus
    valuesFile: values.yaml
    includeCRDs: true
resources:
  - secretproviderclass.yaml
I can then build the Kustomize YAML by running kustomize build . --enable-helm from the same folder as my kustomization.yaml file.
I use this with my gitops setup, but you can use this standalone as well.
My folder structure would look something like this:
.
├── kustomization.yaml
├── secretproviderclass.yaml
└── values.yaml
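When used standalone, the rendered output can also be applied directly by piping the build into kubectl (a minimal usage sketch):
kustomize build . --enable-helm | kubectl apply -f -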
Using only Helm, without any 3rd-party tools like Kustomize, there are two solutions:
1. Depend on the configurability of the chart you are using, as described by @Akshay in the other answer
2. Declare the chart you are looking to add a ConfigMap to as a dependency
You can manage the Chart dependencies in the Chart.yaml file:
# Chart.yaml
dependencies:
  - name: nginx
    version: "1.2.3"
    repository: "https://example.com/charts"
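After declaring it, the dependency chart is fetched into the chart's charts/ folder with the standard Helm command (shown here against the directory name used below):
helm dependency update my-nginx-chart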
With the dependency in place, you can add your own resource files (e.g., the ConfigMap) to the chart. During Helm install, all dependencies and your custom files will be merged into a single Helm deployment.
my-nginx-chart/
  values.yaml       # defines all values, including those for the dependencies
  Chart.yaml        # declares the dependencies
  templates/        # custom resources to be added on top of the dependencies
    configmap.yaml  # the ConfigMap you want to add
To configure values for a dependency, you need to prefix the parameters in your values.yaml:
my-configmap-value: Hello World
nginx: # <- refers to the "nginx" dependency
  image: ...
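For completeness, a minimal sketch of the templates/configmap.yaml you would add on top (the key my-configmap-value comes from the values example above; because it contains hyphens, it has to be read with index rather than dot notation):
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap
data:
  message: {{ index .Values "my-configmap-value" | quote }}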

ArgoCD multiple files into argocd-rbac-cm configmap data

Is it possible to pass a CSV file to the data of the "argocd-rbac-cm" ConfigMap? Since I've deployed Argo CD through GitOps (with the official Argo CD Helm chart), I would rather not hardcode a large CSV file inside the ConfigMap itself; I'd prefer instead to reference a CSV file directly from the Git repository where the Helm chart is located.
And is it also possible to pass more than one file-like key?
Example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
  namespace: argocd
data:
  policy.default: role:readonly
  policy.csv: |
    <<< something to have this append many files >>>
    <<< https://gitlab.custom.net/proj_name/-/blob/master/first_policy.csv # URL of the first csv file in the git repository >>>
    <<< https://gitlab.custom.net/proj_name/-/blob/master/second_policy.csv # URL of the second csv file in the git repository >>>
Thanks in advance!
Any external evaluation in a policy.csv would lead to unpredictable behaviour in the cluster and would complicate the Argo CD codebase without obvious gains. That's why this ConfigMap should be set statically before deploying anything.
You basically have two options:
1. Correctly set .server.rbacConfig, as per https://github.com/argoproj/argo-helm/blob/master/charts/argo-cd/templates/argocd-configs/argocd-rbac-cm.yaml: create your configuration with some bash scripting, assign it to a variable, e.g. RBAC_CONFIG, then pass it in your CI/CD pipeline as helm upgrade ... --set "server.rbacConfig=$RBAC_CONFIG"
2. Extend the chart with your own template and use the .Files.Get function to create the ConfigMap from files that already exist in your repository (see https://helm.sh/docs/chart_template_guide/accessing_files/), with something like:
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
data:
  policy.csv: |-
{{- $files := .Files }}
{{- range tuple "first_policy.csv" "second_policy.csv" }}
{{ $files.Get . | indent 4 }}
{{- end }}
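For option 1, a hypothetical CI step could look like the following (a sketch only; the release and repo names are placeholders, the file names come from the question, and it assumes the chart exposes server.rbacConfig as a plain map, as in the linked template):
# combine the CSV files and pass the result via --set-file,
# which avoids the comma-parsing pitfalls of plain --set
cat first_policy.csv second_policy.csv > combined_policy.csv
helm upgrade --install argocd argo/argo-cd \
  --set-file "server.rbacConfig.policy\.csv=combined_policy.csv"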

kubernetes config map data value externalisation

I'm installing fluent-bit in our k8s cluster. I have the Helm chart for it in our repo, and Argo is doing the deployment.
Among the resources in the Helm chart is a ConfigMap with a data value as below:
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-bit
  labels:
    app: fluent-bit
data:
  ...
  output-s3.conf: |
    [OUTPUT]
        Name    s3
        Match   *
        bucket  bucket/prefix/random123/test
        region  ap-southeast-2
  ...
My question is: how can I externalize the value for the bucket so it's not hardcoded (please note that the bucket name contains random numbers)? The S3 bucket is created by a separate app that runs on the same master node, so the randomly generated bucket name is available there as an environment variable (e.g. doing "echo $s3bucketName" on the node gives the actual value).
I have tried the below in the ConfigMap, but it didn't work; the value just gets set as-is when inspected in the pod:
bucket $(echo $s3bucketName)
Using Helm, I know it can be achieved with something like the below, populated via scripting with helm --set to pass the value from the environment variable. But the deployment happens automatically through ArgoCD, so there is no obvious place to run a helm --set command (please let me know if there is).
bucket {{.Values.s3.bucket}}
TIA
Instead of using helm install, you can use helm template ... --set ... > out.yaml to locally render your chart into a YAML file. This file can then be processed by Argo.
Docs
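For example, picking up the value referenced as .Values.s3.bucket in the question above (a sketch; the release name is arbitrary, and $s3bucketName is the node-side variable from the question):
helm template fluent-bit . --set s3.bucket="$s3bucketName" > out.yaml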
With FluentBit you should be able to use environment variables such as:
output-s3.conf: |
  [OUTPUT]
      Name    s3
      Match   *
      bucket  ${S3_BUCKET_NAME}
      region  ap-southeast-2
You can then set the environment variable via your Helm values. Depending on the chart you are using and how values are passed, you may have to perform a different setup, but for example, using the official FluentBit chart with a values-prod.yml like:
env:
  - name: S3_BUCKET_NAME
    value: "bucket/prefix/random123/test"
Using ArgoCD, you probably have a Git repository where Helm values files are defined (like values-prod.yml) and/or an ArgoCD Application defining values directly. For example, if you have an ArgoCD Application defined such as:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  # [...]
spec:
  source:
    # ...
    helm:
      # Helm values files for overriding values in the helm chart
      valueFiles:
        # You can update this file
        - values-prod.yaml
      # Helm values
      values: |
        # Or update values here
        env:
          - name: S3_BUCKET_NAME
            value: "bucket/prefix/random123/test"
  # ...
You should be able to update either values-prod.yml in the repository used by ArgoCD, or update values: directly with your environment variable.

Override config map file in helm

We have Helm charts to deploy our application. We use a configuration.json file for application properties and load it into a ConfigMap. But users typically use their own configuration file.
The default configuration.json file is packaged inside the Helm chart under the data directory. This file is read as:
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
{{ (.Files.Glob .Values.appConfigFile).AsConfig | indent 4 }}
And in values:
appConfigFile: data/configuration.json
If users install our charts directly from the repository, how can this configuration file be overridden? Doing --set appConfigFile=/path/to/custom.json doesn't populate the ConfigMap.
If the charts are untarred to a directory, users can add a custom configuration file into the chart directory and pass it with --set appConfigFile=customData/custom.json, which works.
Can file overrides be achieved when charts are deployed from the repository directly?
Adding the custom configuration to a values file and running helm install with the -f flag is a solution.
customValues.yaml:
overrideConfig: true
customConfig:
  # add your custom configuration here as YAML; it will be rendered with toJson
Config map yaml:
# If a custom values file is passed, the overrideConfig variable will be set,
# so load the ConfigMap data from the customConfig variable.
{{ if .Values.overrideConfig }}
app-config.json: |-
  {{ toJson .Values.customConfig }}
{{ else }}
# Else load from the default configuration available in the chart.
{{ (.Files.Glob .Values.appConfigFile).AsConfig | indent 4 }}
{{ end }}
If the custom configuration is needed:
helm install -f customValues.yaml repo/chartName
Not sure if this is the perfect solution, but I ended up taking this route.
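The override can also be sanity-checked without installing by rendering the chart locally (a sketch; it assumes the repo has already been added with helm repo add):
helm template -f customValues.yaml repo/chartName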

How to set environment related values.yaml in Helm subcharts?

I am currently deploying my applications in a Kubernetes cluster using Helm. Now I also need to be able to modify some parameters in the values.yaml file for different environments.
For simple charts with only one level this is easy: have different values-local.yaml and values-prod.yaml files and pass them via the helm install flag, e.g. helm install --values values-local.yaml.
But if I have a second layer of subcharts, which also need to distinguish values between multiple environments, I cannot set a custom values.yaml file for them.
Assuming the following structure:
chart/
|-- Chart.yaml
|-- values-local.yaml
|-- values-prod.yaml
|-- charts/
|   |-- foo-app/
|   |   |-- Chart.yaml
|   |   |-- values-local.yaml
|   |   |-- values-prod.yaml
|   |   |-- templates/
|   |   |   |-- deployments.yaml
|   |   |   |-- services.yaml
This will not work, since Helm expects a values.yaml in subcharts.
My workaround right now is to have an if-else construct in the subchart's values.yaml and to set a global variable in the parent values.yaml.
*foo-app/values.yaml*
{{- if .Values.global.env.local }}
foo-app:
  replicas: 1
{{- else if .Values.global.env.dev }}
foo-app:
  replicas: 2
{{- end }}
parent/values-local.yaml
global:
  env:
    local: true
parent/values-prod.yaml
global:
  env:
    prod: true
But I hope there is a better approach, so I do not need to rely on these custom flags.
I hope you can help me out on this.
Here is how I would do it (for reference: overriding values):
In your child chart (foochart), define the number of replicas as a value:
foochart/values.yaml
...
replicas: 1
...
foochart/templates/deployment.yaml
...
spec:
  replicas: {{ .Values.replicas }}
...
Then, in your main chart's values files:
values-local.yaml
foochart:
  replicas: 1
values-prod.yaml
foochart:
  replicas: 2
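The environment is then selected at install time by picking the corresponding values file, in the same way as in the question (a usage sketch, run from the parent chart's directory):
helm install --values values-prod.yaml .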
Just an idea; it needs to be fleshed out a bit more...
At KubeCon I saw a talk introducing a Kubernetes operator called Lostromos. The idea was to simplify deployments to multiple different environments and make maintaining these kinds of things easier. It uses Custom Resource Definitions. I wonder if you can leverage Lostromos in this case: your sub-charts would have a single values.yaml, but you would use Lostromos and a CRD to feed in the properties you need. So you would deploy the CRD instead, and the CRD would trigger Lostromos to deploy your Helm chart.
Just something to get the ideas going, but it seemed worth exploring.
I'm currently getting my chart from stable/jenkins and am trying to set my values.yaml file. I have made the appropriate changes and run helm install -n <name> --values=<file> stable/jenkins, but it continues to install the default values instead of the modified YAML file I created. To be more specific, I commented out the plug-in requirements in the YAML file, since they have been causing my pod status to stay at 'Init:0/1' on Kubernetes.