I was trying to deploy an application with Helm on ArgoCD, and this is my case.
I want to deploy Vault using Helm, using HashiCorp's vault chart as the base chart and overriding its values from a parent (umbrella) chart.
The base chart has conditions on creating Services, PVCs, etc.
The values are overridden on the ArgoCD side, yet the Service still exists even though the condition is set to false.
Chart.yaml
apiVersion: v2
name: keycloak
type: application
version: 1.0.0
dependencies:
  - name: keycloak
    version: "9.7.3"
    repository: "https://charts.bitnami.com/bitnami"
Argocd.yml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: vault
  namespace: vault
spec:
  project: default
  source:
    chart: vault
    repoURL: https://github.com/myrepo.git
    targetRevision: HEAD
  destination:
    server: "https://kubernetes.default.svc"
    namespace: kubeseal
It depends on how you are overriding the values in your chart; this is more of a Helm question than an ArgoCD one.
Consider the Chart.yaml below, where the chart is named infra and has keycloak as a dependency (subchart):
apiVersion: v2
name: infra
type: application
version: 1.0.0
dependencies:
  - name: keycloak
    version: "9.7.3"
    repository: "https://charts.bitnami.com/bitnami"
Create a values file in the same directory as your Chart.yaml with the following contents:
keycloak:
  fullnameOverride: keycloak-1
Here the keycloak: key in the values file sets the values for the subchart named keycloak.
You can override values for several subcharts like this in a single values file, as in the sketch below.
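For example, applied back to the original question, a parent chart that pulls in vault as a dependency could disable a conditional resource of that subchart from the same file. The server.dataStorage.enabled path below is only illustrative; check the subchart's values.yaml for the exact key its conditions reference:

# values.yaml of the parent chart
keycloak:
  fullnameOverride: keycloak-1
vault:
  server:
    dataStorage:
      enabled: false   # assumed condition flag; use the key your subchart's templates actually check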
Related
I have a YAML that gets deployed by the ArgoCD controller, which deploys a Helm chart from Artifactory.
For my local development I use a separate values.yml in the Helm chart.
My controller manifest looks like the one below (refer to the git link):
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: <name-to-the-app>
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://harbor.1000kit.org/chartrepo/1000kit/
    targetRevision: <version-hardcode-in-repo>
    chart: <chart-name-that-is-getting-deployed>
    helm:
      releaseName: <release-name-hardcoded>
      # custom values to override the helm chart ones
      values: |
        <pass-the-custom-values>
  destination:
    server: https://kubernetes.default.svc
    namespace: <namespace-where-to-be-deployed>
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
The Helm chart that is getting deployed contains a values.yaml.
I am trying to override the values.yaml present in the Helm chart in Artifactory, so I pass all the values under source -> helm -> values as shown above.
Question:
In the custom values I skipped some keys, but ArgoCD is fetching those values from the chart's values.yaml and using them. Is this the expected behavior?
Another observation is that the chart repo's values.yaml is loaded as parameters in ArgoCD, and the values are displayed in the Application's UI.
From the documentation I see there are parameters that can be overridden, but the values file itself cannot be overridden.
spec:
  source:
    helm:
      parameters:
        - name: app
          value: $ARGOCD_APP_NAME
Is there any option to explicitly tell ArgoCD to ignore the values.yaml from the Helm chart in Artifactory?
I am new to ArgoCD.
It looks like the Helm chart in Artifactory has a key/value combination, say:
# value present in the helm chart values.yaml
app-details:
  name: "demoapp"
  version: "1.0"
  description: "simple demo app"
In the ArgoCD Application manifest:
# .....
  helm:
    releaseName: <release-name-hardcoded>
    # custom values to override the helm chart ones
    values: |
      app-details:
        name: "demoapp"
        version: "1.0"
        # if I didn't specify the description here <--------------- *
# ....
* - ArgoCD sets the default value (from values.yaml) for the description key in this case. This description value can be seen in the deployed manifest.
Also, the values.yaml data is displayed as parameters in the UI.
I had to use a different approach: I made changes in the chart templates that use this description value, as sketched below.
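Because Helm always merges the custom values over the chart's defaults, a key you omit simply falls back to the chart's values.yaml, so the template itself has to decide what "unset" means. A minimal sketch of such a template change, assuming you override description to an empty string to suppress it (the key names mirror the example above and the guard itself is hypothetical):

{{- /* render description only when it is explicitly non-empty */}}
{{- $details := index .Values "app-details" }}
{{- if $details.description }}
description: {{ $details.description | quote }}
{{- end }}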
I'm using ArgoCD with Helm charts. I have two environments: uat and prod.
As far as I understand, the proper approach for Helm is to have a base folder with the common parts plus a folder per environment.
So I have a single branch with 3 folders:
base # for commons: Chart.yaml, templates, etc.
uat # for uat values.yaml
prod # for prod values.yaml
In my Helm chart I have the following Chart.yaml (stored in the base folder):
apiVersion: v1
appVersion: 1.0.11
name: my-nice-app
version: 1.0.11
With every release I increase appVersion and version (version is used as the image tag in the chart templates, as sketched below).
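For instance, a deployment template might reference the chart version as the image tag roughly like this (the registry and repository names are only illustrative):

# illustrative snippet from a deployment template
image: "my-registry/my-nice-app:{{ .Chart.Version }}"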
I use the declarative approach to deploy the Helm chart (this is the uat Application resource; prod has a similar one):
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-nice-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: some-url
    targetRevision: HEAD
    path: base
    helm:
      version: v3
      valueFiles:
        - uat/values.yaml
  destination:
    server: https://kubernetes.default.svc
    namespace: uat
  syncPolicy:
    syncOptions:
      - CreateNamespace=false
    automated:
      selfHeal: true
      prune: true
Question:
I update the uat values file.
I update Chart.yaml with the new version.
I would like to deploy to uat only (but when I update base, prod is also triggered).
Where or how should I store Chart.yaml? Should I change the ArgoCD Application resource? Or is the only option to duplicate the chart per environment?
I would also prefer not to store any version-related info in the ArgoCD Application resource (so I don't have to change it every time).
It would be nice not to have to bring in Kustomize (kustomize.io).
You should split it into two charts (a base chart and a value chart).
The base chart is a chart dependency of the value chart; that way, if you update the base chart, the value chart won't be affected unless you bump the chart dependency.
The Chart.yaml of the value chart will look like this:
apiVersion: v2
name: my-nice-app-prod
description: Chart for production
type: application
version: 0.0.1
appVersion: "1.0.0"
dependencies:
  - name: my-nice-app-chart
    version: 0.1.9
Link references:
https://helm.sh/docs/helm/helm_dependency/
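To tie this back to the Application resource from the question, each environment's Application then points at the folder holding its value chart rather than at base, so changes in base reach an environment only when that environment's dependency version is bumped. A sketch for uat (folder and chart names are illustrative):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-nice-app-uat
  namespace: argocd
spec:
  project: default
  source:
    repoURL: some-url
    targetRevision: HEAD
    path: uat   # folder containing the value chart's Chart.yaml shown above
    helm:
      version: v3
  destination:
    server: https://kubernetes.default.svc
    namespace: uat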
You should use the variants folder.
The base directory holds configuration that is common to all environments and is not expected to change often. If you want to make changes to multiple environments at the same time, it is best to use the "variants" folder.
Please go through this doc
https://codefresh.io/blog/how-to-model-your-gitops-environments-and-promote-releases-between-them/
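An illustrative layout along the lines of that article (folder names follow the article's example; adapt them to your setup):

base/        # settings common to all environments, rarely changed
variants/    # overrides shared by groups of environments (e.g. non-prod vs prod)
envs/
  uat/       # values specific to uat
  prod/      # values specific to prod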
Right now, I have to install multiple Helm charts in different namespaces for my product to work. I am trying to create a super Helm chart in which I plan to add the Helm charts of my tools (mentioned above) and install them in one shot. My problem is that, since these tools live in different namespaces, I am not sure where to specify the namespace key for the particular dependency (chart) to be installed in. For example, if below is the Chart.yaml of my super Helm chart:
dependencies:
  - name: first_chart
    version: 1.2.3
    repository: https://firstchart.repo
  - name: second_chart
    version: 1.5.6
    repository: https://secondchart.repo
I want my first chart to be installed in namespace foo and the second chart in namespace bar.
I was looking at using conditions, but I believe conditions only take a boolean value.
I stumbled upon this link (https://github.com/helm/helm/issues/2060), which says we can do it in Helm 3, but it is mostly about how to keep releases in different namespaces; it does not specifically answer my question.
There is no builtin way to do this with pure Helm, but there is with helmfile.
Your example as helmfile.yaml:
releases:
  - name: chart1   # name of the release (helm install <...> first_chart)
    chart: repo1/first_chart
    version: 1.2.3
    namespace: foo
  - name: chart2
    chart: repo2/second_chart
    version: 1.5.6
    namespace: bar
# in case you want helmfile to automatically update repos
repositories:
  - name: repo1
    url: https://firstchart.repo
  - name: repo2
    url: https://secondchart.repo
Then, run:
helmfile sync => runs helm install/upgrade on all releases, or
helmfile apply => same as sync, but does a diff first so only releases that changed are upgraded/installed
There is way more to helmfile, but this is the gist.
PS: if you struggle with values, or want something similar to how umbrella chart values are handled, have a look at "helmfile: a simple trick to handle values intuitively"; a small sketch follows.
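As a quick illustration of per-release values, each release in helmfile.yaml can take its own values entries, either files or inline maps (the file path and key below are made up for the example):

releases:
  - name: chart1
    chart: repo1/first_chart
    namespace: foo
    values:
      - values/foo.yaml    # per-release values file (path is illustrative)
      - replicaCount: 2    # inline override (key is illustrative)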
The way I solved this for my clusters is with ArgoCD's App of Apps cluster bootstrapping model. Of course, it requires that ArgoCD is installed in the cluster. However, for many reasons not relevant to this answer, I would highly encourage installing ArgoCD regardless of the ease of bootstrapping it brings.
Assuming ArgoCD is in place, the structure is a single Helm chart containing templates for each of the child charts it will deploy, managed via Argo's Application CRD. You will notice there is a field in the CRD, spec.destination.namespace, which governs where the chart will be deployed.
An example Application template which governs my cert-manager chart deployment to the cert-manager namespace looks like:
{{- if .Values.certManager.enabled }}
# ref: https://argo-cd.readthedocs.io/en/stable/operator-manual/declarative-setup/#applications
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cert-manager
  # You'll usually want to add your resources to the argocd namespace.
  namespace: argocd
  # Add this finalizer ONLY if you want these to cascade delete.
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  # The project the application belongs to.
  project: cluster-configs
  # Source of the application manifests
  source:
    repoURL: https://github.com/yourOrg/Helm
    targetRevision: {{ .Values.targetRevision }}
    path: charts/cert-manager-chart
    # helm specific config
    helm:
      # Helm values files for overriding values in the helm chart
      # The path is relative to the spec.source.path directory defined above
      valueFiles:
        {{- range .Values.certManager.valueFiles }}
        - {{ . }}
        {{- end }}
      # Optional Helm version to template with. If omitted it will fall back to look at the 'apiVersion' in Chart.yaml
      # and decide which Helm binary to use automatically. This field can be either 'v2' or 'v3'.
      version: v3
  # Destination cluster and namespace to deploy the application
  destination:
    server: https://kubernetes.default.svc
    namespace: cert-manager
{{- end }}
With a corresponding values.yaml file for this parent chart, which may look something like the following, with the path to the desired value file(s) in each child chart's directory specified:
targetRevision: v1.11.0
certManager:
  enabled: true
  valueFiles:
    - "values.yaml"
clusterAutoScaler:
  valueFiles:
    - "envs/dev-account/saas/values.yaml"
clusterResourceLimits:
  valueFiles:
    - "values.yaml"
externalDns:
  valueFiles:
    - "envs/dev-account/saas/values.yaml"
ingressNginx:
  enabled: true
  valueFiles:
    - "values.yaml"
Below is a screenshot of one of my app of apps directory to complete the example.
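As the screenshot itself is not reproduced here, an illustrative layout of such an app-of-apps chart might look like this (file names are hypothetical, one Application template per child chart):

app-of-apps/
  Chart.yaml
  values.yaml
  templates/
    cert-manager.yaml
    cluster-autoscaler.yaml
    cluster-resource-limits.yaml
    external-dns.yaml
    ingress-nginx.yaml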
There are build environment variables (https://argoproj.github.io/argo-cd/user-guide/build-environment/), so you can inject something like $ARGOCD_APP_NAME into the application/Helm YAML file and it resolves to the actual value.
Is there a way to set custom environment variables so they can be resolved in the ArgoCD Application YAML file?
For example, in the ArgoCD Application YAML below, I need to set the ENV value so Helm knows which values.yaml to use.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  ...
spec:
  ...
  source:
    ...
    helm:
      valueFiles:
        - values_${ENV}.yaml
It's a late answer, but yes, you can. You can use the plugin field to add the ENV variables at the application level; an example follows:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  ...
spec:
  ...
  source:
    plugin:
      env:
        - name: ENV_VARIABLE
          value: ENV_VALUE
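These variables are made available to the config management plugin that renders the application, so the plugin definition is what actually consumes them. A minimal sketch, assuming the legacy configManagementPlugins section of the argocd-cm ConfigMap (the plugin name and command are made up for the example; newer Argo CD versions may expose the variable with an ARGOCD_ENV_ prefix, so check the docs for your release):

apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  configManagementPlugins: |
    - name: helm-with-env    # referenced from spec.source.plugin.name
      generate:
        command: ["sh", "-c"]
        args: ["helm template . --values values_${ENV_VARIABLE}.yaml"]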
I am trying to add a new dashboard to the Helm chart below:
https://github.com/helm/charts/tree/master/stable/prometheus-operator
The documentation is not very clear.
I have added a ConfigMap to the namespace, like the one below:
apiVersion: v1
kind: ConfigMap
metadata:
  name: sample-grafana-dashboard
  namespace: monitoring
  labels:
    grafana_dashboard: "1"
data:
  etcd-dashboard.json: |-
    {JSON}
According to the documentation, this should just be "picked up" and added, but it's not.
https://github.com/helm/charts/tree/master/stable/grafana#configuration
The sidecar option in my values.yaml looks like this:
grafana:
  enabled: true
  ## Deploy default dashboards.
  ##
  defaultDashboardsEnabled: true
  adminPassword: password
  ingress:
    ## If true, Grafana Ingress will be created
    ##
    enabled: false
    ## Annotations for Grafana Ingress
    ##
    annotations: {}
      # kubernetes.io/ingress.class: nginx
      # kubernetes.io/tls-acme: "true"
    ## Labels to be added to the Ingress
    ##
    labels: {}
    ## Hostnames.
    ## Must be provided if Ingress is enabled.
    ##
    # hosts:
    #   - grafana.domain.com
    hosts: []
    ## Path for grafana ingress
    path: /
    ## TLS configuration for grafana Ingress
    ## Secret must be manually created in the namespace
    ##
    tls: []
    # - secretName: grafana-general-tls
    #   hosts:
    #     - grafana.example.com
  #dashboardsConfigMaps:
  #sidecarProvider: sample-grafana-dashboard
  sidecar:
    dashboards:
      enabled: true
      label: grafana_dashboard
I have also tried adding this to the values.yaml:
dashboardsConfigMaps:
  - sample-grafana-dashboard
which doesn't work.
Does anyone have any experience with adding your own dashboards to this Helm chart? I really am at my wits' end.
To sum up:
For the sidecar you need only one option set to true: grafana.sidecar.dashboards.enabled.
Install prometheus-operator with the sidecar enabled:
helm install stable/prometheus-operator --name prometheus-operator --set grafana.sidecar.dashboards.enabled=true --namespace monitoring
Add a new dashboard, for example MongoDB_Overview:
wget https://raw.githubusercontent.com/percona/grafana-dashboards/master/dashboards/MongoDB_Overview.json
kubectl -n monitoring create cm grafana-mongodb-overview --from-file=MongoDB_Overview.json
Now the tricky part: you have to set a correct label on your ConfigMap. By default grafana.sidecar.dashboards.label is set to grafana_dashboard, so:
kubectl -n monitoring label cm grafana-mongodb-overview grafana_dashboard=mongodb-overview
Now you should find your newly added dashboard in Grafana. Moreover, every ConfigMap with the label grafana_dashboard will be processed as a dashboard.
The dashboard is persisted and safe, stored in the ConfigMap.
UPDATE:
January 2021:
The Prometheus Operator chart was migrated from the stable repo to the Prometheus Community Kubernetes Helm Charts, and Helm v3 was released, so:
Create namespace:
kubectl create namespace monitoring
Install prometheus-operator from helm chart:
helm install prometheus-operator prometheus-community/kube-prometheus-stack --namespace monitoring
Add the MongoDB dashboard as an example:
wget https://raw.githubusercontent.com/percona/grafana-dashboards/master/dashboards/MongoDB_Overview.json
kubectl -n monitoring create cm grafana-mongodb-overview --from-file=MongoDB_Overview.json
Lastly, label the dashboard:
kubectl -n monitoring label cm grafana-mongodb-overview grafana_dashboard=mongodb-overview
You have to:
define your dashboard JSON as a ConfigMap (as you have done, but see below for an easier way)
define a provider: to tell Grafana where to load the dashboard from
map the two together
from values.yml:
dashboardsConfigMaps:
  application: application
dashboardProviders:
  dashboardproviders.yaml:
    apiVersion: 1
    providers:
      - name: application
        orgId: 1
        folder: "Application Metrics"
        type: file
        disableDeletion: true
        editable: false
        options:
          path: /var/lib/grafana/dashboards/application
Now the application ConfigMap should create files in this directory in the pod, and, as has been discussed, the sidecar should load them into an "Application Metrics" folder, visible in the GUI.
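For reference, the ConfigMap named on the right-hand side of dashboardsConfigMaps is a plain dashboard ConfigMap along these lines (mirroring the question's example; the file names are placeholders):

apiVersion: v1
kind: ConfigMap
metadata:
  name: application
  namespace: monitoring
data:
  api01.json: |-
    {JSON}
  api02.json: |-
    {JSON}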
That probably answers your issue as written, but as long as your dashboards aren't too big, using kustomize means you can keep the JSON on disk without needing to embed it in another file, thus:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
# May choose to enable this if need to refer to configmaps outside of kustomize
generatorOptions:
  disableNameSuffixHash: true
namespace: monitoring
configMapGenerator:
  - name: application
    files:
      - grafana-dashboards/application/api01.json
      - grafana-dashboards/application/api02.json
For completeness' sake, you can also load dashboards from a URL or from the Grafana site, although I don't believe mixing methods in the same folder works.
So:
dashboards:
  kafka:
    kafka01:
      url: https://raw.githubusercontent.com/kudobuilder/operators/master/repository/kafka/docs/latest/resources/grafana-dashboard.json
      folder: "KUDO Kafka"
      datasource: Prometheus
  nginx:
    nginx1:
      gnetId: 9614
      datasource: Prometheus
dashboardProviders:
  dashboardproviders.yaml:
    apiVersion: 1
    providers:
      - name: kafka
        orgId: 1
        folder: "KUDO Kafka"
        type: file
        disableDeletion: true
        editable: false
        options:
          path: /var/lib/grafana/dashboards/kafka
      - name: nginx
        orgId: 1
        folder: Nginx
        type: file
        disableDeletion: true
        editable: false
        options:
          path: /var/lib/grafana/dashboards/nginx
This creates two new folders, each containing a dashboard from an external source; or you could point this at your own git repo and de-couple your dashboard commits from your deployment.
If you do not change the settings in the Helm chart, the default user/password for Grafana is:
user: admin
password: prom-operator