Build Kustomize with Helm Fails to Build - kubernetes

kustomize build --enable-helm .

I have the following project structure:
project
- helm-k8s
  - values.yml
  - Chart.yml
  - templates
    - base
      - project-namespace.yml
      - grafana
        - grafana-service.yml
        - grafana-deployment.yml
        - grafana-datasource-config.yml
      - prometheus
        - prometheus-service.yml
        - prometheus-deployment.yml
        - prometheus-config.yml
        - prometheus-roles.yml
      - kustomization.yml
    - prod
      - kustomization.yml
    - test
      - kustomization.yml
I'm trying to build my kustomization file using helm like below:
project/helm-k8s/templates/base/$ kubectl kustomize build . --enable-helm -> dummy.yml
I get an error message like this:
project/helm-k8s/templates/base$ kubectl kustomize . --enable-helm
error: accumulating resources: accumulation err='accumulating resources from 'project-namespace.yml': missing metadata.name in object {{v1 Namespace} {{ } map[name:] map[]}}': must build at directory: '/home/my-user/project/helm-k8s/templates/base/project-namespace.yml': file is not directory
Is it not possible for kustomize to use the values.yml which is located directly under the helm-k8s folder and create the final manifest for my cluster? What am I doing wrong here?
EDIT: Here is what my kustomization.yml looks like:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
metadata:
  name: open-electrons-monitoring-kustomization
resources:
  # 0. Get the namespaces first
  - project-namespace.yml
  # 1. Set up monitoring services (prometheus)
  #- monitoring/prometheus/prometheus-roles.yml
  - prometheus/prometheus-config.yml
  - prometheus/prometheus-roles.yml
  - prometheus/prometheus-deployment.yml
  - prometheus/prometheus-service.yml
  # 2. Set up monitoring services (grafana)
  - grafana/grafana-datasource-config.yml
  - grafana/grafana-deployment.yml
  - grafana/grafana-service.yml

I think you may have misunderstood the use of the --enable-helm parameter. It does not allow kustomize to perform helm-style templating on files, so when you write:
apiVersion: v1
kind: Namespace
metadata:
  name: {{ .Values.app.namespace }}
  labels:
    name: {{ .Values.app.namespace }}
That doesn't do anything useful. It just generates invalid YAML output.
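If you want this directory to build with plain Kustomize, the namespace manifest needs a literal name, since plain Kustomize has no values.yml to pull from. A minimal sketch; the name monitoring is only an assumption:
# project-namespace.yml (sketch)
apiVersion: v1
kind: Namespace
metadata:
  name: monitoring   # assumption: replace with the namespace you actually want
  labels:
    name: monitoring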
The --enable-helm option allows you to explode Helm charts using Kustomize (see the documentation for details); for example, it allows you to process a kustomization.yaml file like this:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
helmCharts:
  - name: traefik
    repo: https://helm.traefik.io/traefik
    includeCRDs: true
    releaseName: example
    version: 20.8.0
    valuesInline:
      deployment:
        replicas: 3
      logs:
        access:
          enabled: true
Running kubectl kustomize --enable-helm will cause kustomize to fetch the helm chart and run helm template on it, producing YAML manifests on stdout.
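If the goal is to render your own local helm-k8s chart through Kustomize, the same mechanism should work with a local chart directory via helmGlobals. A minimal sketch, assuming the kustomization sits in the project directory next to the chart (paths and the release name are assumptions; note that Helm itself expects the chart files to be named Chart.yaml and values.yaml, not .yml):
# project/kustomization.yaml (sketch)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
helmGlobals:
  chartHome: .              # directory that contains the helm-k8s chart folder (assumption)
helmCharts:
  - name: helm-k8s          # local chart folder under chartHome
    releaseName: monitoring # assumed release name
    # valuesFile: helm-k8s/values.yaml   # optional; defaults to the chart's own values
Running kubectl kustomize . --enable-helm (or kustomize build . --enable-helm) from that directory would then run helm template on the local chart and emit the rendered manifests.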

Related

How do I set helm values (not files) in ArgoCD Application spec

I looked all over the ArgoCD docs for this but somehow I cannot seem to find an answer. I have an application spec like so:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp
  namespace: argocd
spec:
  destination:
    namespace: default
    server: https://kubernetes.default.svc
  project: default
  source:
    helm:
      valueFiles:
        - my-values.yaml
    path: .
    repoURL: ssh://git@blah.git
    targetRevision: HEAD
However, I also need to specify a particular helm value (like you'd do with --set in the helm command). I see in the ArgoCD web UI that it has a spot for Values, but I have tried every combination of entries I can think of (somekey=somevalue, somekey:somevalue, somekey,somevalue). I also tried editing the manifest directly, but I still get similar errors trying to do so.
The error is long nonsense that ends with error unmarshaling JSON: while decoding JSON: json: cannot unmarshal string into Go value of type map[string]interface {}
What is the correct syntax to set a single value, either through the web UI or the manifest file?
You would use parameters via spec.source.helm.parameters, something like:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: my-project
  source:
    repoURL: https://charts.my-company.com
    targetRevision: "1234"
    chart: my-chart
    helm:
      parameters:
        - name: my.helm.key
          value: some-val
  destination:
    name: k8s-dev
    namespace: my-ns
Sample from Argo Docs - https://argo-cd.readthedocs.io/en/stable/user-guide/helm/#build-environment
To override just a few arbitrary parameters in the values, you can indeed use parameters: as the equivalent of Helm's --set option, or fileParameters: instead of --set-file:
...
helm:
  # Extra parameters to set (same as setting through values.yaml, but these take precedence)
  parameters:
    - name: "nginx-ingress.controller.service.annotations.external-dns\\.alpha\\.kubernetes\\.io/hostname"
      value: mydomain.example.com
    - name: "ingress.annotations.kubernetes\\.io/tls-acme"
      value: "true"
      forceString: true # ensures that value is treated as a string
  # Use the contents of files as parameters (uses Helm's --set-file)
  fileParameters:
    - name: config
      path: files/config.json
But to answer your original question: for the "Values" option in the GUI, you pass a literal YAML block in the manifest, like:
helm:
  # Helm values files for overriding values in the helm chart
  # The path is relative to the spec.source.path directory defined above
  valueFiles:
    - values-prod.yaml
  # Values file as block file
  values: |
    ingress:
      enabled: true
      path: /
      hosts:
        - mydomain.example.com
      annotations:
        kubernetes.io/ingress.class: nginx
        kubernetes.io/tls-acme: "true"
      labels: {}
Check ArgoCD sample application for more details.

Templates and Values in different repos via ArgoCD

I'm looking for insights for the following situation...
- I have one ArgoCD application pointing to a Git repo (A), where there's a values.yaml;
- I would like to use the Helm templates stored in a different repo (B).
Any suggestions/alternatives on how to make this work?
I think helm dependency can help solve your problem.
In the Chart.yaml of repo (A), declare the dependency (the chart from repo B):
# Chart.yaml
dependencies:
  - name: chartB
    version: "0.0.1"
    repository: "https://link_to_chart_B"
Link references:
https://github.com/argoproj/argocd-example-apps/tree/master/helm-dependency
P/S: You need to add the chart repository to ArgoCD.
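With the dependency in place, repo (A)'s values.yaml can override the subchart's values by nesting them under the dependency's name. A small sketch; the key shown is only an example, not something chartB necessarily exposes:
# values.yaml in repo (A)
chartB:
  replicaCount: 2   # example override passed down to the chartB dependency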
The way we solved it is by writing a very simple Helm plugin and passing it the URL of the Helm chart location (ChartMuseum in our case) as an env variable:
server:
  name: server
  config:
    configManagementPlugins: |
      - name: helm-yotpo
        generate:
          command: ["sh", "-c"]
          args: ["helm template --version ${HELM_CHART_VERSION} --repo ${HELM_REPO_URL} --namespace ${NAMESPACE} $HELM_CHART_NAME --name-template=${HELM_RELEASE_NAME} -f $(pwd)/${HELM_VALUES_FILE} "]
You can run the helm command with the --repo flag, and in the ArgoCD Application YAML you call the new plugin:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: application-test
  namespace: infra
spec:
  destination:
    namespace: infra
    server: https://kubernetes.default.svc
  project: infra
  source:
    path: "helm-values-files/telegraf"
    repoURL: https://github.com/YotpoLtd/argocd-example.git
    targetRevision: HEAD
    plugin:
      name: helm-yotpo
      env:
        - name: HELM_RELEASE_NAME
          value: "telegraf-test"
        - name: HELM_CHART_VERSION
          value: "1.8.18"
        - name: NAMESPACE
          value: "infra"
        - name: HELM_REPO_URL
          value: "https://helm.influxdata.com/"
        - name: HELM_CHART_NAME
          value: "telegraf"
        - name: HELM_VALUES_FILE
          value: "telegraf.yaml"
You can read more about it in the following blog post.

How to replace helm values.yaml by using the kustomize configMapGenerator generated name?

I use kustomize to generate various ConfigMaps, as in the following example:
# kustomization.yaml
configMapGenerator:
  - name: my-config-path # <-- My original name.
    files:
      - file1.txt
      - file2.txt
      - ...
      - fileN.txt
The output looks like the following example:
# my-configmap.yaml
apiVersion: v1
data:
  file1.txt: |
    ...
  file2.txt: |
    ...
  fileN.txt: |
    ...
kind: ConfigMap
metadata:
  name: my-config-path-mk89db6928 # <-- There is a hashed value appended at the end.
On the other hand, I have a third-party Helm chart which requires overriding the values.yaml as follows:
# third-party-chart-values.yaml
customArguments:
  - --config.dir=/config
...
volumes:
  - mountPath: /config
    name: my-config-path # <--- Here, I would like to replace with `my-config-path-mk89db6928`.
    type: configMap
Is there any way to replace the my-config-path in third-party-chart-values.yaml with the generated value my-config-path-mk89db6928 from the file named my-configmap.yaml? (Note: if the ConfigMap changes, the hash, e.g. mk89db6928, changes too.)
Do you want to keep the hash? Because you could prevent it with:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
generatorOptions:
  disableNameSuffixHash: true
Then you'd have a predictable name. Although that would cause a collision if you installed the same helm chart more than once in the same namespace.
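With the suffix hash disabled, the generated ConfigMap keeps its literal name, so the third-party values file can keep referencing my-config-path unchanged. A sketch combining it with the generator from the question:
# kustomization.yaml (sketch)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
generatorOptions:
  disableNameSuffixHash: true
configMapGenerator:
  - name: my-config-path   # emitted as exactly "my-config-path", no hash suffix
    files:
      - file1.txt
      - file2.txt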

Modifying Environmental Variables of Helm Chart Dependencies without Forking

I am creating a Helm chart that depends on several Helm charts that are not maintained by me, and I would like to make some configuration changes to these subcharts. The configurations are not too complex; I just want to add several environment variables to each of the containers. However, the env fields of the containers are not already templated out in the Helm charts. I would like to avoid forking these charts and maintaining them myself, since this is such a trivial change.
Is there an easy way to provide environmental variables to several containers in Kubernetes in a flexible way, either through Helm or another tool?
I am currently looking into using Kustomize to do the last mile changes after Helm fills out the templates, but I am getting hung up on setting up Kustomize patches. In my scenario, I have the environmental variables being filled out by Helm in a ConfigMap. I would like to add an envFrom field to read the ConfigMap and add the given environment variables to the containers. I want to add the envFrom to the resource YAML files through Kustomize. The snag I am hitting is that Kustomize patch.yaml files are resource specific. Below is an example of my patch.yaml and my kustomization.yaml respectively.
patch.yaml:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: does-not-matter
spec:
  template:
    spec:
      containers:
        - name: server
          envFrom:
            - configMapRef:
                name: my-env
kustomization.yaml:
resources:
  - all.yaml
patches:
  - path: patch.yaml
    target:
      kind: "StatefulSet"
      name: "*"
To perform the Kustomization, I run:
helm install perceptor ../ --post-renderer ./kustomize
Which basically just fills out the Helm templates and passes them to Kustomize to do the last-mile patches.
In the patch, I have to specify the name of the container ("server") to properly inject my ConfigMap. What I would really like to do is provide those environment variables to all containers in a given deployment (as defined by the target constraints in kustomization.yaml), regardless of their name. From what I have seen, it almost looks like I will have to write a separate patch for each container, which is suboptimal. I just started working with Kubernetes, so it is possible that I am missing something that would easily solve this problem.
I understand that you don't want to break the open/closed principle of the subchart your umbrella chart depends on by forking it, but you still have the right to propose changes to it to make it more extensible and flexible. Yes, I would suggest you submit a Pull Request / request a new feature to the Helm chart project in question.
The following code snippet won't break current functionality, and gives users a chance to introduce custom environment variables based on existing ConfigMap(s) in the desired resource's spec.
helm_template.yaml
#helm template
...
env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        apiVersion: v1
        fieldPath: metadata.name
  - name: POD_NAMESPACE
    valueFrom:
      fieldRef:
        apiVersion: v1
        fieldPath: metadata.namespace
{{- if .Values.envConfigs }}
{{- range $key, $config := $.Values.envConfigs }}
  - name: {{ $key }}
    valueFrom:
      configMapKeyRef:
        name: {{ $config }}
        key: {{ $key | quote }}
{{- end }}
{{- end }}
values.yaml
#
# values.yaml
#
envConfigs:
Q3_CFG_MAP: Q3DM17
Q3_CFG_TIMEOUT: 30
# if empty use:
# envConfigs: {}
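For the Q3_CFG_MAP entry above, the range loop would render roughly the following fragment (a sketch: envConfigs maps an env var name to the ConfigMap that holds a key of the same name):
env:
  - name: Q3_CFG_MAP
    valueFrom:
      configMapKeyRef:
        name: Q3DM17
        key: "Q3_CFG_MAP"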

How to pass gitlab ci/cd variables to kubernetes(AKS) deployment.yaml

I have a Node.js (Express) project checked into GitLab, and this is running in Kubernetes. I know we can set env variables in Kubernetes (on Azure, AKS) in the deployment.yaml file.
How can I pass GitLab CI/CD env variables to Kubernetes (AKS), i.e. into the deployment.yaml file?
You can develop your own Helm charts. This will pay off in the long run.
Another approach: an easy and versatile way is to put ${MY_VARIABLE} placeholders into the deployment.yaml file. Then, during the pipeline run, in the deployment job, use the envsubst command to substitute the vars with their respective values and deploy the file.
Example deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-${MY_VARIABLE}
  labels:
    app: nginx
spec:
  replicas: 3
(...)
Example job:
(...)
deploy:
  stage: deploy
  script:
    - envsubst < deployment.yaml > deployment-${CI_JOB_NAME}.yaml
    - kubectl apply -f deployment-${CI_JOB_NAME}.yaml
I'm going to give you an easy solution that may or may not be "the solution".
To do what you want, you could simply add your GitLab env variables to a Secret during the CD stage, before launching your deployment. This will allow you to use the secret's values as env variables inside the deployment.
If you do it like this, you will need to think about how to delete/recreate the secret when you want to update it, to keep the step idempotent.
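A minimal sketch of that approach, assuming a GitLab CI/CD variable MY_VARIABLE and a Secret called app-env (both names are made up here); recreating the secret via a client-side dry run piped into kubectl apply keeps the step idempotent:
# .gitlab-ci.yml deploy job (sketch)
deploy:
  stage: deploy
  script:
    # regenerate the secret from the CI variable on every deploy
    - >
      kubectl create secret generic app-env
      --from-literal=MY_VARIABLE="${MY_VARIABLE}"
      --dry-run=client -o yaml | kubectl apply -f -
    - kubectl apply -f deployment.yaml
The deployment can then consume it on the container via envFrom with a secretRef pointing at app-env.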
Another solution would be to create the thing you are deploying as a Helm Chart. This would allow you to have specific variables (called values) that you can use in the templating and override at install / upgrade time.
There are many articles around getting setup with something like this.
Here is one specifically around the context of CI/CD: https://medium.com/@gajus/the-missing-ci-cd-kubernetes-component-helm-package-manager-1fe002aac680.
Another specifically around GitLab: https://medium.com/@yanick.witschi/automated-kubernetes-deployments-with-gitlab-helm-and-traefik-4e54bec47dcf
For future readers: another way is to use a template file and generate deployment.yaml from the template using envsubst.
Template file:
# template/deployment.tmpl
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: strapi-deployment
  namespace: strapi
  labels:
    app: strapi
# deployment specifications
spec:
  replicas: 1
  selector:
    matchLabels:
      app: strapi
  serviceName: strapi
  # pod specifications
  template:
    metadata:
      labels:
        app: strapi
    # pod blueprints
    spec:
      containers:
        - name: strapi-container
          image: registry.gitlab.com/repo-name/image:${IMAGE_TAG}
          imagePullPolicy: Always
      imagePullSecrets:
        - name: gitlab-registry-secret
deploy stage in .gitlab-ci.yml
(...)
deploy:
  stage: deploy
  script:
    # deploy resources in k8s cluster
    - envsubst < strapi-deployment.tmpl > strapi-deployment.yaml
    - kubectl apply -f strapi-deployment.yaml
As defined in image: registry.gitlab.com/repo-name/image:${IMAGE_TAG}, IMAGE_TAG is an environment variable defined in GitLab. envsubst goes through strapi-deployment.tmpl, substitutes any variables defined there, and generates the strapi-deployment.yaml file.
sed command helped me with this:
In Deployment.yaml use some placeholder, like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  # Other configs bla-bla-bla
spec:
  containers:
    - name: app
      image: my.registry./myapp:<VERSION>
And in .gitlab-ci.yml use sed:
deploy:
  stage: deploy
  image: kubectl-img
  script:
    # - kubectl bla-bla-bla whatever you want to do before the apply command
    - sed -i "s/<VERSION>/${CI_COMMIT_SHORT_SHA}/g" Deployment.yaml
    - kubectl apply -f Deployment.yaml
So the resulting Deployment.yaml will contain the CI_COMMIT_SHORT_SHA value instead of <VERSION>.
Source of the solution