adding multiple ips/domains to filebeat.yml output.elasticsearch via helm - kubernetes-helm

If you have multiple Elasticsearch/Logstash nodes that you want to point output.elasticsearch.hosts in filebeat.yml to from the Helm chart, you can do it like this:
values.yaml
note: define hosts as a single string (with the inner quotes embedded), not an array
logstash:
  hosts: 192.168.1.2:5444', '192.168.2.100:5544
filebeat-deployment.yml
env:
  - name: ELASTICSEARCH_HOSTS
    {{- range $key, $val := .Values.logstash }}
    value: {{ . | quote }}
    {{- end }}
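For this to work, the filebeat.yml shipped with the chart has to consume that variable inside a flow-style list. The chart's config template is not shown in the post, but the relevant line presumably looks something like this (my assumption, not taken from the original):
output.elasticsearch:
  hosts: ['${ELASTICSEARCH_HOSTS}']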
The result will be:
$ kubectl exec filebeat-pod -n filebeat -- cat /etc/filebeat/filebeat.yml
setup.template.overwrite: true
setup.ilm.enabled: false
output.elasticsearch:
  hosts: ['192.168.1.2:5444', '192.168.2.100:5544']
  #username:
  #password:
  #ssl.verification_mode:
  #ssl.certificate_authorities:
  #ssl.certificate:
  #ssl.key:
filebeat pod logs
$ kubectl logs filebeat-pod -n filebeat
2022-10-04T09:54:04.539Z INFO eslegclient/connection.go:99 elasticsearch url: http://192.168.1.2:5444
2022-10-04T09:54:04.539Z INFO eslegclient/connection.go:99 elasticsearch url: http://192.168.2.100:5544
NOTE: if you have other solutions for adding multiple IPs/domains to the container ENV via the Helm chart, just reply to this post.
Hope you find this post helpful.

Related

override output.elasticsearch.hosts filebeat.yml on wazuh-manager v.4.3.8

I'm trying to override the filebeat.yml Kubernetes deployment configuration for "output.elasticsearch: hosts", but it doesn't work. I'm using Filebeat 7.10.2.
I used several env variables:
env:
  - name: ELASTICSEARCH_URL
    value: 'http://elasticsearch:9200'
env:
  - name: ELASTICSEARCH_HOSTS
    value: 'http://elasticsearch:9200'
It doesn't override anything at all.
The deployment is the wazuh-manager from the Wazuh application v.4.3.8. If you have more than one Logstash/Elasticsearch host to add to your filebeat.yml configuration, the only solution that I found is to add the config below, and it worked.
values.yml
elasticsearch:
  hosts: 192.168.22.33:5544', '192.168.10.2:5544
deployment.yml ### it seems that the wazuh-manager v.4.3.8 image doesn't use ELASTICSEARCH_URL or ELASTICSEARCH_HOSTS as a variable inside the Docker image; it uses INDEXER_URL.
- name: INDEXER_URL
  {{- range $key, $val := .Values.elasticsearch }}
  value: {{ . | quote }}
  {{- end }}
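With a single hosts key under .Values.elasticsearch, the rendered container env would come out roughly like this (my reconstruction, not shown in the original post):
- name: INDEXER_URL
  value: "192.168.22.33:5544', '192.168.10.2:5544"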
filebeat.yml results:
output.elasticsearch:
  hosts: ['192.168.10.236:5044', '192.168.10.2:5544']

DevOps CI/CD pipelines broken after Kubernetes upgrade to v1.22

Present state
In v1.22 Kubernetes dropped support for several v1beta1 APIs. That made our release pipeline crash, and we are not sure how to fix it.
We use build pipelines to build .NET Core applications and deploy them to the Azure Container Registry. Then there are release pipelines that use helm to upgrade them in the cluster from that ACR. This is how it looks exactly.
Build pipeline:
.NET download, restore, build, test, publish
Docker task v0: Build task
Docker task v0: Push to the ACR task
Artifact publish to Azure Pipelines
Release pipeline:
Helm tool installer: Install helm v3.2.4 (check for latest version of Helm unchecked) and install newest Kubectl (Check for latest version checked)
Bash task:
az acr login --name <acrname>
az acr helm repo add --name <acrname>
Helm upgrade task:
chart name <acrname>/<chartname>
version empty
release name
After the upgrade to Kubernetes v1.22 we are getting the following error in Release step 3.:
Error: UPGRADE FAILED: unable to recognize "": no matches for kind "Ingress" in version "extensions/v1beta1".
What I've already tried
The error is pretty obvious, and the Helm compatibility table states clearly that I need to upgrade the release pipelines to use at least Helm v3.7.x. Unfortunately, in that version the OCI functionality (more on this shortly) is still in the experimental phase, so at least v3.8.x has to be used.
Bumping helm version to v3.8.0
That makes release step 3. report:
Error: looks like "https://<acrname>.azurecr.io/helm/v1/repo" is not a valid chart repository or cannot be reached: error unmarshaling JSON: while decoding JSON: json: unknown field "acrMetadata"
After reading the Microsoft tutorial on how to live with Helm and ACR, I learned that the az acr helm commands use Helm v2 and are therefore deprecated, and that OCI artifacts should be used instead.
Switching to OCI part 1
After reading that I changed release step 2. to a one-liner:
helm registry login <acrname>.azurecr.io --username <username> --password <password>
That now gives me Login Succeeded in release step 2. but release step 3. fails with
Error: failed to download "<acrname>/<reponame>".
Switching to OCI part 2
I thought that the Helm task was somehow incompatible with the new approach, so I removed release step 3. and decided to do it from the command line in step 2. So now step 2. looks like this:
helm registry login <acrname>.azurecr.io --username <username> --password <password>
helm upgrade --install --wait -n <namespace> <deploymentName> oci://<acrname>.azurecr.io/<reponame> --version latest --values ./values.yaml
Unfortunately, that still gives me:
Error: failed to download "oci://<acrname>.azurecr.io/<reponame>" at version "latest"
Helm pull, export, upgrade instead of just upgrade
The next try was to split the helm upgrade into separate helm pull, helm export and helm upgrade steps, but
helm pull oci://<acrname>.azurecr.io/<reponame> --version latest
gives me:
Error: manifest does not contain minimum number of descriptors (2), descriptors found: 0
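Worth noting: Helm resolves OCI chart references by the chart's semver version rather than a floating "latest" tag, so pulling with a concrete version (as the working command further down eventually does with --version 2) would look roughly like this (version number illustrative):
helm pull oci://<acrname>.azurecr.io/<reponame> --version 1.2.3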
Changing docker build and docker push tasks to v2
I also tried changing the docker tasks in the build pipelines to v2. But that didn't change anything at all.
Have you tried changing the Ingress object's apiVersion to networking.k8s.io/v1 (or networking.k8s.io/v1beta1 on clusters older than 1.22)? Support for Ingress in the extensions/v1beta1 API version is dropped in k8s 1.22.
Our ingress.yaml file in our helm chart looks something like this to support multiple k8s versions. You can ignore the AWS-specific annotations since you're using Azure. Our chart has a global value of ingress.enablePathType because at the time of writing the yaml file, AWS Load Balancer did not support pathType and so we set the value to false.
{{- if .Values.global.ingress.enabled -}}
{{- $useV1Ingress := and (.Capabilities.APIVersions.Has "networking.k8s.io/v1/Ingress") .Values.global.ingress.enablePathType -}}
{{- if $useV1Ingress -}}
apiVersion: networking.k8s.io/v1
{{- else if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
  name: example-ingress
  labels:
    {{- include "my-chart.labels" . | nindent 4 }}
  annotations:
    {{- if .Values.global.ingress.group.enabled }}
    alb.ingress.kubernetes.io/group.name: {{ required "ingress.group.name is required when ingress.group.enabled is true" .Values.global.ingress.group.name }}
    {{- end }}
    {{- with .Values.global.ingress.annotations }}
    {{- toYaml . | nindent 4 }}
    {{- end }}
    # Add these tags to the AWS Application Load Balancer
    alb.ingress.kubernetes.io/tags: k8s.namespace/{{ .Release.Namespace }}={{ .Release.Namespace }}
spec:
  rules:
    - host: {{ include "my-chart.applicationOneServerUrl" . | quote }}
      http:
        paths:
          {{- if $useV1Ingress }}
          - path: /
            pathType: Prefix
            backend:
              service:
                name: {{ $applicationOneServiceName }}
                port:
                  name: http-grails
          {{- else }}
          - path: /*
            backend:
              serviceName: {{ $applicationOneServiceName }}
              servicePort: http-grails
          {{- end }}
    - host: {{ include "my-chart.applicationTwoServerUrl" . | quote }}
      http:
        paths:
          {{- if $useV1Ingress }}
          - path: /
            pathType: Prefix
            backend:
              service:
                name: {{ .Values.global.applicationTwo.serviceName }}
                port:
                  name: http-grails
          {{- else }}
          - path: /*
            backend:
              serviceName: {{ .Values.global.applicationTwo.serviceName }}
              servicePort: http-grails
          {{- end }}
{{- end }}
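For reference, the global values this template reads could be declared roughly like this (my reconstruction from the keys referenced above, values illustrative and not taken from the original chart):
global:
  ingress:
    enabled: true
    enablePathType: false   # gates the networking.k8s.io/v1 schema; false while the ALB controller lacked pathType support
    group:
      enabled: false
      name: ""              # required when group.enabled is true
    annotations: {}
  applicationTwo:
    serviceName: application-two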
Just to make the picture complete: the change to the ingress YAML in the chart definition mentioned by #wubbalubba wasn't the only thing I had to do to fix our pipelines:
So first, obviously, change the API version to networking.k8s.io/v1 in the ingress YAML file inside the chart definition and increment the chart version. Then pack it again and push it to the ACR:
helm package .
helm push .\generated-new-chart.tgz oci://<acrname>.azurecr.io/
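The chart version bump itself is just the version field in Chart.yaml; for the generic chart referenced later it might look something like this (values illustrative):
apiVersion: v2
name: services-generic-chart
version: 2.0.0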
Next thing, learned from this guide, was to update, or rather in my case simply remove, all the secrets and configmaps connected with my services:
kubectl delete secret -l owner=helm,status=deployed,name=<release_name> --namespace <release_namespace>
kubectl delete configmap -l owner=helm,status=deployed,name=<release_name> --namespace <release_namespace>
Lastly, I removed the Helm upgrade task from the release. Instead, a shell script took over its responsibility:
helm registry login $(ContainerRegistryUrl) --username $(ContainerRegistryUsername) --password $(ContainerRegistryPassword)
az aks get-credentials --resource-group $(Kubernetes__ResourceGroup) --name $(Kubernetes__Cluster)
helm upgrade --install --wait -n $(NamespaceName) $(ServiceName) oci://$(ContainerRegistryUrl)/services-generic-chart --version 2 -f ./values.yaml
Only then was I able to redeploy everything successfully.

How to provide Vault secrets for a Flink application custom resource in Kubernetes

I would like to provide secrets from a HashiCorp Vault to the Apache Flink jobs running in a Kubernetes cluster.
These credentials will be used to access a state backend for checkpointing and savepoints. The state backend could be, for example, MinIO S3 storage.
Could someone please provide a working example for the FlinkApplication operator, given the following setup?
Vault secrets for username and password (or an access key):
vault kv put vvp/storage/config username=user password=secret
vault kv put vvp/storage/config access-key=minio secret-key=minio123
k8s manifest of the Flink application custom resource:
apiVersion: flink.k8s.io/v1beta1
kind: FlinkApplication
metadata:
  name: processor
  namespace: default
spec:
  image: stream-processor:0.1.0
  deleteMode: None
  template:
    metadata:
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/role: vvp-flink-job
        vault.hashicorp.com/agent-inject-secret-storage-config.txt: vvp/data/storage/config
  flinkConfig:
    taskmanager.memory.flink.size: 1024mb
    taskmanager.heap.size: 200
    taskmanager.network.memory.fraction: 0.1
    taskmanager.network.memory.min: 10mb
    web.upload.dir: /opt/flink
  jobManagerConfig:
    resources:
      requests:
        memory: "1280Mi"
        cpu: "0.1"
    replicas: 1
  taskManagerConfig:
    taskSlots: 2
    resources:
      requests:
        memory: "1280Mi"
        cpu: "0.1"
  flinkVersion: "1.14.2"
  jarName: "stream-processor-1.0-SNAPSHOT.jar"
  parallelism: 3
  entryClass: "org.StreamingJob"
  programArgs: >
    --name value
Dockerfile of the Flink application:
FROM maven:3.8.4-jdk-11 AS build
ARG revision
WORKDIR /
COPY src /src
COPY pom.xml /
RUN mvn -B -Drevision=${revision} package
# runtime
FROM flink:1.14.2-scala_2.12-java11
ENV FLINK_HOME=/opt/flink
ENTRYPOINT ["/docker-entrypoint.sh"]
EXPOSE 6123 8081
CMD ["help"]
The flink-config.yaml contains the following examples:
# state.backend: filesystem
# Directory for checkpoints filesystem, when using any of the default bundled
# state backends.
#
# state.checkpoints.dir: hdfs://namenode-host:port/flink-checkpoints
# Default target directory for savepoints, optional.
#
# state.savepoints.dir: hdfs://namenode-host:port/flink-savepoints
The end goal is to replace the hardcoded secrets or set them somehow from the vault:
state.backend: filesystem
s3.endpoint: http://minio:9000
s3.path.style.access: true
s3.access-key: minio
s3.secret-key: minio123
Thank you.
Once you have the Vault variables set, you can add the annotations to the deployment to get the variables out of Vault and into the deployment:
annotations:
  vault.hashicorp.com/agent-image: <Agent image>
  vault.hashicorp.com/agent-inject: "true"
  vault.hashicorp.com/agent-inject-secret-secrets: kv/<Path-of-secret>
  vault.hashicorp.com/agent-inject-template-secrets: |2
    {{- with secret "kv/<Path-of-secret>" -}}
    #!/bin/sh
    set -e
    {{- range $key, $value := .Data.data }}
    export {{ $key }}={{ $value }}
    {{- end }}
    exec "$@"
    {{- end }}
  vault.hashicorp.com/auth-path: auth/<K8s cluster for auth>
  vault.hashicorp.com/role: app
This will create the file inside your pod.
When your application runs, it should execute this file first so that the environment variables get injected into the process.
So the Vault annotation will still create one file, the same as the plain .txt you are getting now, but instead we template it like this:
{{- range $key, $value := .Data.data }}
export {{ $key }}={{ $value }}
{{- end }}
This keeps the key/value pairs but injects export in front of each one. The file is now a kind of shell script, and once it gets executed on startup of the application it injects the variables at the OS level.
Keep this file in the repo and add it to the Docker image as ./bin/runapp:
#!/bin/bash
if [ -f '/vault/secrets/secrets' ]; then
source '/vault/secrets/secrets'
fi
node <path-inside-docker>/index.js # Sorry, don't know Scala or Java
package.json
"start": "./bin/runapp",
Dockerfile
ADD ./bin/runapp ./
EXPOSE 4444
CMD ["npm", "start"]
The Vault-injected file inside the pod, at /vault/secrets/secrets or your configured path, will look something like this:
#!/bin/sh
set -e
export development=false
export production=true
exec "$@"
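Translated to the Flink image from the question, the same pattern would mean sourcing the injected file before handing control to Flink's stock entrypoint, for example via a small wrapper (a sketch; the wrapper name and the /vault/secrets/storage-config.txt path follow the question's annotation, and it assumes the agent template is trimmed down to just the export lines):
#!/bin/bash
# docker-entrypoint-with-vault.sh -- hypothetical wrapper, not part of the official Flink image.
# Source the Vault-injected export statements (if present), then delegate
# to the stock Flink entrypoint with the original arguments.
if [ -f /vault/secrets/storage-config.txt ]; then
  . /vault/secrets/storage-config.txt
fi
exec /docker-entrypoint.sh "$@"
The Dockerfile would then COPY this wrapper into the image and set it as the ENTRYPOINT in place of calling /docker-entrypoint.sh directly.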

Not able to render the helm template without quotes

I have used almost all possible ways to render the helm template. But now I am out of ideas and seeking help:
values.yaml:
rollout:
  namespace: xyz
  project: default
  baseDomain: "stage.danger.zone"
  clusterDomain: cluster.local
manifest.yaml
apps:
  certmanager:
    source:
      repoURL: 'https://artifactory.intern.example.io/artifactory/helm'
      targetRevision: 0.0.6
      chart: abc
      helm:
        releaseName: abc
        values:
          global:
            imagePullSecrets:
              - name: artifactory
            baseDomain: "{{ .Values.rollout.baseDomain }}"
When I try to render the template using the expression below in my main.yaml file, which produces the final result:
values: {{- tpl (toYaml $appValue.values | indent 6) $ }}
Expected result:
baseDomain: stage.danger.zone (without quotes)
What I am getting is:
baseDomain: 'stage.danger.zone'
If I try to remove the double quotes from: baseDomain: "{{ .Values.rollout.baseDomain }}", I get the following error:
[debug] error converting YAML to JSON: yaml: invalid map key: map[interface {}]interface {}{".Values.baseDomain":interface {}(nil)}
Any help or ideas to achieve the same?
This is expected behaviour from YAML.
One dirty, bad hack would be:
values: {{- tpl (toYaml $appValue.values | fromYaml | toYaml | indent 6) $ }}
and then you will not see the single quotes.
However, this is not a problem at all even if you have single quotes around your value. You can include this variable, for example, like this:
hosts:
  - host: some-gateway.{{ .Values.rollout.baseDomain }}
    serviceName: gateway
    servicePort: 8080
    path: /
Then it will show you your variable value without ' single quotes.
Example rendered output:
hosts:
  - host: some-gateway.stage.danger.zone
    path: /
    serviceName: gateway
    servicePort: 8080

How to pull environment variables with Helm charts

I have my deployment.yaml file within the templates directory of Helm charts with several environment variables for the container I will be running using Helm.
Now I want to be able to pull the environment variables locally from whatever machine Helm is run on, so I can hide the secrets that way.
How do I pass this in and have Helm grab the environment variables locally when I use Helm to run the application?
Here is some part of my deployment.yaml file
...
...
spec:
  restartPolicy: Always
  containers:
    - name: sample-app
      image: "sample-app:latest"
      imagePullPolicy: Always
      env:
        - name: "USERNAME"
          value: "app-username"
        - name: "PASSWORD"
          value: "28sin47dsk9ik"
...
...
How can I pull the value of USERNAME and PASSWORD from local environment variables when I run helm?
Is this possible? If yes, then how do I do this?
You can export the variable and use it while running helm install.
Before that, you have to modify your chart so that the value can be set during installation.
Skip this part if you already know how to set up template fields.
As you don't want to expose the data, it's better to have it saved as a Secret in Kubernetes.
First of all, add these two lines to your values file, so that the two values can be set from outside.
username: root
password: password
Now, add a secret.yaml file inside your templates folder and copy this code snippet into that file.
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Release.Name }}-auth
data:
  password: {{ .Values.password | b64enc }}
  username: {{ .Values.username | b64enc }}
Now tweak your deployment YAML template and make changes in the env section, like this:
...
...
spec:
  restartPolicy: Always
  containers:
    - name: sample-app
      image: "sample-app:latest"
      imagePullPolicy: Always
      env:
        - name: "USERNAME"
          valueFrom:
            secretKeyRef:
              key: username
              name: {{ .Release.Name }}-auth
        - name: "PASSWORD"
          valueFrom:
            secretKeyRef:
              key: password
              name: {{ .Release.Name }}-auth
...
...
If you have modified your template correctly for the --set flag,
you can set this using an environment variable.
$ export USERNAME=root-user
Now use this variable while running helm install,
$ helm install --set username=$USERNAME ./mychart
If you run this helm install in dry-run mode, you can verify the changes,
$ helm install --dry-run --set username=$USERNAME --debug ./mychart
[debug] Created tunnel using local port: '44937'
[debug] SERVER: "127.0.0.1:44937"
[debug] Original chart version: ""
[debug] CHART PATH: /home/maruf/go/src/github.com/the-redback/kubernetes-yaml-drafts/helm-charts/mychart
NAME: irreverant-meerkat
REVISION: 1
RELEASED: Fri Apr 20 03:29:11 2018
CHART: mychart-0.1.0
USER-SUPPLIED VALUES:
username: root-user
COMPUTED VALUES:
password: password
username: root-user
HOOKS:
MANIFEST:
---
# Source: mychart/templates/secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: irreverant-meerkat-auth
data:
  password: password
  username: root-user
---
# Source: mychart/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: irreverant-meerkat
  labels:
    app: irreverant-meerkat
spec:
  replicas: 1
  template:
    metadata:
      name: irreverant-meerkat
      labels:
        app: irreverant-meerkat
    spec:
      containers:
        - name: irreverant-meerkat
          image: alpine
          env:
            - name: "USERNAME"
              valueFrom:
                secretKeyRef:
                  key: username
                  name: irreverant-meerkat-auth
            - name: "PASSWORD"
              valueFrom:
                secretKeyRef:
                  key: password
                  name: irreverant-meerkat-auth
          imagePullPolicy: IfNotPresent
      restartPolicy: Always
  selector:
    matchLabels:
      app: irreverant-meerkat
You can see that the data of username in secret has changed to root-user.
I have added this example to a GitHub repo.
There is also some discussion in the kubernetes/helm repo regarding this; you can see this issue to learn about all the other ways to use environment variables.
You can pass env key/value pairs from the values YAML by setting up the deployment YAML as below:
spec:
  restartPolicy: Always
  containers:
    - name: sample-app
      image: "sample-app:latest"
      imagePullPolicy: Always
      env:
        {{- range $name, $value := .Values.env }}
        - name: {{ $name }}
          value: {{ $value }}
        {{- end }}
in the values.yaml (env needs to be a map of key/value pairs for the range above to work):
env:
  USERNAME: ""
  PASSWORD: ""
When you install the chart, you can pass the username and password values:
helm install chart_name --name release_name --set env.USERNAME="app-username" --set env.PASSWORD="28sin47dsk9ik"
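With those flags, the rendered container env block would come out roughly like this (my rendering of the template above, not from the original answer; map keys iterate in sorted order):
env:
  - name: PASSWORD
    value: 28sin47dsk9ik
  - name: USERNAME
    value: app-username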
For those looking to use data structures instead of lists for their env variable files, this has worked for me:
spec:
  containers:
    - name: {{ .Chart.Name }}
      image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
      imagePullPolicy: {{ .Values.image.pullPolicy }}
      env:
        {{- range $key, $val := .Values.env }}
        - name: {{ $key }}
          value: {{ $val | quote }}
        {{- end }}
values.yaml:
env:
  FOO: "BAR"
  USERNAME: "CHANGEME"
  PASSWORD: "CHANGEME"
That way I can access specific values by name in other parts of the helm chart and pass the sensitive values via helm command line.
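For example, a sensitive value could be overridden at install time like this (hypothetical release and chart names):
helm install my-release ./mychart --set env.PASSWORD=28sin47dsk9ik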
To get away from having to set each secret manually, you can use:
export MY_SECRET=123
envsubst < values.yaml | helm install my-release . --values -
where ${MY_SECRET} is referenced in your values.yaml file like:
mychart:
  secrets:
    secret_1: ${MY_SECRET}
Helm 3.1 supports post rendering (https://helm.sh/docs/topics/advanced/#post-rendering), which passes the manifest to a script before it is actually sent to the Kubernetes API. Post rendering allows you to manipulate the manifest in multiple ways (e.g. use kustomize on top of Helm).
The simplest form of a post renderer which replaces predefined environment values could look like this:
#!/bin/sh
envsubst <&0
Note this will replace every occurrence of $<VARNAME>, which could collide with variables in the templates, such as shell scripts in liveness probes. So it is better to explicitly define the variables you want replaced: envsubst '${USERNAME} ${PASSWORD}' <&0
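A complete post renderer restricted to those two variables could then look like this (a small sketch, saved for example as my-post-renderer.sh and made executable):
#!/bin/sh
# Read the rendered manifests from stdin, substitute only the listed
# variables, and write the result back to stdout for Helm to apply.
envsubst '${USERNAME} ${PASSWORD}' <&0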
Define your env variables in the shell:
export USERNAME=john PASSWORD=my-secret
In the templates (e.g. secret.yaml) use the values defined in the values.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Release.Name }}-auth
data:
  username: {{ .Values.username }}
  password: {{ .Values.password }}
Note that you cannot apply string transformations like b64enc on the strings, as they get injected into the manifest after Helm has already processed all YAML files. Instead you can encode them in the post renderer if required.
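Since a Secret's data field expects base64-encoded values, another way around the missing b64enc step (an alternative not covered in this answer) is to use stringData, which accepts plain strings:
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Release.Name }}-auth
stringData:
  username: {{ .Values.username }}
  password: {{ .Values.password }}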
In the values.yaml use the variable placeholders:
...
username: ${USERNAME}
password: ${PASSWORD}
The parameter --post-renderer is supported in several Helm commands e.g.
helm install --dry-run --post-renderer ./my-post-renderer.sh my-chart
By using the post renderer the variables/placeholders automatically get replaced by envsubst without additional scripting.
I guess the question is how to look up an env variable inside the chart by reading the environment itself, not by passing it with --set.
For example: I have set a key "my_db_password" and want to change the value by looking at an environment variable, which is not supported.
I am not very sure on Go templates, but I guess this is disabled, as explained in the Helm documentation: "We removed two for security reasons: env and expandenv (which would have given chart authors access to Tiller's environment)." https://helm.sh/docs/developing_charts/#know-your-template-functions
I think one simple way is to just set the value directly. For example, in your values.yml, you want to pass the service name:
...
myapp:
  service:
    name: ""
...
Your service.yml just uses this value as usual:
{{ .Values.myapp.service.name }}
Then to set the value, use --set, like: --set myapp.service.name=hello
Then, for example, if you want to use the environment variable, do export before that:
#set your env variable
export MYAPP_SERVICE=hello
#pass it to helm
helm install myapp --set myapp.service.name=$MYAPP_SERVICE.
If you do debug like:
helm install myapp --set myapp.service.name=$MYAPP_SERVICE --debug --dry-run ./myapp
You can see this information at the beginning of the output, where your "hello" value was set:
USER-SUPPLIED VALUES:
myapp:
  service:
    name: hello
As an alternative to passing local environment variables, I like to store these kinds of sensitive values in a folder ignored by your VCS, and use the Helm .Files object to read them and provide the values to your templates.
In my opinion, the advantage is that it doesn't require the host that will operate the Helm chart to set any OS specific environment variable, and makes the chart self-contained whilst not exposing these values.
# In a folder not committed, e.g. <chart_base_directory>/secrets
username: app-username
password: 28sin47dsk9ik
Then in your chart templates:
# In deployment.yaml file
---
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Release.Name }}-auth
stringData:
{{ .Files.Get "<chart_base_directory>/secrets" | indent 2 }}
As a result, everything the Chart needs is accessible from within the directory where you define everything else. And instead of setting system-wide env vars, it just needs a file.
This file can be generated automatically, or copied from a committed template with dummy values. Helm will also fire an error early on install/update if this isn't defined, as opposed to creating your secret with username="" and password="" if your env vars haven't been defined, which only becomes obvious once your changes are applied to the cluster.
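A committed template with dummy values, as mentioned above, could be as simple as this (hypothetical file name, to be copied to the ignored secrets file and filled in with real values):
# <chart_base_directory>/secrets.example
username: changeme
password: changeme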