I installed Loki and Prometheus using Helm. However, I would like to reformat the log lines in one place.
I used helm show values grafana/loki-stack > loki-stack-values.yml to dump the values and ended up with the following:
loki:
  enabled: true
  isDefault: true
promtail:
  enabled: true
  config:
    lokiAddress: http://{{ .Release.Name }}:3100/loki/api/v1/push
prometheusSpec:
  additionalScrapeConfigs:
    - match:
        selector: '{name="promtail"}'
        stages:
          # The regex stage parses out a level, timestamp, and component. At the end
          # of the stage, the values for level, timestamp, and component are only
          # set internally for the pipeline. Future stages can use these values and
          # decide what to do with them.
          - regex:
              expression: '.*level=(?P<level>[a-zA-Z]+).*ts=(?P<timestamp>[T\d-:.Z]*).*component=(?P<component>[a-zA-Z]+)'
Actually, everything would work great, but my output is really weird, so I tried to add the additionalScrapeConfig. The output currently looks like this:
2022-05-06 18:31:55
{"log":"2022-05-06T18:31:55,003 \u001b[36mDEBUG\u001b[m
So, to the question:
How can I combine helm install dlp-dev-loki grafana/loki-stack --values loki-stack-values.yml -n dev with additional scrape configs for promtail?
According to the Promtail pipelines docs, the timestamp stage takes the timestamp extracted by the regex stage and promotes it to be the new timestamp of the log entry, parsing it as an RFC3339Nano-formatted value. Add the following to the config file, below the regex stage:
- timestamp:
    format: RFC3339Nano
    source: timestamp
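For reference, here is a sketch of how the two stages could sit together in loki-stack-values.yml, assuming the promtail subchart honours a pipelineStages value (as older loki-stack/promtail chart versions do); the selector and regex are the ones from the question, so adjust them to your own labels:

loki:
  enabled: true
  isDefault: true
promtail:
  enabled: true
  config:
    lokiAddress: http://{{ .Release.Name }}:3100/loki/api/v1/push
  # assumption: pipelineStages is passed through to promtail's pipeline_stages
  pipelineStages:
    - docker: {}
    - match:
        selector: '{name="promtail"}'
        stages:
          - regex:
              expression: '.*level=(?P<level>[a-zA-Z]+).*ts=(?P<timestamp>[T\d-:.Z]*).*component=(?P<component>[a-zA-Z]+)'
          - timestamp:
              format: RFC3339Nano
              source: timestamp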
Related
How to append a list to another list inside a dictionary using Helm?
I have a Helm chart specifying the key helm inside of an Argo CD Application (see snippet below).
Now given a values.yaml file, e.g.:
helm:
  valueFiles:
    - myvalues1.yaml
    - myvalues2.yaml
I want to append helm.valueFiles to the one below. How can I achieve this? The merge function doesn't seem to satisfy my needs in this case, since precedence is given to the first dictionary.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  # You'll usually want to add your resources to the argocd namespace.
  namespace: argocd
  # Add this finalizer ONLY if you want these to cascade delete.
  finalizers:
    - resources-finalizer.argocd.argoproj.io
  # Add labels to your application object.
  labels:
    name: guestbook
spec:
  # The project the application belongs to.
  project: default
  # Source of the application manifests
  source:
    repoURL: https://github.com/argoproj/argocd-example-apps.git  # Can point to either a Helm chart repo or a git repo.
    targetRevision: HEAD  # For Helm, this refers to the chart version.
    path: guestbook  # This has no meaning for Helm charts pulled directly from a Helm repo instead of git.
    # helm specific config
    chart: chart-name  # Set this when pulling directly from a Helm repo. DO NOT set for git-hosted Helm charts.
    helm:
      passCredentials: false  # If true then adds --pass-credentials to Helm commands to pass credentials to all domains
      # Extra parameters to set (same as setting through values.yaml, but these take precedence)
      parameters:
        - name: "nginx-ingress.controller.service.annotations.external-dns\\.alpha\\.kubernetes\\.io/hostname"
          value: mydomain.example.com
        - name: "ingress.annotations.kubernetes\\.io/tls-acme"
          value: "true"
          forceString: true  # ensures that value is treated as a string
      # Use the contents of files as parameters (uses Helm's --set-file)
      fileParameters:
        - name: config
          path: files/config.json
      # Release name override (defaults to application name)
      releaseName: guestbook
      # Helm values files for overriding values in the helm chart
      # The path is relative to the spec.source.path directory defined above
      valueFiles:
        - values-prod.yaml
https://raw.githubusercontent.com/argoproj/argo-cd/master/docs/operator-manual/application.yaml
If you only need to append helm.valueFiles to the existing .spec.source.helm.valueFiles, you can range through the list in the values file and add the list items like this:
valueFiles:
  - values-prod.yaml
  {{- range $item := .Values.helm.valueFiles }}
  - {{ $item }}
  {{- end }}
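With the example values.yaml from the question, that loop should render roughly like this (a sketch; the exact indentation depends on where the snippet sits in the surrounding template):

valueFiles:
  - values-prod.yaml
  - myvalues1.yaml
  - myvalues2.yaml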
I deployed Loki and Promtail using the Grafana Helm charts and I'm struggling to configure them.
As a simple configuration, I would like to add a specific label (a UUID). To do so, I use the following yaml:
config:
  lokiAddress: http://loki-headless.admin:3100/loki/api/v1/push
  snippets:
    extraScrapeConfigs: |
      - job_name: dashboard
        kubernetes_sd_configs:
          - role: pod
        pipeline_stages:
          - docker: {}
          - match:
              selector: '{container = "dashboard"}'
              stages:
                - regex:
                    expression: '(?P<loguuid>[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12})'
                - labels:
                    loguuid:
This is deployed with the command:
helm upgrade -install promtail -n admin grafana/promtail -f promtail.yaml
Of course, I still don't have the label in Grafana.
Can someone tell me what I did wrong?
I've been making some tests with a Kubernetes cluster and I installed the loki-promtail stack by means of the helm loki/loki-stack chart.
The default configuration works fine, but now I would like to add some custom behaviour to the standard promtail config.
According to the Promtail documentation I tried to customise the values.yaml in this way:
promtail:
  extraScrapeConfigs:
    - job_name: dlq-reader
      kubernetes_sd_configs:
        - role: pod
      pipeline_stages:
        - template:
            source: new_key
            template: 'test'
        - output:
            source: new_key
The expected behaviour is that every log line is replaced by the static text "test" (of course this is a silly test just to get familiar with this environment).
What I see is that this configuration is correctly applied to the loki config-map but has no effect: the log lines look exactly as if this additional configuration weren't there.
The loki-stack chart version is 0.39.0 which installs loki 1.5.0.
I cannot see any error in the loki/promtail logs... Any suggestions?
I finally discovered the issue, so I'm posting what I found in case it helps anyone else with the same problem.
In order to modify the log text or to add custom labels, the correct values.yaml section to provide is pipelineStages instead of extraScrapeConfigs. The previous snippet must therefore be changed as follows:
promtail:
  pipelineStages:
    - docker: {}
    - match:
        selector: '{container="dlq-reader"}'
        stages:
          - template:
              source: new_key
              template: 'test'
          - output:
              source: new_key
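To check that the stages actually end up in the rendered promtail configuration before upgrading the release, something like the following can help (a sketch assuming Helm 3 and that the grafana chart repo is added; the release name loki and the values file name are illustrative):

helm template loki grafana/loki-stack -f values.yaml | grep -A 12 pipeline_stages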
Regarding the current installation options for Istio 1.1.4, it should be possible to define a default node selector which gets added to all Istio deployments.
The documentation does not show a dedicated sample of how the selector has to be defined, only {} as the value.
Currently I was not able to find a working format to pass the values to the Helm charts using --set, e.g.:
--set global.defaultNodeSelector="{cloud.google.com/gke-nodepool:istio-pool}"
I tried several variations, with and without escapes, as a JSON map, ... But currently everything results in the same Helm error message:
2019/05/06 15:58:10 Warning: Merging destination map for chart 'istio'. Cannot overwrite table item 'defaultNodeSelector', with non table value: map[]
Istio version 1.1.4
Helm 2.13.1
The expectation would be to have more detailed documentation, giving some samples on the Istio side.
When specifying overrides with --set, multiple key/value pairs are deeply merged based on keys. In your case it means that only the last item will be present in the generated template. The same happens even if you override with the -f (YAML file) option.
Here is an example of the -f option usage with custom_values.yaml, with distinct keys:
# custom_values.yaml
global:
  defaultNodeSelector:
    cloud.google.com/bird: stork
    cloud.google.com/bee: wallace
helm template . -x charts/pilot/templates/deployment.yaml -f custom_values.yaml
Snippet of the rendered Istio Pilot deployment.yaml manifest file:
volumes:
  - name: config-volume
    configMap:
      name: istio
  - name: istio-certs
    secret:
      secretName: istio.istio-pilot-service-account
      optional: true
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: beta.kubernetes.io/arch
              operator: In
              values:
                - amd64
                - ppc64le
                - s390x
            - key: cloud.google.com/bee
              operator: In
              values:
                - wallace
            - key: cloud.google.com/bird
              operator: In
              values:
                - stork
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 2
        preference:
          matchExpressions:
            - key: beta.kubernetes.io/arch
              operator: In
              values:
                - amd64
The same can be achieved with --set:
--set global.defaultNodeSelector."cloud\.google\.com/bird"=stork,global.defaultNodeSelector."cloud\.google\.com/bee"=wallace
After searching for some hours, I found a solution right after posting the question by digging through the Istio commits.
I'll leave my findings here as a reference; maybe someone can save some time that way.
Setting a default node selector works, at least for me, by separating the keys with dots and escaping the dots that are part of the label key with \ (if there are dots in the label of interest):
--set global.defaultNodeSelector.cloud\\.google\\.com/gke-nodepool=istio-pool
To create a defaultNodeSelector for a node pool labeled with
cloud.google.com/gke-nodepool: istio-pool
I was not able to add multiple values that way; the {} notation for adding lists in Helm doesn't seem to be respected.
I've created a custom Helm chart with elastic-stack as a subchart, with the following configuration.
# requirements.yaml
dependencies:
  - name: elastic-stack
    version: 1.5.0
    repository: '#stable'
# values.yaml
elastic-stack:
  kibana:
    # at this level enabled is not recognized (does not work)
    # enabled: true
    # configs like env only work at this level
    env:
      ELASTICSEARCH_URL: http://foo-elasticsearch-client.default.svc.cluster.local:9200
    service:
      externalPort: 80
# enabled only works at root level
elasticsearch:
  enabled: true
kibana:
  enabled: true
logstash:
  enabled: false
What I don't get is why I have to define the enabled flags outside of elastic-stack: while all the other configuration goes inside it.
Is this normal Helm behaviour or some misconfiguration in the elastic-stack chart?
Helm conditions are evaluated in the top parent's values:
Condition - The condition field holds one or more YAML paths (delimited by commas). If this path exists in the top parent's values and resolves to a boolean value, the chart will be enabled or disabled based on that boolean value.
Take a look at the conditions in requirements.yaml from stable/elastic-stack:
- name: elasticsearch
  version: ^1.17.0
  repository: https://kubernetes-charts.storage.googleapis.com/
  condition: elasticsearch.enabled
- name: kibana
  version: ^1.1.0
  repository: https://kubernetes-charts.storage.googleapis.com/
  condition: kibana.enabled
- name: logstash
  version: ^1.2.1
  repository: https://kubernetes-charts.storage.googleapis.com/
  condition: logstash.enabled
The condition paths are elasticsearch.enabled, kibana.enabled and logstash.enabled, so you need to use them in your parent chart's values.
Those properties in parent values.yaml serve as switch for the subcharts.
You are supposed to use condition in your requirements.yaml to control the installation of your dependent subcharts. If it is not provided, Helm simply proceeds to deploy the subchart without any problem.
Also, those values live in the parent's values.yaml because they are used by the parent chart itself; they cannot be consumed inside the subchart unless they are provided as global values or nested under the subchart's name key (which in your case is elastic-stack).
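To make that split concrete, a parent values.yaml following this layout could look like the sketch below (based on the setup from the question; the env and service values are simply the ones used there):

# subchart switches at the root level, matching the condition paths
elasticsearch:
  enabled: true
kibana:
  enabled: true
logstash:
  enabled: false
# everything else goes under the dependency's name key
elastic-stack:
  kibana:
    env:
      ELASTICSEARCH_URL: http://foo-elasticsearch-client.default.svc.cluster.local:9200
    service:
      externalPort: 80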