Promtail ignores extraScrapeConfigs - kubernetes

I've been running some tests with a Kubernetes cluster, and I installed the Loki/Promtail stack by means of the Helm loki/loki-stack chart.
The default configuration works fine, but now I would like to add some custom behaviour to the standard Promtail config.
Following the Promtail documentation, I tried to customise the values.yaml in this way:
promtail:
  extraScrapeConfigs:
    - job_name: dlq-reader
      kubernetes_sd_configs:
        - role: pod
      pipeline_stages:
        - template:
            source: new_key
            template: 'test'
        - output:
            source: new_key
The expected behaviour is that every log line is replaced by the static text "test" (of course this is a silly test, just to get familiar with this environment).
What I see is that this configuration is correctly applied to the Loki config map but has no effect: the log lines look exactly as if this additional configuration weren't there.
The loki-stack chart version is 0.39.0, which installs Loki 1.5.0.
I cannot see any error in the Loki/Promtail logs... Any suggestion?

I finally discovered the issue, so I'm posting what I found in case it helps anyone else with the same problem.
In order to modify the log text or to add custom labels, the correct values.yaml section to provide is pipelineStages instead of extraScrapeConfigs. The previous snippet must therefore be changed in the following way:
promtail:
  pipelineStages:
    - docker: {}
    - match:
        selector: '{container="dlq-reader"}'
        stages:
          - template:
              source: new_key
              template: 'test'
          - output:
              source: new_key
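For reference, applying the updated values is a normal chart upgrade; a minimal sketch, assuming the chart reference loki/loki-stack from the question, a release named loki, and the values above saved in values.yaml:

helm upgrade --install loki loki/loki-stack -f values.yaml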

Related

loki-stack helm chart not able to disable kube-system logs

I am using the loki-stack Helm chart, and I am applying the following configuration to disable kube-system namespace logs in Promtail so that Loki doesn't ingest them:
promtail:
  enabled: true
  #
  # Enable Promtail service monitoring
  # serviceMonitor:
  #   enabled: true
  #
  # User defined pipeline stages
  pipelineStages:
    - docker: {}
    - drop:
        source: namespace
        expression: "kube-.*"
Please help: inside the container these values are not being updated. The configuration is the one shown above.
I had the same issue with this configuration; it seems the pipelineStages at this level is ignored. I solved my problem by moving it under config.snippets.
promtail:
  enabled: true
  config:
    snippets:
      pipelineStages:
        - docker: {}
        - drop:
            source: namespace
            expression: "kube-.*"
This worked for me and I hope it helps someone else who might run into the same problem. For more details, please check out this link: https://github.com/grafana/helm-charts/blob/main/charts/promtail/values.yaml
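If you want to double-check that the new stage actually reaches the running Promtail, you can read the rendered config from inside the pod; a rough check, assuming the chart's default config path /etc/promtail/promtail.yaml (the pod label and path may differ between chart versions):

# find a Promtail pod (the label may differ per chart version)
kubectl get pods -n <namespace> -l app.kubernetes.io/name=promtail
# print the rendered config from inside the container
kubectl exec -n <namespace> <promtail-pod> -- cat /etc/promtail/promtail.yaml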

Different name required to override value in Helm subchart

I have read the Helm docs and various StackOverflow questions - this is not (I hope!) a lazy question. I'm having an issue overriding a single particular value in a Helm chart, not having trouble with the concept in general.
I'm trying to install the Gitea helm chart on a k8s cluster on Raspberry Pis (that is - on arm64 architecture). Since the default memcached dependency chart is from Bitnami, who don't support arm64, I have overridden the image appropriately (to arm64v8/memcached).
However, this new image has a different entrypoint - /entrypoint.sh instead of /run.sh. Referencing the relevant part of the template, I believed I needed to override memcached.args, but that didn't work as expected:
$ cat values.yaml
memcached:
  image:
    repository: "arm64v8/memcached"
    tag: "1.6.17"
  args:
    - "/entrypoint.sh"
  diagnosticMode:
    enabled: false
$ helm template gitea-charts/gitea --values values.yaml
[...]
# Source: gitea/charts/memcached/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-memcached
  namespace: gitea
  labels: [...]
spec:
  selector:
    matchLabels: [...]
  replicas: 1
  template:
    metadata:
      labels: [...]
    spec:
      [...]
      serviceAccountName: release-name-memcached
      containers:
        - name: memcached
          image: docker.io/arm64v8/memcached:1.6.17
          imagePullPolicy: "IfNotPresent"
          args:
            - /run.sh # <----- this should be `/entrypoint.sh`
          env:
            - name: BITNAMI_DEBUG
              value: "false"
          ports:
            - name: memcache
              containerPort: 11211
[...]
However, when I instead overrode memcached.arguments, the expected behaviour occurred - the contents of memcached.arguments were rendered in the template's args (or, if memcached.arguments was empty, no args were rendered).
Where is this mapping from arguments to args taking place?
Note in particular that the Bitnami chart docs refer to args, so this is unexpected - though note also that the Bitnami chart's values.yaml refers to arguments in the comment (this is what prompted me to try this "obviously wrong" approach!). In the "Upgrade to 5.0.0 notes", we see "arguments has been renamed to args." - but the Gitea chart is using a >5.0.0 version of the Bitnami chart.
Your reasoning is correct, and the current parameter name is indeed args (arguments is deprecated; someone just forgot to update the comment here).
Now, why does arguments work for you and not args? I think you're just using the old version, from before it was renamed. I checked it and:
The Gitea chart uses version 5.9.0 from the repo https://raw.githubusercontent.com/bitnami/charts/pre-2022/bitnami
This corresponds to the following Helm chart: https://charts.bitnami.com/bitnami/memcached-5.9.0.tgz (you can check it here).
When you extract that chart archive, you see it's the old version of the chart (with arguments not yet renamed to args).
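So, until the Gitea chart pulls in a memcached chart release where the rename has landed, the key that actually takes effect is the deprecated one; a minimal sketch of the values file under that assumption, reusing the image override from the question:

memcached:
  image:
    repository: "arm64v8/memcached"
    tag: "1.6.17"
  # old-style key; renamed to `args` in memcached chart 5.0.0+
  arguments:
    - "/entrypoint.sh"
  diagnosticMode:
    enabled: false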

Helm Loki Stack additional promtail config

I installed Loki and Prometheus using Helm.
However, I would like to replace the logs in one place.
I used helm show values grafana/loki-stack > loki-stack-values.yml to output the values and came to the following result:
loki:
  enabled: true
  isDefault: true
promtail:
  enabled: true
  config:
    lokiAddress: http://{{ .Release.Name }}:3100/loki/api/v1/push
prometheusSpec:
  additionalScrapeConfigs:
    - match:
        selector: '{name="promtail"}'
        stages:
          # The regex stage parses out a level, timestamp, and component. At the end
          # of the stage, the values for level, timestamp, and component are only
          # set internally for the pipeline. Future stages can use these values and
          # decide what to do with them.
          - regex:
              expression: '.*level=(?P<level>[a-zA-Z]+).*ts=(?P<timestamp>[T\d-:.Z]*).*component=(?P<component>[a-zA-Z]+)'
Actually, everything would otherwise work great, but my output is really weird, which is why I tried to add the additionalScrapeConfigs. The output currently looks like this:
2022-05-06 18:31:55
{"log":"2022-05-06T18:31:55,003 \u001b[36mDEBUG\u001b[m
So, to the question:
How can I use helm install dlp-dev-loki grafana/loki-stack --values loki-stack-values.yml -n dev together with additional scrape configs for Promtail?
According to the Promtail pipelines docs, the timestamp stage takes the timestamp extracted by the regex stage and promotes it to be the new timestamp of the log entry, parsing it as an RFC3339Nano-formatted value. Add the stage below to the config file, after the regex stage.
- timestamp:
    format: RFC3339Nano
    source: timestamp
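Putting that in context, the timestamp stage sits right after the regex stage inside the same match block, so the relevant part of the pipeline would look roughly like this (keeping the selector and regex from the question):

- match:
    selector: '{name="promtail"}'
    stages:
      - regex:
          expression: '.*level=(?P<level>[a-zA-Z]+).*ts=(?P<timestamp>[T\d-:.Z]*).*component=(?P<component>[a-zA-Z]+)'
      # promote the extracted ts value to the entry's timestamp
      - timestamp:
          format: RFC3339Nano
          source: timestamp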

Parse logs for specific container to add labels

I deployed Loki and Promtail using the Grafana Helm chart and I'm struggling to configure it.
As a simple configuration, I would like to add a specific label (a UUID). To do so, I use the following YAML:
config:
  lokiAddress: http://loki-headless.admin:3100/loki/api/v1/push
  snippets:
    extraScrapeConfigs: |
      - job_name: dashboard
        kubernetes_sd_configs:
          - role: pod
        pipeline_stages:
          - docker: {}
          - match:
              selector: '{container = "dashboard"}'
              stages:
                - regex:
                    expression: '(?P<loguuid>[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12})'
                - labels:
                    loguuid:
This is deployed with the command:
helm upgrade --install promtail -n admin grafana/promtail -f promtail.yaml
Of course, I still don't have the label in Grafana.
Can someone tell me what I did wrong?

Istio 1.1.4 helm setup --set global.defaultNodeSelector sample

Regarding the current installation options for Istio 1.1.4, it should be possible to define a default node selector which gets added to all Istio deployments.
The documentation does not show a dedicated sample of how the selector has to be defined, only {} as the value.
So far I have not been able to find a working format to pass the values to the Helm charts by using --set, e.g.:
--set global.defaultNodeSelector="{cloud.google.com/gke-nodepool:istio-pool}"
I tried several variations, with and without escapes, as a JSON map, ... but everything results in the same Helm error message:
2019/05/06 15:58:10 Warning: Merging destination map for chart 'istio'. Cannot overwrite table item 'defaultNodeSelector', with non table value: map[]
Istio version 1.1.4
Helm 2.13.1
The expectation would be more detailed documentation, giving some samples on the Istio side.
When specifying overrides with --set, multiple key/value pairs are deeply merged based on keys. This means that, in your case, only the last item will be present in the generated template. The same happens even if you override with the -f (YAML file) option.
Here is an example of -f option usage with custom_values.yaml, with distinct keys:
#custom_values.yaml
global:
  defaultNodeSelector:
    cloud.google.com/bird: stork
    cloud.google.com/bee: wallace
helm template . -x charts/pilot/templates/deployment.yaml -f custom_values.yaml
Snippet of the rendered Istio Pilot deployment.yaml manifest file:
volumes:
  - name: config-volume
    configMap:
      name: istio
  - name: istio-certs
    secret:
      secretName: istio.istio-pilot-service-account
      optional: true
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: beta.kubernetes.io/arch
              operator: In
              values:
                - amd64
                - ppc64le
                - s390x
            - key: cloud.google.com/bee
              operator: In
              values:
                - wallace
            - key: cloud.google.com/bird
              operator: In
              values:
                - stork
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 2
        preference:
          matchExpressions:
            - key: beta.kubernetes.io/arch
              operator: In
              values:
                - amd64
The same can be achieved with --set:
--set global.defaultNodeSelector."cloud\.google\.com/bird"=stork,global.defaultNodeSelector."cloud\.google\.com/bee"=wallace
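For completeness, the same helm template invocation from above can be combined with those --set flags to check what the override renders to before installing; a quick sketch under those assumptions:

helm template . -x charts/pilot/templates/deployment.yaml \
  --set global.defaultNodeSelector."cloud\.google\.com/bird"=stork,global.defaultNodeSelector."cloud\.google\.com/bee"=wallace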
After searching for some hours, I found a solution right after posting the question by digging into the Istio commits.
I'll leave my findings here as a reference; maybe someone can save some time that way.
Setting a default node selector works, at least for me, by separating the keys with dots and escaping the dots that belong to the label itself with \:
--set global.defaultNodeSelector.cloud\\.google\\.com/gke-nodepool=istio-pool
To create a defaultNodeSelector for a node pool labeled with
cloud.google.com/gke-nodepool: istio-pool
I was not able to add multiple values that way; the {} notation for adding lists in Helm doesn't seem to be respected.