Parse logs for specific container to add labels - kubernetes-helm

I deployed Loki and Promtail using the Grafana Helm chart and I am struggling to configure it.
As a simple configuration, I would like to add a specific label (a UUID). To do so, I use the following YAML:
config:
  lokiAddress: http://loki-headless.admin:3100/loki/api/v1/push
  snippets:
    extraScrapeConfigs: |
      - job_name: dashboard
        kubernetes_sd_configs:
          - role: pod
        pipeline_stages:
          - docker: {}
          - match:
              selector: '{container = "dashboard"}'
              stages:
                - regex:
                    expression: '(?P<loguuid>[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12})'
                - labels:
                    loguuid:
Which is deployed with the command:
helm upgrade --install promtail -n admin grafana/promtail -f promtail.yaml
Of course, I still don't see the label in Grafana.
Can someone tell me what I did wrong?

Related

Process json logs with Grafana/loki

I have set up Grafana, Prometheus and Loki (2.6.1) as follows on my Kubernetes (1.21) cluster:
helm upgrade --install promtail grafana/promtail -n monitoring -f monitoring/promtail.yaml
helm upgrade --install prom prometheus-community/kube-prometheus-stack -n monitoring --values monitoring/prom.yaml
helm upgrade --install loki grafana/loki -n monitoring --values monitoring/loki.yaml
with:
# monitoring/loki.yaml
loki:
  schemaConfig:
    configs:
      - from: 2020-09-07
        store: boltdb-shipper
        object_store: s3
        schema: v11
        index:
          prefix: loki_index_
          period: 24h
  storageConfig:
    aws:
      s3: s3://eu-west-3/cluster-loki-logs
    boltdb_shipper:
      shared_store: filesystem
      active_index_directory: /var/loki/index
      cache_location: /var/loki/cache
      cache_ttl: 168h
# monitoring/promtail.yaml
config:
  serverPort: 80
  clients:
    - url: http://loki:3100/loki/api/v1/push
# monitoring/prom.yaml
prometheus:
  prometheusSpec:
    serviceMonitorSelectorNilUsesHelmValues: false
    serviceMonitorSelector: {}
    serviceMonitorNamespaceSelector:
      matchLabels:
        monitored: "true"
grafana:
  sidecar:
    datasources:
      defaultDatasourceEnabled: true
  additionalDataSources:
    - name: Loki
      type: loki
      url: http://loki.monitoring:3100
I get data from my containers, but whenever I have a container logging in JSON format, I can't access the nested fields:
{app="product", namespace="api-dev"} | unpack | json
This yields log lines in which the nested fields are not extracted.
My aim is, for example, to filter by log.severity.
Actually, following this answer, it turns out to be a Promtail scraping issue.
The current (promtail-6.3.1 / 2.6.1) Helm chart default is to use cri as the pipeline stage, which expects logs of this kind:
"2019-04-30T02:12:41.8443515Z stdout xx message"
I should have used docker, which expects JSON; consequently, my promtail.yaml changed to:
config:
  serverPort: 80
  clients:
    - url: http://loki:3100/loki/api/v1/push
  snippets:
    pipelineStages:
      - docker: {}
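With the docker stage in place the JSON body is preserved, so a query along these lines should expose the nested fields (a hedged sketch; the exact labels and field names depend on your logs - note that the LogQL json parser flattens nested keys, so log.severity becomes log_severity):
{app="product", namespace="api-dev"} | json | log_severity="error"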

Auto-scrape realm metrics from Keycloak with Prometheus-Operator

I installed Keycloak using the bitnami/keycloak Helm chart (https://bitnami.com/stack/keycloak/helm).
As I'm also using Prometheus-Operator for monitoring I enabled the metrics endpoint and the service monitor:
keycloak:
  ...
  metrics:
    enabled: true
    serviceMonitor:
      enabled: true
      namespace: monitoring
      additionalLabels:
        release: my-prom-operator-release
As I'm way more interested in actual realm metrics I installed the keycloak-metrics-spi provider (https://github.com/aerogear/keycloak-metrics-spi) by setting up an init container that downloads it to a shared volume.
keycloak:
  ...
  extraVolumeMounts:
    - name: providers
      mountPath: /opt/bitnami/keycloak/providers
  extraVolumes:
    - name: providers
      emptyDir: {}
  ...
  initContainers:
    - name: metrics-spi-provider
      image: SOME_IMAGE_WITH_WGET_INSTALLED
      imagePullPolicy: Always
      command:
        - sh
      args:
        - -c
        - |
          KEYCLOAK_METRICS_SPI_VERSION=2.5.2
          wget --no-check-certificate -O /providers/keycloak-metrics-spi-${KEYCLOAK_METRICS_SPI_VERSION}.jar \
            https://github.com/aerogear/keycloak-metrics-spi/releases/download/${KEYCLOAK_METRICS_SPI_VERSION}/keycloak-metrics-spi-${KEYCLOAK_METRICS_SPI_VERSION}.jar
          chmod +x /providers/keycloak-metrics-spi-${KEYCLOAK_METRICS_SPI_VERSION}.jar
          touch /providers/keycloak-metrics-spi-${KEYCLOAK_METRICS_SPI_VERSION}.jar.dodeploy
      volumeMounts:
        - name: providers
          mountPath: /providers
The provider enables metrics endpoints on the regular public-facing http port instead of the http-management port, which is not great for me. But I can block external access to them in my reverse proxy.
What I'm missing is some kind of auto-scraping of those endpoints. Right now I created an additional template that creates a new ServiceMonitor for each element of a predefined list in my chart:
values.yaml
keycloak:
  ...
  metrics:
    extraServiceMonitors:
      - realmName: master
      - realmName: my-realm
servicemonitor-metrics-spi.yaml
{{- range $serviceMonitor := .Values.keycloak.metrics.extraServiceMonitors }}
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: {{ $.Release.Name }}-spi-{{ $serviceMonitor.realmName }}
  ...
spec:
  endpoints:
    - port: http
      path: /auth/realms/{{ $serviceMonitor.realmName }}/metrics
  ...
{{- end }}
Is there a better way of doing this? So that Prometheus can auto-detect all my realms and scrape their endpoints?
Thanks in advance!
As commented by @jan-garaj, there is no need to query all the endpoints; they all return the accumulated data of all realms. So it is enough to scrape the endpoint of just one realm (e.g. the master realm).
Thanks a lot!
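For reference, a minimal sketch of that single ServiceMonitor. The service label, namespace names and port here are assumptions to adjust to your release:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: keycloak-spi-master
  namespace: monitoring
  labels:
    release: my-prom-operator-release   # so the Prometheus operator picks it up
spec:
  namespaceSelector:
    matchNames:
      - keycloak                        # assumed namespace where Keycloak runs
  selector:
    matchLabels:
      app.kubernetes.io/name: keycloak  # assumed service label
  endpoints:
    - port: http
      path: /auth/realms/master/metrics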
It might help someone: the Bitnami image, and therefore the Helm chart, already includes the metrics-spi provider. So no further installation action is needed, but metrics must be enabled in the values.

How to solve Promtail extraScrapeConfigs not being picked up?

It seems that excluding logs in a pod using the configuration below does not work.
extrascrapeconfig.yaml:
- job_name: kubernetes-pods-app
  pipeline_stages:
    - docker: {}
  kubernetes_sd_configs:
    - role: pod
  relabel_configs:
    - action: drop
      regex: .+
      source_labels:
        - __meta_kubernetes_pod_label_name
    ###
    - action: keep
      regex: ambassador
      source_labels:
        - __meta_kubernetes_namespace
        - __meta_kubernetes_pod_namespace
    ###
To Reproduce
Steps to reproduce the behavior:
Deployed the loki-stack Helm chart:
helm install loki grafana/loki-stack --version "${HELM_CHART_VERSION}" \
--namespace=monitoring \
--create-namespace \
-f "loki-stack-values-v${HELM_CHART_VERSION}.yaml"
loki-stack-values-v2.4.1.yaml:
loki:
  enabled: true
  config:
promtail:
  enabled: true
  extraScrapeConfigs: extrascrapeconfig.yaml
fluent-bit:
  enabled: false
grafana:
  enabled: false
prometheus:
  enabled: false
Attach Grafana to the Loki datasource
Query: {namespace="kube-system"} in Grafana Loki
RESULT:
See logs
Expected behavior:
Not seeing any logs
Environment:
Infrastructure: Kubernetes
Deployment tool: Helm
What am I missing?
If you need Helm to pick up a specific file and pass it as a value, you should not pass the value itself in the values YAML file, but via another flag when installing or upgrading the release.
The command you are using is just applying the Helm values as-is, since the -f flag does not support parsing other files into the values by itself. Instead, use --set-file, which works similarly to --set, but gets the value content from the passed file.
Your command would now look like this:
helm install loki grafana/loki-stack --version "${HELM_CHART_VERSION}" \
--namespace=monitoring \
--create-namespace \
-f "loki-stack-values-v${HELM_CHART_VERSION}.yaml" \
--set-file promtail.extraScrapeConfigs=extrascrapeconfig.yaml
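To verify that the scrape config actually reached the rendered Promtail configuration, you can decode the secret the chart generates (a hedged sketch; the secret name loki-promtail and the key promtail.yaml may differ between chart versions):
kubectl get secret loki-promtail -n monitoring -o jsonpath='{.data.promtail\.yaml}' | base64 -d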

Promtail ignores extraScrapeConfigs

I've been running some tests with a Kubernetes cluster and I installed the Loki-Promtail stack by means of the loki/loki-stack Helm chart.
The default configuration works fine, but now I would like to add some custom behaviour to the standard promtail config.
According to the Promtail documentation I tried to customise the values.yaml in this way:
promtail:
  extraScrapeConfigs:
    - job_name: dlq-reader
      kubernetes_sd_configs:
        - role: pod
      pipeline_stages:
        - template:
            source: new_key
            template: 'test'
        - output:
            source: new_key
The expected behaviour is that every log line is replaced by the static text "test" (of course this is a silly test just to get familiar with this environment).
What I see is that this configuration is correctly applied to the Loki config map but without any effect: the log lines look exactly as if this additional configuration wasn't there.
The loki-stack chart version is 0.39.0 which installs loki 1.5.0.
I cannot see any errors in the Loki/Promtail logs... Any suggestions?
I finally discovered the issue, so I'm posting what I found in case it helps anyone else with the same problem.
In order to modify the log text or to add custom labels, the correct values.yaml section to provide is pipelineStages instead of extraScrapeConfigs. Then, the previous snippet must be changed in the following way:
promtail:
  pipelineStages:
    - docker: {}
    - match:
        selector: '{container="dlq-reader"}'
        stages:
          - template:
              source: new_key
              template: 'test'
          - output:
              source: new_key
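To roll the corrected values out, something along these lines should work (a sketch assuming the release was installed as loki from the loki/loki-stack chart; the daemonset name is an assumption):
helm upgrade --install loki loki/loki-stack -f values.yaml
# the chart normally restarts the Promtail pods itself via a config checksum annotation;
# if it does not, restart the (assumed) daemonset manually:
kubectl rollout restart daemonset/loki-promtail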

How to setup ansible playbook that is able to execute kubectl (kubernetes) commands

I'm trying to write a simple Ansible playbook that would be able to execute an arbitrary command against a pod (container) running in a Kubernetes cluster.
I would like to utilise the kubectl connection plugin (https://docs.ansible.com/ansible/latest/plugins/connection/kubectl.html) but I'm struggling to figure out how to actually do that.
A couple of questions:
Do I need to first have an inventory for k8s defined? Something like: https://docs.ansible.com/ansible/latest/plugins/inventory/k8s.html. My understanding is that I would define the kube config via the inventory, which would be used by the kubectl plugin to actually connect to the pods and perform a specific action.
If yes, is there any example of an arbitrary command executed via the kubectl plugin (not via the shell plugin invoking kubectl on some remote machine - that is not what I'm looking for)?
I'm assuming that, during the ansible-playbook invocation, I would point to the k8s inventory.
Thanks.
I would like to utilise the kubectl connection plugin: https://docs.ansible.com/ansible/latest/plugins/connection/kubectl.html but I'm struggling to figure out how to actually do that.
The fine manual describes how one uses connection plugins, and while it is possible to use it in tasks, that is unlikely to make any sense unless your inventory started with Pods.
The way I have seen that connection used is to start by identifying the Pods against which you might want to take action, and then run a playbook against a unique group for that purpose:
- hosts: all
  tasks:
    - set_fact:
        # this is *just an example for brevity*
        # in reality you would use `k8s:` or `kubectl get -o name pods -l my-selector=my-value`
        # to get the pod names (see the sketch after this playbook)
        pod_names:
          - nginx-12345
          - nginx-3456
    - add_host:
        name: '{{ item }}'
        groups:
          - my-pods
      with_items: '{{ pod_names }}'

- hosts: my-pods
  connection: kubectl
  tasks:
    # and now you are off to the races
    - command: ps -ef
    # watch out if the Pod doesn't have a working python installed,
    # as you will have to use raw: instead
    # (and, of course, set "gather_facts: no" to skip fact gathering)
    - raw: ps -ef
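For completeness, here is a hedged sketch of how the pod_names fact could be filled in dynamically instead of being hard-coded (the namespace and label selector are placeholders):
- set_fact:
    # placeholder namespace and selector - adjust to your cluster
    pod_names: "{{ lookup('pipe', 'kubectl get pods -n my-namespace -l my-selector=my-value -o name').splitlines() | map('replace', 'pod/', '') | list }}"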
First install the Kubernetes collection:
ansible-galaxy collection install community.kubernetes
and here is the playbook; it lists all pods in a namespace and runs a command in every pod:
---
- hosts: localhost
  vars_files:
    - vars/main.yaml
  collections:
    - community.kubernetes
  tasks:
    - name: Get the pods in the specific namespace
      k8s_info:
        kubeconfig: '{{ k8s_kubeconfig }}'
        kind: Pod
        namespace: test
      register: pod_list

    - name: Print pod names
      debug:
        msg: "pod_list: {{ pod_list | json_query('resources[*].status.podIP') }}"

    - set_fact:
        pod_names: "{{ pod_list | json_query('resources[*].metadata.name') }}"

    - k8s_exec:
        kubeconfig: '{{ k8s_kubeconfig }}'
        namespace: "{{ namespace }}"
        pod: "{{ item.metadata.name }}"
        command: apt update
      with_items: "{{ pod_list.resources }}"
      register: exec
      loop_control:
        label: "{{ item.metadata.name }}"
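A hedged example of invoking it (the playbook filename is a placeholder; k8s_kubeconfig and namespace could just as well come from vars/main.yaml):
ansible-playbook exec-in-pods.yaml \
  -e k8s_kubeconfig=$HOME/.kube/config \
  -e namespace=test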
Maybe you can use something like this...
- shell: |
    kubectl exec -i -n {{ namespace }} {{ pod_name }} -- bash -c 'clickhouse-client --query "INSERT INTO customer FORMAT CSV"
    --user=test --password=test < /mnt/azure/azure/test/test.tbl'
As per the latest documentation you can use the following kubernetes.core (k8s) modules.
Here are some examples:
- name: Create a k8s namespace
  kubernetes.core.k8s:
    name: testing
    api_version: v1
    kind: Namespace
    state: present

- name: Create a Service object from an inline definition
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: v1
      kind: Service
      metadata:
        name: web
        namespace: testing
        labels:
          app: galaxy
          service: web
      spec:
        selector:
          app: galaxy
          service: web
        ports:
          - protocol: TCP
            targetPort: 8000
            name: port-8000-tcp
            port: 8000

- name: Remove an existing Service object
  kubernetes.core.k8s:
    state: absent
    api_version: v1
    kind: Service
    namespace: testing
    name: web
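Since the original question is about running arbitrary commands inside pods, here is a hedged sketch using kubernetes.core.k8s_exec (the pod name and namespace are placeholders):
- name: Run an arbitrary command inside an existing pod
  kubernetes.core.k8s_exec:
    namespace: testing
    pod: web-0                  # placeholder pod name
    command: cat /etc/hostname
  register: command_output

- name: Show the command output
  debug:
    var: command_output.stdout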