How to solve Promtail extraScrapeConfigs not being picked up? - kubernetes

It seems that excluding a pod's logs using the configuration below does not work.
extrascrapeconfig.yaml:
- job_name: kubernetes-pods-app
  pipeline_stages:
    - docker: {}
  kubernetes_sd_configs:
    - role: pod
  relabel_configs:
    - action: drop
      regex: .+
      source_labels:
        - __meta_kubernetes_pod_label_name
    ###
    - action: keep
      regex: ambassador
      source_labels:
        - __meta_kubernetes_namespace
        - __meta_kubernetes_pod_namespace
    ###
To Reproduce
Steps to reproduce the behavior:
Deployed the loki-stack Helm chart:
helm install loki grafana/loki-stack --version "${HELM_CHART_VERSION}" \
--namespace=monitoring \
--create-namespace \
-f "loki-stack-values-v${HELM_CHART_VERSION}.yaml"
loki-stack-values-v2.4.1.yaml:
loki:
  enabled: true
  config:
promtail:
  enabled: true
  extraScrapeConfigs: extrascrapeconfig.yaml
fluent-bit:
  enabled: false
grafana:
  enabled: false
prometheus:
  enabled: false
Attach Grafana to the Loki data source.
Query {namespace="kube-system"} in Grafana Loki.
RESULT:
Logs are returned.
Expected behavior:
No logs should be returned.
Environment:
Infrastructure: Kubernetes
Deployment tool: Helm
What am I missing?

If you need Helm to pick up a specific file and pass its contents as a value, you should not reference that file from inside the values YAML file; pass it via a separate flag when installing or upgrading the release.
The command you are using applies the Helm values as-is, since the -f flag does not parse other files into the values by itself. Instead, use --set-file, which works like --set but reads the value's content from the given file.
Your command would now look like this:
helm install loki grafana/loki-stack --version "${HELM_CHART_VERSION}" \
--namespace=monitoring \
--create-namespace \
-f "loki-stack-values-v${HELM_CHART_VERSION}.yaml" \
--set-file promtail.extraScrapeConfigs=extrascrapeconfig.yaml
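You can verify that Helm actually picked the file up by inspecting the computed release values and the rendered Promtail configuration. A quick sketch, assuming the release is named loki in the monitoring namespace and that the chart renders the Promtail config into a secret named loki-promtail (the secret name can differ between chart versions):
helm get values loki -n monitoring
# Dump the rendered Promtail config and check that the extra scrape job is present
kubectl -n monitoring get secret loki-promtail -o jsonpath='{.data.promtail\.yaml}' | base64 -d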

Related

Airflow installation with helm on kubernetes cluster is failing with db migration pod

Error: (output omitted)
Steps:
I have downloaded the helm chart from here https://github.com/apache/airflow/releases/tag/helm-chart/1.8.0 (Under Assets, Source code zip).
Added the following extra params to the default values.yaml:
createUserJob:
  useHelmHooks: false
migrateDatabaseJob:
  useHelmHooks: false
dags:
  gitSync:
    enabled: true
    # all data....
airflow:
  extraEnv:
    - name: AIRFLOW__API__AUTH_BACKEND
      value: "airflow.api.auth.backend.basic_auth"
ingress:
  web:
    tls:
      enabled: true
      secretName: wildcard-tls-cert
    host: "mydns.com"
    path: "/airflow"
I also need the KubernetesExecutor, hence I am using https://github.com/airflow-helm/charts/blob/main/charts/airflow/sample-values-KubernetesExecutor.yaml as k8sExecutor.yaml.
Installing with the following command:
helm install my-airflow airflow-8.6.1/airflow/ --values values.yaml \
  --values k8sExecutor.yaml -n mynamespace
It worked when I tried it the following way:
helm repo add airflow-repo https://airflow-helm.github.io/charts
helm install my-airflow airflow-repo/airflow --version 8.6.1 --values k8sExecutor.yaml --values values.yaml
values.yaml contains only the overridden parameters.
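The likely difference is which chart the command points at: the local airflow-8.6.1/airflow/ directory versus the airflow-repo/airflow chart pulled from the repository, and both values files must match the schema of the chart actually being installed. A quick way to confirm what a chart expects, assuming the airflow-repo alias added above:
# Show chart metadata and its default values for the community chart
helm show chart airflow-repo/airflow --version 8.6.1
helm show values airflow-repo/airflow --version 8.6.1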

Process json logs with Grafana/loki

I have set up Grafana, Prometheus and Loki (2.6.1) as follows on my Kubernetes (1.21) cluster:
helm upgrade --install promtail grafana/promtail -n monitoring -f monitoring/promtail.yaml
helm upgrade --install prom prometheus-community/kube-prometheus-stack -n monitoring --values monitoring/prom.yaml
helm upgrade --install loki grafana/loki -n monitoring --values monitoring/loki.yaml
with:
# monitoring/loki.yaml
loki:
  schemaConfig:
    configs:
      - from: 2020-09-07
        store: boltdb-shipper
        object_store: s3
        schema: v11
        index:
          prefix: loki_index_
          period: 24h
  storageConfig:
    aws:
      s3: s3://eu-west-3/cluster-loki-logs
    boltdb_shipper:
      shared_store: filesystem
      active_index_directory: /var/loki/index
      cache_location: /var/loki/cache
      cache_ttl: 168h
# monitoring/promtail.yaml
config:
  serverPort: 80
  clients:
    - url: http://loki:3100/loki/api/v1/push
# monitoring/prom.yaml
prometheus:
  prometheusSpec:
    serviceMonitorSelectorNilUsesHelmValues: false
    serviceMonitorSelector: {}
    serviceMonitorNamespaceSelector:
      matchLabels:
        monitored: "true"
grafana:
  sidecar:
    datasources:
      defaultDatasourceEnabled: true
  additionalDataSources:
    - name: Loki
      type: loki
      url: http://loki.monitoring:3100
I get data from my containers, but whenever a container logs in JSON format I cannot access the nested fields:
{app="product", namespace="api-dev"} | unpack | json
My aim is, for example, to filter by log.severity.
Actually, following this answer, it turns out to be a Promtail scraping issue.
The current (promtail-6.3.1 / Loki 2.6.1) Helm chart default pipeline stage is cri, which expects logs of this form:
"2019-04-30T02:12:41.8443515Z stdout xx message"
I should have used docker, which expects JSON, so my promtail.yaml changed to:
config:
  serverPort: 80
  clients:
    - url: http://loki:3100/loki/api/v1/push
  snippets:
    pipelineStages:
      - docker: {}
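With the docker stage in place the Docker envelope is stripped before the line is shipped, so a LogQL query along these lines should reach the nested field. A sketch, assuming the application really emits a nested log.severity key (LogQL's json parser flattens it to log_severity):
{app="product", namespace="api-dev"} | json | log_severity="error"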

Parse logs for specific container to add labels

I deployed Loki and Promtail using the Grafana Helm charts and I am struggling to configure them.
As a simple configuration, I would like to add a specific label (a UUID). To do so, I use this values file:
config:
  lokiAddress: http://loki-headless.admin:3100/loki/api/v1/push
  snippets:
    extraScrapeConfigs: |
      - job_name: dashboard
        kubernetes_sd_configs:
          - role: pod
        pipeline_stages:
          - docker: {}
          - match:
              selector: '{container = "dashboard"}'
              stages:
                - regex:
                    expression: '(?P<loguuid>[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12})'
                - labels:
                    loguuid:
Which is deployed with the command:
helm upgrade --install promtail -n admin grafana/promtail -f promtail.yaml
Of course, I still don't see the label in Grafana.
Can someone tell me what I did wrong?

Unable to add `linkerd.io/inject: enabled` to ArgoCD manifest - invalid type for io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta.annotations

I can install bitnami/redis with this helm command:
helm upgrade --install "my-release" bitnami/redis \
--set auth.existingSecret=redis-key \
--set metrics.enabled=true \
--set metrics.podAnnotations.release=prom \
--set master.podAnnotations."linkerd\.io/inject"=enabled \
--set replica.podAnnotations."linkerd\.io/inject"=enabled
Now I want to install it using an Argo CD Application manifest.
project: default
source:
  repoURL: 'https://charts.bitnami.com/bitnami'
  targetRevision: 14.1.1
  helm:
    valueFiles:
      - values.yaml
    parameters:
      - name: metrics.enabled
        value: 'true'
      - name: metrics.podAnnotations.release
        value: 'prom'
      - name: master.podAnnotations.linkerd.io/inject
        value: enabled
      - name: replica.podAnnotations.linkerd.io/inject
        value: enabled
      - name: auth.existingSecret
        value: redis-key
  chart: redis
destination:
  server: 'https://kubernetes.default.svc'
  namespace: default
syncPolicy: {}
But I'm getting a validation error because of master.podAnnotations.linkerd.io/inject and replica.podAnnotations.linkerd.io/inject:
error validating data: ValidationError(StatefulSet.spec.template.metadata.annotations."linkerd): invalid type for io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta.annotations: got "map", expected "string"
error validating data: ValidationError(StatefulSet.spec.template.metadata.annotations."linkerd): invalid type for io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta.annotations: got "map", expected "string"
If I remove these two annotation settings the app can be installed.
I've tried master.podAnnotations."linkerd.io\/inject", but it doesn't work. I guess it has something to do with the "." or "/". Can anyone help me solve this issue?
Look at this example: parameter names containing dots need to be escaped.
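Applied to the manifest above, the two offending parameters would look roughly like this; the backslash stops the dots inside linkerd.io from being treated as nesting separators (a sketch, not verified against your chart version):
parameters:
  - name: master.podAnnotations.linkerd\.io/inject
    value: enabled
  - name: replica.podAnnotations.linkerd\.io/inject
    value: enabled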

Argo helm chart does not apply my custom values.yml file

I deploy Argo CD with Helm 3 to my cluster:
helm upgrade --install argo argo/argo-cd -n argocd -f argovalues.yaml
My argovalues.yaml file is the following:
global.image.tag: "v2.0.1"
server.service.type: "NodePort"
server.name: "kabamaru"
server.ingress.enabled: true
server.metrics.enabled: true
server.additionalApplications: |
  - name: guestbook
    namespace: argocd
    additionalLabels: {}
    additionalAnnotations: {}
    project: default
    source:
      repoURL: https://github.com/argoproj/argocd-example-apps.git
      targetRevision: HEAD
      path: guestbook
      directory:
        recurse: true
    destination:
      server: https://kubernetes.default.svc
      namespace: argocd
    syncPolicy:
      automated:
        prune: false
        selfHeal: false
and ... none of these values is applied. It is very frustrating.
If I do the following
helm upgrade --install argo argo/argo-cd -n argocd --set server.name=hello
it works and changes successfully!
What on earth is going on?
I'm using the helm upgrade command with the --set-file option and it is working correctly:
helm upgrade --install argo argo/argo-cd -n argocd --set-file argovalues.yaml
There are other options like --set and --set-string.
I think that can help you resolve your issue.
The solution
I managed to make it work like this:
argovalues.yaml
server:
  service:
    type: "NodePort" # by default 80:30080/TCP,443:30443/TCP
  image:
    tag: "v2.0.1"
  autoscaling:
    enabled: true
  ingress:
    enabled: true
  metrics:
    enabled: true
  additionalApplications:
    - name: bootstrap
      namespace: argocd
      additionalLabels: {}
      additionalAnnotations: {}
      project: default
      source:
        repoURL: https://github.com/YOUR_REPO/argotest.git
        targetRevision: HEAD
        path: bootstrap
        directory:
          recurse: true
      destination:
        server: https://kubernetes.default.svc
        namespace: argocd
      syncPolicy:
        automated:
          prune: true
          selfHeal: false
Note: the wrong indentation can drive you crazy!
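The underlying difference: inside a values file, a key like server.name: "kabamaru" is a single literal key that happens to contain dots, whereas --set server.name=kabamaru splits on the dots and builds a nested structure. A minimal illustration of the equivalence:
# --set server.name=kabamaru is equivalent to this values-file YAML:
server:
  name: kabamaru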