loki-stack helm chart not able to disable kube-system logs - kubernetes

I am using the loki-stack Helm chart, with the following configuration to drop kube-system namespace logs in Promtail so that Loki doesn't ingest them:
promtail:
  enabled: true
  #
  # Enable Promtail service monitoring
  # serviceMonitor:
  #   enabled: true
  #
  # User defined pipeline stages
  pipelineStages:
    - docker: {}
    - drop:
        source: namespace
        expression: "kube-.*"
Inside the Promtail container these values are not being picked up. What am I missing?

I had the same issue with this configuration: pipelineStages at that level seems to be ignored. I solved the problem by moving it under config.snippets.
promtail:
  enabled: true
  config:
    snippets:
      pipelineStages:
        - docker: {}
        - drop:
            source: namespace
            expression: "kube-.*"
This worked for me and I hope it helps someone else who might run into the same problem. For more details, please check out this link: https://github.com/grafana/helm-charts/blob/main/charts/promtail/values.yaml
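To confirm the stage actually reached Promtail, you can render the chart locally or dump the config from inside the running container. A quick check, assuming a release named loki in the monitoring namespace and the promtail chart's default config path (all placeholders):

# Render the chart and look for the drop stage in the generated config
helm template loki grafana/loki-stack -f values.yaml | grep -B 1 -A 3 'drop:'
# Or inspect the file Promtail actually loaded
kubectl exec -n monitoring daemonset/loki-promtail -- cat /etc/promtail/promtail.yaml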

Related

Airflow installation with helm on kubernetes cluster is failing with db migration pod

Steps:
I downloaded the Helm chart from https://github.com/apache/airflow/releases/tag/helm-chart/1.8.0 (under Assets, "Source code" zip).
I added the following extra params to the default values.yaml:
createUserJob:
  useHelmHooks: false
migrateDatabaseJob:
  useHelmHooks: false
dags:
  gitSync:
    enabled: true
    # all data....
airflow:
  extraEnv:
    - name: AIRFLOW__API__AUTH_BACKEND
      value: "airflow.api.auth.backend.basic_auth"
ingress:
  web:
    tls:
      enabled: true
      secretName: wildcard-tls-cert
    host: "mydns.com"
    path: "/airflow"
I also need the KubernetesExecutor, hence I am using https://github.com/airflow-helm/charts/blob/main/charts/airflow/sample-values-KubernetesExecutor.yaml as k8sExecutor.yaml.
I am installing with the following command:
helm install my-airflow airflow-8.6.1/airflow/ --values values.yaml --values k8sExecutor.yaml -n mynamespace
It worked when I tried it the following way:
helm repo add airflow-repo https://airflow-helm.github.io/charts
helm install my-airflow airflow-repo/airflow --version 8.6.1 --values k8sExecutor.yaml --values values.yaml
Here values.yaml contains only the overridden parameters.
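If you want values.yaml to stay limited to overrides, you can export the chart's defaults and diff against your file (file names here are just placeholders):

# Dump the default values for the exact chart version being installed
helm show values airflow-repo/airflow --version 8.6.1 > defaults.yaml
# Keep in values.yaml only the keys that differ
diff defaults.yaml values.yaml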

Parse logs for specific container to add labels

I deployed Loki and Promtail using the Grafana Helm charts and I am struggling to configure them.
As a simple first step, I would like to add a specific label (a UUID). To do so, I use this values file:
config:
  lokiAddress: http://loki-headless.admin:3100/loki/api/v1/push
  snippets:
    extraScrapeConfigs: |
      - job_name: dashboard
        kubernetes_sd_configs:
          - role: pod
        pipeline_stages:
          - docker: {}
          - match:
              selector: '{container = "dashboard"}'
              stages:
                - regex:
                    expression: '(?P<loguuid>[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12})'
                - labels:
                    loguuid:
This is deployed with the command:
helm upgrade --install promtail -n admin grafana/promtail -f promtail.yaml
Of course, I still don't see the label in Grafana.
Can someone tell me what I did wrong?
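One way to narrow this down is to verify that the extra scrape config was rendered at all and that the dashboard job appears in Promtail's active targets. A sketch, assuming the chart's default config path and HTTP port 3101 (both may differ across chart versions):

# Dump the config Promtail is actually running with
kubectl exec -n admin daemonset/promtail -- cat /etc/promtail/promtail.yaml
# Promtail exposes its discovered targets on its HTTP port
kubectl port-forward -n admin daemonset/promtail 3101:3101
curl -s http://localhost:3101/targets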

Promtail ignores extraScrapeConfigs

I've been running some tests on a Kubernetes cluster and installed the Loki-Promtail stack by means of the loki/loki-stack Helm chart.
The default configuration works fine, but now I would like to add some custom behaviour to the standard Promtail config.
Following the Promtail documentation, I tried to customise values.yaml this way:
promtail:
  extraScrapeConfigs:
    - job_name: dlq-reader
      kubernetes_sd_configs:
        - role: pod
      pipeline_stages:
        - template:
            source: new_key
            template: 'test'
        - output:
            source: new_key
The expected behaviour is that every log line is replaced by the static text "test" (of course this is a silly test, just to get familiar with the environment).
What I see is that this configuration is correctly applied to the generated ConfigMap but has no effect: the log lines look exactly as if the additional configuration wasn't there.
The loki-stack chart version is 0.39.0, which installs Loki 1.5.0.
I cannot see any errors in the Loki/Promtail logs... Any suggestions?
I finally discovered the issue, so I am posting what I found in case it helps anyone else with the same problem.
To modify the log text or to add custom labels, the correct values.yaml section is pipelineStages rather than extraScrapeConfigs. The previous snippet must therefore be changed as follows:
promtail:
  pipelineStages:
    - docker: {}
    - match:
        selector: '{container="dlq-reader"}'
        stages:
          - template:
              source: new_key
              template: 'test'
          - output:
              source: new_key
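After upgrading the release, the change can be verified end to end (the release name loki-promtail and the namespace are placeholders):

# Check that the match stage landed in the rendered Promtail config
kubectl get configmap loki-promtail -n default -o yaml | grep -A 4 'match:'
# If the pods were not rolled on upgrade, restart them to reload the config
kubectl rollout restart daemonset/loki-promtail -n default
# In Grafana, the LogQL query {container="dlq-reader"} should now return only "test" lines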

k8s SCDF 2: how to configure a volumeMount in a task (no freetext)

When deploying a task, as a user, I need to configure Kubernetes params the way I would with "freetext".
The Kubernetes config is the following:
Secret (truncated): "kind": "Secret", "apiVersion": "v1", "metadata": { "name": "omni-secret", "namespace": "default", ...
bootstrap.yml:
spring:
  application:
    name: mk-adobe-analytics-task
  cloud:
    kubernetes:
      config:
        enabled: false
      secrets:
        enabled: true
        namespace: default
        paths:
          - /etc/secret-volume
log.info(AdobeAnalyticsConstants.LOG_RECOVERING_SECRET, env.getProperty("aws.bucketname"));
Deploying the task:
task launch test-007 --properties "deployer.*.kubernetes.volumeMounts=[{name: 'secret-volume', mountPath: '/etc/secret-volume'}], deployer.*.kubernetes.volumes=[{name: 'secret-volume', secret: {secretName: 'omni-secret'}}]"
Result:
2019-06-10 10:32:50.852 INFO 1 --- Recovering property "aws.bucketname": null
How can I map the Kubernetes volumes into a task? It works with a plain Kubernetes deployment, and it is OK when using streams.
It's not entirely clear where to start with your issue, but please take a look at the Kubernetes PropertySource implementations.
Under "Secrets PropertySource - Table 3.2. Properties" you can find other settings such as:
- spring.cloud.kubernetes.secrets.name
- spring.cloud.kubernetes.secrets.labels
- spring.cloud.kubernetes.secrets.enableApi
So please refer to the documentation.
It's also possible that your environment variable aws.bucketname wasn't configured properly.
Hope this helps.
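As an alternative to the volume mount, spring-cloud-kubernetes can also read the Secret through the API. A minimal bootstrap.yml sketch, assuming the Secret keys match your property names and the pod's service account is allowed to read Secrets:

spring:
  cloud:
    kubernetes:
      secrets:
        enabled: true
        enableApi: true        # read the Secret via the Kubernetes API instead of a mounted volume
        name: omni-secret      # the Secret shown above
        namespace: default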

Unable to get a Grafana helm-charts URL to work with subpath

I am setting up a Grafana server on my local kube cluster using helm-charts. I am trying to get it to work under a subpath in order to later add TLS in a production environment, but I am unable to access Grafana at http://localhost:3000/grafana.
I have tried almost all the recommendations out there on the internet about adding a subpath to the ingress, but nothing seems to work.
The Grafana login screen shows up at http://localhost:3000/ when I remove root_url: http://localhost:3000/grafana from values.yaml.
But when I add root_url: http://localhost:3000/grafana back into values.yaml, I see the error attached below (towards the end of this post).
root_url: http://localhost:3000/grafana and the ingress as:
ingress:
  enabled: true
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
  labels: {}
  path: /grafana
  hosts:
    - localhost
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local
resources: {}
I expect the http://localhost:3000/grafana URL to show me the login screen; instead I see the errors below:
If you're seeing this Grafana has failed to load its application files
1. This could be caused by your reverse proxy settings.
2. If you host grafana under subpath make sure your grafana.ini root_url setting includes subpath
3. If you have a local dev build make sure you build frontend using: yarn start, yarn start:hot, or yarn build
4. Sometimes restarting grafana-server can help
Can you please help me fix the ingress and root_url on values.yaml to get Grafana URL working at /grafana ?
As the documentation on running Grafana behind a reverse proxy explains, root_url should be configured in the grafana.ini file under the [server] section. You can modify your values.yaml to achieve this:
grafana.ini:
  ...
  server:
    root_url: http://localhost:3000/grafana/
Also, your ingress section in values.yaml should look like this:
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
  labels: {}
  path: /grafana/
  hosts:
    - ""
Hope it helps.
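One caveat: on ingress-nginx 0.22 and later, rewrite-target must reference an explicit capture group from the path, so the equivalent ingress values would look roughly like this (a sketch, not tested against your chart version):

ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  path: /grafana(/|$)(.*)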
I followed the exact steps mentioned by @coolinuxoid; however, I still faced an issue when trying to access the UI at http://localhost:3000/grafana/:
I was redirected to http://localhost:3000/grafana/login with no UI displayed.
A small modification let me access the UI through http://localhost:3000/grafana/.
In the grafana.ini configuration I added serve_from_sub_path: true, so my final grafana.ini looked like this:
grafana.ini:
  server:
    root_url: http://localhost:3000/grafana/
    serve_from_sub_path: true
The ingress configuration was exactly the same. I cannot say whether this is a version-specific issue, but I'm using Grafana v8.2.1.
You need to tell the Grafana application that it runs not under the root URL / (the default) but under a subpath. The easiest way is via GF_-prefixed env vars:
grafana:
  env:
    GF_SERVER_ROOT_URL: https://myhostname.example.com/grafana
    GF_SERVER_SERVE_FROM_SUB_PATH: 'true'
  ingress:
    enabled: true
    hosts:
      - myhostname.example.com
    path: /grafana($|(/.*))
    pathType: ImplementationSpecific
The above example works for the Kubernetes nginx ingress controller. Depending on the ingress controller you use, you may need
    path: /grafana
    pathType: Prefix
instead.
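Once it is deployed, a quick end-to-end check of the subpath (the hostname is the placeholder from above):

# Expect 200, or a 302 to /grafana/login, rather than Grafana's "failed to load" page
curl -sI https://myhostname.example.com/grafana/ | head -n 1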