I have set up Grafana, Prometheus, and Loki (2.6.1) as follows on my Kubernetes (1.21) cluster:
helm upgrade --install promtail grafana/promtail -n monitoring -f monitoring/promtail.yaml
helm upgrade --install prom prometheus-community/kube-prometheus-stack -n monitoring --values monitoring/prom.yaml
helm upgrade --install loki grafana/loki -n monitoring --values monitoring/loki.yaml
with:
# monitoring/loki.yaml
loki:
  schemaConfig:
    configs:
      - from: 2020-09-07
        store: boltdb-shipper
        object_store: s3
        schema: v11
        index:
          prefix: loki_index_
          period: 24h
  storageConfig:
    aws:
      s3: s3://eu-west-3/cluster-loki-logs
    boltdb_shipper:
      shared_store: filesystem
      active_index_directory: /var/loki/index
      cache_location: /var/loki/cache
      cache_ttl: 168h
# monitoring/promtail.yaml
config:
  serverPort: 80
  clients:
    - url: http://loki:3100/loki/api/v1/push
# monitoring/prom.yaml
prometheus:
  prometheusSpec:
    serviceMonitorSelectorNilUsesHelmValues: false
    serviceMonitorSelector: {}
    serviceMonitorNamespaceSelector:
      matchLabels:
        monitored: "true"
grafana:
  sidecar:
    datasources:
      defaultDatasourceEnabled: true
  additionalDataSources:
    - name: Loki
      type: loki
      url: http://loki.monitoring:3100
I get data from my containers, but whenever a container logs in JSON format, I can't get access to the nested fields:
{app="product", namespace="api-dev"} | unpack | json
yields the log lines without the nested fields extracted.
My aim is, for example, to filter by log.severity.
Following this answer, it turns out to be a Promtail scraping issue.
The current Helm chart default (promtail-6.3.1 / Promtail 2.6.1) is to use cri as the pipeline stage, which expects logs of this form:
"2019-04-30T02:12:41.8443515Z stdout xx message"
I should have used docker, which expects JSON; consequently, my promtail.yaml changed to:
config:
  serverPort: 80
  clients:
    - url: http://loki:3100/loki/api/v1/push
  snippets:
    pipelineStages:
      - docker: {}
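With the docker stage, the stored log line is the JSON payload itself, so the LogQL json parser can extract its fields; nested keys are flattened with underscores, so log.severity becomes log_severity. A sketch of the resulting filter query (the value "error" is just an assumed example):
{app="product", namespace="api-dev"} | json | log_severity="error"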
On a k3s cluster, I have installed Velero with Helm:
helm install vmware-tanzu/velero --namespace velero-minio -f helm-custom-values-minio.yaml --generate-name --create-namespace
and
helm install vmware-tanzu/velero --namespace velero-aws -f helm-custom-values-aws.yaml --generate-name --create-namespace
Custom helm values:
helm-custom-values-minio.yaml
configuration:
  provider: aws
  backupStorageLocation:
    bucket: k3s-backup
    name: minio
    default: false
    config:
      region: minio
      s3ForcePathStyle: true
      s3Url: http://10.10.5.15:9009
  volumeSnapshotLocation:
    name: minio
    config:
      region: minio
credentials:
  secretContents:
    cloud: |
      [default]
      aws_access_key_id=minioadm
      aws_secret_access_key=<password>
initContainers:
  - name: velero-plugin-for-aws
    image: velero/velero-plugin-for-aws:latest
    imagePullPolicy: IfNotPresent
    volumeMounts:
      - mountPath: /target
        name: plugins
snapshotsEnabled: true
deployRestic: true
and helm-custom-values-aws.yaml
configuration:
  provider: aws
  backupStorageLocation:
    name: aws-s3
    bucket: k3s-backup-aws
    default: false
    provider: aws
    config:
      region: us-east-1
      s3ForcePathStyle: false
  volumeSnapshotLocation:
    name: aws-s3
    provider: aws
    config:
      region: us-east-1
credentials:
  secretContents:
    cloud: |
      [default]
      aws_access_key_id=A..............MJ
      aws_secret_access_key=qZ79rA/yVUq2c................xnIA
initContainers:
  - name: velero-plugin-for-aws
    image: velero/velero-plugin-for-aws:latest
    imagePullPolicy: IfNotPresent
    volumeMounts:
      - mountPath: /target
        name: plugins
snapshotsEnabled: true
deployRestic: true
velero backup jobs:
velero create backup k3s-mongodb-restic-minio --include-namespaces mongodb --default-volumes-to-restic=true --storage-location minio -n velero-minio
velero create backup k3s-mongodb-restic-aws --include-namespaces mongodb --default-volumes-to-restic=true --storage-location aws-s3 -n velero-aws
....
They all failed:
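The per-volume status can be inspected with velero backup describe; a sketch of the command, reusing one of the job names above:
velero backup describe k3s-mongodb-restic-minio --details -n velero-minio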
Restic Backups:
Failed:
mongodb/mongodb-cluster-0: agent-scripts, data-volume, healthstatus, hooks, logs-volume, mongodb-cluster-keyfile, tmp
mongodb/mongodb-cluster-1: agent-scripts, data-volume, healthstatus, hooks, logs-volume, mongodb-cluster-keyfile, tmp
time="2022-10-17T17:42:32Z" level=error msg="Error backing up item" backup=velero-minio/k3s-mongodb-restic-minio error="pod volume backup failed: running Restic backup, stderr=Fatal: unable to open config file: Stat: The Access Key Id you provided does not exist in our records.\nIs there a repository at the following location?\ns3:http://10.10.5.15:9009/k3s-backup/restic/mongodb\n: exit status 1" error.file="/go/src/github.com/vmware-tanzu/velero/pkg/restic/backupper.go:199" error.function="github.com/vmware-tanzu/velero/pkg/restic.(*backupper).BackupPodVolumes" logSource="pkg/backup/backup.go:417" name=mongodb-cluster-0
...
velero get backup-locations -n velero-aws
NAME PROVIDER BUCKET/PREFIX PHASE LAST VALIDATED ACCESS MODE DEFAULT
aws-s3 aws k3s-backup-aws Available 2022-10-17 14:12:46 -0400 EDT ReadWrite
...
velero get backup-locations -n velero-minio
NAME PROVIDER BUCKET/PREFIX PHASE LAST VALIDATED ACCESS MODE DEFAULT
minio aws k3s-backup Available 2022-10-17 14:16:25 -0400 EDT ReadWrite
The Velero backup part works without errors, but Restic fails for all my jobs (MongoDB is just one example).
It looks like Restic can't create snapshots for my NFS PVCs.
What am I doing wrong?
It looks like Velero doesn't work well with multiple installations; at least the Restic part fails (in my case, two instances in the namespaces velero-aws and velero-minio).
So, I installed only one instance of Velero, configured to work with MinIO.
I removed --default-volumes-to-restic=true from the backup job configuration.
I used opt-in pod volume backup with the Restic integration.
Each pod that has a PVC volume needs to be annotated, like the following:
kubectl -n mongodb annotate pod/mongodb-cluster-0 backup.velero.io/backup-volumes=logs-volume,data-volume
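With the annotation in place, the backup job is the same as before, just without the --default-volumes-to-restic flag; for example, reusing the MinIO job from above:
velero create backup k3s-mongodb-restic-minio --include-namespaces mongodb --storage-location minio -n velero-minio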
I have not tried velero-pvc-watcher; it probably works as well.
Now the backup works with no errors.
I deployed Loki and Promtail using the Grafana Helm chart and I am struggling to configure it.
As a simple configuration, I would like to add a specific label (a UUID). To do so, I use the following YAML:
config:
  lokiAddress: http://loki-headless.admin:3100/loki/api/v1/push
  snippets:
    extraScrapeConfigs: |
      - job_name: dashboard
        kubernetes_sd_configs:
          - role: pod
        pipeline_stages:
          - docker: {}
          - match:
              selector: '{container = "dashboard"}'
              stages:
                - regex:
                    expression: '(?P<loguuid>[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12})'
                - labels:
                    loguuid:
This is deployed with the command:
helm upgrade --install promtail -n admin grafana/promtail -f promtail.yaml
Of course, I still don't have the label in Grafana.
Can someone tell me what I did wrong?
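One thing worth checking (a debugging suggestion, assuming the chart renders the final config into a Secret named promtail in the release namespace) is whether the extra scrape config actually ended up in the running configuration:
kubectl -n admin get secret promtail -o jsonpath='{.data.promtail\.yaml}' | base64 -d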
It seems that excluding logs from a pod using the configuration below does not work.
extrascrapeconfig.yaml:
- job_name: kubernetes-pods-app
  pipeline_stages:
    - docker: {}
  kubernetes_sd_configs:
    - role: pod
  relabel_configs:
    - action: drop
      regex: .+
      source_labels:
        - __meta_kubernetes_pod_label_name
    ###
    - action: keep
      regex: ambassador
      source_labels:
        - __meta_kubernetes_namespace
        - __meta_kubernetes_pod_namespace
    ###
Steps to reproduce the behavior:
Deployed the loki-stack Helm chart:
helm install loki grafana/loki-stack --version "${HELM_CHART_VERSION}" \
--namespace=monitoring \
--create-namespace \
-f "loki-stack-values-v${HELM_CHART_VERSION}.yaml"
loki-stack-values-v2.4.1.yaml:
loki:
  enabled: true
  config:
promtail:
  enabled: true
  extraScrapeConfigs: extrascrapeconfig.yaml
fluent-bit:
  enabled: false
grafana:
  enabled: false
prometheus:
  enabled: false
Attach Grafana to the Loki data source.
Query {namespace="kube-system"} in Grafana Loki.
Result: the logs are shown.
Expected behavior: no logs should be shown.
Environment:
Infrastructure: Kubernetes
Deployment tool: Helm
What am I missing?
If you need Helm to pick up a specific file and pass its contents as a value, you should not reference that file from inside the values YAML, but pass it via another flag when installing or upgrading the release.
The command you are using just applies the Helm values as-is, since the -f flag does not parse other files into the values by itself. Instead, use --set-file, which works similarly to --set but reads the value's content from the given file.
Your command would now look like this:
helm install loki grafana/loki-stack --version "${HELM_CHART_VERSION}" \
--namespace=monitoring \
--create-namespace \
-f "loki-stack-values-v${HELM_CHART_VERSION}.yaml" \
--set-file promtail.extraScrapeConfigs=extrascrapeconfig.yaml
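As a sanity check, you can confirm that the file's contents were merged into the release values, for example:
helm get values loki -n monitoring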
I deploy Argo CD with Helm 3 to my cluster:
helm upgrade --install argo argo/argo-cd -n argocd -f argovalues.yaml
My argovalues.yaml file is the following:
global.image.tag: "v2.0.1"
server.service.type: "NodePort"
server.name: "kabamaru"
server.ingress.enabled: true
server.metrics.enabled: true
server.additionalApplications: |
  - name: guestbook
    namespace: argocd
    additionalLabels: {}
    additionalAnnotations: {}
    project: default
    source:
      repoURL: https://github.com/argoproj/argocd-example-apps.git
      targetRevision: HEAD
      path: guestbook
      directory:
        recurse: true
    destination:
      server: https://kubernetes.default.svc
      namespace: argocd
    syncPolicy:
      automated:
        prune: false
        selfHeal: false
and... none of these values is applied. It is very frustrating.
If I do the following
helm upgrade --install argo argo/argo-cd -n argocd --set server.name=hello
it works and changes successfully!
What on earth is going on?
I'm using the helm upgrade command with the --set-file option and it works correctly:
helm upgrade --install argo argo/argo-cd -n argocd --set-file argovalues.yaml
There are other options like --set and --set-string that work in a similar way.
I think that can help you resolve your issue.
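For reference, --set-file expects key=path pairs, with the file's contents becoming the value of that key; a generic sketch with placeholders rather than real chart keys:
helm upgrade --install argo argo/argo-cd -n argocd --set-file <key>=<path/to/file>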
The solution
I managed to make it work like this:
argovalues.yaml
server:
  service:
    type: "NodePort" # by default 80:30080/TCP,443:30443/TCP
  image:
    tag: "v2.0.1"
  autoscaling:
    enabled: true
  ingress:
    enabled: true
  metrics:
    enabled: true
  additionalApplications:
    - name: bootstrap
      namespace: argocd
      additionalLabels: {}
      additionalAnnotations: {}
      project: default
      source:
        repoURL: https://github.com/YOUR_REPO/argotest.git
        targetRevision: HEAD
        path: bootstrap
        directory:
          recurse: true
      destination:
        server: https://kubernetes.default.svc
        namespace: argocd
      syncPolicy:
        automated:
          prune: true
          selfHeal: false
Note: the wrong indentation can drive you crazy!
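A quick way to catch indentation problems before they hit the cluster is a dry run that prints the computed values and rendered manifests, for example:
helm upgrade --install argo argo/argo-cd -n argocd -f argovalues.yaml --dry-run --debug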
I am using this guide to run Filebeat on a Kubernetes cluster:
https://www.elastic.co/guide/en/beats/filebeat/master/running-on-kubernetes.html#_kubernetes_deploy_manifests
filebeat version: 6.6.0
I updated the config file with:
filebeat.yml: |-
  filebeat.config:
    inputs:
      # Mounted `filebeat-inputs` configmap:
      path: ${path.config}/inputs.d/*.yml
      # Reload inputs configs as they change:
      reload.enabled: false
    modules:
      path: ${path.config}/modules.d/*.yml
      # Reload module configs as they change:
      reload.enabled: false
  # To enable hints based autodiscover, remove `filebeat.config.inputs` configuration and uncomment this:
  #filebeat.autodiscover:
  #  providers:
  #    - type: kubernetes
  #      hints.enabled: true
  filebeat.modules:
    - module: nginx
      access:
        enabled: true
        var.paths: ["/var/log/nginx/access.log*"]
    - module: apache2
      access:
        enabled: true
        var.paths: ["/var/log/apache2/access.log*"]
      error:
        enabled: true
        var.paths: ["/var/log/apache2/error.log*"]
But the logs from the PHP application (/var/log/apache2/error.log) are not being fetched by Filebeat. I checked by exec-ing into the Filebeat pod, and I see that the apache2 and nginx modules are not enabled.
How can I set this up correctly in the above YAML file?
UPDATE
I updated the Filebeat config file with the settings below:
filebeat.autodiscover:
  providers:
    - type: kubernetes
      hints.enabled: true
      templates:
        - condition:
          config:
            - type: docker
              containers.ids:
                - "${data.kubernetes.container.id}"
              exclude_lines: ["^\\s+[\\-`('.|_]"] # drop asciiart lines
        - condition:
            equals:
              kubernetes.labels.app: "my-apache-app"
          config:
            - module: apache2
              log:
                input:
                  type: docker
                  containers.ids:
                    - "${data.kubernetes.container.id}"
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-modules
  namespace: default
  labels:
    k8s-app: filebeat
data:
  apache2.yml: |-
    - module: apache2
      access:
        enabled: true
      error:
        enabled: true
  nginx.yml: |-
    - module: nginx
      access:
        enabled: true
Now I am logging Apache errors to /dev/stderr so that I can see them through kubectl logs. The logs are coming through to the Kibana dashboard, but the apache module is still not visible.
I tried checking with ./filebeat modules list:
Enabled:
apache2
nginx
Disabled: