I am using this guide to run filebeat on a Kubernetes cluster.
https://www.elastic.co/guide/en/beats/filebeat/master/running-on-kubernetes.html#_kubernetes_deploy_manifests
filebeat version: 6.6.0
I updated the config file with:
filebeat.yml: |-
  filebeat.config:
    inputs:
      # Mounted `filebeat-inputs` configmap:
      path: ${path.config}/inputs.d/*.yml
      # Reload inputs configs as they change:
      reload.enabled: false
    modules:
      path: ${path.config}/modules.d/*.yml
      # Reload module configs as they change:
      reload.enabled: false

  # To enable hints based autodiscover, remove `filebeat.config.inputs` configuration and uncomment this:
  #filebeat.autodiscover:
  #  providers:
  #    - type: kubernetes
  #      hints.enabled: true

  filebeat.modules:
    - module: nginx
      access:
        enabled: true
        var.paths: ["/var/log/nginx/access.log*"]
    - module: apache2
      access:
        enabled: true
        var.paths: ["/var/log/apache2/access.log*"]
      error:
        enabled: true
        var.paths: ["/var/log/apache2/error.log*"]
But the logs from the PHP application (/var/log/apache2/error.log) are not being fetched by Filebeat. I checked by exec-ing into the Filebeat pod, and the apache2 and nginx modules are not enabled.
How can I set this up correctly in the above YAML file?
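For reference, the modules an instance actually loads can be inspected and toggled from inside the pod; a minimal sketch, assuming the official image layout (/usr/share/filebeat as the working directory):

# Run from a shell inside the filebeat pod.
./filebeat modules list                     # shows Enabled / Disabled modules
./filebeat modules enable apache2 nginx     # writes <module>.yml into modules.d/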
UPDATE
I updated the Filebeat config file with the settings below:
filebeat.autodiscover:
  providers:
    - type: kubernetes
      hints.enabled: true
      templates:
        - condition:
          config:
            - type: docker
              containers.ids:
                - "${data.kubernetes.container.id}"
              exclude_lines: ["^\\s+[\\-`('.|_]"]  # drop asciiart lines
        - condition:
            equals:
              kubernetes.labels.app: "my-apache-app"
          config:
            - module: apache2
              log:
                input:
                  type: docker
                  containers.ids:
                    - "${data.kubernetes.container.id}"
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-modules
  namespace: default
  labels:
    k8s-app: filebeat
data:
  apache2.yml: |-
    - module: apache2
      access:
        enabled: true
      error:
        enabled: true
  nginx.yml: |-
    - module: nginx
      access:
        enabled: true
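For this ConfigMap to take effect, it has to be mounted into the Filebeat pod at ${path.config}/modules.d/. A minimal sketch of the wiring in the DaemonSet spec (the volume name and mount path are assumptions based on the standard Filebeat manifest, not taken from the question):

containers:
  - name: filebeat
    volumeMounts:
      - name: modules                        # assumed volume name
        mountPath: /usr/share/filebeat/modules.d
        readOnly: true
volumes:
  - name: modules
    configMap:
      defaultMode: 0600
      name: filebeat-modules                 # the ConfigMap defined above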
Now I am logging Apache errors to /dev/stderr so that I can see them through kubectl logs. The logs are showing up on the Kibana dashboard, but the apache2 module is still not visible.
I tried checking with ./filebeat modules list:
Enabled:
apache2
nginx
Disabled:
I am creating/configuring a service account (SA) in the Helm chart.
It is created (in the k8s namespace, as a secret); however, when I try to use its token in an HTTP/REST API call, e.g. to get folders, it says:
"invalid API key"
The idea is that whenever Grafana is installed from scratch, an SA should be provisioned. This SA token will then be used for accessing the REST API.
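For comparison, a Grafana service account token (they are prefixed with glsa_) is normally sent in the Authorization: Bearer header; a sketch, where the host and token are placeholders:

# Hypothetical call; replace the host and token with your own.
curl -H "Authorization: Bearer glsa_XXXXXXXXXXXX" \
  https://grafana.example.com/api/folders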
Chart.yaml
apiVersion: v2
name: kraken-observability-stack
version: 0.1.0
# We don't have a built-in-house app so we don't set
# appVersion: 0.1.0
kubeVersion: "^1.20.0-0"
description: The kraken observability stack for collecting and visualizing metrics, logs and traces related to CI pipelines.
home: https://docs.net/
dependencies:
  - name: grafana
    repository: https://grafana.github.io/helm-charts
    version: 6.50.x
  - name: mimir-distributed
    repository: https://grafana.github.io/helm-charts
    version: 3.2.x
  - name: loki-distributed
    repository: https://grafana.github.io/helm-charts
    version: 0.68.x
  - name: tempo-distributed
    repository: https://grafana.github.io/helm-charts
    version: 1.0.x
  - name: opentelemetry-collector
    repository: https://open-telemetry.github.io/opentelemetry-helm-charts
    version: 0.47.x
(partial) values.yaml
grafana:
  testFramework:
    enabled: false
  resources:
    limits:
      # maybe we shouldn't set cpu limits to avoid overbooking of resources.
      # cpu: 1000m
      memory: 1Gi
    requests:
      memory: 200Mi
      cpu: 200m
  grafana.ini:
    force_migration: true
    data_proxy:
      timeout: 60s
    #feature_toggles:
    #  enable: tempoServiceGraph,tempoSearch,tempoBackendSearch,tempoApmTable
    auth:
      login_cookie_name: "kraken_grafana_session"
    auth.anonymous:
      enabled: true
      org_name: 'CICDS Pipelines User'
      org_role: 'Viewer'
    analytics:
      reporting_enabled: false
      check_for_updates: false
      check_for_plugin_updates: false
      enable_feedback_links: false
    log:
      level: warn
      mode: console
    plugins:
      enable_alpha: true
      app_tls_skip_verify_insecure: true
      allow_loading_unsigned_plugins: true
  # podAnnotations for grafana to expose its own metrics
  podAnnotations:
    prometheus.io/scrape: "true"
    prometheus.io/schema: "http"
    prometheus.io/port: "http"
    prometheus.io/path: "/metrics"
  rbac:
    # disable Create and use RBAC resources
    create: false
    # disable Create PodSecurityPolicy (we don't have privileges for that)
    pspEnabled: false
    # disable to enforce AppArmor in created PodSecurityPolicy
    pspUseAppArmor: false
  serviceAccount:
    create: true
    name: grafana-init-sa
    labels: {kraken-init}
  replicas: 3
  image:
    #repository: docker-virtual.repository.net/grafana/grafana
    repository: grafana/grafana
  downloadDashboardsImage:
    repository: docker-virtual.repository.net/curlimages/curl
    tag: 7.85.0
    pullPolicy: IfNotPresent
  persistence:
    type: statefulset
    enabled: true
  initChownData:
    ## This allows the prometheus-server to be run with an arbitrary user
    ##
    enabled: false
    #image:
    #  repository: docker-virtual.repository.net/busybox
  # Administrator credentials when not using an existing secret (see below)
  adminUser: admin
  adminPassword: changeit
  # Use an existing secret for the admin user.
  # grafana-admin-credentials name is reserved by the operator and thus -creds
  admin:
    existingSecret: "grafana-admin-user"
    userKey: ADMIN_USER
    passwordKey: ADMIN_PASSWORD
  env:
    HTTP_PROXY: http://p985nst:p985nst@proxyvip-se.sbcore.net:8080/
    HTTPS_PROXY: http://p985nst:p985nst@proxyvip-se.sbcore.net:8080/
    NO_PROXY: .cluster.local,.net,.sbcore.net,.svc,10.0.0.0/8,172.30.0.0/16,localhost
  # ## Pass the plugins you want installed as a list.
  # ##
  # plugins:
  #   - digrich-bubblechart-panel
  #   - grafana-clock-panel
  #   - grafana-piechart-panel
  #   - natel-discrete-panel
  extraSecretMounts:
    - name: loki-credentials-secret-mount
      secretName: loki-credentials
      defaultMode: 0440
      mountPath: /etc/secrets/.loki_credentials
      readOnly: true
I have set up Grafana, Prometheus, and Loki (2.6.1) as follows on my Kubernetes (1.21) cluster:
helm upgrade --install promtail grafana/promtail -n monitoring -f monitoring/promtail.yaml
helm upgrade --install prom prometheus-community/kube-prometheus-stack -n monitoring --values monitoring/prom.yaml
helm upgrade --install loki grafana/loki -n monitoring --values monitoring/loki.yaml
with:
# monitoring/loki.yaml
loki:
  schemaConfig:
    configs:
      - from: 2020-09-07
        store: boltdb-shipper
        object_store: s3
        schema: v11
        index:
          prefix: loki_index_
          period: 24h
  storageConfig:
    aws:
      s3: s3://eu-west-3/cluster-loki-logs
    boltdb_shipper:
      shared_store: filesystem
      active_index_directory: /var/loki/index
      cache_location: /var/loki/cache
      cache_ttl: 168h
# monitoring/promtail.yaml
config:
  serverPort: 80
  clients:
    - url: http://loki:3100/loki/api/v1/push
# monitoring/prom.yaml
prometheus:
  prometheusSpec:
    serviceMonitorSelectorNilUsesHelmValues: false
    serviceMonitorSelector: {}
    serviceMonitorNamespaceSelector:
      matchLabels:
        monitored: "true"
grafana:
  sidecar:
    datasources:
      defaultDatasourceEnabled: true
  additionalDataSources:
    - name: Loki
      type: loki
      url: http://loki.monitoring:3100
I get data from my containers, but whenever I have a container logging in JSON format, I can't get access to the nested fields:
{app="product", namespace="api-dev"} | unpack | json
yields the log lines, but without the nested JSON fields extracted. My aim is, for example, to filter by log.severity.
Actually, following this answer, it turns out to be a Promtail scraping issue.
The current (promtail 6.3.1 / Loki 2.6.1) Helm chart default is to use cri as the pipeline stage, which expects logs of this form:
"2019-04-30T02:12:41.8443515Z stdout xx message"
I should have used docker, which expects JSON. Consequently, my promtail.yaml changed to:
config:
  serverPort: 80
  clients:
    - url: http://loki:3100/loki/api/v1/push
  snippets:
    pipelineStages:
      - docker: {}
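With the docker stage in place, the JSON payload reaches Loki intact and the json parser can extract the nested fields. Loki flattens nested keys with underscores, so log.severity becomes log_severity; a sketch of the resulting query (the label values are the question's own):

{app="product", namespace="api-dev"} | json | log_severity="error"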
I'm trying to install Traefik on a K8s cluster, using ArgoCD to deploy the official Helm chart. But I also need it to use an additional values.yml file. When I try to specify in the Application YAML which additional values file to use, it fails with a file-not-found error.
Here is what I'm using:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: argo-traefik-chart
  namespace: argocd
spec:
  project: default
  source:
    path: traefik
    repoURL: https://github.com/traefik/traefik-helm-chart.git
    targetRevision: HEAD
    helm:
      valueFiles:
        - /traefik-values.yml
  destination:
    server: https://kubernetes.default.svc
    namespace: 2195-leaf-dev-traefik
  syncPolicy:
    syncOptions:
      - CreateNamespace=true
    automated:
      prune: true
      selfHeal: true
Here is the traefik-values.yml file:
additionalArguments:
  # Configure your CertificateResolver here...
  #
  # HTTP Challenge
  # ---
  # Generic Example:
  #   - --certificatesresolvers.generic.acme.email=your-email@example.com
  #   - --certificatesresolvers.generic.acme.caServer=https://acme-v02.api.letsencrypt.org/directory
  #   - --certificatesresolvers.generic.acme.httpChallenge.entryPoint=web
  #   - --certificatesresolvers.generic.acme.storage=/ssl-certs/acme-generic.json
  #
  # Prod / Staging Example:
  #   - --certificatesresolvers.staging.acme.email=your-email@example.com
  #   - --certificatesresolvers.staging.acme.caServer=https://acme-staging-v02.api.letsencrypt.org/directory
  #   - --certificatesresolvers.staging.acme.httpChallenge.entryPoint=web
  #   - --certificatesresolvers.staging.acme.storage=/ssl-certs/acme-staging.json
  #   - --certificatesresolvers.production.acme.email=your-email@example.com
  #   - --certificatesresolvers.production.acme.caServer=https://acme-v02.api.letsencrypt.org/directory
  #   - --certificatesresolvers.production.acme.httpChallenge.entryPoint=web
  #   - --certificatesresolvers.production.acme.storage=/ssl-certs/acme-production.json
  #
  # DNS Challenge
  # ---
  # Cloudflare Example:
  #   - --certificatesresolvers.cloudflare.acme.dnschallenge.provider=cloudflare
  #   - --certificatesresolvers.cloudflare.acme.email=your-email@example.com
  #   - --certificatesresolvers.cloudflare.acme.dnschallenge.resolvers=1.1.1.1
  #   - --certificatesresolvers.cloudflare.acme.storage=/ssl-certs/acme-cloudflare.json
  #
  # Generic (replace with your DNS provider):
  #   - --certificatesresolvers.generic.acme.dnschallenge.provider=generic
  #   - --certificatesresolvers.generic.acme.email=your-email@example.com
  #   - --certificatesresolvers.generic.acme.storage=/ssl-certs/acme-generic.json

logs:
  # Configure log settings here...
  general:
    level: DEBUG

ports:
  # Configure your entrypoints here...
  web:
    # (optional) Permanent Redirect to HTTPS
    redirectTo: websecure
  websecure:
    tls:
      enabled: true
      # (optional) Set a Default CertResolver
      # certResolver: cloudflare

#env:
  # Set your environment variables here...
  #
  # DNS Challenge Credentials
  # ---
  # Cloudflare Example:
  #   - name: CF_API_EMAIL
  #     valueFrom:
  #       secretKeyRef:
  #         key: email
  #         name: cloudflare-credentials
  #   - name: CF_API_KEY
  #     valueFrom:
  #       secretKeyRef:
  #         key: apiKey
  #         name: cloudflare-credentials

# Just to do it for now
envFrom:
  - secretRef:
      name: traefik-secrets

# Disable Dashboard
ingressRoute:
  dashboard:
    enabled: true

# Persistent Storage
persistence:
  enabled: true
  name: ssl-certs
  size: 1Gi
  path: /ssl-certs

# deployment:
#   initContainers:
#     # The "volume-permissions" init container is required if you run into permission issues.
#     # Related issue: https://github.com/containous/traefik/issues/6972
#     - name: volume-permissions
#       image: busybox:1.31.1
#       command: ["sh", "-c", "chmod -Rv 600 /ssl-certs/*"]
#       volumeMounts:
#         - name: ssl-certs
#           mountPath: /ssl-certs

# Set Traefik as your default Ingress Controller, according to Kubernetes 1.19+ changes.
ingressClass:
  enabled: true
  isDefaultClass: true
The traefik-values.yml file is in the same sub-directory as this file. I fire this off with kubectl apply -f, but when I go to look at it in the Argo GUI, it shows an error. I'll paste the entire thing below, but it looks like the important part is this:
failed exit status 1: Error: open .traefik-values.yml: no such file or directory
It's putting a period before the name of the file. I tried different ways of specifying the file: .traefik-values.yml and ./traefik-values.yml. Those get translated to:
Error: open .traefik/.traefik-values.yml: no such file or directory
When I do a helm install using the exact same traefik-values.yml file, I get exactly what I expect. And when I run Argo without the alternate file, it deploys, but without the needed options, of course.
Any ideas?
I assume this is because Argo looks for the traefik-values.yml file in the repoURL (so not in the location where the Application file is), and it obviously doesn't exist there.
You can read more about this issue here. There you can also find a couple of proposed solutions. Some of them are:
- a plugin to do a helm template with your values files
- a custom CI pipeline to take the content of your values.yaml file and add it to the Application manifest
- putting the values directly in the Application manifest, skipping the values.yaml file altogether (see the sketch after this list)
- having a chart that depends on a chart like here (I don't like this one, as it downloads twice from two different locations, plus this issue)
- playing around with kustomize
- or waiting for ArgoCD 2.5; it seems it will include a native solution out of the box, according to the mentioned GitHub issue
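For the third option, here is a minimal sketch of the question's Application with the values inlined via spec.source.helm.values (the values shown are just a trimmed-down placeholder taken from the question's traefik-values.yml):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: argo-traefik-chart
  namespace: argocd
spec:
  project: default
  source:
    path: traefik
    repoURL: https://github.com/traefik/traefik-helm-chart.git
    targetRevision: HEAD
    helm:
      # Inline values replace the valueFiles entry, so nothing has to
      # exist inside the chart repository itself.
      values: |
        logs:
          general:
            level: DEBUG
        ingressClass:
          enabled: true
          isDefaultClass: true
  destination:
    server: https://kubernetes.default.svc
    namespace: 2195-leaf-dev-traefik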
Our applications are deployed in an AWS EKS cluster, and for certain reasons we need to write our app logs to a separate file, let's say ${POD_NAME}.applog, instead of stdout (we mounted /var/log/containers/ to the pod's /log folder, and the app writes /log/${POD_NAME}.applog). We are using Filebeat to send the logs to Elasticsearch, and Kibana for visualization. Our Filebeat config file looks like this:
data:
  filebeat.yml: |-
    filebeat.inputs:
      - type: log
        paths:
          - /var/log/containers/*.applog
        json.keys_under_root: true
        json.message_key: log

    processors:
      - add_cloud_metadata:
      - add_host_metadata:
This is working fine, but we realised we are missing the Kubernetes metadata in ES and Kibana. We do get the Kubernetes metadata when we include - type: container:
data:
  filebeat.yml: |-
    filebeat.inputs:
      - type: log
        paths:
          - /var/log/containers/*.applog
        json.keys_under_root: true
        json.message_key: log
      - type: container
        paths:
          - /var/log/containers/*.log

    processors:
      - add_kubernetes_metadata:
          host: ${NODE_NAME}
          matchers:
            - logs_path:
                logs_path: "/var/log/containers/"
So we tried adding the config like this
data:
  filebeat.yml: |-
    filebeat.inputs:
      - type: log
        paths:
          - /var/log/containers/*.applog
        json.keys_under_root: true
        json.message_key: log

    processors:
      - add_kubernetes_metadata:
          in_cluster: true
          host: ${NODE_NAME}
      - add_cloud_metadata:
      - add_host_metadata:
Still, we are not getting the Kubernetes metadata in Kibana. I tried trial and error, but nothing worked.
Can someone please help me get the Kubernetes metadata for a custom log file in Filebeat?
We met the same issue on EKS 1.24: there was no Kubernetes metadata in the log entries. Per the Add Kubernetes metadata doc, we solved it with the following config, on Filebeat version 7.17.8.
- type: container
  paths:
    - /var/log/pods/*/*/*.log
  exclude_files: ['filebeat.*',
                  'logstash.*',
                  'kube.*',
                  'cert-manager.*']
  processors:
    - add_kubernetes_metadata:
        in_cluster: true
        host: ${NODE_NAME}
        default_indexers.enabled: false
        default_matchers.enabled: false
        indexers:
          - pod_uid:
        matchers:
          - logs_path:
              logs_path: '/var/log/pods/'
              resource_type: 'pod'
The default indexers and matchers should be disabled. The pod_uid indexer then identifies the pod metadata using the UID of the pod, which the logs_path matcher with resource_type: 'pod' extracts from the /var/log/pods/ path. You can find the logs_path matcher config here.
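Since the processor references ${NODE_NAME}, the Filebeat DaemonSet has to export that variable; a minimal sketch using the downward API (the container name is assumed from the standard Filebeat manifest):

containers:
  - name: filebeat
    env:
      - name: NODE_NAME
        valueFrom:
          fieldRef:
            fieldPath: spec.nodeName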
I'm new to Helm and Kubernetes and cannot figure out how to use helm install --name kibana --namespace logging stable/kibana with the Logtrail plugin enabled. I can see there's an option in the values.yaml file to enable plugins during installation but I cannot figure out how to set it.
I've tried this without success:
helm install --name kibana --namespace logging stable/kibana \
--set plugins.enabled=true,plugins.value=logtrail,0.1.30,https://github.com/sivasamyk/logtrail/releases/download/v0.1.30/logtrail-6.5.4-0.1.30.zip
Update:
As Ryan suggested, it's best to provide such complex settings via a custom values file. But as it turned out, the settings mentioned above are not the only ones needed to get the Logtrail plugin working in Kibana. Some configuration for Logtrail must be set before doing the helm install, and here's how. In your custom values file, set the following:
extraConfigMapMounts:
  - name: logtrail
    configMap: logtrail
    mountPath: /usr/share/kibana/plugins/logtrail/logtrail.json
    subPath: logtrail.json
After that the full content of your custom values file should look similar to this:
image:
  repository: "docker.elastic.co/kibana/kibana-oss"
  tag: "6.5.4"
  pullPolicy: "IfNotPresent"

commandline:
  args: []

env: {}
# All Kibana configuration options are adjustable via env vars.
# To adjust a config option to an env var uppercase + replace `.` with `_`
# Ref: https://www.elastic.co/guide/en/kibana/current/settings.html
#
# ELASTICSEARCH_URL: http://elasticsearch-client:9200
# SERVER_PORT: 5601
# LOGGING_VERBOSE: "true"
# SERVER_DEFAULTROUTE: "/app/kibana"

files:
  kibana.yml:
    ## Default Kibana configuration from kibana-docker.
    server.name: kibana
    server.host: "0"
    elasticsearch.url: http://elasticsearch:9200
    ## Custom config properties below
    ## Ref: https://www.elastic.co/guide/en/kibana/current/settings.html
    # server.port: 5601
    # logging.verbose: "true"
    # server.defaultRoute: "/app/kibana"

deployment:
  annotations: {}

service:
  type: ClusterIP
  externalPort: 443
  internalPort: 5601
  # authProxyPort: 5602 To be used with authProxyEnabled and a proxy extraContainer
  ## External IP addresses of service
  ## Default: nil
  ##
  # externalIPs:
  #   - 192.168.0.1
  #
  ## LoadBalancer IP if service.type is LoadBalancer
  ## Default: nil
  ##
  # loadBalancerIP: 10.2.2.2
  annotations: {}
    # Annotation example: setup ssl with aws cert when service.type is LoadBalancer
    # service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-1:EXAMPLE_CERT
  labels: {}
    ## Label example: show service URL in `kubectl cluster-info`
    # kubernetes.io/cluster-service: "true"
  ## Limit load balancer source ips to list of CIDRs (where available)
  # loadBalancerSourceRanges: []

ingress:
  enabled: false
  # hosts:
  #   - kibana.localhost.localdomain
  #   - localhost.localdomain/kibana
  # annotations:
  #   kubernetes.io/ingress.class: nginx
  #   kubernetes.io/tls-acme: "true"
  # tls:
  #   - secretName: chart-example-tls
  #     hosts:
  #       - chart-example.local

serviceAccount:
  # Specifies whether a service account should be created
  create: false
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  # If set and create is false, the service account must be existing
  name:

livenessProbe:
  enabled: false
  initialDelaySeconds: 30
  timeoutSeconds: 10

readinessProbe:
  enabled: false
  initialDelaySeconds: 30
  timeoutSeconds: 10
  periodSeconds: 10
  successThreshold: 5

# Enable an authproxy. Specify container in extraContainers
authProxyEnabled: false

extraContainers: |
# - name: proxy
#   image: quay.io/gambol99/keycloak-proxy:latest
#   args:
#     - --resource=uri=/*
#     - --discovery-url=https://discovery-url
#     - --client-id=client
#     - --client-secret=secret
#     - --listen=0.0.0.0:5602
#     - --upstream-url=http://127.0.0.1:5601
#   ports:
#     - name: web
#       containerPort: 9090

resources: {}
# limits:
#   cpu: 100m
#   memory: 300Mi
# requests:
#   cpu: 100m
#   memory: 300Mi

priorityClassName: ""

# Affinity for pod assignment
# Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
# affinity: {}

# Tolerations for pod assignment
# Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
tolerations: []

# Node labels for pod assignment
# Ref: https://kubernetes.io/docs/user-guide/node-selection/
nodeSelector: {}

podAnnotations: {}

replicaCount: 1

revisionHistoryLimit: 3

# To export a dashboard from a running Kibana 6.3.x use:
# curl --user <username>:<password> -XGET https://kibana.yourdomain.com:5601/api/kibana/dashboards/export?dashboard=<some-dashboard-uuid> > my-dashboard.json
# A dashboard is defined by a name and a string with the json payload or the download url
dashboardImport:
  timeout: 60
  xpackauth:
    enabled: false
    username: myuser
    password: mypass
  dashboards: {}
  # k8s: https://raw.githubusercontent.com/monotek/kibana-dashboards/master/k8s-fluentd-elasticsearch.json
# List of plugins to install using initContainer
# NOTE : We notice that lower resource constraints given to the chart + plugins are likely not going to work well.
plugins:
  # set to true to enable plugins installation
  enabled: true
  # set to true to remove all kibana plugins before installation
  reset: false
  # Use <plugin_name,version,url> to add/upgrade plugin
  values:
    - logtrail,0.1.30,https://github.com/sivasamyk/logtrail/releases/download/v0.1.30/logtrail-6.5.4-0.1.30.zip
    # - elastalert-kibana-plugin,1.0.1,https://github.com/bitsensor/elastalert-kibana-plugin/releases/download/1.0.1/elastalert-kibana-plugin-1.0.1-6.4.2.zip
    # - logtrail,0.1.30,https://github.com/sivasamyk/logtrail/releases/download/v0.1.30/logtrail-6.4.2-0.1.30.zip
    # - other_plugin

persistentVolumeClaim:
  # set to true to use pvc
  enabled: false
  # set to true to use your own pvc
  existingClaim: false
  annotations: {}
  accessModes:
    - ReadWriteOnce
  size: "5Gi"
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner. (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  # storageClass: "-"

# default security context
securityContext:
  enabled: false
  allowPrivilegeEscalation: false
  runAsUser: 1000
  fsGroup: 2000

extraConfigMapMounts:
  - name: logtrail
    configMap: logtrail
    mountPath: /usr/share/kibana/plugins/logtrail/logtrail.json
    subPath: logtrail.json
And the last thing you should do is add this ConfigMap resource to Kubernetes:
apiVersion: v1
kind: ConfigMap
metadata:
  name: logtrail
  namespace: logging
data:
  logtrail.json: |
    {
      "version": 1,
      "index_patterns": [
        {
          "es": {
            "default_index": "logstash-*"
          },
          "tail_interval_in_seconds": 10,
          "es_index_time_offset_in_seconds": 0,
          "display_timezone": "local",
          "display_timestamp_format": "MMM DD HH:mm:ss",
          "max_buckets": 500,
          "default_time_range_in_days": 0,
          "max_hosts": 100,
          "max_events_to_keep_in_viewer": 5000,
          "fields": {
            "mapping": {
              "timestamp": "@timestamp",
              "hostname": "kubernetes.host",
              "program": "kubernetes.pod_name",
              "message": "log"
            },
            "message_format": "{{{log}}}"
          },
          "color_mapping": {}
        }
      ]
    }
After that you're ready to run helm install with the values file specified via the -f flag.
Getting input with --set to match what the example in the values file has is a bit tricky. Following the example, we want the values to be:
plugins:
  enabled: true
  values:
    - logtrail,0.1.30,https://github.com/sivasamyk/logtrail/releases/download/v0.1.30/logtrail-6.4.2-0.1.30.zip
The plugins.values entry here is tricky because it is an array, which means you need to enclose it in {}. And the relevant entry contains commas, which have to be escaped with backslashes. To get it to match you can use:
helm install --name kibana --namespace logging stable/kibana --set plugins.enabled=true,plugins.values={"logtrail\,0.1.30\,https://github.com/sivasamyk/logtrail/releases/download/v0.1.30/logtrail-6.5.4-0.1.30.zip"}
If you add --dry-run --debug, then you can see the computed values for any command you run, including with --set, so this helps check the match. This kind of value is easier to set with a custom values file referenced with -f, as that avoids having to work out how the --set evaluates to values.
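For example, with the custom values file above saved as logtrail-values.yaml and the ConfigMap saved as logtrail-configmap.yaml (both file names are assumptions for this sketch), the whole flow reduces to:

# Create the Logtrail ConfigMap first, then install the chart with -f.
kubectl apply -f logtrail-configmap.yaml
helm install --name kibana --namespace logging stable/kibana \
  -f logtrail-values.yaml --dry-run --debug   # inspect computed values first
helm install --name kibana --namespace logging stable/kibana \
  -f logtrail-values.yaml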