I've got a Kubernetes cluster with the ECK operator deployed, and I also deploy Filebeat to the cluster. Here's the manifest:
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: filebeat
  namespace: logging-dev
spec:
  type: filebeat
  version: 8.2.0
  elasticsearchRef:
    name: elastic-logging-dev
  kibanaRef:
    name: kibana
  config:
    filebeat:
      autodiscover:
        providers:
          - type: kubernetes
            node: ${NODE_NAME}
            hints:
              enabled: true
              default_config:
                type: container
                paths:
                  - /var/log/containers/*${data.kubernetes.container.id}.log
    processors:
      - add_cloud_metadata: {}
      - add_host_metadata: {}
  daemonSet:
    podTemplate:
      metadata:
        annotations:
          co.elastic.logs/enabled: "false"
      spec:
        serviceAccountName: filebeat
        automountServiceAccountToken: true
        terminationGracePeriodSeconds: 30
        dnsPolicy: ClusterFirstWithHostNet
        hostNetwork: true # Allows to provide richer host metadata
        tolerations:
          - key: node-role.kubernetes.io/control-plane
            effect: NoSchedule
          - key: node-role.kubernetes.io/master
            effect: NoSchedule
        containers:
          - name: filebeat
            securityContext:
              runAsUser: 0
            volumeMounts:
              - name: varlogcontainers
                mountPath: /var/log/containers
              - name: varlogpods
                mountPath: /var/log/pods
              - name: varlibdockercontainers
                mountPath: /var/lib/docker/containers
            env:
              - name: NODE_NAME
                valueFrom:
                  fieldRef:
                    fieldPath: spec.nodeName
            resources:
              requests:
                memory: 500Mi
                cpu: 100m
              limits:
                memory: 500Mi
                cpu: 200m
        volumes:
          - name: varlogcontainers
            hostPath:
              path: /var/log/containers
          - name: varlogpods
            hostPath:
              path: /var/log/pods
          - name: varlibdockercontainers
            hostPath:
              path: /var/lib/docker/containers
It's working very well, but I also want to parse Kubernetes component logs, e.g.:
E0819 18:57:51.309161 1 watcher.go:327] failed to prepare current and previous objects: conversion webhook for minio.min.io/v2, Kind=Tenant failed: Post "https://operator.minio-operator.svc:4222/webhook/v1/crd-conversion?timeout=30s": dial tcp 10.233.8.119:4222: connect: connection refused
How can I do that?
In Fluentd it's quite simple:
<filter kubernetes.var.log.containers.kube-apiserver-*_kube-system_*.log>
  @type parser
  key_name log
  format /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)$/
  time_format %m%d %H:%M:%S.%N
  types pid:integer
  reserve_data true
  remove_key_name_field false
</filter>
But I cannot find any example or tutorial showing how to do this with Filebeat.
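My current idea (untested; the tokenizer, the klog.* field names, and the assumption that the dissect processor in this Filebeat version supports the -> padding modifier are all mine) is to keep the autodiscover block as-is and add a conditional dissect processor for the control-plane pods, something like:

  config:
    filebeat:
      autodiscover:
        # ... unchanged ...
    processors:
      - add_cloud_metadata: {}
      - add_host_metadata: {}
      - dissect:
          # only parse lines coming from kube-apiserver pods
          when:
            contains:
              kubernetes.pod.name: "kube-apiserver"
          # "E0819 18:57:51.309161       1 watcher.go:327] failed to prepare ..."
          tokenizer: "%{level_and_date} %{time->} %{pid} %{source}] %{msg}"
          field: "message"
          target_prefix: "klog"

Unlike the Fluentd regex this doesn't split the leading severity letter from the date, and klog.pid stays a string (a convert processor could cast it), but would something along these lines be the right direction?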
Filebeat 7.12.1
ECK operator 2.2
I'm trying to set up Filebeat for the nginx-ingress access logs in my ECK stack (installed in GKE). I can access the logs directly in the pod, but nothing is coming into my Kibana dashboard.
I have set up two filebeat.autodiscover.providers:
The first has hints.enabled: true, which looks for all containers with co.elastic.logs/enabled: "true".
The second checks for containers whose name contains ingress. I can confirm that the name of the pod is nginx-ingress-ingress-nginx-controller-xxxx-xxxxx.
Below is my Filebeat autodiscover config:
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: filebeat
  namespace: search
spec:
  type: filebeat
  version: 7.12.1
  elasticsearchRef:
    name: elastic-search
  kibanaRef:
    name: kibana-web
  config:
    filebeat.autodiscover.providers:
      - node: ${NODE_NAME}
        type: kubernetes
        hints.enabled: true
        #add_resource_metadata.namespace.enabled: true
        hints.default_config.enabled: "false"
      - node: ${NODE_NAME}
        type: kubernetes
        #add_resource_metadata.namespace.enabled: true
        hints.default_config.enabled: "false"
        templates:
          - condition:
              contains:
                kubernetes.container.name: ingress
            config:
              - paths: ["/var/log/containers/*${data.kubernetes.container.id}.log"]
                type: container
                exclude_lines: ["^\\s+[\\-`('.|_]"]
    processors:
      - add_cloud_metadata: {}
      - add_host_metadata: {}
  daemonSet:
    podTemplate:
      spec:
        serviceAccountName: filebeat
        automountServiceAccountToken: true
        terminationGracePeriodSeconds: 30
        dnsPolicy: ClusterFirstWithHostNet
        #hostNetwork: true # Allows to provide richer host metadata
        containers:
          - name: filebeat
            securityContext:
              runAsUser: 0
              # If using Red Hat OpenShift uncomment this:
              #privileged: true
            volumeMounts:
              - name: varlogcontainers
                mountPath: /var/log/containers
              - name: varlogpods
                mountPath: /var/log/pods
              - name: varlibdockercontainers
                mountPath: /var/lib/docker/containers
            env:
              - name: NODE_NAME
                valueFrom:
                  fieldRef:
                    fieldPath: spec.nodeName
            resources:
              requests:
                memory: 200Mi
                cpu: 0.2
              limits:
                memory: 300Mi
                cpu: 0.4
        volumes:
          - name: varlogcontainers
            hostPath:
              path: /var/log/containers
          - name: varlogpods
            hostPath:
              path: /var/log/pods
          - name: varlibdockercontainers
            hostPath:
              path: /var/lib/docker/containers
Adding the answer here in case someone else runs into this issue.
The issue was how I was checking the contains condition: it should have been kubernetes.pod.name instead of kubernetes.container.name. So I replaced

- condition:
    contains:
      kubernetes.container.name: ingress

with

- condition:
    contains:
      kubernetes.pod.name: ingress

in the above file and things started to work!
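For reference, the second provider with the corrected condition then looks like this (everything else unchanged):

- node: ${NODE_NAME}
  type: kubernetes
  hints.default_config.enabled: "false"
  templates:
    - condition:
        contains:
          kubernetes.pod.name: ingress
      config:
        - type: container
          paths: ["/var/log/containers/*${data.kubernetes.container.id}.log"]
          exclude_lines: ["^\\s+[\\-`('.|_]"]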
I want to deploy Filebeat with three log definitions together and send them to different output targets.
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      containers:
        - name: filebeat
          image: docker.elastic.co/beats/filebeat:7.10.0
          args: [
            "-c", "/etc/logs1.yml",
            "-c", "/etc/logs2.yml",
            "-c", "/etc/logs3.yml",
            "-e",
          ]
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          securityContext:
            runAsUser: 0
          resources:
            limits:
              memory: 200Mi
            requests:
              cpu: 100m
              memory: 100Mi
          volumeMounts:
            - name: config-logs1
              mountPath: /etc/logs1.yml
              subPath: filebeat-logs1.yml
              readOnly: true
            - name: config-logs2
              mountPath: /etc/logs2.yml
              subPath: logs2.yml
              readOnly: true
            - name: config-logs3
              mountPath: /etc/logs3.yml
              subPath: logs3.yml
              readOnly: true
            - name: data
              mountPath: /usr/share/filebeat/data
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
            - name: varlog
              mountPath: /var/log
              readOnly: true
      volumes:
        - name: config-logs1
          configMap:
            defaultMode: 0600
            name: configmap-logs1
        - name: config-logs2
          configMap:
            defaultMode: 0600
            name: configmap-logs2
        - name: config-logs3
          configMap:
            defaultMode: 0600
            name: configmap-logs3
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
        - name: varlog
          hostPath:
            path: /var/log
        - name: data
          hostPath:
            path: /var/lib/filebeat-data
            type: DirectoryOrCreate
logs1's ConfigMap:
data:
  filebeat-logs1.yml: |-
    filebeat.inputs:
      - type: log
        enabled: true
        paths:
          - /var/log/logs1.json
    output.logstash:
      hosts: ["logstash-logs1.default.svc.cluster.local:5044"]
logs2's ConfigMap:
data:
  filebeat-logs2.yml: |-
    filebeat.inputs:
      - type: log
        enabled: true
        paths:
          - /var/log/logs2.json
    output.logstash:
      hosts: ["logstash-logs2.default.svc.cluster.local:5044"]
logs3's ConfigMap:
data:
  filebeat-logs3.yml: |-
    filebeat.inputs:
      - type: log
        enabled: true
        paths:
          - /var/log/logs3.json
    output.logstash:
      hosts: ["logstash-logs3.default.svc.cluster.local:5044"]
Whenever any of the log files changes, the events are only ever sent to the third output, logstash-logs3.default.svc.cluster.local:5044, even though data from all three files (logs1.json, logs2.json, logs3.json) is collected.
Can't Filebeat use multiple outputs in this case on one machine?
You can have as many inputs as you want, but you can only have one output. You will need to send your logs to a single Logstash instance and from there route them to other places.
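A minimal sketch of that approach (the tags and the single Logstash hostname are placeholders I made up): keep all three inputs in one Filebeat config, tag each input, and branch on the tag inside Logstash.

filebeat.inputs:
  - type: log
    paths:
      - /var/log/logs1.json
    tags: ["logs1"]
  - type: log
    paths:
      - /var/log/logs2.json
    tags: ["logs2"]
  - type: log
    paths:
      - /var/log/logs3.json
    tags: ["logs3"]

output.logstash:
  hosts: ["logstash.default.svc.cluster.local:5044"]

In that single Logstash pipeline you can then use a conditional on [tags] to forward each stream to its own destination.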
Filebeat does not support sending the same data to multiple Logstash servers simultaneously. To achieve this, you have to start multiple instances of Filebeat with different Logstash server configurations.
This is a limitation of the Filebeat output plugin.
To start multiple instances of Filebeat on the same host, you can refer to this link.
Will sending data to a Logstash server and to the console output simultaneously work?

output.logstash:
  hosts: ["localhost:5044"]
output.console:

Something like this?
I would like to add a fluent-bit agent as a sidecar container to an existing Istio Ingress Gateway Deployment that is generated via external tooling (istioctl). I figured using ytt and its overlays would be a good way to accomplish this since it should let me append an additional container to the Deployment and a few extra volumes while leaving the rest of the generated YAML intact.
Here's a placeholder Deployment that approximates an istio-ingressgateway to help visualize the structure:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: istio-ingressgateway
  namespace: istio-system
spec:
  selector:
    matchLabels:
      app: istio-ingressgateway
  template:
    metadata:
      labels:
        app: istio-ingressgateway
    spec:
      containers:
        - args:
            - example-args
          command: ["example-command"]
          image: gcr.io/istio/proxyv2
          imagePullPolicy: Always
          name: istio-proxy
      volumes:
        - name: example-volume-secret
          secret:
            secretName: example-secret
        - name: example-volume-configmap
          configMap:
            name: example-configmap
I want to add a container to this that looks like:
- name: fluent-bit
  image: fluent/fluent-bit
  resources:
    limits:
      memory: 100Mi
    requests:
      cpu: 10m
      memory: 10Mi
  volumeMounts:
    - name: fluent-bit-config
      mountPath: /fluent-bit/etc
    - name: varlog
      mountPath: /var/log
    - name: dockercontainers
      mountPath: /var/lib/docker/containers
      readOnly: true
and volumes that look like:
- name: fluent-bit-config
  configMap:
    name: ingressgateway-fluent-bit-forwarder-config
- name: varlog
  hostPath:
    path: /var/log
- name: dockercontainers
  hostPath:
    path: /var/lib/docker/containers
I managed to hack something together by modifying the overlay files example in the ytt playground; it looks like this:
#@ load("@ytt:overlay", "overlay")
#@overlay/match by=overlay.subset({"kind": "Deployment", "metadata": {"name": "istio-ingressgateway"}}), expects=1
---
spec:
  template:
    spec:
      containers:
        #@overlay/append
        - name: fluent-bit
          image: fluent/fluent-bit
          resources:
            limits:
              memory: 100Mi
            requests:
              cpu: 10m
              memory: 10Mi
          volumeMounts:
            - name: fluent-bit-config
              mountPath: /fluent-bit/etc
            - name: varlog
              mountPath: /var/log
            - name: dockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true

#@overlay/match by=overlay.subset({"kind": "Deployment", "metadata": {"name": "istio-ingressgateway"}}), expects=1
---
spec:
  template:
    spec:
      volumes:
        #@overlay/append
        - name: fluent-bit-config
          configMap:
            name: ingressgateway-fluent-bit-forwarder-config
        #@overlay/append
        - name: varlog
          hostPath:
            path: /var/log
        #@overlay/append
        - name: dockercontainers
          hostPath:
            path: /var/lib/docker/containers
What I am wondering, though, is what is the best, most idiomatic way of using ytt to do this?
Thanks!
What you have now is good! The one suggestion I would make is that, if the volumes and containers always need to be added together, they be combined into the same overlay, like so:
#@ load("@ytt:overlay", "overlay")
#@overlay/match by=overlay.subset({"kind": "Deployment", "metadata": {"name": "istio-ingressgateway"}}), expects=1
---
spec:
  template:
    spec:
      containers:
        #@overlay/append
        - name: fluent-bit
          image: fluent/fluent-bit
          resources:
            limits:
              memory: 100Mi
            requests:
              cpu: 10m
              memory: 10Mi
          volumeMounts:
            - name: fluent-bit-config
              mountPath: /fluent-bit/etc
            - name: varlog
              mountPath: /var/log
            - name: dockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
      volumes:
        #@overlay/append
        - name: fluent-bit-config
          configMap:
            name: ingressgateway-fluent-bit-forwarder-config
        #@overlay/append
        - name: varlog
          hostPath:
            path: /var/log
        #@overlay/append
        - name: dockercontainers
          hostPath:
            path: /var/lib/docker/containers
This will guarantee any time the container is added, the appropriate volumes will be included as well.
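As a side note, you can render and apply both files together (the file names here are just placeholders):
ytt -f istio-ingressgateway.yaml -f fluent-bit-overlay.yaml | kubectl apply -f -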
I'm trying to run Filebeat on minikube with k8s 1.16, following this doc:
https://www.elastic.co/guide/en/beats/filebeat/7.4/running-on-kubernetes.html
I downloaded the manifest file as instructed
curl -L -O https://raw.githubusercontent.com/elastic/beats/7.4/deploy/kubernetes/filebeat-kubernetes.yaml
Contents of the YAML file are below:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.inputs:
    - type: container
      paths:
        - /var/log/containers/*.log
      processors:
        - add_kubernetes_metadata:
            host: ${NODE_NAME}
            matchers:
            - logs_path:
                logs_path: "/var/log/containers/"

    # To enable hints based autodiscover, remove `filebeat.inputs` configuration and uncomment this:
    #filebeat.autodiscover:
    #  providers:
    #    - type: kubernetes
    #      host: ${NODE_NAME}
    #      hints.enabled: true
    #      hints.default_config:
    #        type: container
    #        paths:
    #          - /var/log/containers/*${data.kubernetes.container.id}.log

    processors:
      - add_cloud_metadata:
      - add_host_metadata:

    cloud.id: ${ELASTIC_CLOUD_ID}
    cloud.auth: ${ELASTIC_CLOUD_AUTH}

    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
spec:
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
        - name: filebeat
          image: docker.elastic.co/beats/filebeat:7.4.0
          args: [
            "-c", "/etc/filebeat.yml",
            "-e",
          ]
          env:
            - name: ELASTICSEARCH_HOST
              value: elasticsearch
            - name: ELASTICSEARCH_PORT
              value: "9200"
            - name: ELASTICSEARCH_USERNAME
              value: elastic
            - name: ELASTICSEARCH_PASSWORD
              value: changeme
            - name: ELASTIC_CLOUD_ID
              value:
            - name: ELASTIC_CLOUD_AUTH
              value:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          securityContext:
            runAsUser: 0
            # If using Red Hat OpenShift uncomment this:
            #privileged: true
          resources:
            limits:
              memory: 200Mi
            requests:
              cpu: 100m
              memory: 100Mi
          volumeMounts:
            - name: config
              mountPath: /etc/filebeat.yml
              readOnly: true
              subPath: filebeat.yml
            - name: data
              mountPath: /usr/share/filebeat/data
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
            - name: varlog
              mountPath: /var/log
              readOnly: true
      volumes:
        - name: config
          configMap:
            defaultMode: 0600
            name: filebeat-config
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
        - name: varlog
          hostPath:
            path: /var/log
        # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
        - name: data
          hostPath:
            path: /var/lib/filebeat-data
            type: DirectoryOrCreate
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
  - kind: ServiceAccount
    name: filebeat
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
  - apiGroups: [""] # "" indicates the core API group
    resources:
      - namespaces
      - pods
    verbs:
      - get
      - watch
      - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
---
When I try the deploy step,
kubectl create -f filebeat-kubernetes.yaml
I get the output + error:
configmap/filebeat-config created
clusterrolebinding.rbac.authorization.k8s.io/filebeat created
clusterrole.rbac.authorization.k8s.io/filebeat created
serviceaccount/filebeat created
error: unable to recognize "filebeat-kubernetes.yaml": no matches for kind "DaemonSet" in version "extensions/v1beta1"
As we can see in the Kubernetes 1.16 deprecation notes:
DaemonSet, Deployment, StatefulSet, and ReplicaSet resources will no longer be served from extensions/v1beta1, apps/v1beta1, or apps/v1beta2 by default in v1.16. Migrate to the apps/v1 API
You need to change apiVersion
apiVersion: extensions/v1beta1 -> apiVersion: apps/v1
Then there is another error:
missing required field "selector" in io.k8s.api.apps.v1.DaemonSetSpec;
So we have to add the selector field:

spec:
  selector:
    matchLabels:
      k8s-app: filebeat
Edited DaemonSet yaml:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
        - name: filebeat
          image: docker.elastic.co/beats/filebeat:7.4.0
          args: [
            "-c", "/etc/filebeat.yml",
            "-e",
          ]
          env:
            - name: ELASTICSEARCH_HOST
              value: elasticsearch
            - name: ELASTICSEARCH_PORT
              value: "9200"
            - name: ELASTICSEARCH_USERNAME
              value: elastic
            - name: ELASTICSEARCH_PASSWORD
              value: changeme
            - name: ELASTIC_CLOUD_ID
              value:
            - name: ELASTIC_CLOUD_AUTH
              value:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          securityContext:
            runAsUser: 0
            # If using Red Hat OpenShift uncomment this:
            #privileged: true
          resources:
            limits:
              memory: 200Mi
            requests:
              cpu: 100m
              memory: 100Mi
          volumeMounts:
            - name: config
              mountPath: /etc/filebeat.yml
              readOnly: true
              subPath: filebeat.yml
            - name: data
              mountPath: /usr/share/filebeat/data
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
            - name: varlog
              mountPath: /var/log
              readOnly: true
      volumes:
        - name: config
          configMap:
            defaultMode: 0600
            name: filebeat-config
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
        - name: varlog
          hostPath:
            path: /var/log
        # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
        - name: data
          hostPath:
            path: /var/lib/filebeat-data
            type: DirectoryOrCreate
Let me know if that helps you.
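Once the edited manifest is applied, you can confirm the DaemonSet and its pods came up with the usual checks, for example:
kubectl -n kube-system get daemonset filebeat
kubectl -n kube-system get pods -l k8s-app=filebeat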
I have deployed Prometheus and Grafana on a Kubernetes cluster with the following manifest file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
  namespace: kube-monitoring
  labels:
    app: grafana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
    spec:
      containers:
        - name: grafana
          image: grafana/grafana:6.3.2
          imagePullPolicy: IfNotPresent
          env:
            - name: GF_SECURITY_ADMIN_USER
              value: admin
            - name: GF_SECURITY_ADMIN_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: grafana-secret
                  key: admin-password
          ports:
            - containerPort: 3000
          resources:
            limits:
              cpu: 500m
              memory: 2500Mi
            requests:
              cpu: 100m
              memory: 100Mi
          livenessProbe:
            exec:
              command:
                - wget
                - localhost:3000
                - --spider
            initialDelaySeconds: 30
            periodSeconds: 30
          volumeMounts:
            - mountPath: /var/lib/grafana
              subPath: grafana
              name: grafana-storage
              readOnly: false
            - mountPath: /etc/grafana/provisioning/datasources/
              name: grafana-datasource-conf
              readOnly: true
            - mountPath: /etc/grafana/provisioning/dashboards/
              name: grafana-dashboards-conf
              readOnly: false
            - mountPath: /var/lib/grafana/dashboards/0/k8s-cluster-usage
              name: grafana-dashboard-k8s-cluster-usage
              readOnly: false
            - mountPath: /var/lib/grafana/dashboards/0/k8s-cluster-nodes
              name: grafana-dashboard-k8s-cluster-nodes
              readOnly: false
            - mountPath: /var/lib/grafana/dashboards/0/k8s-core-dns
              name: grafana-dashboard-k8s-core-dns
              readOnly: false
      securityContext:
        runAsUser: 472
        fsGroup: 472
      restartPolicy: Always
      volumes:
        - name: grafana-storage
          persistentVolumeClaim:
            claimName: pvc-grafana
        - name: grafana-datasource-conf
          configMap:
            name: grafana-datasource-conf
            items:
              - key: datasource.yaml
                path: datasource.yaml
        - name: grafana-dashboards-conf
          configMap:
            name: grafana-dashboards-conf
            items:
              - key: dashboards.yaml
                path: dashboards.yaml
        - name: grafana-dashboard-k8s-cluster-usage
          configMap:
            name: grafana-dashboard-k8s-cluster-usage
        - name: grafana-dashboard-k8s-cluster-nodes
          configMap:
            name: grafana-dashboard-k8s-cluster-nodes
        - name: grafana-dashboard-k8s-core-dns
          configMap:
            name: grafana-dashboard-k8s-core-dns
The dashboard config is at https://pastebin.com/zAYn9BhY (it's too long to paste here).
Among these dashboards, Core DNS and Cluster Usage show proper data and graphs, but Cluster Nodes doesn't show any data; every metric says "No data points".
Can anyone help here?
Cluster Nodes won't show you any metrics because you are probably missing metrics-server.
If you are starting with the whole Prometheus stack, I would consider using prometheus-operator deployed via Helm. It is a little overwhelming, but it gets you going fairly easily, and prometheus-operator will deploy metrics-server too.
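If you go that route, something like the following should get you started (the release name and namespace are placeholders; the chart originally published as stable/prometheus-operator is now maintained as kube-prometheus-stack in the prometheus-community repo):
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install kube-prometheus prometheus-community/kube-prometheus-stack -n kube-monitoring --create-namespace
Alternatively, metrics-server on its own can be installed with the upstream manifest:
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml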