filebeat Failed to list *v1.Pod: Unauthorized - permission issue - kubernetes

I am trying to deploy a Filebeat DaemonSet on my AKS cluster.
I want it to run on every node and collect all the logs generated by the pods.
To do so I have 5 steps:
1. Create a ServiceAccount
2. Create a ClusterRole with the appropriate permissions
3. Bind them with a ClusterRoleBinding
4. Create a ConfigMap
5. Create a DaemonSet that uses the ConfigMap
Everything was created just fine.
However, upon inspecting the Filebeat logs I see the following messages, which suggest Filebeat does not have permission to list pods:
E0519 16:19:18.243183 1 reflector.go:125] github.com/elastic/beats/libbeat/common/kubernetes/watcher.go:235: Failed to list *v1.Pod: Unauthorized
E0519 16:19:19.251644 1 reflector.go:125] github.com/elastic/beats/libbeat/common/kubernetes/watcher.go:235: Failed to list *v1.Pod: Unauthorized
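A quick way to narrow this down (a debugging suggestion, not part of the original post) is to ask the API server directly whether the filebeat ServiceAccount is allowed to list pods; the account name and namespace below match the manifests that follow:
$ kubectl auth can-i list pods --as=system:serviceaccount:default:filebeat
$ kubectl auth can-i list pods --as=system:serviceaccount:default:filebeat --all-namespaces
If both return yes but Filebeat still logs Unauthorized, note that a 401 Unauthorized usually means the token itself was rejected (an RBAC denial would normally surface as Forbidden), so recreating the ServiceAccount and restarting the DaemonSet pods so they pick up a fresh token is worth trying.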
This is my YAML:
apiVersion: v1
kind: ServiceAccount
metadata:
name: filebeat
namespace: default
labels:
k8s-app: filebeat
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: filebeat
namespace: default
labels:
k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
resources:
- namespaces
- pods
verbs:
- get
- watch
- list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: filebeat
namespace: default
subjects:
- kind: ServiceAccount
name: filebeat
namespace: default
roleRef:
kind: ClusterRole
name: filebeat
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ConfigMap
metadata:
name: filebeat-config
labels:
k8s-app: filebeat
data:
filebeat.yml: |-
filebeat.inputs:
- type: container
enabled: true
paths:
- /var/log/containers/*.log
# If you setup helm for your cluster and want to investigate its logs, comment out this section.
exclude_files: ['tiller-deploy-*']
# To be used by Logstash for distinguishing index names while writing to elasticsearch.
fields_under_root: true
fields:
index_prefix: k8s-logs
# Enrich events with k8s, cloud metadata
processors:
- add_cloud_metadata:
- add_host_metadata:
- add_kubernetes_metadata:
host: ${NODE_NAME}
matchers:
- logs_path:
logs_path: "/var/log/containers/"
# Send events to Logstash.
output.logstash:
enabled: true
hosts: ["logstash-logstash-headless.elk-stack:9600"]
# You can set logging.level to debug to see the generated events by the running filebeat instance.
logging.level: info
logging.to_files: false
logging.files:
path: /var/log/filebeat
name: filebeat
keepfiles: 7
permissions: 0644
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: filebeat
labels:
k8s-app: filebeat
spec:
selector:
matchLabels:
k8s-app: filebeat
template:
metadata:
labels:
k8s-app: filebeat
spec:
# Refers to our previously defined ServiceAccount.
serviceAccountName: filebeat
terminationGracePeriodSeconds: 30
hostNetwork: true
dnsPolicy: ClusterFirstWithHostNet
containers:
- name: filebeat
image: docker.elastic.co/beats/filebeat:7.5.0
args: [
"-c", "/etc/filebeat.yml",
"-e",
]
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
securityContext:
runAsUser: 0
# If using Red Hat OpenShift uncomment this:
#privileged: true
resources: # comment out for using full speed
limits:
memory: 200Mi
requests:
cpu: 500m
memory: 100Mi
volumeMounts:
- name: config
mountPath: /etc/filebeat.yml
readOnly: true
subPath: filebeat.yml
- name: data
mountPath: /usr/share/filebeat/data
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
volumes:
# Bind previously defined ConfigMap
- name: config
configMap:
defaultMode: 0600
name: filebeat-config
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
- name: varlog
hostPath:
path: /var/log
# data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
- name: data
hostPath:
path: /var/lib/filebeat-data
type: DirectoryOrCreate
Any idea what might be the problem?
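(Again, a debugging suggestion rather than part of the question:) it is also worth confirming that the DaemonSet pods really run under the filebeat ServiceAccount and have a token mounted; <filebeat-pod> is a placeholder for one of the pod names:
$ kubectl get pods -l k8s-app=filebeat -o jsonpath='{.items[*].spec.serviceAccountName}'
$ kubectl exec <filebeat-pod> -- ls /var/run/secrets/kubernetes.io/serviceaccount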

Related

Convert filebeat @timestamp from UTC to local timezone

I am trying to collect logs from pods running in a KOPS cluster. I run a Filebeat DaemonSet in the cluster to collect logs from my pod (application) and then ship them outside the cluster, where a Logstash service accepts them and saves them into a file.
I noticed Filebeat always produces the logs with a UTC timestamp, even though all of my nodes and pods run in the SGT timezone.
I set add_locale in the Filebeat processors but it doesn't help.
add_locale:
format: offset
(Screenshots from the question: node timezone, pod timezone.)
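(Not part of the original question, just a way to compare:) you can check what timezone the Filebeat container itself sees, since the DaemonSet below mounts /usr/share/zoneinfo/Singapore onto /etc/localtime; <filebeat-pod> is a placeholder for one of the pod names:
$ kubectl exec -n logging <filebeat-pod> -- date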
Complete filebeat-kubernetes.yaml
---
apiVersion: v1
kind: Namespace
metadata:
name: logging
---
apiVersion: v1
kind: ConfigMap
metadata:
name: filebeat-config
namespace: logging
labels:
k8s-app: filebeat
data:
filebeat.yml: |-
# To enable hints based autodiscover, remove `filebeat.inputs` configuration and uncomment this:
filebeat.autodiscover:
providers:
- type: kubernetes
node: ${NODE_NAME}
templates:
- condition:
equals:
kubernetes.namespace: default
- condition:
contains:
kubernetes.pod.name: "application1"
config:
- type: container
paths:
- /var/log/containers/*${data.kubernetes.container.id}*.log
- condition:
contains:
kubernetes.pod.name: "application2"
config:
- type: container
paths:
- /var/log/containers/*${data.kubernetes.container.id}*.log
processors:
- add_locale:
format: offset
- add_kubernetes_metadata:
host: ${NODE_NAME}
matchers:
- logs_path:
logs_path: "/var/log/containers/"
output.logstash:
hosts: ["IP:5044"]
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: filebeat
namespace: logging
labels:
k8s-app: filebeat
spec:
selector:
matchLabels:
k8s-app: filebeat
template:
metadata:
labels:
k8s-app: filebeat
spec:
serviceAccountName: filebeat
terminationGracePeriodSeconds: 30
hostNetwork: true
dnsPolicy: ClusterFirstWithHostNet
containers:
- name: filebeat
image: docker.elastic.co/beats/filebeat:7.10.1
args: [
"-c", "/etc/filebeat.yml",
"-e",
]
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
securityContext:
runAsUser: 0
# If using Red Hat OpenShift uncomment this:
#privileged: true
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 100Mi
volumeMounts:
- name: config
mountPath: /etc/filebeat.yml
readOnly: true
subPath: filebeat.yml
- name: data
mountPath: /usr/share/filebeat/data
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
- name: varlog
mountPath: /var/log
readOnly: true
- name: tz-config
mountPath: /etc/localtime
volumes:
- name: config
configMap:
defaultMode: 0640
name: filebeat-config
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
- name: varlog
hostPath:
path: /var/log
# data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
- name: data
hostPath:
# When filebeat runs as non-root user, this directory needs to be writable by group (g+w).
path: /var/lib/filebeat-data
type: DirectoryOrCreate
- name: tz-config
hostPath:
path: /usr/share/zoneinfo/Singapore
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: filebeat
subjects:
- kind: ServiceAccount
name: filebeat
namespace: logging
roleRef:
kind: ClusterRole
name: filebeat
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: filebeat
labels:
k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
resources:
- namespaces
- pods
verbs:
- get
- watch
- list
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: filebeat
namespace: logging
labels:
k8s-app: filebeat
---
(Screenshot from the question: output log from Filebeat.)
Unfortunately, I don't have enough reputation to add a comment, so posting this as an answer. The following reference doc mentions:
The processor adds the event.timezone value to each event.
So it could be that the log timestamp itself is not converted to the local timezone; instead, an additional field representing the timezone is added to each event, and the application consuming the logs can use it to format them.
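As a rough illustration of that idea (a sketch, not part of the original answer), the Logstash pipeline that writes the file could derive a local-time field from @timestamp plus the event.timezone offset that add_locale attaches, for example with a ruby filter:
filter {
  ruby {
    # event.timezone is set by Filebeat's add_locale (format: offset), e.g. "+08:00"
    code => '
      offset = event.get("[event][timezone]") || "+00:00"
      sign = offset.start_with?("-") ? -1 : 1
      hours, minutes = offset.delete("+-").split(":").map(&:to_i)
      local = event.get("@timestamp").time + sign * (hours * 3600 + minutes * 60)
      event.set("local_timestamp", local.strftime("%Y-%m-%dT%H:%M:%S") + offset)
    '
  }
}
The @timestamp field itself stays in UTC; only the added local_timestamp string carries the local time.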

Separate filebeat daemon set based on namespace?

I am trying to run the Filebeat DaemonSet to get the logs for a particular app. There are basically two nodegroups: eai and eai-staging. The eai nodegroup has only a single namespace, but eai-staging has multiple namespaces.
I have the following Filebeat config:
apiVersion: v1
kind: ConfigMap
metadata:
name: filebeat-config
labels:
app: filebeat
data:
filebeat.yml: |-
filebeat.inputs:
- type: log
fields:
app_type: "${NAMESPACE}". <<<---- I want this app type to be different based on namespace
log_type: secure
fields_under_root: true
output.logstash:
hosts: ["${LOGSTASH_HOST}:${LOGSTASH_PORT}"]
ttl: 1s
pipelining: 0
processors:
- drop_fields:
fields: ["beat", "host", "input", "offset", "source"]
Filebeat DaemonSet:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: filebeat
labels:
app: filebeat
spec:
updateStrategy:
type: RollingUpdate
template:
metadata:
labels:
app: filebeat
spec:
nodeSelector:
nodegroup: eai
priorityClassName: critical
terminationGracePeriodSeconds: 30
containers:
- name: filebeat
imagePullPolicy: Always
image: docker.elastic.co/beats/filebeat:6.5.4
args: [
"-c", "/etc/filebeat.yml",
"-e",
]
env:
- name: LOGSTASH_HOST
value: "logstash-headless.etl.svc.cluster.local"
- name: LOGSTASH_PORT
value: "5046"
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
securityContext:
runAsUser: 0
# If using Red Hat OpenShift uncomment this:
#privileged: true
resources:
limits:
cpu: 100m
memory: 200Mi
requests:
cpu: 100m
memory: 200Mi
volumeMounts:
- name: config
mountPath: /etc/filebeat.yml
readOnly: true
subPath: filebeat.yml
- name: data
mountPath: /usr/share/filebeat/data
- name: app-log
mountPath: /var/log/app/
readOnly: true
volumes:
- name: config
configMap:
name: filebeat-config
- name: data
hostPath:
path: /var/lib/filebeat-data/eai/app-filebeat
type: DirectoryOrCreate
- name: app-log
hostPath:
path: /var/log/app/
type: DirectoryOrCreate
Now, how can I get the particular namespace from which Filebeat obtained the app log?
I tried deploying one DaemonSet in the eai namespace in the eai nodegroup, so there I can get the namespace using metadata.namespace.
But if I deploy the DaemonSet in the eai-staging nodegroup in a particular namespace, I will always get that same namespace value.
Is there any way around this, or should I deploy the DaemonSet in each namespace?
P.S. I cannot run Filebeat in the same container as the app, because if Filebeat is down for some reason, the pod cannot receive requests for the app.
Deploy Filebeat as a DaemonSet on each node and it will get logs from all containers on that node, and you can add the namespace, pod name and labels as metadata to each event. This way you will know which namespace each event originated from; a config sketch follows the documentation link below.
The add_kubernetes_metadata processor annotates each event with relevant metadata based on which Kubernetes pod the event originated from. Each event is annotated with:
Pod Name
Namespace
Labels
https://www.elastic.co/guide/en/beats/filebeat/6.1/add-kubernetes-metadata.html
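To make that concrete, here is a minimal sketch of that pattern (not the asker's exact setup): it tails the per-container log files and lets add_kubernetes_metadata attach pod and namespace fields. It assumes a NODE_NAME env var from the downward API and a ServiceAccount with get/list/watch on pods and namespaces, neither of which is in the DaemonSet above; on the Filebeat 6.5.4 image from the question the older docker input type plays the role of the container input.
filebeat.inputs:
- type: container
  paths:
    - /var/log/containers/*.log
processors:
  - add_kubernetes_metadata:
      host: ${NODE_NAME}
      matchers:
        - logs_path:
            logs_path: "/var/log/containers/"
  - drop_fields:
      fields: ["beat", "input", "offset", "source"]
output.logstash:
  hosts: ["${LOGSTASH_HOST}:${LOGSTASH_PORT}"]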

Fluentbit get Docker Logs(Systemd) in Kubernetes not working

I'm trying to configure Fluent Bit in Kubernetes to get logs from application pods/Docker containers and send these log messages to Graylog using the GELF format, but this is not working.
See my stack below:
INPUT
Docker version 1.13.1
Docker Log format => JSON
Docker Log Driver => Journald => systemd
Fluent-bit 1.3 running as Daemonset in Kubernetes
Kubernetes 1.17
OS Host: CentOS 7
OUTPUT
Message output format: GELF 1.1
Centralized log => Graylog 3
The problem is that Fluent Bit does not read the logs from systemd; I am not getting any logs in either output (systemd, stdout). The stdout output is just there to help with troubleshooting.
I don't know why I'm not able to read from systemd.
I followed the documentation exactly:
https://docs.fluentbit.io/manual/input/systemd
My K8S configurations:
fluent-bit-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: fluent-bit-config
namespace: log
labels:
k8s-app: fluent-bit
data:
# Configuration files: server, input, filters and output
# ======================================================
fluent-bit.conf: |
[SERVICE]
Flush 1
Log_Level debug
Daemon off
#INCLUDE input-systemd.conf
#INCLUDE output-stdout.conf
input-systemd.conf: |
[INPUT]
Name systemd
Tag host.*
Parser json
Systemd_Filter _SYSTEMD_UNIT=docker.service
output-graylog.conf: |
[OUTPUT]
Name gelf
Match *
Host 10.142.15.214
Port 12201
Mode tcp
Gelf_Short_Message_Key log
output-stdout.conf: |
[OUTPUT]
Name stdout
Match *
fluent-bit-ds.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: fluent-bit
namespace: log
labels:
k8s-app: fluent-bit-logging
version: v1
kubernetes.io/cluster-service: "true"
spec:
selector:
matchLabels:
k8s-app: fluent-bit-logging
version: v1
kubernetes.io/cluster-service: "true"
template:
metadata:
labels:
k8s-app: fluent-bit-logging
version: v1
kubernetes.io/cluster-service: "true"
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "2020"
prometheus.io/path: /api/v1/metrics/prometheus
spec:
containers:
- name: fluent-bit
image: fluent/fluent-bit:1.3.5
imagePullPolicy: Always
ports:
- containerPort: 2020
volumeMounts:
- name: varlog
mountPath: /var/log
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
- name: fluent-bit-config
mountPath: /fluent-bit/etc/
terminationGracePeriodSeconds: 10
volumes:
- name: varlog
hostPath:
path: /var/log
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
- name: fluent-bit-config
configMap:
name: fluent-bit-config
serviceAccountName: fluent-bit
tolerations:
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
- operator: "Exists"
effect: "NoExecute"
- operator: "Exists"
effect: "NoSchedule"
fluent-bit-role-binding.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: fluent-bit-read
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: fluent-bit-read
subjects:
- kind: ServiceAccount
name: fluent-bit
namespace: log
fluent-bit-role.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: fluent-bit-read
rules:
- apiGroups: [""]
resources:
- namespaces
- pods
verbs: ["get", "list", "watch"]
fluent-bit-service-account.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: fluent-bit
namespace: log
My Fluent Bit output (stdout), just for debugging:
$ kubectl logs -f fluent-bit-2bzxb -n log
[2020/02/20 18:54:23] [Warning] [config] I cannot open /fluent-bit/etc/..2020_02_20_18_54_22.252769193/parsers_custom.conf file
[2020/02/20 18:54:23] [ info] [storage] initializing...
[2020/02/20 18:54:23] [ info] [storage] in-memory
[2020/02/20 18:54:23] [ info] [storage] normal synchronization mode, checksum disabled, max_chunks_up=128
[2020/02/20 18:54:23] [ info] [engine] started (pid=1)
[2020/02/20 18:54:23] [ info] [filter_kube] https=1 host=kubernetes.default.svc port=443
[2020/02/20 18:54:23] [ info] [filter_kube] local POD info OK
[2020/02/20 18:54:23] [ info] [filter_kube] testing connectivity with API server...
[2020/02/20 18:54:23] [ info] [filter_kube] API server connectivity OK
[2020/02/20 18:54:23] [ info] [sp] stream processor started
The problem is that I'm not getting any logs from systemd with this configuration.
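One thing worth checking on the node itself (a debugging suggestion, not part of the original post) is where journald actually stores its journal, since that determines which hostPath has to be mounted into the pod:
$ ls -d /run/log/journal /var/log/journal 2>/dev/null
On CentOS 7 the default is volatile storage under /run/log/journal unless Storage=persistent is set in /etc/systemd/journald.conf.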
Thank you @edsiper, I fixed my DaemonSet by adding "path: /run/log":
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: fluent-bit
namespace: log
labels:
k8s-app: fluent-bit-logging
version: v1
kubernetes.io/cluster-service: "true"
spec:
selector:
matchLabels:
k8s-app: fluent-bit-logging
version: v1
kubernetes.io/cluster-service: "true"
template:
metadata:
labels:
k8s-app: fluent-bit-logging
version: v1
kubernetes.io/cluster-service: "true"
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "2020"
prometheus.io/path: /api/v1/metrics/prometheus
spec:
containers:
- name: fluent-bit
image: fluent/fluent-bit:1.3.5
imagePullPolicy: Always
ports:
- containerPort: 2020
env:
volumeMounts:
- name: varlog
mountPath: /var/log
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
- name: systemdlog
mountPath: /run/log
- name: fluent-bit-config
mountPath: /fluent-bit/etc/
terminationGracePeriodSeconds: 10
volumes:
- name: varlog
hostPath:
path: /var/log
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
- name: systemdlog
hostPath:
path: /run/log
- name: fluent-bit-config
configMap:
name: fluent-bit-config
serviceAccountName: fluent-bit
tolerations:
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
- operator: "Exists"
effect: "NoExecute"
- operator: "Exists"
effect: "NoSchedule"
Does your Fluent Bit container have access to the systemd journal path?
Not enough karma to post a comment, so posting as an answer to @edsiper:
"Does your Fluent Bit container have access to the systemd journal path?"
On default settings, no, it does not. When I tried to solve this problem I stumbled across this thread: https://github.com/fluent/fluent-bit/issues/497
Long story short:
- you need to run the fluent-bit container as root, since accessing the journal requires root permissions
- set the machine-id in the container to the same as on your host machine
- bind /run/log/journal:/run/log/journal
So, in docker-compose form:
fluent-bit:
image: 'bitnami/fluent-bit:latest'
restart: always
user: root #give root access
network_mode: host
command: /fluent-bit/bin/fluent-bit -c /fluent-bit/etc/fluent-bit.conf
volumes:
- ./service/config/fluent-bit.conf:/fluent-bit/etc/fluent-bit.conf
- /etc/machine-id:/etc/machine-id:ro #set the machine id
- /run/log/journal:/run/log/journal #give access to logs
Then, in fluent-bit.conf you need to edit the INPUT Path:
[INPUT]
Name systemd
Tag *
Path /run/log/journal
Systemd_Filter _SYSTEMD_UNIT=docker.service
Systemd_Filter _SYSTEMD_UNIT=kubelet.service
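Translated to the Kubernetes DaemonSet from the question, the same idea would roughly look like the following additions (a sketch; the volume names are arbitrary):
# additions to the fluent-bit container spec
securityContext:
  runAsUser: 0              # journal access needs root
volumeMounts:
- name: machine-id
  mountPath: /etc/machine-id
  readOnly: true
- name: journal
  mountPath: /run/log/journal
  readOnly: true
# additions to the pod spec
volumes:
- name: machine-id
  hostPath:
    path: /etc/machine-id
    type: File
- name: journal
  hostPath:
    path: /run/log/journal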

Problem parsing Fluentbit Docker Logs(Systemd) to GELF message Output in Kubernetes

I'm getting Docker (systemd) logs and trying to send them in GELF format to a Graylog 3 output, but the logs are not in the correct format and Graylog discards them.
I'm following these references:
https://docs.fluentbit.io/manual/output/gelf
https://docs.fluentbit.io/manual/input/systemd
https://fluentbit.io/kubernetes/
See my stack below:
INPUT
Docker version 1.13.1
Docker Log format => JSON
Docker Log Driver => Journald => systemd
Fluent-bit 1.3 running as Daemonset in Kubernetes
Kubernetes 1.17
OS Host: CentOS 7
OUTPUT
Message output format: GELF 1.1
Centralized log => Graylog 3
My Kubernetes configurations:
fluent-bit-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: fluent-bit-config
namespace: log
labels:
k8s-app: fluent-bit
data:
# Configuration files: server, input, filters and output
# ======================================================
fluent-bit.conf: |
[SERVICE]
Flush 1
Log_Level info
Daemon off
Parsers_File parser-docker.conf
[INPUT]
Name tail
Tag kube.*
Path /var/log/messages
Parser docker
DB /var/log/flb_kube.db
Mem_Buf_Limit 5MB
Refresh_Interval 10
[FILTER]
Name kubernetes
Match kube.*
Merge_Log_Key log
Merge_Log On
Keep_Log Off
Annotations Off
Labels Off
[FILTER]
Name nest
Match *
Operation lift
Nested_under log
[OUTPUT]
Name gelf
Match kube.*
Host 10.142.15.214
Port 12201
Mode tcp
Gelf_Short_Message_Key data
[OUTPUT]
Name stdout
Match *
parser-docker.conf: |
[PARSER]
Name docker
Format json
Time_Key time
Time_Format %Y-%m-%dT%H:%M:%S.%L
Time_Keep Off
fluent-bit-ds.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: fluent-bit
namespace: log
labels:
k8s-app: fluent-bit-logging
version: v1
kubernetes.io/cluster-service: "true"
spec:
selector:
matchLabels:
k8s-app: fluent-bit-logging
version: v1
kubernetes.io/cluster-service: "true"
template:
metadata:
labels:
k8s-app: fluent-bit-logging
version: v1
kubernetes.io/cluster-service: "true"
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "2020"
prometheus.io/path: /api/v1/metrics/prometheus
fluentbit.io/exclude: "true"
spec:
containers:
- name: fluent-bit
image: fluent/fluent-bit:1.3.5
imagePullPolicy: Always
ports:
- containerPort: 2020
env:
volumeMounts:
- name: varlog
mountPath: /var/log
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
- name: systemdlog
mountPath: /run/log
- name: fluent-bit-config
mountPath: /fluent-bit/etc/
terminationGracePeriodSeconds: 10
volumes:
- name: varlog
hostPath:
path: /var/log
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
- name: systemdlog
hostPath:
path: /run/log
- name: fluent-bit-config
configMap:
name: fluent-bit-config
serviceAccountName: fluent-bit
tolerations:
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
- operator: "Exists"
effect: "NoExecute"
- operator: "Exists"
effect: "NoSchedule"
fluent-bit-role-binding.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: fluent-bit-read
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: fluent-bit-read
subjects:
- kind: ServiceAccount
name: fluent-bit
namespace: log
fluent-bit-role.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: fluent-bit-read
rules:
- apiGroups: [""]
resources:
- namespaces
- pods
verbs: ["get", "list", "watch"]
fluent-bit-service-account.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: fluent-bit
namespace: log
My Fluent Bit output (stdout), just for debugging:
$ kubectl logs -f fluent-bit-2bzxb -n log
[9] host.docker.service: [1582317069.020005000, {"PRIORITY"=>"6", "_TRANSPORT"=>"journal", "_PID"=>"1486", "_UID"=>"0", "_GID"=>"0", "_COMM"=>"dockerd-current", "_EXE"=>"/usr/bin/dockerd-current", "_CMDLINE"=>"/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json --selinux-enabled --log-driver=journald --signature-verification=false --storage-driver overlay2", "_CAP_EFFECTIVE"=>"1fffffffff", "_SYSTEMD_CGROUP"=>"/system.slice/docker.service", "_SYSTEMD_UNIT"=>"docker.service", "_SYSTEMD_SLICE"=>"system.slice", "_BOOT_ID"=>"18d81c7e97e6419f999af50af13060c8", "_MACHINE_ID"=>"b5447343d5617cb5f7fd428164927298", "_HOSTNAME"=>"k8s-worker-1", "CONTAINER_TAG"=>"d72b414a9bcc", "CONTAINER_ID"=>"d72b414a9bcc", "CONTAINER_ID_FULL"=>"d72b414a9bccef31dbaa4c8473b06d63583195ebde9a8b729a06a81b68233144", "CONTAINER_NAME"=>"k8s_fluent-bit_fluent-bit-zg7pz_log_43007dd0-ce10-4c6d-a97f-e7369f866879_0", "MESSAGE"=>"
[9] host.docker.service: [1582317068.864240000, {"_TRANSPORT"=>"journal", "_PID"=>"1486", "_UID"=>"0", "_GID"=>"0", "_COMM"=>"dockerd-current", "_EXE"=>"/usr/bin/dockerd-current", "_CMDLINE"=>"/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json --selinux-enabled --log-driver=journald --signature-verification=false --storage-driver overlay2", "_CAP_EFFECTIVE"=>"1fffffffff", "_SYSTEMD_CGROUP"=>"/system.slice/docker.service", "_SYSTEMD_UNIT"=>"docker.service", "_SYSTEMD_SLICE"=>"system.slice", "_BOOT_ID"=>"18d81c7e97e6419f999af50af13060c8", "_MACHINE_ID"=>"b5447343d5617cb5f7fd428164927298", "_HOSTNAME"=>"k8s-worker-1", "PRIORITY"=>"3", "CONTAINER_TAG"=>"d3383915a884", "CONTAINER_ID"=>"d3383915a884", "CONTAINER_ID_FULL"=>"d3383915a884fb0e2b40189e7db1a1131161e57dba39a40824accf1b2aa59f22", "CONTAINER_NAME"=>"k8s_demo-app_demo-app-6c79ffd869-trstt_default_cd6c5f4d-ec44-4f43-9b92-d7bb28a5f676_1", "MESSAGE"=>"2020/02/21 20:31:08 10.142.15.231:48012 GET /", "_SOURCE_REALTIME_TIMESTAMP"=>"1582317068863691"}]", "_SOURCE_REALTIME_TIMESTAMP"=>"1582317069000509"}]"
The problem is how to properly format the logs into GELF format to send to Graylog 3.
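One detail that stands out in the stdout sample above (an observation and a sketch on my part, not a confirmed fix): the journald records carry the log text under MESSAGE and the host under _HOSTNAME, while the gelf output is configured with Gelf_Short_Message_Key data. GELF 1.1 requires short_message, host and timestamp, and the gelf output's key options control which record fields those are taken from, so pointing them at fields that actually exist in the record might look like:
[OUTPUT]
    Name                    gelf
    Match                   *
    Host                    10.142.15.214
    Port                    12201
    Mode                    tcp
    Gelf_Short_Message_Key  MESSAGE
    Gelf_Host_Key           _HOSTNAME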

unable to recognize "filebeat-kubernetes.yaml": no matches for kind "DaemonSet" in version "extensions/v1beta1"

I'm trying to run Filebeat on minikube with k8s 1.16, following this doc:
https://www.elastic.co/guide/en/beats/filebeat/7.4/running-on-kubernetes.html
I downloaded the manifest file as instructed
curl -L -O https://raw.githubusercontent.com/elastic/beats/7.4/deploy/kubernetes/filebeat-kubernetes.yaml
Contents of the YAML file below:
---
apiVersion: v1
kind: ConfigMap
metadata:
name: filebeat-config
namespace: kube-system
labels:
k8s-app: filebeat
data:
filebeat.yml: |-
filebeat.inputs:
- type: container
paths:
- /var/log/containers/*.log
processors:
- add_kubernetes_metadata:
host: ${NODE_NAME}
matchers:
- logs_path:
logs_path: "/var/log/containers/"
# To enable hints based autodiscover, remove `filebeat.inputs` configuration and uncomment this:
#filebeat.autodiscover:
# providers:
# - type: kubernetes
# host: ${NODE_NAME}
# hints.enabled: true
# hints.default_config:
# type: container
# paths:
# - /var/log/containers/*${data.kubernetes.container.id}.log
processors:
- add_cloud_metadata:
- add_host_metadata:
cloud.id: ${ELASTIC_CLOUD_ID}
cloud.auth: ${ELASTIC_CLOUD_AUTH}
output.elasticsearch:
hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
username: ${ELASTICSEARCH_USERNAME}
password: ${ELASTICSEARCH_PASSWORD}
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: filebeat
namespace: kube-system
labels:
k8s-app: filebeat
spec:
template:
metadata:
labels:
k8s-app: filebeat
spec:
serviceAccountName: filebeat
terminationGracePeriodSeconds: 30
hostNetwork: true
dnsPolicy: ClusterFirstWithHostNet
containers:
- name: filebeat
image: docker.elastic.co/beats/filebeat:7.4.0
args: [
"-c", "/etc/filebeat.yml",
"-e",
]
env:
- name: ELASTICSEARCH_HOST
value: elasticsearch
- name: ELASTICSEARCH_PORT
value: "9200"
- name: ELASTICSEARCH_USERNAME
value: elastic
- name: ELASTICSEARCH_PASSWORD
value: changeme
- name: ELASTIC_CLOUD_ID
value:
- name: ELASTIC_CLOUD_AUTH
value:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
securityContext:
runAsUser: 0
# If using Red Hat OpenShift uncomment this:
#privileged: true
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 100Mi
volumeMounts:
- name: config
mountPath: /etc/filebeat.yml
readOnly: true
subPath: filebeat.yml
- name: data
mountPath: /usr/share/filebeat/data
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
- name: varlog
mountPath: /var/log
readOnly: true
volumes:
- name: config
configMap:
defaultMode: 0600
name: filebeat-config
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
- name: varlog
hostPath:
path: /var/log
# data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
- name: data
hostPath:
path: /var/lib/filebeat-data
type: DirectoryOrCreate
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: filebeat
subjects:
- kind: ServiceAccount
name: filebeat
namespace: kube-system
roleRef:
kind: ClusterRole
name: filebeat
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: filebeat
labels:
k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
resources:
- namespaces
- pods
verbs:
- get
- watch
- list
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: filebeat
namespace: kube-system
labels:
k8s-app: filebeat
---
When I try the deploy step,
kubectl create -f filebeat-kubernetes.yaml
I get the following output and error:
configmap/filebeat-config created
clusterrolebinding.rbac.authorization.k8s.io/filebeat created
clusterrole.rbac.authorization.k8s.io/filebeat created
serviceaccount/filebeat created
error: unable to recognize "filebeat-kubernetes.yaml": no matches for kind "DaemonSet" in version "extensions/v1beta1"
As we can see in the Kubernetes 1.16 deprecation notes:
DaemonSet, Deployment, StatefulSet, and ReplicaSet resources will no longer be served from extensions/v1beta1, apps/v1beta1, or apps/v1beta2 by default in v1.16. Migrate to the apps/v1 API
You need to change the apiVersion:
apiVersion: extensions/v1beta1 -> apiVersion: apps/v1
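If you want to double-check which group/version your cluster serves for DaemonSet, kubectl can tell you:
$ kubectl explain daemonset | head -n 3
$ kubectl api-resources --api-group=apps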
Then there is another error:
missing required field "selector" in io.k8s.api.apps.v1.DaemonSetSpec;
So we have to add the selector field:
spec:
selector:
matchLabels:
k8s-app: filebeat
Edited DaemonSet yaml:
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: filebeat
namespace: kube-system
labels:
k8s-app: filebeat
spec:
selector:
matchLabels:
k8s-app: filebeat
template:
metadata:
labels:
k8s-app: filebeat
spec:
serviceAccountName: filebeat
terminationGracePeriodSeconds: 30
hostNetwork: true
dnsPolicy: ClusterFirstWithHostNet
containers:
- name: filebeat
image: docker.elastic.co/beats/filebeat:7.4.0
args: [
"-c", "/etc/filebeat.yml",
"-e",
]
env:
- name: ELASTICSEARCH_HOST
value: elasticsearch
- name: ELASTICSEARCH_PORT
value: "9200"
- name: ELASTICSEARCH_USERNAME
value: elastic
- name: ELASTICSEARCH_PASSWORD
value: changeme
- name: ELASTIC_CLOUD_ID
value:
- name: ELASTIC_CLOUD_AUTH
value:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
securityContext:
runAsUser: 0
# If using Red Hat OpenShift uncomment this:
#privileged: true
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 100Mi
volumeMounts:
- name: config
mountPath: /etc/filebeat.yml
readOnly: true
subPath: filebeat.yml
- name: data
mountPath: /usr/share/filebeat/data
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
- name: varlog
mountPath: /var/log
readOnly: true
volumes:
- name: config
configMap:
defaultMode: 0600
name: filebeat-config
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
- name: varlog
hostPath:
path: /var/log
# data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
- name: data
hostPath:
path: /var/lib/filebeat-data
type: DirectoryOrCreate
Let me know if that helps you.
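As a final check (assuming a reasonably recent kubectl; on older versions the flag is just --dry-run), you can validate the edited manifest without creating anything:
$ kubectl apply --dry-run=client -f filebeat-kubernetes.yaml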