I want to store kubelet logs in a specific path so I can ship them to my ELK stack. According to the Kubernetes reference, the --log-dir flag is deprecated since v1.23. here
How can I do this in my on-premise Kubernetes cluster (v1.26)?
OS = Oracle Linux 8.6
As per the Kubernetes v1.26 Logging Architecture, the log locations work as described below in the official document:
On Linux nodes that use systemd, the kubelet and container runtime write to journald by default. You use journalctl to read the systemd journal; for example: journalctl -u kubelet.
If systemd is not present, the kubelet and container runtime write to .log files in the /var/log directory. If you want to have logs written elsewhere, you can indirectly run the kubelet via a helper tool, kube-log-runner, and use that tool to redirect kubelet logs to a directory that you choose.
You can also set a logging directory using the deprecated kubelet command line argument --log-dir. However, the kubelet always directs your container runtime to write logs into directories within /var/log/pods. For more information on kube-log-runner, read System Logs.
As per the Kubernetes v1.26 deprecations and major changes doc:
Generally available (GA) or stable API versions may be marked as deprecated but must not be removed within a major version of Kubernetes.
So, you can also give the following a try:
The --log-dir flag lets you specify the directory where kubelet log files will be written. For example, to store the kubelet logs in the directory /var/log/kubelet, you can use the following command:
kubelet --log-dir=/var/log/kubelet --logtostderr=false
You should also ensure that the directory you specify exists and is writable by the user running the kubelet process.
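For example, a hedged sketch (note that the klog --log-dir option only takes effect when logging to stderr is disabled, hence --logtostderr=false above; the drop-in mechanism below assumes a kubeadm-style install):
# Create the target directory and make sure the kubelet user (usually root) can write to it
sudo mkdir -p /var/log/kubelet
sudo chown root:root /var/log/kubelet
sudo chmod 750 /var/log/kubelet
# Pass the flags to the kubelet, e.g. via KUBELET_EXTRA_ARGS in /etc/sysconfig/kubelet:
#   KUBELET_EXTRA_ARGS=--log-dir=/var/log/kubelet --logtostderr=false
sudo systemctl daemon-reload && sudo systemctl restart kubelet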
Related
How to force all kubernetes services (proxy, kubelet, apiserver..., containers) to write logs to /var/logs?
For example:
/var/logs/apiServer.log
or:
/var/logs/proxy.log
Can I use syslog config to do that? What would be an example of that config?
I have already tried the journald configuration ForwardToSyslog=yes.
Just the first thing that comes to my mind: create a sidecar container that will gather all the logs in one place.
The Complete Guide to Kubernetes Logging.
That's a pretty broad question that should be divided into a few parts. Kubernetes stores different types of logs in different places.
Kubernetes Container Logs (out of scope for this question, but simply kubectl logs <podname>, plus -n for the namespace if it is not default, and -c to specify a container inside the pod)
Kubernetes Node Logs
Kubernetes Cluster Logs
Kubernetes Node Logs
Depending on your operating system and services, there are various node-level logs you can collect, such as kernel logs or systemd logs. On nodes with systemd both the kubelet and container runtime write to journald. If systemd is not present, they write to .log files in the /var/log directory.
You can access systemd logs with the journalctl command.
Tutorial: Logging with journald has a thorough explanation of how you can configure journalctl to gather logs, both with log aggregation tools like ELK and without them. journald log filtering can simplify your life.
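A few journalctl examples (the unit name and time window are just illustrations):
journalctl -u kubelet -b            # kubelet entries since the last boot
journalctl -u kubelet -f            # follow kubelet logs live, like tail -f
journalctl -u kubelet --since "1 hour ago" -o json   # JSON output, handy for ELK-style pipelines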
There are two ways of centralizing journal entries via syslog:
syslog daemon acts as a journald client (like journalctl or Logstash or Journalbeat)
journald forwards messages to syslog (via socket)
Option 1) is slower – reading from the journal is slower than reading from the socket – but captures all the fields from the journal.
Option 2) is safer (e.g. no issues with journal corruption), but the journal will only forward traditional syslog fields (like severity, hostname, message..)
As for ForwardToSyslog=yes in /etc/systemd/journald.conf: it makes journald write messages, in syslog format, to the socket /run/systemd/journal/syslog. You can then have rsyslog, for example, read from that socket and either process the logs there or move them to the desired place.
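A rough sketch of option 2; the rsyslog rule, file name and paths below are assumptions meant to illustrate the idea, not a tested setup:
# Enable forwarding in journald and restart it
sudo sed -i 's/^#\?ForwardToSyslog=.*/ForwardToSyslog=yes/' /etc/systemd/journald.conf
sudo systemctl restart systemd-journald
# On the rsyslog side, route kubelet messages to a file of your choice,
# e.g. with a rule like this in /etc/rsyslog.d/30-kubelet.conf:
#   if $programname == 'kubelet' then /var/log/kubelet/kubelet.log
sudo systemctl restart rsyslog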
Kubernetes Cluster Logs
By default, system components outside a container write files to journald, while components running in containers write to /var/log directory. However, there is the option to configure the container engine to stream logs to a preferred location.
Kubernetes doesn’t provide a native solution for logging at cluster level. However, there are other approaches available to you:
Use a node-level logging agent that runs on every node
Add a sidecar container for logging within the application pod (see the sketch after this list)
Expose logs directly from the application.
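To illustrate the sidecar option from the list above, here is a minimal, untested sketch (names, image and file path are placeholders): the app writes to a file on a shared emptyDir volume and a sidecar streams that file to its own stdout, where node-level tooling can pick it up.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-sidecar
spec:
  containers:
  - name: app
    image: busybox
    command: ["/bin/sh", "-c", "while true; do date >> /var/log/app/app.log; sleep 5; done"]
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
  - name: log-sidecar
    image: busybox
    command: ["/bin/sh", "-c", "touch /var/log/app/app.log; tail -f /var/log/app/app.log"]
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
  volumes:
  - name: app-logs
    emptyDir: {}
EOF
kubectl logs app-with-log-sidecar -c log-sidecar then shows the file's contents, and the sidecar's stream ends up under /var/log/pods like any other container log.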
P.S. I have NOT tried the approach below, but it looks promising; check it out and maybe it will help you with your far-from-easy task.
The easiest way of setting up a node-level logging agent is to configure a DaemonSet to run the agent on each node
helm install --name st-agent \
--set infraToken=xxxx-xxxx \
--set containerToken=xxxx-xxxx \
--set logsToken=xxxx-xxxx \
--set region=US \
stable/sematext-agent
This setup will, by default, send all cluster and container logs to a central location for easy management and troubleshooting. With a tiny bit of added configuration, you can configure it to collect node-level logs and audit logs as well.
kubectl logs -f <pod-name>
This command shows the logs from the container log file.
Basically, I want to check the difference between "what is generated by the container" and "what is written to the log file".
I see some unusual binary logs, so I just want to find out if the container is creating those binary logs or the logs are not properly getting written to the log file.
"Unusual logs":
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\
Usually, containerized applications do not write to log files but send messages to stdout/stderr; there is no point in storing log files inside containers, as they will be deleted when the pod is deleted.
What you see when running
kubectl logs -f <pod-name>
are messages sent to stdout/stderr. There are no container-specific logs here, only application logs.
If, for some reason, your application does write to a log file, you can check it by exec'ing into the pod with e.g.
kubectl exec -it <pod-name> -- /bin/bash
and read logs as you would in shell.
Edit
Application logs
A container engine handles and redirects any output generated to a containerized application's stdout and stderr streams. For example, the Docker container engine redirects those two streams to a logging driver, which is configured in Kubernetes to write to a file in JSON format.
Those logs are also saved to
/var/log/containers/
/var/log/pods/
By default, if a container restarts, the kubelet keeps one terminated container with its logs. If a pod is evicted from the node, all corresponding containers are also evicted, along with their logs.
Everything you see by issuing the command
kubectl logs <pod-name>
is what the application sent to stdout/stderr, or what was redirected to stdout/stderr. For example, nginx:
The official nginx image creates a symbolic link from /var/log/nginx/access.log to /dev/stdout, and creates another symbolic link from /var/log/nginx/error.log to /dev/stderr, overwriting the log files and causing logs to be sent to the relevant special device instead.
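You can verify this yourself (the pod name here is a placeholder):
kubectl exec my-nginx -- ls -l /var/log/nginx/
# expect something like:
#   access.log -> /dev/stdout
#   error.log -> /dev/stderr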
Node logs
Components that do not run inside containers (e.g. the kubelet and the container runtime) write to journald. Otherwise, they write to .log files inside the /var/log/ directory.
Excerpt from official documentation:
For now, digging deeper into the cluster requires logging into the relevant machines. Here are the locations of the relevant log files. (note that on systemd-based systems, you may need to use journalctl instead)
Master
/var/log/kube-apiserver.log - API Server, responsible for serving the API
/var/log/kube-scheduler.log - Scheduler, responsible for making scheduling decisions
/var/log/kube-controller-manager.log - Controller that manages replication controllers
Worker Nodes
/var/log/kubelet.log - Kubelet, responsible for running containers on the node
/var/log/kube-proxy.log - Kube Proxy, responsible for service load balancing
The only way I could imagine this working is to make use of some external logging facility like Syslog or Elasticsearch or anything else. Configure your application to send logs directly to the logging facility (avoiding agents like fluentd or logstash, which parse logs from files).
All modern languages have support for external logging. You can also configure Docker to send logs to a syslog server.
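For example, at the Docker level (the address and tag are placeholders):
# Per container:
docker run --log-driver=syslog \
  --log-opt syslog-address=udp://192.168.0.10:514 \
  --log-opt tag=myapp \
  nginx
# Or as the default for all containers, in /etc/docker/daemon.json:
#   { "log-driver": "syslog", "log-opts": { "syslog-address": "udp://192.168.0.10:514" } }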
A simple way to check logs in Kubernetes:
=> If the pod has a single container
kubectl logs POD_NAME
=> If the pod has multiple containers
kubectl logs POD_NAME -c CONTAINER_NAME -n NAMESPACE
I am pretty sure it writes them to disk somewhere. Otherwise, if the container runs for several hours and logs a lot, it would exceed what stderr can hold, I think. No?
Is it possible to compress and download the logs from kubectl logs? I.e. compress them on the container without downloading them first?
Firstly, take a look at the official logging-kubernetes documentation.
In most cases, Docker container logs are put in the /var/log/containers directory on your host (the host they are deployed on). Docker supports multiple logging drivers, but the Kubernetes API does not support driver configuration.
Once a container terminates or restarts, the kubelet keeps its logs on the node. To prevent these files from consuming all of the host's storage, a log rotation mechanism should be set up on the node.
You can use kubectl logs to retrieve logs from a previous instantiation of a container with --previous flag, in case the container has crashed.
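For example (the names are placeholders):
kubectl logs my-pod -c my-container              # current instance
kubectl logs my-pod -c my-container --previous   # previous (crashed) instance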
If you want to take a look at additional logs, for example on Linux, journald logs can be retrieved using the journalctl command:
$ journalctl -u docker
You can implement cluster-level logging and expose or push logs directly from every application, but the implementation of such a logging mechanism is not in the scope of Kubernetes.
There are also many tools offered for Kubernetes logging management and aggregation - see: logs-tools.
I have created a k8s cluster using kubeadm and have a couple of questions about the kube-controller-manager and kuber-apiserver components.
When created using kubeadm, those components are started as pods, not systemd daemons. If I kill any of those pods, they are restarted, but who is restarting them? I haven't seen any ReplicationController or Deployment in charge of doing that.
What is the "right" way of updating their configuration? Imagine I want to change the authorization-mode of the api server. In the master node we can find a /etc/kubernetes/manifests folder with a kube-apiserver.yaml file. Are we supposed to change this file and just kill the pod so that it restarts with the new config?
The feature you've described is called Static Pods. Here is a part of documentation that describes their behaviour.
Static pods are managed directly by kubelet daemon on a specific node, without the API server observing it. It does not have an associated replication controller, and kubelet daemon itself watches it and restarts it when it crashes. There is no health check. Static pods are always bound to one kubelet daemon and always run on the same node with it.
Kubelet automatically tries to create a mirror pod on the Kubernetes API server for each static pod. This means that the pods are visible on the API server but cannot be controlled from there.
The configuration files are just standard pod definitions in json or yaml format in a specific directory. Use kubelet --pod-manifest-path=<the directory> to start kubelet daemon, which periodically scans the directory and creates/deletes static pods as yaml/json files appear/disappear there. Note that kubelet will ignore files starting with dots when scanning the specified directory.
When kubelet starts, it automatically starts all pods defined in directory specified in --pod-manifest-path= or --manifest-url= arguments, i.e. our static-web.
Usually, those manifests are stored in the directory /etc/kubernetes/manifests.
If you make any changes to any of those manifests, that resource will be adjusted just as if you had run the kubectl apply -f something.yaml command.
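So, for your authorization-mode example, the usual workflow on the control-plane node looks roughly like this (paths assume kubeadm defaults):
sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml   # adjust --authorization-mode
# No need to kill the pod yourself: the kubelet watches this directory and
# recreates the kube-apiserver static pod with the new flags.
kubectl -n kube-system get pods -w                      # watch the mirror pod get recreated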
Question
Are there known available Fluentd daemonset for journald docker logging driver so that I can send K8S pod logs to Elasticsearch?
Background
As described in add support to log in kubeadm, the default logging driver for K8S installed by kubeadm is journald.
the community is collectively moving away from files on disk at every place possible in general, and this would unfortunately be a step backwards. ...
You can edit your /etc/docker/daemon.json to set the default log driver to json-file and set a max size and max file count to take care of log rotation. After that, logs won't be written to journald and you will be able to send your log files to ES.
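A sketch of what that daemon.json could look like (the sizes are arbitrary examples):
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "5"
  }
}
EOF
sudo systemctl restart docker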
However, the K8S EFK addon and Fluentd K8S or Aggregated Logging in Tectonic still expect to look for files in /var/log/containers in the host, if I understood correctly.
It looks like the Alternative fluentd docker image, designed as a drop-in replacement for the fluentd-es-image, adopts the journald driver. However, I could not get it to run the pods.
The docker log driver journald sends docker logs to systemd-journald.service.
So, we need to make systemd-journald persistently save logs to /var/log/journal.
Edit /etc/systemd/journald.conf:
...
[Journal]
Storage=persistent
#Compress=yes
...
then restart to apply changes:
systemctl restart systemd-journald
ls -l /var/log/journal
Since /var/log has been mounted into the fluentd pod, that's all that is needed; restart the fluentd pod. It works for me (as of 2021-04).
By the way, I am using the fluentd yaml from:
https://github.com/fluent/fluentd-kubernetes-daemonset/blob/master/fluentd-daemonset-elasticsearch-rbac.yaml
and the FLUENTD_SYSTEMD_CONF env value should not be set to disable.
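If that variable has been set to disable, one way to clear it (assuming the DaemonSet from that manifest is named fluentd and lives in kube-system) is:
# Remove FLUENTD_SYSTEMD_CONF so the systemd input plugin stays enabled;
# the image only disables it when the variable is literally "disable"
kubectl -n kube-system set env daemonset/fluentd FLUENTD_SYSTEMD_CONF-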