Where is the log file for `kubectl logs pod/yourpod` - kubernetes

When I type kubectl logs pod/yourpod to get my pod's logs, behind the scenes, k8s must read the log from somewhere in my pod.
What's the default path to the log generated by my container?
How to change the path?
Inside my container, my process uses sigs.k8s.io/controller-runtime/pkg/log to generate logs.

It is the console output (stdout / stderr) that is captured by the container runtime and made available to the API server by the kubelet running on the node.
So there is no real log file as such, though the container runtime usually has a means of buffering the logs to the node's filesystem.
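For example, with Docker as the container runtime and its default json-file driver, you can read the same output either through the API server or straight from the runtime's buffer on the node. This is only a sketch: the container ID is a placeholder you would look up yourself (e.g. with kubectl describe pod or docker ps).
kubectl logs yourpod
# on the node itself, json-file driver assumed; <container-id> is a placeholder
sudo cat /var/lib/docker/containers/<container-id>/<container-id>-json.log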

Related

Kubernetes get log within container

Background: I use glog to register a signal handler, but it cannot kill the init process (PID=1) with a kill syscall. That way, even when a deadly signal like SIGABRT is raised, the Kubernetes controller manager won't be able to tell that the pod is actually not functioning, and thus kill the pod and restart a new one.
My idea is to add logic to my readiness/liveness probe: check the log content of the current container to see whether it's in a healthy state.
I've tried looking at the logs on the container's local filesystem /var/log, but haven't found anything useful.
I'm wondering if it's possible to issue an HTTP request to somewhere to get the complete log? I assume it's stored somewhere.
You can find the Kubernetes pod logs on the node the pod is running on at:
/var/log/pods
and, if using Docker containers, at:
/var/lib/docker/containers
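As for fetching the logs with an HTTP request: the API server exposes a log subresource for every pod, so you can pull the same content kubectl logs returns with a raw API call. A minimal sketch, where <namespace>, <pod-name>, <token> and <apiserver> are placeholders for your own values:
kubectl get --raw "/api/v1/namespaces/<namespace>/pods/<pod-name>/log"
# the same endpoint over plain HTTPS
curl -k -H "Authorization: Bearer <token>" "https://<apiserver>:6443/api/v1/namespaces/<namespace>/pods/<pod-name>/log"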
Containers are Ephemeral
Docker containers emit logs to the stdout and stderr output streams. Because containers are stateless, the logs are stored on the Docker host in JSON files by default.
The default logging driver is json-file. The logs are then annotated with the log origin, either stdout or stderr, and a timestamp. Each log file contains information about only one container.
As @Uri Loya said, you can find these JSON log files in the /var/lib/docker/containers/ directory on a Linux Docker host. Here's how you can access them:
/var/lib/docker/containers/<container id>/<container id>-json.log
You can collect the logs with a log aggregator and store them in a place where they'll be available forever. It's dangerous to keep logs on the Docker host because they can build up over time and eat into your disk space. That's why you should use a central location for your logs and enable log rotation for your Docker containers.
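If you stay with the json-file driver, Docker itself can rotate these files. A minimal sketch, assuming you are allowed to edit /etc/docker/daemon.json on each host; the size and file-count values are only illustrative:
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "log-driver": "json-file",
  "log-opts": { "max-size": "10m", "max-file": "3" }
}
EOF
sudo systemctl restart docker   # may restart running containers unless live-restore is enabled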

How do we check container logs in kubernetes before they are written to the log file?

kubectl logs -f <pod-name>
This command shows the logs from the container log file.
Basically, I want to check the difference between "what is generated by the container" and "what is written to the log file".
I see some unusual binary logs, so I just want to find out if the container is creating those binary logs or the logs are not properly getting written to the log file.
"Unusual logs":
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\
Usually, containerized applications do not write to log files but send messages to stdout/stderr; there is no point in storing log files inside containers, as they will be deleted when the pod is deleted.
What you see when running
kubectl logs -f <pod-name>
are the messages sent to stdout/stderr. There are no container-specific logs here, only application logs.
If, for some reason, your application does write to a log file, you can check it by exec-ing into the pod with e.g.
kubectl exec -it <pod-name> -- /bin/bash
and reading the logs as you would in a shell.
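To compare "what is generated by the container" with "what is written to the log file" (the original question), you can look at both sides byte by byte. A rough sketch, assuming a Docker node with the json-file driver; the pod name and container ID are placeholders:
kubectl logs <pod-name> | head -c 200 | od -c
# on the node, the file the runtime actually wrote
sudo head -c 200 /var/lib/docker/containers/<container-id>/<container-id>-json.log | od -c
If the null bytes already show up in the first command, the application itself is emitting them.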
Edit
Application logs
A container engine handles and redirects any output generated to a containerized application's stdout and stderr streams. For example, the Docker container engine redirects those two streams to a logging driver, which is configured in Kubernetes to write to a file in JSON format.
Those logs are also saved to
/var/log/containers/
/var/log/pods/
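On the node you can see that /var/log/containers is just a set of symlinks into /var/log/pods. A quick sketch; the directory layout shown under /var/log/pods follows the usual <namespace>_<pod-name>_<pod-uid> convention, and all names are placeholders:
ls -l /var/log/containers/ | head
sudo tail -n 5 /var/log/pods/<namespace>_<pod-name>_<pod-uid>/<container-name>/0.log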
By default, if a container restarts, the kubelet keeps one terminated container with its logs. If a pod is evicted from the node, all corresponding containers are also evicted, along with their logs.
Everything you see by issuing the command
kubectl logs <pod-name>
is what the application sent to stdout/stderr, or what was redirected to stdout/stderr. For example, nginx:
The official nginx image creates a symbolic link from /var/log/nginx/access.log to /dev/stdout, and creates another symbolic link from /var/log/nginx/error.log to /dev/stderr, overwriting the log files and causing logs to be sent to the relevant special device instead.
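You can verify that, or reuse the same trick for your own application, with something like the following; the pod name and the application log path are placeholders:
# confirm the nginx log files are really symlinks to the container's streams
kubectl exec -it <nginx-pod> -- ls -l /var/log/nginx/
# same idea for your own app, e.g. in the image build or an entrypoint script
ln -sf /dev/stdout /var/log/myapp/app.log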
Node logs
Components that do not run inside containers (e.g. the kubelet, the container runtime) write to journald. Otherwise, they write to .log files inside the /var/log/ directory (see the journalctl example after the excerpt below).
Excerpt from official documentation:
For now, digging deeper into the cluster requires logging into the relevant machines. Here are the locations of the relevant log files. (note that on systemd-based systems, you may need to use journalctl instead)
Master
/var/log/kube-apiserver.log - API Server, responsible for serving the API
/var/log/kube-scheduler.log - Scheduler, responsible for making scheduling decisions
/var/log/kube-controller-manager.log - Controller that manages replication controllers
Worker Nodes
/var/log/kubelet.log - Kubelet, responsible for running containers on the node
/var/log/kube-proxy.log - Kube Proxy, responsible for service load balancing
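On a systemd-based node those per-component .log files usually do not exist and journald holds the output instead. A small sketch; the unit names depend on how your cluster was installed:
journalctl -u kubelet --since "1 hour ago"
journalctl -u containerd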
The only way I could imagine this working is to make use of some external logging facility like Syslog or Elasticsearch or anything else. Configure your application to send logs directly to the logging facility (avoiding agents like fluentd or logstash which parse logs from files).
All modern languages have support for external logging. You can also configure Docker to send logs to a syslog server.
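For example, Docker can be switched to the syslog driver either per container or daemon-wide; the syslog address and image name below are placeholders. Keep in mind that kubectl logs generally stops working for containers whose logs no longer go through the json-file driver.
docker run --log-driver=syslog --log-opt syslog-address=udp://<syslog-host>:514 <image>
# or daemon-wide by setting "log-driver": "syslog" in /etc/docker/daemon.json and restarting Docker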
A simple way to check logs in Kubernetes:
=> If the pod has a single container
kubectl logs POD_NAME
=> If the pod has multiple containers
kubectl logs POD_NAME -c CONTAINER_NAME -n NAMESPACE

Where does K8s store the logs that it prints when running kubectl logs -f

I am pretty sure it writes them to disk somewhere. Otherwise, if the container runs for several hours and logs a lot, it would exceed what stderr can hold, I think. No?
Is it possible to compress and download the output of kubectl logs, i.e. compress it on the container side before downloading it?
First, take a look at the official Kubernetes logging documentation (logging-kubernetes).
In most cases, Docker container logs are put in the /var/log/containers directory on your host (the host they are deployed on). Docker supports multiple logging drivers, but the Kubernetes API does not support driver configuration.
Once a container terminates or restarts, the kubelet keeps its logs on the node. To prevent these files from consuming all of the host's storage, a log rotation mechanism should be set up on the node.
You can use kubectl logs with the --previous flag to retrieve logs from a previous instantiation of a container, in case the container has crashed.
If you want to take a look at additional logs, for example journald logs on Linux, they can be retrieved with the journalctl command:
$ journalctl -u docker
You can implement cluster-level logging and expose or push logs directly from every application, but the implementation of such a logging mechanism is not within the scope of Kubernetes.
Also, there are many tools offered for Kubernetes log management and aggregation - see: logs-tools.
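On compressing: kubectl logs has no compression option that I'm aware of, so you either compress the stream as you save it locally, or compress a file inside the container and copy it out. A sketch with placeholder names and paths:
# compress while downloading (the stream itself still crosses the network uncompressed)
kubectl logs <pod-name> --all-containers=true | gzip > <pod-name>-logs.gz
# or, if the app also writes its own file, compress it in the container first and copy it out
kubectl exec <pod-name> -- gzip -k /path/to/app.log   # -k keeps the original; drop it if your gzip lacks it
kubectl cp <namespace>/<pod-name>:/path/to/app.log.gz ./app.log.gz   # requires tar in the container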

I cannot see the sysout log on a Kubernetes pod

I use logback to save the log to a file.
However, if I log on to the pod and look at the log file, the log written with logback is there, but I cannot find the sysout log.
kubectl exec -it pod-name bash
Also, if I check the Kubernetes pod log, I cannot see the log written with logback; I can only see the log written with sysout.
kubectl logs -f pod-name
In addition, when I use logback and sysout together in a function, I cannot find any log written with logback.
Do you know how to fix it?
kubectl logs -f <pod_name> will only show what the container writes to stdout/stderr, not what is actually happening inside the container (any calculations, or data written to files).
Without seeing the function it is hard to say what is going on; however, keeping logs inside the pod is not the best idea. If something happens to the pod, it crashes, or any error occurs, Kubernetes will restart it and all the data will be lost.
The proper way is to have your application log to stdout and then use an external tool to capture that output and write it to a file. Fluentd is frequently used for that purpose, with aggregation in e.g. Elasticsearch.
I would suggest you look at the K8s logging architecture and Elasticsearch.
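To see the split concretely you can compare the two sides; the log file path below is whatever your logback file appender is configured with, so treat it as a placeholder:
# the logback file appender output, visible only inside the container
kubectl exec -it pod-name -- tail -n 20 /path/to/logback/app.log
# what Kubernetes captured from stdout/stderr (this is where System.out ends up)
kubectl logs --tail=20 pod-name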

How to disable Kubernetes logging to a disk file?

I have tried
kubectl create -f x.yaml --logtostderr=true
but it didn't work.
The Kubernetes API doesn't currently expose a way to change this logging behavior. It will rotate the log files as appropriate to avoid filling up the disk, but if you need more control, you'll have to modify the Docker daemon on each node to change its logging driver.
Or, if you want to do it for a specific application, change the command in your x.yaml file that starts the app so that it redirects stdout and stderr to /dev/null inside the container.
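A quick way to see what the "none" driver does, outside Kubernetes, with plain Docker (the image and container name are just examples):
docker run --name nolog --log-driver=none alpine echo "this output is not captured"
docker logs nolog   # fails, because the configured logging driver does not support reading
Setting "log-driver": "none" in /etc/docker/daemon.json applies the same behavior to every container on that node, which also means kubectl logs will return nothing for them.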