My team has a special requirement to delete all pod logs every X hours. This is because the logs contain some sensitive info - we read and process them with fluentbit, but it's a problem that the logs are still there afterwards.
I couldn't find any standard way to rotate them by time, only recommendations about the Docker daemon logging driver, which rotates by file size.
Is it possible to create a k8s cronjob to do something like "echo ''> /path/to/logfile" per pod/container? If yes, how?
I'd appreciate any help here.
Thanks!
Kubernetes doesn’t provide built-in log rotation, but this functionality is available in many tools.
According to Kubernetes Logging Architecture:
An important consideration in node-level logging is implementing log
rotation, so that logs don't consume all available storage on the
node. Kubernetes is not responsible for rotating logs, but rather a
deployment tool should set up a solution to address that. For example,
in Kubernetes clusters, deployed by the kube-up.sh script, there is a
logrotate tool configured to run each hour. You can also set up a
container runtime to rotate an application's logs automatically.
Below are some examples of how log rotation can be implemented:
Enable Log Rotation in Kubernetes Cluster
logrotate-container
You can use them as a guide.
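If you do want the exact "echo '' > logfile" behaviour from the question, here is a rough sketch of the truncation step. This is an assumption-heavy example: it has to run on every node (for instance from a privileged DaemonSet with a hostPath mount, or a node-level cron entry), it assumes the standard kubelet layout under /var/log/pods, and it truncates the files rather than deleting them because the container runtime keeps them open. Also make sure fluentbit has already read everything you need before the truncation runs.

#!/bin/sh
# Sketch only: truncate every container log file on this node.
# -L follows the symlinks some runtimes (e.g. Docker) place under /var/log/pods.
# Truncating (instead of deleting) keeps the runtime's open file handles valid.
find -L /var/log/pods -name '*.log' -exec truncate -s 0 {} +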
Related
I had a few pods restarting in my EKS cluster. I could see that they were SIGKILL'ed by K8s. Now I would like to know the reason but I can't because the Kubernetes events TTL is only one hour.
I am checking the control plane logs for the EKS cluster in CloudWatch now but don't know which of them contains these messages as well.
Which of these logs contains the events from K8s?
Yes, you are right: the default value of --event-ttl is 60m00s, and unfortunately there is currently no native option to change that value in EKS. The GitHub issue is still open without any promising timeframe.
As per the guide you sent, and as per Streaming EKS Metrics and Logs to CloudWatch, if you configured everything correctly you can find the logs under “Container Insights” in the drop-down menu.
The logs you might want to check are:
Control plane logs consist of scheduler logs, API server logs, and audit logs.
Data plane logs consist of kubelet and container runtime engine logs.
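If you prefer searching those control plane streams from the CLI rather than the console, here is a rough sketch using the AWS CLI. The cluster name, stream prefix, and search term are placeholders, and it assumes the default /aws/eks/<cluster-name>/cluster log group that EKS creates when control plane logging is enabled:

# List which control plane streams actually exist for your cluster
aws logs describe-log-streams \
  --log-group-name /aws/eks/my-cluster/cluster \
  --query 'logStreams[].logStreamName'

# Search one stream family (e.g. kube-controller-manager) for a pod name
aws logs filter-log-events \
  --log-group-name /aws/eks/my-cluster/cluster \
  --log-stream-name-prefix kube-controller-manager \
  --filter-pattern '"my-pod-name"'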
Can you please specify which logs you have in your CloudWatch control plane logs and what you have already checked? Maybe that will help.
I currently have a CronJob whose job is scheduled at some period of time and runs in a pattern. I want to export the logs of each pod run to a file at the path temp/logs/FILENAME,
with FILENAME being the timestamp at which the run was created. How am I going to do that? I hope someone can provide a solution. If you need to add a script, please use Python or a shell command. Thank you.
According to Kubernetes Logging Architecture:
In a cluster, logs should have a separate storage and lifecycle
independent of nodes, pods, or containers. This concept is called
cluster-level logging.
Cluster-level logging architectures require a separate backend to
store, analyze, and query logs. Kubernetes does not provide a native
storage solution for log data. Instead, there are many logging
solutions that integrate with Kubernetes.
Which brings us to Cluster-level logging architectures:
While Kubernetes does not provide a native solution for cluster-level
logging, there are several common approaches you can consider. Here
are some options:
Use a node-level logging agent that runs on every node.
Include a dedicated sidecar container for logging in an application pod.
Push logs directly to a backend from within an application.
Kubernetes does not provide log aggregation of its own. Therefore, you need a local agent to gather the data and send it to a central log management system. See some options below:
Fluentd
ELK Stack
You can find all logs that Pods generate at /var/log/containers/*.log
on each Kubernetes node. You could work with them manually if you prefer, using simple scripts, but you will have to keep in mind that Pods can run on any node (if not restricted), and nodes may come and go.
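For the temp/logs/FILENAME requirement specifically, here is a minimal shell sketch. It assumes the CronJob's pods carry the standard job-name label, that kubectl can reach the namespace, and that the namespace value below is a placeholder:

#!/bin/sh
# Save the logs of the most recent pod created by the CronJob to temp/logs/<timestamp>.log
NAMESPACE=default                      # placeholder - adjust to your CronJob's namespace
TS=$(date +%Y-%m-%d_%H-%M-%S)          # timestamp used as the file name
mkdir -p temp/logs
# Pods created by Jobs/CronJobs carry a job-name label; pick the newest one
POD=$(kubectl get pods -n "$NAMESPACE" -l job-name \
  --sort-by=.metadata.creationTimestamp -o name | tail -n 1)
kubectl logs -n "$NAMESPACE" "$POD" > "temp/logs/${TS}.log"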
Consider sending your logs to an external system like Elasticsearch or Grafana Loki and managing them there.
When I try to retrieve logs from my pods, I notice that K8s does not print all the logs; I know that because the logs about microservice initialization are not present at the head of the output.
Considering that my pods print a lot of logs over a long observation period, does someone know if K8s has a limit on how many logs it shows?
I also tried setting the --since parameter in the kubectl logs command to get all logs in a specific time range, but it seems to have no effect.
Thanks.
The container runtime engine typically manages container (pod) logs. Do check the settings on the runtime engine in use.
There was a related issue with this logging behaviour earlier; attaching the link for reference: https://github.com/kubernetes/kubernetes/pull/78071
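As a concrete place to start on the node: with CRI runtimes such as containerd, the rotation limits are kubelet settings (containerLogMaxSize / containerLogMaxFiles). A small sketch, assuming the common kubeadm config path; the pod and container names are placeholders:

# See whether rotation limits are set explicitly on the node
# (defaults are roughly 10Mi per file and 5 files if nothing is configured)
grep -iE 'containerLogMax(Size|Files)' /var/lib/kubelet/config.yaml

# Check how much log data is actually retained for a given container
ls -lh /var/log/pods/my-namespace_my-pod_*/my-container/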
There are some answers already; I'll add more details and sources.
The answer is quite short: there is no limit other than free space. By default, Kubernetes is not responsible for log rotation:
An important consideration in node-level logging is implementing log
rotation, so that logs don't consume all available storage on the
node. Kubernetes is not responsible for rotating logs, but rather a
deployment tool should set up a solution to address that. For example,
in Kubernetes clusters, deployed by the kube-up.sh script, there is a
logrotate tool configured to run each hour. You can also set up a
container runtime to rotate an application's logs automatically.
As William stated, Kubernetes doesn't provide log aggregation of its own and relies on the container runtime by default.
When a container running on Kubernetes writes its logs to stdout or
stderr streams, they are picked up by the kubelet service running on
that node, and are delegated to the container engine for handling
based on the logging driver configured in Kubernetes.
In most cases, Docker container logs will end up in the
/var/log/containers directory on your host. Docker supports multiple
logging drivers but, unfortunately, Kubernetes API does not support
driver configuration.
Once a container terminates or restarts, kubelet keeps its logs on the
node. To prevent these files from consuming all of the host’s storage,
a log rotation mechanism should be set on the node.
Kubernetes doesn’t provide built-in log rotation, but this
functionality is available in many tools, such as Docker’s log-opt, or
standard file shippers or even a simple custom cron job. When a
container is evicted from the node, so are its corresponding log files.
That means you can try to find the full logs in /var/log/containers and /var/log/pods. This part, from the official documentation, is more precise:
By default, if a container restarts, the kubelet keeps one terminated
container with its logs. If a pod is evicted from the node, all
corresponding containers are also evicted, along with their logs.
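Since kubectl logs reads the current log file for a container, older rotated chunks only exist on the node, and you can inspect them there directly. A sketch with placeholder names, assuming the standard /var/log/pods layout (exact rotated-file names depend on the runtime and kubelet version):

# On the node that runs the pod: list the current and rotated log files
ls -lh /var/log/pods/my-namespace_my-pod_*/my-container/

# 0.log is the live file; older rotated chunks are usually gzipped
zcat /var/log/pods/my-namespace_my-pod_*/my-container/*.gz 2>/dev/null | head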
To have good visibility and accessibility of logs, you may consider a dedicated solution for log storage, e.g. a node logging agent or streaming to a sidecar.
Please find below articles and official Kubernetes documentation with concepts and examples:
Kubernetes logging architecture
Practical guide to kubernetes
We are using a k8s cluster for one of our applications. The cluster is owned by another team and we don't have full control over it. We are trying to find metrics around resource utilization (CPU and memory), details about running containers/pods/nodes, etc. We need to find out how many parallel containers are running. The problem is they have exposed monitoring of the cluster via Prometheus, but with Prometheus we are not getting live data, and it does not have info about running containers.
My question is: which API is available by default in a k8s cluster and can give us everything we need? We don't want to read data from another client like Prometheus or anything else; we want to read metrics directly from the cluster so that the data is not stale. Any suggestions?
As you mentioned, you will need metrics-server (or heapster) to get that information.
You can confirm that your metrics server is running with kubectl top nodes / kubectl top pods, or just by checking whether there is a heapster or metrics-server pod present in the kube-system namespace.
Also, the command above would be able to show you the information you are looking for. I won't go into details, as here you can find a lot of clues and ways of looking at cluster resource usage. You should probably take a look at cAdvisor too, which should already be present in the cluster. It exposes a web UI that exports live information about all the containers on the machine.
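To read that data without going through Prometheus: the metrics-server data is served through the Kubernetes aggregated API itself, so plain kubectl is enough. A quick sketch:

# Human-readable view (requires metrics-server or heapster to be installed)
kubectl top nodes
kubectl top pods --all-namespaces

# The same live data over the raw aggregated API (metrics.k8s.io)
kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes
kubectl get --raw /apis/metrics.k8s.io/v1beta1/pods

# Count currently running pods straight from the API server
kubectl get pods --all-namespaces --field-selector=status.phase=Running -o name | wc -l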
Other than that, there are probably commercial ways of achieving what you are looking for, for example SignalFx and similar projects - but this will probably require the cluster administrator's involvement.
Is there a way to monitor the pod status and restart count of pods running in a GKE cluster with Stackdriver?
While I can see CPU, memory and disk usage metrics for all pods in Stackdriver, there seems to be no way of getting metrics about crashing pods or pods in a replica set being restarted due to crashes.
I'm using a Kubernetes replica set to manage the pods, hence they are respawned and created with a new name when they crash. As far as I can tell, the metrics in Stackdriver appear per pod name (which is unique for the lifetime of the pod), which doesn't seem very sensible.
Alerting on pod failures seems like such a natural thing that it's hard to believe it is not supported at the moment. The monitoring and alerting capabilities I get from Stackdriver for Google Container Engine, as they stand, seem rather useless, as they are all bound to pods whose lifetime can be very short.
So if this doesn't work out of the box, are there known workarounds or best practices on how to monitor for continuously crashing pods?
You can achieve this manually with the following:
In Logs Viewer, create the following filter:
resource.labels.project_id="<PROJECT_ID>"
resource.labels.cluster_name="<CLUSTER_NAME>"
resource.labels.namespace_name="<NAMESPACE, or default>"
jsonPayload.message:"failed liveness probe"
Create a metric by clicking on the Create Metric button above the filter input and filling in the details.
You may now track this metric in Stackdriver.
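The same log-based metric can also be created from the CLI if you prefer; a sketch using gcloud, with the metric name as a placeholder and the filter taken from the step above:

# Create the log-based metric from the command line (metric name is a placeholder)
gcloud logging metrics create failed-liveness-probes \
  --description="Containers failing their liveness probe" \
  --log-filter='resource.labels.cluster_name="<CLUSTER_NAME>" AND resource.labels.namespace_name="<NAMESPACE>" AND jsonPayload.message:"failed liveness probe"'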
Would be happy to be informed of a built-in metric instead of this.
There is a built-in metric now, so it's easy to dashboard and/or alert on it without setting up custom metrics:
Metric: kubernetes.io/container/restart_count
Resource type: k8s_container
In my cluster (a bare-metal k8s cluster), I use kube-state-metrics (https://github.com/kubernetes/kube-state-metrics) to do what you want. This project belongs to the Kubernetes repo and is quite easy to use. Once deployed, you can use the kube_pod_container_status_restarts metric to know if a container has restarted.
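For example, once kube-state-metrics is scraped by Prometheus, a query like the one below surfaces recently restarted containers. The Prometheus URL is a placeholder, and depending on the kube-state-metrics version the metric may carry a _total suffix:

# Ask Prometheus which containers restarted in the last hour
curl -sG 'http://prometheus.monitoring.svc:9090/api/v1/query' \
  --data-urlencode 'query=increase(kube_pod_container_status_restarts_total[1h]) > 0'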
Others have commented on how to do this with metrics, which is the right solution if you have a very large number of crashing pods.
An alternative approach is to treat crashing pods as discrete events or even log lines. You can do this with Robusta (disclaimer: I wrote this) with YAML like this:
triggers:
  - on_pod_update: {}
actions:
  - restart_loop_reporter:
      restart_reason: CrashLoopBackOff
  - image_pull_backoff_reporter:
      rate_limit: 3600
sinks:
  - slack
Here we're triggering an action named restart_loop_reporter whenever a pod updates. The data stream comes from the APIServer.
The restart_loop_reporter is an action which filters out non-crashing pods. Above it's configured to report only on CrashLoopBackOffs but you could remove that to report all crashes.
A benefit of doing it this way is that you can gather extra data about the crash automatically. For example, the above will fetch the pod's logs and forward them along with the crash report.
I'm sending the result here to Slack, but you could just as well send it to a structured output like Kafka (already builtin) or Stackdriver (not yet supported, but I can fix that if you like).
Remember that you can always raise a feature request if the available options are not enough.