Is there a way to check multiple pods in one command, perhaps something like
kubectl logs pods/pod1 pods/pod2 [container-name]
?
kubectl logs can't print logs from multiple pods specified by name.
However, you can use the -l (--selector) flag as a label query to filter on, e.g.:
# Return snapshot logs from all containers in pods defined by label app=nginx
kubectl logs -lapp=nginx --all-containers=true
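Alternatively, if the pods don't share a label, a plain shell loop over the pod names works (a minimal sketch; pod1 and pod2 are placeholder names):
# Print logs from each named pod in turn, prefixed with the pod name
for pod in pod1 pod2; do
  echo "=== $pod ==="
  kubectl logs "$pod" --all-containers=true
done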
If you need to print logs from multiple different pods, there are some projects that can help:
Kubetail: a Bash script that lets you aggregate (tail/follow) logs from multiple pods into one stream. This is the same as running kubectl logs -f, but for multiple pods (see the example after this list).
Kubelogs: a Bash script that uses your current kubectl context to interactively select namespaces and multiple pods to download logs from. It basically runs kubectl logs in a loop for all containers, redirecting the logs to local files.
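A typical Kubetail invocation, assuming the kubetail script is on your PATH, looks something like:
# Tail logs from every pod whose name contains "app2" in the given namespace
kubetail app2 -n my-namespace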
Related
I have a Kubernetes cluster in which different pods are running in different namespaces. How do I know if any pod failed?
Is there a single command to check the failed pod list or the restarted pod list?
And the reason for the restart (logs)?
It depends on whether you want detailed information or just want to check the last few failed pods.
I would recommend reading about the Logging Architecture.
If you would like this detailed information, you should use third-party software, as described in the Kubernetes documentation (Logging Using Elasticsearch and Kibana), or another tool such as Fluentd.
If you are using a cloud environment, you can use its integrated cloud logging tools (e.g. on Google Cloud Platform you can use Stackdriver).
If you want to check the logs to find out why a pod failed, this is well described in the K8s docs under Debug Running Pods.
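For a quick overview without third-party tooling, a field selector can list failed pods directly:
# List pods across all namespaces whose phase is Failed
kubectl get pods --all-namespaces --field-selector=status.phase=Failed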
If you want to get logs from a specific pod:
$ kubectl logs ${POD_NAME} -n ${NAMESPACE}
First, look at the logs of the affected container:
$ kubectl logs ${POD_NAME} ${CONTAINER_NAME}
If your container has previously crashed, you can access the previous container's crash log with:
$ kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME}
You can obtain additional information using:
$ kubectl get events -o wide --all-namespaces | grep <your condition>
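For example (the filter pattern here is only an illustration; adjust it to the condition you're after):
# Show only events that mention common failure reasons
kubectl get events -o wide --all-namespaces | grep -Ei 'backoff|failed|oomkilled'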
A similar question was posted in this SO thread; you can check it for more details.
This'll work: kubectl get pods --all-namespaces | grep -Ev '([0-9]+)/\1' (it filters out pods whose READY column shows all containers ready, leaving only the ones that aren't).
Also, Lens is pretty good in these situations.
Most of the time, the reason for an app failure is printed in the last logs of the previous container. You can see them by simply adding the --previous flag to your kubectl logs ... command.
Pods on our k8s cluster are scheduled with Airflow's KubernetesExecutor, which runs all Tasks in a new pod.
I have such a Task for which the pod instantly (after 1 or 2 seconds) crashes, and for which of course I want to see the logs.
This seems hard. As soon as the pod crashes, it gets deleted, along with the ability to retrieve crash logs. I already tried all of:
kubectl logs -f <pod> -p: cannot be used since these pods are named uniquely (courtesy of KubernetesExecutor).
kubectl logs -l label_name=label_value: I struggle to apply the labels to the pod (if this is a known/used way of working, I'm happy to try further).
A shared NFS is mounted on all pods at a fixed log directory. The failing pod, however, does not log to this folder.
When I am really quick, I run kubectl logs -f -l dag_id=sample_dag --all-containers (the dag_id label is added by Airflow)
between running and crashing and see Error from server (BadRequest): container "base" in pod "my_pod" is waiting to start: ContainerCreating. This might give me some clue, but:
these are only the last log lines
this is really backwards
I'm basically looking for the canonical way of retrieving logs from transient pods.
You need to enable remote logging. The code sample below is for S3. In airflow.cfg, set the following:
remote_logging = True
remote_log_conn_id = my_s3_conn
remote_base_log_folder = s3://airflow/logs
The my_s3_conn connection can be set up in Airflow under Admin > Connections. In the Conn Type dropdown, select S3.
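If you prefer the CLI over the UI, the connection can also be created with the airflow command; a sketch for Airflow 2.x (the conn-type and extras here are assumptions, adjust them to your setup):
airflow connections add my_s3_conn --conn-type s3 --conn-extra '{"region_name": "us-east-1"}'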
By running the command kubectl logs pod -c container,
I get a continuously autoscrolling list of logs. Is there any way I can jump to the end or see only the latest logs? I don't want to go through all the logs.
I have tried using -f as well. Any suggestions?
According to kubectl logs --help
you can use --tail
e.g. kubectl logs pod --tail=10
You have two ways to see recent logs: based on number of lines, or based on time.
kubectl logs --tail=20 nginx
This shows you the 20 most recent lines of logs.
kubectl logs --since=1h nginx
This shows you the logs from the last hour.
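The two flags also combine with -f, which addresses the autoscrolling problem directly: start from the last few lines and then follow, e.g.:
# Show the last 10 lines, then stream new ones as they arrive
kubectl logs -f --tail=10 nginx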
I am using the following command to check logs in Kubernetes.
kubectl logs pod_name -n namespace
It is printing all the logs from the beginning.
Is there any way to tail the logs or check logs between the given window?
Is it possible to rotate docker logs based on size or date?
Yes, you can extract the logs by using --since, like below:
kubectl logs --since=48h podname > 24Logs.txt
Then you can easily check the logs for a specific time within the last 48 hours.
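kubectl itself has no "until" flag, so for a true time window you can combine --timestamps with a text filter; a rough sketch (the two timestamps are placeholders, and this relies on the UTC RFC3339 timestamps comparing lexicographically):
# Keep only lines whose leading timestamp falls inside the window
kubectl logs --timestamps podname | awk '$1 >= "2021-06-01T10:00:00" && $1 <= "2021-06-01T11:00:00"'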
1: Yes, you can tail or filter by date.
It's as easy as running kubectl logs --help:
Options:
-c, --container='': Print the logs of this container
-f, --follow=false: Specify if the logs should be streamed.
--include-extended-apis=true: If true, include definitions of new APIs via calls to the API server. [default true]
--interactive=false: If true, prompt the user for input when required.
--limit-bytes=0: Maximum bytes of logs to return. Defaults to no limit.
--pod-running-timeout=20s: The length of time (like 5s, 2m, or 3h, higher than zero) to wait until at least one
pod is running
-p, --previous=false: If true, print the logs for the previous instance of the container in a pod if it exists.
-l, --selector='': Selector (label query) to filter on.
--since=0s: Only return logs newer than a relative duration like 5s, 2m, or 3h. Defaults to all logs. Only one of
since-time / since may be used.
--since-time='': Only return logs after a specific date (RFC3339). Defaults to all logs. Only one of since-time /
since may be used.
--tail=-1: Lines of recent log file to display. Defaults to -1 with no selector, showing all log lines otherwise
10, if a selector is provided.
--timestamps=false: Include timestamps on each line in the log output
2: Docker stores the container logs on the host under /var/lib/docker/containers/{ContainerId}, so you could copy/truncate the logs directly.
That won't have any impact on the container or the pod.
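For example, to empty a container's log file in place (illustrative; run as root on the node, with the real long container ID substituted):
# Truncate the json log file to zero bytes without deleting it
truncate -s 0 /var/lib/docker/containers/<container_id>/<container_id>-json.log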
Is there any way to tail the logs or check logs between the given window?
To tail the logs, use the -f option:
kubectl logs pod_name -n namespace -f
Is it possible to rotate docker logs based on size or date?
You can query logs from x lines ago or since a time range. Take a look at the --tail and --since options:
kubectl logs [-f] [-p] POD [-c CONTAINER]
Examples
Return snapshot logs from pod nginx with only one container
kubectl logs nginx
Return snapshot of previous terminated ruby container logs from pod web-1
kubectl logs -p -c ruby web-1
Begin streaming the logs of the ruby container in pod web-1
kubectl logs -f -c ruby web-1
Display only the most recent 20 lines of output in pod nginx
kubectl logs --tail=20 nginx
Show all logs from pod nginx written in the last hour
kubectl logs --since=1h nginx
https://kubernetes-v1-4.github.io/docs/user-guide/kubectl/kubectl_logs/
The "tail" functionality of "kubectl logs" can be used with this convenient GUI frontend: https://retrospective.centeractive.com/blog_retrospective_5_0_0.html
The frontend leverages several functions of "kubectl", for example:
allows you to filter the "tail" output in several ways ("check logs between the given window" from question #1)
allows the visual configuration of a group of Kubernetes pods via labels. The log data of the pods in a group can then be "multi-tailed" in a single view.
Disclosure: I helped in making this frontend.
kubectl logs pod_name --since=2m --timestamps
Is there any way to get hold of the log file of a pod in a Kubernetes cluster?
I know I can fetch logs using the kubectl logs -f $POD_NAME command, but I want to get access to the log file directly.
It depends on the logging driver you're using
I'm assuming you're using the default json-file logging driver here. You can see the node the pod is scheduled on by using kubectl get po -o wide.
Then, log on to that node, and you'll find the docker logs for the container under /var/lib/docker/containers/<long_container_id>/<long_container_id>-json.log.
You will need to use docker ps and docker inspect to determine the long container ID.
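A minimal sketch (my-pod is a placeholder for part of your pod's name):
# Find the container, then ask Docker where its log file lives
docker ps | grep my-pod
docker inspect --format '{{.LogPath}}' <container_id>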
Run kubectl get pod <pod_name> -n <namespace> -o jsonpath='{.spec.nodeName}' to get the node this Pod is running on.
ssh into the node and you'll find the logs for the Pod at /var/log/pods/<namespace>_<pod_name>_<pod_id>/<container_name>/.
The files within the /var/log/pods/<namespace>_<pod_name>_<pod_id>/<container_name>/ directory are symlinks to where your container runtime writes its container log files. So unlike jaxxstorm's answer, it doesn't matter which container runtime you're running.
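Putting it together, a minimal sketch (assuming a pod named nginx in the default namespace; the first command runs wherever kubectl is configured, the second on the node itself):
# Get the pod's UID, then list its log directories on the node
POD_UID=$(kubectl get pod nginx -n default -o jsonpath='{.metadata.uid}')
ls -l /var/log/pods/default_nginx_${POD_UID}/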
I normally retrieve them from /var/log/containers, where you will find the logs of all containers deployed on that particular machine.