Access logs inside k8s pods - kubernetes

We have a k8s cluster. I am trying to access logs from inside a pod, and kubectl does not work there. Where are the logs stored in k8s?
We do not have systemd and found in the docs that:
If systemd is not present, they write to .log files in the /var/log directory. System components inside containers always write to the /var/log directory, bypassing the default logging mechanism.
But I could not find any logs there. So how can I get access, from inside the pod, to the logs that kubectl logs would show?
How does default logging work in k8s without any logging mechanism set up?
PS: I did go through other similar posts and had no luck with those.

If the application does not log to a file, it may log to stdout instead (which kubectl logs <pod name> should also show).
You can try docker logs <name or ID of the container>.
If /var/log is not backed by a volume mounted into the container, its contents are ephemeral and will be lost whenever the pod restarts or moves within the cluster. Check whether the pod has restarted or moved.
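A quick way to check that (a minimal sketch; add -n <namespace> if needed):
kubectl get pod <pod name> -o wide
The RESTARTS and NODE columns show how often the pod has restarted and which node it is currently scheduled on.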
Find if the pod uses any volumes for persistent storage of /var/log by doing:
kubectl get pod <pod name> -o yaml | grep -i volume
kubectl get persistentvolumes
kubectl get persistentvolumeclaims --all-namespaces
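For reference, on the node itself (not inside the pod) the kubelet and container runtime keep the captured stdout/stderr streams under /var/log; a rough sketch of where to look, assuming you can reach the node:
ls /var/log/containers/   # symlinks named <pod>_<namespace>_<container>-<id>.log
ls /var/log/pods/         # one directory per pod, containing per-container log files
These files are what kubectl logs ultimately serves, so they are visible from the node but not from inside your application container unless you mount them.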

If the logs you want are available via kubectl logs, it means they are ultimately written to stdout or stderr. Docker and the kubelet work on top of the standard output streams, processing these logs in their own fashion (e.g. via logging plugins). When your process writes something to stdout, it is not stored anywhere on the local filesystem of the container.
That said, you can configure your app to log to files, but keep in mind that you then need to handle log rotation, cleanup etc., or your container will grow perpetually. If you can't have both in parallel, you do lose the output from the docker/kubernetes logs, which is not so nice. In that case, you can run a sidecar process that reads the log files from a mounted volume and sends them to stdout/stderr.
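As a minimal sketch of that sidecar idea (assuming the app writes to /var/log/app/app.log on an emptyDir volume shared by both containers; the path and names are only examples), the sidecar container just needs to run:
tail -n+1 -F /var/log/app/app.log
Because that tail writes to the sidecar's stdout, kubectl logs <pod name> -c <sidecar name> will then show the file's contents.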
The real question is why you need to access the logs from inside the Pod. Knowing that, maybe there is a better way to achieve what you need (e.g. pipe them through some parser process first).

I understood that the application just logs to a file/directory inside the container.
Could you use
kubectl exec -it <podname> -- bash
or
kubectl exec -it <podname> -- sh
to enter the container/pod and check the logs inside the container?
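If you only need the log files and not an interactive session, a non-interactive exec also works (the path here is just a hypothetical example; use whatever path your application logs to, and note that tail must exist in the image):
kubectl exec <podname> -- tail -n 100 /var/log/app/app.log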

Related

How to see the Kubernetes container service log when the pod keeps restarting

Now my Kubernetes (v1.15.x) deployment keeps restarting all the time. From the log output in the Kubernetes dashboard I could not see anything useful. Now I want to log into the pod and check the logs in my service's log directory. But the pod keeps restarting and I get no chance to log into it.
Is there any way to log into a restarting pod, dump some files, or see the files in the pod? I want to find out why the pod restarts all the time.
If you are running on GKE and logging is enabled, you get all container logs by default in the Stackdriver Logging dashboard.
For now, you can run kubectl describe pod <pod name> to check the exit code of the container that terminated. The exit code can help you understand the reason for the restart, e.g. whether it exited with an error or was OOM killed.
You can also use the --previous flag to get the logs of the pod's previous container instance.
Example :
kubectl logs <POD name> --previous
Note that --previous only works if the pod still exists inside the cluster (i.e. the container restarted rather than the pod being deleted).
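If you want a compact summary of why the containers last terminated, a jsonpath query over the pod status can also help (the field names come from the Pod status API; this is just a sketch):
kubectl get pod <POD name> -o jsonpath='{range .status.containerStatuses[*]}{.name}{": "}{.lastState.terminated.reason}{" exitCode="}{.lastState.terminated.exitCode}{"\n"}{end}'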
@HarshManvar is right, but I would like to provide you with some more options:
Debugging with an ephemeral debug container: Ephemeral containers are useful for interactive troubleshooting when kubectl exec is insufficient because a container has crashed or a container image doesn't include debugging utilities, such as with distroless images.
Debugging via a shell on the node: If none of these approaches work, you can find the host machine that the pod is running on and SSH into that host.
These two methods can be useful when checking logs or exec'ing into the container is not practical.
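For example, with a recent kubectl (ephemeral containers require roughly v1.23+), something along these lines should work; busybox is only a placeholder image:
kubectl debug -it <pod-name> --image=busybox --target=<container-name> -- sh
kubectl debug node/<node-name> -it --image=busybox
The first command attaches an ephemeral debug container to the running pod; the second starts a debugging pod on the given node with the node's filesystem mounted under /host.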

How to login/enter in kubernetes pod

I have Kubernetes pods running, as shown by the command kubectl get all -A, and the same pods are shown by kubectl get pod -A.
I want to enter/login to any of these pod (all are in Running state). How can I do that please let me know the command?
Kubernetes Pods are not Virtual Machines, so not something you typically can "log in" to.
But you might be able to execute a command in a container. e.g. with:
kubectl exec <pod-name> -- <command>
Note that your container needs to contain the binary for <command>, otherwise this will fail.
See also Getting a shell to a container.
In addition to Jonas' answer above:
If you have more than one namespace, you need to specify the namespace your pod is in, e.g. kubectl exec -it -n <namespace> <pod-name> -- /bin/sh
After successfully accessing your pod, you can go ahead and navigate through your container.
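If the pod runs more than one container, you also need to pick one explicitly with -c, e.g. (a sketch with placeholder names):
kubectl exec -it -n <namespace> <pod-name> -c <container-name> -- /bin/sh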

Minikube - /var/log/kubeproxy.log: No such file or directory

I am trying to find the kube-proxy logs on minikube, but they don't seem to be where I expected:
sudo cat: /var/log/kubeproxy.log: No such file or directory
A more generic way (besides what hoque described) that you can use on any kubernetes cluster is to check the logs using kubectl.
kubectl logs kube-proxy-s8lcb -n kube-system
Using this solution allows you to check logs for any K8s cluster, even if you don't have access to your nodes.
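If you don't know the exact pod name, you can look it up by label first (k8s-app=kube-proxy is the label kubeadm and minikube put on the kube-proxy DaemonSet; adjust it if your cluster labels it differently):
kubectl get pods -n kube-system -l k8s-app=kube-proxy
kubectl logs -n kube-system -l k8s-app=kube-proxy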
Pod logs are located in /var/log/pods/.
Run
$ minikube ssh
$ ls /var/log/pods/
default_dapi-test-pod_1566526c-1051-4102-a23b-13b73b1dd904
kube-system_coredns-5d4dd4b4db-7ttnf_59d7b01c-4d7d-40f9-8d6a-ac62b1fa018e
kube-system_coredns-5d4dd4b4db-n8d5t_6aa36b9a-6539-4ef2-b163-c7e713861fa2
kube-system_etcd-minikube_188c8af9ff66b5060895a385b1bb50c2
kube-system_kube-addon-manager-minikube_f7d3bd9bbbbdd48d97a3437e231fff24
kube-system_kube-apiserver-minikube_b15fea5ed20174140af5049ecdd1c59e
kube-system_kube-controller-manager-minikube_d8cdb4170ab1aac172022591866bd7eb
kube-system_kube-proxy-qc4xl_30a6100a-db70-42c1-bbd5-4a818379a004
kube-system_kube-scheduler-minikube_14ff2730e74c595cd255e47190f474fd
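Assuming the usual <namespace>_<pod>_<uid>/<container>/N.log layout under /var/log/pods, you can tail the kube-proxy log directly from within the minikube VM, for example:
sudo tail -f /var/log/pods/kube-system_kube-proxy-*/kube-proxy/*.log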

How to access kube-apiserver on command line?

Looking at the documentation, installing Knative requires a Kubernetes cluster v1.11 or newer with the MutatingAdmissionWebhook admission controller enabled. Checking the documentation for this, I see the following command:
kube-apiserver -h | grep enable-admission-plugins
However, kube-apiserver is running inside a Docker container on the master. Logging in to the master as admin, I don't see it on the command line after the install. What steps do I need to take to run this command? It's probably a basic Docker question, but I don't see it documented anywhere in the Kubernetes documentation.
So what I really need to know is whether this command line is the best way to set these plugins, and also how exactly to enter the container to execute the command.
Where is kube-apiserver located?
Should I enter the container? What is the name of the container, and how do I enter it to execute the command?
I think the answer from @embik that you pointed out in the initial question is quite decent, but I'll try to shed light on some aspects that may be useful for you.
As @embik mentioned in his answer, the kube-apiserver binary actually resides in a particular container within the K8s api-server Pod, so feel free to check it; just execute /bin/sh on that Pod:
kubectl exec -it $(kubectl get pods -n kube-system| grep kube-apiserver|awk '{print $1}') -n kube-system -- /bin/sh
You might be able to propagate the desired enable-admission-plugins through the kube-apiserver command inside this Pod; however, any modification will disappear once the api-server Pod re-spawns, e.g. after a master node reboot.
The essential api-server config is located in /etc/kubernetes/manifests/kube-apiserver.yaml. The node agent kubelet controls the kube-apiserver Pod, and whenever its health checks fail, the kubelet re-creates the affected Pod from that primary kube-apiserver.yaml file.
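On a kubeadm-style control-plane node you can therefore check the currently configured plugins straight from that manifest, for example:
sudo grep -- --enable-admission-plugins /etc/kubernetes/manifests/kube-apiserver.yaml
(The -- stops grep from treating the pattern, which starts with dashes, as an option.)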
This is old, but in case it benefits someone in need: @Nick_Kh's answer is good enough, I just want to extend it.
In case the api-server pod fails to give you the shell access, you may directly execute the command using kubectl exec like this:
kubectl exec -it kube-apiserver-rhino -n kube-system -- kube-apiserver -h | grep enable-admission-plugins
In this case, I wanted to know which admission plugins are enabled by default, and every time I tried accessing the pod's shell (bash, sh, etc.), I ended up with an error like this:
[root@rhino]# kubectl exec -it kube-apiserver-rhino -n kube-system -- /bin/sh
OCI runtime exec failed: exec failed: container_linux.go:367: starting container process caused: exec: "/bin/sh": stat /bin/sh: no such file or directory: unknown
command terminated with exit code 126

Restart container within pod

I have a pod test-1495806908-xn5jn with 2 containers. I'd like to restart one of them called container-test. Is it possible to restart a single container within a pod and how? If not, how do I restart the pod?
The pod was created using a deployment.yaml with:
kubectl create -f deployment.yaml
Is it possible to restart a single container
Not through kubectl, although depending on the setup of your cluster you can "cheat" and docker kill the-sha-goes-here, which will cause kubelet to restart the "failed" container (assuming, of course, the restart policy for the Pod says that is what it should do)
how do I restart the pod
That depends on how the Pod was created, but based on the Pod name you provided, it appears to be under the oversight of a ReplicaSet, so you can just kubectl delete pod test-1495806908-xn5jn and kubernetes will create a new one in its place (the new Pod will have a different name, so do not expect kubectl get pods to return test-1495806908-xn5jn ever again)
There are cases when you want to restart a specific container instead of deleting the pod and letting Kubernetes recreate it.
Doing kubectl exec POD_NAME -c CONTAINER_NAME -- /sbin/killall5 worked for me.
(I changed the command from reboot to /sbin/killall5 based on the recommendations below.)
Both pods and containers are ephemeral. Use the following command to stop the specific container, and the k8s cluster will start a new one:
kubectl exec -it [POD_NAME] -c [CONTAINER_NAME] -- /bin/sh -c "kill 1"
This will send a SIGTERM signal to process 1, which is the main process running in the container. All other processes will be children of process 1, and will be terminated after process 1 exits. See the kill manpage for other signals you can send.
I'm using
kubectl rollout restart deployment [deployment_name]
or
kubectl delete pod [pod_name]
The whole reason for having Kubernetes is so it manages the containers for you, so you don't have to care so much about the lifecycle of the containers in the pod.
Since you have a Deployment setup that uses a ReplicaSet, you can delete the pod using kubectl delete pod test-1495806908-xn5jn and Kubernetes will manage the creation of a new pod with the 2 containers without any downtime. Trying to manually restart single containers in pods negates the whole benefit of Kubernetes.
All the above answers mention deleting the pod... but if you have many pods of the same service, it would be tedious to delete each one of them.
Therefore, I propose the following solution: restart by scaling.
1) Set the scale to zero:
kubectl scale deployment <<name>> --replicas=0 -n service
The above command will terminate all your pods with the name <<name>>
2) To start the pod again, set the replicas to more than 0
kubectl scale deployment <<name>> --replicas=2 -n service
The above command will start your pods again with 2 replicas.
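Optionally, you can wait until the replicas are ready again before moving on, for example with:
kubectl rollout status deployment <<name>> -n service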
We use a pretty convenient command line to force re-deployment of fresh images on an integration pod.
We noticed that our alpine containers all run their "sustaining" command as PID 5. Therefore, sending that process a SIGTERM signal takes the container down. With imagePullPolicy set to Always, the kubelet re-pulls the latest image when it brings the container back.
kubectl exec -i [pod name] -c [container-name] -- kill -15 5
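If you are not sure which PID your "sustaining" command actually has, you can check first (this assumes ps is available in the image, which it is in alpine/busybox):
kubectl exec [pod name] -c [container-name] -- ps -o pid,ppid,comm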
There was an issue in the coredns pod, so I deleted that pod with
kubectl delete pod -n kube-system coredns-fb8b8dccf-8ggcf
The pod will be recreated automatically.
kubectl exec -it POD_NAME -c CONTAINER_NAME -- bash, then kill 1.
This assumes the container is run as root, which is not recommended.
In my case, when I changed the application config, I had to restart the container, which was used in a sidecar pattern; I would kill the PID of the Spring Boot application, which was owned by the docker user.
I realize this question is old and already answered, but I thought I'd chip in with my method.
Whenever I want to do this, I just make a minor change to the pod's container image field, which causes Kubernetes to restart just that container.
If you can't switch between two different but equivalent tags (like :latest / :1.2.3, where latest is actually version 1.2.3), you can always switch it quickly to an invalid tag (I put an X at the end, like :latestX or something) and then re-edit it and remove the X straight away afterwards. This does cause the container to fail to start with an image pull error for a few seconds, though.
So for example:
kubectl edit po my-pod-name
Find the spec.containers[].name you want to kill, then find its image:
apiVersion: v1
kind: Pod
metadata:
  # ...
spec:
  containers:
    - name: main-container
      # ...
    - name: container-to-restart
      image: container/image:tag
      # ...
You would search for your container-to-restart and then update its image to something different, which will force Kubernetes to do a controlled restart for you.
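The same trick can be done non-interactively with kubectl patch; this is only a sketch and assumes container-to-restart is the second entry (index 1) in spec.containers:
kubectl patch pod my-pod-name --type=json -p '[{"op":"replace","path":"/spec/containers/1/image","value":"container/image:tagX"}]'
Patch it back to the real tag afterwards, just as with kubectl edit.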
Killing the process specified in the Dockerfile's CMD / ENTRYPOINT works for me. (The container restarts automatically)
Rebooting was not allowed in my container, so I had to use this workaround.
The correct, but likely less popular, answer is that if you need to restart one container in a pod, then it shouldn't be in the same pod. You can't restart single containers in a pod by design. Just move the container out into its own pod. From the docs:
Pods that run a single container. The "one-container-per-Pod" model is the most common Kubernetes use case; in this case, you can think of a Pod as a wrapper around a single container; Kubernetes manages Pods rather than managing the containers directly.
Note: Grouping multiple co-located and co-managed containers in a single Pod is a relatively advanced use case. You should use this pattern only in specific instances in which your containers are tightly coupled.
https://kubernetes.io/docs/concepts/workloads/pods/
I was playing around with ways to restart a container. What I found for me was this solution:
Dockerfile:
...
ENTRYPOINT [ "/app/bootstrap.sh" ]
/app/bootstrap.sh:
#!/bin/bash
/app/startWhatEverYouActuallyWantToStart.sh &
tail -f /dev/null
Whenever I want to restart the container, I kill the tail -f /dev/null process, which I find with
kill -TERM `ps --ppid 1 | grep tail | grep -v -e grep | awk '{print $1}'`
After that command, all processes except the one with PID 1 will be killed, and the entrypoint (in my case bootstrap.sh) will be executed again.
That's the "restart" part, which is not really a restart but does what you want in the end. To limit the restart to the container named container-test, you could pass the container name into the container in question (since the container name is otherwise not available inside the container) and then decide whether to perform the kill above.
That would be something like this in your deployment.yaml:
env:
  - name: YOUR_CONTAINER_NAME
    value: container-test
/app/startWhatEverYouActuallyWantToStart.sh:
#!/bin/bash
...
CONDITION_TO_RESTART=0
...
if [ "$YOUR_CONTAINER_NAME" == "container-test" -a $CONDITION_TO_RESTART -eq 1 ]; then
kill -TERM `ps --ppid 1 | grep tail | grep -v -e grep | awk '{print $1}'`
fi
Sometimes no one knows which OS the pod is running, and the pod might not have sudo or reboot at all.
The safer option is to take a snapshot and recreate the pod:
kubectl get pod <pod-name> -o yaml > pod-to-be-restarted.yaml;
kubectl delete po <pod-name>;
kubectl create -f pod-to-be-restarted.yaml
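A roughly equivalent one-liner, if you prefer, is to force-replace the pod from its live definition (kubectl replace --force deletes and recreates the object in one step):
kubectl get pod <pod-name> -o yaml | kubectl replace --force -f -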
kubectl delete pods POD_NAME
This command will delete the pod, and another one will be started automatically, as long as the pod is managed by a controller such as a Deployment or ReplicaSet.