How to send logs from a Kubernetes pod to the host PC - kubernetes

I use k9s to open a bash shell in the pod where my project keeps its logs.
Reading the logs with cat is annoying, so I want to send them to my PC.
How can I do this?

You can use the kubectl cp command:
kubectl cp default/<some-pod>:/logs/app.log app.log
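If the pod lives in another namespace or runs several containers, you can spell those out as well; a minimal sketch where the namespace, pod, container, and path are placeholders for your own:
# copy the log file from a specific container of the pod to the current directory
kubectl cp <namespace>/<some-pod>:/logs/app.log ./app.log -c <container-name>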

Related

How to bind container's application file log with Kubernetes logs command?

I have developed a web microservice in Golang. I use the zap logger to write the application log to a file at /var/log/myapp/myapp.log.
I want to see the log information from myapp.log through the command below:
# kubectl logs myappPod
But it does not work, because by default the kubectl logs command only shows what the container writes to STDOUT and STDERR.
So my question is: what exactly am I supposed to do to see the /var/log/myapp/myapp.log log through the kubectl logs command?
Thanks,
Rohit
The kubectl logs command just shows you the logs from the container(s) you specify, and a container's logs are whatever it writes to STDOUT and STDERR.
The recommended way is to set up your logging library to write logs to STDOUT. As a workaround, you can create a symlink from /var/log/myapp/myapp.log to /dev/stdout in your Docker container.
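A minimal sketch of that symlink workaround, assuming the log path from the question; the command could go in your Dockerfile as a RUN step or in an entrypoint script, before the application starts writing:
# redirect the file log to the container's stdout
mkdir -p /var/log/myapp && ln -sf /dev/stdout /var/log/myapp/myapp.log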
Another option is not to use kubectl logs at all: you could copy the log file from your pod with kubectl cp <some-namespace>/<some-pod>:/tmp/foo /tmp/bar.

How do I know which pod a container is running in?

Sorry for the newbie question. I am trying to deploy an image into k3d (a dockerized version of k3s).
k3d image import -c my-cluster registry.gitlab.com/aaa/bbb/ccc/hello123
Now I can see the image on a node:
kubectl get node my-node -o json | grep hello123
However, the documentation doesn't say much about what "import" does. Is my image running? Is it allocated to a pod yet? Where can I find its logs?
If I knew what pod it's running in, I could do kubectl logs. The list of the cluster's pods doesn't show anything relevant.
I am beginning to think my image isn't running yet.
Edit: This is further confirmed by
kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.containers[*].image}" |\
tr -s '[[:space:]]' '\n' |\
sort |\
uniq -c
showing nothing relevant.
What's the next step?
You have just pulled the image into the cluster's registry; it has not yet been assigned to a pod.
Once you create a pod with the same image and image tag, it will be pulled from that local registry.
If you can ssh to the k8s node (kubectl get nodes -o wide and ssh user@nodeip), you can run Docker commands such as:
docker images
You can expect to see the image that you pulled in the list.
If none of the pods are running, docker ps will return an empty list.
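To actually get the image running and see its logs, you still need to create a workload for it; a minimal sketch, where the deployment name is only an illustrative choice:
# create a deployment from the imported image, then find its pod and read its logs
kubectl create deployment hello123 --image=registry.gitlab.com/aaa/bbb/ccc/hello123
kubectl get pods -l app=hello123
kubectl logs deployment/hello123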

kubectl cp from a completed pod to local computer

I would like to use kubectl cp to copy a file from a completed pod to my local host (local computer). I used kubectl cp <namespace>/<pod>:<remote-path> <local-path>, however, it gave me the error: cannot exec into a container in a completed pod; current phase is Succeeded. Is there a way I can copy a file from a completed pod? It does not need to be kubectl cp. Any help appreciated!
Nope. If the pod is gone, it's gone for good. The only possibility would be if the data is stored in a PV or some other external resource. Pods are cattle, not pets.
You can still find the files, because the containers of a pod in the Completed state are not deleted; they are just not running.
I am not aware of any way to do it via Kubernetes itself, but here is how to do it if your container runtime is Docker:
$ ssh <node where the pod is>
$ docker ps -a | grep <pod name>
$ docker cp <pod name>:/your/files ./
The files in containers are just overlayfs mounts; if the container still exists, the files still exist.
So if you are using the containerd runtime or something else, look under /var/lib/containers or a similar directory (I don't know where each runtime keeps its overlayfs mounts, but they have to be somewhere on the node; you can check by running $ mount).
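If you first need to find out which node the completed pod ran on, the pod's status still records it; a quick check (the pod name is a placeholder):
$ kubectl get pod <pod-name> -o wide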

running a linux command against a pid inside k8 pod

Is it possible to run a Linux command against a process which is running inside a Kubernetes pod? Example: I want to grab heap dumps of a Java process running inside a k8s pod. The pod comes with a minimal installation and does not have much disk space either, so I want to run the jmap command from my local machine (pointing to the k8s cluster). Thanks.
As I have already mentioned in the comments, what you can use is the kubectl exec command:
Execute a command in a container.
Usage:
$ kubectl exec (POD | TYPE/NAME) [-c CONTAINER] [flags] -- COMMAND [args...]
The kubectl exec command is a tool that allows you to inspect and debug your applications, by executing commands inside your containers.
If you need more details and examples regarding how to use it, I recommend these two guides:
Get a Shell to a Running Container: This page shows how to use kubectl exec to get a shell to a running container.
How does kubectl exec work?
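For example, a minimal sketch of opening an interactive shell in the pod (the pod name is a placeholder, and the image must actually contain bash):
$ kubectl exec -it <POD_NAME> -- /bin/bash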
kubectl exec did it. It allows you to run any command inside the container. For example:
kubectl exec <POD_NAME> -- jmap -dump:live,format=b,file=heapdump.bin <pid>
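Since the dump file is written inside the container, you still need to pull it back to your machine afterwards; a minimal sketch, assuming the dump is written to /tmp in the container (the names are placeholders):
# dump the heap inside the pod, then copy it to the local machine
kubectl exec <POD_NAME> -- jmap -dump:live,format=b,file=/tmp/heapdump.bin <pid>
kubectl cp <namespace>/<POD_NAME>:/tmp/heapdump.bin ./heapdump.bin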

How to copy a file from host to Kubernetes container?

I want to copy a file from my Ubuntu machine to the kube-controller-manager-ubuntu container. Currently I do that like this, but I think there is a more straightforward solution in Kubernetes.
Does anyone know how to copy a file to a Kubernetes container?
It is similar to docker cp:
kubectl cp /tmp/foo_dir <some-pod>:/tmp/bar_dir
Please refer here for examples and documentation
https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#cp
If the pod is in a specific namespace, use:
kubectl cp ./file.csv <POD_NAME>:/path/to/copy -n <namespace>
e.g.
kubectl cp ./file.csv b81dd0b1745c:/usr/cloud_ms/ -n cloud
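If the pod runs more than one container, you can also target a specific one; a hedged sketch with placeholder names:
# copy into a particular container of a multi-container pod
kubectl cp ./file.csv <POD_NAME>:/path/to/copy -c <container-name> -n <namespace>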