How to disable Kubernetes logging to a disk file? - kubernetes

I have tried
kubectl create -f x.yaml --logtostderr=true
but it didn't work.

The Kubernetes API doesn't currently expose a way to change the logging behavior. It'll rotate the log files as appropriate to avoid filling up the disk, but if you need more control, you'll have to modify the docker daemon on each node to change its logging driver.
Or if you want to do it for a specific application, change the command in your x.yaml file that you're using to start the app to redirect stdout and stderr to /dev/null inside the container.
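A minimal sketch of that redirect in a pod spec (the pod name, image, and binary path are illustrative, not from the original x.yaml):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp               # illustrative name
spec:
  containers:
  - name: myapp
    image: myapp:latest     # illustrative image
    command: ["/bin/sh", "-c"]
    # Redirect both stdout and stderr to /dev/null so nothing
    # reaches the container runtime's log files on disk.
    args: ["exec /usr/local/bin/myapp > /dev/null 2>&1"]
```

Note that this also makes kubectl logs useless for that container, since the runtime captures logs from the same stdout/stderr streams.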

Related

Kubernetes get log within container

Background: I use glog, which registers a signal handler, but the handler cannot kill the init process (PID 1) with a kill syscall. As a result, even when a fatal signal such as SIGABRT is raised, the Kubernetes controller manager can't tell that the pod is no longer functioning, and so won't kill the pod and start a new one.
My idea is to add logic to my readiness/liveness probe: check the log content of the current container to determine whether it's in a healthy state.
I'm trying to look into the logs on the container's local filesystem /var/log, but haven't found anything useful.
I'm wondering if it's possible to issue a HTTP request to somewhere, to get the complete log? I assume it's stored somewhere.
You can find the kubernetes logs on Master machine at:
/var/log/pods
if using docker containers:
/var/lib/docker/containers
Containers are Ephemeral
Docker containers emit logs to the stdout and stderr output streams. Because containers are stateless, the logs are stored on the Docker host in JSON files by default.
The default logging driver is json-file. The logs are then annotated with the log origin, either stdout or stderr, and a timestamp. Each log file contains information about only one container.
As @Uri Loya said, you can find these JSON log files in the /var/lib/docker/containers/ directory on a Linux Docker host. Here's how you can access them:
/var/lib/docker/containers/<container id>/<container id>-json.log
You can collect the logs with a log aggregator and store them in a place where they'll be available forever. It's dangerous to keep logs on the Docker host because they can build up over time and eat into your disk space. That's why you should use a central location for your logs and enable log rotation for your Docker containers.
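One common way to enable that rotation with the default json-file driver is via the Docker daemon configuration; a sketch (the size and file-count values are illustrative):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```

This would typically go in /etc/docker/daemon.json on each node, followed by a restart of the Docker daemon; it only affects containers created after the change.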

Where is the log file for `kubectl logs pod/yourpod`

When I type kubectl logs pod/yourpod to get my pod's logs, behind the scenes, k8s must read the log from somewhere in my pod.
What's the default path to the log generated by my container?
How to change the path?
Inside my container, my process uses sigs.k8s.io/controller-runtime/pkg/log to generate logs.
It is the console output (stdout / stderr) that is captured by the container runtime and made available by the kubelet running on the node to the API server.
So there is no real log file, though the container runtime usually has a means of buffering the logs to the file system.
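On a typical node that buffering looks roughly like the layout below (this follows the common kubelet/Docker convention; exact names vary by runtime and version):

```shell
# /var/log/containers/<pod>_<namespace>_<container>-<id>.log
#   is a symlink into
# /var/log/pods/<namespace>_<pod>_<uid>/<container>/0.log
#   which, with the Docker runtime, in turn points at
# /var/lib/docker/containers/<id>/<id>-json.log

# Inspect the chain on a node:
ls -l /var/log/containers/
```

kubectl logs ultimately reads from these files via the kubelet, which is why the output survives as long as the files do and no longer.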

Where does K8s store the logs that it prints when running kubectl logs -f

I am pretty sure it writes them to disk somewhere. Otherwise, if the container runs for several hours and logs a lot, it would exceed what stderr can hold, I think. No?
Is it possible to compress and download the logs from kubectl logs? i.e. compress them on the container without downloading them first?
First, take a look at the official Kubernetes logging documentation.
In most cases, Docker container logs are placed in the /var/log/containers directory on the host they are deployed on. Docker supports multiple logging drivers, but the Kubernetes API does not support driver configuration.
Once a container terminates or restarts, kubelet keeps its logs on the node. To prevent these files from consuming all of the host’s storage, a log rotation mechanism should be set on the node.
You can use kubectl logs with the --previous flag to retrieve logs from a previous instantiation of a container, in case the container has crashed.
If you want to look at additional logs: for example, on Linux, journald logs can be retrieved using the journalctl command:
$ journalctl -u docker
You can implement cluster-level logging and expose or push logs directly from every application but the implementation for such a logging mechanism is not in the scope of Kubernetes.
Also there are many tools offered for Kubernetes for logging management and aggregation - see: logs-tools.
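On the compression question above, a sketch of two options (the pod name and log path are illustrative; the second option only works if the logs also exist on the container's filesystem and the image has a shell and tar):

```shell
# Option 1: stream the logs and compress on the client side.
kubectl logs yourpod | gzip > yourpod.log.gz

# Option 2: compress inside the container first, then copy out.
kubectl exec yourpod -- tar czf /tmp/app-logs.tar.gz /var/log/myapp
kubectl cp yourpod:/tmp/app-logs.tar.gz ./app-logs.tar.gz
```

Option 1 still transfers the uncompressed stream from the node; true on-node compression requires access to the container or node filesystem as in option 2.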

Changing run parameter for cockroachDB in kubernetes GKE

I have a running GKE cluster with cockroachDB active. It's been running for quite a while and I don't want to reinitialize it from scratch - it uses the (almost) standard cockroachDB supplied yaml file to start. I need to change a switch in the exec line to modify the logging level -- currently it's set to the below (but that is logging all informational messages as well as errors):
exec /cockroach/cockroach start --logtostderr --insecure --advertise-host $(hostname -f) --http-host 0.0.0.0 --join cockroachdb-0.cockroachdb,cockroachdb-1.cockroachdb,cockroachdb-2.cockroachdb --cache 25% --max-sql-memory 25%
How do I do this without completely stopping the DB?
Kubernetes allows you to update StatefulSets in a rolling manner, such that only one pod is brought down at a time.
The simplest way to make changes is to run kubectl edit statefulset cockroachdb. This will open up a text editor in which you can make the desired change to the command, then save and exit. After that, Kubernetes should handle replacing the pods one-by-one with new pods that use the new command.
For more information:
https://www.cockroachlabs.com/docs/stable/orchestrate-cockroachdb-with-kubernetes.html#step-10-upgrade-the-cluster
https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#updating-statefulsets
https://kubernetes.io/docs/concepts/cluster-administration/manage-deployment/#in-place-updates-of-resources
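If you prefer a non-interactive change over kubectl edit, kubectl patch can rewrite the container command in one step; a sketch (the container index and the shortened command are illustrative -- you would paste your full exec line):

```shell
kubectl patch statefulset cockroachdb --type=json -p '[
  {"op": "replace",
   "path": "/spec/template/spec/containers/0/command",
   "value": ["/bin/sh", "-c",
     "exec /cockroach/cockroach start --logtostderr=ERROR --insecure ..."]}
]'
```

As with kubectl edit, the StatefulSet controller then rolls the pods one at a time with the new command.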

Is there any way I can edit file in the container and restart it?

Is there any way I can exec into the container, then edit some code (e.g. add some logging, edit a configuration file, etc.) and restart the container to see what happens?
I tried to search for this but found nothing helpful.
The point is, I want to do a quick debug, not to do a full cluster deployment.
Some programs (e.g. nginx) support configuration reload without restarting their process; with these you can just kubectl exec in, change the config, and send a signal to the master process (e.g. kubectl exec <nginx_pod> -- kill -HUP 1). It is a feature of the software, though, so many programs will not take that into account.
Containers are immutable by design, so they restart with a clean state each time. That said, with no simple way of doing this, there are hackish ways to achieve it.
One I can think of involves modifying the image on the node, which will then be used when the container restarts. If you can ssh into the node and access docker directly, you can edit a file inside the container and commit these changes with docker commit under the same tag. At that point the local image with that tag has your changes baked in, so if the container restarts (not gets rescheduled, as it could start on a different node), it will come up with your changes (assuming you do not use pullPolicy: Always).
Again, not the way it's meant to be used, but achievable.
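A sketch of that docker-commit approach on the node (the container name, image tag, and edited file are all illustrative):

```shell
# On the node: locate the running container.
docker ps | grep myapp

# Edit the file directly inside the running container.
docker exec -it <container-id> vi /app/config.yaml

# Bake the edited filesystem into the local image under the same tag.
docker commit <container-id> myapp:latest

# When the container restarts on this same node (and imagePullPolicy
# is not Always), it starts from the committed image with your edits.
```

This is strictly a debugging hack: the change lives only on that one node and is lost as soon as the image is re-pulled or the pod is rescheduled.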
Any changes to the local container file system will be lost if you restart the pod. You would need to work out whether the application stack you are using can perform an internal restart without actually exiting.
What language/application stack are you using?
You should at least consider a hostPath volume, in order to share local files on your host with your Kubernetes instance, so that you can do that kind of test.
After that, it is up to the application running within your pod to detect the file change and restart if needed (i.e., this is not specific to Kubernetes at all).
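A minimal hostPath sketch (names and paths are illustrative; hostPath ties the pod to a specific node, so this is really only appropriate for local debugging):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: debug-pod            # illustrative
spec:
  containers:
  - name: app
    image: myapp:latest      # illustrative
    volumeMounts:
    - name: src
      mountPath: /app/config # path the app reads from
  volumes:
  - name: src
    hostPath:
      path: /home/me/config  # directory on the node
      type: Directory
```

Edits made to /home/me/config on the node are then immediately visible inside the container at /app/config.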
You could put any configuration in a ConfigMap and then just apply that, obviously assuming whatever reads the ConfigMap would re-read it.
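A sketch of the ConfigMap route (names and the config key are illustrative). Note that a ConfigMap mounted as a volume is eventually refreshed in the pod after you apply a change, whereas values injected as environment variables are not updated until the pod restarts:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config           # illustrative
data:
  app.conf: |
    log_level = debug
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: myapp:latest      # illustrative
    volumeMounts:
    - name: config
      mountPath: /etc/app    # app.conf appears here
  volumes:
  - name: config
    configMap:
      name: app-config
```

After editing the ConfigMap and running kubectl apply, the mounted file is updated in place; whether the application notices is up to the application.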
I faced the same issue in my container as well.
I did the steps below in my Kubernetes container, and it worked.
Logged into the pod, e.g.:
kubectl exec --stdin --tty nginx-6799fc88d8-mlvqx -- /bin/bash
Once logged in to the application pod, I ran the commands below:
# apt-get update
# apt-get install vim
Now I am able to use the vim editor in the Kubernetes container. (Keep in mind anything installed this way is lost when the container restarts.)