Kubernetes with logrotate sidecar mount point issue - kubernetes

I am trying to deploy a test pod with nginx and a logrotate sidecar.
Logrotate sidecar taken from: logrotate
My Pod yaml configuration:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-apache-log
  labels:
    app: nginx-apache-log
spec:
  containers:
  - name: nginx
    image: nginx:latest
    ports:
    - containerPort: 80
    volumeMounts:
    - name: logs
      mountPath: /var/log
  - name: logrotate
    image: path/to/logrtr:sidecar
    volumeMounts:
    - name: logs
      mountPath: /var/log
  volumes:
  - name: logs
    emptyDir: {}
What I'd like to achieve is the logrotate container watching /var/log/*/*.log; however, with the configuration above the nginx container fails to start because there is no /var/log/nginx:
nginx: [alert] could not open error log file: open() "/var/log/nginx/error.log" failed (2: No such file or directory)
2018/10/15 10:22:12 [emerg] 1#1: open() "/var/log/nginx/error.log" failed (2: No such file or directory)
However if I change mountPath for nginx from
mountPath: /var/log
to:
mountPath: /var/log/nginx
then nginx starts and logs to /var/log/nginx/access.log and error.log, but the logrotate sidecar sees the logs directly in /var/log, not under /var/log/nginx/. This wouldn't be a problem with just one nginx container, but I am planning to have more containerized apps, each logging to its own /var/log/appname folder.
Is there any way to fix or work around this? I don't want to run a sidecar for each app.
If I change my pod configuration to:
  - name: nginx
    image: nginx:latest
    ports:
    - containerPort: 80
    volumeMounts:
    - name: logs
      mountPath: /var/log
  initContainers:
  - name: install
    image: busybox
    command:
    - mkdir -p /var/log/nginx
    volumeMounts:
    - name: logs
      mountPath: "/var/log"
then it fails with:
Warning Failed 52s (x4 over 105s) kubelet, k8s-slave1 Error: failed to start container "install": Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \"mkdir -p /var/log/nginx\": stat mkdir -p /var/log/nginx: no such file or directory": unknown

Leave the mount path as /var/log. In your nginx container, execute mkdir /var/log/nginx in a startup script. You might have to tweak directory permissions a bit to make this work.
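The init container approach from the question also works once the command is run through a shell; the error above appeared because the whole string "mkdir -p /var/log/nginx" was treated as a single executable name. A minimal sketch of that variant:
initContainers:
- name: install
  image: busybox
  # wrap in a shell so "mkdir -p" is parsed as a command plus arguments
  command: ["/bin/sh", "-c", "mkdir -p /var/log/nginx"]
  volumeMounts:
  - name: logs
    mountPath: /var/log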

If you are running nginx in Kubernetes, it is probably logging to stdout. When you run kubectl logs <nginx pod> nginx it will show you the access and error logs. These logs are automatically rotated by Kubernetes, so you will not need a logrotate sidecar in this case.
If you are ever running pods that are not logging to stdout, this is a bit of an antipattern in Kubernetes. It is more to your advantage to always log to stdout: Kubernetes can take care of log rotation for you, and it is also easier to see logs with kubectl logs than by running kubectl exec and rummaging around in a running container.
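For reference, the official nginx image already follows this pattern: its Dockerfile symlinks the log files to the container's stdout and stderr, roughly like this (approximate excerpt, not the exact file):
# approximate excerpt from the official nginx image's Dockerfile
RUN ln -sf /dev/stdout /var/log/nginx/access.log \
    && ln -sf /dev/stderr /var/log/nginx/error.log
Mounting an emptyDir over /var/log hides that directory and its symlinks, which is also why the original pod could not open /var/log/nginx/error.log.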

Related

kubectl copy logs from pod when terminating

We are trying to get the logs of pods after multiple restarts, but we don't want to use any external solution like EFK.
I tried the config below but it's not working. Does the command below run at the pod level or at the node level?
lifecycle:
  preStop:
    exec:
      command: ["/bin/sh", "-c", "kubectl logs appworks-0 > /container-stoped.txt"]
I tried the config below but it's not working. Does the command below run at the pod level or at the node level?
It will run at the pod level, not at the node level.
You can use a hostPath volume in the pod configuration:
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: alpine
    name: test-container
    command: ["tail"]
    args: ["-f", "/dev/null"]
    volumeMounts:
    - mountPath: /host
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /
      type: Directory
hostPath creates a directory at the node level and saves the logs there. If you don't want this solution, you can still add your lifecycle-hook approach; however, when you can write the app logs directly to the host, there is no need for the extra lifecycle hook.
Note: if your node goes down, you will lose any logs stored in hostPath or emptyDir volumes.
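If you still prefer a hook on top of the hostPath mount above, keep in mind that the preStop command runs with whatever tools exist inside the container image, and kubectl is usually not one of them. A rough sketch, assuming (purely for illustration) that the app writes its own log to /var/log/app.log and that the hostPath volume is mounted at /host:
lifecycle:
  preStop:
    exec:
      # copy the app's own log file out to the hostPath mount before shutdown;
      # /var/log/app.log and the /host mount path are illustrative assumptions
      command: ["/bin/sh", "-c", "cp /var/log/app.log /host/$(hostname)-stopped.log"]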

create an empty file inside a volume in Kubernetes pod

I have a legacy app which keeps checking an empty file inside a directory and performs a certain action if the file's timestamp changes.
I am migrating this app to Kubernetes, so I want to create an empty file inside the pod. I tried subPath as below, but it doesn't create any file.
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
  - name: demo
    image: alpine
    command: ["sleep", "3600"]
    volumeMounts:
    - name: volume-name
      mountPath: '/volume-name-path'
      subPath: emptyFile
  volumes:
  - name: volume-name
    emptyDir: {}
kubectl describe pod shows:
Containers:
  demo:
    Container ID:   containerd://0b824265e96d75c5f77918326195d6029e22d17478ac54329deb47866bf8192d
    Image:          alpine
    Image ID:       docker.io/library/alpine@sha256:08d6ca16c60fe7490c03d10dc339d9fd8ea67c6466dea8d558526b1330a85930
    Port:           <none>
    Host Port:      <none>
    Command:
      sleep
      3600
    State:          Running
      Started:      Wed, 10 Feb 2021 12:23:43 -0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-4gp4x (ro)
      /volume-name-path from volume-name (rw,path="emptyFile")
ls on the volume also shows nothing.
k8 exec -it demo-pod -c demo ls /volume-name-path
Any suggestions?
PS: I don't want to use a ConfigMap; I simply want to create an empty file.
If the objective is to create an empty file when the Pod starts, then the easiest way is to use either the entrypoint of the Docker image or an init container.
With an initContainer, you could go with something like the following (or with a more complex init image that you build yourself to run a whole bash script or something similar):
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  initContainers:
  - name: create-empty-file
    image: alpine
    command: ["touch", "/path/to/the/directory/empty_file"]
    volumeMounts:
    - name: volume-name
      mountPath: /path/to/the/directory
  containers:
  - name: demo
    image: alpine
    command: ["sleep", "3600"]
    volumeMounts:
    - name: volume-name
      mountPath: /path/to/the/directory
  volumes:
  - name: volume-name
    emptyDir: {}
Basically the init container gets executed first, runs its command and, if successful, terminates before the main container starts running. The two containers share the same volumes (and can mount them at different paths), so in the example the init container mounts the emptyDir volume, creates an empty file, and then completes. When the main container starts, the file is already there.
Regarding your legacy application which is being ported to Kubernetes:
If you have control of the Dockerfile, you could simply change it to create an empty file at the path where you expect it, so that the file is already there, empty, from the beginning when the app starts; just as you add the application to the container image, you can add other files as well.
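A minimal sketch of that Dockerfile change, reusing the path and alpine base from your manifest (adjust to your real image):
# illustrative Dockerfile: bake the empty marker file into the image
FROM alpine
RUN mkdir -p /volume-name-path && touch /volume-name-path/emptyFile
Note this only helps if the directory is not later shadowed by an emptyDir or other volume mounted on top of it.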
For more info on init containers, please check the documentation: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
I think you may be interested in Container Lifecycle Hooks.
In this case, the PostStart hook may help create an empty file as soon as the container is started:
This hook is executed immediately after a container is created.
In the example below, I will show you how you can use the PostStart hook to create an empty file named file-test.
First I created a simple manifest file:
# demo-pod.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: demo-pod
  name: demo-pod
spec:
  containers:
  - image: alpine
    name: demo-pod
    command: ["sleep", "3600"]
    lifecycle:
      postStart:
        exec:
          command: ["touch", "/mnt/file-test"]
After creating the Pod, we can check whether the demo-pod container contains an empty file-test file:
$ kubectl apply -f demo-pod.yml
pod/demo-pod created
$ kubectl exec -it demo-pod -- sh
/ # ls -l /mnt/file-test
-rw-r--r-- 1 root root 0 Feb 11 09:08 /mnt/file-test
/ # cat /mnt/file-test
/ #

container fails after executing command: [wget url]

I'm trying to download a gzip file from a remote location. After the download completes, the container status changes to Completed and then CrashLoopBackOff; I've checked kubectl logs my-service, and kubectl describe pod my-service shows CrashLoopBackOff, restarting the failed container.
I want this wget command to run during container initialization so I can gunzip the archive and access the files in a mounted volume, but it fails at container initialization:
containers:
- name: my-service
  image: docker.source.co.za/azp/my-service:1.0.0-SNAPSHOT
  imagePullPolicy: Always
  command:
  - wget
  - http://www.source.co.za/download/attachments/627674073/refpolicies.tar.gz
  volumeMounts:
  - name: my-service
    mountPath: /test/
volumes:
- name: my-service
  emptyDir: {}
The container exits as soon as the command finishes, and Kubernetes then restarts it because it expects the container to keep running; that is what produces the CrashLoopBackOff.
You can configure it as below to achieve the same:
command: ["/bin/sh","-c"]
args: ["wget url && sleep infinity"]
sleep infinity makes the container run forever doing nothing.
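Applied to the manifest from the question, that could look roughly like this; the output path inside the mounted volume (/test/refpolicies.tar.gz) is an assumption:
containers:
- name: my-service
  image: docker.source.co.za/azp/my-service:1.0.0-SNAPSHOT
  imagePullPolicy: Always
  command: ["/bin/sh", "-c"]
  # download into the mounted volume, then keep the container alive
  args: ["wget -O /test/refpolicies.tar.gz http://www.source.co.za/download/attachments/627674073/refpolicies.tar.gz && sleep infinity"]
  volumeMounts:
  - name: my-service
    mountPath: /test/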

How can I save k8s pod logs to host disk

I'm stuck on Kubernetes log storage. We have logs that can't be output to stdout but have to be saved to a directory. We want to save them to a GlusterFS shared directory like /data/logs/./xxx.log. Our apps are written in Java; how can we do that?
This is mostly up to your CRI plugin, usually via Docker command-line options. Container logs are already written to local disk by default; you just need to mount your volume at the right place (probably /var/log/containers or similar; look at your Docker config).
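If the goal is specifically to land the application's own log directory on the GlusterFS share, one sketch is to mount the share directly at the directory the Java app writes to; the endpoints name and volume path below are assumptions, not values from your cluster:
containers:
- name: java-app
  image: <your-java-app-image>
  volumeMounts:
  - name: shared-logs
    mountPath: /data/logs
volumes:
- name: shared-logs
  glusterfs:
    endpoints: glusterfs-cluster   # assumed Endpoints object for the Gluster peers
    path: logs-volume              # assumed GlusterFS volume name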
I had the same problem with a 3rd-party application. It was writing logs to a log file, and I wanted Fluentd to be able to pick them up, so I needed some way to print them to stdout.
I found a workaround with one additional container running alongside the app container in the same pod.
Let's say the 3rd-party app is writing logs to the following file:
/some/folders/logs/app_log_file.log
The following pod runs two containers, one with the app and the other with the busybox image, which we will use to fetch the logs from the app container.
apiVersion: v1
kind: Pod
metadata:
  name: application-pod
spec:
  containers:
  - name: app-container
    image: <path-to-app-image>
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: log-volume
      mountPath: /some/folders/logs
  - name: log-fetcher-container
    image: busybox
    args: [/bin/sh, -c, 'sleep 60 && tail -n+1 -f /var/log/app_log_file.log']
    volumeMounts:
    - name: log-volume
      mountPath: /var/log
  volumes:
  - name: log-volume
    emptyDir: {}
As you can see, this manifest creates an emptyDir volume and mounts it at /some/folders/logs in the app container and at /var/log in the log-fetcher container. Every file the app container writes to /some/folders/logs is therefore also visible under /var/log in the log-fetcher container.
That's why the busybox image is running a shell command:
sleep 60 && tail -n+1 -f /var/log/app_log_file.log
First we wait 60 seconds so the app container has time to start up and create the log file; then the tail command prints every new line in the log file to the stdout of the log-fetcher container.
Fluentd can now pick up the contents of the app container's log file by collecting the stdout logs of the log-fetcher container.

Why does the path not get mounted?

I've created a manifest file that looks as follows:
apiVersion: v1
kind: Pod
metadata:
  name: kuard
spec:
  volumes:
  - name: "kuard-data"
    hostPath:
      path: "/home/developer/kubernetes/exercises"
  containers:
  - image: gcr.io/kuar-demo/kuard-amd64:1
    name: kuard
    volumeMounts:
    - mountPath: "/data"
      name: "kuard-data"
    ports:
    - containerPort: 8080
      name: http
      protocol: TCP
As you can see, the hostpath is:
path: "/home/developer/kubernetes/exercises"
and the mountPath is:
mountPath: "/data"
I've created a hello.txt file in the folder /home/developer/kubernetes/exercises, and when I enter the pod via kubectl exec -it kuard ash I cannot find the file hello.txt.
Where is the file?
kind uses Docker containers to simulate Kubernetes nodes, so when you create files on your host (your Ubuntu machine), the containers will not automatically have access to them.
(This gets even more complicated on macOS or Windows, where Docker itself runs in a separate virtual machine.)
I assume there are some shared folders visible inside the kind Docker nodes, but I could not find that documented.
You can inspect the filesystem content of the Docker node from inside the container using docker exec -it kind-control-plane /bin/sh and then work with the usual tools.
If you need to make content from your development machine available you might want to have a look at ksync: https://github.com/vapor-ware/ksync
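If you need the host directory to be visible inside the kind node itself (so that a hostPath volume can reach it), kind also supports declaring extra mounts in its cluster configuration; a sketch, assuming the same path as in your manifest (the cluster has to be recreated with this config):
# kind-config.yaml: mount the host directory into the kind node container
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraMounts:
  - hostPath: /home/developer/kubernetes/exercises
    containerPath: /home/developer/kubernetes/exercises
After kind create cluster --config kind-config.yaml, the pod's hostPath volume resolves against the node container and hello.txt shows up under /data.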