I have a running k8s cluster, and we integrated the k8s dashboard to view the logs; I am able to log in and view the app logs.
One thing to note here is that our application logs have the current date stamp appended to their names, for example: application-20221101.log
I tried sorting the logs in the log location with the command below, and it displays the latest log file inside the pod:
tail -f `/bin/ls -1td /log-location/application*.log| /usr/bin/head -n1`
but once I add this to the container startup script,
it only ever displays that day's logs: after the date changes, i.e. it becomes 20221102, it still displays only the previous day's file,
i.e. application-20221101.log.
I need it to keep displaying the latest logs for the current date even after the date changes.
The easiest approach would be to just remove the date stamp from the log file names, but that is not possible for our application.
Is there any simple way to configure this, or would some workaround be required to set it up?
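One possible workaround (a sketch only, assuming a POSIX shell and coreutils inside the container) is a small wrapper loop that re-resolves the newest log file and restarts tail whenever a newer file appears, e.g. after the date rolls over:
current=""
tail_pid=""
while true; do
  latest=$(/bin/ls -1td /log-location/application*.log | /usr/bin/head -n1)
  if [ "$latest" != "$current" ]; then
    # a newer log file exists, so stop following the old one
    [ -n "$tail_pid" ] && kill "$tail_pid" 2>/dev/null
    tail -f "$latest" &
    tail_pid=$!
    current="$latest"
  fi
  sleep 60   # check once a minute for a newer file
done
Used as the container's startup command, the loop itself stays in the foreground, so the container keeps running.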
Related
I have a pod named 'sample_pod' and a container named 'sample_container' running inside the pod. sample_container's entry point is a Python bin file (sample.py). Inside this container, I have CRL certificates that get refreshed every hour, and sample.py does not pick up the refreshed certificates without being reloaded.
I need to reload that container every hour without killing/restarting it. This is similar to the systemd reload option. Is there a specific reload command that I can run/schedule every hour inside sample_container?
If so, how can I schedule that command to run inside the container every hour? Or is there a Kubernetes-native approach to achieve this?
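Assuming sample.py could be made to re-read its certificates on a signal (an assumption; the question suggests it currently cannot), the hourly-reload part could be as small as a background loop inside sample_container, sketched below:
# Assumes sample.py runs as PID 1 in sample_container and re-reads its CRL
# certificates on SIGHUP; if it does not handle the signal, this has no effect.
while true; do
  sleep 3600     # wait one hour
  kill -HUP 1    # ask the main process to reload, similar in spirit to "systemctl reload"
done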
For your use case, just do not use containers; use a classical server with a cron task instead. (cf. my comment under your question)
I'm trying to figure out how to show a timestamp for each line of STDOUT of an Argo Workflows pod. The init and wait containers show a timestamp by default, but the main container never does.
The Argo CLI has a --timestamp flag when viewing logs.
Also the argo-java-client has a logOptionsTimestamps property that also enables timestamps.
However I cannot find a similar option when defining a Workflow in YAML. I've gone through the field reference guide but haven't been able to find something to enable timestamps in the main container.
Does anyone know if this is possible, or how to enable them?
The reason init and wait log statements have timestamps is that the Argo executable's logger writes timestamps by default.
The --timestamps option does not cause the containers themselves to log timestamps. It just decorates each log line with a timestamp (kubectl has a similar option).
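For example, when reading the pod's logs directly, kubectl can prepend the timestamps itself (the user container is conventionally named main in Argo):
kubectl logs <pod-name> -c main --timestamps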
As far as I know, there's no way to declaratively cause code running in the main container to log timestamps. You'd have to modify the code itself to use a logger which inserts timestamps.
We are using OpenShift Container Platform (v3.11) to host our Java application. We write application logs to the standard pod console. However, when I try to view the pod logs or save them to a file, I do not get the complete log, only a partial one (it looks like the logs are truncated). I have tried different options while viewing the logs (like --since=48h etc.), but none of them worked.
Is there any way I can increase the pod console buffer size or write the complete log contents to a file?
The better way is to configure log aggregation via fluentd/elastic (see elk_logging); however, there is an option to change the docker log driver settings on the node with the running container (see managing_docker_container_logs or docker_logging_configure).
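As an illustration of the second option, on OpenShift 3.x nodes the docker log driver can be capped and rotated via the OPTIONS line of the node's docker configuration; the values below are only examples, and the docker service needs a restart afterwards:
# /etc/sysconfig/docker (node-level change; restart docker afterwards)
OPTIONS='--log-driver=json-file --log-opt max-size=50m --log-opt max-file=5'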
I created an ELK VM using the Bitnami template in Azure and can send events, but when I go to the Discover tab, it only shows the last event.
What filters are you using? You might be filtering to show just the last event.
Can you confirm that the events are created even though they are not shown in the Discover tab of Kibana? You can check the logs shown in Logstash for that by browsing to http://YOUR-SERVER-IP/logstash/
I am creating an app in Origin 3.1 using my Docker image.
Whenever I create the image, a new pod gets created, but it restarts again and again and finally ends up with the status "CrashLoopBackOff".
I analysed the logs for the pod, but they show no error; all log data is as expected for a successfully running app. Hence, I am not able to determine the cause.
I came across the link below today, which says "running an application inside of a container as root still has risks, OpenShift doesn't allow you to do that by default and will instead run as an arbitrary assigned user ID."
What is CrashLoopBackOff status for openshift pods?
My image runs as the root user only; what should I do to make this work, given that the logs show no error but the pod keeps restarting?
Could anyone please help me with this?
You are seeing this because whatever process your image starts isn't a long-running process: it finds no TTY, so the container just exits and gets restarted repeatedly, which is a "crash loop" as far as OpenShift is concerned.
Your Dockerfile contains the following:
ENTRYPOINT ["container-entrypoint"]
You need to check what this "container-entrypoint" is actually doing.
Did you use the -p or --previous flag to oc logs to see if the logs from the previous attempt to start the pod show anything?
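For example, with your pod's name substituted:
oc logs <podname> --previous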
Red Hat's recommendation is to make files group-owned by GID 0, since the user in the container is always in the root group. You won't be able to chown, but you can selectively expose which files to write to.
A second option:
In order to allow images that use either named users or the root (0) user to build in OpenShift, you can add the project’s builder service account (system:serviceaccount:<project>:builder) to the privileged security context constraint (SCC). Alternatively, you can allow all images to run as any user.
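The corresponding commands would look roughly like the following (the project name is a placeholder, and both relax security, so use them with care):
oc adm policy add-scc-to-user privileged system:serviceaccount:<project>:builder
oc adm policy add-scc-to-group anyuid system:authenticated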
Can you see the logs using
kubectl logs <podname> -p
This should give you the errors explaining why the pod failed.
I was able to resolve this by creating a script, "run.sh", with the following content at the end:
while :; do
    sleep 300   # keep a long-running foreground process so the container stays up
done
and in Dockerfile:
ADD run.sh /run.sh
RUN chmod +x /*.sh
CMD ["/run.sh"]
This way it works. Thanks everybody for pointing out the reason, which helped me find the resolution. But one doubt I still have: why does the process exit in OpenShift only in this case? I have tried running a Tomcat server the same way, and it works just fine without having the sleep in the script.