How can I log each time Kubernetes kills a process due to resource use? - kubernetes

Is there any way to log this action / event?
It seems that if a process is using too many resources and is not the main process, it gets a SIGKILL signal. I'd like to log this action.

You can use a preStop lifecycle hook to run a script when the container is being terminated.
Use that script to send a notification or to log info to STDOUT.
Later, after the pod has been killed, you can still get the older logs by using the -p flag with kubectl logs.
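A minimal sketch of such a hook, assuming you want the message to end up in the main container log so kubectl logs can show it later (the pod, container, and image names here are hypothetical):
apiVersion: v1
kind: Pod
metadata:
  name: logged-app                 # hypothetical pod name
spec:
  containers:
  - name: app                      # hypothetical container name
    image: my-registry/my-app:1.0  # hypothetical image
    lifecycle:
      preStop:
        exec:
          # Hook output is not part of the container log on its own, so
          # redirect it to the stdout of PID 1 (the container's main process).
          command: ["/bin/sh", "-c", "echo 'container is being terminated' >> /proc/1/fd/1"]
After the pod has been replaced, kubectl logs <podName> -p shows the previous container instance's logs, including anything the hook wrote there. Note that a preStop hook only runs when Kubernetes itself stops the container; it will not fire if the kernel OOM-kills a child process while the container keeps running.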

Related

How to check if symfony messenger is working

I have a pod running in Kubernetes / AWS cloud. Due to limited configuration options in a custom deployment process (not my fault!!) I cannot start the Symfony Messenger worker the way you usually would. What I have to do after a deployment is log into the shell and manually run
bin/console messenger:consume my_kafka_messages
Of course, as soon as the pod is automatically restarted for any reason, my worker is no longer running. So until we can change the company deployment process, I have to make sure I at least get notified if the worker isn't running.
Is there any option to, e.g., run a Symfony command which checks whether the worker is running? If that were possible, I could have the system start the worker or at least send me a notification.
How about
bin/console debug:messenger
?
If I do that and get, e.g., this output, is that a sign that the worker is running? Or is it just the configuration of a worker, which could run if it were started and may or may not be running currently?
$ bin/console deb:mess
Messenger
=========
events
------
The following messages can be dispatched:
--------------------------------------------------
#codeCoverageIgnore
App\Domain\KafkaEvents\ProductPictureCollection
handled by App\Handler\ProductPictureHandler
--------------------------------------------------
Of course I can take a crude approach and check the DB, which logs the processed datasets. But it is always possible that for, e.g., 5 days there are no data to process. In that case I would get false alarms even though everything is fine.
So checking directly whether the worker is running would be much better, but I have no idea how to do it.
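One crude but direct check, as a sketch: assuming the worker runs as a long-lived messenger:consume process inside the same container (the exact command string is taken from the question and may differ in your setup), a process lookup tells you whether it is still alive:
# exits 0 if a matching worker process exists, non-zero otherwise
pgrep -f "messenger:consume my_kafka_messages" > /dev/null \
  && echo "worker is running" \
  || echo "worker is NOT running"
A check like this could be run periodically (e.g. from cron) to send a notification when the worker has died.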

.NET Core / Kubernetes - SIGTERM, clean shutdown

I'm trying to verify that shutdown is completing cleanly on Kubernetes, with a .NET Core 2.0 app.
I have an app which can run in two "modes" - one using ASP.NET Core and one as a kind of worker process. Both use Console output plus JSON logger output (which ends up in Elasticsearch via a Filebeat sidecar container) to indicate startup and shutdown progress.
Additionally, I have console output which writes directly to stdout when a SIGTERM or Ctrl-C is received and shutdown begins.
Locally, the app works flawlessly - I get the direct console output, then the logger output flowing to stdout on Ctrl+C (on Windows).
My experiment scenario:
App deployed to GCS k8s cluster (using helm, though I imagine that doesn't make a difference)
Using kubectl logs -f to stream logs from the specific container
Killing the pod from GCS cloud console site, or deleting the resources via helm delete
Dockerfile is FROM microsoft/dotnet:2.1-aspnetcore-runtime and has ENTRYPOINT ["dotnet", "MyAppHere.dll"], so not wrapped in a bash process or anything
Not specifying terminationGracePeriodSeconds, so I guess it defaults to 30 seconds
Observing output returned
Results:
The API pod's log streaming showed just the immediate console output, "[SIGTERM] Stop signal received", not the other Console logger output about the shutdown process
The worker pod's log streaming showed a little more - the same console output plus some Console logger output about the shutdown process
The JSON logs didn't seem to pick up any of the shutdown log output
My conclusions:
I don't know if Kubernetes is allowing the process to complete before terminating it, or just issuing SIGTERM and then killing things very quickly. I think it should be waiting, but then, why no complete console logger output?
I don't know whether the stdout log streaming cuts off console output at some point before the process finally terminates.
I would guess that the JSON output doesn't come through to ES because Filebeat, running in the sidecar, terminates even if there is still outstanding data in its files to send
I would like to know:
Can anyone advise on points 1 and 2 above?
Any ideas for a way to allow a little extra time or leeway for the sidecar to send its data up, such as a pod container termination order, a delay on shutdown for that container, etc.?
SIGTERM does indeed signal termination. The less obvious part is that when the SIGTERM handler returns, everything is considered finished.
The fix is to not return from the SIGTERM handler until the app has finished shutting down - for example, by using a ManualResetEvent and Wait()ing on it in the handler.
I've started to look into this for my own purposes and have come across your question over a year after it was posted... This is a bit late, but have you tried GraceTerm?
There is an associated NuGet package for this.
From the description...
Graceterm middleware provides an implementation to ensure graceful shutdown of ASP.NET Core applications. The basic concept is: after the application receives a SIGTERM (a signal asking it to terminate), Graceterm will hold it alive until all pending requests are completed or a timeout occurs.
I haven't personally tried this yet, but it does look promising.
Try adding STOPSIGNAL SIGINT to your Dockerfile.
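On the asker's second point (giving the Filebeat sidecar a little extra leeway to ship the remaining log lines), one approach not covered in the answers above is a preStop hook on the sidecar that simply sleeps, which delays the SIGTERM for that container while the main app shuts down. A sketch with hypothetical names, assuming terminationGracePeriodSeconds is large enough to cover both the sleep and the app's own shutdown:
spec:
  terminationGracePeriodSeconds: 45     # must cover the sleep below plus the app's shutdown time
  containers:
  - name: filebeat-sidecar              # hypothetical sidecar container name
    lifecycle:
      preStop:
        exec:
          # delay SIGTERM for this container so Filebeat can ship remaining log lines
          command: ["/bin/sh", "-c", "sleep 15"]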

How to handle incorrectly configured applications?

Normally, I think it's best practice for an incorrectly configured application to simply die on startup with a detailed error message describing the problem.
For example, if an expected environment variable is missing, meaning the application can't run properly, instead of letting it run in a zombie state where it will never function, I advocate for failing loudly and killing the application with an error message:
Critical Error: Environment variable [REDIS_HOST] not set.
In Kubernetes, this ends up as a constant CrashLoopBackOff. This isn't great, as it's hard to get at that error message: the pod keeps getting restarted and the logs disappear.
Any thoughts or suggestions on the proper way to handle this?
thanks
You can customise a container's termination message by writing to /dev/termination-log, which is the default termination message path. When your container terminates you can use kubectl get pods <podName> -o go-template="{{range .status.containerStatuses}}{{.lastState.terminated.message}}{{end}}" to retrieve the message. More information can be found in the Kubernetes documentation on determining the reason for pod failure.
You can also use kubectl logs <podName> -c <containerName> --previous to view the output of the previous instance of a particular container in a Pod - this may be more useful for you as you won't have to change your application to write error messages to /dev/termination-log.
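For completeness, a sketch of the container-spec side of this (container and image names are hypothetical): terminationMessagePath is shown with its default value, and terminationMessagePolicy: FallbackToLogsOnError tells Kubernetes to use the last chunk of the container log as the message when the file is empty and the container exited with an error.
  containers:
  - name: app                                        # hypothetical container name
    image: my-registry/my-app:1.0                    # hypothetical image
    terminationMessagePath: /dev/termination-log     # this is already the default path
    terminationMessagePolicy: FallbackToLogsOnError  # fall back to the last log lines on error
On the application side, writing the error before exiting can be as simple as:
echo "Critical Error: Environment variable [REDIS_HOST] not set." > /dev/termination-log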

cf stop command does not perform graceful shutdown on bluemix

I have a Node.js app in Bluemix which holds a transaction cache in memory, and I would like to flush this cache to the DB before the application goes down. I have the appropriate event handlers to intercept SIGTERM/SIGINT signals, and everything works fine from my laptop; however, it seems that the cf stop command does not perform a graceful shutdown.
Unfortunately, there is no clear documentation around this topic. In one place in the Cloud Foundry app lifecycle docs they do mention that SIGTERM is issued first, followed by a 10-second wait, etc., but I'm not seeing this happen. Probably a bug on their side. https://docs.cloudfoundry.org/devguide/deploy-apps/app-lifecycle.html
Has anyone noticed this issue, and does anyone have a workaround, please?
CF is sending the SIGTERM first but because of how the app is started by other processes, it's not being correctly propagated to your app.
As a workaround, disable App Management by setting the CF environment variable BLUEMIX_APP_MGMT_INSTALL=false and prefix your app's start command in your package.json file with 'exec' (e.g. exec node app.js).
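Roughly, the environment-variable part of that workaround looks like this, with <appName> as a placeholder; a restage is needed for the new variable to take effect:
cf set-env <appName> BLUEMIX_APP_MGMT_INSTALL false
cf restage <appName>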

Console hanging when deleting Job in Kubernetes

I'm trying to delete a Job in Kubernetes, but every time I run "kubectl delete job [JOBNAME]" it just "hangs" indefinitely.
How can I diagnose this issue to try and determine why the Job's not able to be deleted?
Turn up your debugging by setting the verbosity to 9. You will see that kubectl is actually clearing out a lot of different resources created by the job. Ctrl-C out of it.
Use --cascade=false and it will actually complete shortly; see issue 8598.
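Put together, the two suggestions look like this (JOBNAME is a placeholder):
kubectl delete job JOBNAME -v=9              # verbose output shows each dependent resource being cleaned up
kubectl delete job JOBNAME --cascade=false   # delete only the Job object itself, leaving its pods behind
On newer kubectl versions, --cascade=false is deprecated in favour of the equivalent --cascade=orphan.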