I have a Node.js app on Bluemix that holds a transaction cache in memory, and I would like to flush this cache to the DB before the application goes down. I have the appropriate event handlers to intercept SIGTERM/SIGINT signals, and everything works fine from my laptop; however, it seems that the cf stop command does not perform a graceful shutdown.
Unfortunately, there is no clear documentation around this topic. The Cloud Foundry app lifecycle doc does mention that SIGTERM is issued first and the app is then given roughly 10 seconds before being killed, but I'm not seeing this happen. Possibly a bug on their side. https://docs.cloudfoundry.org/devguide/deploy-apps/app-lifecycle.html
Has anyone noticed this issue, and is there a workaround?
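For reference, my shutdown handling looks roughly like the sketch below (simplified; flushCacheToDb() stands in for the real flush logic):

```js
// Simplified sketch of the shutdown hook; flushCacheToDb() is a placeholder
// for the real cache-flush logic.
async function flushCacheToDb() {
  // ...write the in-memory transaction cache to the DB here...
}

async function shutdown(signal) {
  console.log(`${signal} received, flushing cache before exit`);
  await flushCacheToDb();
  process.exit(0);
}

process.on('SIGTERM', () => shutdown('SIGTERM'));
process.on('SIGINT', () => shutdown('SIGINT'));
```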
CF does send SIGTERM first, but because of how the app is started (wrapped by other processes), the signal is not being propagated correctly to your app.
As a workaround, disable App Management by setting the CF environment variable BLUEMIX_APP_MGMT_INSTALL=false and prefix your app's start command in your package.json file with 'exec' (e.g. exec node app.js).
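Concretely, the package.json change would look something like this (app.js is a placeholder for your actual entry point), and the environment variable can be set with cf set-env <app-name> BLUEMIX_APP_MGMT_INSTALL false followed by a restage:

```json
{
  "scripts": {
    "start": "exec node app.js"
  }
}
```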
I have a backend service in NestJS that is deployed on Vercel.
I need several schedulers, so I have used the @nestjs/schedule lib, which is super easy to use.
Locally, everything works perfectly.
For some reason, the only thing that is not working in my production environment is those schedulers. Everything else is working: endpoints, database access, and so on.
Does anyone have an idea why? Is it something with my deployment? Does Vercel have some issue with this? Does the schedule library require something that Vercel doesn't provide?
I am clueless.
Cold boot is the process of starting a computer from a shutdown or powerless state and bringing it to a normal working condition.
For serverless deployments, this means the code you deployed only runs when the endpoint is called. The platform spins up a virtual machine to execute your code and keeps that machine running for a certain period of time in case you get another API hit; it's cheaper and easier for the provider to keep the machine alive for, say, 60 seconds or 5 minutes than to shut it down after every function execution and redeploy on the next call.
So in your case, what is most likely happening is that the machine you registered the cron on is killed after a period of time. Cron-style jobs are tied to the specific machine they are scheduled on, so if the machine is shut down, the cron dies with it. The only case where the cron would fire is if it was due to trigger before the machine was shut down.
Certain cloud providers give you the option to keep machines alive. Google Cloud, for example, used to keep frequently called serverless functions warm, shifting from a cold boot to a hot start so the machine isn't killed entirely; as long as you have traffic, the machines stay alive.
From quick research, Vercel isn't well suited to in-process crons, due to the nature of its infrastructure, and that is essentially what you are relying on here. In general, long-lived crons don't belong inside serverless functions. You can run the scheduled work through queues or another third-party service instead; check out this link by Vercel.
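As a rough illustration of the alternative (CronController and ReportsService are placeholder names, and the external trigger could be a Vercel cron entry or any third-party scheduler hitting the route): expose the scheduled work as a normal HTTP endpoint and let the scheduler call it, instead of relying on an in-process @nestjs/schedule timer that dies with the instance.

```ts
import { Controller, Injectable, Post } from '@nestjs/common';

// Placeholder service holding the work the @Cron() method used to do.
@Injectable()
export class ReportsService {
  async generateDailyReport(): Promise<void> {
    // ...the actual scheduled work...
  }
}

// An external scheduler calls POST /cron/daily-report on its own schedule.
@Controller('cron')
export class CronController {
  constructor(private readonly reports: ReportsService) {}

  @Post('daily-report')
  async runDailyReport(): Promise<{ ok: boolean }> {
    await this.reports.generateDailyReport();
    return { ok: true };
  }
}
```

Register both in a module as usual; the key point is that nothing has to stay alive between invocations.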
I'm trying to verify that shutdown is completing cleanly on Kubernetes, with a .NET Core 2.0 app.
I have an app which can run in two "modes" - one using ASP.NET Core and one as a kind of worker process. Both produce two kinds of output that indicate startup and shutdown progress: plain Console output, and JSON logger output that ends up in Elasticsearch via a Filebeat sidecar container.
Additionally, I have console output which writes directly to stdout when a SIGTERM or Ctrl-C is received and shutdown begins.
Locally, the app works flawlessly - I get the direct console output, then the logger output flowing to stdout on Ctrl+C (on Windows).
My experiment scenario:
App deployed to a GKE Kubernetes cluster (using helm, though I imagine that doesn't make a difference)
Using kubectl logs -f to stream logs from the specific container
Killing the pod from the Google Cloud console, or deleting the resources via helm delete
Dockerfile is FROM microsoft/dotnet:2.1-aspnetcore-runtime and has ENTRYPOINT ["dotnet", "MyAppHere.dll"], so not wrapped in a bash process or anything
Not specifying terminationGracePeriodSeconds, so I assume it defaults to 30 seconds
Observing output returned
Results:
The API pod log stream showed just the immediate console output, "[SIGTERM] Stop signal received", but not the other Console logger output about the shutdown process
The worker pod log stream showed a little more: the same console output and some Console logger output about the shutdown process
The JSON logs didn't seem to pick up any of the shutdown log output
My conclusions:
1. I don't know whether Kubernetes is allowing the process to complete before terminating it, or just issuing SIGTERM and then killing things very quickly. I think it should be waiting, but then why is the console logger output incomplete?
2. I don't know whether the stdout log stream is cut off at some point before the process finally terminates.
3. I would guess that the JSON output doesn't make it to Elasticsearch because Filebeat, running in the sidecar, is terminated even if there is still outstanding data in its files to send.
I would like to know:
Can anyone advise on points 1 and 2 above?
Any ideas for a way to allow a little extra time or leeway for the sidecar to send its data - e.g. a pod container termination order, a delay on shutdown for that container, etc.? (A rough sketch of what I mean follows.)
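Something like this is what I have in mind for the sidecar delay - an untested sketch with placeholder images and timings: a preStop sleep on the Filebeat sidecar, inside the overall terminationGracePeriodSeconds budget, so it gets a few extra seconds to ship outstanding log lines before it is stopped.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  terminationGracePeriodSeconds: 60      # overall shutdown budget
  containers:
    - name: app
      image: myregistry/myapp:latest      # placeholder image
    - name: filebeat
      image: docker.elastic.co/beats/filebeat:6.4.2   # placeholder version
      lifecycle:
        preStop:
          exec:
            # Delay the stop signal to the sidecar so it can flush its queue.
            command: ["sh", "-c", "sleep 15"]
```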
SIGTERM does indeed signal termination. The less obvious part is that when the SIGTERM handler returns, everything is considered finished.
The fix is to not return from the SIGTERM handler until the app has finished shutting down - for example, by using a ManualResetEvent and calling Wait() on it in the handler.
I've started to look into this for my own purposes and have come across your question over a year after it was posted... This is a bit late, but have you tried GraceTerm?
There is an associated NuGet package for this.
From the description...
Graceterm middleware provides an implementation to ensure graceful shutdown of AspNet Core applications. The basic concept is: after the application receives a SIGTERM (a signal asking it to terminate), Graceterm will hold it alive until all pending requests are completed or a timeout occurs.
I haven't personally tried this yet, but it does look promising.
Try adding STOPSIGNAL SIGINT to your Dockerfile.
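With the Dockerfile from the question, that would look something like this (the idea, presumably, being to send the app the same signal that already works for you locally with Ctrl+C):

```dockerfile
FROM microsoft/dotnet:2.1-aspnetcore-runtime
# Send SIGINT (the Ctrl+C signal that already works locally) instead of the
# default SIGTERM when the container is stopped.
STOPSIGNAL SIGINT
ENTRYPOINT ["dotnet", "MyAppHere.dll"]
```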
I have already read the thread:
Wakanda Server scripted clean shutdown
This does not address my question.
We are running Wakanda Server 11.197492.
We want an automated, orderly, ensured shutdown of Wakanda Server - no matter which version we are running.
Before we give the "shutdown" command, we will stop inbound traffic for 1 to 2 minutes, to ensure that no httpHandlers are running when we shut down.
We have scripted a single SharedWorker process to look for the "shutdown" command, and execute solution.quitServer().
At this time no other SharedWorker processes are running, and no active threads should be executing. This will likely not always be the case.
When this is executed, is a "solution quit" guaranteed?
Is solution.quitServer() the best way to initiate an automated solution shutdown?
Will there be a better way?
Is there a way to know if any of the Solution's Projects are currently executing threads prior to shutting down?
If more than one Project issues a solution.quitServer() call within a few seconds of each other, will that be a problem?
solution.quitServer() is probably not the best way to shutdown your server as it will be deprecated in the next major release.
I would recommend sending a SIGKILL instead, as you point out in your question:
Wakanda Server scripted clean shutdown
Some fixes were made in v1.1.0 to safely close Wakanda Server after a kill.
I notice that when I am running a process under Marathon and I restart it, the process automatically starts back up. The way the logic of the process works, if it is restarted it enters a recovery mode, where it tries to replay its state. Recovery mode is entered when a command-line flag such as "-r" is passed. I want to append this flag to the cmd command that was initially used during startup in Marathon. Is there an option somewhere in Marathon for this?
I solved my issue by using the event subscriber in Marathon. By using PUT with curl rather than POST, you are able to modify an existing deployment rather than creating a brand new one.
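For illustration, the PUT might look something like this (host, port, app id, and cmd are placeholders; Marathon then redeploys the app with the updated definition):

```sh
# Hypothetical example: update the app's cmd in place so the restarted task
# starts with the recovery flag. All names and paths here are placeholders.
curl -X PUT http://marathon.example.com:8080/v2/apps/my-app \
  -H "Content-Type: application/json" \
  -d '{"cmd": "./my-process -r"}'
```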
I want to implement automatic service restarting for several Tomcat applications - applications that take a long time to start, even over 10 minutes.
Mainly, the check would verify that the application responds over HTTP with a valid response.
Still, this is not the problem; the problem is how to prevent this uptime check from failing while the service is under maintenance, scheduled or not.
I don't want this service to be started if it was stopped manually with "service appname stop".
I considered creating .maintenance files on the stop or restart actions of the daemon and checking for them before triggering an automated restart.
So far, the only problem I wasn't able to solve properly is how to detect that the app has finished starting up and then remove the .maintenance file, so that the automatic restart works properly.
Note that an init.d script is not supposed to wait, so the daemon should start a background command that solves this problem.
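Something along these lines is what I have in mind for that background command (a rough, untested sketch - the URL, timeout, and marker path are placeholders):

```sh
#!/bin/sh
# Launched in the background from the init.d "start" action, e.g.:
#   /usr/local/bin/clear-maintenance.sh &
# Polls the app until it answers with HTTP 200, then removes the
# .maintenance marker so the uptime check / auto-restart is re-enabled.
URL="http://localhost:8080/appname/health"   # placeholder health URL
MARKER="/var/run/appname.maintenance"        # placeholder marker file
DEADLINE=$(( $(date +%s) + 900 ))            # give up after 15 minutes

while [ "$(date +%s)" -lt "$DEADLINE" ]; do
  if curl -s -o /dev/null -w '%{http_code}' "$URL" | grep -q '^200$'; then
    rm -f "$MARKER"
    exit 0
  fi
  sleep 10
done
exit 1
```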