I am trying to build a little REST service with Julia and the Genie library. The last command is up(8888).
When I start this from the Julia REPL, all is ok.
When I start it from the command line with >julia myrestapi.jl, the program starts and stops immediately, i.e. up() doesn't go into an infinite loop.
What can I do to keep the server running?
When the Genie server is started in asynchronous mode, it runs off the main Task and allows script processing to continue. If the script ends, the whole process and its spawned Tasks are stopped, which is not what you want for a running web service. There are two ways to prevent this:
Don't run the server off the main Task; run it synchronously instead. In code:
Genie.config.run_as_server = true
...
Genie.Server.up()
Make sure the main process does not end until the server Task ends. In code:
Base.JLOptions().isinteractive == 0 && wait()
The isinteractive condition runs the wait() only when the code is executed as a script. In an interactive REPL session the usual desire is to issue more commands, and the REPL itself keeps the server Task running in the background.
This is what I want to achieve:
root> ./webserver start // Does not block the terminal after startup, runs in the background and the process is guarded
root>
My current implementation logic:
Logic running in the background
use std::env;
use std::process::Command;

fn main() {
    let args: Vec<String> = env::args().collect();
    if args.len() == 2 && args[1] == "start" {
        // Main process starts a child process running this same binary
        let child = Command::new(&args[0])
            .spawn()
            .expect("Child process failed to start.");
        println!("child pid: {}", child.id());
        // Main process exits here
    } else {
        // Main business logic: run the web server
    }
}
This way, the Rust program runs in the background without blocking the terminal. However, anything the background process prints is still displayed on the terminal, and the program exits when the terminal is closed.
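The stray output can be dealt with when spawning the child, by redirecting its stdout/stderr to a file. A minimal stdlib-only sketch of that idea; the `spawn_logged` helper, the log path, and the `echo` demo command are illustrative stand-ins, not part of the original program:

```rust
use std::fs::File;
use std::process::{Child, Command, Stdio};

// Spawn `cmd` in the background with stdout and stderr redirected to
// `log_path`, so the child's output no longer reaches the terminal.
fn spawn_logged(cmd: &str, args: &[&str], log_path: &str) -> std::io::Result<Child> {
    let log = File::create(log_path)?;
    let err = log.try_clone()?;
    Command::new(cmd)
        .args(args)
        .stdout(Stdio::from(log))
        .stderr(Stdio::from(err))
        .spawn()
}

fn main() {
    // Demo with a short-lived command in place of the real web server binary.
    let mut child = spawn_logged("echo", &["hello from the background"], "/tmp/webserver.log")
        .expect("child failed to start");
    println!("child pid: {}", child.id());
    // Demo only: wait, then show that the output went to the file,
    // not the terminal. A real `start` handler would just exit here.
    child.wait().expect("wait failed");
    let logged = std::fs::read_to_string("/tmp/webserver.log").unwrap();
    print!("logged: {}", logged);
}
```

Note this only silences the terminal; the child still dies with the terminal session unless it is detached from it (which is exactly the daemonization problem discussed below).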
Process daemon logic
My idea is to monitor the system's exit signals and not process the exit requests:
SIGHUP 1 /* Hangup (POSIX). */
SIGINT 2 /* Interrupt (ANSI). */
SIGQUIT 3 /* Quit (POSIX). */
SIGTERM 15 /* Termination (ANSI). */
code:
use signal_hook::{iterator::Signals, SIGHUP, SIGINT, SIGQUIT, SIGTERM};
use std::{thread, time::Duration};

pub fn process_daemon() {
    let signals = match Signals::new(&[SIGHUP, SIGINT, SIGQUIT, SIGTERM]) {
        Ok(t) => t,
        Err(e) => panic!("Failed to register signal handlers: {}", e),
    };
    thread::spawn(move || {
        for sig in signals.forever() {
            println!("Received signal {:?}", sig);
        }
    });
    thread::sleep(Duration::from_secs(2));
}
Is there any more elegant approach? Looking forward to your reply.
TLDR: if you really want your process to act like a service (and never quit), probably do the work to set up a service manager. Otherwise, just let it be a normal process.
Daemonizing a Process
One thing to notice right off the bat is that most of the considerations about daemonizing have nothing to do with Rust as a language and are more about:
The underlying system your processes are targeted for
The exact behavior of your daemon processes once spawned
By looking at your question, it seems you have realized most of this. Unfortunately, to answer your question properly we have to delve a bit into the intricacies of processes and how they are managed. It should be noted that existing service managers are a great solution if you are OK with significant platform dependence in your launching infrastructure:
Linux: systemd
FreeBSD: rc
MacOS: launchd
Windows: sc
As you can see, no simple feat if you want to have a simple deployment that just works (provided that it is compiled for the relevant system). These are just the bare metal service managers. If you want to support virtual environments you will have to think about Kubernetes services, dockerizing, etc.
These tools exist because there are many considerations to managing a long-running process on a system:
Should my daemon behave like a service and respawn if killed (or if the system is rebooted)? The tools above will allow for this.
If a service, should my daemon have status states associated with it to help with maintenance? This can help with managing outages and building tooling to scale horizontally.
If the daemon shouldn't be a service (unlikely in your case given your binary's name) there are even more questions: should it be attached to the parent process? Should it be attached to the login process group?
My suggestion, given how complex this can become: simply run the process directly. Don't daemonize at all.
For testing, (if you are in a unix-like environment) you can run your process in the background:
./webserver start &
This will spawn the new process in the background, but attach it to your shell's process list. This can be nice for testing, because if that shell goes away the system will clean up these attached processes along with it.
The above will direct stderr and stdout file descriptors back to your terminal and print them. If you wish to avoid that, you can always redirect the output somewhere else.
Disabling signals to a process like this doesn't seem like the right approach to me. Reserve these signals for gracefully exiting your process once you need to save state or send a termination message to a client. If you suppress them as above, your daemon will only be killable by kill -9 <pid>, by rebooting, or by finding some non-standard signal you haven't overridden whose default behavior is to terminate.
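The graceful-exit pattern usually means translating a signal into a shutdown flag that the main loop checks. A stdlib-only sketch of the pattern; here a spawned thread stands in for the signal_hook iterator loop, which in real code would set the same flag when SIGTERM arrives (`serve_until_shutdown` and the timings are made up for illustration):

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread;
use std::time::Duration;

// Run the work loop until `shutdown` is set, then clean up once and return.
fn serve_until_shutdown(shutdown: Arc<AtomicBool>) -> u32 {
    let mut iterations = 0;
    while !shutdown.load(Ordering::SeqCst) {
        // ... handle one unit of work (e.g. one request) here ...
        iterations += 1;
        thread::sleep(Duration::from_millis(10));
    }
    // Graceful cleanup goes here: flush state, close connections, etc.
    iterations
}

fn main() {
    let shutdown = Arc::new(AtomicBool::new(false));
    let flag = Arc::clone(&shutdown);
    // Stand-in for the signal thread: a real implementation would store
    // `true` from the signal_hook `forever()` loop on SIGTERM/SIGINT.
    thread::spawn(move || {
        thread::sleep(Duration::from_millis(100));
        flag.store(true, Ordering::SeqCst);
    });
    let n = serve_until_shutdown(shutdown);
    println!("served {} iterations, exiting cleanly", n);
}
```

The point of the flag (rather than exiting from the signal thread) is that the main loop gets to finish its current unit of work and run cleanup before the process ends.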
Is it possible to subscribe to an event in powershell when a particular executable is run?
We have an application that hogs up memory and then causes the system to crash, and if I could attach an event that starts a timer when it starts running and just kills after a certain amount of time, that would fix the issue.
You can use Task Scheduler to trigger on a Windows event (Task Scheduler Trigger).
Then you can add your PowerShell script as an action, and even delay the task if you like.
As Crusadin suggests, have a script run when an event log entry is made. I have just done the same thing here at work.
If I run the following command in my session...
(Get-Process -Id $pid).CloseMainWindow()
I am able to gracefully shut down a process (no modal windows or other popups arise).
If, however, the pid is in another user's session on the same machine (running RDS), the process does not close, and CloseMainWindow() returns FALSE (it returns TRUE if it's running in my own session). It also works if I run the powershell from the other user's session.
I specifically need a way to gracefully shut down the program as the program has a few important cleanup actions required to keep its database in order. So stop-process or process.kill() will not work.
After lengthy research, it does not seem possible to do this. There is, however, a solution which met at least some of my requirements.
You can create a Windows Scheduled Task which is triggered on session disconnect. This allows you to run a cleanup job as the user, rather than as the administrator, which allows programs to exit gracefully.
It has two major drawbacks:
It is triggered even if the user just has a minor network interruption (so you have to build a wait into the script to sleep for a bit and then check whether the session is still disconnected); not a clean solution.
It isn't called during a log-off event. For that you need to use a logoff script triggered by GPO.
Hope this helps someone in the future.
When trying to use Gatling CLI mode, Gatling starts successfully and recording also happens. The problem is stopping the recording. As mentioned in the documentation (https://gatling.io/docs/2.3/http/recorder/), it can be stopped either with CTRL-C or by killing the pid stored in the .gatling-recorder-pid file.
I have used the second approach. Though the recording stops successfully, no simulation file is created. After some trial and error, my understanding is that unless CTRL-C is pressed, the simulation file is never created; killing the pid stops the recorder just before file creation. But I am unable to simulate the CTRL-C action in the Windows command prompt from Java. Please help. Thanks in advance.
The kill command sends a SIGTERM (termination) signal. The SIGINT (interrupt) signal is the one equivalent to Ctrl+C.
kill -SIGINT processPIDHere
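On a Unix-like system, you can see the difference from code by shelling out to `kill` and inspecting how the child terminated. A hedged sketch, using `sleep` as a stand-in for the recorder process (this demonstrates the SIGINT-vs-SIGTERM distinction, not a Windows solution):

```rust
use std::os::unix::process::ExitStatusExt;
use std::process::Command;
use std::thread;
use std::time::Duration;

fn main() {
    // Stand-in for the long-running recorder process.
    let mut child = Command::new("sleep").arg("30").spawn().expect("spawn failed");
    thread::sleep(Duration::from_millis(100));
    // Equivalent of `kill -SIGINT <pid>`: deliver the Ctrl+C signal.
    Command::new("kill")
        .arg("-INT")
        .arg(child.id().to_string())
        .status()
        .expect("kill failed");
    let status = child.wait().expect("wait failed");
    // SIGINT is signal 2; a plain `kill` (SIGTERM) would report 15 instead.
    println!("terminated by signal: {:?}", status.signal());
}
```

A process that installs a SIGINT handler (as the Gatling recorder does to write the simulation file) runs that handler instead of dying immediately, which is why SIGINT works where SIGTERM does not.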
I'm working with an embedded computer that runs Debian. I have already managed to run a command right after it boots and play the "bell" to signal that it is ready to work, and, for example, to try connecting to a service.
The problem is that I need to play the bell (or run any command/program) when the system is halted, so it is safe to unplug the power. Is there any runscript that runs just after halt?
If you have a look in /etc/init.d, you'll see a script called halt. I'm pretty certain that when /sbin/halt is called in a runlevel other than 0 or 6, it calls /sbin/shutdown, which runs this script (unless called with an -n flag). So maybe you could add your own hook into that script? Obviously, it would be before the final halt was called, but nothing runs after that, so maybe that's ok.
Another option would be to use the fact that all running processes get sent a SIGTERM followed (a second or so later) by a SIGKILL. So you could write a simple daemon that just sat there until given a SIGTERM, at which point it went "ping" and died.