How to make Rust run gracefully in the background and daemonize? - service

This is what I want to achieve:
root> ./webserver start // does not block the terminal after startup; runs in the background and the process is supervised
root>
My current implementation logic:
Logic for running in the background:
use std::env;
use std::process::Command;

fn main() {
    let args: Vec<String> = env::args().collect();
    if args.len() == 2 {
        if args[1] == "start" {
            // Parent: re-launch this binary without arguments as the child process,
            // then let the parent exit so the terminal is not blocked.
            let child = Command::new(&args[0])
                .spawn()
                .expect("Child process failed to start.");
            println!("child pid: {}", child.id());
            // Parent exits here.
        }
    } else {
        // Child (started with no arguments): run the main business logic.
        run_webserver();
    }
}

fn run_webserver() {
    // Main business logic (placeholder).
}
This way, the Rust program runs in the background without blocking the terminal, but anything the background process prints is still displayed on the terminal, and the program exits when the terminal is closed.
Process daemon logic
My idea is to listen for the system's termination signals and simply not act on them:
SIGHUP 1 /* Hangup (POSIX). */
SIGINT 2 /* Interrupt (ANSI). */
SIGQUIT 3 /* Quit (POSIX). */
SIGTERM 15 /* Termination (ANSI). */
code:
use signal_hook::{iterator::Signals, SIGHUP, SIGINT, SIGQUIT, SIGTERM};
use std::{thread, time::Duration};

pub fn process_daemon() {
    let signals = match Signals::new(&[SIGHUP, SIGINT, SIGQUIT, SIGTERM]) {
        Ok(t) => t,
        Err(e) => panic!("failed to register signal handlers: {}", e),
    };
    // Swallow these signals instead of letting them terminate the process.
    thread::spawn(move || {
        for sig in signals.forever() {
            println!("Received signal {:?}", sig);
        }
    });
    thread::sleep(Duration::from_secs(2));
}
Is there any more elegant approach? Looking forward to your reply.

TLDR: if you really want your process to act like a service (and never quit), probably do the work to set up a service manager. Otherwise, just let it be a normal process.
Daemonizing a Process
One thing to notice right off the bat is that most of the considerations about daemonizing have nothing to do with Rust as a language and are more about:
The underlying system your processes are targeted for
The exact behavior of your daemon processes once spawned
By looking at your question, it seems you have realized most of this. Unfortunately, to answer your question properly we have to delve a bit into the intricacies of processes and how they are managed. It should be noted that existing 'service' managers are a great solution if you are OK with significant platform dependence in your launching infrastructure:
Linux: systemd
FreeBSD: rc
MacOS: launchd
Windows: sc
As you can see, it is no simple feat if you want a simple deployment that just works (provided it is compiled for the relevant system). These are just the bare-metal service managers; if you want to support virtual environments you will also have to think about Kubernetes services, dockerizing, and so on.
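To make the Linux entry concrete, a minimal systemd unit for a binary like this might look roughly like the sketch below; the unit name, install path, and user are assumptions, not something from the question.
[Unit]
Description=webserver daemon
After=network.target

[Service]
# Assumed install location of the compiled binary.
ExecStart=/usr/local/bin/webserver
Restart=always
User=www-data

[Install]
WantedBy=multi-user.target
Dropped into /etc/systemd/system/webserver.service, it would be started with systemctl start webserver and enabled at boot with systemctl enable webserver; systemd then takes care of backgrounding, respawning, and capturing output in the journal.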
These tools exist because there are many considerations to managing a long-running process on a system:
Should my daemon behave like a service and respawn if killed (or if the system is rebooted)? The tools above will allow for this.
If a service, should my daemon have status states associated with it to help with maintenance? This can help with managing outages and building tooling to scale horizontally.
If the daemon shouldn't be a service (unlikely in your case given your binary's name) there are even more questions: should it be attached to the parent process? Should it be attached to the login process group?
My advice for your process, given how complex this can become: simply run the process directly. Don't daemonize at all.
For testing (if you are in a Unix-like environment), you can run your process in the background:
./webserver start &
This will spawn the new process in the background, but attach it to your shell's process list. This can be nice for testing, because if that shell goes away the system will clean up these attached processes along with it.
The above will direct stderr and stdout file descriptors back to your terminal and print them. If you wish to avoid that, you can always redirect the output somewhere else.
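If you keep the spawning approach from the question, the same redirection can be done from Rust when the child is launched, by giving it its own stdio handles. A minimal sketch, assuming a hypothetical log path; note this only keeps output off the terminal, and without a proper daemonization step the child may still receive SIGHUP when the session ends.
use std::fs::File;
use std::io;
use std::process::{Command, Stdio};

fn spawn_with_redirected_output(program: &str) -> io::Result<u32> {
    // Write the child's stdout and stderr to a log file instead of the terminal.
    let log = File::create("/tmp/webserver.log")?;
    let err_log = log.try_clone()?;
    let child = Command::new(program)
        .stdin(Stdio::null())          // do not read from the terminal
        .stdout(Stdio::from(log))      // stdout -> log file
        .stderr(Stdio::from(err_log))  // stderr -> same log file
        .spawn()?;
    Ok(child.id())
}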
Ignoring signals in a process like this doesn't seem like the right approach to me. Reserve these signals for gracefully exiting your process when you need to save state or send a termination message to a client. If you swallow them all, your daemon will only be killable by kill -9 <pid>, by rebooting, or by finding some non-standard signal you haven't overridden whose default behavior is to terminate.
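To make that concrete, a common pattern is to have the signal set a flag that the main loop checks, so the process finishes its current work and exits cleanly instead of swallowing the signal. A minimal sketch using signal_hook's flag helper, with the same crate-root constants as the question's snippet; the loop body and cleanup are placeholders.
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::{io, thread, time::Duration};

fn main() -> io::Result<()> {
    let shutdown = Arc::new(AtomicBool::new(false));
    // Set the flag when SIGTERM or SIGINT arrives, instead of killing the process.
    signal_hook::flag::register(signal_hook::SIGTERM, Arc::clone(&shutdown))?;
    signal_hook::flag::register(signal_hook::SIGINT, Arc::clone(&shutdown))?;

    while !shutdown.load(Ordering::Relaxed) {
        // Main business logic goes here (placeholder).
        thread::sleep(Duration::from_millis(200));
    }

    // Graceful shutdown: save state, notify clients, close sockets, etc.
    println!("termination signal received, shutting down cleanly");
    Ok(())
}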

Related

Avoid stopping of Julia for REST service

I am trying to build a little REST service with Julia and the Genie library. The last command is up(8888).
When I start this from the Julia REPL, everything is OK.
When I start it from the command line, like >julia myrestapi.jl, the program starts and stops immediately, i.e. up() doesn't go into an infinite loop.
What can I do to keep the server running?
When the Genie server is initiated in asynchronous mode, it runs off the main Task, and allows script processing to continue. If the script ends, the whole process and its spawned Tasks are stopped. This behavior is not good for a running web-service. To keep this from happening, two suggestions are:
Don't run the server off the main Task; run it synchronously on the main Task instead. In code:
Genie.config.run_as_server = true
...
Genie.Server.up()
Make sure the main process does not end until the server Task ends. In code:
Base.JLOptions().isinteractive == 0 && wait()
The isinteractive condition runs the wait() only when the code is running as a script; in an interactive REPL session the usual desire is to issue more commands, and the REPL itself keeps the server Task running in the background.

Killing a process with swift programmatically

I am trying to find a way to identify and kill specific processes in a Mac application. Is there a built-in class with functions that can return the list of running processes? One way is to run terminal commands using Process() and execute /usr/bin/killall to kill processes, but I need to do it programmatically, since running terminal commands from an application is not good practice. For example, deleting a file can also be done by running a terminal command through Process(), while the better way is to use FileManager.default.removeItem().
If you're looking for an application (rather than a process), then see NSWorkspace.shared.runningApplications. You can call terminate() or forceTerminate() on those elements.
If you want a list of all BSD processes (what you would get from a call to ps for example), that's done with sysctl (the code in the Q&A is in C; you'd have to wrap it to Swift, or rewrite it). I don't believe there's any Cocoa wrapper for that. To kill a process, once you have its PID, use signal, which is what the kill Unix command uses. Typically you want to send SIGTERM, which is a normal shutdown. To force-kill a process, send SIGKILL.

Customise debugger stop/restart button behavior

I'm using VS Code to write and debug a C++ program binding to libfuse.
Unfortunately, if you kill a libfuse process with SIGKILL, you then have to use sudo umount -f <mountpoint> before you can start the program again. It's a minor nuisance if I have to do this every time I want to stop or restart debugging, especially as I shouldn't have to authenticate to do such a regular task (the sudo is somehow necessary despite the mount occurring as my user).
While I think this is mainly FUSE's fault (it should gracefully recover from a process being ungracefully killed and unmount automatically instead of leaving the directory saying Transport endpoint is not connected), I also think there should be a way to customise VS Code (or any IDE) to run some clean-up when you want to stop debugging.
I've found that entering -exec signal SIGTERM in the Debug Console will gracefully unmount the directory correctly, stop the process and tell VS Code it's no longer debugging (status bar changes back from orange to blue). But I can't seem to find a way to automate this. I've tried using a .gdbinit file, with some inspiration from this question:
handle SIGTERM nostop
# This doesn't work as hook-quit isn't run when quitting via MI mode, which VS Code uses...
define hook-quit
signal SIGTERM
end
But as noted in the linked question, GDB ignores quit hooks when in MI mode, and VS Code uses MI mode.
The ideal solution for me would be if I could put something in a .vscode configuration file telling it to send -exec signal SIGTERM when I click the stop or restart buttons (and then wait for whatever notification it's getting that debugging has stopped, before restarting if applicable) but I imagine there probably isn't an option for that.
Even if the buttons can't be customised, I'd be happy with the ability to have a keybinding that would just send -exec signal SIGTERM to the Debug Console without me having to open said console and enter the command, though the Command Palette doesn't show anything useful here (nothing that looks like it will send a specified Debug Console command), so I don't expect there's a bindable command for that either.
Does anyone have any suggestions? Or would these belong as feature requests over on the VS Code github? Any way to get GDB to respect its quit hook in MI mode, or to get FUSE to gracefully handle its process being killed would be appreciated too.

Pass SIGTSTP signal to all processes in a job in LSF

The problem statement in short: Is there a way in LSF to pass a signal SIGCONT/SIGTSTP to all processes running within a job?
I have a Perl wrapper script that runs on LSF (Version 9.1.2) and starts a tool (Source not available) on the same LSF machine as the Perl script.
The tool starts 2 processes, one for license management and another for doing the actual work. It also supports an option where sending SIGTSTP/SIGCONT to both processes will release/reacquire the license (which is what I wish to achieve).
Running bkill -s SIGCONT <JOB_ID> only resumes the tool process and not the license process, which is a problem.
I tried to see if I can send the signals to the Perl script's own PGID, but the license process starts its own process group.
Any suggestions to move forward through Perl or LSF options are welcome.
Thanks,
Abhishek
I tried to see if I can send the signals to the Perl script's own PGID, but the license process starts its own process group.
This is likely your problem right here. LSF keeps track of "processes running within the job" by process group. If your job spawns a process that runs within its own process group (say by daemonizing itself) then it essentially is a runaway process out of LSF's control -- it becomes your job's responsibility to manage it.
For reference, see the section on "Detached processes" here.
As for options:
I think the cgroups tracking functionality helps in a lot of these cases; you can ask your admin whether LSF_PROCESS_TRACKING and LSF_LINUX_CGROUP_ACCT are set in lsf.conf. If they aren't, ask them to set them and see if that helps in your case (you need to make sure the host you're running on supports cgroups). In 9.1.2 this feature is turned on at installation time, so this option might not actually help you for various reasons (for example, your hosts don't have cgroups enabled).
Manage the license process yourself. If you can find out the PID/PGID of the license process from within your Perl script, you can install custom signal handlers for SIGCONT/SIGTSTP in your script using sigtrap or the like and forward them to the license process yourself when your script receives them through bkill. See here.

How to run a command just after shutdown or halt in Debian?

I'm working with an embedded computer that runs Debian. I have already managed to run a command right after it boots and play the "bell" to signal that it is ready to work and, for example, to try to connect to a service.
The problem is that I need to play the bell (or run any command/program) when the system is halted, so it is safe to unplug the power. Is there any run script that runs just after halt?
If you have a look in /etc/init.d, you'll see a script called halt. I'm pretty certain that when /sbin/halt is called in a runlevel other than 0 or 6, it calls /sbin/shutdown, which runs this script (unless called with an -n flag). So maybe you could add your own hook into that script? Obviously, it would be before the final halt was called, but nothing runs after that, so maybe that's ok.
Another option would be to use the fact that all running processes get sent a SIGTERM followed (a second or so later) by a SIGKILL. So you could write a simple daemon that just sat there until given a SIGTERM, at which point it went "ping" and died.