How do I run extra commands when a program in supervisor (re)starts? - supervisord

How do I run extra commands when a program in supervisor starts/restarts?
In particular, in this case I need to chmod a file (a socket file) that the running process creates.

You can use an event listener. Supervisor provides a way for a specially written program (which it runs as a subprocess), called an “event listener”, to subscribe to “event notifications”. When your listener receives the relevant event, it can do whatever you need.
In your case, you can listen for the event that indicates your program has started or stopped, and then run the command to chmod the file.
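For concreteness, a rough sketch of how that can look: supervisord runs the listener as a subprocess and speaks a simple line protocol with it over stdin/stdout. A config entry along these lines (the listener name and path are placeholders) subscribes it to process-state events:

[eventlistener:socket-chmod]
command=/usr/local/bin/socket-chmod-listener
events=PROCESS_STATE_RUNNING

The listener itself can be written in any language, since it only reads and writes the protocol. A minimal sketch in Rust (the process name your_program and the socket path are assumptions):

use std::io::{self, Read, Write};
use std::process::Command;

fn main() {
    let stdin = io::stdin();
    let mut stdout = io::stdout();
    loop {
        // Tell supervisord we are ready for the next event.
        stdout.write_all(b"READY\n").unwrap();
        stdout.flush().unwrap();

        // The header is one line of "key:value" tokens ending with the
        // payload length, e.g. "... eventname:PROCESS_STATE_RUNNING len:71".
        let mut header = String::new();
        stdin.read_line(&mut header).unwrap();
        let len: usize = header
            .split_whitespace()
            .find_map(|tok| tok.strip_prefix("len:"))
            .and_then(|v| v.parse().ok())
            .unwrap_or(0);

        // The payload names the process that changed state.
        let mut payload = vec![0u8; len];
        stdin.lock().read_exact(&mut payload).unwrap();
        let payload = String::from_utf8_lossy(&payload);

        // When your program reaches RUNNING, chmod its socket file.
        if payload.contains("processname:your_program") {
            let _ = Command::new("chmod")
                .args(&["0770", "/tmp/your_program.sock"]) // assumed path
                .status();
        }

        // Acknowledge the event so supervisord sends the next one.
        stdout.write_all(b"RESULT 2\nOK").unwrap();
        stdout.flush().unwrap();
    }
}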

Related

Killing a process with swift programmatically

I am trying to find a way to identify and kill specific processes in a Mac application. Is there a built-in class with functions that can return the list of running processes? One option is to run terminal commands using Process() and execute the /usr/bin/killall command to kill processes, but I need to do it programmatically, since running terminal commands from an application is not good practice. For example, deleting a file can also be done by running a terminal command through Process(), while the better way is to use FileManager.default.removeItem().
If you're looking for an application (rather than a process), then see NSWorkspace.shared.runningApplications. You can call terminate() or forceTerminate() on those elements.
If you want a list of all BSD processes (what you would get from a call to ps, for example), that's done with sysctl (the code in the Q&A is in C; you'd have to wrap it for Swift, or rewrite it). I don't believe there's any Cocoa wrapper for that. To kill a process once you have its PID, send it a signal with kill(), which is what the kill Unix command uses. Typically you want to send SIGTERM, which requests a normal shutdown. To force-kill a process, send SIGKILL.
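For illustration, here is a minimal sketch of that kill() call. It is written in Rust against the libc crate so it compiles on its own; Swift exposes the identical function through Darwin, so the call translates one-to-one (obtaining the PID is assumed to have happened elsewhere):

// Sketch: terminate a process by PID via kill(2).
fn terminate(pid: libc::pid_t, force: bool) -> std::io::Result<()> {
    // SIGTERM requests a normal shutdown; SIGKILL cannot be caught or ignored.
    let sig = if force { libc::SIGKILL } else { libc::SIGTERM };
    // Safety: kill() is a plain syscall wrapper with no pointer arguments.
    if unsafe { libc::kill(pid, sig) } == 0 {
        Ok(())
    } else {
        Err(std::io::Error::last_os_error())
    }
}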

How to make Rust run gracefully in the background and daemonize?

This is what I want to achieve:
root> ./webserver start // Does not block the terminal after startup; runs in the background and the process is supervised
root>
My current implementation logic:
Logic running in the background
use std::env;
use std::process::Command;

fn main() {
    let args: Vec<String> = env::args().collect();
    if args.len() == 2 {
        if &args[1] == "start" {
            // Main process: start the child process...
            let child = Command::new(&args[0])
                .spawn()
                .expect("Child process failed to start.");
            println!("child pid: {}", child.id());
            // ...then the main process exits.
        }
    } else {
        // Main business logic:
        // run webserver
    }
}
This way, the Rust program runs in the background without blocking the terminal. However, anything it prints in the background is still displayed on the terminal, and the program exits when the terminal is closed.
Process daemon logic
My idea is to catch the system's termination signals and not process the exit requests:
SIGHUP 1 /* Hangup (POSIX). */
SIGINT 2 /* Interrupt (ANSI). */
SIGQUIT 3 /* Quit (POSIX). */
SIGTERM 15 /* Termination (ANSI). */
code:
use signal_hook::{iterator::Signals, SIGHUP, SIGINT, SIGQUIT, SIGTERM};
use std::{thread, time::Duration};

pub fn process_daemon() {
    let signals = match Signals::new(&[SIGHUP, SIGINT, SIGQUIT, SIGTERM]) {
        Ok(t) => t,
        Err(e) => panic!("Failed to register signal handlers: {}", e),
    };
    // Swallow the termination signals so the process keeps running.
    thread::spawn(move || {
        for sig in signals.forever() {
            println!("Received signal {:?}", sig);
        }
    });
    thread::sleep(Duration::from_secs(2));
}
Is there any more elegant approach? Looking forward to your reply.
TLDR: if you really want your process to act like a service (and never quit), probably do the work to set up a service manager. Otherwise, just let it be a normal process.
Daemonizing a Process
One thing to notice right off the bat is that most of the considerations about daemonizing have nothing to do with Rust as a language and are more about:
The underlying system your processes are targeted for
The exact behavior of your daemon processes once spawned
By looking at your question, it seems you have realized most of this. Unfortunately to properly answer your question we have to delve a bit into the intricacies of processes and how they are managed. It should be noted that existing 'service' managers are a great solution if you are OK with significant platform dependence in your launching infrastructure.
Linux: systemd
FreeBSD: rc
MacOS: launchd
Windows: sc
As you can see, no simple feat if you want to have a simple deployment that just works (provided that it is compiled for the relevant system). These are just the bare metal service managers. If you want to support virtual environments you will have to think about Kubernetes services, dockerizing, etc.
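For illustration, on Linux that "work" can be as small as a systemd unit file like this (the unit name and binary path are placeholders):

[Unit]
Description=webserver

[Service]
ExecStart=/usr/local/bin/webserver
Restart=on-failure

[Install]
WantedBy=multi-user.target

After systemctl enable --now webserver.service, systemd owns the backgrounding, restart-on-failure, and log capture that you would otherwise implement by hand.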
These tools exist because there are many considerations to managing a long-running process on a system:
Should my daemon behave like a service and respawn if killed (or if the system is rebooted)? The tools above will allow for this.
If a service, should my daemon have status states associated with it to help with maintenance? This can help with managing outages and building tooling to scale horizontally.
If the daemon shouldn't be a service (unlikely in your case given your binary's name) there are even more questions: should it be attached to the parent process? Should it be attached to the login process group?
My advice for your process, given how complex this can become: simply run it directly. Don't daemonize at all.
For testing (if you are in a Unix-like environment), you can run your process in the background:
./webserver start &
This will spawn the new process in the background, but attach it to your shell's process list. This can be nice for testing, because if that shell goes away the system will clean up these attached processes along with it.
The above will direct stderr and stdout file descriptors back to your terminal and print them. If you wish to avoid that, you can always redirect the output somewhere else.
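For example (the log path is just a placeholder):
./webserver start > /tmp/webserver.log 2>&1 &
Prefixing the command with nohup additionally keeps the process alive when the shell exits and sends SIGHUP.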
Disabling signals on a process like this doesn't seem like the right approach to me. Reserve those signals for gracefully exiting your process when you need to save state or send a termination message to a client. If you do the above, your daemon will only be killable by kill -9 <pid>, by rebooting, or by finding some non-standard signal you haven't overridden whose default behavior is to terminate.
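For example, a sketch of that graceful-exit pattern using the same signal_hook API as the code in the question (the cleanup step is a placeholder):

use signal_hook::{iterator::Signals, SIGINT, SIGTERM};

pub fn run_until_terminated() {
    let signals = Signals::new(&[SIGINT, SIGTERM])
        .expect("Failed to register signal handlers");
    // Block until a termination request arrives, then shut down cleanly
    // instead of pretending the signal never happened.
    if let Some(sig) = signals.forever().next() {
        println!("Received signal {:?}, shutting down", sig);
        // ... save state, notify clients, close sockets (placeholder) ...
    }
}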

Pass SIGTSTP signal to all processes in a job in LSF

The problem statement in short: Is there a way in LSF to pass a signal SIGCONT/SIGTSTP to all processes running within a job?
I have a Perl wrapper script that runs on LSF (Version 9.1.2) and starts a tool (Source not available) on the same LSF machine as the Perl script.
The tool starts 2 processes, one for license management and another for doing the actual work. It also supports an option where sending SIGTSTP/SIGCONT to both processes will release/reacquire the license (which is what I wish to achieve).
Running bkill -s SIGCONT <JOB_ID> only resumes the tool process and not the license process, which is a problem.
I tried to see if I can send the signals to the Perl script's own PGID, but the license process starts its own process group.
Any suggestions to move forward through Perl or LSF options are welcome.
Thanks,
Abhishek
I tried to see if I can send the signals to the Perl script's own PGID, but the license process starts its own process group.
This is likely your problem right here. LSF keeps track of "processes running within the job" by process group. If your job spawns a process that runs within its own process group (say by daemonizing itself) then it essentially is a runaway process out of LSF's control -- it becomes your job's responsibility to manage it.
For reference, see the section on "Detached processes" here.
As for options:
I think the cgroups tracking functionality helps in a lot of these cases. You can ask your admin whether LSF_PROCESS_TRACKING and LSF_LINUX_CGROUP_ACCT are set in lsf.conf; if they aren't, you can ask him to set them and see if that helps in your case (you need to make sure the host you're running on supports cgroups). In 9.1.2 this feature is turned on at installation time, so this option might not actually help you for various reasons (for example, if your hosts don't have cgroups enabled).
Manage the license process yourself. If you can find out the PID/PGID of the license process from within your Perl script, you can install custom signal handlers for SIGCONT/SIGTSTP in your script using sigtrap or the like, and forward them to the license process yourself when your script receives them through bkill. See here.
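The forwarding itself is just a kill() aimed at the license process's process group; a negative PID targets every member of the group. A minimal sketch of the mechanism, shown in Rust with the signal_hook and libc crates so it is self-contained (in the actual Perl wrapper you would do the same with sigtrap handlers and Perl's kill; how you discover the PGID is an assumption):

use signal_hook::iterator::Signals;

fn forward_to_group(license_pgid: libc::pid_t) {
    // Catch the job-control signals that bkill delivers to this wrapper...
    let signals = Signals::new(&[libc::SIGTSTP, libc::SIGCONT])
        .expect("Failed to register signal handlers");
    for sig in signals.forever() {
        // ...and re-send each one to the license process's whole group.
        unsafe { libc::kill(-license_pgid, sig) };
    }
}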

How to run a command just after shutdown or halt in Debian?

I'm working with an embedded computer that runs Debian. I have already managed to run a command just after it boots, playing the "bell" to signal that it is ready to work and, for example, trying to connect to a service.
The problem is that I need to play the bell (or run any command/program) when the system has halted, so it is safe to unplug the power. Is there any script that runs just after halt?
If you have a look in /etc/init.d, you'll see a script called halt. I'm pretty certain that when /sbin/halt is called in a runlevel other than 0 or 6, it calls /sbin/shutdown, which runs this script (unless called with an -n flag). So maybe you could add your own hook into that script? Obviously, it would be before the final halt was called, but nothing runs after that, so maybe that's ok.
Another option would be to use the fact that all running processes get sent a SIGTERM followed (a second or so later) by a SIGKILL. So you could write a simple daemon that just sat there until given a SIGTERM, at which point it went "ping" and died.
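That second option is small enough to sketch; here it is in Rust with the signal_hook crate (any language works, and the ASCII bell write is a placeholder for whatever actually plays your sound):

use signal_hook::{iterator::Signals, SIGTERM};
use std::io::{self, Write};

fn main() {
    let signals = Signals::new(&[SIGTERM])
        .expect("Failed to register SIGTERM handler");
    // Sit idle until shutdown delivers SIGTERM, then "ping" and die.
    if signals.forever().next().is_some() {
        io::stdout().write_all(b"\x07").ok(); // ASCII bell
        io::stdout().flush().ok();
    }
}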

Tailing 'Jobs' with Perl under mod_perl

I've got this project running under mod_perl that shows some information on a host. On the page is a text box with a dropdown that allows users to ping/nslookup/traceroute the host. The output is shown in the text box like a tail -f.
It works great under CGI. When the user requests a ping, it makes an AJAX call to the server, which essentially starts the ping with the output going to a temp file. Subsequent AJAX calls then 'tail' the file so that the output is updated until the ping finishes. Once the job finishes, the temp file is removed.
However, under mod_perl, no matter what I do I can't stop it from creating zombie processes. I've tried everything: double forking, using IPC::Run, etc. In the end, system calls are not encouraged under mod_perl.
So my question is, maybe there's a better way to do this? Is there a CPAN module available for creating command line jobs and tailing output that will work under mod_perl? I'm just looking for some suggestions.
I know I could probably create some sort of 'job' daemon that I signal with details and get updates from. It would run the commands and keep track of their status etc. But is there a simpler way?
Thanks in advance.
I had a short timeframe on this one and had no luck with CPAN, so I'll provide my solution here (I probably re-invented the wheel). I had to get something done right away.
I'll use ping in this example.
When a ping is requested by the user, the AJAX script creates a record in a database with the details of the ping (host, interval, count, etc.). The record has an auto-incrementing ID field. It then sends a SIGHUP to a job daemon, which is just a daemonized Perl script.
The job daemon receives the SIGHUP, looks for new jobs in the database, and processes each one. When it gets a new job, it forks, writes the PID and a 'running' status to the DB record, opens stdout/stderr files based on the unique job ID, and uses IPC::Run to direct STDOUT/STDERR to those files.
The job daemon keeps track of the forked jobs, killing them if they run too long, etc.
To tail the output, the AJAX script sends the job ID back to the browser. Then, on a JavaScript timer, the AJAX script is called again; it checks the status of the job via the database record and tails the files.
When the ping finishes, the job daemon sets the record status to 'done'. The AJAX script checks for this on its regular status checks.
One of the reasons I did it this way is that the AJAX script and the job daemon talk through an authenticated means (the DB).
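For reference, a sketch of the kind of jobs table this design implies; every column name here is an assumption, since the answer doesn't show the real schema:

CREATE TABLE jobs (
    id      INTEGER PRIMARY KEY AUTO_INCREMENT, -- also names the stdout/stderr files
    command VARCHAR(32),   -- ping / nslookup / traceroute
    args    VARCHAR(255),  -- host, interval, count, ...
    pid     INTEGER,       -- written by the job daemon after forking
    status  VARCHAR(16)    -- 'new' -> 'running' -> 'done'
);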