I'm using sys.process inside the REPL as a kind of shell; there are many uses for Scala in a shell, and of course I invoke some external programs. But I discovered that I could not leave the REPL while a background process was running. And if I kill sbt, either with Ctrl-C or by sending it a signal, the background process is killed as well. I'd like to leave sbt and keep all invoked processes running. How can I do so?
The problem isn't with SBT or Scala but with the child process you created. The child needs to "daemonize" to become independent of the parent process. How to do that depends on what kind of process you are invoking and which OS you are running on. On Linux, using the following script as a wrapper around whatever process you are calling works:
#!/bin/bash
# pass all script arguments through; send stdout and stderr to /dev/null
nohup "$@" >/dev/null 2>&1 &
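Called from sys.process, the wrapper (saved as, say, daemonize.sh; the name and the wrapped command below are made up for illustration) lets the child outlive the REPL:

import scala.sys.process._

// launch through the wrapper so the child is detached from the JVM;
// quitting the REPL (or killing sbt) no longer takes the child with it
Seq("/path/to/daemonize.sh", "myserver", "--port", "8080").!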
Related
I'm running two Perl scripts in parallel in Jenkins
some shell commands
perl script 1 &
perl script 2 &
wait
some more shell commands
If one of the perl scripts fails in the middle of its execution, the job waits until the other script finishes (since both are executed in parallel in the background).
I want the job to stop as soon as one of the scripts fails, not waste time completing the execution of the other script.
Please help.
You can set up a signal handler for SIGCHLD, the signal that is always delivered to the parent process when a child exits. I'm not aware of a mechanism to see which child process exited, but you can save the subprocess identifiers and just kill both of them when you receive SIGCHLD:
some shell commands
perl script 1 &
pid1=$!                        # $! holds the PID of the most recent background job
perl script 2 &
pid2=$!
trap "kill $pid1 $pid2" CHLD   # when either child exits, kill both (pids expand now)
wait
some more shell commands
The script above has the downside that it kills the other script regardless of the exit status of the subprocess that exited. If you want, you could add a check for the exit status in the trap: the subprocess could, for example, create a temp file if it succeeds, and the trap could check whether that file exists.
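A minimal sketch of that idea (the script and marker file names are made up; kill -0 is used to see which child is gone):

rm -f /tmp/ok.1 /tmp/ok.2

# each script drops a marker file only on success
perl script1.pl && touch /tmp/ok.1 &
pid1=$!
perl script2.pl && touch /tmp/ok.2 &
pid2=$!

on_child_exit() {
    # a child that is gone but left no marker is assumed to have failed,
    # so take its sibling down with it
    if ! kill -0 "$pid1" 2>/dev/null && [ ! -e /tmp/ok.1 ]; then
        kill "$pid2" 2>/dev/null
    fi
    if ! kill -0 "$pid2" 2>/dev/null && [ ! -e /tmp/ok.2 ]; then
        kill "$pid1" 2>/dev/null
    fi
}
trap on_child_exit CHLD

wait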
Typically with Jenkins you would have the parallel steps running as separate jobs (or projects as they are sometimes known) rather than steps in a job. This would then allow the steps to run in parallel across different slave machines and it would keep the output for the jobs in a separate place.
You would then have a controlling job running the other parts.
I like the Multijob plugin for this sort of thing.
There are alternatives that may suit you better, such as the Build Flow Plugin, which uses a DSL to describe the jobs you want to run.
I have been running a program using nohup, but I forgot to add & after the command, so the terminal is stuck on the process, which has been running for hours. The script I am running in Python generates 5 processes each time.
Is there any way I can make the entire script continue in the background (getting the same effect as an &) without killing and rerunning the process?
Hit Ctrl-Z to suspend the process.
Then run bg to tell it to continue as a background process.
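The whole interaction looks roughly like this (the script name is a placeholder):

$ python generate.py        # oops, forgot the &
^Z
[1]+  Stopped                 python generate.py
$ bg
[1]+ python generate.py &

Because the command was originally started under nohup, it already ignores SIGHUP, so once it is in the background you can also close the terminal without killing it.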
Some Unix commands, such as tail -f or starting a Python web server (e.g. CherryPy), produce endless output; the only way to stop them is Ctrl-C. I'm working on a Scala application which executes commands like that. My implementation is:
import scala.sys.process._
def exe(command: String): Unit = {
  command.!  // blocks until the external process terminates
}
However, as the command produces an endless output stream, the application hangs until I either terminate it or kill the process started by the command. I also tried adding & at the end of the command to run it in the background, but my application still hangs.
Hence, I'm looking for another way to execute a command without hanging my application.
You can use a custom ProcessLogger to deal with output however you wish as soon as it is available.
val proc =
Process(command).run(ProcessLogger(line => (), err => println("Uh-oh: "+err)))
You may kill a process with the destroy method.
proc.destroy
If you are waiting to get a certain output before killing it, you can create a custom ProcessLogger that can call destroy on its own process once it has what it needs.
You may prefer to use lines (in 2.10; the name is changing to lineStream in 2.11) instead of run to gather standard output, since that will give you a stream that blocks when no new output is available. Then you wrap the whole thing in a Future, read lines from the stream until you have what you need, and then kill the process; this simplifies the blocking and waiting.
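Here is a minimal sketch of that shape. It uses run with a ProcessLogger plus a Promise rather than lineStream, because run also hands back the Process you need to destroy; the command and the marker line are placeholders:

import scala.sys.process._
import scala.concurrent.{Await, Promise}
import scala.concurrent.duration._

val marker = "ERROR"   // placeholder: the output we are waiting for
val found  = Promise[String]()

// run returns immediately with a Process handle; the logger sees
// every line of output as soon as it is available
val proc = Process(Seq("tail", "-f", "/var/log/syslog")).run(
  ProcessLogger { line =>
    if (line.contains(marker)) found.trySuccess(line)
  }
)

// block until the marker shows up (or give up after 30 seconds),
// then stop the endless process
Await.result(found.future, 30.seconds)
proc.destroy()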
Seq("sh", "-c", "tail -f /var/log/syslog > /dev/null &").!
works for me. I think Randall's answer fails because Scala is just executing the commands and can't interpret shell operators like "&". If the command passed to Scala is "sh" and the arguments are a complete shell command, we work around this issue. There also seems to be an issue with how Scala parses and separates individual arguments; using a Seq instead of a single String works better for that.
The above is equivalent to the unix command:
sh -c 'tail -f /var/log/syslog > /dev/null &'
If you close the descriptor(s) from which you're reading the process' output, it will get a SIGPIPE and (usually) terminate.
If you just don't want the output, redirect to /dev/null:
command arg arg arg >/dev/null 2>&1
Addendum: this pertains only to Unix-like systems, not Windows.
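In the context of the original question, the same redirection can be expressed with sys.process's own operators; a minimal sketch (the command is a placeholder):

import scala.sys.process._
import java.io.File

// send the endless stdout to /dev/null instead of reading it;
// run() returns immediately, so the application no longer hangs
val proc = (Process("tail -f /var/log/syslog") #> new File("/dev/null")).run()

// later, when the process is no longer needed:
proc.destroy()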
I'm using the Scala REPL to interactively test some hash functions I'm building. I'm constantly switching between the product code (Eclipse), the browser and the Scala interpreter, copying and pasting values and results. In the mix I often end up pressing CTRL-C in the interpreter, exiting the session and losing all my functions.
Is there any way to let the Scala REPL either ignore CTRL-C or, even better, perform "paste" with it? I'm working on Linux.
I only know how to prevent the REPL from exiting. Remapping CTRL+C to perform a copy command could be done in the same way (if there is some command that can change the keymap without restarting the terminal; I don't know whether there is one). Anyway, to block ^C, wrap your REPL invocation in a .sh script like this:
#!/bin/bash
# switch off sensitivity to ^C
trap '' 2
# here goes the REPL invocation
scala
# restore sensitivity to ^C
trap 2
The trap command defines and activates handlers to be run when the shell receives signals or other conditions.
2 is the value of SIGINT (the signal that is triggered when you press CTRL+C).
The REPL already intercepts Ctrl-C, but apparently it doesn't work on Linux. It does work on OS X. If someone who uses Linux opens a ticket with sufficient detail to indicate why not, I can fix it.
As an alternative to the native Scala REPL, you can use Ammonite, which does handle Ctrl+C:
# while(true) ()
... hangs ...
^Ctrl-C
Interrupted!
#
The traditional Scala REPL doesn't handle runaway code, and gives you no option but to kill the process, losing all your work.
Ammonite-REPL lets you interrupt the thread, stop the runaway-command and keep going.
I've searched around but haven't quite found what I'm looking for. In a nutshell, I have created a bash script that runs in an infinite while loop, sleeping and checking if a process is running. The only problem is that even if the process is running, it says it is not and opens another instance.
I know I should check by process name and not process id, since another process could jump in and take the id. However, all perl programs are named Perl5.10.0 on my system, and I intend to have multiple instances of the same perl program open.
The following "if" always returns false. What am I doing wrong here?
while true; do
if [ ps -p $pid ]; then
echo "Program running fine"
sleep 10
else
echo "Program being restarted\n"
perl program_name.pl &
sleep 5
read -r pid < "${filename}_pid.txt"
fi
done
Get rid of the square brackets. It should be:
if ps -p $pid; then
The square brackets are syntactic sugar for the test command. This is an entirely different beast and does not invoke ps at all:
if test ps -p $pid; then
In fact that yields "-bash: [: -p: binary operator expected" when I run it.
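Applied to the loop from the question, the fixed condition looks like this (ps output is redirected to /dev/null so that only its exit status matters):

while true; do
    if ps -p "$pid" > /dev/null; then
        echo "Program running fine"
        sleep 10
    else
        echo "Program being restarted"
        perl program_name.pl &
        sleep 5
        read -r pid < "${filename}_pid.txt"
    fi
done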
Aside from the syntax error already pointed out, this is a lousy way to ensure that a process stays alive.
First, you should find out why your program is dying in the first place; this script doesn't fix a bug, it tries to hide one.
Secondly, if it is so important that a program remain running, why do you expect your (at least once already) buggy shell script to do the job? Use a system facility that is specifically designed to restart server processes. If you say what platform you are using and the nature of your server process, I can offer more concrete advice.
Added in response to a comment:
Sure, there are engineering exigencies, but as the OP noted in the question, there is still a bug in this attempt at a solution:
"I know I should check by process name and not process id, since another process could jump in and take the id."
So now you are left with a PID-tracking script, not a process "nanny". Although the chances are small, the script as it now stands has a ten-second window in which:
the "monitored" process fails
I start up my week-long emacs process, which grabs the same PID
the nanny script continues on, blissfully unaware that its dependent has failed
The script isn't merely buggy, it is invalid because it presumes that PIDs are stable identifiers of a process. There are ways that this could be better handled even at the shell script level. The simplest is to never detach the execution of perl from the script since the script is doing nothing other than watching the subprocess. For example:
while true ; do
    if perl program_name.pl ; then
        echo "program_name terminated normally, restarting"
    else
        echo "oops program_name died again, restarting"
    fi
done
This is not only shorter and simpler, but it actually blocks on the condition that you are really interested in: the run state of the perl program. The original script repeatedly checks a bad proxy for that run state (the PID) and so can get it wrong. And, since the whole purpose of this nanny script is to handle faults, it would be bad if it were itself faulty by design.
I totally agree that fiddling with the PID is nearly always a bad idea. The while true ; do ... done script is quite good; however, for production systems there are a couple of process supervisors which do exactly this and much more, e.g.:
enable you to send signals to the supervised process (without knowing its PID)
check how long a service has been up or down
capture its output and write it to a log file
Examples of such process supervisors are daemontools and runit. For a more elaborate discussion and examples, see Init scripts considered harmful. Don't be disturbed by the title: traditional init scripts suffer from exactly the same problem as you have (they start a daemon, keep its PID in a file and then leave the daemon alone).
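To give a flavor of how little setup such a supervisor needs: a runit service is just a directory containing an executable run script that execs the daemon in the foreground. A minimal sketch (the paths and names are illustrative):

#!/bin/sh
# /etc/sv/program_name/run -- minimal runit service script.
# runit restarts the process whenever it exits, and sv can signal it
# without anyone having to track its PID.
exec perl /path/to/program_name.pl 2>&1

Once the service directory is linked into runit's scan directory, sv up program_name and sv down program_name start and stop the service, and sv status program_name reports how long it has been up.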
I agree that you should find out why your program is dying in the first place. However, an ever-running shell script is probably not a good idea. What if this supervising shell script dies? (And yes, get rid of the square brackets around ps -p $pid. You want the exit status of the ps -p $pid command. The square brackets are a replacement for the test command.)
There are two possible solutions:
Use cron to run your "supervising" shell script to check whether the process you're supervising is still running, and if it isn't, restart it. The supervised process can write its PID into a file; your supervising program can then cat this file and get the PID to check (see the sketch after this list).
If the program you're supervising is providing a service upon a particular port, make it an inetd service. This way, it isn't running at all until there is a request upon that port. If you set it up correctly, it will terminate when not needed and restart when needed. Takes less resources and the OS will handle everything for you.
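A minimal sketch of the first option (all file names are made up; here the restart step records the new PID itself):

#!/bin/sh
# check_program.sh -- cron-driven supervisor sketch.
# Install it with a crontab entry such as:
#   * * * * * /usr/local/bin/check_program.sh
pid=$(cat /var/run/program_name.pid 2>/dev/null)
if [ -z "$pid" ] || ! kill -0 "$pid" 2>/dev/null; then
    perl /path/to/program_name.pl &
    echo $! > /var/run/program_name.pid
fi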
That's what kill -0 $pid is for. It returns success if a process with pid $pid exists.
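Dropped into the loop from the question, the test becomes:

if kill -0 "$pid" 2>/dev/null; then
    echo "Program running fine"
else
    echo "Program being restarted"
fi

Redirecting kill's stderr keeps the loop quiet when $pid is empty or stale.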