I would like to spawn a child process in XP; for example:
system "start", "cmd.exe", "perl", "child.pl", "arg1";
When I run this, it tells me that "start doesn't exist." (Start works in Win 7).
When I run:
system "cmd.exe", "perl", "child.pl", "arg1";
The child process runs in the same console as the parent, and upon completion the console session ends--so I believe the child simply takes over and the parent dies.
Normally, when I run these commands in Win 7, a new console appears and everything works fine.
When I type:
"start"
into the XP console, a new console appears. Why can it find start then, but not when I call it from within a Perl script?
I've tried Win32::Process and Win32::Job to no avail: it still just kills the parent and starts the child, with the whole tree dying upon completion.
Banging my head against this. Does anyone have a sure-fire way to create an independent child process in XP (and not with fork)?
start is a built-in of cmd.exe, not an external program, so Perl's list form of system can't find it; try system('cmd.exe /c "start perl child.pl arg1"').
I added the directory of a program, say "foo.exe", to my system PATH, so typing foo in cmd/PowerShell starts the program. However, when I type exit to get out of cmd/PowerShell, foo.exe also closes with it. Why does this happen, and how do I prevent it?
This doesn't happen for all programs, only certain ones, which makes me guess that those programs should be added to the PATH, or started, in a different way. However, searching the internet for a long time didn't turn up anything, so a little help would be appreciated.
If foo.exe is a console application (one compiled for the Windows console subsystem), it will run synchronously in cmd.exe / PowerShell: that is, control won't be returned to the calling shell until the application exits. This means that you won't even get a chance to type exit until foo.exe has already exited.
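The synchronous behavior described above is easy to see in any shell; a minimal sketch, with sleep standing in for foo.exe:

```shell
# A console program runs synchronously: the shell does not get control back
# until the program exits. 'sleep 1' stands in for foo.exe here.
start=$(date +%s)
sleep 1                        # the shell blocks here, as it would for foo.exe
end=$(date +%s)
elapsed=$((end - start))
echo "shell was blocked for ${elapsed}s"
```

Only after the stand-in exits does the shell print its message and accept further input, such as exit.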
However, it is possible to run a console application asynchronously, namely by using a job to run it, via Start-Job or Start-ThreadJob; foo.exe will then run in the background.
In that event, exiting the calling shell with exit will terminate the foo.exe process.
To prevent that, you can use the Start-Process cmdlet instead; on Windows, you can use it to launch foo.exe directly, which will open in a new console window by default; on Unix-like platforms, you must launch it via the nohup utility (which sends the program's output to a file named nohup.out in the current directory).
By contrast, if foo.exe is a GUI-subsystem application, it launches asynchronously and independently of the calling shell: that is, control returns to the calling shell right after successful creation of the new process, and exiting the shell has no effect on that new process.
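On a Unix-like platform, the nohup route mentioned above can be sketched as follows (sleep stands in for foo.exe, and the PID bookkeeping is only for the demonstration):

```shell
# Launch a stand-in for foo.exe in the background, immune to the terminal's
# hangup signal, with its output discarded.
nohup sleep 30 >/dev/null 2>&1 &
bgpid=$!
comm=$(ps -p "$bgpid" -o comm=)   # prove the detached process is alive
echo "running: $comm"
kill "$bgpid"                     # clean up the demo process
```

Exiting the launching shell at this point would leave the nohup'd process running.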
I have a Perl script which takes a bunch of commands, say command1 and command2. The commands are expected to take a long time to complete, about 3-4 hours.
In the (Perl) script, for each command I create (fork) a child process, and in that process I execute ssh s1 "command1". Then I wait for the child to finish.
my $exec_command = "ssh $machine \"$command\"";
The $command is a long, computationally intensive C++ executable. Nothing too fancy there. In the child:
my $out = `$exec_command`;
Now, sometimes it happens that the child does not quit by itself. I have to repeatedly press Enter, and then the child processes quit.
I have googled, but to no avail. I don't even know where the problem is: in ssh or in my child-parent relationship.
Thanks.
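The question doesn't show enough to diagnose this for certain, but one common cause of backticked ssh children hanging until Enter is pressed is that ssh inherits, and reads from, the parent's stdin; launching it as ssh -n s1 "command1" (or with stdin redirected from /dev/null) avoids that. A minimal Unix illustration of the stdin-redirection idea, with cat standing in for the ssh command:

```shell
# 'cat' stands in for the remote command: reading stdin from the terminal
# would block until input arrives, but /dev/null yields EOF immediately,
# so the child finishes on its own.
cat </dev/null &
wait $!
status=$?
echo "child exited on its own with status $status"
```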
I'm using sys.process inside the REPL as a kind of shell; there are many uses for Scala in a shell, and I invoke some external programs, of course. But I discovered that I could not leave the REPL with a background process running: if I kill sbt, either with Ctrl-C or by sending a signal, the background process is killed as well. I'd like to leave sbt and keep all invoked processes running. How can I do so?
The problem isn't with SBT or Scala but with the child process you created. The child needs to "daemonize" to become independent of the parent process. How to do that depends on what kind of process you are invoking and which OS you are running on. On Linux, using the following script as a wrapper around whatever process you are calling works:
#!/bin/bash
# "$@" is the wrapped command; discard its output and detach it.
nohup "$@" >/dev/null 2>&1 &
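A quick way to convince yourself the wrapper works (a sketch: the echoed PID is an addition just for this demonstration, and sleep stands in for the real program):

```shell
# Hypothetical usage of the wrapper above, saved as /tmp/detach.sh, with the
# wrapper also echoing the PID so we can check on the detached process.
cat > /tmp/detach.sh <<'EOF'
#!/bin/bash
nohup "$@" >/dev/null 2>&1 &
echo $!
EOF
chmod +x /tmp/detach.sh
pid=$(bash -c '/tmp/detach.sh sleep 30')  # the launching shell exits here
comm=$(ps -p "$pid" -o comm=)             # ...but the child is still running
echo "survivor: $comm"
kill "$pid"                               # clean up the demo process
```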
I have a Perl script that runs automatic tests on Windows.
It gets a directory as a parameter and runs each of the exe files in it 3 times (each time with different arguments).
The only problem I have is that sometimes an exe file may hang for a long time or even crash.
I'm not sure how to handle this with Perl; must I use fork for this?
I tried several examples like this one: Perl fork and kill - kill(0, $pid) always returns 1, and can't kill the child, but could not understand where I should embed the exe I need to run. Also, assuming I need to run the exe under the child: if the child process is killed, does that also kill the exe?
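On Unix-like systems, the enforce-a-time-limit part of this can be sketched without fork at all, via the coreutils timeout utility (a sketch only, with sleep 5 standing in for a hanging exe; on Windows, the usual Perl idiom is alarm plus kill on the child's PID):

```shell
# Run a command under a 1-second budget; timeout sends SIGTERM when the
# budget is exceeded and, by convention, exits with status 124.
status=0
timeout 1 sleep 5 || status=$?
echo "exit status: $status"    # 124 signals a timed-out run
```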
If you know of any "Perl fork for Dummies" tutorial...
I have an executable which can run perl scripts using the following command at the prompt:
blah.exe Launch.pl
The way we have our tests set up is that we call Launch.pl from Parent.pl like this: "blah.exe Launch.pl" - a script within a script. However, when executing the command with backticks or system, the parent .pl script waits until I get the handle back by closing and exiting the application (blah.exe). Only at that point does the code in parent.pl continue to execute.
How do I return the handle to the parent .pl script after I am done running the code contained in Launch.pl?
So, parent.pl calls "blah.exe Launch.pl"; but after running the code inside Launch.pl, the application (blah.exe) just sits there waiting to be exited so that the code in parent.pl can continue running. I need to keep the application (blah.exe) open until I am done running a bunch of scripts one after another.
Run blah.exe in the background. When you are done with the Parent.pl, terminate the application with kill.
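A Unix shell sketch of that background-then-kill pattern (sleep 30 stands in for blah.exe, which isn't available here):

```shell
sleep 30 &                  # stand-in for: launch blah.exe in the background
apppid=$!
# ... Parent.pl would run its scripts against the application here ...
kill "$apppid"              # done: terminate the application
wait "$apppid" 2>/dev/null || true
echo "application terminated"
```

In Perl, the same shape is a fork (or Win32::Process::Create on Windows) to start the application, saving its PID, and a kill on that PID once the scripts are finished.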