PowerShell: close stdout handle

The task is to close the stdout handle a while before the process exits. With WinAPI functions, it'd be this:
CloseHandle(GetStdHandle(STD_OUTPUT_HANDLE))
I know I can do DllImport with Add-Type but I believe there must be a nicer way.
What's the simplest way to accomplish the same with PowerShell?
The wider task is to test a piece of a Python library that starts and interacts flexibly with local (with the help of the subprocess and _winapi modules) or remote (via WinRM) processes on Windows. One of the tests is to run a program that closes its ends of the stdout and stderr pipes a while before it exits. (There was a similar issue on Linux.) Therefore, the script must close stdout and stderr so that the calling code is signalled by the OS that they're closed. The only way I found is to call CloseHandle on the stdout and stderr handles. Calling .Close or .Dispose on the stream objects doesn't help: they seem to be closed only internally to the called process. The script should be in some "native" language that needs no additional compilers or interpreters. Therefore, it's either cmd, VBScript or PowerShell. Only the last one is able to call WinAPI functions. (At the time of this update I have already written scripts in both Python, which works perfectly but needs an interpreter to be installed, and PowerShell, which works without any additional installations but is a bit cumbersome and very slow.)
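For reference, here is roughly what the Add-Type/DllImport route mentioned above looks like. This is a sketch (the Win32.Native type name and the Out-Null suppression are arbitrary choices), not necessarily the nicer way being asked for:

# Declare the two kernel32 functions, then close the stdout/stderr handles at the OS level
# so the parent's read end of each pipe sees EOF.
Add-Type -Namespace Win32 -Name Native -MemberDefinition @'
[DllImport("kernel32.dll", SetLastError = true)]
public static extern IntPtr GetStdHandle(int nStdHandle);
[DllImport("kernel32.dll", SetLastError = true)]
public static extern bool CloseHandle(IntPtr hObject);
'@

$STD_OUTPUT_HANDLE = -11
$STD_ERROR_HANDLE  = -12

[Win32.Native]::CloseHandle([Win32.Native]::GetStdHandle($STD_OUTPUT_HANDLE)) | Out-Null
[Win32.Native]::CloseHandle([Win32.Native]::GetStdHandle($STD_ERROR_HANDLE))  | Out-Null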


How can I have one perl script call another perl script and get the return results?
I have Perl script B, which does a lot of database work, prints out nothing, and simply exits with a 0 or a 3.
So I would like Perl script A to call script B and get its results. But when I call:
my $result = system("perl importOrig.pl filename=$filename");
or
my $result = system("/usr/bin/perl /var/www/cgi-bin/importOrig.pl filename=$filename");
I get back a -1, and Script B is never called.
I have debugged Script B, and when called manually there are no glitches.
So obviously I am making an error in my call above, and not sure what it is.
There are many things to consider.
Zeroth, there are the perlipc docs for interprocess communication. What's the value in the error variable $!?
First, use $^X, which is the path to the perl you are executing. Since subprocesses inherit your environment, you want to use the same perl so it doesn't confuse itself with PERL5LIB and so on.
system("$^X /var/www/cgi-bin/importOrig.pl filename=$filename")
Second, CGI programs tend to expect particular environment variables to be set, such as REQUEST_METHOD. Calling them as normal command-line programs often leaves out those things. Try running the program from the command line to see how it complains. Check that it gets the environment it wants. You might also check the permissions of the program to see if you (or whatever user runs the calling program) are allowed to read it (or its directory, etc). You say there are no glitches, so maybe that's not your particular problem. But, do the two environments match in all the ways they should?
Third, consider making the second program a modulino. You could run it normally as a script from the command line, but you could also load it as a Perl library and use its features directly. This obviates all the IPC stuff. You could even fork so that stuff runs concurrently.
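To illustrate the first point, here is a sketch of the $^X call with the error checking implied by that -1 return value. The list form of system is a slight variation on the one-liner above (it skips the shell), and the $filename value is of course whatever the caller passes in:

use strict;
use warnings;

my $filename = 'orders.csv';   # hypothetical value for illustration

# Run script B with the same perl binary that is running script A.
my $rc = system($^X, '/var/www/cgi-bin/importOrig.pl', "filename=$filename");

if ($rc == -1) {
    # system() could not even start the child; $! says why.
    die "failed to execute importOrig.pl: $!";
}

# Otherwise the child's exit status (0 or 3 here) is in the high byte of $?.
my $exit_code = $? >> 8;
print "importOrig.pl exited with $exit_code\n";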

Redirect stdin of powershell script to subprocess

In my scenario, I have a powershell script that receives standard input. What I would like to do is start a subprocess using an arbitrary command line and redirect the standard input from the powershell script to the subprocess. In other words, I simply want to pass down the standard input to the subprocess.
I have a few ideas on how to do this with loops, but is there a more elegant way?
Not a full answer, but you might have some trouble with this in certain cases, because PowerShell.exe waits for all of the input before it even begins executing your script. I ran into an issue where a process calling PowerShell didn't close its stream, so PowerShell waited forever.
The solution was to use an undocumented option (powershell.exe -InputFormat None) and then read the input manually byte by byte; all methods that read more than 1 byte at a time are susceptible to blocking forever, at least as far as I could tell.
You could probably work around that with asynchronous methods.
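A loop-based sketch of the byte-by-byte forwarding described above; sort.exe is only a placeholder for the arbitrary command line, and the script would be invoked with powershell.exe -InputFormat None -File forward.ps1:

# Start the child with a redirected stdin, then copy this script's stdin to it one byte at a time.
$psi = New-Object System.Diagnostics.ProcessStartInfo
$psi.FileName = 'sort.exe'            # placeholder for the real command
$psi.UseShellExecute = $false
$psi.RedirectStandardInput = $true
$proc = [System.Diagnostics.Process]::Start($psi)

$stdin   = [Console]::OpenStandardInput()
$childIn = $proc.StandardInput.BaseStream

while (($b = $stdin.ReadByte()) -ne -1) {
    $childIn.WriteByte([byte]$b)      # 1-byte reads avoid the blocking problem described above
}

$childIn.Close()                      # signal EOF to the child
$proc.WaitForExit()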

how to read texts on the terminal inside perl script

Is there any way to capture the text on the terminal screen inside a Perl script? I know there are some functions like system, exec and backticks, but the problem is that they execute commands FROM the script. For example: in the terminal I write cd / (or ls), and after that I run my Perl script, which will read what was written on the terminal screen (in this case, the script will capture cd / or ls, whichever was given to the terminal). I came up with one solution: passing the commands you wrote on the terminal as command-line arguments to the script. But is there any other way?
Like this maybe:
history | perl -ne 'print $_'
As I understand it, in a situation where you've typed some stuff into a terminal like this:
[tai@littlerobot ~] echo "Hello"
Hello
[tai@littlerobot ~] perl myscript.pl
You want myscript.pl to be able to access the echo "Hello" part, and possibly also the Hello that was that command's output.
Perl does not provide such a feature. No programming language does or can provide such a feature because the process in which your script/program runs has no intrinsic knowledge about what happened in the same terminal before it was run. The only way it could access this text would be if it could ask the currently running terminal, which will have some record of this information (i.e. the scrollback buffer), even if it cannot distinguish between which characters in the text were typed by you, and which are output. However, I know of no terminal that exposes that information via any kind of public API.
So if you want myscript.pl to be able to access that echo "Hello", you'll need to pass it to your script. Piping history to your script (as shown by Mark Setchell in his answer) is one technique. history is a shell built-in, so it has as much knowledge as your shell has (which is not quite the same knowledge as your terminal has). In particular it can give you a list of what commands have been typed in this shell session. However, it cannot tell you about the output generated by those commands. And it cannot tell you about other shell sessions, so doing this in Perl is fairly useless:
my @history = `tcsh -c history`;
The last thing you could try (though it would be incredibly complicated to do) would be to ask the X server (or Windows if running on that operating system) for a screen shot and then attempt to locate which rectangle the current terminal is running in and perform OCR on it. This would be fraught with problems though, such as dealing with overlapping windows.
So, in summary, you cannot do this. It's nothing to do with Perl. You cannot do this in any programming language.

mod_perl and inheriting STDIN in child process

I have this old Perl script that is supposed to act as a proxy of sorts between HTTP-based clients and a non-HTTP Java server: the client POSTs some data to this Perl script, and the script in turn calls the Java server, gets the response and returns it to the client.
The Perl part calls the server like this:
$servervars = "-DREMOTE_HOST=$ENV{'REMOTE_HOST'}";
#(a few other server variables passed this way)
system "java $servervars -cp /var/www javaserver";
and then the Java server would go:
InputStream serverData = System.in;
serverData.read(); //and read, and read it on
//....
//print response:
System.out.print("Content-type: application/octet-stream\n\n");
System.out.write(...);
Problem is, this works just fine when the Perl script is invoked via CGI, but doesn't work at all if the Perl script is handled by mod_perl (mod_perl2 actually). Apparently the Java part doesn't get the STDIN from Perl (serverData.available() returns 0) and Perl doesn't get the STDOUT back. The latter can be remedied by doing print `java...` (i.e. backticks) instead of system "java...", but I don't know what to do about STDIN.
The Perl script itself is able to read the POSTed data in STDIN. I've also tried to spawn a test Perl script instead of the Java application, and that doesn't get the parent script's STDIN either.
Judging by the description, spawn_proc_prog from Apache2::SubProcess could do the trick (i.e. pass the POST data as STDIN to the child process and get back the child process' output), but it doesn't seem to work if I run anything but another Perl script.
Is there any way to make the child process inherit the parent script's STDIN? I can read the stream in the Perl script and pass its contents as a command-line parameter, but I presume that would be subject to command-line length limitations, and sometimes there can be a lot of data (like a picture), so I would really like to figure out how to inherit the stream.
Wow, I hope this is a low-volume load from the client. In mod_perl your STDIN is tied to the socket handle from the client, and the same goes for STDOUT. So to point your output at the java process, you need to set *STDOUT to the Java server's socket handle, or in your case, since you are opening a process, do a select STDOUT and possibly also make it unbuffered by setting $|. Also, when you want to stream data back to your client, you need to either write directly to the client's socket handle or reset STDOUT back to its original value.
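One way to sidestep the inheritance problem entirely (a sketch of an alternative, not what the answer above literally describes) is to shuttle the data explicitly with IPC::Open2: read the POST body in the handler, write it to the Java process's stdin, and copy its stdout back to the client:

use strict;
use warnings;
use IPC::Open2;

my $post_body = do { local $/; <STDIN> };   # the handler itself can read the POSTed data

my $servervars = "-DREMOTE_HOST=$ENV{REMOTE_HOST}";
my $pid = open2(my $child_out, my $child_in,
                "java $servervars -cp /var/www javaserver");

print {$child_in} $post_body;   # feed the request body to the Java server
close $child_in;                # EOF, so System.in stops reading

while (my $line = <$child_out>) {   # stream the response (headers + body) back
    print $line;
}
close $child_out;
waitpid $pid, 0;

For very large payloads you would want to interleave the writing and reading (or use select) to avoid a pipe-buffer deadlock, but the idea is the same.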

In Perl, how can I do a non-blocking system call?

In Perl, without using the Thread library, what is the simplest way to spawn off a system call so that it is non-blocking? Can you do this while avoiding fork() as well?
EDIT
Clarification. I want to avoid an explicit and messy call to fork.
Do you mean like this?
system('my_command_which_will_not_block &');
As Chris Kloberdanz points out, this will call fork() implicitly -- there's really no other way for perl to do it; especially if you want the perl interpreter to continue running while the command executes.
The & character in the command is a shell meta-character -- perl sees this and passes the argument to system() to the shell (usually bash) for execution, rather than running it directly with an execv() call. & tells bash to fork again, run the command in the background, and exit immediately, returning control to perl while the command continues to execute.
The post above says "there's no other way for perl to do it", which is not true.
Since you mentioned file deletion, take a look at IO::AIO. This performs the system calls in another thread (a POSIX thread, not a Perl pseudothread); you schedule the request with aio_rmtree, and when that's done, the module will call a function in your program. In the meantime, your program can do anything else it wants to.
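A minimal sketch of the IO::AIO approach; the directory path is made up, and a real program would integrate poll_fileno()/poll_cb() with its event loop rather than calling the blocking flush at the end:

use strict;
use warnings;
use IO::AIO;

# Schedule a recursive delete; it runs in a POSIX thread, not in the Perl interpreter.
aio_rmtree "/tmp/scratch-data", sub {
    my ($status) = @_;    # 0 on success, -1 on error (with $! set)
    print "rmtree finished\n";
};

# ... do anything else here while the deletion proceeds ...

IO::AIO::flush;   # wait for and run all outstanding completion callbacks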
Doing things in another POSIX thread is actually a generally useful technique. (A special hacked version of) Coro uses it to preempt coroutines (time slicing), and EV::Loop::Async uses it to deliver event notifications even when Perl is doing something other than waiting for events.