I have an old Perl script that acts as a proxy of sorts between HTTP-based clients and a non-HTTP Java server: the client POSTs data to the Perl script, which in turn calls the Java server, gets the response, and returns it to the client.
The Perl part calls the server like this:
$servervars = "-DREMOTE_HOST=$ENV{'REMOTE_HOST'}";
#(a few other server variables passed this way)
system "java $servervars -cp /var/www javaserver";
and then the Java server would go:
InputStream serverData = System.in;
serverData.read(); //and read, and read it on
//....
//print response:
System.out.print("Content-type: application/octet-stream\n\n");
System.out.write(...);
Problem is, this works just fine when the Perl script is invoked via CGI, but not at all when the script is handled by mod_perl (mod_perl2, actually). Apparently the Java part doesn't get the STDIN from Perl (serverData.available() returns 0), and Perl doesn't get the STDOUT back. The latter can be remedied by using backticks (print `java ...`) instead of system "java ...", but I don't know what to do about STDIN.
The Perl script itself is able to read the POSTed data in STDIN. I've also tried to spawn a test Perl script instead of the Java application, and that doesn't get the parent script's STDIN either.
Judging by the description, spawn_proc_prog from Apache2::SubProcess could do the trick (i.e. pass the POST data as STDIN to the child process and get back the child process' output), but it doesn't seem to work if I run anything but another Perl script.
Is there any way to make the child process inherit the parent script's STDIN? I can read the stream in the Perl script and pass its contents as a command-line parameter, but I presume that would be subject to command-line length limits, and sometimes there can be a lot of data (like a picture), so I would really like to figure out how to inherit the stream.
Wow, I hope this is a low-volume load from the client. Under mod_perl your STDIN is tied to the client's socket handle, and the same goes for STDOUT. So to send your output to the Java process, you need to point *STDOUT at the Java server's handle, or, since in your case you are opening a process, do a select on the new handle and possibly also make it unbuffered by setting $|. And when you want to stream data back to your client, you need to write either directly to the client's socket handle or reset STDOUT back to its original value.
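If it helps, here is a minimal sketch of that approach using IPC::Open2, so the child talks over real pipes rather than the tied handles. It naively slurps the whole response; for big transfers you would want IO::Select to avoid deadlock:

use strict;
use warnings;
use IPC::Open2;

# The POST body is still readable here, even though STDIN is tied under mod_perl.
read STDIN, my $post_body, $ENV{CONTENT_LENGTH} || 0;

# Give the child real pipes instead of the tied handles.
my $pid = open2(my $java_out, my $java_in,
                'java', "-DREMOTE_HOST=$ENV{REMOTE_HOST}",
                '-cp', '/var/www', 'javaserver');

print {$java_in} $post_body;
close $java_in;                                # EOF lets the Java side finish reading

my $response = do { local $/; <$java_out> };   # slurp header + payload
waitpid $pid, 0;

print $response;                               # relayed to the client via the tied STDOUT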
Related
How can I have one Perl script call another Perl script and get the return results?
I have Perl script B, which does a lot of database work, prints nothing, and simply exits with a 0 or a 3.
So I would like Perl script A to call script B and get its results. But when I call:
my $result = system("perl importOrig.pl filename=$filename");
or
my $result = system("/usr/bin/perl /var/www/cgi-bin/importOrig.pl filename=$filename");
I get back -1, and script B is never called.
I have debugged script B, and when it's called manually there are no glitches.
So obviously I am making an error in my call above, and I'm not sure what it is.
There are many things to consider.
Zeroth, there are the perlipc docs for InterProcess Communication. What's the value in the error variable $!? A -1 return from system means the fork or exec itself failed, and $! records why.
First, use $^X, which is the path to the perl you are executing. Since subprocesses inherit your environment, you want to use the same perl so it doesn't confuse itself with PERL5LIB and so on.
system("$^X /var/www/cgi-bin/importOrig.pl filename=$filename")
Second, CGI programs tend to expect particular environment variables to be set, such as REQUEST_METHOD. Calling them as normal command-line programs often leaves out those things. Try running the program from the command line to see how it complains. Check that it gets the environment it wants. You might also check the permissions of the program to see if you (or whatever user runs the calling program) are allowed to read it (or its directory, etc). You say there are no glitches, so maybe that's not your particular problem. But, do the two environments match in all the ways they should?
Third, consider making the second program a modulino. You could run it normally as a script from the command line, but you could also load it as a Perl library and use its features directly. This obviates all the IPC stuff. You could even fork so that stuff runs concurrently.
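A minimal modulino sketch, assuming importOrig.pl can be rearranged this way (run() is a hypothetical name):

#!/usr/bin/perl
# importOrig.pl as a modulino
use strict;
use warnings;

exit run(@ARGV) unless caller;   # acts as a script unless loaded as a library

sub run {
    my (@args) = @_;
    # ... all the database work ...
    return 0;   # or 3
}

1;

Script A can then require '/var/www/cgi-bin/importOrig.pl' and call run("filename=$filename") directly, receiving the 0 or 3 as an ordinary return value.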
The task is to close the stdout handle a while before the process exits. With WinAPI functions, it'd be this:
CloseHandle(GetStdHandle(STD_OUTPUT_HANDLE))
I know I can do DllImport with Add-Type, but I believe there must be a nicer way.
What's the simplest way to accomplish the same with PowerShell?
The wider task is to test a piece of a Python library that starts and flexibly interacts with local (with the help of the subprocess and _winapi modules) or remote (via WinRM) processes on Windows. One of the tests runs a program that closes its ends of the stdout and stderr pipes a while before it exits. (There was a similar issue on Linux.) Therefore, the script must close stdout and stderr so that the OS signals the calling code that they're closed. The only way I found is to call CloseHandle on the stdout and stderr handles. Calling .Close or .Dispose on the stream objects doesn't help: they seem to be closed only internally to the called process. The script should be in some "native" language that needs no additional compilers or interpreters, so it's either cmd, VBScript or PowerShell, and only the last of these can call WinAPI functions. (At the moment of this update I have already written scripts both in Python, which works perfectly but needs an interpreter to be installed, and in PowerShell, which works without any additional installation but is a bit cumbersome and very slow.)
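For the record, the Add-Type route mentioned above looks roughly like this (the Win32.Native type and namespace names are arbitrary):

# P/Invoke wrappers for the two WinAPI calls
Add-Type -Namespace Win32 -Name Native -MemberDefinition @'
[DllImport("kernel32.dll", SetLastError = true)]
public static extern IntPtr GetStdHandle(int nStdHandle);

[DllImport("kernel32.dll", SetLastError = true)]
public static extern bool CloseHandle(IntPtr hObject);
'@

$STD_OUTPUT_HANDLE = -11   # STD_ERROR_HANDLE would be -12
[void][Win32.Native]::CloseHandle([Win32.Native]::GetStdHandle($STD_OUTPUT_HANDLE))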
I usually think of UNIX pipes as a quick and dirty way to interact with the console, doing things such as:
ls | grep '\.pdf$' #list all pdfs
I understand that it's possible to create named pipes using mkfifo and mknod.
Are named pipes still used significantly today, or are they a relic of the past?
They are still used, although you might not notice. As a first-class file-system object provided by the operating system, a named pipe can be used for interprocess communication by any program that can read and write to the file system, regardless of what language it is written in.
Specific to bash (and other shells), process substitution can be implemented using named pipes, and on some platforms that may be how it actually is implemented. The following
command < <( some_other_command )
is roughly identical to
mkfifo named_pipe
some_other_command > named_pipe &
command < named_pipe
and so the named-pipe form is useful for things like POSIX-compliant shell code, which does not recognize process substitution.
And it works in the other direction: command > >( some_other_command ) is roughly equivalent to
mkfifo named_pipe
some_other_command < named_pipe &
command > named_pipe
pianobar, the command line Pandora Radio player, uses a named pipe to provide a mechanism for arbitrary control input. If you want to control pianobar with another app, such as PianoKeys, you set it up to write command strings to the FIFO file, which pianobar watches for incoming commands.
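For instance, assuming pianobar's conventional FIFO location, that boils down to something like:

mkfifo ~/.config/pianobar/ctl          # create the control FIFO once
echo -n 'p' > ~/.config/pianobar/ctl   # 'p' toggles pause/play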
I wrote a MAMP stack app in college that we used to manage users in an LDAP database on a file server. The PHP scripts would write commands to a FIFO that a shell script running in launchd would read from and interact with the LDAP database. I had no idea what I was doing, so I don’t know if there was a better or more correct way to do it, but it worked great at the time.
Named pipes are useful where a program expects a path to a file as an argument rather than being willing to read/write stdin and stdout. Though modern versions of bash can get around this with process substitution, <(foo), it is still sometimes useful to have a file-like object that is usable as a pipe.
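As a small illustration of that point: diff insists on file paths, but named pipes let it compare two command outputs even in a plain POSIX shell (the same thing diff <(sort a) <(sort b) does in bash):

mkfifo pipe_a pipe_b
sort a > pipe_a &    # each writer blocks until diff opens its pipe
sort b > pipe_b &
diff pipe_a pipe_b
rm pipe_a pipe_b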
I have a Perl script which calls another Perl script using backticks. I want to instead call this script and have it daemonize itself. How do I go about doing this?
edit:
I don't care to communicate back with the process/daemon. I'll most likely just stick the results in an SQLite table or something.
You refer to backticks, so I suppose you want to communicate with the daemon after it has started? Since daemons do not use STDOUT, you will have to find some other way of passing information to and from them.
The Perl interprocess communication man page (perlipc) has several good examples of this, especially the section "Complete dissociation of child from parent".
The Proc::Daemon module contains convenient functions for daemonizing a process.
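A minimal sketch with Proc::Daemon, assuming you just want to fire the other script off and forget about it (the worker path is a placeholder):

use strict;
use warnings;
use Proc::Daemon;

my $daemon = Proc::Daemon->new(
    work_dir     => '/tmp',
    exec_command => "$^X /path/to/other_script.pl",   # placeholder path
);

my $pid = $daemon->Init;   # parent gets the child's PID; the child execs the command
print "started daemon with PID $pid\n";

The daemonized script can then record whatever it produces in the SQLite table itself, as you suggest.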
I have a simple web form with a little JS script that sends the form values to a text box. This combined value becomes a database query.
That query will be sent to dsmadmc (the TSM administrative command line).
How can I use Perl to keep the dsmadmc process open for consecutive input/output, without the dsmadmc process closing between each command sent?
And how can I capture the output? It needs to be sent back to the same web page, in a separate div.
Any thoughts, anyone?
Probably IPC::Open2 could help. It allows you to read from and write to both the input and output of an external process.
Beware of deadlocks, though (i.e. situations where both your code and the app are waiting for their counterpart). You might want to use IO::Select to handle that.
P.S. I don't know how these modules behave on Windows (.exe?..), but from a quick Google search it looks like they are compatible.
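A rough sketch of that shape, with placeholder dsmadmc flags and a placeholder end-of-output marker (a real version would read via IO::Select instead of trusting a marker, per the deadlock warning above):

use strict;
use warnings;
use IO::Handle;
use IPC::Open2;

# placeholder credentials/flags; adjust for the real TSM setup
my $pid = open2(my $dsm_out, my $dsm_in,
                'dsmadmc', '-id=admin', '-password=secret');
$dsm_in->autoflush(1);

sub run_query {
    my ($query) = @_;
    print {$dsm_in} "$query\n";       # dsmadmc stays open between calls
    my @lines;
    while (my $line = <$dsm_out>) {
        last if $line =~ /^END\b/;    # placeholder end-of-output marker
        push @lines, $line;
    }
    return @lines;                    # hand this back to the page's div
}

my @rows = run_query('query session');   # the process remains open for the next query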