Why do I lose output when calling another perl program with backquotes - perl

If I run a perl program and call another perl program using backquotes, print statements from the called program don't appear at the terminal.
If I call the program using 'system', the print statements are displayed.
E.g., this is ProgA.pl:
print "In ProgA.pl, about to call ProgB.pl";
my $dum=`ProgB.pl`; # print output doesn't appear
### $dum=system("ProgB.pl"); # this prints OK
print "\nBack in ProgA.pl";
print "\ndum = $dum"; # ProgB's output doesn't show here either
(No warnings or errors, perl.exe found through file association)
This is ProgB.pl:
print "\nPrinting from ProgB.pl";
What is the reason for the difference?
Why isn't the backquoted call's output returned in $dum (I tried capturing both STDOUT and STDERR)? If I call dir in backquotes, I do get its output in $dum.

You have a path issue.
It works as expected ($dum is assigned the value "Printing from ProgB.pl") if I change the backticks from ProgB.pl to ./ProgB.pl. Without the explicit ./ path, the shell searches the system PATH, fails to find the program, and generates an error, as you can see if you change that line to
my $dum=`ProgB.pl` or die $!;
Which generates the output
In ProgA.pl, about to call ProgB.plNo such file or directory at ./ProgA.pl line 4.
Thus illustrating once again that you should always check the return values of your system calls for error conditions.
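As an aside, backticks return an empty (hence false) string when a command succeeds but prints nothing, so `or die` alone can misfire; inspecting $? distinguishes the cases. A minimal sketch, using perl itself (via $^X) as the called command:

```perl
use strict;
use warnings;

# Run a command that succeeds but prints nothing: backticks
# return "" (which is false), yet $? holds the real exit status.
my $out = `$^X -e "exit 0"`;
if ($? == -1) {
    die "failed to execute: $!";
} elsif ($? != 0) {
    die "command failed with exit status ", $? >> 8;
} else {
    print "command succeeded; captured output: '$out'\n";
}
```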

It appears that by failing to put a newline character at the end of the print statement in ProgB, I failed to flush the buffer before returning to ProgA. Thanks to Chris Turner.
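Based on that explanation, ending ProgB's print with a newline, or turning off buffering entirely, makes the output flush promptly; a minimal sketch of the latter (assuming the one-line ProgB.pl above):

```perl
# ProgB.pl: set $| (autoflush) so every print is flushed
# immediately, with or without a trailing newline.
$| = 1;
print "\nPrinting from ProgB.pl";
```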

Related

perl print to a file and STDOUT which is the file

My program prints to a file which is, for it, STDOUT.
That is, print "text here"; writes to the file x.log, while I am also printing to x.log through a file handle, as in print FH1 "text here";. I notice that when the file-handle print comes first, followed by the STDOUT print, the second print can overwrite the first. I would like to understand why this occurs.
This makes me suspect a race condition, or that the file-handle print is slower (because it goes through a buffer?) than the STDOUT print. I am not sure whether that is how Perl works. Perl version: 5.22.0.
As far as I understand your program basically looks like this:
open(my $fh,'>','foobar.txt');
print $fh "foo\n";
print "bar\n"; # prints to STDOUT
And then you use it in a way that STDOUT is redirected in the shell to the same file which is already opened in your program:
$ perl test.pl > foobar.txt
This will open two independent file handles to the same file: one within your program and the other within the shell where you start the program. Both file handles manage their own file position for writing, start at position 0 and advance the position after each write.
Since these file handles are independent from each other they will not care if there are other file handles dealing currently with this file, no matter if these other file handles are inside or outside the program. This means that these writes will overwrite each other.
In addition to this there is also internal buffering: each print first writes into an internal buffer and might not immediately result in a write to the underlying file. When the data actually reach the file depends on the buffering mode of the file handle, i.e. unbuffered, line-buffered, or buffered with a specific block size. This makes the result somewhat unpredictable.
If you don't want this behavior but still want to write to the same file using multiple file handles, you should use append mode, i.e. open with >> instead of > in both the Perl code and the shell. This makes sure that all data are appended to the current end of the file instead of written at the file position maintained by each file handle, so data will not get overwritten. Additionally, you might want to make the file handles unbuffered so that data end up in the file in the same order as the print statements were executed:
use IO::Handle; # provides the autoflush method on older perls
open(my $fh,'>>','foobar.txt');
$fh->autoflush(1); # make $fh unbuffered
$|=1; # make STDOUT unbuffered
print $fh "foo\n";
print "bar\n"; # prints to STDOUT
$ perl test.pl >> foobar.txt
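The overwriting described above is easy to demonstrate with two independent handles inside one process; the same mechanics apply when the second handle belongs to a shell redirection. A self-contained sketch (the file name is arbitrary):

```perl
use strict;
use warnings;

# Two independent '>' handles to the same file, each with its
# own file position (starting at 0) and its own buffer.
open my $fh1, '>', 'overwrite_demo.txt' or die $!;
open my $fh2, '>', 'overwrite_demo.txt' or die $!;
print $fh1 "AAAA";
print $fh2 "BB";
close $fh1;   # flushes "AAAA" at $fh1's position 0
close $fh2;   # flushes "BB" at $fh2's position 0, clobbering "AA"
# The file now contains "BBAA".
```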

Pass arguments from command line in Perl

I have written a Perl script and am trying to pass command line arguments to it, but I am not getting the expected output.
Here is my code:
my ($buildno, $appname, $ver) = @ARGV;
print values \@ARGV;
$Artifact_name = "flora-$appname-$ver";
mkdir "$target_dir/$Artifact_name";
When I run the Perl script perl scripts\perl\test.pl "MobileApp", %ver%, I am getting the following output: flora-MobileApp-
And the log message is showing
'Use of uninitialized value $ver in concatenation (.) or string at
Jscripts\perl\test.pl line 31 (#3)'.
%ver% is the environment variable and its value is 1.0.1.23.
I am expecting the output flora-MobileApp-1.0.1.23.
So you run the program like this:
perl scripts\perl\test.pl "MobileApp", %ver%
And then, within the program you are accessing the command line arguments like this:
my ($buildno, $appname, $ver) = @ARGV;
There's an obvious mismatch here. You are passing in two arguments, but expecting three. You will end up with "MobileApp" in $buildno and the contents of %ver% in $appname. Your last variable, $ver, will remain undefined.
If you want three variables to be set, then you need to pass three arguments.
And, if you wanted to investigate this, then surely it would have been simple to print out the values of your three variables?
The "fix" is to pass another value as the first command line option so that the parameters line up with the variables.
perl scripts\perl\test.pl something_else "MobileApp" %ver%
But I'm fascinated to hear what level of confusion makes this problem hard to debug.
Update: Another option (as PerlDuck has reminded me in a comment) would be to fix the code to only expect two arguments.
my ($appname, $ver) = @ARGV;
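When in doubt about what actually arrived, dumping @ARGV with the core Data::Dumper module shows any mismatch at a glance (a debugging sketch, not part of the fix):

```perl
use strict;
use warnings;
use Data::Dumper;

# Print exactly the arguments the script received; a missing
# or shifted argument is immediately visible in the dump.
print Dumper(\@ARGV);
my ($appname, $ver) = @ARGV;
print "appname=", $appname // '(undef)',
      " ver=",    $ver     // '(undef)', "\n";
```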

Error when redirecting data to file

Frequently (not always) when I run a procedure that defines a file handle, I get a strange error from an internal function which I don't understand how to debug.
In my Perl code I have the following line [111]:
open V_FILE_SEC, ">>$file/V_$file$dir.csvT" or die $!;
And when I run the script [>myscript.pl DPX_*] I get:
"No such file or directory at myscript.pl line 111, line 18004."
What is the meaning of line 18004? How to start debug?
Thanks.
From perldoc -f die:
If the last element of LIST does not end in a newline, the current script line number and input line number (if any) are also printed, and a newline is supplied. [Emphasis added]
The "input line number" is the value in $., roughly the number of lines of input you have read from the most recent filehandle you accessed.
In your case, you could use that number to look at your program's input and see if there is anything unusual around input line 18004 that your program wasn't expecting.
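The interplay of $. and die can be seen in a few lines; this sketch reads three lines from the script's own source (via $0) and then dies inside an eval:

```perl
use strict;
use warnings;

open my $fh, '<', $0 or die $!;    # read this script itself
for (1 .. 3) { my $line = <$fh> }  # consume three lines
print "input line number is now $.\n";

# A die message without a trailing newline gets both the script
# line number and the input line number appended:
eval { die "something went wrong" };
print $@;   # ends with ", <$fh> line 3."
```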

How to run a local program with user input in Perl

I'm trying to get user input from a web page written in Perl and send it to a local program (blastp), then display the results.
This is what I have right now:
(input code)
print $q->p, "Your database: $bd",
$q->p, "Your protein is: $prot",
$q->p, "Executing...";
print $q->p, system("blastp","-db $bd","-query $prot","-out results.out");
Now, I've done a little research, but I can't quite grasp how you're supposed to do things like this in Perl. I've tried opening a file, writing to it, and sending it over to blastp as an input, but I was unsuccessful.
For reference, this line produces a successful output file:
kold#sazabi ~/BLAST/pataa $ blastp -db pataa -query ../teste.fs -out results.out
I may need to force the bd to load from an absolute path, but that shouldn't be difficult.
edit: Yeah, the DBs have an environmental variable, that's fixed. Ok, all I need is to get the input into a file, pass it to the command, and then print the output file to the CGI page.
edit2: for clarification:
I am receiving user input in $prot, I want to pass it over to blastp in -query, have the program blastp execute, and then print out to the user the results.out file (or just have a link to it, since blastp can output in HTML)
EDIT:
All right, fixed everything I needed to fix. The big problem was me not seeing what was going wrong: I had to install Capture::Tiny and print out STDERR, which was when I realized the environment variable wasn't getting set correctly, so BLAST wasn't finding my databases. Thanks for all the help!
Write $prot to the file. Assuming you need to do it as-is without processing the text to split it or something:
For a fixed file name (may be problematic):
use File::Slurp;
write_file("../teste.fs", $prot, "\n") or print_error_to_web();
# Implement the latter to print error in nice HTML format
For a temp file (better):
use File::Temp qw(tempfile);
my ($fh, $filename) = tempfile( $template, DIR => "..", UNLINK => 1);
# You can also create temp directory which is even better, via tempdir()
print $fh "$prot\n";
close $fh;
Step 2: Run your command as you indicated:
my $rc = system("$BLASTP_PATH/blastp", "-db", "pataa"
,"-query", "../teste.fs", "-out", "results.out");
# Process $rc for errors
# Use qx[] instead of system() if you want to capture
# standard output of the command
Step 3: Read the output file in:
use File::Slurp;
my $out_file_text = read_file("results.out");
Step 4: Send it back to the web server:
print $q->p, $out_file_text;
The above code has multiple issues (e.g. you need better file/directory paths, more error handling etc...) but should start you on the right track.
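The "# Process $rc for errors" comment above can be fleshed out following the example in perldoc -f system; here perl itself ($^X) stands in for blastp:

```perl
use strict;
use warnings;

# system() returns -1 if the command could not be started;
# otherwise the low 7 bits carry a signal number and the
# high bits the child's exit value.
my $rc = system($^X, '-e', 'exit 0');
if ($rc == -1) {
    print "failed to execute: $!\n";
} elsif ($rc & 127) {
    printf "child died with signal %d\n", $rc & 127;
} else {
    printf "child exited with value %d\n", $rc >> 8;
}
```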

Reopen STDERR/STDOUT to write to combined logfile with timestamps

I basically want to reopen STDERR/STDOUT so they write to one logfile with both the stream and the timestamp included on every line. So print STDERR "Hello World" prints STDERR: 20130215123456: Hello World. I don't want to rewrite all my print statements into function calls, also some of the output will be coming from external processes via system() calls anyway which I won't be able to rewrite.
I also need for the output to be placed in the file "live", i.e. not only written when the process completes.
(p.s. I'm not asking particularly for details of how to generate timestamps, just how to redirect to a file and prepend a string)
I've worked out the following code, but it's messy:
my $mode = ">>";
my $file = "outerr.txt";
open(STDOUT, "|-", qq(perl -e 'open(FILE, "$mode", "$file"); while (<>) { print FILE "STDOUT: \$\_"; }'));
open(STDERR, "|-", qq(perl -e 'open(FILE, "$mode", "$file"); while (<>) { print FILE "STDERR: \$\_"; }'));
(The above doesn't add dates, but that should be trivial to add)
I'm looking for a cleaner solution, one that doesn't require quoting perl code and passing it on the command line, or at least module that hides some of the complexity. Looking at the code for Capture::Tiny it doesn't look like it can handle writing a part of output, though I'm not sure about that. annotate-output only works on an external command sadly, I need this to work on both external commands and ordinary perl printing.
A child launched via system knows nothing about the variables in your program; it writes to the system-level file descriptor, not through Perl's STDOUT file handle. Therefore, any mechanism that runs code when a Perl file handle is written to (e.g. tie) won't work.
Write another script that runs your script with STDOUT and STDERR replaced with pipes. Read from those pipes and print out the modified output. I suggest using IPC::Run to do this, because it'll save you from using select. You can get away without it if you combine STDOUT and STDERR into one stream.
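If combining both streams into one is acceptable, even core Perl suffices: a forked child can read everything the parent (and any system() children, which inherit the descriptors) writes and prepend a timestamp. A sketch under that assumption (the log name and timestamp format are arbitrary choices, not from the question):

```perl
use strict;
use warnings;
use IO::Handle;
use POSIX qw(strftime);

# open(FH, '|-') forks; in the child, STDIN is the pipe's read end.
my $pid = open(my $to_logger, '|-');
die "fork failed: $!" unless defined $pid;

if ($pid == 0) {                 # child: the logger process
    open my $log, '>>', 'outerr.txt' or die $!;
    $log->autoflush(1);
    while (my $line = <STDIN>) {
        print $log strftime('%Y%m%d%H%M%S', localtime), ": $line";
    }
    exit 0;
}

# parent: point both streams at the logger pipe
open(STDOUT, '>&', $to_logger) or die "can't dup STDOUT: $!";
open(STDERR, '>&', $to_logger) or die "can't dup STDERR: $!";
STDOUT->autoflush(1);
STDERR->autoflush(1);

print "Hello World\n";                                # logged, timestamped
system($^X, '-e', 'print "from a child process\n"');  # also logged
```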