Perl command executes fine on the command line but does not work in the Perl script - perl

Below is the code I'm trying to execute; I have marked line 266 in it. I added that line to remove the blank lines in the log file. I'm not sure whether we can run the perl command inside a Perl script. Is there another way I can remove the blank lines in the log file?
Below is the error I get when running the Perl script:
syntax error at ./reportJBossErrors.pl line 266, near "n -e "
Execution of ./reportJBossErrors.pl aborted due to compilation errors.
Here is a portion of the code, showing line 266:
sub main {
    readConfiguration($config_file);
    $short_hostname = `hostname | cut -f 1 -d.`;
    chomp $short_hostname;
    getFileandInstance($short_hostname);
    $yesterday = getYesterday();
    validateEnvironment();
    $log_file = getLogFile($FMASK,$yesterday);
    perl -i -n -e "print if /\S/" $log_file; # line 266: this is where I'm getting the compilation error
    processFile($log_file);
    $html_out = writeEmail();
    sendEmail($CONFIG{"FROMADDR"},$CONFIG{"TOADDR"},"Normal",
        "JBOSS",$short_hostname,$log_file,$CONFIG{ENVTYPE},$html_out);
}

You cannot call the perl command inside a Perl program as if it were a Perl builtin function. You can use system to run an external command:
my $cmd = 'perl -i -n -e "print if /\S/"';
system "$cmd $log_file";
You need to be careful with quoting. Since you have a file name/path in the Perl variable $log_file, which you want to interpolate, that part can go inside double quotes. Since you do not want to interpolate \S, that part should go in single quotes.
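If you'd rather avoid the subprocess entirely, you can filter the file in plain Perl instead. A minimal sketch, assuming $log_file holds the log file's path:
# Keep only lines containing a non-whitespace character,
# rewriting the file in place with no external perl process.
open my $in, '<', $log_file or die "Can't read $log_file: $!";
my @keep = grep { /\S/ } <$in>;
close $in;
open my $out, '>', $log_file or die "Can't write $log_file: $!";
print {$out} @keep;
close $out;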

You cannot invoke the perl executable inside a Perl program as if it were a Perl builtin function. Instead, use the list form of system to run an external command. Don't forget to check if the command succeeded:
my @cmd = (perl => '-i', '-n', '-e', 'print if /\S/', $log_file);
system(@cmd) == 0
    or die "system @cmd failed: $?";
In general, I would recommend using the full path to perl rather than relying on $PATH.
Also, if you need to keep track of status etc., use Capture::Tiny to get both STDOUT and STDERR of the command you are running, so that you can log error information.
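For example, a sketch using Capture::Tiny (assuming the module is installed and that /usr/bin/perl is the interpreter you want):
use Capture::Tiny qw(capture);

my @cmd = ('/usr/bin/perl', '-i', '-n', '-e', 'print if /\S/', $log_file);
# capture returns the command's STDOUT, STDERR, and the block's return value
my ($stdout, $stderr, $exit) = capture { system @cmd };
warn "@cmd failed ($exit): $stderr" if $exit != 0;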

Can I pass a string from perl back to the calling c-shell?

RHEL6
I have a c-shell script that runs a perl script. After dumping tons of stuff to stdout, it determines where (what dir) the parent shell should cd to when the perl script finishes. But that's a string, not an int, which is all I can pass back with exit().
Storing the name of the dir in a file which the c-shell script can read is what I have now. It works, but is not elegant. Is there a better way to do this? Maybe a little chunk of memory that I can share with the perl script?
Short:
Redirect Perl's streams, restoring them at the end to print that info for the shell script to take
Or, print that info last, and have the shell script pass the output to the console and take the last line
Or, use a named pipe (either shell) or specific file descriptors (not csh) for that print
When the Perl script prints out that name, you can assign it to a variable in the shell script
#!/bin/csh
set DIR = `perl -e'print "dir_name"'`
while in bash
#!/bin/bash
DIR="$(perl -e'print "dir_name"')"
where $(...) is preferred for the command substitution.
But the other prints to the console from the Perl script then need to be handled
One way is to redirect all output in the Perl script other than that one print, which can be controlled by a command-line option (a filename to redirect to, which the shell script can then print out)
Or, take all of Perl's output and pass it to the console, with the last line being the needed "return." This puts the burden on the Perl script to print that last (perhaps in an END block, as sketched below). The program's output can be printed from the shell script after it completes, or line by line as it is emitted.
Or, use a named pipe (both shells) or a specific file descriptor (bash only) to which the Perl script can print that information. In this case its streams go straight to the console.
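For the print-it-last option, a minimal sketch of the END idea (the variable name is made up):
our $dir_name;
# ... normal processing prints its output and sets $dir_name ...
$dir_name = '/path/to/dir';
print "...normal output...\n";
# END runs at exit, so this is the last line printed
END { print "$dir_name\n" if defined $dir_name }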
The question explicitly mentions csh, so it is covered below. But I must repeat the old and worn fact that shell scripting is far better done in bash than in csh. I strongly recommend reconsidering.
bash
If you need the program's output on the console as it goes, take and print it line by line
#!/bin/bash
while read line; do
    echo "$line"
    DIR=$line
done < <(perl script.pl)
echo "$DIR"
Or, if you don't need output on the console before the script is finished
#!/bin/bash
mapfile -t lines < <(perl script.pl)
DIR="${lines[-1]}"
printf '%s\n' "${lines[#]}" # print script.pl's output
Or, use file descriptors for that particular print
F=$(mktemp) # safe filename
exec 3> "$F" # open fd 3 to write to it
exec 4< "$F" # open fd 4 to read from it
rm -f "$F" # remove file(name) for safety; opened fd's can still access
perl -E'$fd=shift; say "...normal prints to STDOUT...";
open(FH, ">&=$fd") or die $!;
say FH "dirname";
close FH
' 3
read dir_name <&4
exec 3>&- # close them
exec 4<&-
echo "$dir_name"
I couldn't get it to work with a single file descriptor for both reading and writing (exec 3<> ...), I think because the read can't rewind after the write, so separate descriptors are used.
With a real Perl script (rather than the demo one-liner above), pass the fd number as a command-line option; the script then does the fd write only when it's invoked with that option.
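A hypothetical script.pl along those lines, with --fd as a made-up option name:
use strict;
use warnings;
use feature 'say';
use Getopt::Long;

GetOptions('fd=i' => \my $fd) or die "bad options\n";
say "...normal prints to STDOUT...";
if (defined $fd) {
    # dup the descriptor the shell opened with: exec 3> "$F"
    open my $fh, '>&=', $fd or die "Can't dup fd $fd: $!";
    say $fh '/path/to/dir';
    close $fh;
}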
Or, use a named pipe very similarly to how it's done for csh below. This is probably best here, if the manipulation of the program's STDOUT isn't to your liking.
csh
Iterate over the program's (completed) output line by line
#!/bin/csh
foreach line ( "`perl script.pl`" )
    echo "$line"
    set dir_name = "$line"
end
echo "Directory name: $dir_name"
or extract the last line first and then print the whole output
#!/bin/csh
set lines = ( "`perl script.pl`" )
set dir_name = $lines[$#lines]
# Print program's output
while ( $#lines )
    echo "$lines[1]"
    shift lines
end
or use a named pipe
set fifo_name = "/tmp/fifo$$" # or use mktemp
mkfifo "$fifo_name"
( perl script.pl --fifo $fifo_name [other args] & )
set dir_name = `cat "$fifo_name"`
rm -f $fifo_name
echo "dir name from FIFO: $dir_name"
The Perl command is in the background since a FIFO blocks until it is both written and read. If the shell script waited for perl ... to complete, the Perl script would block while writing to the FIFO (since it isn't yet being read), so the shell would never get to read it; we would deadlock. The command is also in a subshell, with ( ), to avoid the informational prints about the background job.
The --fifo NAME command-line option is needed so that the Perl script knows what special file to use (and knows not to do this when the option is absent).
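A sketch of that Perl side, parallel to the fd example above (--fifo being the made-up option name):
use strict;
use warnings;
use feature 'say';
use Getopt::Long;

GetOptions('fifo=s' => \my $fifo) or die "bad options\n";
say "...normal prints to STDOUT...";
if (defined $fifo) {
    open my $fh, '>', $fifo or die "Can't open FIFO $fifo: $!";
    say $fh '/path/to/dir';   # blocks until the shell reads the FIFO
    close $fh;
}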
For an in-line example, replace ( perl script ... ) with this one-liner (used above as well):
( perl -E'$ff = shift; say qq(\t...normal prints to STDOUT...);
open FF, ">$ff" or die $!;
say FF "dir_name_$$";
close FF
' $fifo_name
& )
(broken over lines for readability)

Way to fetch the argument inside a Perl script

I am having some trouble getting the argument that is passed in with the following script:
echo "abc"|perl <<'EOF'
#how to get "abc". it seems not $ARGV[0] nor in <STDIN>
EOF
Thank you.
The precise command line you have there may be your problem, if that is what you're actually executing. What you are saying there is "put 'abc' on the standard input of the next thing in the pipeline. Now run a Perl script consisting of a single comment."
This will do nothing, because there's nothing executable in that Perl script. Try this:
echo "abc" | perl -e 'print <STDIN>'
If you have a short Perl script the -e option is the way to go.
Your example is not using an argument; it's using standard input. You can read standard input with the I/O operators. If you actually mean that you want an argument like myscript.pl --arg, then I would recommend using Getopt::Long.
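A minimal Getopt::Long sketch, with --arg as a made-up option name:
use strict;
use warnings;
use Getopt::Long;

GetOptions('arg=s' => \my $arg) or die "Usage: $0 --arg VALUE\n";
print "Got: $arg\n" if defined $arg;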
You have not passed any argument to the Perl script.
You redirected the Perl script itself so it comes from standard input; that means that the piped output goes nowhere and cannot be seen by Perl.
Reconsider how you're invoking your script. Maybe:
perl script.pl "abc"
where script.pl is a file that contains the Perl script you used as a here-document. Or simply make that script executable (perhaps without the .pl suffix).
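For example, a script.pl sketch that picks up its first argument:
#!/usr/bin/perl
use strict;
use warnings;

my $arg = $ARGV[0] // die "No argument given\n";
print "Got: $arg\n";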
Your problem is that both the pipe and the here-document redirect STDIN, and the here-document wins, so the perl process never sees the pipe; it gets the script on STDIN (and has read it to EOF before running the script, so the script will see STDIN at EOF).
Observe:
$ echo "abc" | perl <<'EOF'
print "[What have we here?]\n";
seek(STDIN, 0, 0);
print <STDIN>;
print "[Well, what do you know ...]\n";
EOF
[What have we here?]
print "[What have we here?]\n";
seek(STDIN, 0, 0);
print <STDIN>;
print "[Well, what do you know ...]\n";
[Well, what do you know ...]
$
Moral: Don't try to mix pipes and here-documents in the shell. :)

How to call a Perl script from a Tcl script

I have a file with 4 perl commands. I want to open the file from Tcl and execute each perl command.
Tcl script:
runcmds $file

proc runcmds {file} {
    set fileid [open $file "r"]
    set options [read $fileid]
    close $fileid
    set options [split $options "\n"]  ;# separating each command with a newline
    foreach line $options {
        exec perl $line
    }
}
When executing the above script, I get the error "can't open the perl script /../../ : No such file or directory. Use -S to search $PATH for it."
tl;dr: You were missing -e, causing your script to be interpreted as a filename.
To run a perl command from inside Tcl:
proc perl {script args} {
    exec perl -e $script {*}$args
    # or in 8.4: eval [list perl -e $script] $args
}
Then you can do:
puts [perl {
    print "Hello "
    print "World\n"
}]
That's right, an arbitrary perl script inside Tcl. You can even pass in other arguments as necessary; access from perl via @ARGV. (You'll need to add other options like -p explicitly.)
Note that this can pass whole scripts; you don't need to split them up (and probably shouldn't; you can do a lot with one-liners, but they tend to be awful to maintain, and there's no technical reason to require it).
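On the Perl side those extra arguments land in @ARGV as usual; a tiny sketch of a script you might pass in:
# Inline script passed via -e from the Tcl proc above;
# anything after it arrives in @ARGV.
print "arg $_: $ARGV[$_]\n" for 0 .. $#ARGV;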

Why does a system call affect subsequent print behaviour in Perl?

Here's my code
#!/usr/bin/perl
use strict;
use warnings;
use diagnostics;
my $file = $ARGV[0];
system('wc -l $file');
print "\nprinting alpha \n";
sleep 1;
exit;
After I run (in a tcsh shell) perl script.pl /path/to/file, I don't see printing alpha until I press Ctrl+C. Even when I add the statement $| = 1 either before or after the system call, the behaviour remains the same.
What is happening?
You are executing the shell command
wc -l $file
The shell has no variable $file defined, so that's the same as
wc -l
This causes the shell to execute wc with the lone arg -l. With no file name provided, wc in turn reads from STDIN until you kill it with SIGINT from Ctrl-C.
You were perhaps aiming for
system("wc -l $file"); # XXX
but that's wrong too. That doesn't pass the args -l and the value of $file to wc. Consider what would happen if a file name with a space in it was provided.
To build a shell literal that results in the correct file name, you could use
use String::ShellQuote qw( shell_quote );
system(shell_quote('wc', '-l', $file));
But a better option is to avoid the shell and execute wc directly, passing to it the values you want without having to build a shell command.
system('wc', '-l', $file);
Because the single quotes prevent interpolation of $file. Change to double quotes.
What is happening is that the string is being executed without substituting a value for $file. When the shell gets this it looks for a shell variable $file which does not exist, so it executes the command with no file. This causes wc to read from stdin, resulting in the behavior you see.
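Putting the two answers together, a corrected version of the script might look like this sketch:
#!/usr/bin/perl
use strict;
use warnings;

my $file = $ARGV[0] // die "Usage: $0 FILE\n";
# List form: no shell involved, so spaces in $file are harmless
system('wc', '-l', $file) == 0 or warn "wc failed: $?";
print "\nprinting alpha\n";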

Execute a command using a Perl script

I have a simple command in Unix, like
cat myfile.txt >&mytemp.txt&
The above command will simply create a copy of the file myfile.txt.
When I execute the command on the command line, it returns the process id, like below:
> cat myfile.txt > & mytemp.txt &
[1] 769
>
I am forming the same command inside a Perl script and calling it with system as below:
my $cmd="cat myfile.txt>&mytemp.txt&";
my $info = system("$cmd");
but the system command fails with the below error message:
sh: mytemp.txt: bad number
I even tried escaping the > and &, but there is no change in the error message.
May I know the reason for this? Where am I wrong here?
I'm pretty sure that you can't use the trailing & on this. If you want your program to continue while the command runs, then fork and have the child process run the call, then exit. Possibly exec can do this, though I haven't tried doing that with output redirection before...
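A sketch of that fork approach (note that csh's >& grabs STDERR as well; this sketch redirects only STDOUT):
use strict;
use warnings;

my $pid = fork() // die "fork failed: $!";
if ($pid == 0) {                     # child
    open STDOUT, '>', 'mytemp.txt' or die "can't redirect: $!";
    exec 'cat', 'myfile.txt' or die "exec failed: $!";
}
# parent continues right away; waitpid($pid, 0) later if needed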
Like the message says, that's not a valid sh command. Is it perhaps a csh command?
system('csh', '-c', $cmd);
Try this:
perl -e "`cat myfile.txt>&mytemp.txt&`;"
This executes the command and returns the command's output.
So it's possible to do:
#!/usr/bin/perl
my $content = `cat /etc/passwd`;
print $content;
If you put the code into a perl script:
cat-test.pl
#!/usr/bin/perl
use strict;
use warnings;
my $res = `cat myfile.txt>&mytemp.txt&`;