Way to fetch the argument inside a Perl script

I am having some trouble getting the argument passed in to the following script:
echo "abc"|perl <<'EOF'
# how to get "abc"? It seems to be neither in $ARGV[0] nor in <STDIN>
EOF
Thank you.

The precise command line you have there may be your problem, if that is what you're actually executing. What you are saying there is "put 'abc' on the standard input of the next thing in the pipeline. Now run a Perl script consisting of a single comment."
This will do nothing, because there's nothing executable in that Perl script. Try this:
echo "abc" | perl -e 'print <STDIN>'
If you have a short Perl script the -e option is the way to go.

Your example isn't using an argument; it's using standard input, which you can read with the I/O operators. If you actually mean that you want an argument like myscript.pl --arg, then I would recommend using Getopt::Long, as in the sketch below.
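For instance, a minimal sketch of the Getopt::Long approach (the script and option names here are just examples):
#!/usr/bin/perl
# myscript.pl -- hypothetical example; run as: perl myscript.pl --arg abc
use strict;
use warnings;
use Getopt::Long;

my $arg;
GetOptions('arg=s' => \$arg) or die "Usage: $0 --arg VALUE\n";
print "Got: $arg\n";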

You have not passed any argument to the Perl script.
You redirected the Perl script itself so it comes from standard input; that means that the piped output goes nowhere and cannot be seen by Perl.
Reconsider how you're invoking your script. Maybe:
perl script.pl "abc"
where script.pl is a file that contains the Perl script you used as a here-document. Or simply make that script executable (perhaps without the .pl suffix).
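For instance, script.pl could be this minimal sketch, reading the argument from @ARGV:
#!/usr/bin/perl
use strict;
use warnings;

my $arg = shift @ARGV;   # "abc" when invoked as: perl script.pl "abc"
print "Got: $arg\n";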

Your problem is that both the pipe and the here-document redirect STDIN, and the here-document wins: the perl process never sees the pipe; it gets the script on STDIN (and has read it to EOF before running the script, so the script sees STDIN already at EOF).
Observe:
$ echo "abc" | perl <<'EOF'
print "[What have we here?]\n";
seek(STDIN, 0, 0);
print <STDIN>;
print "[Well, what do you know ...]\n";
EOF
[What have we here?]
print "[What have we here?]\n";
seek(STDIN, 0, 0);
print <STDIN>;
print "[Well, what do you know ...]\n";
[Well, what do you know ...]
$
Moral: Don't try to mix pipes and here-documents in the shell. :)

Related

Can I pass a string from perl back to the calling c-shell?

RHEL6
I have a c-shell script that runs a perl script. After dumping tons of stuff to stdout, it determines where (what dir) the parent shell should cd to when the perl script finishes. But that's a string, not an int, which is all I can pass back with exit().
Storing the name of the dir in a file which the c-shell script can read is what I have now. It works, but it is not elegant. Is there a better way to do this? Maybe a little chunk of memory that I can share with the perl script?
Short:
Redirect Perl's streams and restore them at the end, to print that info for the shell script to take
Or, print that info last, and the shell script can pass output to the console and take the last line
Or, use a named pipe (either shell) or specific file descriptors (not csh) for that print
When the Perl script prints out that name you can assign it to a variable in the shell script
#!/bin/csh
set DIR = `perl -e'print "dir_name"'`
while in bash
#!/bin/bash
DIR="$(perl -e'print "dir_name"')"
where $(...) is preferred for the command substitution.
But the other prints to the console from the Perl script then need to be handled.
One way is to redirect all output in the Perl script other than that one print, which can be controlled by a command-line option (a filename to redirect to, which the shell script can print out).
Or, take all of Perl's output and pass it to the console, the last line being the needed "return." This puts the burden on the Perl script to print that last, perhaps in an END block (see the sketch after this list). The program's output can be printed from the shell script after it completes, or line by line as it is emitted.
Or, use a named pipe (both shells) or a specific file descriptor (bash only) to which the Perl script can print that information. In this case its streams go straight to the console.
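For the END block option above, a minimal sketch (the variable name $dir and its value are hypothetical):
#!/usr/bin/perl
use strict;
use warnings;

print "...lots of normal output...\n";

my $dir = "/some/dir";   # hypothetical result computed by the script
END { print "$dir\n" }   # runs last, so the shell can take the final line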
The question explicitly mentions csh, so it is covered below. But I must repeat the old and worn fact that shell scripting is far better done in bash than in csh. I strongly recommend reconsidering.
bash
If you need the program's output on the console as it goes, take and print it line by line
#!/bin/bash
while read line; do
    echo "$line"
    DIR=$line
done < <(perl script.pl)
echo "$DIR"
Or, if you don't need output on the console before the script is finished
#!/bin/bash
mapfile -t lines < <(perl script.pl)
DIR="${lines[-1]}"
printf '%s\n' "${lines[@]}"   # print script.pl's output
Or, use file descriptors for that particular print
F=$(mktemp) # safe filename
exec 3> "$F" # open fd 3 to write to it
exec 4< "$F" # open fd 4 to read from it
rm -f "$F" # remove file(name) for safety; opened fd's can still access
perl -E'$fd=shift; say "...normal prints to STDOUT...";
open(FH, ">&=$fd") or die $!;
say FH "dirname";
close FH
' 3
read dir_name <&4
exec 3>&- # close them
exec 4<&-
echo "$dir_name"
I couldn't get this to work with a single file descriptor for both reading and writing (exec 3<> ...), I think because the read can't rewind after the write, so separate descriptors are used.
With a real Perl script (rather than the demo one-liner above), pass the fd number as a command-line option; the script then prints to that descriptor only when invoked with the option.
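A sketch of how script.pl might handle such an option (the --fd name and the use of Getopt::Long are my additions, not from the answer above):
#!/usr/bin/perl
use strict;
use warnings;
use Getopt::Long;

my $fd;
GetOptions('fd=i' => \$fd);

print "...normal prints to STDOUT...\n";

if (defined $fd) {
    # attach a filehandle to the descriptor the shell opened ('>&=' reuses it, no dup)
    open my $fh, '>&=', $fd or die "Can't open fd $fd: $!";
    print $fh "dir_name\n";
    close $fh;
}
It would then be invoked as perl script.pl --fd 3 in place of the one-liner's trailing 3.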
Or, use a named pipe very similarly to how it's done for csh below. This is probably best here, if the manipulation of the program's STDOUT isn't to your liking.
csh
Iterate over the program's (completed) output line by line
#!/bin/csh
foreach line ( "`perl script.pl`" )
echo "$line"
set dir_name = "$line"
end
echo "Directory name: $dir_name"
or extract the last line first and then print the whole output
#!/bin/csh
set lines = ( "`perl script.pl`" )
set dir_name = $lines[$#lines]
# Print program's output
while ( $#lines )
echo "$lines[1]"
shift lines
end
or use a named pipe
set fifo_name = "/tmp/fifo$$" # or use mktemp
mkfifo "$fifo_name"
( perl script.pl --fifo $fifo_name [other args] & )
set dir_name = `cat "$fifo_name"`
rm -f $fifo_name
echo "dir name from FIFO: $dir_name"
The Perl command is in the background since a FIFO blocks until it is both written and read. If the shell script were to wait for perl ... to complete, the Perl script would block writing to the FIFO (which is not yet being read), so the shell would never get to read it; we would deadlock. The command is also in a subshell, with ( ), to avoid the informational prints about the background job.
The --fifo NAME command-line option is needed so that the Perl script knows what special file to use (and knows not to do this when the option is absent).
For an in-line example replace ( perl script ...) with this one-liner, used above as well
( perl -E'$ff = shift; say qq(\t...normal prints to STDOUT...);
open FF, ">$ff" or die $!;
say FF "dir_name_$$";
close FF
' $fifo_name
& )
(broken over lines for readability)
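With a full script rather than the one-liner, the --fifo handling might look like this sketch (Getopt::Long is my choice for the option parsing here):
#!/usr/bin/perl
use strict;
use warnings;
use Getopt::Long;

my $fifo;
GetOptions('fifo=s' => \$fifo);

print "...normal prints to STDOUT...\n";

if (defined $fifo) {
    open my $fh, '>', $fifo or die "Can't open $fifo: $!";
    print $fh "dir_name_$$\n";   # blocks until the shell reads the FIFO
    close $fh;
}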

Run Perl Script From Unix Shell Script

Hi, I have a Unix shell script called Food.sh and a Perl script called Track.pl. Is there a way I can put Track.pl's code into Food.sh's code and still have it work? Track.pl requires one argument, naming a folder.
Basically it will run like this.
Unix Shell Script codes RUN
step into
Perl Script codes RUN
Put in name of folder for Perl script
Rest of script runs
exit out.
You have a few options.
You can pass the code to Perl using -e/-E.
...
perl -e'
print "Hello, World!\n";
'
...
Con: Escaping can be a pain.[1]
You can pass the code via STDIN.
...
perl <<'END_OF_PERL'
print "Hello, World!\n";
END_OF_PERL
...
Con: The script can't use STDIN.
You can create a virtual file.
...
perl <(
cat <<'END_OF_PERL'
print "Hello, World!\n";
END_OF_PERL
)
...
Con: Wordy.
You can take advantage of perl's -x option.
...
perl -x -- "$0"
...
exit
#!perl
print "Hello, World!\n";
Con: Can only have one snippet.
$0 is the path to the shell script being executed. It's passed to perl as the program to run. The -x tells Perl to start executing at the #!perl line rather than the first line.
Ref: perlrun
Instances of ' in the program need to be escaped using '\''.
You could also rewrite the program to avoid using '. For example, you could use double-quoted string literals instead of single-quoted ones, or change the delimiter of single-quoted literals (e.g. q{...} instead of '...'). As for single quotes inside double-quoted strings and regex literals, these can be replaced with \x27, which you might find nicer than '\''.
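For example, both of these one-liners print an apostrophe:
perl -e 'print "It'\''s here\n"'    # '\'' closes the quote, escapes ', reopens it
perl -e 'print "It\x27s here\n"'    # \x27 inside a double-quoted string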
(I'm assuming your goal is just to have all of the code in a single file so that you don't have multiple files to install)
Sure, there's a way to do this, but it's cumbersome. You might want to consider converting the shell script entirely to Perl (or the Perl script entirely to shell).
So ... A way to do this might be:
#!/bin/sh
echo "shell"
perl -E '
say "perl with arg=$ARGV[0]"
' fred
echo "shell again"
Of course, you'd have to be careful with your quotes within the Perl part of the program.
You might also be able to use a heredoc for the Perl part to avoid quoting issues; a sketch of that follows.
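A sketch of that heredoc variant: with perl - the program is read from standard input, and anything after the - lands in @ARGV (this assumes the Perl part doesn't itself need STDIN):
#!/bin/sh
echo "shell"
perl - fred <<'END_OF_PERL'
use feature 'say';
say "perl with arg=$ARGV[0]";
END_OF_PERL
echo "shell again"
The quoted delimiter ('END_OF_PERL') keeps the shell from touching the $ variables in the Perl code.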

How can I convert Perl one-liners into complete scripts?

I find a lot of Perl one-liners online. Sometimes I want to convert these one-liners into a script, because otherwise I'll forget the syntax of the one-liner.
For example, I'm using the following command (from nagios.com):
tail -f /var/log/nagios/nagios.log | perl -pe 's/(\d+)/localtime($1)/e'
I'd like to replace it with something like this:
tail -f /var/log/nagios/nagios.log | ~/bin/nagiostime.pl
However, I can't figure out the best way to quickly throw this stuff into a script. Does anyone have a quick way to throw these one-liners into a Bash or Perl script?
You can convert any Perl one-liner into a full script by passing it through the B::Deparse compiler backend that generates Perl source code:
perl -MO=Deparse -pe 's/(\d+)/localtime($1)/e'
outputs:
LINE: while (defined($_ = <ARGV>)) {
    s/(\d+)/localtime($1);/e;
}
continue {
    print $_;
}
The advantage of this approach over decoding the command line flags manually is that this is exactly the way Perl interprets your script, so there is no guesswork. B::Deparse is a core module, so there is nothing to install.
Take a look at perlrun:
-p
causes Perl to assume the following loop around your program, which makes it iterate over filename arguments somewhat like sed:
LINE:
while (<>) {
    ...             # your program goes here
} continue {
    print or die "-p destination: $!\n";
}
If a file named by an argument cannot be opened for some reason, Perl warns you about it, and moves on to the next file. Note that the lines are printed automatically. An error occurring during printing is treated as fatal. To suppress printing use the -n switch. A -p overrides a -n switch.
BEGIN and END blocks may be used to capture control before or after the implicit loop, just as in awk.
So, simply take this chunk of code, insert your code at the "# your program goes here" line, and voilà, your script is ready!
Thus, it would be:
#!/usr/bin/perl -w
use strict; # or use 5.012 if you've got newer perls
while (<>) {
    s/(\d+)/localtime($1)/e;
} continue {
    print or die "-p destination: $!\n";
}
That one's really easy to store in a script!
#! /usr/bin/perl -p
s/(\d+)/localtime($1)/e
The -e option introduces Perl code to be executed—which you might think of as a script on the command line—so drop it and stick the code in the body. Leave -p in the shebang (#!) line.
In general, it's safest to stick to at most one "clump" of options in the shebang line. If you need more, you could always throw their equivalents inside a BEGIN {} block.
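For example, the runtime equivalent of -i.bak is the $^I variable, so in-place editing can move out of the shebang line while -p stays put (a sketch):
#!/usr/bin/perl -p
BEGIN { $^I = ".bak"; }   # same effect as adding -i.bak to the options
s/(\d+)/localtime($1)/e;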
Don't forget chmod +x ~/bin/nagiostime.pl
You could get a little fancier and embed the tail part too:
#! /usr/bin/perl -p
BEGIN {
    die "Usage: $0 [ nagios-log ]\n" if @ARGV > 1;
    my $log = @ARGV ? shift : "/var/log/nagios/nagios.log";
    @ARGV = ("tail -f '$log' |");
}
s/(\d+)/localtime($1)/e
This works because the code written for you by -p uses Perl's "magic" (2-argument) open, which treats a trailing | in the "filename" as a command to run and read from.
With no arguments, it transforms nagios.log, but you can also specify a different log file, e.g.,
$ ~/bin/nagiostime.pl /tmp/other-nagios.log
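The same magic open is available in ordinary code; a sketch of roughly what the implicit -p loop ends up doing with that @ARGV entry:
#!/usr/bin/perl
use strict;
use warnings;

# a trailing | in the "filename" makes 2-argument open run a command and read from it
open my $fh, "tail -f '/var/log/nagios/nagios.log' |" or die $!;
while (my $line = <$fh>) {
    $line =~ s/(\d+)/localtime($1)/e;
    print $line;
}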
Robert has the "real" answer above, but it's not very practical. The -p switch does a bit of magic, and other options have even more (e.g. check out the logic behind the -i flag). In practice, I'd simply make a bash alias/function to wrap the one-liner, rather than convert it to a script.
Alternatively, here's your oneliner as a script: :)
#!/usr/bin/bash
# takes any number of arguments: the filenames to pipe to the perl filter
tail -f "$@" | perl -pe 's/(\d+)/localtime($1)/e'
There are some good answers here if you want to keep the one-liner-turned-script around and possibly even expand upon it, but the simplest thing that could possibly work is just:
#!/usr/bin/perl -p
s/(\d+)/localtime($1)/e
Perl will recognize parameters on the hashbang line of the script, so instead of writing out the loop in full, you can just continue to do the implicit loop with -p.
But writing the loop explicitly and using -w and "use strict;" are good if you plan to use it as a starting point for writing a longer script.
#!/usr/bin/env perl
while (<>) {
    s/(\d+)/localtime($1)/e;
    print;
}
The while loop and the print are what -p does automatically for you.

How can I pass arguments from one Perl script to another?

I have a script which I run and after it's run it has some information that I need to pass to the next script to run.
The Unix/DOS commands are like so:
perl -x -s param_send.pl
perl -x -s param_receive.pl
param_send.pl is:
# Send param
my $send_var = "This is a variable in param_send.pl...\n";
$ARGV[0] = $send_var;
print "Argument: $ARGV[0]\n";
param_receive.pl is:
# Receive param
my $receive_var = $ARGV[0];
print "Parameter received: $receive_var";
But nothing is printed. I know I am doing it wrong but from the tutorials I can't figure out how to pass a parameter from one script to the next!
You can use a pipe character on the command line to connect stdout of the first program to stdin of the second; the first program writes to it (using print) and the second reads from it (using the <> operator).
perl param_send.pl | perl param_receive.pl
If you want the output of the first command to be the arguments to the second command, you can use xargs:
perl param_send.pl | xargs perl param_receive.pl
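A sketch of the two scripts reworked for the pipe model (the sender prints instead of assigning to @ARGV; the receiver reads STDIN):
# param_send.pl
my $send_var = "This is a variable in param_send.pl...\n";
print $send_var;

# param_receive.pl
my $receive_var = <STDIN>;
print "Parameter received: $receive_var";
They are then run as: perl param_send.pl | perl param_receive.pl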
The %ENV hash in Perl holds the environment variables such as PATH, USER, etc. Any modifications to these variables are reflected only in the current process and any child processes it may spawn. The parent process (which happens to be the shell in this particular instance) does not see these changes, so when the param_send.pl script ends, all changes are lost.
For example, if you were to do something like this,
#!/usr/bin/perl
# param_send.pl
$ENV{'VAL'} = "Value to send to param_recv";
#!/usr/bin/perl
# param_recv.pl
print $ENV{'VAL'};
This wouldn't work since VAL is lost when param_send exits. One workaround is to call param_recv.pl from param_send.pl and pass the value as an environment variable or an argument,
#!/usr/bin/perl
# param_send.pl
$ENV{'VAL'} = "Value to send to param_recv";
system( $^X, "param_recv.pl");
OR
#!/usr/bin/perl
# param_send.pl
system( $^X, 'param_recv.pl', 'Value to send to param_recv' );
Other options include piping the output, or you could check out this Perlmonks node for a more esoteric solution.
@ARGV is created at runtime and does not persist, so your second script will not be able to see the $ARGV[0] you assigned in the first script. As crashmstr points out, you need to execute the second script from the first, using one of the many methods for doing so. For example:
my $send_var = "This is a variable in param_send.pl...\n";
`perl param_receive.pl $send_var`;
or use an environment variable using %ENV:
my $send_var = "This is a variable in param_send.pl...\n";
$ENV{'send_var'} = $send_var;
For more advanced solutions, think about using sockets or IPC.

How do I use Perl on the command line to search the output of other programs?

As I understand it (Perl is new to me), Perl can be used to script against a Unix command line. What I want to do is run (hardcoded) command line calls and search the output of these calls for regex matches. Is there a way to do this simply in Perl? How?
EDIT: The sequence here is:
- Call another program.
- Run a regex against its output.
my $command = "ls -l /";
my @output = `$command`;
for (@output) {
    print if /^d/;
}
The qx// quasi-quoting operator (for which backticks are a shortcut) is stolen from shell syntax: run the string as a command in a new shell, and return its output (as a string or a list, depending on context). See perlop for details.
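For instance, context determines what the backticks return:
my $command = "ls -l /";
my $all   = `$command`;   # scalar context: the entire output as one string
my @lines = `$command`;   # list context: one element per output line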
You can also open a pipe:
open my $pipe, "$command |";
while (<$pipe>) {
    # do stuff
}
close $pipe;
This (a) avoids gathering the entire command's output into memory at once, and (b) gives you finer control over running the command. For example, you can avoid having the command be parsed by the shell:
open my $pipe, '-|', @command, '< single argument not mangled by shell >';
See perlipc for more details on that.
You might be able to get away without Perl, as others have mentioned. However, if there is some Perl feature you need, such as extended regex features or additional text manipulation, you can pipe your output to perl and then do what you need. Perl's -e switch lets you specify the Perl program on the command line:
command | perl -ne 'print if /.../'
There are several other switches you can pass to perl to make it very powerful on the command line. These are documented in perlrun. Also check out some of the articles in Randal Schwartz's Unix Review column, especially his first article for them. You can also google for Perl one liners to find lots of examples.
Do you need Perl at all? How about
command -I use | grep "myregexp" && dosomething
right in the shell?
#!/usr/bin/perl
sub my_action {
    print "Implement some action here\n";
}

open PROG, "/path/to/your/command|" or die $!;
while (<PROG>) {
    /your_regexp_here/ and my_action();
    print $_;
}
close PROG;
This will scan the output of your command, match the regexp, and perform some action (which here is just printing the line).
In Perl you can use backticks to execute commands in the shell. Here is a document on using backticks. I'm not sure how to capture the output, but I'm sure there's more than one way to do it.
You would indeed use a one-liner in a case like this. I recently coded up one that I use, among other ways, to produce output listing the directory structure present in a .zip archive (one dir entry per line). Using that output as an example of command output that we'd like to filter, we can pipe it in and then use perl with the -n and -e flags to filter the incoming data (and/or do other things with it):
[command_producing_text_output] | perl -MFile::Path -n -e ^
"BEGIN{@PTM=()} if (m{^perl/(bin|lib(?!/site))}) {chomp;push @PTM,$_}" ^
-e "END{@WDD=mkpath (\@PTM,1);" ^
-e "printf qq/Created %u dirs to reflect part of structure present in the .ZIP file\n/, scalar(@WDD);}"
The shell syntax used, including the quoting of the Perl code and the escaping of newlines, reflects CMD.exe usage in Windows NT-like consoles. If you need to, mentally replace "^" with "\" and " with ' in the appropriate places.
The one-liner above collects only the directory names that start with "perl/bin" or "perl/lib" (not followed by "/site"); it then creates those directories. You wind up with an (empty) tree that you can use for whatever evil purposes you desire.
The main point is to illustrate that there are flags available (-n, -p) that let perl loop over each input record (line), and that what you can do is unlimited in terms of complexity.