I'm trying to do some very rough benchmarking, so I'd like to run the time command from my script. I have the following:
#!/usr/bin/perl
use strict;
my $command = "/usr/bin/time -f \"%U,%S,%E,%P,%K,%M\" ...";
my $stats = `$command`;
print "stats: $stats\n";
Unfortunately, it looks like the result of the command is never assigned to $stats. When I execute the script, I get something like the following:
0.15,0.03,0:00.44,43%,0,143808
stats:
So it looks like it runs the time command successfully, but prints out the value to STDOUT instead of assigning the value to $stats. When I use another command, like ls, it seems to work as expected. What am I doing wrong here?
time prints to stderr.
$ /usr/bin/time -f "%U,%S,%E,%P,%K,%M" echo foo >/dev/null
0.00,0.00,0:00.03,10%,0,2352
$ /usr/bin/time -f "%U,%S,%E,%P,%K,%M" echo foo >/dev/null 2>/dev/null
$
So just add 2>&1 to your command.
time writes to standard error, so you need to redirect it to standard output to capture it with Perl's backticks:
my $command = "/usr/bin/time -f \"%U,%S,%E,%P,%K,%M\" ... 2>&1";
Hi, I would like to pass a parameter to my Perl script, which should be executed through qsub.
So I run:
qsub -l nodes=node01 -v "i=500" Test.pl
While in Test.pl I try to read the i parameter in several ways:
use Getopt::Long;
$result = GetOptions ("i" => \$num);
open(FILE,">/data/home/FILEout.txt");
print FILE "$num\n";
print FILE "$ARGV[0]";
close(FILE);
Unfortunately, the output file of the Perl script is always empty.
Do you have any suggestions? Where am I wrong? Help please.
According to all the documentation I can find, -v sets an environment variable, so you'd use $ENV{i} to get 500. (Check your own documentation to confirm.)
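A minimal sketch of Test.pl reading the value that way (the output path is taken from the question):
#!/usr/bin/perl
use strict;
use warnings;

# qsub -v "i=500" exports i into the job's environment rather than
# passing it as a command-line argument
my $num = $ENV{i};
die "i is not set in the environment\n" unless defined $num;

open(my $fh, '>', '/data/home/FILEout.txt') or die "cannot open output: $!";
print $fh "$num\n";
close($fh);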
If you wanted to actually pass an arg to your script, you could try using
qsub ... Test.pl -i=500
But based on my web search, that might only work for some versions of qsub. Others would require that you make a helper script (say Test.sh)
#!/bin/sh
Test.pl "-i=$i"
along with the command
qsub ... -v 'i=500' Test.sh
If qsub ... Test.pl ...args... is supported and you can change your script, the simplest solution is
qsub ... Test.pl 500
and
my ($i) = @ARGV;
I finally got a solution that works with PBS Professional 10.4.
There are two ways to solve it.
The first is the following:
echo "perl /path/to/Test.pl -i 500" | qsub -l nodes=node06
The second is to use
qsub -l nodes=node06 -v i=500 Test.pl
and read the parameter in Test.pl through $ENV{i}.
This works fine:
system("perl -c C:/Users/mytest/scripts/file_name.pm")
This command gives many lines of output in Cygwin and a single "syntax OK" line in CentOS. Since I'll be using Cygwin, what I am trying to do is get this output into a variable and use it later in my program. How can I do it?
Thanks in advance for your time.
Instead of system, use backticks:
my $output = `perl -c C:/Users/mytest/scripts/file_name.pm`;
If you also want to include STDERR output, use:
my $output = `perl -c C:/Users/mytest/scripts/file_name.pm 2>&1`;
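Building on that, a small sketch that also checks $?, which holds the child's exit status after the backticks (the path is the one from the question):
my $output = `perl -c C:/Users/mytest/scripts/file_name.pm 2>&1`;
if ($? == 0) {
    print "compiled OK: $output";
} else {
    print "compile failed: $output";
}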
I want to trim the output of the wipe command with sed.
I tried this one:
wipe -vx7 /dev/sdb 2>&1 | sed -u 's/.*\ \([0-9]\+\).*/\1/g'
but it doesn't work for some reason.
When I use echo and sed to print the output of the wipe command, it works:
echo "/dev/sdb: 10%" | sed -u 's/.*\ \([0-9]\+\).*/\1/g'
What am I doing wrong?
Thanks!
That looks like a progress indicator. They are often output directly to the tty instead of to stdout or stderr. You may be able to use the expect script called unbuffer, or some other method, to create a pseudo-tty. Be aware that there will probably be more junk such as \r, etc., that you may need to filter out.
Demonstration:
$ cat foo
#!/bin/sh
echo hello > /dev/tty
$ a=$(./foo)
hello
$ echo $a
$ a=$(unbuffer ./foo)
$ echo $a
hello
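Applied to the original command, a hedged sketch (assuming unbuffer is installed; tr rewrites the carriage returns mentioned above into newlines):
unbuffer wipe -vx7 /dev/sdb 2>&1 | tr '\r' '\n' | sed -u 's/.*\ \([0-9]\+\).*/\1/g'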
$ cat test.pl
my $pid = 5892;
my $not = system("top -H -p $pid -n 1 | grep myprocess | wc -l");
print "not = $not\n";
$ perl test.pl
11
not = 0
$
I want to capture the result i.e. 11 into a variable. How can I do that?
From perlfaq8:
You're confusing the purpose of system() and backticks (``). system() runs a command and returns exit status information (as a 16 bit value: the low 7 bits are the signal the process died from, if any, and the high 8 bits are the actual exit value). Backticks (``) run a command and return what it sent to STDOUT.
$exit_status = system("mail-users");
$output_string = `ls`;
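For illustration, a small sketch decoding that 16-bit status word, with the bit layout exactly as described in the quote above:
my $status = system("ls");
my $signal = $status & 127;   # low 7 bits: the signal the process died from, if any
my $exit   = $status >> 8;    # high 8 bits: the actual exit value
print "exit=$exit signal=$signal\n";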
There are many ways to execute external commands from Perl. The most common, with their meanings, are listed below (a short sketch of the open form follows the list):
system(): you want to execute a command and don't want to capture its output
exec: you don't want to return to the calling Perl script
backticks: you want to capture the output of the command
open: you want to pipe the command (as input or output) to your script
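A minimal sketch of that open form (the ls command is only an illustration):
# Open a read pipe from an external command and consume its output
open(my $fh, '-|', 'ls', '-l') or die "cannot run ls: $!";
while (my $line = <$fh>) {
    print "got: $line";
}
close($fh);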
Also see How can I capture STDERR from an external command?
The easiest way is to use backticks (``) in Perl. They execute what is inside and return what was printed to stdout:
my $pid = 5892;
my $var = `top -H -p $pid -n 1 | grep myprocess | wc -l`;
print "not = $var\n";
This should do it.
Try using qx{command} rather than backticks. To me, it's a bit better: you can build commands containing quotes (SQL, for example) without worrying about escaping them; depending on the editor and screen, my old eyes tend to miss the tiny backticks; and it never has an issue with being overloaded, like angle brackets versus glob.
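For instance, a hedged sketch (the mysql invocation and database name are purely illustrative):
# Braces as delimiters mean the double quotes inside need no escaping
my $count = qx{mysql -e "SELECT COUNT(*) FROM users" mydb};
print $count;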
Using backticks or qx helps; thanks everybody for the answers. However, I found that the output contains a trailing newline that I needed to remove, so I used chomp.
chomp($host = `hostname`);
chomp($domain = `domainname`);
$fqdn = $host.".".$domain;
More information here:
http://irouble.blogspot.in/2011/04/perl-chomp-backticks.html
Use backticks for system commands; that lets you store their results in Perl variables.
my $pid = 5892;
my $not = `top -H -p $pid -n 1 | grep myprocess | wc -l`;
print "not = $not\n";
You can also use IPC::Run, for example:
use IPC::Run qw(run);
my $pid = 5892;
run [qw(top -H -n 1 -p), $pid],
    '|', sub { print grep { /myprocess/ } <STDIN> },
    '|', [qw(wc -l)],
    '>', \my $out;
print $out;
processes run without a shell subprocess
output can be piped to Perl subs
the syntax is very similar to the shell
Is there an idiomatic way to simulate Perl's diamond operator in bash? With the diamond operator,
script.sh | ...
reads stdin for its input and
script.sh file1 file2 | ...
reads file1 and file2 for its input.
One other constraint is that I want to use the stdin in script.sh for something else other than input to my own script. The below code does what I want for the file1 file2 ... case above, but not for data provided on stdin.
command - $@ <<EOF
some_code_for_first_argument_of_command_here
EOF
I'd prefer a Bash solution but any Unix shell is OK.
Edit: for clarification, here is the content of script.sh:
#!/bin/bash
command - $@ <<EOF
some_code_for_first_argument_of_command_here
EOF
I want this to work the way the diamond operator would work in Perl, but it only handles filenames-as-arguments right now.
Edit 2: I can't do anything that goes
cat XXX | command
because the stdin for command is not the user's data. The stdin for command is my data in the here-doc. I would like the user data to come in on the stdin of my script, but it can't be the stdin of the call to command inside my script.
Sure, this is totally doable:
#!/bin/bash
cat "$@" | some_command_goes_here
Users can then call your script with no arguments (or '-') to read from stdin, or multiple files, all of which will be read.
If you want to process the contents of those files (say, line-by-line), you could do something like this:
for line in $(cat "$@"); do
echo "I read: $line"
done
Edit: Changed $* to "$@" to handle spaces in filenames, thanks to a helpful comment.
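Note that the $(cat ...) loop above still splits on all whitespace, not just newlines; a hedged sketch of a more robust line-by-line variant:
cat "$@" | while IFS= read -r line; do
    echo "I read: $line"
done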
Kind of cheesy, but how about
cat file1 file2 | script.sh
I am (like everyone else, it seems) a bit confused about exactly what the goal is here, so I'll give three possible answers that may cover what you actually want. First, the relatively simple goal of getting the script to read from either a list of files (supplied on the command line) or from its regular stdin:
if [ $# -gt 0 ]; then
exec < <(cat "$@")
fi
# From this point on, the script's stdin is redirected from the files
# (if any) supplied on the command line
Note: the double-quoted use of $@ is the best way to avoid problems with funny characters (e.g. spaces) in filenames -- $* and unquoted $@ both mess this up. The <() trick I'm using here is a bash-only feature; it fires off cat in the background to feed data from files supplied on the command line, and then we use exec to replace the script's stdin with the output from cat.
...but that doesn't seem to be what you actually want. What you seem to really want is to pass the supplied filenames or the script's stdin as arguments to a command inside the script. This requires sort of the opposite process: converting the script's stdin into a file (actually a named pipe) whose name can be passed to the command. Like this:
if [[ $# -gt 0 ]]; then
command "$#" <<EOF
here-doc goes here
EOF
else
command <(cat) <<EOF
here-doc goes here
EOF
fi
This uses <() to launder the script's stdin through cat to a named pipe, which is then passed to command as an argument. Meanwhile, command's stdin is taken from the here-doc.
Now, I think that's what you want to do, but it's not quite what you've asked for, which is to both redirect the script's stdin from the supplied files and pass stdin to the command inside the script. This can be done by combining the above techniques:
if [ $# -gt 0 ]; then
exec < <(cat "$@")
fi
command <(cat) <<EOF
here-doc goes here
EOF
...although I can't think why you'd actually want to do this.
The Perl diamond operator essentially loops across all the command line arguments, treating each as a filename. It opens each file and reads them line-by-line. Here's some bash code that will do approximately the same.
for f in "$@"
do
# Do something with $f, such as...
    cat "$f" | command1 | command2
    # or:
    command1 < "$f"
    # or read $f line-by-line:
    cat "$f" | while read line_from_f
    do
        # Do stuff with $line_from_f
    done
done
You want to take the first argument and do something with it, and then either read from any files specified or stdin if no files?
Personally, I'd suggest using getopt to indicate arguments using the "-a value" syntax to help disambiguate, but that's just me. Here's how I'd do it in bash without getopts:
firstarg=${1:?usage: $0 arg [file1 .. fileN]}
shift
typeset -a files
if [[ $# -gt 0 ]]
then
files=( "$#" )
else
files=( "/dev/stdin" )
fi
for file in "${files[@]}"
do
whatever_you_want < "$file"
done
The :? operator will die if there are no args specified, since you seem to want at least one arg either way. After grabbing that, shift the args over by one, and then either use the remaining args as your file list, or the bash special filehandle /dev/stdin if there were no other args.
I think that the "if no files are specified, use /dev/stdin - otherwise use the files on the command line" piece is probably what you're looking for, but the rest of the code is at least useful for context.
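For example, hypothetical invocations (the script name is purely illustrative):
./myscript.sh myarg file1.txt file2.txt   # processes the two named files
./myscript.sh myarg                       # falls back to /dev/stdin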
Also a little cheesy, but how about this:
if [[ $# -eq 0 ]]
then
# read from stdin
else
    # read from "$@" (args)
fi
If you need to read and process line-by-line (which is likely) and don't want to copy/paste the same code twice (which is likely), define a function in your script and just pass the lines one-by-one to this function, and process them in said function.
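A minimal sketch of that approach (process_line is a hypothetical name):
process_line() {
    echo "I read: $1"
}

if [[ $# -eq 0 ]]; then
    # No files given: read lines from stdin
    while IFS= read -r line; do process_line "$line"; done
else
    # Otherwise read every named file, line by line
    cat "$@" | while IFS= read -r line; do process_line "$line"; done
fi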
Why not use `cat $*` in the script? For example:
x=`cat $*`
echo $x