I would like to execute two or more commands back to back, but these commands are stored in a variable in my script. For example,
var="/usr/bin/ls ; pwd ; pooladm -d; pooladm -e"
The problem arises when I execute this variable via my script.
Suppose I go:
#!/bin/ksh -p
..
..
var="/usr/bin/ls ; pwd;pooladm -d; pooladm -e"
..
..
$var # DOES NOT WORK ..BUT WORKS WITH EVAL
It doesn't work.
But the moment I use eval:
eval $var
It works brilliantly.
I was just wondering if there is any other way to execute a bunch of commands stored in a variable without using eval.
Also, is eval considered bad programming practice? My coding standards appear to shun its usage rather than embrace it. Please do let me know.
Remember that the shell looks for command separators like ';' before it expands variables. By the time $var is expanded, the semicolons are just ordinary characters in the resulting words, so the shell runs /usr/bin/ls with ';', 'pwd;pooladm' and the rest as literal arguments instead of treating them as separate commands.
On the other hand, eval takes its arguments and re-scans them; now the shell sees '/usr/bin/ls', 'pwd', and so on as separate commands, and it works.
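A quick way to see the difference (a minimal sketch for a POSIX-style shell):
var="ls ; pwd"
$var          # runs ls with the literal arguments ';' and 'pwd'
eval "$var"   # re-parses the string, so ls and pwd run as two commands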
eval is a little chancy because it leaves a possible security hole -- consider if someone managed to get 'rm -rf /' into the string. But it's a useful tool.
Use backticks and echo. In your case
`echo $var`
You could invoke another copy of the shell to run the command:
sh -c "$var"
This isn't necessarily better than using eval. The main practical difference is that eval will run the commands in the context of the current shell, while "sh -c" runs the commands in a separate shell instance. If var contains commands to set environment variables or change the current directory, you may or may not want those commands to affect the current shell.
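For example, with a hypothetical var that changes directory:
var="cd /tmp"
sh -c "$var"   # the directory change happens in the child shell and is lost
pwd            # still the original directory
eval "$var"    # the directory change happens in the current shell
pwd            # now /tmp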
Related
I have a script with a line like this:
$foo = $bar if -t;
Near as I can tell, this is saying,
if this script is run from a terminal, set $foo to $bar.
If this script was run from cron, that would evaluate to false.
Have I got this right?
In perldoc perlfunc, under the collection of functions called -X, you can read:
-t Filehandle is opened to a tty.
Also
If the argument is omitted, tests $_, except for -t, which tests STDIN.
Which is to say your code does -t STDIN.
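Spelled out, an equivalent check might look like this (a minimal sketch; the variables are stand-ins for the ones in the question):
#!/usr/bin/perl
use strict;
use warnings;

my ($foo, $bar) = (0, 42);

# -t STDIN is true when STDIN is attached to a terminal;
# under cron, or with input redirected, it is false
$foo = $bar if -t STDIN;

print "foo is $foo\n";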
The -t file test is documented in perlfunc, although you get to it by looking up -X instead of the specific file test:
% perldoc -f -X
Depending on your task, IO::Interactive may do the job better since there can be a few gotchas with figuring out if something is truly interactive.
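For example, a minimal sketch with IO::Interactive (assuming the module is installed):
use strict;
use warnings;
use IO::Interactive qw(is_interactive);

# is_interactive() is more careful than a bare -t check
if (is_interactive()) {
    print "running interactively\n";
}
else {
    print "not interactive (cron, pipe, redirection, ...)\n";
}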
If you want to know that you are running under cron (and not non-interactive in some other way), you might consider having a variable set in your crontab (or using one already set) and simply looking for it. In your crontab:
IN_CRON=1
Then, in the script:
do_something() if $ENV{IN_CRON};
I am writing code in Perl with an embedded shell script in it:
#!/usr/bin/perl
use strict;
use warnings;
our sub main {
    my $n;
    my $n2 = 0;
    $n = chdir("/home/directory/");
    if ($n) {
        print "change directory successful $n \n";
        $n2 = system("cd", "section");
        system("ls");
        print "$n2 \n";
    }
    else {
        print "no success $n \n";
    }
    print "$n\n";
}
main();
But it doesn't work: when I do the ls, it doesn't show the new files. Does anyone know another way of doing it? I know I can use chdir(), but that is not the only problem, as I have other commands which I have created that are simply shell commands put together. So does anyone know how exactly to use the CLI from Perl so that the shell commands stay attached to the same process, rather than a new process being created for each system call? I really don't know what to do.
Edit: Good point made by mob that each system call is a single process, so it dies every time. But what I am trying to do is create a Perl script that follows an algorithm which decides the flow of control of the shell script. So how do I make all these shell commands run in the same process?
system spawns a new process, and any changes made to the environment of the new process are lost when the process exits. So calling system("cd foo") will change the directory to foo inside of a very short-lived process, but won't have any effect on the current process or any future subprocesses.
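If the goal is just the directory change, doing it in Perl itself makes it stick, because chdir affects the current process and every later system call inherits it. A minimal sketch (directory name taken from the question):
# chdir changes this process's working directory,
# so subsequent system() children start there too
chdir("section") or die "cannot chdir to section: $!";
system("ls");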
To do what you want to do (*), combine your commands into a single system call.
$n2 = system("cd section; ls");
You can use Perl's rich quoting features to pass longer sequences of commands, if necessary.
$n2 = system q{cd section
    if ls foo ; then
        echo we found foo in section
        ./process foo
    else
        echo we did not find foo\!
        ./create_foo > foo
    fi};
$n2 = system << "EOSH";
cd section
./process bar > /tmp/log
cd ../sekshun
./process foo >> /tmp/log
if grep -q Error /tmp/log ; then
echo there were errors ...
fi
EOSH
(*) of course there are easy ways to do this entirely in Perl, but let's assume that the OP eventually will need some function only available in an external program
system("cd", "section"); attempts to execute the program cd, but there is no such program on your machine.
There is no such program because each process has its own current working directory, and one process cannot change another process's current working directory. (Programs would malfunction left and right if it were possible.)
It looks like you are attempting to have a Perl program execute a shell script. That requires recreating the shell in Perl. (More specific goals might have simpler solutions.)
What I am trying to do is create a perl script which follows an algorithm which decides the flow of control of the shell script.
Minimal change:
Create a shell script that prompts for instructions/commands. Have your Perl script launch the shell script using Expect and feed it answers/commands.
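A rough sketch of that approach with the Expect module (the script name and prompt are made up):
use Expect;

# spawn the interactive shell script and drive it from Perl
my $exp = Expect->spawn("./driver.sh")
    or die "cannot spawn driver.sh: $!";

# wait up to 10 seconds for the script's prompt, then send a command
$exp->expect(10, "command> ");
$exp->send("do_step_one\n");

$exp->expect(10, "command> ");
$exp->send("exit\n");
$exp->soft_close;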
I need to get an example file from a find command in a Perl script, in order to build another system call afterwards. For some reason, the find command gets stuck when I call it from the script. Here is what I need to do:
my $search_dir = "/something/like/this/??/??/??";
# the triple '??' are needed here
my $cmd = "find $search_dir -name \"\*.$var1.token1.$var2.ext\" | head -n 1";
my $first_example_file = `$cmd`; chomp $first_example_file;
This gets stuck when I run it through Perl; it never finishes executing the command, whereas the constructed $cmd runs in no time if I copy+paste it and run it in my bash terminal. Any ideas?
Try using the File::Find Perl module for finding files. If you would like to use bash's find in your Perl, then you might have to use $(..) in your command.
I am not into Perl … just trying to help out.
Update:
As stated in the comments by Rohaq, you can also use File::Find::Rule.
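For example, a sketch with File::Find::Rule ($var1 and $var2 stand in for the question's variables, and the module must be installed):
use strict;
use warnings;
use File::Find::Rule;

my ($var1, $var2) = ('aaa', 'bbb');   # placeholder values

# expand the ??/??/?? part first, then search those directories
my @dirs = glob("/something/like/this/??/??/??");

my ($first_example_file) = File::Find::Rule
    ->file
    ->name("*.$var1.token1.$var2.ext")
    ->in(@dirs);

print "$first_example_file\n" if defined $first_example_file;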
I'd wager globbing (shell metacharacter expansion) is involved. But regardless, try chopping the command up. Does it work without the pipe? What about without the ?? in the pathname? What happens if you prepend 'echo' ("echo find ...")? Still hanging? Then you can try it under perl -d, the debugger; perldoc perldebug is your friend.
I have a shell script, with a list of shell variables, which is executed before entering a programming environment.
I want to use a Perl script to enter the programming environment:
system("environment_defaults.sh");
system("obe");
But when I enter the environment the variables are not set.
When you call your second command, it's not done in the environment you modified in the first command. In fact, there is no environment remaining from the first command, because the shell used to invoke "environment_defaults.sh" has already exited.
To keep the context of the first command in the second, invoke them in the same shell:
system("source environment_defaults.sh && obe");
Note that you need to invoke the shell script with source in order to perform its actions in the current shell, rather than invoking a new shell to execute them.
Alternatively, modify your environment at the beginning of every shell (e.g. with .bash_profile, if using bash), or make your environment variable changes in perl itself:
$ENV{FOO} = "hello";
system('echo $FOO');
Each system call spawns its own sh -c process, and environment variables set in one are isolated within it.
Also, doesn't calling environment_defaults.sh itself start another sh process, within which those variables are set in isolation?
Alternatively, start the Perl script with these environment variables already exported; they will then be set for all of its child processes.
Each process gets its own environment, and each time you call "system" it runs a new process. So, what you are doing won't work. You'll have to run both commands in a single process.
Be aware, however, that after your Perl script exits, any environment variables it sets won't be available to you at the command line, because your Perl script is also a process with its own environment.
(UPDATE: Oh, this is not exactly what you asked for, but it might be useful for someone.)
If GDB is installed, you can set/modify parent shell variables with the following hack (non-strict style is used for clarity):
#!/usr/bin/perl
# export.pl
use File::Temp qw( tempfile );
%vars = (
a => 3,
b => 'pigs'
);
$ppid = getppid;
my @putvars = map { "call putenv (\"$_=$vars{$_}\")" } keys %vars;
$" = "\n";
$cmds = <<EOF;
attach $ppid
@putvars
detach
quit
EOF
($tmpfh, $tmpfn) = tempfile( UNLINK => 1 );
print $tmpfh $cmds;
`gdb -x $tmpfn`;
Test:
$ echo "$a $b"
$ ./export.pl
$ echo "$a $b"
3 pigs
This can now be done with the Env::Modify module
use Env::Modify 'source';   # or: use Env::Modify qw(source system);
source("environment_defaults.sh");
# environment from environment_defaults.sh is now available
# to Perl and to the following 'system' call
system("obe");
In Perl, is it possible to make 'exec', 'system', and 'qx' use a shell other than /bin/sh (without using a construct like 'exec "$SHELL -c ..."', and without recompiling perl)?
EDIT: The motivation for this question is a bash script that does 'export -f foo' and then uses perl in a subshell to invoke the function directly via 'system "foo"'. I am not sure that this technique will work with all sh, and although 'system "/bin/bash -c foo"' may work in that scenario, I wouldn't expect the exported function to propagate through all variants of /bin/sh. But mostly I was just curious, and am now curious about how to extend the solution to qx. Also, since I know nothing about non-unix platforms, I'd like to avoid hard coding the path to an alternate shell in the solution.
You can override exec and system. See perldoc perlsub for the details, but here is roughly what you want (modulo some quoting bugs I don't feel like trying to fix):
#!/usr/bin/perl
use strict;
use warnings;
use subs qw/system/;
sub system {
    # handle the one-argument version: hand the string to the user's shell
    if (@_ == 1) {
        return CORE::system "$ENV{SHELL} -c $_[0]";
    }
    # handle the multi-argument version: no shell involved
    return CORE::system @_;
}
print "normal system:\n";
system "perl", "-e", q{system q/ps -ef | grep $$/};
print "overloaded system:\n";
system 'ps -ef | grep $$';
exec and system will use the shell (which will likely not be /bin/sh on non-UNIX systems) if you pass them a single argument containing shell metacharacters. (Details are described in perlfunc.)
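For example (the filenames are made up):
# one string with shell metacharacters: Perl hands it to the shell
system("ls *.txt | wc -l");

# list form: the program is executed directly, no shell involved
system("ls", "-l", "file1.txt");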
You may want to have a look at IPC::Run3 as an alternative to system
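A minimal IPC::Run3 sketch (assuming the module is installed; the command is arbitrary):
use strict;
use warnings;
use IPC::Run3;

my ($out, $err);
# run the command list directly and capture stdout/stderr;
# no shell is involved
run3 [ "ls", "-l" ], \undef, \$out, \$err;
print $out;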
Why don't you want to use 'exec "$SHELL -c ..."'? If you don't want see that code every time you call exec or system, just hide it in a subroutine. That's what they're there for. :)
sub my_exec {
    exec $ENV{SHELL}, '-c', @_;
}
If you want to do that, however, I suggest somehow sanitizing $ENV{SHELL} so that people don't do odd things to your script by setting weird values. You might want to ensure that the shell is listed in /etc/shells or whatever way your system lists approved login shells. You also need to do a bit more work to make this taint-clean, which you should probably do if you are going to send data to another process.
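A sketch of that check, assuming a conventional /etc/shells (error handling kept minimal):
use strict;
use warnings;

# return true if $shell appears verbatim in /etc/shells
sub shell_is_approved {
    my ($shell) = @_;
    open my $fh, '<', '/etc/shells' or die "cannot read /etc/shells: $!";
    while (my $line = <$fh>) {
        chomp $line;
        return 1 if $line eq $shell;
    }
    return 0;
}

die "unapproved shell: $ENV{SHELL}\n"
    unless shell_is_approved($ENV{SHELL});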
exec doesn't use /bin/sh when you give it a list.
It just execs the program you specify. No shells. (As noted above, the one-argument form does go through a shell if the string contains shell metacharacters.)
If you want the command to go through a shell, you have to do that yourself.