Shell Programming inside Perl - perl

I am writing code in Perl with an embedded shell script in it:
#!/usr/bin/perl
use strict;
use warnings;
sub main {
    my $n;
    my $n2 = 0;
    $n = chdir("/home/directory/");
    if ($n) {
        print "change directory successful $n \n";
        $n2 = system("cd", "section");
        system("ls");
        print "$n2 \n";
    }
    else {
        print "no success $n \n";
    }
    print "$n\n";
}
main();
But it doesn't work: when I run the ls, it doesn't show the new files. Does anyone know another way of doing this? I know I can use chdir(), but that is not the only problem, as I have other commands which I have created that are simply shell commands put together. So does anyone know how exactly to use the CLI from Perl, so that the shell script stays attached to the same process rather than a new process being made for each system call? I really don't know what to do.
The question has been edited to improve it; please don't mind the edits if the question is clear.
Edit: good point made by mob that each system call is a single process, so it dies every time. But what I am trying to do is create a Perl script which follows an algorithm that decides the flow of control of the shell script. So how do I make all these shell commands run in the same process?

system spawns a new process, and any changes made to the environment of the new process are lost when the process exits. So calling system("cd foo") will change the directory to foo inside of a very short-lived process, but won't have any effect on the current process or any future subprocesses.
To do what you want to do (*), combine your commands into a single system call.
$n2 = system("cd section; ls");
You can use Perl's rich quoting features to pass longer sequences of commands, if necessary.
$n2 = system q{cd section
    if ls foo ; then
        echo we found foo in section
        ./process foo
    else
        echo we did not find foo\!
        ./create_foo > foo
    fi};
$n2 = system << "EOSH";
cd section
./process bar > /tmp/log
cd ../sekshun
./process foo >> /tmp/log
if grep -q Error /tmp/log ; then
echo there were errors ...
fi
EOSH
(*) Of course there are easy ways to do this entirely in Perl, but let's assume that the OP will eventually need some function that is only available in an external program.
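For comparison, here is a minimal Perl-native sketch of the same control flow; ./process and ./create_foo are the hypothetical helpers from the shell example above, and the -e test stands in for ls foo:
use strict;
use warnings;

chdir 'section' or die "cannot chdir to section: $!";
if (-e 'foo') {
    print "we found foo in section\n";
    system('./process', 'foo') == 0 or warn "process failed: $?";
}
else {
    print "we did not find foo!\n";
    # create_foo writes to stdout, so let a shell do the redirection
    system('./create_foo > foo') == 0 or warn "create_foo failed: $?";
}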

system("cd", "section"); attempts to execute the program cd, but there is no such program on your machine.
There is no such program because each process has its own current work directory, and one process cannot change another process's current work directory. (Programs would malfunction left and right if it was possible.)
It looks like you are attempting to have a Perl program execute a shell script. That requires recreating the shell in Perl. (More specific goals might have simpler solutions.)
What I am trying to do is create a perl script which follows an algorithm which decides the flow of control of the shell script.
Minimal change:
Create a shell script that prompts for instructions/commands. Have your Perl script launch the shell script using Expect and feed it answers/commands.
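A minimal sketch of that approach, assuming the Expect module from CPAN; driver.sh and its enter command: prompt are hypothetical stand-ins for your shell script:
use strict;
use warnings;
use Expect;

# Spawn the interactive shell script under a pseudo-terminal
my $exp = Expect->spawn('./driver.sh')
    or die "cannot spawn driver.sh: $!";

# Wait up to 10 seconds for the script's prompt, then feed it a command
$exp->expect(10, 'enter command:');
$exp->send("cd section\n");

$exp->expect(10, 'enter command:');
$exp->send("ls\n");

$exp->soft_close();
Because the commands are all fed to one long-lived shell process, state such as the current directory persists between them.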

Related

Persistent effects of modifying process environment via system

I am making a few calls to the system, mainly cd commands, as certain functions need to be called from certain directories on my system. However, I have noticed that once a call is finished, the effects of that call are lost.
For example, let's say that I start in /home/project and then call:
system("setenv home/project/env/NeededEnvironment");
system("make cfile.o");
The second system call doesn't know about the first call setting the environment needed for the file to compile. I have tried putting them into one system call separated by ; as well, but I have the same problem. Is there any way to get the effect of the first call to be saved?
That is how system works: it creates a subshell to execute your command, and when the command is complete, the subshell exits leaving your perl process unaffected.
Section 8 of the Perl FAQ also answers this question.
I {changed directory, modified my environment} in a perl script. How come the change disappeared when I exited the script? How do I get my changes to be visible?
Unix
In the strictest sense, it can't be done—the script executes as a different process from the shell it was started from. Changes to a process are not reflected in its parent—only in any children created after the change. There is shell magic that may allow you to fake it by eval()ing the script's output in your shell; check out the comp.unix.questions FAQ for details.
You want code along the lines of
system("cd /home/project/env/NeededEnvironment && make cfile.o") == 0
or warn "$0: make failed";
or use the -C option to make and avoid shell argument parsing as in
system("make", "-C", "/home/project/env/NeededEnvironment", "cfile.o") == 0
or warn "$0: make failed";
If you are writing a Perl script, use Perl itself and shell-out as rarely as possible.
If you need to change your directory:
chdir 'some/other/dir';
If you need to set an environment variable:
$ENV{ SOME_VAR } = 'Some value';
Update
Here are some more shell commands whose Perl builtin equivalents should be used instead (see the sketch after the module list below):
mkdir
unlink
rmdir
Modules everyone should know about:
File::Copy
File::Path
File::Basename
File::Spec
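A minimal sketch pulling the builtins and modules together; all of the file names are made up for illustration:
use strict;
use warnings;
use File::Copy qw(copy);
use File::Path qw(make_path remove_tree);
use File::Basename qw(basename);
use File::Spec;

make_path('build/tmp');                          # mkdir -p build/tmp
copy('in.txt', 'build/in.txt') or die "copy failed: $!";
my $name = basename('build/in.txt');             # "in.txt"
my $path = File::Spec->catfile('build', $name);  # portable "build/in.txt"
unlink $path or die "unlink failed: $!";         # rm build/in.txt
remove_tree('build');                            # rm -r build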

Perl Shell Execution

I'm sure you have all used Metasploit.
In Metasploit, when the user presses the enter key or types any command, Metasploit executes it and returns to an msf:> prompt.
I was wondering how I could do this in Perl (pretty much make a Perl shell, which executes commands and comes back with that little identifier).
while (1) {
    print ">> ";
    my $command = <STDIN>;
    last unless defined $command;    # stop on EOF (Ctrl-D)
    chomp $command;
    if ($command =~ m/help/) {
        print "Help is on its way\n";
    }
    elsif ($command =~ m/exit/) {
        exit(1);
    }
}
Take a look at the Term::* modules (a sketch follows the list):
Term::ReadLine
Term::Shell
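A minimal sketch of such a prompt loop using the core Term::ReadLine module; the msf:> prompt and the help/exit commands mirror the question, and the rest is an assumption:
use strict;
use warnings;
use Term::ReadLine;

my $term = Term::ReadLine->new('mini-shell');
while (defined(my $line = $term->readline('msf:> '))) {
    next unless $line =~ /\S/;    # ignore blank input
    $term->addhistory($line);     # recall with the up arrow
    if ($line =~ /^help\b/) {
        print "Help is on its way\n";
    }
    elsif ($line =~ /^exit\b/) {
        last;
    }
    else {
        system($line);            # hand anything else to a shell
    }
}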
Following David's answer, it's time for me to promote Zoidberg. Zoidberg is another Perl shell (like Psh), but it is modular, embeddable, and extendable.
You can use Zoidberg::Shell to build a shell for your application, or
you can use the Zoidberg::Fish plugin system to build a plugin for your needs which would run inside Zoidberg itself. It would most likely define some commands, and possibly a syntax and operation mode. The canonical example of this is a SQL plugin which allows Zoidberg to recognize SQL statements, pass them to a waiting database handle, and return the results, directly from inside the shell!
As it happens, I am the new maintainer. Zoidberg just had its first release in several years which corrected several bugs that had popped up over the years. So while I am not an expert in it yet, I am probably the closest to being one that exists.
Start your reading about Zoidberg at the zoiduser man page, then read more about plugins at zoiddevel.
There really is something called Perl Shell (Psh), and it's available from CPAN.
I haven't tried it, but the documentation is all there:
$ cpan
cpan> install Psh
EDIT
I've played with it a bit. I had to change PS1 so it wouldn't interfere with Psh. Originally, my PS1 was set to:
PS1=$(print -n "`logname`#`hostname`:";if [[ "${PWD#$HOME}" != "$PWD" ]] then; print -n "~${PWD#$HOME}"; else; print -n "$PWD";fi;print "\n$ ")
But, Psh didn't like it. Instead, if I use the Bash settings, it works great:
PS1="\u#\h:\W: PSH> "
I also get the following warnings when starting:
Using an array as a reference is deprecated at /Library/Perl/5.12/Psh/StrategyBunch.pm line 260.
Using an array as a reference is deprecated at /Library/Perl/5.12/Psh/Strategy/Darwin_apps.pm line 47.
But it does start up. I haven't figured out shell history editing, but it does take Perl scripts:
david#DaveBook:david: PSH> foreach $foo (<*>) {
> print "$foo\n";
> }

Eval for multiple command execution in ksh93, Solaris

I would like to execute two or more commands back to back, but these commands are stored in a variable in my script. For example,
var="/usr/bin/ls ; pwd ; pooladm -d; pooladm -e"
The problem arises when I execute this variable via my script.
Suppose I go:
#!/bin/ksh -p
..
..
var="/usr/bin/ls ; pwd;pooladm -d; pooladm -e"
..
..
$var # DOES NOT WORK ..BUT WORKS WITH EVAL
It doesn't work.
But the moment I use eval :
eval $var
It works brilliantly.
I was just wondering if there is any other way to execute a bunch of commands stored in a variable without using eval.
Also, is eval usage considered bad programming practice? My coding standards appear to shun its usage rather than embrace it. Please do let me know.
Remember that the shell only parses the line once. So when you expand your $var, the result is split into words, but control operators like ; are not re-interpreted; with no re-parsing, the shell cannot run '/usr/bin/ls ; pwd ; pooladm -d ; pooladm -e' as separate commands.
On the other hand, eval takes its arguments and re-scans them, so now the shell sees /usr/bin/ls, pwd, and so on as separate commands. It works.
eval is a little chancy because it leaves a possible security hole -- consider if someone managed to get 'rm -rf /' into the string. But it's a useful tool.
Use backticks and echo. In your case
`echo $var`
You could invoke another copy of the shell to run the command:
sh -c "$var"
This isn't necessarily better than using eval. The main practical difference is that eval will run the commands in the context of the current shell, while sh -c runs the commands in a separate shell instance. If var contains commands that set environment variables or change the current directory, you may or may not want those commands to affect the current shell.

Perl as a batch-script tool - fully piping child processes?

Apologies if some of the terminology is slightly off here. Feel free to correct me if I use a wrong term for something.
Is it possible to use Perl as an "advanced shell" for running "batch" scripts? (on Windows)
The problem I face when replacing a .bat/.cmd script that's getting too complicated with a perl script is that I can't easily run sub processes as a shell does.
That is, I would like to do the same thing from my perl script as a shell does when invoking a child process, that is, fully "connecting" STDIN, STDOUT and STDERR.
Example:
foo.bat -
#echo off
echo Hello, this is a simple script.
set PARAM_X=really-simple
:: The next line will allow me to simply "use" the tool on the shell I have open,
:: that is STDOUT + STDERR of the tool are displayed on my STDOUT + STDERR and if
:: I enter something on the keyboard it is sent to the tools STDIN
interactive_commandline_tool.exe %PARAM_X%
echo The tool returned %ERRORLEVEL%
However, I have no clue how to fully implement this in perl (is it possible at all?):
foo.pl -
print "Hello, this is a not so simple script.\n";
my $param_x = get_more_complicated_parameter();
# Magic: This sub executes the process and hooks up my STDIN/OUT/ERR and
# returns the process error code when done
my $errlvl = run_executable("interactive_commandline_tool.exe", $param_x);
print "The tool returned $errlvl\n";
How can I achieve this in perl? I played around with IPC::Open3 but it seems this doesn't do the trick ...
Probably you'll find IPC::Run3 useful. It allows you to capture both STDOUT and STDERR (but not pipe them in real time). The command's error level will be returned in $?.
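A minimal sketch of that approach; the tool name and the parameter value are the hypothetical ones from the question:
use strict;
use warnings;
use IPC::Run3;

# Give the tool an empty STDIN and capture both output streams
run3 ['interactive_commandline_tool.exe', 'really-simple'],
    \undef, \my $out, \my $err;

my $errlvl = $? >> 8;    # exit status, as with system()
print "The tool returned $errlvl\n";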
Why not this way:
print "Hello, this is a not so simple script.\n";
my $param_x = get_more_complicated_parameter();
system('cmd.exe', $param_x);
my $errorlevel = $? >> 8;
print "The tool returned $errorlevel\n";
sub get_more_complicated_parameter { 42 }
I don't have your interactive program, but the shell this executed allowed me to enter commands, and it inherited the environment defined in Perl, etc.
I have been using Perl as a replacement for more complicated shell scripts on Windows for a long time, and so far everything I needed has been possible.
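It is worth spelling out why this works: plain system() already gives you most of the run_executable() from the question, because the child process inherits the parent's STDIN, STDOUT, and STDERR. A minimal sketch under that assumption (the tool name comes from the question):
use strict;
use warnings;

sub run_executable {
    my @cmd = @_;
    my $rc = system @cmd;                 # child inherits our STDIN/OUT/ERR
    die "failed to execute $cmd[0]: $!\n" if $rc == -1;
    return $? >> 8;                       # the child's exit code
}

my $errlvl = run_executable('interactive_commandline_tool.exe', 'really-simple');
print "The tool returned $errlvl\n";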

How does Perl interact with the scripts it is running?

I have a Perl script that runs a different utility (called Radmind, for those interested) that has the capability to edit the filesystem. The Perl script monitors output from this process, so it would be running throughout this whole situation.
What would happen if the utility being run by the script tried to edit the script file itself, that is, replace it with a newer version? Does Perl load the script and any linked libraries at the start of its execution and then ignore the script file itself unless told specifically to mess with it? Or perhaps, would all hell break loose, and executions might or might not fail depending on how the new file differed from the one being run?
Or maybe something else entirely? Apologies if this belongs on SuperUser—seems like a gray area to me.
It's not quite as simple as pavel's answer states, because Perl doesn't actually have a clean division of "first you compile the source, then you run the compiled code"[1], but the basic point stands: Each source file is read from disk in its entirety before any code in that file is compiled or executed and any subsequent changes to the source file will have no effect on the running program unless you specifically instruct perl to re-load the file and execute the new version's code[2].
[1] BEGIN blocks will run code during compilation, while commands such as eval and require will compile additional code at run-time
[2] Most likely by using eval or do, since require and use check whether the file has been loaded already and ignore it if it has.
For a fun demonstration, consider
#! /usr/bin/perl
die "$0: where am I?\n" unless -e $0;
unlink $0 or die "$0: unlink $0: $!\n";
print "$0: deleted!\n";
for (1 .. 5) {
    sleep 1;
    print "$0: still running!\n";
}
Sample run:
$ ./prog.pl
./prog.pl: deleted!
./prog.pl: still running!
./prog.pl: still running!
./prog.pl: still running!
./prog.pl: still running!
./prog.pl: still running!
Your Perl script will be compiled first, then run; so changing your script while it runs won't change the running compiled code.
Consider this example:
#!/usr/bin/perl
use strict;
use warnings;

push @ARGV, $0;    # put this script's own name on the in-place edit list
$^I = '';          # enable in-place editing, as with perl -i

my $foo = 42;
my $bar = 56;

my %switch = (
    foo => 'bar',
    bar => 'foo',
);

while (<ARGV>) {
    s/my \$(foo|bar)/my \$$switch{$1}/;    # swap the variable names in the source
    print;
}

print "\$foo: $foo, \$bar: $bar\n";
and watch the result when run multiple times.
The script file is read once into memory. You can edit the file from another utility after that -- or from the Perl script itself -- if you wish.
As the others said, the script is read into memory, compiled, and run. GBacon shows that you can delete the file and it will keep running. The code below shows that you can change the file, re-execute it with do, and get the new behavior.
use strict;
use warnings;
use English qw<$PROGRAM_NAME>;

open my $ph, '>', $PROGRAM_NAME
    or die "cannot open $PROGRAM_NAME: $!";
print $ph q[print "!!!!!!\n";];
close $ph;

do $PROGRAM_NAME;
... DON'T DO THIS!!!
Perl scripts are simple text files: the text is read into memory, compiled there, and the script file is not read again. (Exceptions are modules that come into lexical scope after compilation, and do and eval statements in some cases...)
There is a well known utility that exploits this behavior. Look at CPAN, which is probably in your /usr/bin directory; there is a CPAN version for each version of Perl on your system. CPAN will sense when a new version of CPAN itself is available, ask if you want to install it, and if you say "y" it will download the newer version and respawn itself right where you left off, without losing any data.
The logic of this is not hard to follow. Read /usr/bin/CPAN and then follow the version-specific scripts keyed to what $Config::Config{version} would generate on your system.
Cheers.