I have a Perl script which needs to be called from a clean environment. The way I call the script is as follows.
env -i test.pl
After invoking env -i, certain environment variables are left intact, such as PATH. However, according to the %ENV hash, the PATH variable is empty.
Here is the script (abridged):
#!/usr/bin/perl -w
system('echo "system: $PATH"');
print "perl hash: $ENV{'PATH'}\n";
This is the output when I run env -i test.pl:
system: /usr/local/bin:/bin:/usr/bin
Use of uninitialized value in concatenation (.) or string at test.pl line 4.
perl hash:
How can I get the ENV hash to be correct when I invoke the script under env -i?
The -i option to env does in fact clear $PATH out of the environment. You see inconsistent results from the system call because of how system works:
If there is only one scalar argument, the argument is checked for shell metacharacters, and if there are any, the entire argument is passed to the system's command shell for parsing (this is /bin/sh -c on Unix platforms, but varies on other platforms). If there are no shell metacharacters in the argument, it is split into words and passed directly to execvp, which is more efficient.
Because you are calling system with a single command line that must be parsed by a shell, a shell is started; it initializes its own $PATH on startup (because shells do that), and that is the $PATH you're seeing.
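A quick way to confirm this (a small sketch added for illustration, not part of the original script) is to bypass the shell with the list form of system:
# List form: no shell is started, so there is no shell-supplied $PATH.
# Under env -i this prints an empty value, matching %ENV.
my $path = defined $ENV{PATH} ? $ENV{PATH} : '';
system('/bin/echo', "no shell sees: $path");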
After invoking env -i, certain environment variables are left intact, such as PATH. However, according to the %ENV hash, the PATH variable is empty.
On my rhel system, env -i does:
-i, --ignore-environment
start with an empty environment
So when you run the Perl program, the system() call is forking a shell, which is probably reading an rc file that sets PATH. But when you print $ENV{PATH} directly from the Perl program, it is empty, as expected.
You should either manually set PATH, or determine the full paths to the various executables that you intend to run.
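If the script itself should repair its environment, setting a known-good PATH near the top is enough; the value below is only an example, adjust it for your system:
# Example only: give the clean environment a minimal, known-good PATH.
$ENV{PATH} = '/usr/local/bin:/usr/bin:/bin';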
If you're going to spend a lot of time running system() commands, then a shell script, with some Perl to process intermediate results, may be a better model in this case.
Related
I have a Perl script as below, where I want to access a network path on a remote Windows machine from a Linux machine using rsh.
$cmd = "rsh -l $username $host \"pushd \\\\network\\path\\to\\the\\directory && dir\"";
print $cmd, "\n";
print qx($cmd);
When I run the script, the third line prints The system cannot find the path specified. However, if I run the command printed by the second line directly from the terminal, it works fine.
I'm not able to understand why the script is not working. If the command works from the terminal, it should work using qx() too.
While you escape your metacharacters so that they survive Perl's double-quote interpolation and the remote shell, qx hands the resulting string to a local shell as well, which consumes another level of backslashes, so you may need to add yet another level of escaping. From the documentation of qx:
A string which is (possibly) interpolated and then executed as a system command with /bin/sh or its equivalent. ...
How that string gets evaluated is entirely subject to the command interpreter on your system. On most platforms, you will have to protect shell metacharacters if you want them treated literally. This is in practice difficult to do, as it's unclear how to escape which characters. See perlsec for a clean and safe example of a manual fork() and exec() to emulate backticks safely.
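One way around that extra level of interpretation (a sketch based on the command in the question, untested against your setup) is to skip the local shell entirely and start rsh with a list, so the backslashes are only parsed once, on the remote side:
# Sketch: start rsh without a local shell; the UNC path reaches the remote
# Windows shell with its double leading backslash intact.
open(my $rsh, '-|', 'rsh', '-l', $username, $host,
     'pushd \\\\network\\path\\to\\the\\directory && dir')
    or die "cannot start rsh: $!";
print while <$rsh>;
close($rsh);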
I have a series of Perl scripts that I want to run one after another on a Unix system. What type of file would this be, and what could I refer to it as in documentation? A Bash, batch, or shell script file?
Any help would be appreciated.
Simply put the commands you would use to run them manually in a file (say, perlScripts.sh):
#!/bin/sh
perl script1.pl
perl script2.pl
perl script3.pl
Then from the command line:
$ sh perlScripts.sh
Consider using Perl itself to run all of the scripts. If the scripts don't take command line arguments, you can simply use:
do 'script1.pl';
do 'script2.pl';
etc.
do 'file_name' essentially reads the file and executes its code inside the current interpreter. It gives each file its own lexical scope, however, so the scripts' my variables won't clash.
This approach is more efficient, because it starts only one instance of the Perl interpreter. It will also avoid repeated loading of modules.
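If you go this route, it is worth checking each do for errors; the usual idiom (adapted from perldoc -f do) looks like this:
# Run the scripts in order, stopping with a useful message if one fails
# to compile or cannot be read.
for my $script ('script1.pl', 'script2.pl', 'script3.pl') {
    my $result = do $script;
    unless ($result) {
        die "couldn't parse $script: $@" if $@;
        die "couldn't read $script: $!"  unless defined $result;
    }
}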
If you do need to pass arguments or capture the output, you can still do it in a Perl file with backquotes or system:
my $output = `script3.pl file1.txt`; #If the output is needed.
system("script3.pl","file1.txt"); #If the output is not needed.
This is similar to using a shell script, but it is cross-platform: the scripts then rely only on Perl being present and on no other external programs, and it is easy to add functionality to the calling script.
I have a Korn shell script at a location like /opt/apps/abc/folder/properties.env. I can execute it from Unix bash using the dot command:
. /opt/apps/abc/folder/properties.env
This works.
I have a Perl script abc.pl from which I am calling the script properties.env. I tried the following different approaches:
system('/usr/bin/ksh','-c', '. /opt/apps/abc/folder/properties.env');
/usr/bin/ksh -c /opt/apps/abc/folder/properties.env;
system('. /opt/apps/abc/folder/properties.env');
None of the above work. I don't want to use exec because I want to return to the Perl script. What am I doing wrong?
The environment changes will only last as long as the life of the ksh session spawned by the system command. If you want the environment changes to affect the Perl script, then you have to source that file before you launch the Perl program.
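For example, from the shell that launches the Perl program (using the path from your question):
. /opt/apps/abc/folder/properties.env && perl abc.pl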
If you need those environment variables in your Perl code (not in the environment from which you called perl), you can also read and parse properties.env and set the values in the %ENV hash, e.g.:
$ENV{'ENV_VAR1'} = 'VALUE_OF_ENV_VAR1';
Using system() spawns another process, as the other poster said; changing the environment in the child does not affect the parent.
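A sketch of that idea, assuming properties.env only sets and exports plain single-line variables: source it in a ksh child, dump the resulting environment, and copy it into %ENV:
# Sketch: run properties.env through ksh, print the resulting environment,
# and import it into this process's %ENV.
my @lines = qx(/usr/bin/ksh -c '. /opt/apps/abc/folder/properties.env && env');
for my $line (@lines) {
    chomp $line;
    my ($name, $value) = split /=/, $line, 2;
    $ENV{$name} = $value if defined $value;
}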
How would I determine what script, program, or shell executed my Perl script?
Example: I might want to have human readable output if executed from shell (customized for each type of shell), a different type of output if called as a script from another perl script, and a machine readable format if executed from a program such as a continuous integration server.
Motivation: I have a tool that changes its output based on which shell executes it. I'd normally implement this behavior as an option to the script, but this tool's design doesn't allow for options. Other shells have environment variables that indicate what shell is running. I'm working on a patch to support Powershell, which has no such special variable.
Edit: Many of these answers happen to be Linux specific. Unfortunately, PowerShell is Windows only. getppid, the $ENV{SHELL} variable, and shelling out to ps won't help in this case. This script needs to run cross-platform.
You can use getppid(). Take this snippet in child.pl:
my $ppid = getppid();
system("ps --no-headers $ppid");
If you run it from the command line, system will show bash or similar (among other things). Execute it with system("perl child.pl"); in another script, e.g. parent.pl, and you will see that perl parent.pl executed it.
To capture just the name of the process with arguments (thanks to ikegami for the correct ps syntax):
my $ppid = getppid();
my $ps = `ps --no-headers -o cmd $ppid`;
chomp $ps;
EDIT: An alternative to this approach might be to create soft links to your script, have each context use a different link to access it, and inspect $0 to build logic around that; a sketch follows.
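A rough sketch of that idea (the link names are hypothetical):
# Sketch: dispatch on the name the script was invoked under ($0).
# 'mytool-ci' and 'mytool-human' would be symlinks to this same script.
my ($invoked_as) = $0 =~ m{([^/\\]+)\z};
if ($invoked_as eq 'mytool-ci') {
    print qq({"status":"ok"}\n);        # machine-readable output
}
else {
    print "Everything looks good.\n";   # human-readable output
}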
I would suggest a different approach to accomplish your goal. Instead of guessing at the context, make it more explicit. Each use case is wholly separate, so have three different interfaces.
1. A function which can be called inside a Perl program. This would likely return a Perl data structure. This is far easier, faster and more reliable than parsing script output. It would also serve as the basis for the scripts.
2. A script which outputs for the current shell. It can look at $ENV{SHELL} to discover what shell is running. For bonus points, provide a switch to explicitly override.
3. A script which can be called inside a non-Perl program, such as your continuous integration server, and issue machine readable output. XML and/or JSON or whatever.
2 and 3 would be just thin wrappers to format the data coming out of 1.
Each is tailored to fit its specific need. Each will work without heuristics. Each will be far simpler than trying to guess the context and what the user wants.
If you can't separate 2 and 3, have the continuous integration server set an environment variable and look for it.
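A very rough sketch of how 1, 2, and 3 could hang together; every name here is hypothetical:
# Sketch: one function returns a data structure (interface 1); thin wrappers
# turn it into shell-friendly (2) or machine-readable (3) output.
sub tool_status {
    return { ok => 1, items_checked => 42 };
}

my $data = tool_status();
if (@ARGV && $ARGV[0] eq '--json') {    # interface 3: machine readable
    print qq({"ok":$data->{ok},"items_checked":$data->{items_checked}}\n);
}
else {                                  # interface 2: human/shell output
    print "ok: $data->{ok}, items checked: $data->{items_checked}\n";
}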
Depending on your environment, you may be able to pick it up from the environment variables. Consider the following code:
/usr/bin/perl -MData::Dumper -e 'print Dumper(\%ENV);' | grep sh
On my Ubuntu system, it gets me:
'SHELL' => '/bin/bash',
So I guess that says I'm running perl from a bash shell. If you use something else, the SHELL variable may give you a hint.
But let's say you know you're in bash, but perl is run from a subshell. Then try:
/bin/sh -c "/usr/bin/perl -MData::Dumper -e 'print Dumper(\%ENV);'" | grep sh
You will find:
'_' => '/bin/sh',
'SHELL' => '/bin/bash',
So the shell is still bash, but bash has a variable $_ which also shows the absolute filename of the shell or script being executed, and that may give another valuable hint. Similarly, in other environments there will most probably be clues left in Perl's %ENV hash that point you in the right direction.
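In Perl that boils down to something like the following sketch; neither variable is guaranteed to exist, so treat them as hints only:
# Sketch: look for shell hints in the environment. SHELL is the user's login
# shell; '_' is set by bash and zsh to the command being executed.
my $login_shell = defined $ENV{SHELL} ? $ENV{SHELL} : 'unknown';
my $invoker     = defined $ENV{'_'}   ? $ENV{'_'}   : 'unknown';
print "SHELL=$login_shell, _=$invoker\n";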
If you're running PowerShell 2.0 or above (most likely), you can infer that PowerShell is the parent process by examining the environment variable %psmodulepath%. By default, it points to the system modules under %windir%\system32\windowspowershell\v1.0\modules; this is what you would see if you examine the variable from cmd.exe.
However, when PowerShell starts up, it prepends the user's default module search path to this environment variable, which looks like %userprofile%\documents\windowspowershell\modules. This is inherited by child processes. So, your logic would be to test whether %psmodulepath% starts with %userprofile% to detect PowerShell 2.0 or higher. This won't work in PowerShell 1.0 because it does not support modules.
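As a Perl sketch of that heuristic (Windows only, and untested on your setup; environment key case can vary, so the lookup is done case-insensitively):
# Sketch: PowerShell 2.0+ prepends the user's module directory (under
# %userprofile%) to PSModulePath; cmd.exe leaves only the system path.
my %env_lc = map { lc($_) => $ENV{$_} } keys %ENV;
my $psmodulepath = defined $env_lc{psmodulepath} ? $env_lc{psmodulepath} : '';
my $userprofile  = defined $env_lc{userprofile}  ? $env_lc{userprofile}  : '';
my $in_powershell = $userprofile ne '' && index($psmodulepath, $userprofile) == 0;
print $in_powershell ? "looks like PowerShell 2.0+\n" : "probably not PowerShell\n";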
This is on Windows XP with PowerShell v2.0, so take it with a grain of salt.
In a cmd.exe shell, I get:
PSModulePath=C:\WINDOWS\system32\WindowsPowerShell\v1.0\Modules\
whereas in the PowerShell console window, I get:
PSModulePath=E:\Home\user\WindowsPowerShell\Modules;C:\WINDOWS\system32\WindowsPowerShell\v1.0\Modules\
where E:\Home\user is where my "My Documents" folder is. So, one heuristic may be to check whether PSModulePath contains a user-dependent path.
In addition, in a console window, I get:
!::=::\
in the environment. From the PowerShell ISE, I get:
!::=::\
!C:=C:\Documents and Settings\user
I am using a system call to do some tasks
system('myframework mycode');
but it complains of missing environment variables.
Those environment variables are set in my bash shell (from which I run the Perl code).
What am I doing wrong?
Does the system call create a brand new shell (without environment variable settings)? How can I avoid that?
It's complicated. Perl does not necessarily invoke a shell. Perldoc says:
If there is only one scalar argument, the argument is checked for shell metacharacters, and if there are any, the entire argument is passed to the system's command shell for parsing (this is /bin/sh -c on Unix platforms, but varies on other platforms). If there are no shell metacharacters in the argument, it is split into words and passed directly to execvp, which is more efficient.
So it actually looks like your arguments would be passed straight to execvp. Furthermore, whether the shell loads your .bashrc, .profile, or .bash_profile depends on whether the shell is interactive. Most likely it isn't, but that is worth checking.
If you don't want to invoke a shell, call system with a list:
system 'mycommand', 'arg1', '...';
system qw{mycommand arg1 ...};
If you want a specific shell, call it explicitly:
system "/path/to/mysh -c 'mycommand arg1 ...'";
I don't think it's a question of which shell is used, since environment variables are always inherited by subprocesses unless they are cleaned up explicitly.
Are you sure you have exported your variables?
This will work:
$ A=5 perl -e 'system(q{echo $A});'
5
$
This will work too:
$ export A=5
$ perl -e 'system(q{echo $A});'
5
$
This wouldn't:
$ A=5
$ perl -e 'system(q{echo $A});'
$
system() calls /bin/sh as its shell. If you are on a somewhat different box, such as an embedded ARM system, it is worth reading the man page for the exec family of calls to see the default behavior. You can source your .profile if you need to, since system() takes a command line:
system(" . myhome/me/.profile && /path/to/mycommand")
I struggled for two days working on this. In my case, the environment variables were correctly set under Linux but not under Cygwin.
From mkb's answer I thought to check out man perlrun and it mentions a variable called PERL5SHELL (specific to the Win32 port). The following then solved the problem:
$ENV{PERL5SHELL} = "sh";
As is often the case, all I can really say is "it works for me", although the documentation does imply that this might be a sensible solution:
May be set to an alternative shell that perl must use internally for executing "backtick" commands or system().
If the shell used by perl does not implicitly inherit the environment variables then they will not be set for you.
I wrestled with environment variables not being set for my script in another post, where I needed the env variable $DBUS_SESSION_BUS_ADDRESS to be set, but it wasn't when I called the script as root. You can read through that, but in the end you can check whether %ENV contains your needed variables and, if not, add them.
From perlvar:
%ENV
$ENV{expr}
The hash %ENV contains your current environment. Setting a value in %ENV changes the environment for any child processes you subsequently fork() off.
My problem was that I was running the script under sudo, which didn't preserve all of my user's env variables. Are you running the script under sudo or as some other user, say www-data (apache)?
Simple test:
user#host:~$ perl -e 'print $ENV{q/MY_ENV_VARIABLE/} . "\n"'
and if that doesn't work then you will need to add it to %ENV at the top of your script.
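For example, near the top of the script (the value is only a placeholder; use whatever your environment actually needs):
# Sketch: supply a fallback when the variable was stripped (e.g. by sudo).
$ENV{MY_ENV_VARIABLE} = 'placeholder-value'
    unless defined $ENV{MY_ENV_VARIABLE};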
try system("echo \$SHELL"); on your system.