Are environment variables preserved through qx() in a Perl script?

I have a legacy Perl script, which sets an environment variable
$ENV{"ENV_VAR_NAME"} = $envVar;
and then uses qx() to execute another shell command:
$command = "$xyz";
$result = qx($command);
Will the modified ENV_VAR_NAME be available when qx executes the new command?

Yes.
perlvar says about %ENV:
Setting a value in %ENV changes the environment for any child processes you subsequently fork() off.
And qx does indeed spawn a child process, which can therefore access your modified environment variables.
This can be easily tested:
print "1: ", qx(echo \$X); # Prints "1: "
$ENV{X} = 42;
print "2: ", qx(echo \$X); # Prints "2: 42"

Related

Perl backticks subprocess is causing EOF on STDIN

I'm having an issue with my Perl program, which reads from a file that I open on STDIN and read one line at a time using $line = <>. After I execute a backtick command and then go to read the next line from STDIN, I get undef, signaling EOF. I isolated it to the backtick command using debugging code as follows:
my $dir = dirname(__FILE__);
say STDERR "before: tell(STDIN)=" . tell(STDIN) . ", eof(STDIN)=" . eof(STDIN);
say STDERR "\#export_info = `echo nostdin | perl $dir/pythonizer_importer.pl $fullfile`;";
#export_info = `echo nostdin | perl $dir/pythonizer_importer.pl $fullfile`;
say STDERR "after: tell(STDIN)=" . tell(STDIN) . ", eof(STDIN)=" . eof(STDIN);
The output is:
before: tell(STDIN)=15146, eof(STDIN)=
@export_info = `echo nostdin | perl ../pythonizer_importer.pl ./Pscan.pm`;
after: tell(STDIN)=15146, eof(STDIN)=1
I recently added the `echo nostdin |` to the perl command, which had no effect. How do I run this command and capture its STDOUT without messing up my STDIN? BTW, this is all running on Windows; I fire off the main program from Git Bash, if that matters.
Try locally undefining STDIN before running the backticks command, as the example script below does. Note that any subroutines called from the sub that calls local will see the new value. You can also do open STDIN, "<", "file for child process to read"; after the local *STDIN but before the backticks, but remember to close() the file before STDIN is restored to its old value (see the sketch after the example output below).
The child process is affecting your STDIN because "the STDIN filehandle used by the command is inherited from Perl's STDIN." – perlop
This is just an example; in your actual script, replace the sed command with the command you actually want to run.
use strict;
use warnings;
# Run a command and get its output
sub get_output {
    # Prevent passing our STDIN to the child process
    local *STDIN = undef;
    print "Running sed\n";
    # Replace the sed command with the actual command you want to run
    return `sed 's/a/b/'`;
}

my $output = get_output();
print $output;

# We can still read STDIN even after running a child process
print "Waiting for input\n";
print "Readline is " . scalar readline;
Input:
a
b
c
^D
line
Output:
Running sed
b
b
c
Waiting for input
Readline is line
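For completeness, a minimal sketch of the open STDIN variant mentioned above, where the child reads a file of your choosing instead of your STDIN (the name input.txt is just a placeholder):
sub get_output_from_file {
    local *STDIN;                       # detach our STDIN from the child
    open STDIN, '<', 'input.txt'        # give the child its own input file
        or die "Cannot open input.txt: $!";
    my $out = `sed 's/a/b/'`;           # replace sed with your real command
    close STDIN;                        # close before STDIN is restored at scope exit
    return $out;
}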

How can Perl execute a command in the same shell as itself?

I am not sure whether the title really makes sense for this problem. My problem is simple: I want to write a Perl script that changes my current directory, and I hope the change is kept after the script returns. The script looks like this:
if ($#ARGV != 0) {
print "usage: mycd <dir symbol>";
exit -1;
}
my $dn = shift @ARGV;
if ($dn eq "kite") {
my $cl = `cd ./private`;
print $cl."\n";
}
else {
print "unknown directory symbol";
exit -1;
}
However, my current directory doesn't change after calling the script. What is the reason? How can I resolve it?
No, the Perl script will be run in a subprocess so it will not be able to affect the environment of the process that called it.
There are various tricks you can use such as sourcing shell scripts (in the context of the current shell rather than a sub-process), or using bash functions and aliases, but they won't work here.
How can Perl execute a command in the same shell as itself?
Unless you have a very atypical shell, shells can only receive commands via STDIN, via their command line, and possibly via a command evaluation builtin.
The first two are out unless the Perl script is the parent of the shell, but you could use the third one indirectly as in the following example.
script.pl:
#!/usr/bin/perl
print "chdir 'private'\n";
bash script:
echo "$PWD" # /some/dir
eval "$( script.pl )"
echo "$PWD" # /some/dir/private
Of course, if you use bash, you could hide the details in a shell function.
mycd () {
eval "$( mycd.pl "$#" )"
}
This allows you to use
mycd
or even
mycd foo
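A hypothetical mycd.pl to go with that shell function could look like the sketch below; the kite symbol and ./private directory come from the question, everything else is an assumption:
#!/usr/bin/perl
# mycd.pl -- prints a cd command for the calling shell to eval
use strict;
use warnings;

my %dirs = ( kite => './private' );   # directory symbols understood by mycd

my $dn = shift @ARGV;
if ( defined $dn && exists $dirs{$dn} ) {
    print "cd '$dirs{$dn}'\n";        # the parent shell evals this and changes directory
}
else {
    print STDERR "usage: mycd <dir symbol>\n";
    print "false\n";                  # make the eval'd command fail in the shell
}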

Export a variable from a shell script into a Perl script

Perl Code
`. /home/chronicles/logon.sh `;
print "DATA : $ENV{ID}\n";
In logon.sh we export the variable ID; the backtick command sources the shell script.
Manual run
$> . /home/chronicles/logon.sh
$> echo $LOG
When I run it manually in the terminal (not from the script), I get the output, but it does not work from the script.
I followed this post:
How to export a shell variable within a Perl script?
But it didn't solve the problem.
Note
I am not allowed to change the logon.sh script.
The script inside the backticks is executed in a child process. While environment variables are inherited from parent processes, the parent can't access the environment of child processes.
However, you could have the child print the contents of its environment variable and capture it into a Perl variable, like this:
use strict; use warnings; use feature 'say';
my $var = `ID=42; echo \$ID`;
chomp $var;
say "DATA: $var";
output:
DATA: 42
Here an example shell session:
$ cat test_script
echo foo
export test_var=42
$ perl -E'my $cmd = q(test_var=0; . test_script >/dev/null; echo $test_var); my $var = qx($cmd); chomp $var; say "DATA: $var"'
DATA: 42
The normal output is redirected into /dev/null, so only the echo $test_var shows.
It won't work.
An environment variable can't be inherited from a child process.
The environment variable can be updated in your "manual run" because everything happens in the same bash process: the source command simply runs every command in logon.sh in the current shell.
For more information, see: can we source a shell script in perl script
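To see this for yourself, here is a minimal sketch, assuming logon.sh exports ID as described in the question:
`. /home/chronicles/logon.sh`;          # ID is exported inside the child shell only
print defined $ENV{ID}
    ? "ID=$ENV{ID}\n"
    : "ID is not set in the parent\n";  # the parent's %ENV is untouched by the child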
You could do something like:
#!/usr/bin/perl
use strict;
use warnings;

chomp(my @values = `. myscript.sh; env`);
foreach my $value (@values) {
    my ($k, $v) = split /=/, $value, 2;   # limit to 2 fields so values containing '=' survive
    $ENV{$k} = $v;
}
foreach my $key (keys %ENV) {
    print "$key => $ENV{$key}\n";
}
Well, I've found a solution that seems nice to me. It looks robust, as it uses a widely tested mechanism (running perl with Data::Dumper inside the child shell) to dump the shell environment in Perl syntax, ready for re-injection into the parent perl session.
The export COLOR tty ID line was useful to ask bash to export the additional variables. This seems to work fine.
#!/usr/bin/perl -w
my $perldumpenv='perl -MData::Dumper -e '."'".
'\$Data::Dumper::Terse=1;print Dumper(\%ENV);'."'";
eval '%ENV=('.$1.')' if `bash -c "
. /home/chronicles/logon.sh;
export COLOR tty ID;
$perldumpenv"`
=~ /^\s*\{(.*)\}\s*$/mxs;
# map { printf "%-30s::%s\n",$_,$ENV{$_} } keys %ENV;
printf "%s\n", $ENV{'ID'};
Anyway, if you don't have access to logon.sh, you have to trust it before running such a solution.
Old answer
Here is my first attempt, kept for history purposes; don't look further.
The idea is to parse the command's output after asking it to dump the environment:
my @lines = split "\n", `. /home/chronicles/logon.sh; set`;
map { $ENV{$1} = $2 if /^([^=]+)=(.*)$/ } @lines;
This can now be done with the Env::Modify module
use Env::Modify 'source';
source("/home/chronicles/logon.sh");
... environment setup in logon.sh is now available to Perl ...
Your Perl process is the parent of the shell process, so it won't inherit environment variables from it. Inheritance works the other way, from parent to child.
But when you run the script with backticks, as shown, the standard output of the script is returned to the Perl script. So either modify the shell script to end with the echo $LOG statement you show, or create a new shell script that runs logon.sh and then does echo $LOG. Your Perl script would then be:
my $value = `./myscript.sh`;
print $value;
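If you would rather not create a wrapper script, the same idea can be done inline; a minimal sketch, assuming logon.sh exports ID as in the question:
my $value = `. /home/chronicles/logon.sh >/dev/null 2>&1; echo \$ID`;
chomp $value;
print "DATA : $value\n";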

How to export a shell variable within a Perl script?

I have a shell script, with a list of shell variables, which is executed before entering a programming environment.
I want to use a Perl script to enter the programming environment:
system("environment_defaults.sh");
system("obe");
But when I enter the environment the variables are not set.
When you call your second command, it's not done in the environment you modified in the first command. In fact, there is no environment remaining from the first command, because the shell used to invoke "environment_defaults.sh" has already exited.
To keep the context of the first command in the second, invoke them in the same shell:
system("source environment_defaults.sh && obe");
Note that you need to invoke the shell script with source in order to perform its actions in the current shell, rather than invoking a new shell to execute them.
Alternatively, modify your environment at the beginning of every shell (e.g. with .bash_profile, if using bash), or make your environment variable changes in perl itself:
$ENV{FOO} = "hello";
system('echo $FOO');
Each system call runs its own sh -c process, and environment variables set there are isolated within it. Doesn't calling environment_defaults.sh on its own also create another sh process, in which those variables are set in isolation?
Alternatively, start the Perl script with these environment variables already exported; they will then be set for all of its child processes.
Each process gets its own environment, and each time you call "system" it runs a new process. So, what you are doing won't work. You'll have to run both commands in a single process.
Be aware, however, that after your Perl script exits, any environment variables it sets won't be available to you at the command line, because your Perl script is also a process with its own environment.
(UPDATE: Oh, this is not exactly what you asked for, but it might be useful for someone.)
If GDB is installed, you can set/modify parent shell variables with the following hack (non-strict style is used for clarity):
#!/usr/bin/perl
# export.pl
use File::Temp qw( tempfile );
%vars = (
a => 3,
b => 'pigs'
);
$ppid = getppid;
my @putvars = map { "call putenv (\"$_=$vars{$_}\")" } keys %vars;
$" = "\n";
$cmds = <<EOF;
attach $ppid
@putvars
detach
quit
EOF
($tmpfh, $tmpfn) = tempfile( UNLINK => 1 );
print $tmpfh $cmds;
`gdb -x $tmpfn`
Test:
$ echo "$a $b"
$ ./export.pl
$ echo "$a $b"
3 pigs
This can now be done with the Env::Modify module
use Env::Modify 'source'; # or use Env::Modify qw(source system);
source("environment_defaults.sh");
... environment from environment_defaults.sh is now available
... to Perl and to the following 'system' call
system("obe");

How can I pass arguments from one Perl script to another?

I have a script which I run, and after it has run it has some information that I need to pass to the next script.
The Unix/DOS commands are like so:
perl -x -s param_send.pl
perl -x -s param_receive.pl
param_send.pl is:
# Send param
my $send_var = "This is a variable in param_send.pl...\n";
$ARGV[0] = $send_var;
print "Argument: $ARGV[0]\n";
param_receive.pl is:
# Receive param
my $receive_var = $ARGV[0];
print "Parameter received: $receive_var";
But nothing is printed. I know I am doing it wrong but from the tutorials I can't figure out how to pass a parameter from one script to the next!
You can use a pipe character on the command line to connect stdout from the first program to stdin on the second program, which you can then write to (using print) or read from (using the <> operator).
perl param_send.pl | perl param_receive.pl
If you want the output of the first command to be the arguments to the second command, you can use xargs:
perl param_send.pl | xargs perl param_receive.pl
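With the plain pipe (the first form), the two scripts communicate over standard output and standard input instead of @ARGV; a minimal sketch of the idea, not your original scripts:
# param_send.pl -- write the value to STDOUT
print "This is a variable in param_send.pl...\n";

# param_receive.pl -- read it back from STDIN
my $receive_var = <STDIN>;
print "Parameter received: $receive_var";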
The %ENV hash in Perl holds the environment variables such as PATH, USER, etc. Any modifications to these variables are reflected only in the current process and any child processes it may spawn. The parent process (which happens to be the shell in this particular instance) does not see these changes, so when the param_send.pl script ends, all changes are lost.
For example, if you were to do something like this,
#!/usr/bin/perl
# param_send.pl
$ENV{'VAL'} = "Value to send to param_recv";
#!/usr/bin/perl
# param_recv.pl
print $ENV{'VAL'};
This wouldn't work since VAL is lost when param_send exits. One workaround is to call param_recv.pl from param_send.pl and pass the value as an environment variable or an argument,
#!/usr/bin/perl
# param_send.pl
$ENV{'VAL'} = "Value to send to param_recv";
system( $^X, "param_recv.pl");
OR
#!/usr/bin/perl
# param_send.pl
my $val = "Value to send to param_recv";
system( $^X, 'param_recv.pl', $val );
Other options include piping the output or you could check out this Perlmonks node for a more esoteric solution.
@ARGV is populated separately for each process at startup and does not persist, so your second script will not be able to see the $ARGV[0] you assigned in the first script. As crashmstr points out, you need to execute the second script from the first, using one of the many methods for doing so. For example:
my $send_var = "This is a variable in param_send.pl...\n";
`perl param_receive.pl $send_var`;
or use an environment variable using %ENV:
my $send_var = "This is a variable in param_send.pl...\n";
$ENV{'send_var'} = $send_var;
For more advanced solutions, think about using sockets or IPC.
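As a small taste of the IPC route, the first script could open a pipe to the second and write to its STDIN; a sketch (param_receive.pl would then read with <STDIN> instead of @ARGV):
open my $child, '|-', $^X, 'param_receive.pl'
    or die "Cannot start param_receive.pl: $!";
print {$child} "This is a variable in param_send.pl...\n";
close $child or warn "param_receive.pl exited with status $?";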