Today I wrote a bash script to open and close a particular Python virtual environment. If you want to close the virtual environment, you must run the deactivate command in the same bash process, so you have to use the source command to execute your bash script. (Preface)
Perl question
Here is my Perl script:
#!/usr/bin/perl
BEGIN {
    $\ = "";
}
use warnings;
use strict;
use feature "switch";
use Cwd qw(chdir cwd);
no warnings "experimental::smartmatch";

our $path = $ENV{"PATH"};
if ($path =~ /sspider\/bin/) {
    print "Scrapy virtual environment opened already\n";
    print "Do you want to close it? [y/n]:";
    chomp(my $answer = <STDIN>);
    given ($answer) {
        when (/[yY\n]/) {
            my @path = split(/:/, $path);
            my $scrapy_path;
            for (@path) {
                if (/sspider/) {
                    $scrapy_path = $_;
                }
            }
            print $ENV{PWD}, "\n";
            chdir("$scrapy_path") or die "Can't goto scrapy bin directory";
            print $ENV{PWD}, "\n";
            system("deactivate");
            print "Closed successfully\n";
        }
    }
}
Because the Perl script runs in a new process when I execute it, it can't close the virtual environment.
So I want to know how to solve it. (How can I execute a command in the current bash process from a Perl script?)
It is impossible to run Perl code as part of the current bash process. Bash cannot execute Perl code on its own, so it needs to run the Perl interpreter, which is a separate program and thus runs as a new process.
What can be done, though, is to have the Perl program create a file which then gets sourced by the shell, thus running the shell instructions in that file in the context of the current shell.
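For instance, a minimal sketch of that idea (the file name /tmp/venv_cmds.sh, the script name toggle_venv.pl and the wrapper function are my own illustrations, not part of the original question):
#!/usr/bin/perl
# Sketch: write the command we want the *current* shell to run into a file.
# The interactive shell then picks it up, for example via a wrapper function like
#   venv() { perl toggle_venv.pl; . /tmp/venv_cmds.sh; }
use strict;
use warnings;

open my $fh, '>', '/tmp/venv_cmds.sh' or die "Can't write /tmp/venv_cmds.sh: $!";
print $fh "deactivate\n";    # or "source sspider/bin/activate" to open the environment
close $fh;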
Related
I am running a perl script over ssh, like this:
[dipuh@local ~]$ ssh dipuh@myremote_001 'perl /home/dipuh/a.pl'
The content of a.pl is below:
print "Sleeping \n";
sleep(60);
print "Waking Up";
Here my local terminal waits for the Perl script to finish completely and only then displays the complete output. Even the initial "Sleeping" text is printed only with the final output.
Is there any way I can display the output of each statement in the Perl script in my local terminal at run time, instead of waiting for the whole Perl script to finish?
You are suffering from buffering.
You may either set $| to 1 for the block.
{
local $| = 1;
print "Sleeping \n";
sleep(60);
print "Waking Up";
}
Or use IO::Handle
use IO::Handle;
STDOUT->autoflush(1);
You can try turning autoflush mode on. The old-fashioned way to do it is by adding the following at the top of your script:
$| = 1;
or you can do it the more modern way:
use IO::Handle;
STDOUT->autoflush(1);
Alternatively, you can flush the STDOUT on demand with:
use IO::Handle;
print "Sleeping \n";
STDOUT->flush;
sleep(60);
print "Waking Up";
STDOUT->flush;
I'm trying to get a perl script to loop very quickly (in Solaris).
I have something like this:
#! /bin/perl
while ('true')
{
    use strict;
    use warnings;
    use Time::HiRes;
    system("sh", "shell script.sh");
    Time::HiRes::usleep(10);
}
I want the perl script to execute a shell script every 10 microseconds. The script doesn't fail but no matter how much I change the precision of usleep within the script, the script is still only being executed approx 10 times per second. I need it to loop much faster than that.
Am I missing something fundamental here? I've never used perl before but I can't get the sleep speed I want in Solaris so I've opted for perl.
TIA
Huskie.
EDIT:
Revised script idea thanks to user comments - I'm now trying to do it all within perl and failing miserably!
Basically I'm trying to run the PS command to capture processes - if the process exists I want to capture the line and output to a text file.
#! /bin/perl
while ('true')
{
    use strict;
    use warnings;
    use Time::HiRes;
    open(PS, "ps -ef | grep <program> | egrep -v 'shl|grep' >> grep_out.txt");
    Time::HiRes::usleep(10);
}
This returns the following error:
Name "main::PS" used only once: possible typo at ./ps_test_loop.pl line 9.
This is a pure perl program (not launching any external process) that looks for processes running some particular executable:
#!/usr/bin/perl
use strict;
use warnings;

my $cmd = 'lxc-start';
my $cmd_re = qr|/\Q$cmd\E$|;

$| = 1;

while (1) {
    opendir PROC, "/proc" or die $!;
    while (defined(my $pid = readdir PROC)) {
        next unless $pid =~ /^\d+$/;
        if (defined(my $exe = readlink "/proc/$pid/exe")) {
            if ($exe =~ $cmd_re) {
                print "pid: $pid\n";
            }
        }
    }
    closedir PROC;
    # sleep 1;
}
On my computer this runs at 250 times/second.
The bottleneck is the creation of processes, pipes, and opening the output file. You should be doing that at most once, instead of doing it in each iteration. That's why you need to do everything in Perl if you want to make this faster. Which means: don't call the ps command, or any other command. Instead, read from /proc or use Proc::ProcessTable, as the comments suggest.
Incidentally: the use statement is executed only once (it is essentially shorthand for a require statement wrapped in a BEGIN { } block), so you might as well put it at the top of the file for clarity.
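If you go the module route mentioned above, a minimal sketch with Proc::ProcessTable might look like this (the 'lxc-start' name is just the example executable from the answer above; the available process fields vary somewhat by platform):
use strict;
use warnings;
use Proc::ProcessTable;    # CPAN module, not in core

my $table = Proc::ProcessTable->new;
for my $proc (@{ $table->table }) {
    # fname is the executable's base name, cmndline the full command line
    next unless defined $proc->fname && $proc->fname eq 'lxc-start';
    printf "pid: %d cmd: %s\n", $proc->pid, $proc->cmndline // '';
}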
Perl Code
`. /home/chronicles/logon.sh `;
print "DATA : $ENV{ID}\n";
In logon.sh we are exporting the variable "ID" (i.e. the shell script is meant to be sourced).
Manual run
$> . /home/chronicles/logon.sh
$> echo $LOG
When I run it manually in the terminal (not from the script), I get the output, but it does not work from the script.
I followed this post:
How to export a shell variable within a Perl script?
But it didn't solve the problem.
Note
I am not allowed to change the "logon.sh" script.
The script inside the backticks is executed in a child process. While environment variables are inherited from parent processes, the parent can't access the environment of child processes.
However, you could return the contents of the child's environment variable and put it into a Perl variable, like:
use strict; use warnings; use feature 'say';
my $var = `ID=42; echo \$ID`;
chomp $var;
say "DATA: $var";
output:
DATA: 42
Here is an example shell session:
$ cat test_script
echo foo
export test_var=42
$ perl -E'my $cmd = q(test_var=0; . test_script >/dev/null; echo $test_var); my $var = qx($cmd); chomp $var; say "DATA: $var"'
DATA: 42
The normal output is redirected into /dev/null, so only the echo $test_var shows.
It won't work.
An environment variable can't be inherited from a child process.
The environment variable can be updated in your "manual run" because that runs in the same bash process.
The source command simply runs every command in logon.sh under the current shell.
For more info you can refer to: can we source a shell script in perl script
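A minimal demonstration of that point from Perl (the variable name ID just mirrors the question):
use strict;
use warnings;

# The subshell sets and exports ID, but that only affects the child process:
system('ID=42; export ID');
print defined $ENV{ID} ? "ID=$ENV{ID}\n" : "ID is not set in the parent\n";
# Prints "ID is not set in the parent" (unless ID was already in your environment).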
You could do something like:
#!/usr/bin/perl
use strict;
use warnings;

chomp(my @values = `. myscript.sh; env`);
foreach my $value (@values) {
    my ($k, $v) = split /=/, $value, 2;    # limit to two fields so values containing '=' survive
    $ENV{$k} = $v;
}

foreach my $key (keys %ENV) {
    print "$key => $ENV{$key}\n";
}
Well, I've found a solution that looks nice to me. It seems robust, as it uses a widely tested mechanism to bind the shell environment to Perl (running perl) and a robust library to export it in Perl variable syntax for re-injection into the root Perl session.
The line export COLOR tty was useful to ask my bash to export the newer variables... This seems to work fine.
#!/usr/bin/perl -w
my $perldumpenv = 'perl -MData::Dumper -e ' . "'" .
    '\$Data::Dumper::Terse=1;print Dumper(\%ENV);' . "'";

eval '%ENV=(' . $1 . ')' if `bash -c "
    . /home/chronicles/logon.sh;
    export COLOR tty ID;
    $perldumpenv"`
    =~ /^\s*\{(.*)\}\s*$/mxs;

# map { printf "%-30s::%s\n",$_,$ENV{$_} } keys %ENV;
printf "%s\n", $ENV{'ID'};
Anyway, if you don't have access to logon.sh, you have to trust it before running such a solution.
Old...
Here is my first post, kept for history's sake; don't look further.
The only way is to parse the command's output, while asking the command to dump its environment:
my @lines = split("\n", `. /home/chronicles/logon.sh; set`);
map { $ENV{$1} = $2 if /^([^=]+)=(.*)$/; } @lines;
This can now be done with the Env::Modify module
use Env::Modify 'source';
source("/home/chronicles/logon.sh");
... environment setup in logon.sh is now available to Perl ...
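Putting it together for this question, a small hedged sketch (assuming logon.sh really does export ID, as described above):
use strict;
use warnings;
use Env::Modify 'source';    # CPAN module, not in core

source("/home/chronicles/logon.sh");   # run logon.sh in a shell and import its environment
print "DATA : $ENV{ID}\n";             # ID exported by logon.sh is now visible to Perl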
Your Perl process is the parent of the shell process, so it won't inherit environment variables from it. Inheritance works the other way, from parent to child.
But when you run the script with backticks, as shown, the standard output of the script is returned to the Perl script. So either modify the shell script to end with the echo $LOG statement you show, or create a new shell script that runs logon.sh and then does echo $LOG. Your Perl script would then be:
my $value = `./myscript.sh`;
print $value;
Is there any method to execute foo2.pl from foo1.pl in Perl, and append foo2.txt to foo1.txt to create foo3.txt? Thanks.
i.e.
foo1.pl
print "Hello"; # output to foo1.txt;
foo2.pl
print "World"; # output to foo2.txt;
How do I create the foo3.txt file from foo1.pl?
foo3.txt
Hello
World
Something like appending foo2.txt to foo1.txt.
As far as I know, I can open foo1.txt and foo2.txt, then write their lines out in foo3.pl:
print FOO3_TXT (<FOO1_TXT>);
print FOO3_TXT (<FOO2_TXT>);
Is there any good method?
Update: my test (ActivePerl 5.10.1)
My foo.pl
#!C:\Perl\bin\perl.exe
use strict;
use warnings;
print "world\n";
My hw.pl (foo.pl and hw.pl are in the same directory):
#!C:\Perl\bin\perl.exe
use strict;
use warnings;
print 'hello ';
print `./foo.pl`;
Output
D:\learning\perl>hw.pl
hello '.' is not recognized as an internal or external command,
operable program or batch file.
If hw.pl is updated to use q{} and qx{}:
#!C:\Perl\bin\perl.exe
use strict;
use warnings;
print q{hello }, qx{./foo.pl};
Now the output (a little different in the location of "hello"):
D:\learning\perl>hw.pl
'.' is not recognized as an internal or external command,
operable program or batch file.
hello
[Update] Fixed; see the answer.
Run this as a script:
perl foo1.pl > foo3.txt;
perl foo2.pl >> foo3.txt;
contents of foo1.pl
#!/bin/perl
print "Hello";
contents of foo2.pl
#!/bin/perl
print "World";
or
simply use the cat command, if you are running Linux, to append foo2.txt to foo1.txt.
Just in case you are being literal about execute foo2.pl from foo1.pl in Perl then this is what you can do:
print 'hello ';
print qx(perl foo2.pl);
qx is another way to run system commands like backticks. Thus perl foo2.pl is run with the output being sent back to your calling perl script.
So here is the same using backticks. It also calls the script directly (which is better):
print 'hello ';
print `./foo2.pl`;
And if you are expecting lots of output from the script then it's best not to load it all into memory like the above two examples. Instead use open like so:
print 'hello ';
open my $foo2, '-|', './foo2.pl';
print <$foo2>;
close $foo2;
And you can wrap this up into one print statement for the "hello world" example:
print 'hello ', do {
open my $foo2, '-|', './foo2.pl';
<$foo2>;
};
/I3az/
Using a shell script (for example, a .bat file on Windows) to run various Perl scripts and combine their output is one way to solve the problem. However, I usually find that Perl itself provides a more powerful and flexible environment than shell scripts. To use Perl in this way, one place to start is by learning about the system and exec commands.
For example:
# In hello.pl
print "Hello\n";
# In world.pl
print "World\n";
# In hello_world.pl.
use strict;
use warnings;
system 'perl hello.pl > hello_world.txt';
system 'perl world.pl >> hello_world.txt';
You can also use the following code:
file1.pl
use strict;
use warnings;
open (FH,">file") or die "$! can't open";
print FH "WELCOME\n";
file2.pl
use strict;
use warnings;
open (FH,">>file") or die "$! can't open";
print FH "WELCOME2\n";
The file content is
WELCOME
WELCOME2
If you know beforehand that the script you want to execute from inside the other script is also Perl, you should use do EXPR (https://perldoc.perl.org/functions/do.html).
This executes the contents of the file EXPR in the context of the running perl process and saves you from starting new cmd.exe and perl.exe instances.
hello.pl:
print "Hello";
do "world.pl";
world.pl:
print "World";
I run my Perl script (gettotal.pl) and it works fine. I managed to get the sum of TOTAL.txt and append the output value to test.txt. But when I run it inside a shell script (test.sh) I get this error:
Too many arguments for open at /home/daily/scripts/per_NODE_HR/gettotal.pl line 29, near ""$dir")"
Execution of /home/daily/scripts/per_NODE_HR/gettotal.pl aborted due to compilation errors.
What is the difference between running it manually (./gettotal.pl) and running it inside a shell script? Simple, yet still confusing for me :-)
gettotal.pl
#!/opt/perl/bin/perl -w
use strict;

my $sum;

open(FH, "/home/daily/scripts/per_NODE_HR/date.txt");
chomp(my @date = <FH>);

my $path = "/home/daily/output/per_NODE_HR/$date[0]/TOTAL.txt";
open(FILE, "$path") or die "Unable to open $path: $!";
my @hits = <FILE>;

$sum = sum($#hits);
print "TOTAL = $sum";
print "\n";

sub sum {
    if ($_[0] == 0) {
        return $hits[0];
    }
    return $hits[$_[0]] + sum($_[0] - 1);
}

my $dir = "/home/daily/output/per_NODE_HR/$date[0]/test.txt";
open(OUT, '>>', "$dir") or die "Cannot open $dir: $!";
print OUT "TOTAL: $sum";
close OUT;
close FILE;
close FH;
shell script
#!/bin/sh
perl /home/daily/scripts/per_NODE_HR/gettotal.pl
The error you're getting suggests that your system perl is a truly ancient version... The three-argument form of open was added in perl 5.6.0 (released March 22, 2000), so a complaint about your open having too many arguments would seem to indicate that you're passing your code to a 5.5.x or older perl. Try perl -v on the command line to see what version the system perl is.
As for how to resolve this, call it in your shell script with just /home/daily/scripts/per_NODE_HR/gettotal.pl instead of perl /home/daily/scripts/per_NODE_HR/gettotal.pl and it will get passed to /opt/perl/bin/perl -w as specified in the shebang (#!) line, just like it does when you run it manually with ./gettotal.pl.
Incidentally, you might also want to add a use warnings along with use strict instead of relying on your code being run with perl -w. If you use perl gettotal.pl, warnings will not be enabled.
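In other words, the top of gettotal.pl could look like this (just a sketch; the rest of the script stays as it is):
#!/opt/perl/bin/perl
use strict;
use warnings;   # warnings stay enabled even if someone runs the script as "perl gettotal.pl"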
Almost certainly, perl != /opt/perl/bin/perl.
Try which perl.