.vimrc to automatically compile perl code, based on actual perl version

My .vimrc is configured to automatically compile my Perl script upon save.
However, it is hard-coded to perl 5.14.
Some of the scripts I maintain are written for perl 5.8.8, and others for 5.16.
I would like vim to compile my code based on the version named in the hashbang (#!) line.
Is this possible?
Here is my current .vimrc:
" check perl code with :make
" by default it will call "perl -c" to verify the code and, if there are any errors, the cursor will be positioned in the offending line.
" errorformat uses scanf to extract info from errors/warnings generated by make.
" %f = filename %l = line number %m = error message
" autowrite tells vim to save the file whenever the buffer is changed (for example during a save)
" BufWritePost executes the command after writing the buffer to file
" [F4] for quick :make
autocmd FileType perl set makeprg=/home/utils/perl-5.14/5.14.1-nothreads-64/bin/perl\ -c\ %\ $*
autocmd FileType perl set errorformat=%f:%l:%m
autocmd FileType perl set autowrite
autocmd BufWritePost *.pl,*.pm,*.t :make

You have to dynamically change the Perl path in 'makeprg' based on the shebang line. For example:
:let perlExecutable = matchstr(getline(1), '^#!\zs\S\+')
:let perlExecutable = (empty(perlExecutable) ? '/home/utils/perl-5.14/5.14.1-nothreads-64/bin/perl' : perlExecutable) " Default in case of no match
:let &makeprg = perlExecutable . ' -c % $*'
You can invoke that (perhaps encapsulated in a function) on the FileType event, but then it won't pick up a shebang added to new files that did not have one when they were loaded. Alternatively, prepend the call to your autocmd BufWritePost *.pl,*.pm,*.t :make; that re-detects the shebang after each save.
PS: Instead of all those :autocmd FileType commands in your ~/.vimrc, I would rather place the settings in ~/.vim/after/ftplugin/perl.vim (if you have :filetype plugin on). Also, you should use :setlocal instead of (global) :set.
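Putting both suggestions together, a rough sketch of such an ftplugin file might look like this (the function and autocommand-group names are just placeholders, and the fallback path is the one from the question):

" ~/.vim/after/ftplugin/perl.vim  (requires :filetype plugin on)
setlocal errorformat=%f:%l:%m

function! PerlSetMakeprg()
  " Take the interpreter from the #! line, falling back to a hard-coded default.
  let l:perl = matchstr(getline(1), '^#!\zs\S\+')
  if empty(l:perl)
    let l:perl = '/home/utils/perl-5.14/5.14.1-nothreads-64/bin/perl'
  endif
  let &l:makeprg = l:perl . ' -c % $*'
endfunction

call PerlSetMakeprg()

" Re-read the shebang and run :make after every save of this buffer.
augroup PerlCheck
  autocmd! BufWritePost <buffer> call PerlSetMakeprg() | make
augroup END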

Use plenv to change perl versions for your projects: plenv local 5.8.8
https://github.com/tokuhirom/plenv
Then you don't have to specify a full interpreter path in the shebang or in makeprg.

I found another way to solve this problem.
Instead of setting makeprg to the perl executable, I tell vim to execute another script.
That script determines the appropriate version of perl, then runs that version with -c
This solution could be extended to other scripting languages (see the sketch after the script below).
In the .vimrc, change:
autocmd FileType perl set makeprg=/home/utils/perl-5.14/5.14.1-nothreads-64/bin/perl\ -c\ %\ $*
to:
autocmd FileType perl set makeprg=/home/username/check_script_syntax.pl\ %\ $*
Here is check_script_syntax.pl:
#!/home/utils/perl-5.14/5.14.1-nothreads-64/bin/perl
use 5.014;    # enables 'say' and 'strict'
use warnings FATAL => 'all';
use IO::All;

my $script = shift;
my $executable;
for my $line ( io($script)->slurp ) {
    # Examples of expected shebang lines:
    #   #!/home/utils/perl-5.14/5.14.1-nothreads-64/bin/perl
    #   #!/home/utils/perl-5.8.8/bin/perl
    if ( $line =~ /^#!(\/\S+)/ ) {
        $executable = $1;
    } elsif ( $script =~ /\.(pm)$/ ) {
        if ( $script =~ /(releasePatterns.pm|p4Add.pm|dftform.pm)$/ ) {
            $executable = '/home/utils/perl-5.8.8/bin/perl';
        } else {
            $executable = '/home/utils/perl-5.14/5.14.1-nothreads-64/bin/perl';
        }
    } else {
        die "ERROR: Could not find #! line in your script $script";
    }
    last;    # only the first line matters
}
if ( $script =~ /\.(pl|pm|t)$/ ) {
    $executable .= " -c";
} else {
    die "ERROR: Did not understand how to compile your script $script";
}
my $cmd = "$executable $script";
say "Running $cmd";
say `$cmd`;
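To illustrate the extension to other scripting languages, here is a rough, untested sketch in the same spirit, driven by a small dispatch table keyed on file extension (the table entries, commands, and fallback are only examples):

#!/usr/bin/env perl
# Sketch: choose a syntax-check command from the file extension and the #! line.
use strict;
use warnings;

my %checker = (
    pl => sub { "$_[0] -c" },              # Perl scripts: interpreter from #! line
    pm => sub { "$_[0] -c" },
    t  => sub { "$_[0] -c" },
    py => sub { 'python -m py_compile' },  # Python: byte-compile without running
    sh => sub { 'bash -n' },               # Shell: parse only
);

my $script = shift or die "usage: $0 script\n";
my ($ext) = $script =~ /\.(\w+)$/
    or die "ERROR: no extension on $script\n";
my $check = $checker{$ext}
    or die "ERROR: do not know how to check .$ext files\n";

# Reuse the interpreter from the script's own shebang, if it has one.
open my $fh, '<', $script or die "Cannot open $script: $!";
my $first_line = <$fh>;
close $fh;
$first_line = '' unless defined $first_line;
my ($interpreter) = $first_line =~ /^#!(\/\S+)/;
$interpreter ||= 'perl';

my $cmd = $check->($interpreter) . " $script";
print "Running $cmd\n";
system($cmd);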

Related

Get value of autosplit delimiter?

If I run a script with perl -Fsomething, is that something value saved anywhere in the Perl environment where the script can find it? I'd like to write a script that by default reuses the input delimiter (if it's a string and not a regular expression) as the output delimiter.
Looking at the source, I don't think the delimiter is saved anywhere. When you run
perl -F, -an
the lexer actually generates the code
LINE: while (<>) {our @F=split(q\0,\0);
and parses it. At this point, any information about the delimiter is lost.
Your best option is to split by hand:
perl -ne'BEGIN { $F="," } @F=split(/$F/); print join($F, @F)' foo.csv
or to pass the delimiter as an argument to your script:
F=,; perl -F$F -sane'print join($F, @F)' -- -F=$F foo.csv
or to pass the delimiter as an environment variable:
export F=,; perl -F$F -ane'print join($ENV{F}, @F)' foo.csv
As @ThisSuitIsBlackNot says, it looks like the delimiter is not saved anywhere.
This is how perl.c stores the -F parameter:
case 'F':
    PL_minus_a = TRUE;
    PL_minus_F = TRUE;
    PL_minus_n = TRUE;
    PL_splitstr = ++s;
    while (*s && !isSPACE(*s)) ++s;
    PL_splitstr = savepvn(PL_splitstr, s - PL_splitstr);
    return s;
And then the lexer generates the code
LINE: while (<>) {our @F=split(q\0,\0);
However, this is of course compiled, and if you run it through B::Deparse you can see what is stored.
$ perl -MO=Deparse -F/e/ -e ''
LINE: while (defined($_ = <ARGV>)) {
    our(@F) = split(/e/, $_, 0);
}
-e syntax OK
Being perl there is always a way, however ugly. (And this is some of the ugliest code I have written in a while):
use B::Deparse;
use Capture::Tiny qw/capture_stdout/;

my $f_var;
unless ($f_var) {
    my $stdout = capture_stdout {
        my $sub = B::Deparse::compile();
        # Have to capture stdout, since I won't bother to set up compile()
        # to return the text instead of printing it.
        &{$sub};
    };
    my (undef, $split_line, undef) = split(/\n/, $stdout, 3);
    ($f_var) = $split_line =~ /our\(\@F\) = split\((.*)\, \$\_\, 0\);/;
    print $f_var, "\n";
}
Output:
$ perl -Fe/\\\(\\[\\\<\\{\"e testy.pl
m#e/\(\[\<\{"e#
You could possibly traverse the bytecode instead, since the start will probably be identical every time until you reach the pattern.

Cannot call pdflatex from perl script (due to encoding?)

When I call pdflatex manually from the windows command line, it generates the desired pdf.
When I call pdflatex from a perl script instead, it does not:
system("pdflatex $fileName");
.. results in
Sorry, but pdflatex did not succeed.
You may want to visit the MiKTeX project page, if you need help.
utf8 "\x80" does not map to Unicode at C:/strawberry-perl/perl/site/lib/Encode.pm line 200.
The script was running fine on unix before. Now, after migrating it to a Windows system, it doesn't.
The content of the tex input file is generated by the script as well. The "file" command on my Mac tells me that this file is encoded as "us-ascii".
So I tried to make perl encode it as "utf-8", but it did not work:
open(FH, "> :encoding(utf-8)", $fileName);
or
binmode(FH, ":utf8");
Files are still being generated with us-ascii encoding. How can I change that?
So far, the encoding is my only clue.
What else could be the problem?
If this works fine when typed manually at the command line, then the problem could be the way perl interpolates the quotation marks before passing the command to the shell. Have you tried printing the call you are making, to check that it is exactly the same as what you enter manually? Otherwise, when passing arguments to a program via system in Perl, I always separate them out as follows to avoid any interpolation errors:
#...
my $prog = "Z.*";
my $arg1 = "X";
my $arg2 = "Y";
#...
my $file = "W.*";
system("$prog", ("$arg1", "$arg2", ..., "$file"));
#...
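Applied to the pdflatex call from the question, that list form might look like the sketch below; the -interaction=nonstopmode flag is optional and only keeps pdflatex from stopping at its error prompt, and $fileName is assumed to hold the .tex file name.

# List form of system(): no shell is involved, so $fileName is passed through untouched.
my @cmd = ('pdflatex', '-interaction=nonstopmode', $fileName);
print "Running: @cmd\n";
system(@cmd) == 0
    or die "pdflatex failed with exit status ", $? >> 8, "\n";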
If this doesn't work, another, albeit rather clunky, solution might be to read the file contents into a variable and 'manually' encode them in Perl:
use Encode;
use utf8;
use charnames qw( :full :short );
my $encodedfile = encode("utf8", $filecontents);
If you happen to have any active characters in the file which could influence the way pdflatex handles the final output (for example, in Perl \\ yields a single \, which pdflatex then treats as the start of a control sequence), you can append the following to the encoding step:
my $str = $encodedfile;

# Escape backslashes first (U+005C is the backslash itself).
my $find    = "\\N{U+005C}";
my $replace = "\\textbackslash ";
$str =~ s/$find/$replace/g;

my %special_characters;
$special_characters{"\\N{U+0025}"}  = "\\pourcent ";           # %
$special_characters{"\\\$"}         = "\\\$";                  # $
$special_characters{"\\N{U+007B}"}  = "\\{";                   # {
$special_characters{"\N{U+007D}"}   = "\\}";                   # }
$special_characters{"\N{U+0026}"}   = "\\&";                   # &
$special_characters{"\\N{U+005F}"}  = "\\textunderscore ";     # _
$special_characters{"\\N{U+002F}"}  = "\/";                    # /
$special_characters{"\\N{U+005B}"}  = "\[";                    # [
$special_characters{"\\N{U+005D}"}  = "\]";                    # ]
$special_characters{"\\N{U+005E}"}  = "\\textasciicircum ";    # ^
$special_characters{"\\N{U+0023}"}  = "\\#";                   # #
$special_characters{"\\\N{U+007E}"} = "\\textasciitilde ";     # ~
$special_characters{"\\\N{U+0021}"} = " \\newline ";           # !

my $string = $str;
foreach my $char (keys %special_characters) {
    $string =~ s/$char/$special_characters{$char}/g;
}
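To tie this together, one possible follow-up (a sketch, with $fileName and $string as above) is to write the already-encoded, escaped text back out in raw mode before running pdflatex on it:

# $string already holds UTF-8 encoded bytes (from encode() above), so write them raw.
open my $out, '>:raw', $fileName or die "Cannot write $fileName: $!";
print {$out} $string;
close $out or die "Cannot close $fileName: $!";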
Hope this helps.

Search and replace in Perl for particular word

I have a huge file consisting of lines like the ones below, with different clocks:
cmd -quiet [get_ports p1] ref_clocks "cudtclk_sp cudtclk"
cmd -quiet [get_ports p2] clock "cu2xdtclk_sp cu2xdtclk"
And I need to replace cudtclk with some other name like cdtclk whenever I have ref_clocks in my file, globally.
I have written the following code, but it doesn't seem to be working.
#!/usr/bin/perl
use strict;
use warnings;

sub clock_change
{
    # Get the subroutine's argument.
    my $arg = shift;

    # Hash of stuff we want to replace.
    my %replace = (
        "cudtclk" => "cdtclk",
    );

    # See if there's a replacement for the given text.
    my $text = $replace{$arg};
    if (defined($text)) {
        return $text;
    }
    return $arg;
}

open PAR, "<file name>";
while (<PAR>) {
    $_ =~ s/\S+\s\S+\s\S+\s\S+\sref_clocks\s+(\S+\s+\S+)/clock_change($1)/eig;
    print $_;    ## print it to some file later.
}
"And I need to replace cudtclk with some other name like cdtclk"
perl -pe 's/\bcudtclk\b/cdtclk/' thefile > newfile
"whenever I have ref_clocks"
perl -pe 's/\bcudtclk\b/cdtclk/ if /\bref_clocks\b/' thefile > newfile
Alternatively:
# saves original file as file.bak
perl -i.bak -pe 's/\bcudtclk\b/cdtclk/ if /\bref_clocks\b/' file
Tighten to suit your data, as necessary.
Although the substitution seems unnecessarily complex, you can fix it with something similar to:
$_ =~ s/(ref_clocks\s+")([^_]+)_sp(\s+)\2/
$1.clock_change($2)."_sp$3".clock_change($2)/eig;
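If you would rather keep the original script's shape (a replacement hash plus writing to a second file), a simplified sketch along those lines is shown below; the input and output file names are just placeholders.

#!/usr/bin/perl
use strict;
use warnings;

my %replace = ( cudtclk => 'cdtclk' );

open my $in,  '<', 'constraints.in'  or die "Cannot open input: $!";
open my $out, '>', 'constraints.out' or die "Cannot open output: $!";

while ( my $line = <$in> ) {
    if ( $line =~ /\bref_clocks\b/ ) {
        # Swap any word that has an entry in %replace, only on ref_clocks lines.
        $line =~ s/\b(\w+)\b/exists $replace{$1} ? $replace{$1} : $1/ge;
    }
    print {$out} $line;
}

close $in;
close $out;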

how to source a shell script [environment variables] in perl script without forking a subshell?

I want to call "env.sh" from "my_perl.pl" without forking a subshell. I tried it with backticks and with system, like this: system(". env.sh") [dot space env.sh]; however, it won't work.
Child environments cannot change parent environments. Your best bet is to parse env.sh from inside the Perl code and set the variables in %ENV:
#!/usr/bin/perl
use strict;
use warnings;

sub source {
    my $name = shift;
    open my $fh, "<", $name
        or die "could not open $name: $!";
    while (<$fh>) {
        chomp;
        my ($k, $v) = split /=/, $_, 2;
        $v =~ s/^(['"])(.*)\1/$2/;              # strip surrounding quotes
        $v =~ s/\$([a-zA-Z]\w*)/$ENV{$1}/g;     # expand $var references
        $v =~ s/`(.*?)`/`$1`/ge;                # run backticks (dangerous)
        $ENV{$k} = $v;
    }
}

source "env.sh";

for my $k (qw/foo bar baz quux/) {
    print "$k => $ENV{$k}\n";
}
Given
foo=5
bar=10
baz="$foo$bar"
quux=`date +%Y%m%d`
it prints
foo => 5
bar => 10
baz => 510
quux => 20110726
The code can only handle simple files (for instance, it doesn't handle if statements or foo=$(date)). If you need something more complex, then writing a wrapper for your Perl script that sources env.sh first is the right way to go (it is also probably the right way to go in the first place).
Another reason to source env.sh before executing the Perl script is that setting the environment variables in Perl may happen too late for modules that are expecting to see them.
In the file foo:
#!/bin/bash
source env.sh
exec foo.real
where foo.real is your Perl script.
You can use arbitrarily complex shell scripts by executing them with the relevant shell, dumping their environment to standard output in the same process, and parsing that in perl. Feeding the output into something other than %ENV or filtering for specific values of interest is prudent so you don't change things like PATH that may have interesting side effects elsewhere. I've discarded standard output and error from the spawned shell script although they could be redirected to temporary files and used for diagnostic output in the perl script.
foo.pl:
#!/usr/bin/perl

open SOURCE, "bash -c '. foo.sh >& /dev/null; env'|"
    or die "Can't fork: $!";
while (<SOURCE>) {
    if (/^(BAR|BAZ)=(.*)/) {
        $ENV{$1} = ${2};
    }
}
close SOURCE;

print $ENV{'BAR'} . "\n";
foo.sh:
export BAR=baz
Try this (unix code sample):
cd /tmp
vi s
#!/bin/bash
export blah=test
vi t
#!/usr/bin/perl
if ($ARGV[0]) {
    print "ENV second call is : $ENV{blah}\n";
} else {
    print "ENV first call is : $ENV{blah}\n";
    exec(". /tmp/s; /tmp/t 1");
}
chmod 777 s t
./t
ENV first call is :
ENV second call is : test
The trick is using exec to source your bash script first and then call your perl script again with an argument, so you know that you are being called for the second time.

How can Perl's system() print the command that it's running?

In Perl, you can execute system commands using system() or `` (backticks). You can even capture the output of the command into a variable. However, this hides the program execution in the background so that the person executing your script can't see it.
Normally this is useful but sometimes I want to see what is going on behind the scenes. How do you make it so the commands executed are printed to the terminal, and those programs' output printed to the terminal? This would be the .bat equivalent of "@echo on".
I don't know of any default way to do this, but you can define a subroutine to do it for you:
sub execute {
    my $cmd = shift;
    print "$cmd\n";
    system($cmd);
}

my $cmd = $ARGV[0];
execute($cmd);
And then see it in action:
pbook:~/foo rudd$ perl foo.pl ls
ls
file1 file2 foo.pl
As I understand it, system() will print the output of the command but not assign it. E.g.
[daniel@tux /]$ perl -e '$ls = system("ls"); print "Result: $ls\n"'
bin dev home lost+found misc net proc sbin srv System tools var
boot etc lib media mnt opt root selinux sys tmp usr
Result: 0
Backticks will capture the output of the command and not print it:
[daniel@tux /]$ perl -e '$ls = `ls`; print "Result: $ls\n"'
Result: bin
boot
dev
etc
home
lib
etc...
Update: If you want to print the name of the command being system()'d as well, I think Rudd's approach is good. Repeated here for consolidation:
sub execute {
    my $cmd = shift;
    print "$cmd\n";
    system($cmd);
}

my $cmd = $ARGV[0];
execute($cmd);
Use open instead. Then you can capture the output of the command.
open(LS,"|ls");
print LS;
Here's an updated execute that will print the results and return them:
sub execute {
    my $cmd = shift;
    print "$cmd\n";
    my $ret = `$cmd`;
    print $ret;
    return $ret;
}
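If you also want the command's standard error in the captured text, one shell-dependent tweak (a sketch; the 2>&1 redirection works with POSIX shells and with cmd.exe) is:

sub execute {
    my $cmd = shift;
    print "$cmd\n";
    # Redirect stderr into stdout so both streams end up in $ret.
    my $ret = `$cmd 2>&1`;
    print $ret;
    return $ret;
}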
Hmm, it is interesting how different people are answering this in different ways. It looks to me like mk and Daniel Fone interpreted it as wanting to see/manipulate the stdout of the command (neither of their solutions captures stderr, fwiw). I think Rudd got closer. One twist you could make on Rudd's response is to overwrite the built-in system() command with your own version so that you wouldn't have to rewrite existing code to use his execute() command.
Using the execute() sub from Rudd's post, you could have something like this at the top of your code:
if ($DEBUG) {
    *{"CORE::GLOBAL::system"} = \&{"main::execute"};
}
I think that will work but I have to admit this is voodoo and it's been a while since I wrote this code. Here's the code I wrote years ago to intercept system calls on a local (calling namespace) or global level at module load time:
# Importing into either the calling or global namespace _must_ be
# done from import(). Doing it elsewhere will not have desired results.
delete($opts{handle_system});
if ($do_system) {
    if ($do_system eq 'local') {
        *{"$callpkg\::system"} = \&{"$_package\::system"};
    } else {
        *{"CORE::GLOBAL::system"} = \&{"$_package\::system"};
    }
}
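For a self-contained illustration of the global override described above (a sketch only, not the module code; the TRACE_SYSTEM environment variable is made up for the example):

#!/usr/bin/perl
use strict;
use warnings;

# The override must be installed at compile time, before any calls
# to system() further down are compiled.
BEGIN {
    if ( $ENV{TRACE_SYSTEM} ) {
        *CORE::GLOBAL::system = sub {
            print "+ @_\n";              # echo the command, shell trace style
            return CORE::system(@_);     # then run the real system()
        };
    }
}

system("ls -l");    # prints "+ ls -l" first when TRACE_SYSTEM is set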
Another technique to combine with the others mentioned in the answers is to use the tee command. For example:
open(F, "ls | tee /dev/tty |");
while (<F>) {
print length($_), "\n";
}
close(F);
This will both print out the files in the current directory (as a consequence of tee /dev/tty) and also print out the length of each filename read.