How to call a Perl script from a Tcl script

I have a file with 4 Perl commands. I want to open the file from Tcl and execute each Perl command.
Tcl script:
runcmds $file
proc runcmds {file} {
    set fileid [open $file "r"]
    set options [read $fileid]
    close $fileid
    set options [split $options "\n"]  ;# separating each command with a newline
    foreach line $options {
        exec perl $line
    }
}
When executing the above script, I get the error: "Can't open perl script /../../ : No such file or directory. Use -S to search $PATH for it."

tl;dr: You were missing -e, causing your script to be interpreted as a filename.
To run a perl command from inside Tcl:
proc perl {script args} {
    exec perl -e $script {*}$args
    # or in 8.4: eval [list exec perl -e $script] $args
}
Then you can do:
puts [perl {
    print "Hello ";
    print "World\n";
}]
That's right, an arbitrary Perl script inside Tcl. You can even pass in other arguments as necessary; access them from Perl via @ARGV. (You'll need to add other options like -p explicitly.)
Note that this can pass whole scripts; you don't need to split them up (and probably shouldn't; you can do lots with one-liners but they tend to be awful to maintain and there's no technical reason to require it).
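For instance, arguments supplied after the script body land in @ARGV on the Perl side, as usual (a minimal sketch; the variable names are made up):

my ($greeting, $name) = @ARGV;   # arguments passed after the script body
print "$greeting, $name\n";

Called from Tcl as perl {my ($greeting, $name) = @ARGV; print "$greeting, $name\n"} Hello World, this prints "Hello, World".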

Related

Can I pass a string from perl back to the calling c-shell?

RHEL6
I have a c-shell script that runs a perl script. After dumping tons of stuff to stdout, it determines where (what dir) the parent shell should cd to when the perl script finishes. But that's a string, not an int, which is all I can pass back with exit().
Storing the name of the dir in a file which the c-shell script can read is what I have now. It works, but is not elegant. Is there a better way to do this ? Maybe a little chunk of memory that I can share with the perl script ?
Short:
Redirect Perl's streams, restoring them at the end to print that info for the shell script to take
Or, print that info last and have the shell script pass the output to the console and take the last line
Or, use a named pipe (either shell) or specific file descriptors (not csh) for that print
When the Perl script prints out that name you can assign it to a variable in the shell script
#!/bin/csh
set DIR = `perl -e'print "dir_name"'`
while in bash
#!/bin/bash
DIR="$(perl -e'print "dir_name"')"
where $(...) is preferred for the command substitution.
But those other prints to the console from the Perl script then need to be handled
One way is to redirect all output in the Perl script other than that one print, which can be controlled by a command-line option (a filename to redirect to, which the shell script can print out afterwards)
Or, take all Perl's output and pass it to console, the last line being the needed "return." This puts the burden on the Perl script to print that last (perhaps in an END block). The program's output can be printed from the shell script after it completes or line by line as it is emitted.
Or, use a named pipe (both shells) or a specific file descriptor (bash only) to which the Perl script can print that information. In this case its streams go straight to the console.
The question explicitly mentions csh so it is given below. But I must repeat the old and worn fact that shell scripting is far better done in bash than in csh. I strongly recommend reconsidering.
bash
If you need the program's output on the console as it goes, take and print it line by line
#!/bin/bash
while read line; do
    echo "$line"
    DIR=$line
done < <(perl script.pl)
echo "$DIR"
Or, if you don't need output on the console before the script is finished
#!/bin/bash
mapfile -t lines < <(perl script.pl)
DIR="${lines[-1]}"
printf '%s\n' "${lines[@]}"    # print script.pl's output
Or, use file descriptors for that particular print
F=$(mktemp) # safe filename
exec 3> "$F" # open fd 3 to write to it
exec 4< "$F" # open fd 4 to read from it
rm -f "$F" # remove file(name) for safety; opened fd's can still access
perl -E'$fd=shift; say "...normal prints to STDOUT...";
open(FH, ">&=$fd") or die $!;
say FH "dirname";
close FH
' 3
read dir_name <&4
exec 3>&- # close them
exec 4<&-
echo "$dir_name"
I couldn't get it to work with a single file descriptor for both reading and writing (exec 3<> ...), I think because the read can't rewind after the write, thus separate descriptors are used.
With a Perl script (and not the demo one-liner above) pass the fd number as a command-line option. The script can then do this only if it's invoked with that option.
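A hedged sketch of what that might look like in script.pl (Getopt::Long and the option name --fd are my assumptions, not from the original):

#!/usr/bin/perl
use strict;
use warnings;
use Getopt::Long;

my $fd;
GetOptions('fd=i' => \$fd) or die "usage: $0 [--fd N]\n";

print "...normal prints to STDOUT...\n";

# Write the directory name only if told where to put it
if (defined $fd) {
    open my $fh, '>&=', $fd or die "Can't dup fd $fd: $!";
    print $fh "dir_name\n";
    close $fh;
}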
Or, use a named pipe very similarly to how it's done for csh below. This is probably best here, if the manipulation of the program's STDOUT isn't to your liking.
csh
Iterate over the program's (completed) output line by line
#!/bin/csh
foreach line ( "`perl script.pl`" )
    echo "$line"
    set dir_name = "$line"
end
echo "Directory name: $dir_name"
or extract the last line first and then print the whole output
#!/bin/csh
set lines = ( "`perl script.pl`" )
set dir_name = $lines[$#lines]
# Print program's output
while ( $#lines )
    echo "$lines[1]"
    shift lines
end
or use a named pipe
set fifo_name = "/tmp/fifo$$" # or use mktemp
mkfifo "$fifo_name"
( perl script.pl --fifo $fifo_name [other args] & )
set dir_name = `cat "$fifo_name"`
rm -f $fifo_name
echo "dir name from FIFO: $dir_name"
The Perl command is in the background since a FIFO blocks until it is both written and read. So if the shell script were to wait for perl ... to complete, the Perl script would block as it's writing to the FIFO (since that's not being read), so the shell would never get to read it; we would deadlock. It is also in a subshell, with ( ), to avoid the informational prints about the background job.
The --fifo NAME command-line option is needed so that the Perl script knows which special file to use (and skips this if the option is not there).
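Here is a sketch of how script.pl might consume that option (Getopt::Long is my assumption; only the --fifo name comes from the text above):

#!/usr/bin/perl
use strict;
use warnings;
use Getopt::Long;

my $fifo;
GetOptions('fifo=s' => \$fifo) or die "usage: $0 [--fifo NAME]\n";

print "...normal prints to STDOUT...\n";

# Write the directory name to the named pipe only when asked to
if (defined $fifo) {
    open my $fh, '>', $fifo or die "Can't open $fifo: $!";
    print $fh "dir_name\n";
    close $fh;
}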
For an in-line example, replace ( perl script.pl ... ) with this one-liner, used above as well
( perl -E'$ff = shift; say qq(\t...normal prints to STDOUT...);
open FF, ">$ff" or die $!;
say FF "dir_name_$$";
close FF
' $fifo_name
& )
(broken over lines for readability)

How can Perl execute a command in the same shell it runs in?

I am not sure whether the title really makes sense for this problem. My problem is simple: I want to write a Perl script to change my current directory, and I hope the result can be kept after calling the Perl script. The script looks like this:
if ($#ARGV != 0) {
    print "usage: mycd <dir symbol>";
    exit -1;
}
my $dn = shift @ARGV;
if ($dn eq "kite") {
    my $cl = `cd ./private`;
    print $cl . "\n";
}
else {
    print "unknown directory symbol";
    exit -1;
}
However, my current directory doesn't change after calling the script. What is the reason? How can I resolve it?
No, the Perl script will be run in a subprocess so it will not be able to affect the environment of the process that called it.
There are various tricks you can use such as sourcing shell scripts (in the context of the current shell rather than a sub-process), or using bash functions and aliases, but they won't work here.
How can Perl execute a command in the same shell it runs in?
Unless you have a very atypical shell, shells can only receive commands via STDIN, via their command line, and possibly via a command evaluation builtin.
The first two are out unless the Perl script is the parent of the shell, but you could use the third one indirectly as in the following example.
script.pl:
#!/usr/bin/perl
print "chdir 'private'\n";
bash script:
echo "$PWD" # /some/dir
eval "$( script.pl )"
echo "$PWD" # /some/dir/private
Of course, if you use bash, you could hide the details in a shell function.
mycd () {
    eval "$( mycd.pl "$@" )"
}
Allowing you to use
mycd
or even
mycd foo
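The mycd.pl itself only needs to print a command for the calling shell to evaluate. A hedged sketch (the "kite" mapping is borrowed from the question; everything else is illustrative):

#!/usr/bin/perl
use strict;
use warnings;

# Symbols the user may pass; "kite" comes from the question.
my %dirs = ( kite => './private' );

my $dn = shift @ARGV;
if ( defined $dn and exists $dirs{$dn} ) {
    print "cd '$dirs{$dn}'\n";    # the shell function evals this
}
else {
    print STDERR "unknown directory symbol\n";
    print "false\n";              # make the eval'd command fail
}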

From Perl, spawn a shell, configure it, and fork the STDOUT

I use a Perl script to configure and spawn a compiled program that needs a subshell configured a certain way, so I use: $returncode = system("ulimit -s unlimited; sg ourgroup 'MyExecutable.exe'");
I want to capture and parse the STDOUT from that, but I need it forked, so that the output can be checked while the job is still running. This question comes close:
How can I send Perl output to both STDOUT and a variable? The highest-rated answer describes a function called backtick() that creates a child process, captures STDOUT, and runs a command in it with exec().
But the calls I have require multiple lines to configure the shell. One solution would be to create a disposable shell script:
#!/bin/sh
# disposable.sh
ulimit -s unlimited
sg ourgroup 'MyExecutable.exe'
I could then get what I need either with backtick("disposable.sh") or open(PROCESS, 'disposable.sh |').
But I'd really rather not make a scratch file for this. system() happily accepts multi-line command strings. How can I get exec() or open() to do the same?
If you want to use the shell's power (that includes loops and variables, but also multiple command execution), you have to invoke the shell (open(..., 'xxx|') doesn't do that).
You can pass your shell script to the shell with the -c option of the shell (another possibility would be to pipe the commands to the shell, but that's more difficult IMHO).
That means calling the backtick function from the other answer like this:
backtick("sh", "-c", "ulimit -s unlimited; sg ourgroup 'MyExecutable.exe'");
The system tee with backticks will do this, no?
my $output = `(ulimit -s unlimited; sg ourgroup 'MyExecutable.exe') | tee /dev/tty`;
or modify Alnitak's backticks (so it does use a subshell)?
my $cmd = "ulimit -s unlimited; sg ourgroup 'MyExecutable.exe'";
my $pid = open(CMD, "($cmd) |");
my $output;
while (<CMD>) {
    print STDOUT $_;
    $output .= $_;
}
close CMD;
Expect should be used as you are interacting with your program: http://metacpan.org/pod/Expect
Assuming the /bin/bash prompt on your *nix matches something like bash-3.2$, the program below can be used to launch a number of commands on the bash console using $exp->send; the output from each command can then be parsed for further actions.
#!/usr/bin/perl
use Expect;

my $command = "/bin/bash";
my @parameters;
my $exp = new Expect;
$exp->raw_pty(1);
$exp->spawn($command);
$exp->expect(5, '-re', 'bash.*$');

$exp->send("who \n");
$exp->expect(10, '-re', 'bash.*$');
my @output = $exp->before();
print "Output of who command is @output \n";

$exp->send("ls -lt \n");
$exp->expect(10, '-re', 'bash.*$');
@output = $exp->before();
print "Output of ls command is @output \n";

Ignoring variables in shell script while using cat>file.sh<<EOF ... EOF syntax

I have a question regarding embedding script files within a shell script. I often have a need to create a single shell script that unpacks other scripts, but really dislike having to escape all of the embedded script's variables. Example of the contents of my shell script:
echo "Hello world"
pwd
cat>embedded_perl_script<<EOF
#!/usr/bin/perl -w
\$input = \$ARGV[0];
my \$argc;
\$argc = @ARGV;
print \$input
EOF
perl embedded_perl_script
echo "Finished!"
This code works fine, but I would really like a way to avoid escaping all of the embedded Perl script's variables. Any suggestions?
Try this :
echo "Hello world"
pwd
cat>embedded_perl_script<<'EOF'
#!/usr/bin/perl -w
$input = $ARGV[0];
my $argc;
$argc = @ARGV;
print $input
EOF
perl embedded_perl_script
echo "Finished!"
Note that the EOF has changed to 'EOF' =)
Note: this technique is called a here-doc (here document). A quoted delimiter suppresses all expansion within the here-doc body, so $input and friends reach Perl intact, with no backslashes needed.
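Incidentally, Perl's own here-docs use the same convention, which makes the rule easy to remember (a small aside, not part of the original answer):

my $input = 'something';
print <<"EOT";   # double-quoted terminator: variables interpolate
input is $input
EOT
print <<'EOT';   # single-quoted terminator: printed literally
input is $input
EOT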

Export variable from a shell script into a perl script

Perl Code
`. /home/chronicles/logon.sh `;
print "DATA : $ENV{ID}\n";
In logon.sh, we export the variable ID (the backticks source the shell script).
Manual run
$> . /home/chronicles/logon.sh
$> echo $LOG
When I run it manually in the terminal (not from the script), I get the output. (But it does not work from the script.)
I followed this post :
How to export a shell variable within a Perl script?
But it didn't solve the problem.
Note
I am not allowed to change the "logon.sh" script.
The script inside the backticks is executed in a child process. While environment variables are inherited from parent processes, the parent can't access the environment of child processes.
However, you could return the contents of the child environment variable and put it into a Perl variable like
use strict; use warnings; use feature 'say';
my $var = `ID=42; echo \$ID`;
chomp $var;
say "DATA: $var";
output:
DATA: 42
Here is an example shell session:
$ cat test_script
echo foo
export test_var=42
$ perl -E'my $cmd = q(test_var=0; . test_script >/dev/null; echo $test_var); my $var = qx($cmd); chomp $var; say "DATA: $var"'
DATA: 42
The normal output is redirected into /dev/null, so only the echo $test_var shows.
It won't work.
An environment variable can't be inherited from a child process.
The environment variable can be updated in your "manual run" because that happens in the same bash process.
The source command just runs every command in logon.sh in the current shell.
For more info, refer to: can we source a shell script in perl script
You could do something like:
#!/usr/bin/perl
use strict;
use warnings;

chomp(my @values = `. myscript.sh; env`);
foreach my $value (@values) {
    my ($k, $v) = split /=/, $value, 2;   # limit to 2 fields so values containing '=' survive
    $ENV{$k} = $v;
}
foreach my $key (keys %ENV) {
    print "$key => $ENV{$key}\n";
}
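One caveat: this line-based parsing breaks if any value contains a newline. If your env is GNU coreutils, its -0 option emits null-delimited pairs, which is safer (a sketch under that assumption):

#!/usr/bin/perl
use strict;
use warnings;

# Null-delimited dump survives values that contain newlines (GNU env only).
my $dump = `. myscript.sh >/dev/null 2>&1; env -0`;
foreach my $pair (split /\0/, $dump) {
    my ($k, $v) = split /=/, $pair, 2;
    $ENV{$k} = $v;
}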
Well, I've found a solution that sounds nice to me: this seems robust, as it uses a widely tested mechanism to bind the shell environment to Perl (running perl) and a robust library to export it in Perl variable syntax for re-injecting into the root Perl session.
The line export COLOR tty ID was useful to ask my bash to export the newer variables... This seems to work fine.
#!/usr/bin/perl -w
my $perldumpenv='perl -MData::Dumper -e '."'".
'\$Data::Dumper::Terse=1;print Dumper(\%ENV);'."'";
eval '%ENV=('.$1.')' if `bash -c "
. /home/chronicles/logon.sh;
export COLOR tty ID;
$perldumpenv"`
=~ /^\s*\{(.*)\}\s*$/mxs;
# map { printf "%-30s::%s\n",$_,$ENV{$_} } keys %ENV;
printf "%s\n", $ENV{'ID'};
Anyway, if you don't have access to logon.sh, you have to trust it before running such a solution.
Old...
Here is my first attempt, kept for history's sake; don't look further.
The only way is to parse the command's output, while asking the command to dump the environment:
my @lines = split "\n", `. /home/chronicles/logon.sh; set`;
map { $ENV{$1} = $2 if /^([^=]+)=(.*)$/ } @lines;
This can now be done with the Env::Modify module
use Env::Modify 'source';
source("/home/chronicles/logon.sh");
... environment setup in logon.sh is now available to Perl ...
Your Perl process is the parent of the shell process, so it won't inherit environment variables from it. Inheritance works the other way, from parent to child.
But when you run the script with backticks, as shown, the standard output of the script is returned to the Perl script. So, either modify the shell script to end with the echo $LOG statement you show, or create a new shell script that runs logon.sh and then has echo $LOG. Your Perl script would then be:
my $value = `./myscript.sh`;
print $value;
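If you'd rather not create myscript.sh at all, the same idea fits directly inside the backticks, since they already run a shell (a sketch; logon.sh and $LOG are the names from the question):

# Source logon.sh silently, then echo just the variable we want back.
my $value = `. /home/chronicles/logon.sh >/dev/null 2>&1; echo \$LOG`;
chomp $value;
print "$value\n";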