Perl run subroutine on different host - perl

I am new to Perl. I have a script where I need to jump to different hosts and compare filesystems, environments, etc.
I have one main jump server (MAIN_JUMP) and 5 jump servers to different clusters (CLUSTER_JUMP_1-5). I run my script on MAIN_JUMP, but I need to run some subroutines on CLUSTER_JUMP_*. In the subroutine I jump to a specific host in the cluster.
Is it possible to run a subroutine via ssh, or via some Perl module, directly on CLUSTER_JUMP_*? For now I use a double ssh to CLUSTER_JUMP_* and then to the specific host. It works in some cases, but SELECTs against Oracle databases, for example, do not work because of the quote marks.

Object::Remote will do this for you in a really easy way...
use strict;
use warnings;
use feature 'say';
use Object::Remote;
####################################################################
# Note that My::File must be installed on the machines you want to
# run this on!
####################################################################
# package My::File;
# use Moo;
# has path => ( is => 'ro', required => 1 );
# sub size {
#     my $self = shift;
#     -s $self->path;
# }
# 1;
####################################################################
use My::File;
## find the size of a local file
my $file1 = My::File->new( path => '/etc/hostname' );
say $file1->size;
## find the size of a file on a remote host
my $conn = Object::Remote->connect('host.example.net'); # ssh
my $file2 = My::File->new::on( $conn, path => '/etc/hostname' );
say $file2->size;
Update: for clarity, there's nothing special about "My::File". That's just an example of a module that you would write and ensure is installed properly on all the machines that you will be remotely accessing, plus the "client" machine. It can be any module written in an OO style.
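For example, a module in the same spirit as My::File could wrap the checks from the question. This is only a sketch with made-up names (My::HostCheck, df_root, CLUSTER_JUMP_1); like My::File, it must be installed on the client machine and on every host it will run on.
package My::HostCheck;
use Moo;
# Runs on whichever host the object was constructed on.
sub df_root {
    my $self = shift;
    return scalar qx(df -h /);
}
1;
Then, on MAIN_JUMP:
use feature 'say';
use Object::Remote;
use My::HostCheck;
my $conn  = Object::Remote->connect('CLUSTER_JUMP_1');   # ssh
my $check = My::HostCheck->new::on($conn);
say $check->df_root;                                      # output of df -h / as seen on CLUSTER_JUMP_1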

Related

If I declare a package with multiple levels of embedding, does the module named as the leaf node need to be in a subdirectory?

I am dealing with some legacy Perl code, which I want to change as little as possible. The main script, call it "my_script.pl", has a structure like this:
use MyCompany::AMDoc::HTMLFile;
use MyCompany::AMDoc::JavaScriptFile 2014.004_015;
print "Howdy"
...
The HTMLFile.pm module looks like this
package MyCompany::AMDoc::HTMLFile;
...
I am troubleshooting my_script.pl and wish to run it from the command line. (It is normally triggered by a Jenkins job.) When I try
perl -d ./my_script.pl
I get a message about HTMLFile.pm not being found. This is because HTMLFile.pm actually exists at the same level as my_script.pl in the filesystem.
If this was my own script, and I had the freedom to move things around, I would create directory structure
MyCompany/AMDoc/HTMLFile.pm
and I know the script would work. But I am reluctant to do this, because somehow, this all runs fine when triggered by the Jenkins job.
So is it possible to run this code, from the command line, without moving anything? I just want to do some troubleshooting. I haven't found discussion in the Perl documentation about what kinds of command line flags, such as "-I", might help me out here.
I would create directory structure MyCompany/AMDoc/HTMLFile.pm and I know the script would work.
No, moving the file to MyCompany/AMDoc/HTMLFile.pm relative to the script would not work unless the script already takes steps to add its directory to @INC.[1]
For example, adding the following in the script would achieve that:
use FindBin qw( $RealBin );
use lib $RealBin;
This can also be done from outside the script:
perl -I "$dir" "$dir"/my_script.pl # General case
perl -I . ./my_script.pl # Specific case
So is it possible to run this code, from the command line, without moving anything?
No, not without modifying the script.[2]
According to what you gave us, it has to be accessible as MyCompany/AMDoc/HTMLFile.pm relative to a directory in @INC.
In old versions of Perl (which included the current directory, ., in @INC), it would happen to work if the script's current work directory matched the directory containing the script. But that's just a fluke; these frequently don't match.
Well, you could use something of the form
perl -e'
    use FindBin qw( $RealBin );
    my $s = shift;
    push @INC, sub {
        my ( undef, $path ) = @_;
        my ( $qfn ) = $path =~ m{^MyCompany/AMDoc/(.*)}s
            or return;
        open( my $fh, "<", $qfn )
            or return;
        return $fh;
    };
    do( $s ) or die( $@ // $! );
' ./my_script.pl
Even then, that expects the script to end in a true value.
My initial assumption was wrong. My code was actually running on a Kubernetes pod, and that pod was configured with the correct directory structure for the module. In addition, PERL5LIB is set in the pod.
PERL5LIB=/perl5lib/perl-modules:/perl5lib/perl-modules/cpan/lib64/perl5:/perl5lib/perl-modules/cpan/share/perl5
and sure enough, that very first path has the path to my module.

Quotes and slashes surviving multiple layers

Goal
I need to effectively run a copy (cp) command but have explicit quote symbols preserved. This is needed so that the z/OS Unix System Services Korn shell properly recognizes the target of the copy as a traditional MVS dataset.
The complexity is that this step is part of an automated process. The command is generated by Perl. That Perl is executed on a separate Docker container via ssh. This adds another layer of escaping that needs to be addressed, in addition to the escaping needed by Perl.
Basically, docker is doing something like
perl myprogram.perl
which generates the necessary SSH commands, sending them to the mainframe which tries to run them. When I run the Perl script, it generates the command
sshpass -p passwd ssh woodsmn@bldbmsb.boulder.mycompany.com export _UNIX03=NO;cp -P "RECFM=FB,LRECL=287,BLKSIZE=6027,SPACE=\(TRACK,\(1,1\)\)" /u/woodsmn/SSC.D051721.T200335.S90.CP037 "//'WOODSMN.SSC.D051721.T200335.S90.CP037'"
and the mainframe returns an error:
cp: target "//'WOODSMN.SSC.D051721.T200335.S90.CP037'" is not a directory
The sshpass is needed because my sysadmin refuses to turn on authorized users, so my only option is to run sshpass and shove a password in. The password exposure is contained and we're not worried about this.
The first command
export _UNIX03=NO
tells z/OS to treat the -P option as an indicator for MVS dataset control blocks. That is, this is where we tell the system, hey this is a fixed length of 287 characters, allocate in tracks, etc. The dataset will be assumed to be new.
For the copy command, I'm wanting z/OS to copy the HFS file (basically a normal UNIX file)
/u/woodsmn/SSC.D051721.T200335.S90.CP037
into the fully qualifed MVS dataset
WOODSMN.SSC.D051721.T200335.S90.CP037
Sometimes MVS commands assume a high-level qualifier of basically the user's userid and allow the user to omit it. In this case, I've explicitly specified it.
To get z/OS to treat the target as a dataset, one needs to prefix it with two slashes (//).
To use a fully qualified dataset, the name needs to be surrounded by apostrophes (').
But, to avoid confusion within the Korn shell, the target needs to be surrounded by double quotes (").
So, somehow between Perl, the shell running my SSH command inside the Docker container (likely bash) and the receiving Korn shell on z/OS, it's not being properly interpreted.
My scaled down Perl looks like:
use strict;
use warnings;
sub putMvsFileByHfs;
use IO::Socket::IP;
use IO::Socket::SSL;
use IPC::Run3;
use Net::SCP;
my $SSCJCL_SOURCE_DIRECTORY = "/home/bluecost/";
my $SSCJCL_STORAGE_UNIT = "TRACK";
my $SSCJCL_PRIMARY_EXTENTS = "1";
my $SSCJCL_SECONDARY_EXTENTS = "1";
my $SSCJCL_HFS_LOCATION="/u/woodsmn";
my $SSCJCL_STAGING_HLQ = "WOODSMN";
my $COST_FILE="SSC.D051721.T200335.S90.CP037";
my $SSCJCL_USER_PW="mypass";
my $SCJCL_USER_ID="woodsmn";
my $SSCJCL_HOST_NAME="bldbmsb.boulder.mycompany.com";
my $MVS_FORMAT_OPTIONS = "-P ".qq(")."RECFM=FB,LRECL=287,BLKSIZE=6027,SPACE=\\("
    .${SSCJCL_STORAGE_UNIT}
    .",\\("
    .${SSCJCL_PRIMARY_EXTENTS}
    .","
    .${SSCJCL_SECONDARY_EXTENTS}
    ."\\)\\)".qq(");

putMvsFileByHfs(${MVS_FORMAT_OPTIONS}." ",
    $SSCJCL_SOURCE_DIRECTORY.'/'.$COST_FILE,
    ${SSCJCL_HFS_LOCATION}.'/'.$COST_FILE,
    ${SSCJCL_STAGING_HLQ}.'.'.$COST_FILE);
# This function copies the file first from my local volume mounted to the Docker container
# to my mainframe ZFS volume. Then it attempts to copy it from ZFS to a traditional MVS
# dataset. This second part is the failing part.
sub putMvsFileByHfs
{
    #
    # First copy the file from the local file system to the mainframe in HFS form (copy to USS)
    # This part works.
    #
    my $OPTIONS                    = shift;
    my $FULLY_QUALIFIED_LOCAL_FILE = shift;
    my $FULLY_QUALIFIED_HFS_FILE   = shift;
    my $FULLY_QUALIFIED_MVS_FILE   = shift;
    RunScpCommand($FULLY_QUALIFIED_LOCAL_FILE, $FULLY_QUALIFIED_HFS_FILE);
    #
    # I am doing something wrong here
    # Attempt to build the target dataset name.
    #
    my $dsnPrefix = qq(\"//');
    my $dsnSuffix = qq('\");
    my $FULLY_QUALIFIED_MVS_ARGUMENT = ${dsnPrefix}.${FULLY_QUALIFIED_MVS_FILE}.${dsnSuffix};
    RunSshCommand("export _UNIX03=NO;cp ${OPTIONS}".${FULLY_QUALIFIED_HFS_FILE}." ".${FULLY_QUALIFIED_MVS_ARGUMENT});
}
# This function marshals whatever command I want to run and mostly does it. I'm not having
# any connectivity issues. My command at least reaches the server and SSH will try to run it.
sub RunScpCommand()
{
    my $ssh_source = $_[0];
    my $ssh_target = $_[1];
    my ($out,$err);
    my $in = "${SSCJCL_USER_PW}\n";
    my $full_command = "sshpass -p ".${SSCJCL_USER_PW}." scp ".${ssh_source}." ".${SSCJCL_USER_ID}.'@'.${SSCJCL_HOST_NAME}.":".${ssh_target};
    print ($full_command."\n");
    run3 $full_command,\$in,\$out,\$err;
    print ($out."\n");
    print ($err."\n");
    return ($out,$err);
}
# This function marshals whatever command I want to run and mostly does it. I'm not having
# any connectivity issues. My command at least reaches the server and SSH will try to run it.
sub RunSshCommand
{
    my $ssh_command = $_[0];
    my $in = "${SSCJCL_USER_PW}\n";
    my ($out,$err);
    my $full_command = "sshpass -p ".${SSCJCL_USER_PW}." ssh ".${SSCJCL_USER_ID}.'@'.${SSCJCL_HOST_NAME}." ".${ssh_command};
    print ($full_command."\n");
    run3 $full_command,\$in,\$out,\$err;
    print ($out."\n");
    print ($err."\n");
    return ($out,$err);
}
Please forgive any Perl malpractices above as I'm new to Perl, though kind constructive pointers are appreciated.
First, let's build the values we want to pass to the program. We'll worry about building shell commands later.
my @OPTIONS = (
    -P => join(',',
        "RECFM=FB",
        "LRECL=287",
        "BLKSIZE=6027",
        "SPACE=($SSCJCL_STORAGE_UNIT,($SSCJCL_PRIMARY_EXTENTS,$SSCJCL_SECONDARY_EXTENTS))",
    ),
);
my $FULLY_QUALIFIED_LOCAL_FILE = "$SSCJCL_SOURCE_DIRECTORY/$COST_FILE";
my $FULLY_QUALIFIED_HFS_FILE = "$SSCJCL_HFS_LOCATION/$COST_FILE";
my $FULLY_QUALIFIED_MVS_FILE = "$SSCJCL_STAGING_HLQ.$COST_FILE";
my $FULLY_QUALIFIED_MVS_ARGUMENT = "//'$FULLY_QUALIFIED_MVS_FILE'";
Easy peasy.
Now it's time to build the commands to execute. The key is to avoid trying to do multiple levels of escaping at once. First build the remote command, and then build the local command.
use String::ShellQuote qw( shell_quote );

my $scp_cmd = shell_quote(
    "sshpass",
    -p => $SSCJCL_USER_PW,
    "scp",
    $FULLY_QUALIFIED_LOCAL_FILE,
    "$SSCJCL_USER_ID\@$SSCJCL_HOST_NAME:$FULLY_QUALIFIED_HFS_FILE",
);

run3 $scp_cmd, ...;
my $remote_cmd =
    '_UNIX03=NO ' .
    shell_quote(
        "cp",
        @OPTIONS,
        $FULLY_QUALIFIED_HFS_FILE,
        $FULLY_QUALIFIED_MVS_ARGUMENT,
    );

my $ssh_cmd = shell_quote(
    "sshpass",
    -p => $SSCJCL_USER_PW,
    "ssh", "$SSCJCL_USER_ID\@$SSCJCL_HOST_NAME", $remote_cmd,
);

run3 $ssh_cmd, ...;
But there's a much better solution since you're using run3. You can entirely avoid creating a shell on the local host, and thus entirely avoid having to create a command for it! This is done by passing a reference to an array containing the program and its args instead of passing a shell command.
use String::ShellQuote qw( shell_quote );

my @scp_cmd = (
    "sshpass",
    -p => $SSCJCL_USER_PW,
    "scp",
    $FULLY_QUALIFIED_LOCAL_FILE,
    "$SSCJCL_USER_ID\@$SSCJCL_HOST_NAME:$FULLY_QUALIFIED_HFS_FILE",
);

run3 \@scp_cmd, ...;

my $remote_cmd =
    '_UNIX03=NO ' .
    shell_quote(
        "cp",
        @OPTIONS,
        $FULLY_QUALIFIED_HFS_FILE,
        $FULLY_QUALIFIED_MVS_ARGUMENT,
    );

my @ssh_cmd = (
    "sshpass",
    -p => $SSCJCL_USER_PW,
    "ssh", "$SSCJCL_USER_ID\@$SSCJCL_HOST_NAME", $remote_cmd,
);

run3 \@ssh_cmd, ...;
By the way, it's insecure to pass passwords on the command line; other users on the machine can see them.
By the way, VAR=VAL cmd (as a single command) sets the env var for cmd. I used that shorthand above.
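For example, the remote command built above, _UNIX03=NO cp ..., sets _UNIX03 only for that one cp invocation, so no separate export is needed on the remote side.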
The parameter specifying the disk units is "TRK", not "TRACK", so this has to be
-P "RECFM=FB,LRECL=287,BLKSIZE=6027,SPACE=\(TRK,\(1,1\)\)"
Also, I never had to escape the parentheses when running such a command interactively from an SSH session. So this works for me:
-P "RECFM=FB,LRECL=287,BLKSIZE=6027,SPACE=(TRK,(1,1))"
Then, the error
cp: target "//'WOODSMN.SSC.D051721.T200335.S90.CP037'" is not a directory
indicates that cp understood it had to copy more than one source file, and thus required the final pathname to be a directory. This seems to confirm that cp did not run on the remote mainframe but in your local shell (as someone pointed out, caused by not escaping the semicolon), and your local UNIX does not understand the z/OS-specific MVS data set notation //'your.mvs.data.set'.
Instead of exporting _UNIX03=NO, you could replace
-P "RECFM=FB,LRECL=287,BLKSIZE=6027,SPACE=(TRK,(1,1))"
with
-W "seqparms='RECFM=FB,LRECL=287,BLKSIZE=6027,SPACE=(TRK,(1,1))'"
Then, only one command is to be run.
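Applied to the Perl above, that would mean roughly the following (a sketch only, assuming your cp accepts the -W form): build @OPTIONS like this and drop the _UNIX03=NO prefix from the remote command. shell_quote takes care of the embedded apostrophes, so the remote cp still receives seqparms='...' literally.
my @OPTIONS = (
    -W => "seqparms='RECFM=FB,LRECL=287,BLKSIZE=6027,SPACE=(TRK,(1,1))'",
);

my $remote_cmd = shell_quote(
    "cp",
    @OPTIONS,
    $FULLY_QUALIFIED_HFS_FILE,
    $FULLY_QUALIFIED_MVS_ARGUMENT,
);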

Perl SFTP: how to check a remote file doesn't exist

I am 1 day old to Perl. I was going through the API doc here and have a few basic questions.
$sftp = Net::SFTP::Foreign->new($host, autodie => 1);
my $ls = $sftp->ls("/bar");
# dies as: "Couldn't open remote dir '/bar': No such file"
Question
With autodie, will the connection be auto-closed?
We see in the above example how to use a folder; does similar syntax also work for a file?
Or does something like this make more sense?
my $sftp = Net::SFTP::Foreign->new($host, autodie => 1);
$sftp->find("/sdfjkalshfl",   # nonexistent directory
    on_error => sub { print "foo!\n"; $sftp->disconnect(); exit; });
I was trying to run the following code on my Windows machine:
use Net::SFTP::Foreign;
my $host = "demo.wftpserver.com";
my $sftp = Net::SFTP::Foreign->new($host ,ssh_cmd => 'plink',autodie => 1);
my $ls = $sftp->ls("/bar");
But I get the error
'plink' is not recognized as an internal or external command,
however, when I run plink from the Windows command line it works fine!
With autodie, will the connection be auto-closed?
Yes. When the program ends, everything is destroyed and connections are closed. That is also the case when the $sftp variable goes out of scope. Modules like this usually implement a DESTROY sub. Those are invoked when the object (which is just a reference in Perl) goes out of scope. There can be some cleanup in that sub. Another example that has that is DBI, and of course lexical filehandles (like $fh from an open call).
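A minimal sketch of that mechanism (the class and method names are made up for illustration; Net::SFTP::Foreign's real cleanup is more involved):
package My::Connection;
sub new        { my ($class) = @_; bless { open => 1 }, $class }
sub disconnect { my ($self) = @_; $self->{open} = 0; print "connection closed\n" }
sub DESTROY    { my ($self) = @_; $self->disconnect if $self->{open} }

package main;
{
    my $conn = My::Connection->new;
    # ... use $conn ...
}   # $conn goes out of scope here; DESTROY runs and the connection is closed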
We see in the above example how to use a folder; does similar syntax also work for a file?
No. The docs say ls is for a directory:
Fetches a listing of the remote directory $remote. If $remote is not given, the current remote working directory is listed.
But you can just do ls for the directory that the file you want is in, and use the wanted option.
my $ls = $sftp->ls( '/home/foo', wanted => qr/^filename.txt$/ );
Though with the autodie that should die, so if you don't want it to actually die here, you should wrap it in a Try::Tiny call or an eval.
use Try::Tiny;
# ...
my $ls = try {
    return $sftp->ls( '/home/foo', wanted => qr/^filename.txt$/ );
} catch {
    return;   # will return undef
};
# ls returns an array ref even when nothing matched, so also check that it is non-empty
say 'Found file "filename.txt" on remote server' if $ls && @$ls;
As to plink not being found: the Windows PATH that your Perl process sees is probably different from the one in your interactive command prompt.
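One way around that is to give Net::SFTP::Foreign the full path to plink instead of relying on PATH (the path below is only an example; point it at wherever plink.exe actually lives on your machine):
use Net::SFTP::Foreign;

my $host = "demo.wftpserver.com";
my $sftp = Net::SFTP::Foreign->new(
    $host,
    ssh_cmd => 'C:\Program Files\PuTTY\plink.exe',
    autodie => 1,
);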

Defining constants for a number of scripts and modules in perl

I am facing the following problem:
I am working on a Perl project consisting of a number of modules and scripts. The project must run on two different machines.
Throughout the project I call external programs, but the paths are different on the two machines, so I would like to define them once, globally for all files, and then only change this definition when I switch machines.
Since I am fairly new to Perl, I ask what would be a common way to accomplish this.
Should I use "use define" or global variables or something else?
Thanks in advance!
If I were you, I'd definitely do my best to avoid global variables - they are a sign of weak coding style (in any language) and make for maintenance hell.
Instead, you could create and use configuration files - one for each of your machines. Being on Perl, you have plenty of free, ready-to-use CPAN modules to choose from:
Config::Auto
Config::JSON
Config::YAML
And many, many others.
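For example, a minimal sketch with Config::JSON (the file path and key name below are made up; any of the modules above works along similar lines):
use Config::JSON;

# one config file per machine, e.g. /etc/myproject/config.json
my $config = Config::JSON->new("/etc/myproject/config.json");
my $gzip   = $config->get("gzip_path");   # "/usr/bin/gzip" on one machine, "/usr/local/bin/gzip" on the other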
Rather than defining globals which may or may not work, why not use a subroutine to find a working executable?
my $program = program_finder();

sub program_finder {
    -x && return $_ for qw( /bin/perl /usr/bin/perl /usr/local/bin/perl );
    die "Could not find a perl executable";
}
Create a module to hold your configuration information.
In the file My/Config.pm in your Perl library path:
package My::Config;
use warnings;
use strict;
use Carp ();

my %setup = (
    one => {path => '/some/path'},
    two => {path => '/other/path'},
);

my $config = $setup{ $ENV{MYCONFIG} }
    or Carp::croak "environment variable MYCONFIG must be set to one of: "
       . (join ' ' => keys %setup) . "\n";

sub AUTOLOAD {
    my ($key) = our $AUTOLOAD =~ /([^:]+)$/;
    exists $$config{$key} or Carp::croak "no config for '$key'";
    $$config{$key}
}
And then in your files:
use My::Config;
my $path = My::Config->path;
And of course on your machines, set the environment variable MYCONFIG to one of the keys in %setup.
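For example, launching a script as MYCONFIG=one perl some_script.pl (or exporting MYCONFIG in each machine's shell profile) selects the '/some/path' configuration; some_script.pl here stands in for any of your scripts that use My::Config.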

Why does SSL Web access work as root in an interactive shell, but not as user `apache` in a post-commit script?

I have a Perl program, intended to be run from a subversion post-commit script, which needs to connect to a HTTPS based Web API.
When I test the program from an interactive shell, as root, it works just fine.
When it runs from the post-commit script, it errors out, and the response from LWP is along the lines of "500 Connect failed".
There's some evidence that when run from the post-commit script, SSL isn't enabled, because when I set $ENV{HTTPS_DEBUG} = 1; and run it as root, I see debug output such as
SSL_connect:before/connect initialization
but from the post-commit script, none of the SSL debug info is printed.
The post-commit script runs as user apache.
I'm running 64-bit CentOS.
It's been years since I've done any Unix work, so I'm not sure what the next steps are to get SSL working in this case.
The difference in environments makes me suspicious. As with cron jobs, it may be that the environment, the @INC path, or the perl interpreter itself is sufficiently different that it can't find Crypt::SSLeay or whatever else you're using for SSL support.
As a troubleshooting step, try using this program in both your shell and in the post-commit hook to see if there is an environment difference between the two. This will dump several runtime variables that show what perl knows about its environment to a tempfile.
#!/usr/bin/perl
use Data::Dumper;
use File::Temp qw( tempfile );
use strict;
use warnings;
my $tempdir = '/tmp'; # Change this if necessary.
my( $fh, $fname ) = tempfile( "tempXXXXXX", DIR => $tempdir, UNLINK => 0 );
print $fh Data::Dumper->Dump( [ \@INC, \%INC, $^X, $0, $], \@ARGV, \%ENV ],
                              [ qw( @INC %INC ^X 0 ] @ARGV %ENV ) ] );
close( $fh );
# Change this if the post-commit hook doesn't pass stdout back to you.
print "Wrote data to $fname.\n";
__END__
If they differ substantially, your next step would be to make the environment in the post-commit hook the same as under your shell, e.g. by adding a use lib qw( /path/to/where/ssl/modules/are/installed ); line to your script's use section, by setting PERL5LIB, by using the full path to a different Perl interpreter, or whatever is appropriate. See perldoc perlvar for a description of some of the variables, if you're not familiar with them.
It's not a Perl issue. Perl is doing only what it is told. Figure out what is saying "http:" vs "https:" above Perl, and sort that out. You don't need to "configure Perl... to use SSL".
I had to run this:
setsebool httpd_can_network_connect=on
to allow the httpd process to make network connections