Service built with Perl - is it correct?

I pieced together a script I found, plus an existing script, to make something run as a service. I now have the following .pl script and the init.d start/stop scripts.
They work, but I am wondering if I did it right, because when I start the service and then start it again, it just starts again and gets a new PID (is this what I want? Shouldn't it say "already running"?).
I also didn't understand what the "cache" part for STDIN and STDOUT does.
Nor the file check (a file is recorded at the beginning and checked for a newer version in the final loop; I am not sure what that does).
Here goes:
#!/usr/bin/perl
#use strict;
use POSIX;
use DateTime;
use Fcntl qw(:flock);
use File::CacheDir qw(cache_dir);
Log("Initializing...");
# Find and read config file
if (@ARGV != 1) {
print("Usage: miniserv.pl <config file>\n");
die;
}
if ($ARGV[0] =~ /^([a-z]:)?\//i) {
$config_file = $ARGV[0];
}
else {
print("NO CORRECT CONFIG FILE SPECIFIED");
die;
}
%config = &read_config_file($config_file);
Log("Initialized...");
Log("Loaded config file.");
my $file = $0;
my $age = -M $file;
Log("File - ".$file.", age - ".$age);
# Change dir to the server root
@roots = ( $config{'root'} );
for($i=0; defined($config{"extraroot_$i"}); $i++) {
push(@roots, $config{"extraroot_$i"});
}
chdir($roots[0]);
Log("Changed working directory: ".$roots[0]);
Status("Daemonizing...");
my $pid = fork;
if(!defined $pid)
{
LogError("Unable to fork : $!");
die;
}
if($pid)
{
Log("Parent process exiting, let the daemon (".$pid.") go...");
sleep 3;
exit;
}
POSIX::setsid;
if(-e $config{'pidfile'})
{
open(PID, "<".$config{'pidfile'});
my $runningpid = <PID>;
close PID;
unlink $config{'pidfile'};
while(-e "/proc/".$runningpid)
{
Status("Waiting for ".$runningpid." to exit...");
Log("Waiting for ".$runningpid." to exit...");
sleep 1;
}
}
open(PID, ">".$config{'pidfile'}) || die "Failed to create PID file $config{'pidfile'} : $!";
print PID $$;
close PID;
Log("The daemon is now running...");
Status("Daemon running");
my $stdout = cache_dir({base_dir => $config{'root'}.'/cache', ttl => '1 day', filename => "STDOUT".$$});
my $stderr = cache_dir({base_dir => $config{'root'}.'/cache', ttl => '1 day', filename => "STDERR".$$});
Log("STDOUT : ".$stdout);
Log("STDERR : ".$stderr);
open STDIN, '/dev/null';
open STDOUT, '>>'.$stdout;
open STDERR, '>>'.$stderr;
while(1)
{
#### Code to be performed by the daemon
if($age - (-M $file))
{
Log("File modified, restarting");
open(FILE, $file ." |");
close(FILE);
last;
}
if(!-e $config{'pidfile'})
{
Log("Pid file doesn't exist, time go exit.");
last;
}
sleep 5;
}
sub Log
{
my $string = shift;
if($string)
{
my $time = DateTime->now();
if(open(LOG, ">>".$config{'logfile'}))
{
flock(LOG, LOCK_EX);
print LOG $$." [".$time->ymd." ".$time->hms."] - ".$string."\r\n";
close LOG;
}
}
}
sub LogError
{
my $string = shift;
if($string)
{
my $time = DateTime->now();
if(open(LOG, ">>".$config{'errorlog'}))
{
flock(LOG, LOCK_EX);
print LOG $$." [".$time->ymd." ".$time->hms."] - ".$string."\r\n";
close LOG;
}
}
}
sub Status
{
my $string = shift;
if($string)
{
$0 = "My Daemon- ".$string;
}
return $0;
}
# read_config_file(file)
# Reads the given config file, and returns a hash of values
sub read_config_file
{
my %rv;
if(-e $_[0])
{
open(CONF, $_[0]) || die "Failed to open config file $_[0] : $!";
while(<CONF>) {
s/\r|\n//g;
if (/^#/ || !/\S/) { next; }
/^([^=]+)=(.*)$/;
$name = $1; $val = $2;
$name =~ s/^\s+//g; $name =~ s/\s+$//g;
$val =~ s/^\s+//g; $val =~ s/\s+$//g;
$rv{$name} = $val;
}
close(CONF);
return %rv;
} else {
print("COULD NOT FIND CONFIG FILE");
die;
}
}
The start script:
#!/bin/sh
echo Starting reliand server in /usr/libexec/reliand
trap '' 1
LANG=
export LANG
#PERLIO=:raw
unset PERLIO
export PERLIO
PERLLIB=/usr/libexec/reliand
export PERLLIB
exec '/usr/libexec/reliand/miniserv.pl' /etc/reliand/miniserv.conf
The init.d script:
#!/bin/sh
# chkconfig: 235 99 10
# description: Start or stop the reliand server
#
### BEGIN INIT INFO
# Provides: reliand
# Required-Start: $network $syslog
# Required-Stop: $network
# Default-Start: 2 3 5
# Default-Stop: 0 1 6
# Description: Start or stop the reliand server
### END INIT INFO
start=/etc/reliand/start
stop=/etc/reliand/stop
lockfile=/var/lock/subsys/reliand
confFile=/etc/reliand/miniserv.conf
pidFile=/var/reliand/miniserv.pid
name='reliand'
case "$1" in
'start')
$start >/dev/null 2>&1 </dev/null
RETVAL=$?
if [ "$RETVAL" = "0" ]; then
touch $lockfile >/dev/null 2>&1
fi
;;
'stop')
$stop
RETVAL=$?
if [ "$RETVAL" = "0" ]; then
rm -f $lockfile
fi
pidfile=`grep "^pidfile=" $confFile | sed -e 's/pidfile=//g'`
if [ "$pidfile" = "" ]; then
pidfile=$pidFile
fi
rm -f $pidfile
;;
'status')
pidfile=`grep "^pidfile=" $confFile | sed -e 's/pidfile=//g'`
if [ "$pidfile" = "" ]; then
pidfile=$pidFile
fi
if [ -s $pidfile ]; then
pid=`cat $pidfile`
kill -0 $pid >/dev/null 2>&1
if [ "$?" = "0" ]; then
echo "$name (pid $pid) is running"
RETVAL=0
else
echo "$name is stopped"
RETVAL=1
fi
else
echo "$name is stopped"
RETVAL=1
fi
;;
'restart')
$stop ; $start
RETVAL=$?
;;
*)
echo "Usage: $0 { start | stop | restart | status }"
RETVAL=1
;;
esac
exit $RETVAL

As written, the script will always give you a new PID: nothing checks the PID file before forking, so a second start simply daemonizes again. If you want it to say "already running", test the PID file before daemonizing (and check that the recorded PID is still alive, e.g. with kill(0, $pid)), and exit if it is.
As for the cache part, that is where the daemon's standard streams go: a daemon has no terminal, so STDIN is reopened on /dev/null and STDOUT/STDERR are redirected to files under the cache directory (File::CacheDir also expires old ones after the ttl). The file check (-M recorded at startup and compared in the loop) lets the daemon notice when its own script file has been modified, launch a fresh copy of itself, and exit.
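To get the "already running" behaviour, the usual fix is to check the recorded PID before starting a second copy. Below is a minimal sketch in shell that could sit at the top of the start script; the pidfile path is the default from the init.d script above, and the same test can be done in the Perl script itself before forking.

```shell
#!/bin/sh
# already_running PIDFILE -> exit status 0 if the pidfile names a live process
already_running() {
    # -s: pidfile exists and is non-empty; kill -0: process exists
    [ -s "$1" ] && kill -0 "$(cat "$1")" 2>/dev/null
}

PIDFILE=/var/reliand/miniserv.pid   # default taken from the init.d script
if already_running "$PIDFILE"; then
    echo "reliand already running (pid $(cat "$PIDFILE"))"
    exit 1
fi
# ...otherwise fall through to the usual exec of miniserv.pl
```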


Perl subroutine not running when script executed from Nagios XI back-end

I have a Perl script that is executed from Nagios XI.
It has two subroutines: SendEmail and SendTraps.
The script works fine when executed manually by passing the required parameters, but it doesn't work when triggered from Nagios. The script gets executed but the subroutines are skipped.
echo is working, but the two subroutines are not working even if the condition is met.
if ( ( $hoststatetype =~ m/HARD/ ) && ( $hoststate =~ m/DOWN/ ) ) {
`echo "HOST::$prihost $hostoutput">>/tmp/failover_log.txt`;
sendMail();
send_trap();
}
Full script here:
use strict;
use warnings;
use Text::CSV;
# Declared all the variables here
# Parsing input arguments
if ( $#ARGV > -1 ) {
if ( $ARGV[0] eq "-nagiosxi_trigger" ) {
$prihost = $ARGV[1];
$hoststate = $ARGV[2];
$hoststatetype = $ARGV[3];
$hostoutput = $ARGV[4];
}
elsif ( $ARGV[0] eq "-manual_trigger" ) {
$comment = $ARGV[1];
$userid = $ARGV[2];
$flag = "Failover-Trigger_Manual";
print "Manual Failover triggered with comment: $comment by $userid\n";
$error_desc = "Manual Failover triggered with comment: $comment by $userid";
send_trap();
sendMail();
exit 0;
}
else {
print STDERR "Invalid parameter $ARGV[0] \n";
exit 1;
}
}
else {
print STDERR "ERROR:No Arguments Passed.\n";
exit 1;
}
# Check if Host or Service is in Hard/down state
if ( ( $hoststatetype =~ m/HARD/ ) && ( $hoststate =~ m/DOWN/ ) ) {
`echo "HOST::$prihost $hostoutput">>/tmp/failover_log.txt`;
sendMail();
send_trap();
}
elsif ( ( $hoststatetype =~ m/SOFT/ ) && ( $hoststate =~ m/DOWN/ ) ) {
`echo "HOST::$prihost $hostoutput">>/tmp/failover_log.txt`;
}
else {
`echo "HOST Good, $prihost $hostoutput">>/tmp/failover_log.txt`;
}
# Sub-Routines
sub failover {
my $csv = Text::CSV->new({ sep_char => ',' }) or die "Cannot use CSV: ".Text::CSV->error_diag();
my $file = "myxilist";
my $primary;
my $secondary;
#my $xienv;
my $host = `hostname`;
chomp $host;
open( my $data, '<', $file ) or die "Could not open '$file' $!\n";
while ( my $xi = <$data> ) {
chomp $xi;
if ( $csv->parse($xi) ) {
my @fields = $csv->fields();
if ( $fields[0] =~ m/$host/ ) {
$primary = $fields[1];
$secondary = $fields[0];
$xienv = $fields[2];
}
elsif ( $fields[1] =~ m/$host/ ) {
$primary = $fields[0];
$secondary = $fields[1];
$xienv = $fields[2];
}
}
else {
warn "Line could not be parsed: $xi\n";
exit 1;
}
}
my $failovermsg="failover successful from $primary to $secondary server";
return $failovermsg;
}
sub sendMail {
# Build the list for mailing out results
my $mailSubject;
my $mailID = "test\@mail.com";
my @results = failover();
$mailSubject = "Failover Successful on $xienv instance";
print "Sending email to $mailID \n";
`echo "sending Email">>/tmp/failover_log.txt`;
open MAILX, "|/usr/bin/mailx -s \"$mailSubject\" $mailID " or die $!;
print MAILX "@results";
close MAILX;
return;
}
sub send_trap {
# Sending SNMP traps
my @results = failover();
my $trap = `/usr/bin/snmptrap -v 2c -c public tcp:server:1010 '' MIB::Event Hostname s "$xienv" nSvcDesc s "$flag" nSvcStateID i 2 nSvcOutput s "@results"`;
return;
}
Any thoughts what could be missing?
The issue was in the failover() subroutine. I was opening a file, "myxilist", that is present in the same directory as the script.
So the script worked fine when called manually, but when it is triggered from the application it is executed from some other directory, and the failover sub exits because it cannot open the file.
I provided the full path of the file and the script works fine.
Thank you all for your help.
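For anyone hitting the same problem: the general fix is to resolve data files relative to the script's own location instead of the current working directory, since schedulers like Nagios run scripts from an arbitrary cwd. A sketch of that pattern in shell (in Perl, FindBin serves the same purpose); the file name myxilist is taken from the script above:

```shell
# resolve_sibling SCRIPT_PATH NAME -> absolute path of NAME located in the
# same directory as SCRIPT_PATH, regardless of the caller's cwd
resolve_sibling() {
    printf '%s/%s\n' "$(CDPATH= cd -- "$(dirname -- "$1")" && pwd)" "$2"
}

# e.g. inside a script: datafile=$(resolve_sibling "$0" myxilist)
```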

eliminate empty files in a subroutine in perl

I want to add code to the next script to eliminate the empty output files.
The script converts a single fastq file, or all the fastq files in a folder, to fasta format; every output fasta file keeps the name of its fastq file. The script has an option to exclude all sequences that contain a given number of consecutive N's (e.g. NNNNNNNNNNNNNNNNNNATAGTGAAGAATGCGACGTACAGGATCATCTA). I added this option because some sequences contain only N's. For example, if the -n option is 15 (-n 15), it excludes all sequences with 15 or more consecutive N's. Up to this point the code works well, but it generates empty files for those fastq files in which every sequence is excluded. I want to eliminate all the empty files (those without sequences) and add a count of how many files were eliminated because they were empty.
Code:
#!/usr/bin/env perl
use strict;
use warnings;
use Getopt::Long;
my ($infile, $file_name, $file_format, $N_repeat, $help, $help_descp,
$options, $options_descrp, $nofile, $new_file, $count);
my $fastq_extension = "\\.fastq";
GetOptions (
'in=s' => \$infile,
'N|n=i' =>\$N_repeat,
'h|help' =>\$help,
'op' =>\$options
);
# Help
$help_descp =(qq(
Usage:
fastQF -in fastq_folder/ -n 15
or
fastQF -in file.fastq -n 15
));
$options_descrp =(qq(
-in infile.fastq or fastq_folder/ required
-n exclude sequences with more than N repeat optional
-h Help description optional
-op option section optional
));
$nofile =(qq(
ERROR: "No File or Folder Were Chosen !"
Usage:
fastQF -in folder/
Or See -help or -op section
));
# Check Files
if ($help){
print "$help_descp\n";
exit;
}
elsif ($options){
print "$options_descrp\n";
exit;
}
elsif (!$infile){
print "$nofile\n";
exit;
}
#Subroutine to convert from fastq to fasta
sub fastq_fasta {
my $file = shift;
($file_name = $file) =~ s/(.*)$fastq_extension.*/$1/;
# eliminate old files
my $oldfiles= $file_name.".fasta";
if (-e $oldfiles){
unlink $oldfiles;
}
open LINE, '<', $file or die "can't read or open $file\n";
open OUTFILE, '>>', "$file_name.fasta" or die "can't write $file_name\n";
while (
defined(my $head = <LINE>) &&
defined(my $seq = <LINE>) &&
defined(my $qhead = <LINE>) &&
defined(my $quality = <LINE>)
) {
substr($head, 0, 1, '>');
if (!$N_repeat){
print OUTFILE $head, $seq;
}
elsif ($N_repeat){
my $number_n=$N_repeat-1;
if ($seq=~ m/(n)\1{$number_n}/ig){
next;
}
else{
print OUTFILE $head, $seq;
}
}
}
close OUTFILE;
close LINE;
}
# execute the subroutine to extract the sequences
if (-f $infile) { # -f: a single plain file, not a folder
fastq_fasta($infile);
}
else {
foreach my $file (glob("$infile/*.fastq")) {
fastq_fasta($file);
}
}
exit;
I have tried to use the next code outside of the subroutine (before exit), but it only works for the last file:
$new_file =$file_name.".fasta";
foreach ($new_file){
if (-z $new_file){
$count++;
if ($count==1){
print "\n\"The chosen file contains no sequences\"\n";
print " \"or was excluded due to -n $N_repeat\"\n\n";
}
elsif ($count >=1){
print "\n\"$count files contain no sequences\"\n";
print " \"or were excluded due to -n $N_repeat\"\n\n";
}
unlink $new_file;
}
}
I have also tried something similar inside the subroutine, but that doesn't work either.
Any advice?
Thanks so much!
You should check whether anything was written to your new file at the end of the fastq_fasta subroutine. Just put this code after the close OUTFILE statement:
close OUTFILE;
close LINE;
my $outfile = $file_name.".fasta";
if (-z $outfile)
{
unlink $outfile or die "Error while deleting '$outfile': $!";
}
Additionally, it would be better to add the same die/warn check to the other unlink call. That way empty files are reliably deleted.
Maybe another solution if you are not fixed to perl, but allowed to use sed and a bash loop:
for i in *.fastq
do
out=$(dirname "$i")/$(basename "$i" .fastq).fasta
sed -n '1~4{s/^@/>/;N;p}' "$i" > "$out"
if [ ! -s "$out" ]
then
echo "Empty output file $out"
rm "$out"
fi
done
Hope that helps!
Best Frank
The easiest thing to do is probably to add a counter to your subroutine to keep track of the number of sequences in the outfile:
sub fastq_fasta {
my $counter1 = 0;
my $file = shift;
($file_name = $file) =~ s/(.*)$fastq_extension.*/$1/;
# eliminate old files
my $oldfiles= $file_name.".fasta";
if (-e $oldfiles){
unlink $oldfiles;
}
open LINE, '<', $file or die "can't read or open $file\n";
open OUTFILE, '>>', "$file_name.fasta" or die "can't write $file_name\n";
while (
defined(my $head = <LINE>) &&
defined(my $seq = <LINE>) &&
defined(my $qhead = <LINE>) &&
defined(my $quality = <LINE>)
) {
$counter1 ++;
substr($head, 0, 1, '>');
if (!$N_repeat){
print OUTFILE $head, $seq;
}
elsif ($N_repeat){
my $number_n=$N_repeat-1;
if ($seq=~ m/(n)\1{$number_n}/ig){
$counter1 --;
next;
}
else{
print OUTFILE $head, $seq;
}
}
}
close OUTFILE;
close LINE;
return $counter1;
}
You can then delete files when the returned count is zero:
if (-f $infile) { # -f: a single plain file, not a folder
fastq_fasta($infile);
}
else {
foreach my $file (glob("$infile/*.fastq")) {
if (fastq_fasta($file) == 0) {
$file =~ s/(.*)$fastq_extension.*/$1.fasta/;
unlink $file;
}
}
}
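Either way, the cleanup plus the requested count can also be done in one pass after all conversions finish. A minimal sketch, assuming the .fasta outputs all sit in one directory:

```shell
# delete_empty DIR -> remove zero-byte .fasta files in DIR, print how many
delete_empty() {
    count=0
    for f in "$1"/*.fasta; do
        [ -e "$f" ] || continue        # glob matched nothing
        if [ ! -s "$f" ]; then         # -s: true only when size > 0
            rm -- "$f"
            count=$((count + 1))
        fi
    done
    echo "$count"
}
```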

perl Parallel::ForkManager stuck in this script?

I am trying to run a Perl script in parallel and got stuck at a point here. See the example script:
If I run it without the -fork 4 option, it runs fine:
perl perl_parallel_forkmanager_ls.pl -limit 10
799c89a4c78eafbfb9e7962b8e9705f7 /etc/apt/trusted.gpg
ff163e8e9e38670705a9f2cea8b530c9 /etc/apt/trusted.gpg~
075e92fd5c6f0dcdad857603f03dd3a5 /etc/bash_completion.d/R
b269c1383a87a7da2cc309c929ba35ca /etc/bash_completion.d/grub
7cbefff45508d2ed69576bebc80e66bb /etc/bash_completion.d/docker
facb1fdc0fcf7f6b150442d1a9036795 /etc/bash_completion.d/pulseaudio-bash-completion.sh
69dfca7a7b55181cef06b9ed28debb20 /etc/gnome/defaults.list
a65e81e55558941ce0f3080b9333e18f /etc/sensors3.conf
9e87bc86a77261acfb2bae618073a787 /etc/grub.d/20_linux_xen
8039709ee9648dabda0cdca713f2ed49 /etc/grub.d/30_os-prober
1bc18861cc2438517ce6b6c22fd4fa49 /etc/grub.d/10_linux
But if I run it with a value of -fork 4 smaller than the value of -limit 10, it ignores the value of limit:
perl perl_parallel_forkmanager_ls.pl -fork 4 -limit 10 2>/dev/null | wc -l
80
Any ideas?
#!/usr/bin/perl
use strict;
use warnings;
use Parallel::ForkManager;
use Getopt::Long;
my $dir = '/etc'; my $fork = 1; my $size = '9876'; my $limit;
my $verbose;
GetOptions(
'dir:s' => \$dir,
'fork:s' => \$fork,
'size:s' => \$size,
'limit:s' => \$limit,
'verbose' => \$verbose,
);
my $cmd; my $ret;
$cmd = "find $dir -size +".$size."c -type f 2>/dev/null";
open(P, "-|", "$cmd") or die "$cmd -- $!";
my $pm; $pm=new Parallel::ForkManager($fork) if ($fork > 1);
my $count = 0;
while (<P>) {
if ($fork > 1) {
$pm->start and next;
}
my $file = $_; chomp $file;
my $md5 = `md5sum $file`;
print "$md5";
$pm->finish if ($fork > 1);
$count++;
last if (defined $limit && $count > $limit);
};
$pm->wait_all_children if ($fork > 1);
close P;
The statements after $pm->finish are never reached when -fork is greater than 1: in each child, $pm->finish exits the process, and in the parent, $pm->start and next jumps straight to the next line of input. So $count is never incremented and the limit never fires. You should change the order of the statements in the while loop:
while (<P>) {
$count++;
last if (defined $limit && $count > $limit);
if ($fork > 1) {
$pm->start and next;
}
my $file = $_; chomp $file;
my $md5 = `md5sum $file`;
print "$md5";
$pm->finish if ($fork > 1);
};

Automate paired analysis

I have a growing number of files to process using a simple Perl script I wrote. The script takes two files as input and prints an output. I want to use a bash script (or anything really) to automate the following usage:
perl Program.pl GeneLevels_A GeneLevels_B > GeneLevels_A_B
with every paired, non-directional combination of files in a particular directory.
Here is the Perl script:
#!/usr/bin/perl
use strict;
use warnings;
die "Usage: $0 <File_1> <File_2>\n" unless @ARGV == 2;
my $file1 = shift @ARGV;
my $file2 = shift @ARGV;
my %hash1;
my %hash2;
my $counter = 0;
my $amt = 25;
my $start = 244 - $amt;
open (REF, $file1);
while (<REF>) {
my $line = $_;
chomp $line;
if ($counter < $start) {
$counter++;
next;
}
my @cells = split('\t', $line);
my $ID = $cells[2];
my $row = $cells[0];
$hash1{$ID} = $row;
$counter++;
}
close REF;
$counter = 0;
open (FILE, $file2);
while (<FILE>) {
my $line = $_;
chomp $line;
if ($counter < $start) {
$counter++;
next;
}
my @cells = split('\t', $line);
my $ID = $cells[2];
my $row = $cells[0];
$hash2{$ID} = $row;
$counter++;
}
close FILE;
while ( my ($key, $value) = each(%hash1) ) {
if ( exists $hash2{$key} ) {
print "$key\t$value\t$hash2{$key}\n";
}
}
A good solution would allow me to run the Perl script on every file with an appropriate suffix.
An even better solution would assess the suffixes of existing files to determine which pairs of files have already been processed this way and omit those. For example if File_A, File_B, File_C, and File_B_C exist then only File_A_B and File_A_C would be produced. Note that File_A_B and File_B_A are equivalent.
This should work. Adding better checks for bad arguments would be worthwhile:
#!/bin/bash
if [ $# != 2 ]; then
echo "usage: pair <suffix1> <suffix2>"
exit
fi
suffix1=$1
suffix2=$2
for file1 in *_${suffix1}; do
fileCheck=$(echo $file1 | sed -e "s#_$suffix2##")
if [ "$fileCheck" = "$file1" ]; then
file2=${file1/_$suffix1/_$suffix2}
if [[ ( ! -f ${file1}_${suffix2} ) && ( ! -f ${file2}_${suffix1} ) ]]; then
echo processing ${file1}_${suffix2}
perl Program.pl $file1 $file2 > ${file1}_${suffix2}
fi
fi
done
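If more than two suffixes are involved, the unordered pairs themselves can be generated generically. A sketch of just the pairing logic, assuming the file names are passed as arguments; each emitted pair could then be fed to perl Program.pl FILE1 FILE2 > FILE1_FILE2:

```shell
# pairs A B C ... -> print each unordered pair of arguments exactly once,
# tab-separated, so A-B is produced but B-A is not
pairs() {
    while [ "$#" -gt 1 ]; do
        first=$1; shift
        for second in "$@"; do
            printf '%s\t%s\n' "$first" "$second"
        done
    done
}
```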

Compare two directories for differences in regular files

I need to compare two directories, checking every file in both for files of the same name. If the same name occurs, check whether the files have the same content; if so, print "> file <". If a file has the same name but not the same content, print "< file >". If a file is in directory 1 but not in directory 2, print "<<< file1", and likewise ">>> file2" for a file in dir 2 but not dir 1.
I have been having trouble: my code doesn't even detect when the file names are equal.
#!/usr/bin/perl -w
use File::Basename;
@files1 = `/usr/bin/find $ARGV[0] -print`;
chop @files1;
@files2 = `/usr/bin/find $ARGV[1] -print`;
chop @files2;
here:
for ($i=1; @files1 >= $i; $i++) {
for ($x=1; @files2 >= $x; $x++) {
$file1 = basename($files1[$i]);
$file2 = basename($files2[$x]);
if ($file1 eq $file2) {
shift @files1;
shift @files2;
$result = `/usr/bin/diff -q $files1[$i] $files2[$x]`;
chop $result;
if ($result eq "Files $files1[$i] and $files2[$x] differ") {
print "< $file1 >\n";
next here;
}
else {
print "> $file1 <\n";
}
}
else {
if ( !-e "$files1[$i]/$file2") { print ">>> $file2\n";}
unless ( -e "$files2[$x]/$file1") { print "<<< $file1\n";}
}
}
}
Try this:
diff -aqr /dir1 /dir2
or :
#!/bin/bash
for f; do
  for g; do
    if [[ "$f" != "$g" ]] && ! cmp -s "$f" "$g"; then
      echo "$f differs from $g"
    fi
  done
done
Run it as:
./script dir1/* dir2/*
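To produce the exact markers the question asks for, the output of diff can be post-processed. A sketch assuming GNU diff (-s also reports identical files, -q keeps one line per file, -r recurses); only the basename is printed, and paths containing spaces are not handled:

```shell
# compare_dirs DIR1 DIR2 -> the question's "< >", "> <", "<<<", ">>>" markers
compare_dirs() {
    dir1=$1 dir2=$2
    diff -sqr "$dir1" "$dir2" | while IFS= read -r line; do
        case $line in
            "Only in $dir1"*)   echo "<<< ${line##*: }" ;;   # only in dir 1
            "Only in $dir2"*)   echo ">>> ${line##*: }" ;;   # only in dir 2
            *" are identical")  f=${line#Files }; f=${f%% *}
                                echo "> $(basename "$f") <" ;;
            *" differ")         f=${line#Files }; f=${f%% *}
                                echo "< $(basename "$f") >" ;;
        esac
    done
}
```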