Export Snapshots from Accurev using Perl - perl
I am trying to use a Perl script to pull out all snapshots from Accurev but I'm having issues.
I can run this command fine on its own:
accurev show -p myDepot streams
This gets all the streams for me, but when I put it into my Perl script I come up empty and can't feed the output into a foreach loop.
Here's what I have:
#!/usr/bin/perl
#only tested on Windows - not supported by AccuRev
use XML::Simple ;
use Data::Dumper ;
use strict ;
use Time::Piece;
### Modify to reflect your local AccuRev client path
$::AccuRev = "/cygdrive/c/\"Program Files (x86)\"/AccuRev/bin/accurev.exe" ;
my ($myDepot, $myDate, $stream_raw, $stream_xml, $streamNumber, $streamName, $counter, $snapTime) ;
### With AccuRev 4.5+ security, if you want to ensure you are authenticated before executing the script,
### uncomment the following line and use a valid username and password.
### system "$::AccuRev login -n username password" ;
chomp($myDepot = $ARGV[0]);
chomp($myDate = $ARGV[1]);
if ($myDepot eq "") {
print "\nUsage: perl snapshot_streams.pl <depot_name>\n" ;
print "This script will return the name of the snapshot streams for the depot passed in...\n" ;
exit(1) ;
}
$stream_raw = `$::AccuRev show -p $myDepot -fx streams`;
$stream_xml = XMLin($stream_raw, forcearray => 1, suppressempty => '', KeyAttr => 'stream') ;
if ($stream_xml eq "") {
print "\nDepot $myDepot doesn't exist...\n" ;
exit(1) ;
}
print "List of snapshots in depot $myDepot:\n";
$counter = 0 ;
foreach $stream_xml (@{$stream_xml->{stream}})
{
if ($stream_xml->{type} eq "snapshot") {
$streamName = $stream_xml->{name};
$snapTime = scalar localtime($stream_xml->{time});
my $datecheck = $snapTime->strftime('%Y%m%d');
if ($datecheck >= $myDate){
print "Snapshot Name: $streamName \t\t\t Time: $snapTime\n" ;
}
$counter = $counter + 1 ;
}
}
if ( $counter == 0 ) {
print "\nNo snapshots found in depot $myDepot...\n" ;
}
The problem was that the AccuRev path was not resolving correctly, so I was not getting the correct output. Since I have the AccuRev home directory listed in my environment variables, I can call accurev directly and save its output to an XML file, which is then referenced in the XMLin call.
In addition to this, the command string had to be in "" not '' or ``.
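To illustrate the quoting point, here is a minimal sketch of the two ways to capture an external command's output in Perl. It uses `$^X` (the path of the currently running perl) as a stand-in for the accurev executable, since accurev itself won't be on every machine; the file name `captured.txt` is just an example.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Build the command string in double quotes so variables interpolate;
# single quotes would leave $depot literal, and backticks here would
# *execute* the string instead of building it.
my $depot   = "myDepot";
my $command = "$^X -e \"print qq(depot=$depot)\"";   # $^X = current perl binary

# Option 1: backticks capture stdout directly into a variable.
my $output = `$command`;
print "captured: $output\n";

# Option 2: system() plus shell redirection writes stdout to a file,
# which can then be read back later (as the final script does with XMLin).
system("$command > captured.txt") == 0 or die "command failed: $?";
```

The final script below uses the second approach, writing `snapshot_streams.xml` and handing the file name to XMLin.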
Below is the end result with an additional argument to specify the date range of snapshots:
#!C:\Strawberry\perl\bin\perl.exe
#only tested on Windows - not supported by AccuRev
use XML::Simple qw(:strict);
use English qw( -no_match_vars );
use Data::Dumper ;
use strict ;
use Time::Piece;
my ( $login, $xml, $command, $myDepot, $myDateStart, $myDateEnd, $stream_xml, $streamNumber, $streamName, $counter, $snapTime) ;
###If Accurev is already in your environment variables, you can call it without setting the path
###otherwise uncomment and update script
###$accurev = "/cygdrive/c/\"Program Files (x86)\"/AccuRev/bin/accurev.exe";
### With AccuRev 4.5+ security, if you want to ensure you are authenticated before executing the script,
### uncomment the following line and use a valid username and password.
###$login = "accurev login -n username password" ;
###system($login);
chomp($myDepot = $ARGV[0]);
chomp($myDateStart = $ARGV[1]);
chomp($myDateEnd = $ARGV[2]);
if ($myDepot eq "") {
print "\nUsage: perl snapshot_streams.pl <depot_name>\n" ;
print "This script will return the name of the snapshot streams for the depot passed in...\n" ;
exit(1) ;
}
$command = "accurev show -p $myDepot -fx streams > snapshot_streams.xml";
system($command);
$stream_xml = XMLin("snapshot_streams.xml", ForceArray => 1, SuppressEmpty => '', KeyAttr => 'stream') ;
if ($stream_xml eq "") {
print "\nDepot $myDepot doesn't exist...\n" ;
exit(1) ;
}
print "List of snapshots in depot $myDepot:\n";
$counter = 0 ;
foreach $stream_xml (@{$stream_xml->{stream}})
{
if ($stream_xml->{type} eq "snapshot") {
$streamName = $stream_xml->{name};
$snapTime = scalar localtime($stream_xml->{time});
my $datecheck = $snapTime->strftime('%Y%m%d');
if ($datecheck >= $myDateStart && $datecheck <= $myDateEnd){
print "Snapshot Name: $streamName \t\t\t Time: $snapTime\n" ;
}
$counter = $counter + 1 ;
}
}
if ( $counter == 0 ) {
print "\nNo snapshots found in depot $myDepot...\n" ;
}
Here is the call:
perl -w snapshot.pl <depot> "FromDate" "ToDate" > output.txt 2>&1
The output looks something like this:
List of snapshots in depot <Depot_Name>:
Snapshot Name: Product_1_SS Time: Tue Jul 04 10:00:05 2018
Snapshot Name: Product_2_SS Time: Tue Jul 07 11:00:15 2018
Snapshot Name: Product_3_SS Time: Tue Jul 15 12:30:30 2018
Related
Perl Script Not Liking Date Extension
Why do I receive the error complaining about the parenthesis? sh: syntax error at line 1 : `)' unexpected when adding this date extension to the new file: mv abc abc$(date +%Y%m%d%H%M%S). It seems that it doesn't like that last parenthesis.
#!/usr/bin/perl
# ===========================================
#
# Script to watch POEDIACK file size
#
# - Comments -
#
# script will check the file size of the POEDIACK file in
# $LAWDIR/$PLINE/edi/in.
# If it's > 1 gig, it will send notification via email
#
# ===========================================
#
use strict;
use POSIX qw(strftime);

# get env vars from system
my $LAWDIR = $ENV{'LAWDIR'};
my $PLINE = $ENV{'PLINE'};
#my $email_file = "/lsf10/monitors/poediack.email";
my $curr_date = strftime('%m%d%Y', localtime);

my $ack_file = "$LAWDIR" . "/$PLINE" . "/edi/in/POEDIACK";
my $ack_location = "$LAWDIR" . "/$PLINE" . "/edi/in/";
my $mv_location = "$LAWDIR" . "/$PLINE" . "/edi/in/Z_files";

my $ack_file_limit = 10;
#my $ack_file_limit = 1000000000;
my $ack_file_size;

if( -e $ack_file) {
    $ack_file_size = -s $ack_file;
    if ( $ack_file_size > $ack_file_limit ) {
        `compress -vf $ack_file`;
        `mv $mv_location\$ack_file.Z $mv_location\$ack_file.Z.$(date +%Y%m%d%H%M%S)`;
    }
}
else {
    print "POEDIACK File not found: $ack_file\n";
}
### end perl script ###
$( is being interpreted as a variable: it is the group ID of the process. You need to escape it. And you probably shouldn't escape $ack_file:
`mv $mv_location$ack_file.Z $mv_location$ack_file.Z.\$(date +%Y%m%d%H%M%S)`;
It's safer and faster to avoid complicated shell commands and use rename instead:
use autodie;
use POSIX qw(strftime);

my $timestamp = strftime('%Y%m%d%H%M%S', localtime);
rename "$mv_location$ack_file.Z", "$mv_location$ack_file.Z.$timestamp";
Or use an existing log rotator.
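Here is a runnable sketch of the rename approach. It creates a hypothetical scratch file in the current directory (standing in for "$mv_location$ack_file.Z", which depends on the poster's environment variables) and rotates it with a pure-Perl timestamp, so nothing is ever handed to sh.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use autodie;                      # open/close/rename now die on failure
use POSIX qw(strftime);

# Hypothetical stand-in for the compressed file "$mv_location$ack_file.Z"
my $compressed = "POEDIACK.Z";
open my $fh, '>', $compressed;    # create a dummy file to rotate
print {$fh} "dummy payload\n";
close $fh;

# Build the timestamp in Perl instead of shelling out to $(date ...),
# so there is nothing for sh to misparse.
my $timestamp = strftime('%Y%m%d%H%M%S', localtime);
rename $compressed, "$compressed.$timestamp";

print "rotated to $compressed.$timestamp\n";
```

With autodie loaded, a failed rename raises an exception instead of being silently ignored, which is another advantage over backticked mv.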
CMG biotools - a Perl based tool
sub runBlast {
    # order is preserved !
    for ( my $subject_counter = 0 ; $subject_counter < scalar ( @{$xmlcfg->{sources}[0]->{entry}} ) ; $subject_counter++ ) {
        my $subjectTitle = $INFO{$subject_counter}{title};
        my $subjectSubtitle = $INFO{$subject_counter}{subtitle};
        for ( my $query_counter = 0 ; $query_counter < scalar ( @{$xmlcfg->{sources}[0]->{entry}} ) ; $query_counter++ ) {
            my $queryTitle = $INFO{$query_counter}{title};
            my $querySubtitle = $INFO{$query_counter}{subtitle};
            $tab_h{"$query_counter-$subject_counter"} = $cm->start();
            unless ( $tab_h{"$query_counter-$subject_counter"} ) {
                my $blastreport_scratch = "$scratch/$query_counter-$subject_counter.blastout.gz";
                my $jobid = md5 ( "$scratch/$query_counter.fsa" , "$scratch/$subject_counter.fsa" ) ;
                system "$perl /usr/biotools/indirect/cacher --id='$jobid' --source='$cache_source' -action get > $blastreport_scratch";
                if ( $? != 0 or $clean or -s $blastreport_scratch == 0) {
                    print STDERR "# jobid $jobid not in cache - redoing\n";
                    my $cmd = "$BLASTALL -F 0 -p blastp -d $scratch/$subject_counter.fsa -e 1e-5 -m 7 < $scratch/$query_counter.fsa | $TIGRCUT | gawk '{print \$1\"\\t\"\$2}' | $gzip > $blastreport_scratch";
                    system $cmd;
                    die "# failed at '$cmd'\n" if $? != 0;
                    system "$perl /usr/biotools/indirect/cacher --id=$jobid --source=$cache_source -action put -expire 100 < $blastreport_scratch";
                } else {
                    my $s = -s $blastreport_scratch;
                    print STDERR "# fetched jobid $jobid from cache ( $s bytes)\n";
                }
                exit;
            }
        }
    }
    $cm->wait_all_children;
}
I am a complete beginner in Perl programming. I had to run this tool called CMG Biotools, which is coded in Perl; I am attaching part of its code here. Can anyone please tell me when the "jobid ... not in cache - redoing" message will be displayed?
Your script, blastmatrix, attempts to use an external (to this script) Perl tool called "cacher" (/usr/biotools/indirect/cacher), passing the parameters -action get, --source='$cache_source' and --id='$jobid'. So the script is attempting to retrieve a job with ID $jobid from a caching utility, and it is failing. Having failed, the reference to "redoing" appears to be an attempt to run BLASTALL, that is /usr/biotools/blast/bin/blastall, and then retry the same cache command.
So, if all you are seeing is the message but the script is working, then I'd guess (and that's all I can do) that BLASTALL is cleaning up some issue (an unexpected file, a missing file, who knows) and the second attempt at the cache is working. If it's not working at all, I can only say that it finally fails when it attempts to use the cacher, which is a different thing from saying "the root cause is ...". Note: all the above is speculative.
insert null values in missing rows of file
I have a text file which consists some data of 24 hours time stamp segregated in 10 minutes interval. 2016-02-06,00:00:00,ujjawal,36072-2,MT,37,0,1 2016-02-06,00:10:00,ujjawal,36072-2,MT,37,0,1 2016-02-06,00:20:00,ujjawal,36072-2,MT,37,0,1 2016-02-06,00:40:00,ujjawal,36072-2,MT,37,0,1 2016-02-06,00:50:00,ujjawal,36072-2,MT,42,0,2 2016-02-06,01:00:00,ujjawal,36072-2,MT,55,0,2 2016-02-06,01:10:00,ujjawal,36072-2,MT,41,0,2 2016-02-06,01:20:00,ujjawal,36072-2,MT,46,0,2 2016-02-06,01:30:00,ujjawal,36072-2,MT,56,0,3 2016-02-06,01:40:00,ujjawal,36072-2,MT,38,0,2 2016-02-06,01:50:00,ujjawal,36072-2,MT,49,0,2 2016-02-06,02:00:00,ujjawal,36072-2,MT,58,0,4 2016-02-06,02:10:00,ujjawal,36072-2,MT,43,0,2 2016-02-06,02:20:00,ujjawal,36072-2,MT,46,0,2 2016-02-06,02:30:00,ujjawal,36072-2,MT,61,0,2 2016-02-06,02:40:00,ujjawal,36072-2,MT,57,0,3 2016-02-06,02:50:00,ujjawal,36072-2,MT,45,0,2 2016-02-06,03:00:00,ujjawal,36072-2,MT,45,0,3 2016-02-06,03:10:00,ujjawal,36072-2,MT,51,0,2 2016-02-06,03:20:00,ujjawal,36072-2,MT,68,0,3 2016-02-06,03:30:00,ujjawal,36072-2,MT,51,0,2 2016-02-06,03:40:00,ujjawal,36072-2,MT,68,0,3 2016-02-06,03:50:00,ujjawal,36072-2,MT,67,0,3 2016-02-06,04:00:00,ujjawal,36072-2,MT,82,0,8 2016-02-06,04:10:00,ujjawal,36072-2,MT,82,0,5 2016-02-06,04:20:00,ujjawal,36072-2,MT,122,0,4 2016-02-06,04:30:00,ujjawal,36072-2,MT,133,0,3 2016-02-06,04:40:00,ujjawal,36072-2,MT,142,0,3 2016-02-06,04:50:00,ujjawal,36072-2,MT,202,0,1 2016-02-06,05:00:00,ujjawal,36072-2,MT,731,1,3 2016-02-06,05:10:00,ujjawal,36072-2,MT,372,0,7 2016-02-06,05:20:00,ujjawal,36072-2,MT,303,0,2 2016-02-06,05:30:00,ujjawal,36072-2,MT,389,0,3 2016-02-06,05:40:00,ujjawal,36072-2,MT,454,0,1 2016-02-06,05:50:00,ujjawal,36072-2,MT,406,0,6 2016-02-06,06:00:00,ujjawal,36072-2,MT,377,0,1 2016-02-06,06:10:00,ujjawal,36072-2,MT,343,0,5 2016-02-06,06:20:00,ujjawal,36072-2,MT,370,0,2 2016-02-06,06:30:00,ujjawal,36072-2,MT,343,0,9 2016-02-06,06:40:00,ujjawal,36072-2,MT,315,0,8 
2016-02-06,06:50:00,ujjawal,36072-2,MT,458,0,3 2016-02-06,07:00:00,ujjawal,36072-2,MT,756,1,3 2016-02-06,07:10:00,ujjawal,36072-2,MT,913,1,3 2016-02-06,07:20:00,ujjawal,36072-2,MT,522,0,3 2016-02-06,07:30:00,ujjawal,36072-2,MT,350,0,7 2016-02-06,07:40:00,ujjawal,36072-2,MT,328,0,6 2016-02-06,07:50:00,ujjawal,36072-2,MT,775,1,3 2016-02-06,08:00:00,ujjawal,36072-2,MT,310,0,9 2016-02-06,08:10:00,ujjawal,36072-2,MT,308,0,6 2016-02-06,08:20:00,ujjawal,36072-2,MT,738,1,3 2016-02-06,08:30:00,ujjawal,36072-2,MT,294,0,6 2016-02-06,08:40:00,ujjawal,36072-2,MT,345,0,1 2016-02-06,08:50:00,ujjawal,36072-2,MT,367,0,6 2016-02-06,09:00:00,ujjawal,36072-2,MT,480,0,3 2016-02-06,09:10:00,ujjawal,36072-2,MT,390,0,3 2016-02-06,09:20:00,ujjawal,36072-2,MT,436,0,3 2016-02-06,09:30:00,ujjawal,36072-2,MT,1404,2,3 2016-02-06,09:40:00,ujjawal,36072-2,MT,346,0,3 2016-02-06,09:50:00,ujjawal,36072-2,MT,388,0,3 2016-02-06,10:00:00,ujjawal,36072-2,MT,456,0,2 2016-02-06,10:10:00,ujjawal,36072-2,MT,273,0,7 2016-02-06,10:20:00,ujjawal,36072-2,MT,310,0,3 2016-02-06,10:30:00,ujjawal,36072-2,MT,256,0,7 2016-02-06,10:40:00,ujjawal,36072-2,MT,283,0,3 2016-02-06,10:50:00,ujjawal,36072-2,MT,276,0,3 2016-02-06,11:00:00,ujjawal,36072-2,MT,305,0,1 2016-02-06,11:10:00,ujjawal,36072-2,MT,310,0,9 2016-02-06,11:20:00,ujjawal,36072-2,MT,286,0,3 2016-02-06,11:30:00,ujjawal,36072-2,MT,286,0,3 2016-02-06,11:40:00,ujjawal,36072-2,MT,247,0,7 2016-02-06,11:50:00,ujjawal,36072-2,MT,366,0,2 2016-02-06,12:00:00,ujjawal,36072-2,MT,294,0,2 2016-02-06,12:10:00,ujjawal,36072-2,MT,216,0,5 2016-02-06,12:20:00,ujjawal,36072-2,MT,233,0,1 2016-02-06,12:30:00,ujjawal,36072-2,MT,785,1,2 2016-02-06,12:40:00,ujjawal,36072-2,MT,466,0,1 2016-02-06,12:50:00,ujjawal,36072-2,MT,219,0,9 2016-02-06,13:00:00,ujjawal,36072-2,MT,248,0,6 2016-02-06,13:10:00,ujjawal,36072-2,MT,223,0,7 2016-02-06,13:20:00,ujjawal,36072-2,MT,276,0,8 2016-02-06,13:30:00,ujjawal,36072-2,MT,219,0,6 2016-02-06,13:40:00,ujjawal,36072-2,MT,699,1,2 
2016-02-06,13:50:00,ujjawal,36072-2,MT,439,0,2 2016-02-06,14:00:00,ujjawal,36072-2,MT,1752,2,3 2016-02-06,14:10:00,ujjawal,36072-2,MT,203,0,5 2016-02-06,14:20:00,ujjawal,36072-2,MT,230,0,7 2016-02-06,14:30:00,ujjawal,36072-2,MT,226,0,1 2016-02-06,14:40:00,ujjawal,36072-2,MT,195,0,6 2016-02-06,14:50:00,ujjawal,36072-2,MT,314,0,1 2016-02-06,15:00:00,ujjawal,36072-2,MT,357,0,2 2016-02-06,15:10:00,ujjawal,36072-2,MT,387,0,9 2016-02-06,15:20:00,ujjawal,36072-2,MT,1084,1,3 2016-02-06,15:30:00,ujjawal,36072-2,MT,1295,2,3 2016-02-06,15:40:00,ujjawal,36072-2,MT,223,0,8 2016-02-06,15:50:00,ujjawal,36072-2,MT,254,0,1 2016-02-06,16:00:00,ujjawal,36072-2,MT,252,0,7 2016-02-06,16:10:00,ujjawal,36072-2,MT,268,0,1 2016-02-06,16:20:00,ujjawal,36072-2,MT,242,0,1 2016-02-06,16:30:00,ujjawal,36072-2,MT,254,0,9 2016-02-06,16:40:00,ujjawal,36072-2,MT,271,0,3 2016-02-06,16:50:00,ujjawal,36072-2,MT,244,0,7 2016-02-06,17:00:00,ujjawal,36072-2,MT,281,0,1 2016-02-06,17:10:00,ujjawal,36072-2,MT,190,0,8 2016-02-06,17:20:00,ujjawal,36072-2,MT,187,0,1 2016-02-06,17:30:00,ujjawal,36072-2,MT,173,0,9 2016-02-06,17:40:00,ujjawal,36072-2,MT,140,0,5 2016-02-06,17:50:00,ujjawal,36072-2,MT,147,0,6 2016-02-06,18:00:00,ujjawal,36072-2,MT,109,0,4 2016-02-06,18:10:00,ujjawal,36072-2,MT,99,0,1 2016-02-06,18:20:00,ujjawal,36072-2,MT,66,0,6 2016-02-06,18:30:00,ujjawal,36072-2,MT,67,0,4 2016-02-06,18:40:00,ujjawal,36072-2,MT,40,0,2 2016-02-06,18:50:00,ujjawal,36072-2,MT,52,0,3 2016-02-06,19:00:00,ujjawal,36072-2,MT,40,0,3 2016-02-06,19:10:00,ujjawal,36072-2,MT,30,0,2 2016-02-06,19:20:00,ujjawal,36072-2,MT,25,0,3 2016-02-06,19:30:00,ujjawal,36072-2,MT,35,0,4 2016-02-06,19:40:00,ujjawal,36072-2,MT,14,0,1 2016-02-06,19:50:00,ujjawal,36072-2,MT,97,0,7 2016-02-06,20:00:00,ujjawal,36072-2,MT,14,0,1 2016-02-06,20:10:00,ujjawal,36072-2,MT,12,0,4 2016-02-06,20:20:00,ujjawal,36072-2,MT,11,0,2 2016-02-06,20:30:00,ujjawal,36072-2,MT,12,0,1 2016-02-06,20:40:00,ujjawal,36072-2,MT,6,0,1 
2016-02-06,20:50:00,ujjawal,36072-2,MT,13,0,2 2016-02-06,21:00:00,ujjawal,36072-2,MT,5,0,1 2016-02-06,21:10:00,ujjawal,36072-2,MT,12,0,2 2016-02-06,21:20:00,ujjawal,36072-2,MT,1,0,1 2016-02-06,21:30:00,ujjawal,36072-2,MT,21,0,2 2016-02-06,21:50:00,ujjawal,36072-2,MT,9,0,3 2016-02-06,22:00:00,ujjawal,36072-2,MT,2,0,1 2016-02-06,22:10:00,ujjawal,36072-2,MT,12,0,5 2016-02-06,22:20:00,ujjawal,36072-2,MT,1,0,1 2016-02-06,22:30:00,ujjawal,36072-2,MT,9,0,1 2016-02-06,22:40:00,ujjawal,36072-2,MT,13,0,1 2016-02-06,23:00:00,ujjawal,36072-2,MT,20,0,2 2016-02-06,23:10:00,ujjawal,36072-2,MT,10,0,3 2016-02-06,23:20:00,ujjawal,36072-2,MT,10,0,1 2016-02-06,23:30:00,ujjawal,36072-2,MT,6,0,1 2016-02-06,23:40:00,ujjawal,36072-2,MT,12,0,1 if you see above sample as per 10 minutes interval there should be total 143 rows in 24 hours in this file but after second last line which has time 2016-02-06,23:40:00 data for date, time 2016-02-06,23:50:00 is missing. similarly after 2016-02-06,22:40:00 data for date, time 2016-02-06,22:50:00 is missing. can we insert missing date,time followed by 6 null separated by commas e.g. 2016-02-06,22:50:00,null,null,null,null,null,null where ever any data missing in rows of this file based on count no 143 rows and time stamp comparison in rows 2016-02-06,00:00:00 to 2016-02-06,23:50:00 which is also 143 in count ? 
Here is what I have tried: I created a file with 143 entries of date and time as 2.csv and used the command below.
join -j 2 -o 1.1,1.2,1.3,1.4,1.5,1.6,1.7,1.8,2.1,2.1,2.2 <(sort -k2 1.csv) <(sort -k2 2.csv)|grep "2016-02-06,21:30:00"| sort -u|sed "s/\t//g"> 3.txt
Part of the output is repetitive, like this:
2016-02-06,21:30:00 2016-02-06,21:30:00 2016-02-06,00:00:00,ujjawal,36072-2,MT,37,0,1
2016-02-06,21:30:00 2016-02-06,21:30:00 2016-02-06,00:10:00,ujjawal,36072-2,MT,37,0,1
2016-02-06,21:30:00 2016-02-06,21:30:00 2016-02-06,00:20:00,ujjawal,36072-2,MT,37,0,1
2016-02-06,21:30:00 2016-02-06,21:30:00 2016-02-06,00:40:00,ujjawal,36072-2,MT,37,0,1
2016-02-06,21:30:00 2016-02-06,21:30:00 2016-02-06,00:50:00,ujjawal,36072-2,MT,42,0,2
2016-02-06,21:30:00
Any suggestions?
I'd actually not cross reference a new csv file, and instead do it like this:
#!/usr/bin/env perl
use strict;
use warnings;
use Time::Piece;

my $last_timestamp;
my $interval = 600;

# read stdin line by line
while ( <> ) {
    # extract date and time from this line
    my ( $date, $time, @fields ) = split /,/;

    # parse the timestamp
    my $timestamp = Time::Piece->strptime( $date . $time, "%Y-%m-%d%H:%M:%S" );

    # set last if undefined
    $last_timestamp //= $timestamp;

    # if there's a gap, print lines to fill it in
    while ( $last_timestamp + $interval < $timestamp ) {
        $last_timestamp += $interval;
        print join( ",", $last_timestamp->strftime("%Y-%m-%d,%H:%M:%S"), ("null") x 6 ), "\n";
    }

    $last_timestamp = $timestamp;
    print;
}
Which for your sample gives me lines (snipped for brevity):
2016-02-06,22:40:00,ujjawal,36072-2,MT,13,0,1
2016-02-06,22:50:00,null,null,null,null,null,null
2016-02-06,23:00:00,ujjawal,36072-2,MT,20,0,2
Note: this assumes the timestamps are exactly 600s apart. You can adjust the logic a little if that isn't a valid assumption, but it depends exactly what you're trying to get at that point.
Here's another Perl solution. It initialises $date to the date contained in the first line of the file, and a time of 00:00:00. It then fills the %values hash with records using the value of $date as a key, incrementing the value by ten minutes until the day of month changes; these form the "default" values. Then the contents of the file are used to overwrite all elements of %values for which we have an actual value, so any gaps remain set to their default from the previous step. Then the hash is simply printed in sorted order, resulting in a full set of data with defaults inserted as necessary.
use strict;
use warnings 'all';

use Time::Piece;
use Time::Seconds 'ONE_MINUTE';
use Fcntl ':seek';

my $delta = 10 * ONE_MINUTE;

my $date = Time::Piece->strptime(<ARGV> =~ /^([\d-]+)/, '%Y-%m-%d');

my %values;
for ( my $day = $date->mday; $date->mday == $day; $date += $delta ) {
    my $ds = $date->strftime('%Y-%m-%d,%H:%M:%S');
    $values{$ds} = $ds . ',null' x 6 . "\n";
}

seek ARGV, 0, SEEK_SET;

while ( <ARGV> ) {
    my ($ds) = /^([\d-]+,[\d:]+)/;
    $values{$ds} = $_;
}

print $values{$_} for sort keys %values;
Here is the answer:
cat 1.csv 2.csv | sort -u -t, -k2,2
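A small self-contained sketch of why this works, using tiny illustrative files rather than the real data. The `-t,` flag sets the field delimiter, `-k2,2` keys on the time field only, and `-u` keeps one line per key; with GNU sort, the first line of each run of equal keys survives, so real rows from 1.csv win over the null placeholders from 2.csv.

```shell
# Real data (with the 00:10 row missing) and a full set of null placeholders.
printf '%s\n' '2016-02-06,00:00:00,ujjawal,36072-2,MT,37,0,1' \
              '2016-02-06,00:20:00,ujjawal,36072-2,MT,37,0,1' > 1.csv
printf '%s\n' '2016-02-06,00:00:00,null,null,null,null,null,null' \
              '2016-02-06,00:10:00,null,null,null,null,null,null' \
              '2016-02-06,00:20:00,null,null,null,null,null,null' > 2.csv

# Concatenate real rows first, then dedupe on the time field.
cat 1.csv 2.csv | sort -u -t, -k2,2 > merged.csv
cat merged.csv
```

Note this relies on 2.csv containing a placeholder for every expected interval, which is exactly the 143-row file the question describes.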
...or a shell script:
#! /bin/bash
set -e

file=$1
today=$(head -1 $file | cut -d, -f1)
line=0

for (( h = 0 ; h < 24 ; h++ )); do
    for (( m = 0 ; m < 60 ; m += 10 )); do
        stamp=$(printf "%02d:%02d:00" $h $m)
        if [ $line -eq 0 ]; then IFS=',' read date time data; fi
        if [ "$time" = "$stamp" ]; then
            echo $date,$time,$data
            line=0
        else
            echo $today,$stamp,null,null,null,null,null,null
            line=1
        fi
    done
done <$file
I would write it like this in Perl. This program expects the name of the input file as a parameter on the command line, and prints its output to STDOUT, which may be redirected as normal.
use strict;
use warnings 'all';
use feature 'say';

use Time::Piece;
use Time::Seconds 'ONE_MINUTE';

my $format = '%Y-%m-%d,%H:%M:%S';
my $delta  = 10 * ONE_MINUTE;

my $next;

our @ARGV = 'mydates.txt';

while ( <> ) {
    my $new = Time::Piece->strptime(/^([\d-]+,[\d:]+)/, $format);
    while ( $next and $next < $new ) {
        say $next->strftime($format) . ',null' x 6;
        $next += $delta;
    }
    print;
    $next = $new + $delta;
}

while ( $next and $next->hms('') > 0 ) {
    say $next->strftime($format) . ',null' x 6;
    $next += $delta;
}
output
2016-02-06,00:00:00,ujjawal,36072-2,MT,37,0,1 2016-02-06,00:10:00,ujjawal,36072-2,MT,37,0,1 2016-02-06,00:20:00,ujjawal,36072-2,MT,37,0,1 2016-02-06,00:30:00,null,null,null,null,null,null 2016-02-06,00:40:00,ujjawal,36072-2,MT,37,0,1 2016-02-06,00:50:00,ujjawal,36072-2,MT,42,0,2 2016-02-06,01:00:00,ujjawal,36072-2,MT,55,0,2 2016-02-06,01:10:00,ujjawal,36072-2,MT,41,0,2 2016-02-06,01:20:00,ujjawal,36072-2,MT,46,0,2 2016-02-06,01:30:00,ujjawal,36072-2,MT,56,0,3 2016-02-06,01:40:00,ujjawal,36072-2,MT,38,0,2 2016-02-06,01:50:00,ujjawal,36072-2,MT,49,0,2 2016-02-06,02:00:00,ujjawal,36072-2,MT,58,0,4 2016-02-06,02:10:00,ujjawal,36072-2,MT,43,0,2 2016-02-06,02:20:00,ujjawal,36072-2,MT,46,0,2 2016-02-06,02:30:00,ujjawal,36072-2,MT,61,0,2 2016-02-06,02:40:00,ujjawal,36072-2,MT,57,0,3 2016-02-06,02:50:00,ujjawal,36072-2,MT,45,0,2 2016-02-06,03:00:00,ujjawal,36072-2,MT,45,0,3 2016-02-06,03:10:00,ujjawal,36072-2,MT,51,0,2 2016-02-06,03:20:00,ujjawal,36072-2,MT,68,0,3 2016-02-06,03:30:00,ujjawal,36072-2,MT,51,0,2 2016-02-06,03:40:00,ujjawal,36072-2,MT,68,0,3 2016-02-06,03:50:00,ujjawal,36072-2,MT,67,0,3 2016-02-06,04:00:00,ujjawal,36072-2,MT,82,0,8 2016-02-06,04:10:00,ujjawal,36072-2,MT,82,0,5 2016-02-06,04:20:00,ujjawal,36072-2,MT,122,0,4 2016-02-06,04:30:00,ujjawal,36072-2,MT,133,0,3 
2016-02-06,04:40:00,ujjawal,36072-2,MT,142,0,3 2016-02-06,04:50:00,ujjawal,36072-2,MT,202,0,1 2016-02-06,05:00:00,ujjawal,36072-2,MT,731,1,3 2016-02-06,05:10:00,ujjawal,36072-2,MT,372,0,7 2016-02-06,05:20:00,ujjawal,36072-2,MT,303,0,2 2016-02-06,05:30:00,ujjawal,36072-2,MT,389,0,3 2016-02-06,05:40:00,ujjawal,36072-2,MT,454,0,1 2016-02-06,05:50:00,ujjawal,36072-2,MT,406,0,6 2016-02-06,06:00:00,ujjawal,36072-2,MT,377,0,1 2016-02-06,06:10:00,ujjawal,36072-2,MT,343,0,5 2016-02-06,06:20:00,ujjawal,36072-2,MT,370,0,2 2016-02-06,06:30:00,ujjawal,36072-2,MT,343,0,9 2016-02-06,06:40:00,ujjawal,36072-2,MT,315,0,8 2016-02-06,06:50:00,ujjawal,36072-2,MT,458,0,3 2016-02-06,07:00:00,ujjawal,36072-2,MT,756,1,3 2016-02-06,07:10:00,ujjawal,36072-2,MT,913,1,3 2016-02-06,07:20:00,ujjawal,36072-2,MT,522,0,3 2016-02-06,07:30:00,ujjawal,36072-2,MT,350,0,7 2016-02-06,07:40:00,ujjawal,36072-2,MT,328,0,6 2016-02-06,07:50:00,ujjawal,36072-2,MT,775,1,3 2016-02-06,08:00:00,ujjawal,36072-2,MT,310,0,9 2016-02-06,08:10:00,ujjawal,36072-2,MT,308,0,6 2016-02-06,08:20:00,ujjawal,36072-2,MT,738,1,3 2016-02-06,08:30:00,ujjawal,36072-2,MT,294,0,6 2016-02-06,08:40:00,ujjawal,36072-2,MT,345,0,1 2016-02-06,08:50:00,ujjawal,36072-2,MT,367,0,6 2016-02-06,09:00:00,ujjawal,36072-2,MT,480,0,3 2016-02-06,09:10:00,ujjawal,36072-2,MT,390,0,3 2016-02-06,09:20:00,ujjawal,36072-2,MT,436,0,3 2016-02-06,09:30:00,ujjawal,36072-2,MT,1404,2,3 2016-02-06,09:40:00,ujjawal,36072-2,MT,346,0,3 2016-02-06,09:50:00,ujjawal,36072-2,MT,388,0,3 2016-02-06,10:00:00,ujjawal,36072-2,MT,456,0,2 2016-02-06,10:10:00,ujjawal,36072-2,MT,273,0,7 2016-02-06,10:20:00,ujjawal,36072-2,MT,310,0,3 2016-02-06,10:30:00,ujjawal,36072-2,MT,256,0,7 2016-02-06,10:40:00,ujjawal,36072-2,MT,283,0,3 2016-02-06,10:50:00,ujjawal,36072-2,MT,276,0,3 2016-02-06,11:00:00,ujjawal,36072-2,MT,305,0,1 2016-02-06,11:10:00,ujjawal,36072-2,MT,310,0,9 2016-02-06,11:20:00,ujjawal,36072-2,MT,286,0,3 2016-02-06,11:30:00,ujjawal,36072-2,MT,286,0,3 
2016-02-06,11:40:00,ujjawal,36072-2,MT,247,0,7 2016-02-06,11:50:00,ujjawal,36072-2,MT,366,0,2 2016-02-06,12:00:00,ujjawal,36072-2,MT,294,0,2 2016-02-06,12:10:00,ujjawal,36072-2,MT,216,0,5 2016-02-06,12:20:00,ujjawal,36072-2,MT,233,0,1 2016-02-06,12:30:00,ujjawal,36072-2,MT,785,1,2 2016-02-06,12:40:00,ujjawal,36072-2,MT,466,0,1 2016-02-06,12:50:00,ujjawal,36072-2,MT,219,0,9 2016-02-06,13:00:00,ujjawal,36072-2,MT,248,0,6 2016-02-06,13:10:00,ujjawal,36072-2,MT,223,0,7 2016-02-06,13:20:00,ujjawal,36072-2,MT,276,0,8 2016-02-06,13:30:00,ujjawal,36072-2,MT,219,0,6 2016-02-06,13:40:00,ujjawal,36072-2,MT,699,1,2 2016-02-06,13:50:00,ujjawal,36072-2,MT,439,0,2 2016-02-06,14:00:00,ujjawal,36072-2,MT,1752,2,3 2016-02-06,14:10:00,ujjawal,36072-2,MT,203,0,5 2016-02-06,14:20:00,ujjawal,36072-2,MT,230,0,7 2016-02-06,14:30:00,ujjawal,36072-2,MT,226,0,1 2016-02-06,14:40:00,ujjawal,36072-2,MT,195,0,6 2016-02-06,14:50:00,ujjawal,36072-2,MT,314,0,1 2016-02-06,15:00:00,ujjawal,36072-2,MT,357,0,2 2016-02-06,15:10:00,ujjawal,36072-2,MT,387,0,9 2016-02-06,15:20:00,ujjawal,36072-2,MT,1084,1,3 2016-02-06,15:30:00,ujjawal,36072-2,MT,1295,2,3 2016-02-06,15:40:00,ujjawal,36072-2,MT,223,0,8 2016-02-06,15:50:00,ujjawal,36072-2,MT,254,0,1 2016-02-06,16:00:00,ujjawal,36072-2,MT,252,0,7 2016-02-06,16:10:00,ujjawal,36072-2,MT,268,0,1 2016-02-06,16:20:00,ujjawal,36072-2,MT,242,0,1 2016-02-06,16:30:00,ujjawal,36072-2,MT,254,0,9 2016-02-06,16:40:00,ujjawal,36072-2,MT,271,0,3 2016-02-06,16:50:00,ujjawal,36072-2,MT,244,0,7 2016-02-06,17:00:00,ujjawal,36072-2,MT,281,0,1 2016-02-06,17:10:00,ujjawal,36072-2,MT,190,0,8 2016-02-06,17:20:00,ujjawal,36072-2,MT,187,0,1 2016-02-06,17:30:00,ujjawal,36072-2,MT,173,0,9 2016-02-06,17:40:00,ujjawal,36072-2,MT,140,0,5 2016-02-06,17:50:00,ujjawal,36072-2,MT,147,0,6 2016-02-06,18:00:00,ujjawal,36072-2,MT,109,0,4 2016-02-06,18:10:00,ujjawal,36072-2,MT,99,0,1 2016-02-06,18:20:00,ujjawal,36072-2,MT,66,0,6 2016-02-06,18:30:00,ujjawal,36072-2,MT,67,0,4 
2016-02-06,18:40:00,ujjawal,36072-2,MT,40,0,2 2016-02-06,18:50:00,ujjawal,36072-2,MT,52,0,3 2016-02-06,19:00:00,ujjawal,36072-2,MT,40,0,3 2016-02-06,19:10:00,ujjawal,36072-2,MT,30,0,2 2016-02-06,19:20:00,ujjawal,36072-2,MT,25,0,3 2016-02-06,19:30:00,ujjawal,36072-2,MT,35,0,4 2016-02-06,19:40:00,ujjawal,36072-2,MT,14,0,1 2016-02-06,19:50:00,ujjawal,36072-2,MT,97,0,7 2016-02-06,20:00:00,ujjawal,36072-2,MT,14,0,1 2016-02-06,20:10:00,ujjawal,36072-2,MT,12,0,4 2016-02-06,20:20:00,ujjawal,36072-2,MT,11,0,2 2016-02-06,20:30:00,ujjawal,36072-2,MT,12,0,1 2016-02-06,20:40:00,ujjawal,36072-2,MT,6,0,1 2016-02-06,20:50:00,ujjawal,36072-2,MT,13,0,2 2016-02-06,21:00:00,ujjawal,36072-2,MT,5,0,1 2016-02-06,21:10:00,ujjawal,36072-2,MT,12,0,2 2016-02-06,21:20:00,ujjawal,36072-2,MT,1,0,1 2016-02-06,21:30:00,ujjawal,36072-2,MT,21,0,2 2016-02-06,21:40:00,null,null,null,null,null,null 2016-02-06,21:50:00,ujjawal,36072-2,MT,9,0,3 2016-02-06,22:00:00,ujjawal,36072-2,MT,2,0,1 2016-02-06,22:10:00,ujjawal,36072-2,MT,12,0,5 2016-02-06,22:20:00,ujjawal,36072-2,MT,1,0,1 2016-02-06,22:30:00,ujjawal,36072-2,MT,9,0,1 2016-02-06,22:40:00,ujjawal,36072-2,MT,13,0,1 2016-02-06,22:50:00,null,null,null,null,null,null 2016-02-06,23:00:00,ujjawal,36072-2,MT,20,0,2 2016-02-06,23:10:00,ujjawal,36072-2,MT,10,0,3 2016-02-06,23:20:00,ujjawal,36072-2,MT,10,0,1 2016-02-06,23:30:00,ujjawal,36072-2,MT,6,0,1 2016-02-06,23:40:00,ujjawal,36072-2,MT,12,0,1 2016-02-06,23:50:00,null,null,null,null,null,null
New to Perl - Parsing file and replacing pattern with dynamic values
I am very new to Perl and I am currently trying to convert a bash script to Perl. My script is used to convert nmon files (AIX / Linux perf monitoring tool): it takes the nmon files present in a directory, greps and redirects the specific section to a temp file, and greps and redirects the associated timestamps to another file. Then it parses the data into a final csv file that will be indexed by a third tool to be exploited. Sample NMON data looks like:
TOP,%CPU Utilisation
TOP,+PID,Time,%CPU,%Usr,%Sys,Threads,Size,ResText,ResData,CharIO,%RAM,Paging,Command,WLMclass
TOP,5165226,T0002,10.93,9.98,0.95,1,54852,4232,51220,311014,0.755,1264,PatrolAgent,Unclassified
TOP,5365876,T0002,1.48,0.81,0.67,135,85032,132,84928,38165,1.159,0,db2sysc,Unclassified
TOP,5460056,T0002,0.32,0.27,0.05,1,5060,616,4704,1719,0.072,0,db2kmchan64.v9,Unclassified
The field "Time" (seen as T0002, and really called ZZZZ in NMON) is a specific NMON timestamp; the real value of this timestamp appears later (in a dedicated section) of the NMON file and looks like:
ZZZZ,T0001,00:09:55,01-JAN-2014
ZZZZ,T0002,00:13:55,01-JAN-2014
ZZZZ,T0003,00:17:55,01-JAN-2014
ZZZZ,T0004,00:21:55,01-JAN-2014
ZZZZ,T0005,00:25:55,01-JAN-2014
The NMON format is very specific and can't be exploited directly without being parsed: each timestamp has to be associated with the corresponding value. (An NMON file is almost like a concatenation of numerous different csv files, each with a different format, different fields and so on.)
I wrote the following bash script to parse the section I'm interested in (the "TOP" section, which represents top-process cpu, mem and io stats per host):
#!/bin/bash
# set -x

################################################################
# INFORMATION
################################################################
# nmon2csv_TOP.sh
# Convert TOP section of nmon files to csv
# CAUTION: This script is expected to be launched by the main workflow
# $DST and DST_CONVERTED_TOP are being exported by it, if not this script will exit at launch time

################################################################
# VARS
################################################################
# Location of NMON files
NMON_DIR=${DST}
# Location of generated files
OUTPUT_DIR=${DST_CONVERTED_TOP}
# Temp files
rawdatafile=/tmp/temp_rawdata.$$.temp
timestampfile=/tmp/temp_timestamp.$$.temp
# Main Output file
finalfile=${DST_CONVERTED_TOP}/NMON_TOP_processed_at_date_`date '+%F'`.csv

###########################
# BEGIN OF WORK
###########################

# Verify exported vars are not null
if [ -z ${NMON_DIR} ]; then
    echo -e "\nERROR: Var NMON_DIR is null!\n" && exit 1
elif [ -z ${OUTPUT_DIR} ]; then
    echo -e "\nERROR: Var OUTPUT_DIR is null!\n" && exit 1
fi

# Check if temp and output files already exist
if [ -s ${rawdatafile} ]; then
    rm -f ${rawdatafile}
elif [ -s ${timestampfile} ]; then
    rm -f ${timestampfile}
elif [ -s ${finalfile} ]; then
    rm -f ${finalfile}
fi

# Get current location
PWD=`pwd`
# Go to NMON files location
cd ${NMON_DIR}

# For each NMON file present:
# To restrict to only PROD env: `ls *.nmon | grep -E -i 'sp|gp|ge'`
for NMON_FILE in `ls *.nmon | grep -E -i 'sp|gp|ge'`; do

    # Set Hostname identification
    serialnum=`grep 'AAA,SerialNumber,' ${NMON_FILE} | awk -F, '{print $3}' OFS=, | tr [:lower:] [:upper:]`
    hostname=`grep 'AAA,host,' ${NMON_FILE} | awk -F, '{print $3}' OFS=, | tr [:lower:] [:upper:]`

    # Grep and redirect TOP Section
    grep 'TOP' ${NMON_FILE} | grep -v 'AAA,version,TOPAS-NMON' | grep -v 'TOP,%CPU Utilisation' > ${rawdatafile}

    # Grep and redirect associated timestamps (ZZZZ)
    grep 'ZZZZ' ${NMON_FILE} > ${timestampfile}

    # Begin of work
    while IFS=, read TOP PID Time Pct_CPU Pct_Usr Pct_Sys Threads Size ResText ResData CharIO Pct_RAM Paging Command WLMclass
    do
        timestamp=`grep ${Time} ${timestampfile} | awk -F, '{print $4 " "$3}' OFS=,`
        echo ${serialnum},${hostname},${timestamp},${Time},${PID},${Pct_CPU},${Pct_Usr},${Pct_Sys},${Threads},${Size},${ResText},${ResData},${CharIO},${Pct_RAM},${Paging},${Command},${WLMclass} \
        | grep -v '+PID,%CPU,%Usr,%Sys,Threads,Size,ResText,ResData,CharIO,%RAM,Paging,Command,WLMclass' >> ${finalfile}
    done < ${rawdatafile}

    echo -e "INFO: Done for Serialnum: ${serialnum} Hostname: ${hostname}"
done

# Go back to initial location
cd ${PWD}

###########################
# END OF WORK
###########################
This works as wanted and generates a main csv file (you'll see in the code that I voluntarily don't keep the csv header in the file), which is a concatenation of all parsed hosts.
But I have a very large number of hosts to treat each day (around 3000), and with the current code, in the worst cases, it can take a few minutes to generate the data for one host; multiplied by the number of hosts, minutes easily become hours. So this code is really not performant enough to deal with such an amount of data: 10 hosts represent around 200,000 lines, which is finally around 20 MB of csv file. That's not that much, but I think a shell script is probably not the best choice to manage such a process.
I guess that Perl should be much better at this task (even if the shell script could probably be improved), but my knowledge of Perl is (currently) very poor; this is why I ask for your help. I think this code should be quite simple to do in Perl, but I can't get it to work so far.
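The main cost in the bash version is the per-row `grep ${Time} ${timestampfile}`, which rescans the timestamp file for every TOP line. In Perl the ZZZZ section can be read once into a hash, making each lookup constant time. Here is a minimal sketch of that idea with a couple of sample lines inlined (a real converter would of course read them from the .nmon file; the variable names are my own, not from any existing script):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sample NMON lines standing in for the contents of a .nmon file.
my @lines = (
    "ZZZZ,T0001,00:09:55,01-JAN-2014",
    "ZZZZ,T0002,00:13:55,01-JAN-2014",
    "TOP,5165226,T0002,10.93,9.98,0.95,1,54852,4232,51220,311014,0.755,1264,PatrolAgent,Unclassified",
);

# Pass 1: read the ZZZZ section once into a hash, Tnnnn => "date time".
my %timestamp;
for (grep { /^ZZZZ,/ } @lines) {
    my (undef, $code, $time, $date) = split /,/;
    $timestamp{$code} = "$date $time";
}

# Pass 2: resolve each TOP data row's Tnnnn code in constant time.
# /^TOP,\d/ skips the "TOP,%CPU Utilisation" and "TOP,+PID,..." headers.
my @out;
for (grep { /^TOP,\d/ } @lines) {
    my ($top, $pid, $code, @fields) = split /,/;
    my $row = join(",", $timestamp{$code}, $code, $pid, @fields);
    push @out, $row;
    print "$row\n";
}
```

The same two-pass structure scales to the full file: one read to build %timestamp, one read to emit the csv rows, instead of one grep per row.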
One guy used to develop a Perl script to manage NMON files and convert them to SQL files (to load the data into a database). I adapted it to use its features, and with the help of some shell scripts I process the SQL files to get my final CSV files. But the TOP section was never integrated into that Perl script, so it can't be used for this without being redeveloped. The code in question:

#!/usr/bin/perl
# Program name: nmon2mysql.pl
# Purpose - convert nmon.csv file(s) into mysql insert file
# Author - Bruce Spencer
# Disclaimer: this provided "as is".
# Date - March 2007
#
$nmon2mysql_ver="1.0. March 2007";

use Time::Local;

#################################################
##      Your Customizations Go Here            ##
#################################################

# Source directory for nmon csv files
my $NMON_DIR=$ENV{DST_TMP};
my $OUTPUT_DIR=$ENV{DST_CONVERTED_CPU_ALL};

# End "Your Customizations Go Here".
# You're on your own, if you change anything beyond this line :-)

####################################################################
#############           Main Program                    ############
####################################################################

# Initialize common variables
&initialize;

# Process all "nmon" files located in the $NMON_DIR
# @nmon_files=`ls $NMON_DIR/*.nmon $NMON_DIR/*.csv`;
@nmon_files=`ls $NMON_DIR/*.nmon`;

if (@nmon_files eq 0 ) { die ("No \*.nmon or csv files found in $NMON_DIR\n"); }

@nmon_files=sort(@nmon_files);
chomp(@nmon_files);

foreach $FILENAME ( @nmon_files ) {
  @cols= split(/\//,$FILENAME);
  $BASEFILENAME= $cols[@cols-1];

  unless (open(INSERT, ">$OUTPUT_DIR/$BASEFILENAME.sql")) {
    die("Can not open /$OUTPUT_DIR/$BASEFILENAME.sql\n");
  }

  print INSERT ("# nmon version: $NMONVER\n");
  print INSERT ("# AIX version: $AIXVER\n");
  print INSERT ("use nmon;\n");

  $start=time();
  @now=localtime($start);
  $now=join(":",@now[2,1,0]);
  print ("$now: Begin processing file = $FILENAME\n");

  # Parse nmon file, skip if unsuccessful
  if (( &get_nmon_data ) gt 0 ) { next; }

  $now=time();
  $now=$now-$start;
  print ("\t$now: Finished get_nmon_data\n");

  # Static variables (number of fields always the same)
  #@static_vars=("LPAR","CPU_ALL","FILE","MEM","PAGE","MEMNEW","MEMUSE","PROC");
  #@static_vars=("LPAR","CPU_ALL","FILE","MEM","PAGE","MEMNEW","MEMUSE");
  @static_vars=("CPU_ALL");

  foreach $key (@static_vars) {
    &mk_mysql_insert_static($key);;
    $now=time();
    $now=$now-$start;
    print ("\t$now: Finished $key\n");
  } # end foreach

  # Dynamic variables (variable number of fields)
  #@dynamic_vars=("DISKBSIZE","DISKBUSY","DISKREAD","DISKWRITE","DISKXFER","ESSREAD","ESSWRITE","ESSXFER","IOADAPT","NETERROR","NET","NETPACKET");
  @dynamic_vars=("");

  foreach $key (@dynamic_vars) {
    &mk_mysql_insert_variable($key);;
    $now=time();
    $now=$now-$start;
    print ("\t$now: Finished $key\n");
  }

  close(INSERT);
  # system("gzip","$FILENAME");
}

exit(0);

############################################
#############   Subroutines     ############
############################################

##################################################################
## Extract CPU_ALL data for Static fields
##################################################################
sub mk_mysql_insert_static {
  my($nmon_var)=@_;
  my $table=lc($nmon_var);
  my @rawdata;
  my $x;
  my @cols;
  my $comma;
  my $TS;
  my $n;

  @rawdata=grep(/^$nmon_var,/, @nmon);
  if (@rawdata < 1) { return(1); }
  @rawdata=sort(@rawdata);

  @cols=split(/,/,$rawdata[0]);
  $x=join(",",@cols[2..@cols-1]);
  $x=~ s/\%/_PCT/g;
  $x=~ s/\(MB\)/_MB/g;
  $x=~ s/-/_/g;
  $x=~ s/ /_/g;
  $x=~ s/__/_/g;
  $x=~ s/,_/,/g;
  $x=~ s/_,/,/g;
  $x=~ s/^_//;
  $x=~ s/_$//;

  print INSERT (qq|insert into $table (serialnum,hostname,mode,nmonver,time,ZZZZ,$x) values\n| );

  $comma="";
  $n=@cols;
  $n=$n-1; # number of columns -1

  for($i=1;$i<@rawdata;$i++){
    $TS=$UTC_START + $INTERVAL*($i);
    @cols=split(/,/,$rawdata[$i]);
    $x=join(",",@cols[2..$n]);
    $x=~ s/,,/,-1,/g; # replace missing data ",," with a ",-1,"
    print INSERT (qq|$comma("$SN","$HOSTNAME","$MODE","$NMONVER",$TS,"$DATETIME{@cols[1]}",$x)| );
    $comma=",\n";
  }

  print INSERT (qq|;\n\n|);
} # end mk_mysql_insert

##################################################################
## Extract CPU_ALL data for variable fields
##################################################################
sub mk_mysql_insert_variable {
  my($nmon_var)=@_;
  my $table=lc($nmon_var);
  my @rawdata;
  my $x;
  my $j;
  my @cols;
  my $comma;
  my $TS;
  my $n;
  my @devices;

  @rawdata=grep(/^$nmon_var,/, @nmon);
  if ( @rawdata < 1) { return; }
  @rawdata=sort(@rawdata);

  $rawdata[0]=~ s/\%/_PCT/g;
  $rawdata[0]=~ s/\(/_/g;
  $rawdata[0]=~ s/\)/_/g;
  $rawdata[0]=~ s/ /_/g;
  $rawdata[0]=~ s/__/_/g;
  $rawdata[0]=~ s/,_/,/g;
  @devices=split(/,/,$rawdata[0]);

  print INSERT (qq|insert into $table (serialnum,hostname,time,ZZZZ,device,value) values\n| );

  $n=@rawdata;
  $n--;
  for($i=1;$i<@rawdata;$i++){
    $TS=$UTC_START + $INTERVAL*($i);
    $rawdata[$i]=~ s/,$//;
    @cols=split(/,/,$rawdata[$i]);
    print INSERT (qq|\n("$SN","$HOSTNAME",$TS,"$DATETIME{$cols[1]}","$devices[2]",$cols[2])| );
    for($j=3;$j<@cols;$j++){
      print INSERT (qq|,\n("$SN","$HOSTNAME",$TS,"$DATETIME{$cols[1]}","$devices[$j]",$cols[$j])| );
    }
    if ($i < $n) { print INSERT (","); }
  }
  print INSERT (qq|;\n\n|);
} # end mk_mysql_insert_variable

########################################################
### Get an nmon setting from csv file               ###
### finds first occurrence of $search               ###
### Return the selected column...$return_col        ###
### Syntax:                                         ###
###     get_setting($search,$col_to_return,$separator)##
########################################################
sub get_setting {
  my $i;
  my $value="-1";
  my ($search,$col,$separator)= @_; # search text, $col, $separator

  for ($i=0; $i<@nmon; $i++){
    if ($nmon[$i] =~ /$search/ ) {
      $value=(split(/$separator/,$nmon[$i]))[$col];
      $value =~ s/["']*//g; # remove non alphanum characters
      return($value);
    } # end if
  } # end for
  return($value);
} # end get_setting

#####################
##  Clean up       ##
#####################
sub clean_up_line {
  # remove characters not compatible with nmon variable
  # Max rrdtool variable length is 19 chars
  # Variable can not contain special characters (% - () )
  my ($x)=@_;
  # print ("clean_up, before: $i\t$nmon[$i]\n");
  $x =~ s/\%/Pct/g;
  # $x =~ s/\W*//g;
  $x =~ s/\/s/ps/g; # /s  - ps
  $x =~ s/\//s/g;   # /   - s
  $x =~ s/\(/_/g;
  $x =~ s/\)/_/g;
  $x =~ s/ /_/g;
  $x =~ s/-/_/g;
  $x =~ s/_KBps//g;
  $x =~ s/_tps//g;
  $x =~ s/[:,]*\s*$//;
  $retval=$x;
} # end clean up

##########################################
##  Extract headings from nmon csv file ##
##########################################
sub initialize {
  %MONTH2NUMBER = ("jan", 1, "feb",2, "mar",3, "apr",4, "may",5, "jun",6, "jul",7, "aug",8, "sep",9, "oct",10, "nov",11, "dec",12 );
  @MONTH2ALPHA = ( "junk","jan", "feb", "mar", "apr", "may", "jun", "jul", "aug", "sep", "oct", "nov", "dec" );
} # end initialize

# Get data from nmon file, extract specific data fields (hostname, date, ...)
sub get_nmon_data {
  my $key;
  my $x;
  my $category;
  my %toc;
  my @cols;

  # Read nmon file
  unless (open(FILE, $FILENAME)) { return(1); }
  @nmon=<FILE>; # input entire file
  close(FILE);
  chomp(@nmon);

  # Cleanup nmon data: remove trailing commas and colons
  for($i=0; $i<@nmon;$i++ ) {
    $nmon[$i] =~ s/[:,]*\s*$//;
  }

  # Get nmon/server settings (search string, return column, delimiter)
  $AIXVER   =&get_setting("AIX",2,",");
  $DATE     =&get_setting("date",2,",");
  $HOSTNAME =&get_setting("host",2,",");
  $INTERVAL =&get_setting("interval",2,","); # nmon sampling interval
  $MEMORY   =&get_setting(qq|lsconf,"Good Memory Size:|,1,":");
  $MODEL    =&get_setting("modelname",3,'\s+');
  $NMONVER  =&get_setting("version",2,",");
  $SNAPSHOTS=&get_setting("snapshots",2,","); # number of readings
  $STARTTIME=&get_setting("AAA,time",2,",");
  ($HR, $MIN)=split(/\:/,$STARTTIME);

  if ($AIXVER eq "-1") {
    $SN=$HOSTNAME; # Probably a Linux host
  } else {
    $SN =&get_setting("systemid",4,",");
    $SN =(split(/\s+/,$SN))[0]; # "systemid IBM,SN ..."
  }

  $TYPE =&get_setting("^BBBP.*Type",3,",");
  if ( $TYPE =~ /Shared/ ) { $TYPE="SPLPAR"; } else { $TYPE="Dedicated"; }

  $MODE =&get_setting("^BBBP.*Mode",3,",");
  $MODE =(split(/: /, $MODE))[1];
  # $MODE =~s/\"//g;

  # Calculate UTC time (seconds since 1970)
  # NMON V9  dd/mm/yy
  # NMON V10+ dd-MMM-yyyy
  if ( $DATE =~ /[a-zA-Z]/ ) { # Alpha = assume dd-MMM-yyyy date format
    ($DAY, $MMM, $YR)=split(/\-/,$DATE);
    $MMM=lc($MMM);
    $MON=$MONTH2NUMBER{$MMM};
  } else {
    ($DAY, $MON, $YR)=split(/\//,$DATE);
    $YR=$YR + 2000;
    $MMM=$MONTH2ALPHA[$MON];
  } # end if

  ## Calculate UTC time (seconds since 1970). Required format for the rrdtool.
  ## timelocal format
  ## day=1-31
  ## month=0-11
  ## year = x -1900 (time since 1900) (seems to work with either 2006 or 106)
  $m=$MON - 1; # jan=0, feb=2, ...
  $UTC_START=timelocal(0,$MIN,$HR,$DAY,$m,$YR);
  $UTC_END=$UTC_START + $INTERVAL * $SNAPSHOTS;

  @ZZZZ=grep(/^ZZZZ,/,@nmon);
  for ($i=0;$i<@ZZZZ;$i++){
    @cols=split(/,/,$ZZZZ[$i]);
    ($DAY,$MON,$YR)=split(/-/,$cols[3]);
    $MON=lc($MON);
    $MON="00" . $MONTH2NUMBER{$MON};
    $MON=substr($MON,-2,2);
    $ZZZZ[$i]="$YR-$MON-$DAY $cols[2]";
    $DATETIME{$cols[1]}="$YR-$MON-$DAY $cols[2]";
  } # end ZZZZ

  return(0);
} # end get_nmon_data

It almost does the job (I say almost because with recent NMON versions it can sometimes have issues when no data is present), and it does it much, much faster than my shell script would if I used it for this section.

This is why I think Perl should be a perfect solution. Of course, I'm not asking anyone to convert my shell script into something final in Perl, but at least to point me in the right direction :-)

I really thank anyone in advance for your help!
Normally I am strongly opposed to questions like this, but our production systems are down and until they are fixed I do not really have all that much to do... Here is some code that might get you started. Please consider it pseudo code, as it is completely untested and probably won't even compile (I always forget some parentheses or semicolons, and as I said, the actual machines that can run code are unreachable), but I commented a lot and hopefully you will be able to modify it to your actual needs and get it to run.

use strict;
use warnings;

open INFILE, "<", "path/to/file.nmon"; # Open the file.

my @topLines;                          # Initialize variables.
my %timestamps;

while (<INFILE>)                       # This will walk over all the lines of the infile.
{                                      # Storing the current line in $_.
    chomp $_;                          # Remove newline at the end.
    if ($_ =~ m/^TOP/)                 # If the line starts with TOP...
    {
        push @topLines, $_;            # ...store it in the array for later use.
    }
    elsif ($_ =~ m/^ZZZZ/)             # If it is in the ZZZZ section...
    {
        my @fields = split ',', $_;    # ...split the line at commas...
        my $timestamp = join ",", $fields[2], $fields[3];
                                       # ...join the timestamp into a string as you wish...
        $timestamps{$fields[1]} = $timestamp;
                                       # ...and store it in the hash with the Twhatever thing as key.
    }
    # This iteration could certainly be improved with more knowledge
    # of how the file looks. For example the search could be cancelled
    # after the ZZZZ section if the file is still long.
}
close INFILE;

open OUTFILE, ">", "path/to/output.csv"; # Open the file you want your output in.
foreach (@topLines)                    # Iterate through all elements of the array.
{                                      # Once again storing the current value in $_.
    my @fields = split ',', $_;        # Probably not necessary, depending on how output should be formatted.
    my $outstring = join ',', $fields[0], $fields[1], $timestamps{$fields[2]};
                                       # And whatever other fields you care for.
    print OUTFILE $outstring, "\n";    # Print.
}
close OUTFILE;

print "Done.\n";
Undefined subroutine &package::subroutine called at line <of script>
I am debugging this script at work - the boss says that it used to work on Solaris, but since they switched over to Linux, it stopped working. I had to rewrite it with strict and warnings. When I run it I get the error:

Undefined subroutine &Logging::openLog called at /path/to/script line 27

Here is the script (well, part of it):

 1 #!/usr/local/bin/perl
 2
 3 unshift @INC, "/production/fo/lib";
 4 use strict;
 5 use warnings;
 6 use Sys::Hostname;
 7 use Getopt::Long qw(:config bundling auto_version);
 8 use File::Path;
 9
10 require "dbconfig2.pl";
11 require "logging2.pl";
12 require "hpov.pl";
13
14 # global variables
15 my $parseDate = "";
16 my @fileList = "";
17 my @transList = "";
18 my $mLogDate = "";
19 my $lHost = hostname;
20 my $corefiles_dir="/production/log/corefiles";
21 my $default_Threshold=90;
22
23 # do stuff
24
25 parseOptions();
26 Dbconfig::readconfigFile("$config");
27 Logging::openLog("$Dbconfig::prefs{logFile}","overwrite");
28 # msglog actions TODO logs, compress only, data files
29 my $check_shdw=`ls -l /etc/motd | awk '{print \$11}' | grep 'motd.shdw'`; #Check if hostname is shadow
30 $check_shdw =~ y/\n//d; #remove new line if any
31 if ( $check_shdw eq "motd.shdw" )
32 {
33     Logging::printLog("INFO","Enviroment is Shadow, triggering core files compressing");
34     if (is_folder_empty($corefiles_dir)) {
35         print "Corefile Directory is EMPTY......! \n";
36     }
37     else {
38         gzip_corefiles() ; #Execute compress core files
39     }
40 }
41

The script uses require statements to, I guess, call upon the routines that the script's creator built. For the purpose of this script, dbconfig2.pl just slurps in a config file and breaks it down into values, so "$Dbconfig::prefs{logFile}" equals a logfile location like /prod/logs/script.log - that's it.

#!/usr/local/bin/perl

package Dbconfig;

#use warnings;
use DBI;
use DBD::Oracle;

%prefs = "";
#$dbPrefs = "";
$raiseError = 0;
%startupItem = "";

# readconfigFile(file) - read in a configuration file.
sub readconfigFile {
    my $file = shift;

    if ( ! -e $file ) {
        $errorTxt = "Error: $file does not exist.\n";
        $raiseError = 1;
    }

    # read in the cfg variables
    open(CFGFILE,"<","$file") or die "Cannot open $file for reading: $!\n";
    while(<CFGFILE>) {
        chomp;     # kill newlines
        s/#.*//;   # ignore comments
        s/^\s+//;  # ignore leading whitespace
        s/\s+$//;  # ignore trailing whitespace
        next unless length;
        my($var,$value) = split(/\s*=\s*/, $_, 2);
        $prefs{$var} = $value;
    }
    close(CFGFILE);
}

Then there is this logging package. On line 27 of the script (where the error comes in) I see an "overwrite" argument being passed, but I don't see anything referencing "overwrite" in the logging2.pl package - though I'm not really sure if that matters. The parent script does not seem to write to any log file. I am not even sure the filehandle LOGFILE is getting created.

#!/usr/local/bin/perl

 package Logging;

use File::Copy;
use warnings;
use strict;

my $timestamp = "";
my $filestamp = "";

# openLog(logfile name) - opens a log file
sub openLog {
    my $file = shift;
    my $rotate = shift;

    # force a rotation if it exists.
    if ( -e $file && $rotate eq "rotate" ) {
        print "Warning: $file exists. Rotating.\n";
        rotateLog($file);
    }

    getTime();
    open(LOGFILE,">","$file") or warn "Error: Cannot open $file for writing: $!\n";
    print LOGFILE "[$timestamp] - Normal - Opening log for $file.\n";
}

# rotateLog(log file) - rotate a log.
sub rotateLog {
    my $file = shift;

    getTime();
    openLog("$file");
    print LOGFILE "[$timestamp] - Warning - Rotating $file to $file.$filestamp.log";
    closeLog($file);
    move($file,$file-"$filestamp.log");
    openLog($file);
}

time() - grab timestamp for the log.
sub getTime {
    undef $timestamp;
    undef $filestamp;
    ($sec,$min,$hour,$mday,$mon,$year) = (localtime(time))[0,1,2,3,4,5];
    $sec = sprintf("%02d",$sec);
    $min = sprintf("%02d",$min);
    $hour = sprintf("%02d",$hour);
    $mday = sprintf("%02d",$mday);
    $year = sprintf("%04d",$year +1900);
    $mon = sprintf("%02d",$mon +1);
    $timestamp = "$mon-$mday-$year $hour:$min:$sec";
    $filestamp = "$year$mon$mday$hour$min$sec";
}

Just wondering - is there a problem with logging2.pl calling something from dbconfig2.pl on line 27? Can one module call a value from another module? Besides using strict and warnings, and a lot of print statements, I do not know what my next debugging step is. I have no idea how to check that the LOGFILE filehandle is getting created - since it does not error out, I can only suppose that it is. Is there something extra I have to do to get the modules talking to each other? I am not a scripting king - just the only guy in my row who can even begin to understand this stuff.
Not sure if this will affect things, but...

1) Packages need to return true; the normal procedure is to end the file with the line:

1;

to ensure that.

2) There's a comment in the logger package without the leading #, which would cause a compilation failure:

time() - grab timestamp for the log.

3) This line:

unshift @INC, "/production/fo/lib";

is adding the directory to the search path for modules. Make sure your logging2.pl file is actually in that location (it probably is, otherwise you would get different errors, but it's worth a double check).
That all looks OK then. For some reason, although require "logging2.pl" works (there'd be an error if not), the subroutines in it aren't loaded and available, unlike the load of dbconfig2.pl, which works fine (otherwise the call to Dbconfig::readconfigFile() would fail first). The only difference I can see is the leading space on the package command in logging2.pl; I don't think that would matter, though. You could try calling openLog without the package prefix (Logging::) to see if it's been loaded into main for some reason, and print the contents of %INC after the require statements to make sure it's been loaded correctly.