I am trying to execute a user-created Perl script whose usage is:
usage: my.pl <type> <stats> [-map <map>] <session1> [session2]
Produces statistics about the session from a Wireshark .pcap file where:
<type> is the type of data in the pcap file (wlan, ethernet or ip)
<stats> is the output file to write notable information
<session> is the pcap input file or a folder containing pcaps (recursive)
but it fails with the error below.
$ perl my.pl ethernet pa.xls google.pcap
Processing pcap trace (TCP):
Not a HASH reference at folder/httpTrace.pm line 654.
Here is the debug console -
Not a HASH reference at folder/httpTrace.pm line 654. at folder/httpTrace.pm line 654
folder::httpTrace.pm::readHttp('HASH(0x2306d38)', undef) called at my.pl line 56
main::__processSession('google.pcap') called at my.pl line 35
Debugged program terminated.
httpTrace.pm (line 654 is its last line):
sub readHttp($@)
{
my ($conntable, $map) = @_;
my $http_req_id = 0;
my $pipelining = 0;
my $mapc;
my @allReq;
my @allRep;
if( defined( $map ) ) {
$mapc = $map->clone();
}
foreach my $connect ( sort { $pa->{'id'} <=> $pb->{'id'} } values( %{ $conntable } ) ) {
line 56 in my.pl:
my $stats = Pcapstats::HTTP::readHttp( \%tcp_stream, $map );
There is also a map.pm; the -map option is handled in my.pl as:
my $map;
if( $ARGV[0] eq '-map' ) {
shift( @ARGV );
$map = Pcapstats::Map->new( shift( @ARGV ) );
}
In your file httpTrace.pm you have this line
foreach my $connect ( sort { $pa->{'id'} <=> $pb->{'id'} } values( %{ $conntable } ) ) {
and I wonder what $pa and $pb are? Unless you have set them to something elsewhere they will be undefined and will give you Use of uninitialized value errors. Either way you will not be doing a sort.
But that doesn't explain the Not a HASH reference error, which is most likely because $conntable isn't what you think it is. It won't cause an error if it is undefined, because autovivification will automatically create an empty hash, so it is probably a plain string or number, or possibly an array reference.
If you want to show the code that sets $conntable then we may be able to help further.
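As a quick illustration of that diagnosis (the values here are hypothetical, not taken from the original script), ref() reports what kind of reference a scalar holds, and only a scalar whose ref() is 'HASH' can be dereferenced with %{ }:

```perl
use strict;
use warnings;

# Hypothetical candidates: a hash ref, an array ref, and a plain string.
for my $candidate ( { id => 1 }, [ 1, 2 ], 'just a string' ) {
    if ( ref($candidate) eq 'HASH' ) {
        print "hash reference with ", scalar( keys %{$candidate} ), " key(s)\n";
    }
    else {
        # %{$candidate} would die here with "Not a HASH reference"
        # (or a strict-refs error for the plain string).
        print "not a hash reference: ", ref($candidate) || 'plain scalar', "\n";
    }
}
```

Adding a guard like this at the top of readHttp would turn the fatal error into a clearer diagnostic about what $conntable actually is.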
The problem is to read a file with a value on every line. The content of the file looks like this:
3ssdwyeim3,3ssdwyeic9,2017-03-16,09:10:35.372,0.476,EndInbound
3ssdwyeim3,3ssdwyfyyn,2017-03-16,09:10:35.369,0.421,EndOutbound
3ssdwyfxc0,3ssdwyfxfi,2017-03-16,09:10:35.456,0.509,EndInbound
3ssdwyfxc0,3ssdwyhg0v,2017-03-16,09:10:35.453,0.436,EndOutbound
The string before the first comma is the key, and the string between the last and second-last commas is the value;
i.e. for the first line, 3ssdwyeim3 becomes the key and 0.476 the value.
Now, as we loop over each line, if the key already exists we have to concatenate the values, separated by commas.
Hence for the next line, since the key already exists, the key remains 3ssdwyeim3 but the value is updated to 0.476,0.421.
Finally we have to print the keys and values to a file.
I have written code to achieve this, which is as follows.
sub findbreakdown {
my ( $out ) = @_;
my %timeLogger;
open READ, "out.txt" or die "Cannot open out.txt for read :$!";
open OUTBD, ">$out\_breakdown.csv" or die "Cannot open $out\_breakdown.csv for write :$!";
while ( <READ> ) {
if ( /(.*),.*,.*,.*,(.*),.*/ ) {
$btxnId = $1;
$time = $2;
if ( !$timeLogger{$btxnId} ) {
$timeLogger{$btxnId} = $time;
}
else {
$previousValue = $timeLogger{$btxnId};
$newValue = join ",", $previousValue, $time;
$timeLogger{$btxnId} = $newValue;
}
}
foreach ( sort keys %timeLogger ) {
print OUTBD "$_ ,$timeLogger{$_}\n";
}
}
close OUTBD;
close READ;
}
However, something is going wrong and it's printing this:
3ssdwyeim3,0.476
3ssdwyeim3,0.476,0.421
3ssdwyeim3,0.476,0.421
3ssdwyfxc0,0.509
3ssdwyeim3,0.476,0.421
3ssdwyfxc0,0.509,0.436
3ssdwyeim3,0.476,0.421
3ssdwyfxc0,0.509,0.436
Whereas expected is:
3ssdwyeim3,0.476,0.421
3ssdwyfxc0,0.509,0.436
Your program is behaving correctly, but you are printing the current state of the entire hash after you process each line.
Therefore you are printing hash keys before they have the complete set of values, and you have many duplicated lines.
If you move the foreach loop that prints to the end of your program (or simply use the debugger to inspect the variables) you will find that the final state of the hash is exactly what you expect.
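A minimal sketch of that fix (using in-memory sample lines instead of out.txt, so the file handling is elided): fill the hash completely first, then print exactly once after the read loop has finished:

```perl
use strict;
use warnings;

my %timeLogger;

# Sample records standing in for out.txt.
my @lines = (
    "3ssdwyeim3,3ssdwyeic9,2017-03-16,09:10:35.372,0.476,EndInbound",
    "3ssdwyeim3,3ssdwyfyyn,2017-03-16,09:10:35.369,0.421,EndOutbound",
);

for my $line (@lines) {
    if ( $line =~ /^([^,]*),[^,]*,[^,]*,[^,]*,([^,]*),[^,]*$/ ) {
        my ( $btxnId, $time ) = ( $1, $2 );
        # Append to any existing value; no printing inside this loop.
        $timeLogger{$btxnId} =
            defined $timeLogger{$btxnId}
            ? "$timeLogger{$btxnId},$time"
            : $time;
    }
}

# Print once, after all lines have been processed.
for my $key ( sort keys %timeLogger ) {
    print "$key,$timeLogger{$key}\n";    # prints 3ssdwyeim3,0.476,0.421
}
```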
Edit: I previously thought the problem was the issue below, but that was because I misread the sample data in your question.
This regular expression is not ideal:
if (/(.*),.*,.*,.*,(.*),.*/) {
The .* is greedy and will match as much as possible (including some content with commas). So if any line contains more than six comma-separated items, more than one item will be included in the first matching group. This may not be a problem in your actual data, but it's not an ideal way to write the code. The expression is more ambiguous than necessary.
It would be better written like this:
if (/^([^,]*),[^,]*,[^,]*,[^,]*,([^,]*),[^,]*$/) {
which matches only lines with exactly six comma-separated fields.
Or consider using split on the input line, which would be a cleaner solution.
This is much simpler than you have made it. You can just split each line into fields and use push to add the value to the list corresponding to the key.
I trust you can modify this to read from an external file instead of the DATA file handle?
use strict;
use warnings 'all';
my %data;
while ( <DATA> ) {
my @fields = split /,/;
push @{ $data{$fields[0]} }, $fields[-2];
}
for my $key ( sort keys %data ) {
print join(',', $key, @{ $data{$key} }), "\n";
}
__DATA__
3ssdwyeim3,3ssdwyeic9,2017-03-16,09:10:35.372,0.476,EndInbound
3ssdwyeim3,3ssdwyfyyn,2017-03-16,09:10:35.369,0.421,EndOutbound
3ssdwyfxc0,3ssdwyfxfi,2017-03-16,09:10:35.456,0.509,EndInbound
3ssdwyfxc0,3ssdwyhg0v,2017-03-16,09:10:35.453,0.436,EndOutbound
output
3ssdwyeim3,0.476,0.421
3ssdwyfxc0,0.509,0.436
I am trying to read a config file in a separate subroutine and to call that from my main function. The subroutine returns three variables (two arrays and one hash). Below is the code.
sub read_config{
my @keys;
my @dbkeys;
my %config;
open CONFILE,'/usr/local/pbiace/current/comparator/cfg/configFile.cfg' or die $!;
warn info_H . "opening config file \n ";
warn info_H . "reading position info";
@keys=split '\|',<CONFILE>;
( $config{$keys[0]},
$config{$keys[1]},
$config{$keys[2]},
$config{$keys[3]},
$config{$keys[4]},
$config{$keys[5]},
$config{$keys[6]},
$config{$keys[7]}) = split '\|',<CONFILE>;
warn info_H. "reading config file to obtain DB connection details";
@dbkeys=split '\|',<CONFILE>;
( $config{$dbkeys[0]},
$config{$dbkeys[1]},
$config{$dbkeys[2]},
$config{$dbkeys[3]} ) = split '\|',<CONFILE>;
warn info_H . "returning values read";
return(@keys,@dbkeys,%config);
}
I am calling it using the code below.
(@keys,@dbkeys,%config)=read_config();
but this is not working. Can anybody help me solve this?
The problem here is that Perl flattens lists when they are passed back and forth. You can only return one flat list of results. See: perlsub
So the assignment to @keys will be 'eating' all the results from read_config, which isn't returning three data structures - it's returning one list containing all the elements of each.
The solution to this is to return by reference.
return ( \@keys, \@dbkeys, \%config );
You then need to dereference them when you 'get' them:
my ($keys_ref, $dbkeys_ref, $config_ref)=read_config();
@keys = @$keys_ref;
@dbkeys = @$dbkeys_ref;
%config = %$config_ref;
Or just work with them as is, and dereference as you use them.
$keys_ref->[0];
$config_ref->{$key};
I would also point out - you should look at hash slices as that would probably improve your code - you can assign:
@config{@keys} = split ( '\|', <CONFILE> );
(But don't forget to chomp if you don't want a line feed!)
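Putting both suggestions together in a minimal sketch (the config lines here are made up and stand in for the real CONFILE):

```perl
use strict;
use warnings;

# Made-up config lines standing in for the real file.
my @lines = ( "host|port|user\n", "db.example.com|5432|admin\n" );

sub read_config {
    my @keys = split /\|/, shift @lines;
    chomp @keys;    # drop the newline from the last key

    my %config;
    @config{@keys} = split /\|/, shift @lines;    # hash slice assignment
    chomp $config{ $keys[-1] };

    return ( \@keys, \%config );    # return references, not flat lists
}

my ( $keys_ref, $config_ref ) = read_config();
print "$_ => $config_ref->{$_}\n" for @$keys_ref;
```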
I have the following problem:
I have a Perl program which extracts csv files, reads them and outputs a result.
The information about the csv structure is in XML files, provided in the archives mentioned above.
In the old version of the program I read those XML files for each line of the CSV file and everything worked fine:
...;
foreach $b (@gz_files)
{
if ( index($b, 'condition1') >= 0
|| index($b, 'condition2') >= 0
|| index($b, 'condition3') >= 0 )
{
$lt = localtime;
open (my $outputfile, '>>'.'/path_to_output/'.$dir_file.'.tmp')
|| die print $lfh "$lt - /path_to_output/$dir_file\.tmp - $!\n";
if ($b ne "")
{
# this is the procedure, which reads xml_files
%cv_tmp = eventstype::initialize($complex_xml_path, $rating_input_xml_path);
@EXPORT=qw(%cv_tmp);
...;
This code adds the structure from XML files into %cv_tmp variable.
After that, for each row in the CSV file I am assigning the value of %cv_tmp to %complex_vals, which is manipulated further.
...
%complex_vals=%mainfile::cv_tmp;
...
But after this manipulation I notice that %cv_tmp has changed - which is strange, because it is on the right-hand side of the assignment.
I do not want %cv_tmp to change on each CSV row.
Sorry for the bad description, but I am an absolute novice.
Thank you in advance.
Do you perhaps have something like
my %h1;
$h1{foo}{bar} = 123;
my %h2 = %h1;
$h2{foo}{bar} = 456;
print "$h1{foo}{bar}\n"; # 456
If so, you're not changing %h1 or %h2; you're changing the (anonymous) hash referenced by both $h1{foo} and $h2{foo}. You need to copy the referenced hash (not the reference to the hash) to solve this problem.
use Storable qw( dclone );
my %h1;
$h1{foo}{bar} = 123;
my %h2 = %{ dclone(\%h1) };
$h2{foo}{bar} = 456;
print "$h1{foo}{bar}\n"; # 123
I need help with numbering text in a file.
I am on a Linux machine and I need to write the script in Perl.
I have a file named file_db.txt.
This file contains parameters like name, ParameterFromBook, NumberPage, BOOK_From_library, price etc.
Each parameter is set equal to something, e.g. name=elephant.
My question: how do I do this in Perl?
I want to append a number to each parameter name (before the "=") that repeats in the file, increasing the number by 1 each time that parameter reappears, until EOF.
lidia
For example
file_db.txt before numbering
parameter=1
name=one
parameter=2
name=two
file_db.txt after parameters numbering
parameter1=1
name1=one
parameter2=2
name2=two
other examples
Example1 before
name=elephant
ParameterFromBook=234
name=star.world
ParameterFromBook=200
name=home_room1
ParameterFromBook=264
Example1 after parameters numbering
name1=elephant
ParameterFromBook1=234
name2=star.world
ParameterFromBook2=200
name3=home_room1
ParameterFromBook3=264
Example2 before
file_db.txt before numbering
lines_and_words=1
list_of_books=3442
lines_and_words=13
list_of_books=344224
lines_and_words=120
list_of_books=341
Example2 after
file_db.txt after parameters numbering
lines_and_words1=1
list_of_books1=3442
lines_and_words2=13
list_of_books2=344224
lines_and_words3=120
list_of_books3=341
It can be condensed to a one-line Perl script pretty easily, though I don't particularly recommend it if you want readability:
#!/usr/bin/perl
s/(.*)=/$k{$1}++;"$1$k{$1}="/e and print while <>;
This version reads from a specified file, rather than using the command line:
#!/usr/bin/perl
open IN, "/tmp/file";
s/(.*)=/$k{$1}++;"$1$k{$1}="/e and print while <IN>;
The way I look at it, you probably want to number blocks and not just occurrences. So you probably want the number on each of the keys to be at least as great as the earliest repeating key.
my $in = \*::DATA;
my $out = \*::STDOUT;
my %occur;
my $num = 0;
while ( <$in> ) {
if ( my ( $pre, $key, $data ) = m/^(\s*)(\w+)=(.*)/ ) {
$num++ if $num < ++$occur{$key};
print { $out } "$pre$key$num=$data\n";
}
else {
$num++;
print;
}
}
__DATA__
name=elephant
ParameterFromBook=234
name=star.world
ParameterFromBook=200
name=home_room1
ParameterFromBook=264
However, if you just wanted to give each key its particular count, this is enough:
my %occur;
while ( <$in> ) {
my ( $pre, $key, $data ) = m/^(\s*)(\w+)=(.*)/;
$occur{$key}++;
print { $out } "$pre$key$occur{$key}=$data\n";
}
in pretty much pseudo code:
open(DATA, "file");
my @lines = <DATA>;
my %tags;
foreach my $line (@lines)
{
chomp $line;
my ($name, $value) = split(/=/, $line, 2);
$tags{$name}++;
print "$name$tags{$name}=$value\n";
}
close( DATA );
This looks like a CS101 assignment. Is it really good to ask for complete solutions instead of asking specific technical questions if you have difficulty?
If Perl is not a must, here's an awk version
$ cat file
name=elephant
ParameterFromBook=234
name=star.world
ParameterFromBook=200
name=home_room1
ParameterFromBook=264
$ awk -F"=" '{s[$1]++}{print $1s[$1],$2}' OFS="=" file
name1=elephant
ParameterFromBook1=234
name2=star.world
ParameterFromBook2=200
name3=home_room1
ParameterFromBook3=264
I have basically the following Perl I'm working with:
open I,$coupon_file or die "Error: File $coupon_file will not Open: $! \n";
while (<I>) {
$lctr++;
chomp;
my @line = split /,/;
if (!@line) {
print E "Error: $coupon_file is empty!\n\n";
$processFile = 0; last;
}
}
I'm having trouble determining what the split /,/ function returns if an empty file is given to it. The code block if (!@line) is never executed. If I change that to
if (@line)
then the code block is executed. I've read the information on the Perl split function over at
http://perldoc.perl.org/functions/split.html and the discussion here about testing for an empty array, but I'm not sure what is going on here.
I am new to Perl so am probably missing something straightforward here.
If the file is empty, the while loop body will not run at all.
Evaluating an array in scalar context returns the number of elements in the array.
split /,/ returns a list of at least one element whenever $_ contains something other than delimiters; splitting an empty string yields an empty list, so @line can only be empty for blank lines.
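A quick demonstration of that behaviour with illustrative strings:

```perl
use strict;
use warnings;

# split on an empty string yields an empty list;
# any string with real content yields at least one field.
my @empty = split /,/, "";
my @one   = split /,/, "abc";
my @three = split /,/, "a,b,c";

print scalar(@empty), " ", scalar(@one), " ", scalar(@three), "\n";  # prints "0 1 3"
```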
You might try some debugging:
...
chomp;
use Data::Dumper;
$Data::Dumper::Useqq = 1;
print Dumper( { "line is" => $_ } );
my @line = split /,/;
print Dumper( { "split into" => \@line } );
if (!#line) {
...
Below are a few tips to make your code more idiomatic:
The special variable $. already holds the current line number, so you can likely get rid of $lctr.
Are empty lines really errors, or can you ignore them?
Pull apart the list returned from split and give the pieces names.
Let Perl do the opening with the "diamond operator":
The null filehandle <> is special: it can be used to emulate the behavior of sed and awk. Input from <> comes either from standard input, or from each file listed on the command line. Here's how it works: the first time <> is evaluated, the @ARGV array is checked, and if it is empty, $ARGV[0] is set to "-", which when opened gives you standard input. The @ARGV array is then processed as a list of filenames. The loop
while (<>) {
... # code for each line
}
is equivalent to the following Perl-like pseudo code:
unshift(@ARGV, '-') unless @ARGV;
while ($ARGV = shift) {
open(ARGV, $ARGV);
while (<ARGV>) {
... # code for each line
}
}
except that it isn't so cumbersome to say, and will actually work.
Say your input is in a file named input and contains
Campbell's soup,0.50
Mac & Cheese,0.25
Then with
#! /usr/bin/perl
use warnings;
use strict;
die "Usage: $0 coupon-file\n" unless @ARGV == 1;
while (<>) {
chomp;
my($product,$discount) = split /,/;
next unless defined $product && defined $discount;
print "$product => $discount\n";
}
that we run as below on Unix:
$ ./coupons input
Campbell's soup => 0.50
Mac & Cheese => 0.25
Empty file or empty line? Regardless, try this test instead of !@line.
if (scalar(#line) == 0) {
...
}
The scalar function returns the array's length in Perl.
Some clarification:
if (@line) {
}
Is the same as:
if (scalar(@line)) {
}
In a scalar context, arrays (@line) return the length of the array. So scalar(@line) forces @line to be evaluated in scalar context and returns the length of the array.
I'm not sure whether you're trying to detect that the line is empty (which is what your code does) or that the whole file is empty (which is what the error message says).
If the line: please fix your error text, and the logic should be as the other posters said (or you can use if ($line =~ /^\s*$/) as your test).
If the file: you simply need to test if (!$lctr) {} after the end of your loop - as noted in another answer, the loop will not be entered if there are no lines in the file.
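A sketch of that empty-file check, using an in-memory stand-in for the filehandle (the variable names mirror the question's $lctr):

```perl
use strict;
use warnings;

my @input = ();    # stand-in for an empty coupon file
my $lctr  = 0;

for (@input) {
    $lctr++;
    # per-line processing would go here
}

# The loop body never ran, so the counter distinguishes an empty
# file from a file whose lines were all processed.
print "Error: file is empty\n" if !$lctr;
```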