I'm using this code to find specific text in the database, and then I will load the results into a page with Mojolicious. Is this method good, and how fast is it?
use MongoDB;
use Data::Dump qw(dump);

my $connection = MongoDB::Connection->new( host => 'localhost', port => 27017 );
my $database   = $connection->test;
my $col        = $database->user;

# Run the distinct command: collect the distinct values of the 'text'
# field across the 'person' collection.
my $r3 = $database->run_command([
    "distinct" => "person",
    "key"      => "text",
    "query"    => {},          # empty filter document: match every document
]);

for my $d ( @{ $r3->{values} } ) {
    if ( $d =~ /value/ ) {
        print "D: $d\n";
    }
}
The distinct command can certainly work (and it seems that it does), so it's good. It is also probably the fastest way to do this: IIRC, the implementation just opens the appropriate index, reads from it, and populates a hash table.
Note, however, that it will fail with an error if the total size of the distinct values is greater than the BSON size limit (currently 16 MB).
If you ever run into this, you'll have to resort to slower alternatives, such as MapReduce.
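A hedged sketch of that MapReduce fallback, reusing the connection from the question; the output collection name distinct_text is just an example. Because the results go into a collection rather than a single reply document, the 16 MB reply limit no longer applies:

# Emit each document's 'text' value as a key; the reduce step is a no-op,
# so each distinct value ends up as the _id of one output document.
my $map    = q{ function() { emit(this.text, null); } };
my $reduce = q{ function(key, values) { return null; } };

$database->run_command([
    "mapReduce" => "person",
    "map"       => $map,
    "reduce"    => $reduce,
    "out"       => "distinct_text",     # example output collection name
]);

my $cursor = $database->get_collection('distinct_text')->find;
while ( my $doc = $cursor->next ) {
    print "D: $doc->{_id}\n" if $doc->{_id} =~ /value/;
}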
my $UsRx = '1.3.6.1.4.1.4491.2.1.20.1.4.1.3.737288';

my %table;    # Hash to store the results

my $res = $session->get_bulk_request(
    -varbindlist    => [ $UsRx ],
    -callback       => [ \&get_callback, \%table ],
    -maxrepetitions => 80,
);
snmp_dispatcher();
if ( !defined $res ) {
    printf "ERROR: %s\n", $session->error();
    $session->close();
    exit 1;
}

for my $oid ( oid_lex_sort( keys %table ) ) {
    printf "%s,%s,\n", $oid, $table{$oid};
}
Note: the callback function is not shown here, but assume it is working correctly. The issue seems to be with get_bulk_request: when I need data for a single index, it ignores the given index and returns data for other indexes as well. Any alternative solution would also be appreciated.
Output:
1.3.6.1.4.1.4491.2.1.20.1.4.1.3.737288.1337,-70
1.3.6.1.4.1.4491.2.1.20.1.4.1.3.737288.1338,-75
1.3.6.1.4.1.4491.2.1.20.1.4.1.3.737288.1339,-55
1.3.6.1.4.1.4491.2.1.20.1.4.1.3.737288.1340,-60
1.3.6.1.4.1.4491.2.1.20.1.4.1.3.737289.1337,-75
1.3.6.1.4.1.4491.2.1.20.1.4.1.3.737289.1338,-75
1.3.6.1.4.1.4491.2.1.20.1.4.1.3.737289.1339,-60
1.3.6.1.4.1.4491.2.1.20.1.4.1.3.737289.1340,-65
1.3.6.1.4.1.4491.2.1.20.1.4.1.3.737290.1337,-80
1.3.6.1.4.1.4491.2.1.20.1.4.1.3.737290.1338,-70
1.3.6.1.4.1.4491.2.1.20.1.4.1.3.737290.1339,-65
1.3.6.1.4.1.4491.2.1.20.1.4.1.3.737290.1340,-65
1.3.6.1.4.1.4491.2.1.20.1.4.1.3.737291.1337,-65
1.3.6.1.4.1.4491.2.1.20.1.4.1.3.737291.1338,-55
1.3.6.1.4.1.4491.2.1.20.1.4.1.3.737291.1339,-50
1.3.6.1.4.1.4491.2.1.20.1.4.1.3.737291.1340,-45
1.3.6.1.4.1.4491.2.1.20.1.4.1.3.737293.1337,-15
Expected output:
1.3.6.1.4.1.4491.2.1.20.1.4.1.3.737288.1337,-70
1.3.6.1.4.1.4491.2.1.20.1.4.1.3.737288.1338,-75
1.3.6.1.4.1.4491.2.1.20.1.4.1.3.737288.1339,-55
1.3.6.1.4.1.4491.2.1.20.1.4.1.3.737288.1340,-60
While this works fine with snmpwalk on the terminal:
system#new:~$ snmpwalk -v2c -c #543%we 23.9.4.67 1.3.6.1.4.1.4491.2.1.20.1.4.1.3.737288
iso.3.6.1.4.1.4491.2.1.20.1.4.1.3.737288.1337 = INTEGER: -70
iso.3.6.1.4.1.4491.2.1.20.1.4.1.3.737288.1338 = INTEGER: -75
iso.3.6.1.4.1.4491.2.1.20.1.4.1.3.737288.1339 = INTEGER: -55
iso.3.6.1.4.1.4491.2.1.20.1.4.1.3.737288.1340 = INTEGER: -60
I'm not sure I am interpreting your question correctly, but it sounds like you are asking why snmpwalk (CLI tool) returns only OIDs that have the same prefix as the one you specified, while using get-bulk from your perl code returns OIDs beyond the subtree you requested.
This would be expected behavior. "snmpwalk" is not an SNMP request type; get-bulk and get-next are. Instead, "snmpwalk" is a specialized tool that uses get-next or get-bulk and handles, itself, detecting that the get-bulk or get-next has retrieved an OID outside the subtree you specified and terminating the walk. Unless the API you're using provides a similar function, you would have to implement this logic in your code. The agent is just doing what was requested: return up to 80 (per your code) varbinds lexicographically greater than the request OID. SNMP doesn't have a built-in request type that retrieves only a subtree.
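If you want that same logic on the Perl side, Net::SNMP ships an oid_base_match() helper you can use for it. A minimal sketch, reusing $UsRx and %table from the question, that simply discards anything the agent returned beyond the requested subtree:

use Net::SNMP qw(oid_base_match oid_lex_sort);

for my $oid ( oid_lex_sort( keys %table ) ) {
    next unless oid_base_match( $UsRx, $oid );    # outside the subtree: skip it
    printf "%s,%s\n", $oid, $table{$oid};
}

For very large tables you would instead re-issue get_bulk_request starting from the last OID received and stop as soon as oid_base_match() fails, rather than collecting everything first.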
I was hoping that someone might be able to assist me. I'm new to Perl and generally getting some good results from some small scripts I've written; however, I'm stuck on a nested while loop in a new script I'm working on.
The script I've put together performs two MySQL select statements and then places the results into two separate arrays. I then want to check the first element in the first array against all of the results in the second array, then move to the second element in the first array and check it against all results in the second array, and so on.
The goal of the script is to find an IP address in the first array and see which subnets it fits into in the second...
What I find is happening is that the script runs through only the first element of the first array and all elements of the second array, then stops.
Here is an extract of the Perl script below; if anyone could point me in the right direction I would really appreciate it.
my @ip_core_wan_field;
while ( @ip_core_wan_field = $wan_core_collection->fetchrow_array() ) {
    my $coreipAddr = $ip_core_wan_field[1];
    my @ip_wan_field;
    while ( @ip_wan_field = $wan_collection->fetchrow_array() ) {
        my $ipAddr  = $ip_wan_field[1];
        my $network = NetAddr::IP->new( $ip_wan_field[4], $ip_wan_field[5] );
        my $ip      = NetAddr::IP->new($coreipAddr);
        if ( $ip->within($network) && $ip ne $ipAddr ) {
            print "$ip IS IN THE SAME subnet as $network \n";
        }
        else {
            print "$coreipAddr is outside the subnet for $network\n\n";
        }
    }
}
Your SQL queries are single-pass operations. If you want to loop over the second collection more than once, you need to either cache the values and iterate over the cache, or rerun the query.
I would of course advise that you go with the first option, using fetchall_arrayref:
my $wan_arrayref = $wan_collection->fetchall_arrayref;

while ( my @ip_core_wan_field = $wan_core_collection->fetchrow_array() ) {
    my $coreipAddr = $ip_core_wan_field[1];
    for my $ip_wan_field_ref (@$wan_arrayref) {
        my @ip_wan_field = @$ip_wan_field_ref;
        # ... the rest of the inner loop stays the same ...
There are of course other ways to make this operation more efficient, but that's the crux of your current problem.
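One such improvement, as a hedged sketch (the column positions 1, 4 and 5 are assumed to match your queries): build the NetAddr::IP subnet objects once, up front, instead of recreating them on every pass of the outer loop.

use NetAddr::IP;

# Build each subnet object a single time from the cached rows.
my @subnets = map {
    {
        ip      => $_->[1],
        network => NetAddr::IP->new( $_->[4], $_->[5] ),
    }
} @$wan_arrayref;

while ( my @ip_core_wan_field = $wan_core_collection->fetchrow_array() ) {
    my $ip = NetAddr::IP->new( $ip_core_wan_field[1] );
    for my $subnet (@subnets) {
        print "$ip IS IN THE SAME subnet as $subnet->{network}\n"
            if $ip->within( $subnet->{network} );
    }
}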
I think I've got the gist of creating a table using Perl's PDF::Report and PDF::Report::Table, but I am having difficulty seeing what the 2-dimensional array @data should look like.
The documentation says it's a 2-dimensional array, but the example on CPAN just shows an array of arrays (test1, test2, and so on), rather than an example showing data and formatting such as $padding, $bgcolor_odd, and so on.
Here's what I've done so far:
my $main_rpt_path = "/home/ics/work/rpts/interim/mtr_prebill.rpt";

my $main_rpt_pdf =
    PDF::Report->new( 'PageSize' => 'letter', 'PageOrientation' => 'Landscape' );

my $main_rpt_tbl_wrt =
    PDF::Report::Table->new($main_rpt_pdf);
Obviously, I can't pass a one dimensional array, but I have searched for examples and can only find the one in CPAN search.
Edit:
Here is how I am trying to call addTable:
$main_rpt_tbl_wrt->addTable(build_table_writer_array($pt_column_headers_ref, undef));
.
.
.
sub build_table_writer_array {
    # $data   -- an array ref of data
    # $format -- an array ref of formatting
    #
    # returns an array ref of a 2d array.
    my ($data, $format) = @_;

    my $out_data_table = undef;
    my @format_array = ( 10, 10, 0xFFFFFF, 0xFFFFCC );
    $out_data_table = [ [ @$data ], ];

    return $out_data_table;
}
and here is the error I'm getting.
Use of uninitialized value in subtraction (-) at /usr/local/share/perl5/PDF/Report/Table.pm line 88.
at /usr/local/share/perl5/PDF/Report/Table.pm line 88
I cannot figure out what addTable wants for data; that is, I am wondering where the formatting is supposed to go.
Edit:
It appears the addTable call should look like
$main_rpt_tbl_wrt->addTable(build_table_writer_array($pt_column_headers_ref), 10, 10, 0xFFFFFF, 0xFFFFCC);
not the way I've indicated.
This looks like a bug in the module. I tried running the example code in the SYNOPSIS, and I got the same error you get. The module has no real tests, so it is no surprise that there would be bugs. You can report it on CPAN.
The POD has bugs, too.
You increase your chances of getting it fixed if you look at the source code and fix it yourself with a patch.
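For what it's worth, the "2-dimensional array" the documentation refers to is simply an array of array references, one inner arrayref per table row. A hedged sketch (the column values are invented for illustration, and the formatting arguments are passed separately, following the call shown in your edit rather than inside the data structure):

my $data = [
    [ 'Name',  'Qty', 'Price' ],    # first row of the table
    [ 'Apple', 3,     0.50    ],
    [ 'Pear',  1,     0.75    ],
];

$main_rpt_tbl_wrt->addTable( $data, 10, 10, 0xFFFFFF, 0xFFFFCC );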
The Net::LDAP module for Perl provides a Net::LDAP::Search object. Its as_struct method returns the structure below.
Multiple entries, roughly of the form:
$struct->{dn} = {
    cn   => [ name ],
    l    => [ city ],
    mail => [ xxxxxx ],
};
An example:
uid=pieterb,ou=People,dc=example,dc=org {key of first hash = dn in ldap}
uid=pieterb {key=uid}
cn=Pieter B. {key=cn}
uidNumber=1000 {key=uidNumber}
gidNumber=4000 {key=gidNumber}
uid=markc,ou=People,dc=example,dc=org {key of first hash = dn in ldap }
uid=markc {key=uid}
cn=Mark Cole {key=cn}
uidNumber=1001 {key=uidNumber}
gidNumber=4000 {key=gidNumber}
However, the interface uses UI::Dialog, which expects a list in the format below (radiolist/checklist), with the data coming from the attribute values in the LDAP server:
list => [
'Pieter B.', ['uid=pieterb,ou=People,dc=example,dc=org',0],
'Mark Cole', ['uid=markc,ou=People,dc=example,dc=org',0],
'cn_value(openldap)',['dn_value',0],
'givenname_value(activedirectory)',['dn_value',0]
]
It is very hard to guess what you want, but I think it is a list of the LDAP attribute names versus their values.
You should look at Data::Dumper to examine and present the data structures you are dealing with.
You don't mention what to do if the data you get from the search contains multiple Distinguished Names, or multiple values for an attribute, but this code simply takes the first DN and the first value in lists of attribute values to generate a list of lists.
I have little doubt that this isn't exactly what you need, and if you specify your requirement better we will be able to help further.
my $data  = $msg->as_struct;
my $entry = ( values %$data )[0];

my @attributes = map {
    $_, [ $entry->{$_}[0], 0 ]
} keys %$entry;

$dialog->checklist( list => \@attributes );
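If instead you want one checklist entry per DN, closer to the radiolist/checklist sample in the question, here is a hedged variant that labels each item with its first cn value (falling back to the DN when an entry has no cn):

my $data = $msg->as_struct;

my @list;
for my $dn ( keys %$data ) {
    my $label = $data->{$dn}{cn}[0] // $dn;    # first cn value, or the DN itself
    push @list, $label, [ $dn, 0 ];
}

$dialog->checklist( list => \@list );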
A while back I created a log parser. The logs can range from several thousand lines up to millions of lines. I store the parsed entries in an array of hash refs.
I am looking for suggestions on how to store my output, so that I can quickly read it back in if the script is run again (this prevents the need to re-parse the log).
The end goal is to have a web interface that will allow users to create queries (basically treating the parsed output like it existed within a database).
I have already considered writing the output of Data::Dumper to a file.
Here is an example array entry printed with Data::Dumper:
$VAR =
{
'weekday' => 'Sun',
'index' => 26417,
'timestamp' => '1316326961',
'text' => 'sys1 NSP
Test.cpp 1000
This is a example error message.
',
'errname' => 'EM_TEST',
'time' => {
'array' => [
2011,
9,
18,
'06',
22,
41
],
'stamp' => '20110918062241',
'whole' => '06:22:41',
'hour' => '06',
'sec' => 41,
'min' => 22
},
'month' => 'Sep',
'errno' => '2261703',
'dayofmonth' => 18,
'unknown2' => '1',
'unknown3' => '1',
'year' => 2011,
'unknown1' => '0',
'line' => 219154
},
Is there a more efficient way of accomplishing my goal?
If your output is an object (or if you want to make it into an object), then you can use KiokuDB (along with a database back end of your choice). If not, then you can use Storable. Of course, if your data structure essentially mimics a CSV file, then you can just write the output to a file. Or you can output the data into a JSON object that you can store in a file. Or you can forgo the middleman and simply use a database.
You mentioned that your data structure is a "array of hashes" (presumably you mean an array of hash references). If the keys of each hash reference are the same, then you can store this in CSV.
You're unlikely to get a specific answer without being more specific about your data.
Edit: Now that you've posted some sample data, you can simply write this to a CSV file or a database with the values for index, timestamp, text, errname, errno, unknown1, unknown2, unknown3, and line.
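If you take the JSON route mentioned above, a minimal sketch; the file name is just an example, and @parsed stands for your array of hash refs:

use JSON::PP qw(encode_json decode_json);

# Write the parsed entries out once...
open my $out, '>', 'parsed_logs.json' or die $!;
print {$out} encode_json( \@parsed );
close $out;

# ...and on the next run read them back instead of re-parsing the log.
open my $in, '<', 'parsed_logs.json' or die $!;
my $entries = do { local $/; decode_json(<$in>) };
close $in;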
use Storable;

my %hash;
# fill %hash here

store \%hash, 'file';              # write the hash to disk

%hash = ();
%hash = %{ retrieve('file') };     # read it back

# print %hash here
You can always use KiokuDB, Storable, or what have you, but if you are planning to do aggregation, using a relational database (or some other data store that supports queries) may be the best solution in the longer run. A lightweight data store with an SQL engine like SQLite, which doesn't require running a database server, could be a good starting point.
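A hedged sketch of that SQLite route; the table and column names are invented for illustration, and @parsed stands for the array of hash refs from the question:

use DBI;

my $dbh = DBI->connect( 'dbi:SQLite:dbname=parsed_logs.db', '', '',
                        { RaiseError => 1, AutoCommit => 0 } );

$dbh->do(q{
    CREATE TABLE IF NOT EXISTS entries (
        idx       INTEGER,
        timestamp INTEGER,
        errname   TEXT,
        errno     TEXT,
        line      INTEGER,
        text      TEXT
    )
});

my $sth = $dbh->prepare(
    'INSERT INTO entries (idx, timestamp, errname, errno, line, text)
     VALUES (?, ?, ?, ?, ?, ?)'
);

for my $entry (@parsed) {
    $sth->execute( @{$entry}{qw(index timestamp errname errno line text)} );
}
$dbh->commit;

Your web interface can then issue ordinary SQL queries against that single file.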