Grab the fsyncLock status of a MongoDB in Perl

I'm trying to build a Nagios check to report how long a MongoDB instance has been locked with fsyncLock() for backup purposes (for example, if the iSCSI snapshotting script blows up and the database never gets unlocked).
I was thinking about using a simple
$currentLock->run_command({currentOp => 1})
$isLocked = $currentLock->{fsyncLock}
But it seems like run_command() doesn't support currentOp yet. (As seen here: https://github.com/MLstate/opalang/blob/master/lib/stdlib/apis/mongo/commands.opa)
Would anybody have any advice on how to check whether a mongod is locked from a Perl script? If not, I guess I'll go for some bash. I was thinking about using db.eval('db.currentOp()') but I'm getting a bit lost.
Thanks!

You are right that run_command does not support doing a currentOp directly. However, if we look at the implementation of db.currentOp in the mongo shell, we can see how it works under the hood:
> db.currentOp
function (arg) {
    var q = {};
    if (arg) {
        if (typeof arg == "object") {
            Object.extend(q, arg);
        } else if (arg) {
            q.$all = true;
        }
    }
    return this.$cmd.sys.inprog.findOne(q);
}
So we can query the special collection $cmd.sys.inprog on the Perl side to get the same inprog array that would be returned in the shell.
use strict;
use warnings;
use MongoDB;
my $db = MongoDB::MongoClient->new->get_database( 'test' );
my $current_op = $db->get_collection( '$cmd.sys.inprog' )->find_one;
When the server is not locked, it will return a structure in $current_op that looks something like this:
{
    'inprog' => [
        {
            'connectionId' => 53,
            'insert' => {},
            'active' => bless( do{\(my $o = 0)}, 'boolean' ),
            'lockStats' => {
                'timeAcquiringMicros' => {
                    'w' => 1,
                    'r' => 0
                },
                'timeLockedMicros' => {
                    'w' => 9,
                    'r' => 0
                }
            },
            'numYields' => 0,
            'locks' => {
                '^' => 'w',
                '^test' => 'W'
            },
            'waitingForLock' => $VAR1->{'inprog'}[0]{'active'},
            'ns' => 'test.fnoof',
            'client' => '127.0.0.1:50186',
            'threadId' => '0x105a81000',
            'desc' => 'conn53',
            'opid' => 7152352,
            'op' => 'insert'
        }
    ]
};
During an fsyncLock(), you'll get an empty inprog array but you will have a helpful info field and the expected fsyncLock boolean:
{
    'info' => 'use db.fsyncUnlock() to terminate the fsync write/snapshot lock',
    'fsyncLock' => bless( do{\(my $o = 1)}, 'boolean' ), # <--- that's true
    'inprog' => []
};
So, putting it all together, we get:
use strict;
use warnings;
use MongoDB;

my $db = MongoDB::MongoClient->new->get_database( 'fnarf' );
my $current_op = $db->get_collection( '$cmd.sys.inprog' )->find_one;

if ( $current_op->{fsyncLock} ) {
    print "fsync lock is currently ON\n";
} else {
    print "fsync lock is currently OFF\n";
}

I actually decided to switch to a solution in bash (easier for what I want to do with the data later):
currentOp=`mongo --port $port --host $host --eval "printjson(db.currentOp())"`
Then some sort of grep -Po '"fsyncLock" : \d'
Thanks for the Perl insight though, it worked perfectly

Related

Config::IniFiles hash behaves different than manually written hash

I am loading a config file with Config::IniFiles, which ends up as a nested hash. After that, I want to modify the resulting hash by bringing the values of some keys one level up. In the example below, I am aiming for this as a result:
$VAR1 = {
    'max_childrensubtree' => '7',
    'port' => '1984',
    'user' => 'someuser',
    'password' => 'somepw',
    'max_width' => '20',
    'host' => 'localhost',
    'attrs' => {
        'subattr2' => 'cat',
        'topattr1' => 'cat',
        'subattr2_1' => 'pt',
        'subattr1' => 'rel'
    },
    'max_descendants' => '1000'
};
So for the keys params and basex at the top level, I want to move their contents (key-value pairs) up to the top level and remove the keys themselves. In short:
(
    a => {
        'key1' => 'ok',
        'key2' => 'hello'
    }
)
turns into
(
    'key1' => 'ok',
    'key2' => 'hello'
)
The strange thing is that what I am trying to do does not work on a hash built from an INI file that was read in, but it does work with a manually written hash. In other words, this works:
#!/usr/bin/perl
use utf8;
use strict;
use warnings;
use Data::Dumper;

my %ini = (
    'params' => {
        'max_width' => '20',
        'max_childrensubtree' => '7',
        'max_descendants' => '1000'
    },
    'attrs' => {
        'topattr1' => 'cat',
        'subattr1' => 'rel',
        'subattr2' => 'cat',
        'subattr2_1' => 'pt',
    },
    'basex' => {
        'host' => 'localhost',
        'port' => '1984',
        'user' => 'someuser',
        'password' => 'somepw'
    }
);

&_parse_ini(\%ini);

sub _parse_ini {
    my $ref = shift;
    foreach (('params', 'basex')) {
        foreach my $k (keys %{$ref->{$_}}) {
            $ref->{$k} = $ref->{$_}->{$k};
        }
        delete $ref->{$_};
    }
    print Dumper($ref);
}
But this does not:
#!/usr/bin/perl
use utf8;
use strict;
use warnings;
use Data::Dumper;
use Config::IniFiles;

# Load config file
tie my %ini, 'Config::IniFiles', (-file => $ARGV[0]);

&_parse_ini(\%ini);

sub _parse_ini {
    my $ref = shift;
    foreach (('params', 'basex')) {
        foreach my $k (keys %{$ref->{$_}}) {
            $ref->{$k} = $ref->{$_}->{$k};
        }
        delete $ref->{$_};
    }
    print Dumper($ref);
}
The input ini file for this example would be:
[params]
max_width = 20
max_childrensubtree = 7
max_descendants = 1000
[attrs]
topattr1 = cat
subattr1 = rel
subattr2 = cat
subattr2_1 = pt
[basex]
host = localhost
port = 1984
user = admin
password = admin
I have been looking in the documentation and on SO for similar issues but have found none. The hashes appear to be identical (Config::IniFiles doesn't seem to add anything specific), so I have no idea why it works for 'manual' hashes but not for read-in ones.
The two hashes are not identical at all, although they may appear to be from the point of view of the data they contain.
The first one is a regular hash. You can do whatever you like with it.
The second one is a tied hash. It is backed by a Config::IniFiles object but presented through a hash-like interface. So while it appears to be a hash, the package can override the methods for storing and fetching information in the hash however it likes.
In this particular case, it looks like Config::IniFiles will only store a new key in the hash if the value is a hash ref. So you can't flatten out the tied hash in place the way you want. Instead, you'll have to create a new hash and copy the data into it.
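For example, one way to do that (a minimal sketch assuming the same section names as the question) is to copy the tied hash into a plain hash first and then flatten the copy:

use strict;
use warnings;
use Data::Dumper;
use Config::IniFiles;

tie my %ini, 'Config::IniFiles', ( -file => $ARGV[0] );

# Copy the tied hash into a regular hash so it can be reshaped freely.
my %config;
for my $section ( keys %ini ) {
    $config{$section} = { %{ $ini{$section} } };
}

# The flattening now works exactly as it did with the hand-written hash.
for my $section ( 'params', 'basex' ) {
    for my $k ( keys %{ $config{$section} } ) {
        $config{$k} = $config{$section}{$k};
    }
    delete $config{$section};
}

print Dumper( \%config );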

How can I do a scrolled search on MetaCPAN?

I'm trying to convert this script to use the new Elasticsearch official client instead of the older (now deprecated) ElasticSearch.pm, but I can't get the scrolled search to work. Here's what I've got:
#! /usr/bin/perl
use strict;
use warnings;
use 5.010;

use Elasticsearch ();
use Elasticsearch::Scroll ();

my $es = Elasticsearch->new(
    nodes    => 'http://api.metacpan.org:80',
    cxn      => 'NetCurl',
    cxn_pool => 'Static::NoPing',
    #log_to   => 'Stderr',
    #trace_to => 'Stderr',
);

say 'Getting all results at once works:';
my $results = $es->search(
    index => 'v0',
    type  => 'release',
    body  => {
        filter => { range => { date => { gte => '2013-11-28T00:00:00.000Z' } } },
        fields => [qw(author archive date)],
    },
);
foreach my $hit (@{ $results->{hits}{hits} }) {
    my $field = $hit->{fields};
    say "@$field{qw(date author archive)}";
}

say "\nUsing a scrolled search does not work:";
my $scroller = Elasticsearch::Scroll->new(
    es          => $es,
    index       => 'v0',
    search_type => 'scan',
    size        => 100,
    type        => 'release',
    body        => {
        filter => { range => { date => { gte => '2013-11-28T00:00:00.000Z' } } },
        fields => [qw(author archive date)],
    },
);
while (my $hit = $scroller->next) {
    my $field = $hit->{fields};
    say "@$field{qw(date author archive)}";
} # end while $hit
The first search, where I'm just getting all the results in 1 chunk, works fine. But the second search, where I'm trying to scroll through the results, produces:
Using a scrolled search does not work:
[Request] ** [http://api.metacpan.org:80]-[500]
ActionRequestValidationException[Validation Failed: 1: scrollId is missing;],
called from sub Elasticsearch::Transport::try {...}
at .../Try/Tiny.pm line 83. With vars: {'body' =>
'ActionRequestValidationException[Validation Failed: 1: scrollId is missing;]',
'request' => {'path' => '/_search/scroll','serialize' => 'std',
'body' => 'c2Nhbjs1OzE3MjU0NjM2MjowakFELUU3VFFibTJIZW1ibUo0SUdROzE3MjU0NjM2NDowakFELUU3VFFibTJIZW1ibUo0SUdROzE3MjU0NjM2MTowakFELUU3VFFibTJIZW1ibUo0SUdROzE3MjU0NjM2MDowakFELUU3VFFibTJIZW1ibUo0SUdROzE3MjU0NjM2MzowakFELUU3VFFibTJIZW1ibUo0SUdROzE7dG90YWxfaGl0czoxNDQ7',
'method' => 'GET','qs' => {'scroll' => '1m'},'ignore' => [],
'mime_type' => 'application/json'},'status_code' => 500}
What am I doing wrong? I'm using Elasticsearch 0.75 and Elasticsearch-Cxn-NetCurl 0.02, and Perl 5.18.1.
I finally got it working with the newer Search::Elasticsearch official client. Here's the short version:
#! /usr/bin/perl
use strict;
use warnings;
use 5.010;

use Search::Elasticsearch ();

my $es = Search::Elasticsearch->new(
    cxn_pool => 'Static::NoPing',
    nodes    => 'api.metacpan.org:80',
);

my $scroller = $es->scroll_helper(
    index       => 'v0',
    type        => 'release',
    search_type => 'scan',
    scroll      => '2m',
    size        => 100,
    body        => {
        fields => [qw(author archive date)],
        query  => { range => { date => { gte => '2015-02-01T00:00:00.000Z' } } },
    },
);

while (my $hit = $scroller->next) {
    my $field = $hit->{fields};
    say "@$field{qw(date author archive)}";
} # end while $hit
Note that the records are not sorted when you do a scrolled search. I wound up dumping the records into a temporary database and sorting them locally. The updated script is on GitHub.
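If the result set is small enough to hold in memory, a simpler (hypothetical) variant of that idea is to collect the hits during the scroll and sort them afterwards, replacing the while loop above with something like:

# Gather every hit first; scrolled results arrive in no particular order.
my @hits;
while (my $hit = $scroller->next) {
    push @hits, $hit->{fields};
}

# Sort locally by date, then print as before.
for my $field (sort { $a->{date} cmp $b->{date} } @hits) {
    say "@$field{qw(date author archive)}";
}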
I don't have a direct answer, but I might have an approach to troubleshooting:
I followed your link to the Elasticsearch::Client and found a scroll() method:
https://metacpan.org/pod/Elasticsearch::Client::Direct#scroll
This method takes scroll and scroll_id as parameters. scroll is the number of minutes that you can keep calling the scroll method before the search expires. scroll_id is a marker to the place where the last call to scroll() ended.
$results = $e->scroll(
    scroll    => '1m',
    scroll_id => $id
);
Elasticsearch::Scroll is an object oriented wrapper around scroll() which hides scroll and scroll_id.
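In other words, the wrapper roughly does the following for you (a sketch based on the documented search()/scroll() calls; treat the details as an approximation rather than the exact implementation):

# Initial search with search_type 'scan' opens the scroll context
# and returns a scroll id instead of hits.
my $results = $e->search(
    index       => 'v0',
    type        => 'release',
    search_type => 'scan',
    scroll      => '1m',
    size        => 100,
    body        => { fields => [qw(author archive date)] },
);
my $scroll_id = $results->{_scroll_id};

# Each scroll() call returns the next batch plus a fresh scroll id.
while (1) {
    my $batch = $e->scroll( scroll => '1m', scroll_id => $scroll_id );
    last unless @{ $batch->{hits}{hits} };
    $scroll_id = $batch->{_scroll_id};
    # ... process $batch->{hits}{hits} here ...
}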
I would run perl -d on your script, step into $scroller->next, and follow that as far down the rabbit hole as you can. Something in there is trying a search that should be populating scroll_id (or scrollId) and is failing.
My description here is admittedly pretty rough... I ran across an accurate description of what the scroll id is and does during my googling, but I can't seem to find it again.

Tie Reading and Writing Perl Config Files

I'm using the PerlMonk example I found on:
Reading and Writing Perl Config Files
Configuration.pl:
%CFG = (
    'servers' => {
        'SRV1' => {
            'IP' => 99.32.4.0,
            'user' => 'aname',
            'pswd' => 'p4ssw0rd',
            'status' => 'unavailable'
        },
        'SRV2' => {
            'IP' => 129.99.10.5,
            'user' => 'guest',
            'pswd' => 'guest',
            'status' => 'unavailable'
        }
    },
    'timeout' => 60,
    'log' => {
        'file' => '/var/log/my_log.log',
        'level' => 'warn',
    },
    'temp' => 'remove me'
);
It is working great, but the only issue is that when reading and writing, the hash-like configuration comes out 'out of order'.
Is there a way to keep it TIED?
This is important since the configuration file will also be edited manually, so I want the keys and values to stay in the same order.
You could tie the config variable before using it, so that the hash keys stay in the same order they were inserted:
use strict;
use warnings;
use Tie::IxHash;

tie my %CFG, 'Tie::IxHash';
%CFG = (
    'servers' => {
        'SRV1' => {
            'IP' => '99.32.4.0',
            'user' => 'aname',
            'pswd' => 'p4ssw0rd',
            'status' => 'unavailable'
        },
        'SRV2' => {
            'IP' => '129.99.10.5',
            'user' => 'guest',
            'pswd' => 'guest',
            'status' => 'unavailable'
        }
    },
    'timeout' => 60,
    'log' => {
        'file' => '/var/log/my_log.log',
        'level' => 'warn',
    },
    'temp' => 'remove me'
);

use Data::Dumper;
print Dumper \%CFG;
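One caveat worth noting: Tie::IxHash only preserves insertion order for the hash that is actually tied, so the nested hashes (servers, log, and the per-server entries) still have unordered keys. If their order matters as well, each nested hash needs to be tied too; a minimal sketch of that idea (the ordered() helper is an illustrative addition):

use strict;
use warnings;
use Tie::IxHash;

# Small helper: build an insertion-ordered hash ref from a key/value list.
sub ordered { tie my %h, 'Tie::IxHash', @_; return \%h; }

tie my %CFG, 'Tie::IxHash',
    'servers' => ordered(
        'SRV1' => ordered(
            'IP'     => '99.32.4.0',
            'user'   => 'aname',
            'pswd'   => 'p4ssw0rd',
            'status' => 'unavailable',
        ),
    ),
    'timeout' => 60;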
If you use JSON for the configuration file then, unlike a file of Perl code that has to be executed, your software is safe from malicious (or perhaps accidentally corrupted) content. JSON also has a simpler syntax than Perl data structures, and it is easier to recover from syntax errors.
Setting the canonical option will create the data with the keys in sorted order, and so generate the same output for the same Perl data every time. If you need the data in a specific order other than alphabetical then you can use the Tie::IxHash module as @mpapec describes in his answer.
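For the simple case where alphabetical key order is acceptable, that is just (a minimal sketch):

use JSON::PP ();

# canonical() sorts hash keys, so the same data always encodes to the same JSON.
my $json = JSON::PP->new->pretty->canonical;
print $json->encode( \%CFG );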
Alternatively, you can use the sort_by method from JSON::PP (the pure-Perl version of the module), which lets you pass a collation subroutine. That would let you prescribe the order of your keys, and could be as simple as using a hash that relates all the possible key names to a numerical sort order.
This program uses the sort_by method to reconstruct the JSON in the same order as the keys appear in your original hash. That is unlikely to be exactly the order you want, but the mechanism is there. It works by looking up each key in a hash table to determine how the keys should be ordered. Any keys (like SRV1 and SRV2 here) that don't appear in the table are sorted in alphabetical order by default.
use strict;
use warnings;
use JSON::PP ();

my %CFG = (
    'servers' => {
        'SRV1' => {
            'IP' => '99.32.4.0',
            'user' => 'aname',
            'pswd' => 'p4ssw0rd',
            'status' => 'unavailable'
        },
        'SRV2' => {
            'IP' => '129.99.10.5',
            'user' => 'guest',
            'pswd' => 'guest',
            'status' => 'unavailable'
        }
    },
    'timeout' => 60,
    'log' => {
        'file' => '/var/log/my_log.log',
        'level' => 'warn',
    },
    'temp' => 'remove me'
);

my %sort_order;
my $n = 0;
$sort_order{$_} = ++$n for qw/ servers timeout log temp /;
$sort_order{$_} = ++$n for qw/ IP user pswd status /;
$sort_order{$_} = ++$n for qw/ file level /;

my $json = JSON::PP->new->pretty->sort_by(\&json_sort);
print $json->encode(\%CFG);

sub json_sort {
    my ($aa, $bb) = map $sort_order{$_}, $JSON::PP::a, $JSON::PP::b;
    $aa and $bb and $aa <=> $bb or $JSON::PP::a cmp $JSON::PP::b;
}
generates this output
{
   "servers" : {
      "SRV1" : {
         "IP" : "99.32.4.0",
         "user" : "aname",
         "pswd" : "p4ssw0rd",
         "status" : "unavailable"
      },
      "SRV2" : {
         "IP" : "129.99.10.5",
         "user" : "guest",
         "pswd" : "guest",
         "status" : "unavailable"
      }
   },
   "timeout" : 60,
   "log" : {
      "file" : "/var/log/my_log.log",
      "level" : "warn"
   },
   "temp" : "remove me"
}
which can simply be saved to a file and similarly restored.
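Saving and restoring is then just a matter of writing the encoded string to a file and decoding it again later; for example (file name and error handling are illustrative):

use JSON::PP ();

my $json = JSON::PP->new->pretty->canonical;

# Write the configuration out...
open my $out, '>:encoding(UTF-8)', 'config.json' or die "config.json: $!";
print {$out} $json->encode( \%CFG );
close $out;

# ...and read it back later.
open my $in, '<:encoding(UTF-8)', 'config.json' or die "config.json: $!";
my $restored = $json->decode( do { local $/; <$in> } );
close $in;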

Test::mysqld won't close mysqld as expected

I have this script:
#!/var/home/cherry/opt/perl
use Test::More;
use DBI;
use Test::mysqld;
use Data::Dumper;

my $mysqld = Test::mysqld->new(
    base_dir => '/tmp/test_mysqls',
    my_cnf   => {
        'skip-networking' => '', # no TCP socket
    }
) or plan skip_all => $Test::mysqld::errstr;

my $dbh = DBI->connect(
    $mysqld->dsn(dbname => 'test'),
);

warn Dumper($mysqld);

done_testing();
When I run this, here's the output I get:
prove -lv t/test.t
t/test.t .. $VAR1 = bless( {
    '_owner_pid' => 21854,
    'base_dir' => '/tmp/test_mysqls',
    'pid' => 21918,
    'mysql_install_db' => '/usr/bin/mysql_install_db',
    'auto_start' => 2,
    'my_cnf' => {
        'tmpdir' => '/tmp/test_mysqls/tmp',
        'pid-file' => '/tmp/test_mysqls/tmp/mysqld.pid',
        'skip-networking' => '',
        'datadir' => '/tmp/test_mysqls/var',
        'socket' => '/tmp/test_mysqls/tmp/mysql.sock'
    },
    'mysqld' => '/usr/sbin/mysqld'
}, 'Test::mysqld' );
1..0
The test never completes. The script waits on a newline forever and never exits -- when I do ps aux, I can see the instance of mysqld running even after I hit Ctrl+C. I don't even know where to begin to troubleshoot this issue. Any hints?
Try adding a do {} block and invoking $mysqld->stop inside an eval {} to shut down the mysqld:
my $mysqld = Test::mysqld->new(
    base_dir => '/tmp/test_mysqls',
    my_cnf   => {
        'skip-networking' => '', # no TCP socket
    }
) or do {
    eval { $mysqld->stop };
    plan skip_all => $Test::mysqld::errstr;
};
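It may also be worth disconnecting and stopping the instance explicitly at the end of the test instead of relying on the object's destructor; a sketch, reusing $dbh and $mysqld from the script above:

# ... run the actual tests against $dbh here ...

# Close the client connection first, then shut the test mysqld down explicitly.
$dbh->disconnect;
$mysqld->stop;

done_testing();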

Understanding name spaces in POE-Tk

I posted "How to undersand the POE-Tk use of destroy?" in an attempt to reduce the bug in my production code to a test case. But it seems that the solution to the test case is not working in the full program.
The program is 800+ lines long so I am hesitant to post it in full. I realize that the snippets I provide here may be too short to be of any use, but I hope to get some direction in either where to look for a solution or what additional information I can provide.
Here is the Session::Create section of my POE-Tk app.
POE::Session->create(
    inline_states => {
        _start     => \&ui_start,
        get_zone   => \&get_zone,
        ping       => \&ping,
        mk_disable => \&mk_disable,
        mk_active  => \&mk_active,
        pop_up_add => \&pop_up_add,
        add_button_press => sub {
            my ($kernel, $session, $heap) = @_[KERNEL, SESSION, HEAP];
            print "\nadd button pressed\n\n";
            &validate;
        },
        ih_button_1_press => sub {
            my ($kernel, $session, $heap) = @_[KERNEL, SESSION, HEAP];
            print "\nih_button_1 pressed\n\n";
            if( Tk::Exists($heap->{ih_mw}) ) {
                print "\n\nih_mw exists in ih_button_1_press\n\n";
            } else {
                print "\n\nih_mw does not exist in ih_button_1_press\n\n";
            }
            1;
            $heap->{ih_mw}->destroy if Tk::Exists($heap->{ih_mw});
            &auth;
        },
        pop_up_del => \&pop_up_del,
        auth       => \&auth,
        # validate => \&validate,
        auth_routine => \&auth_routine,
        raise_widget => \&raise_widget,
        del_action   => \&del_action,
        over => sub { exit; }
    }
);
add_button_press is wired up here:
sub pop_up_add {
    ...
    my $add_but_2 = $add_frm_2->Button(
        -text    => "Add Record",
        -command => $session->postback("add_button_press"),
        -font    => "{Arial} 12 {bold}" )->pack(
            -anchor => 'c',
            -pady   => 6,
        );
    ...
}
validate creates the Toplevel widget $heap->{ih_mw}:
sub validate {
    ...
    if( ! $valid ) {
        print "\n! valid entered\n\n";
        $heap->{label_text} .= "Add record anyway?";
        my $lt_ref = \$heap->{label_text};
        ...
        my $heap->{ih_mw} = $heap->{add_mw}->Toplevel( -title => "ih_mw");
        ...
        if( Tk::Exists($heap->{ih_mw}) ) {
            print "\n\nih_mw exists in validate\n\n";
        } else {
            print "\n\nih_mw does not exist in validate\n\n";
        }
        ...
        my $ih_but1 = $heap->{ih_mw}->Button( -text => "Add",
            -font    => 'vfont',
            -command => $session->postback("ih_button_1_press"),
        )->pack( -pady => 5 );
    ...
}
Pressing $ih_but1 results in this:
C:\scripts\alias\resource>alias_poe_V-3_0_par.pl
add button pressed
sub validate called
! valid entered
ih_mw exists in validate
ih_button_1 pressed
ih_mw does not exist in ih_button_1_press
So the $heap->{ih_mw} widget seems to be unknown to the ih_button_1_press anonymous subroutine, even with the inclusion of "($kernel, $session, $heap) = @_[KERNEL, SESSION, HEAP];".
Where does $heap in &validate come from? You don't pass it as a parameter. Could $heap in &validate and $heap in &ih_button_1_press not be the same thing? Have you tried printing the stringified form of $heap to see if the addresses are the same in the two functions?
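One quick way to make that comparison (a sketch; it assumes you temporarily pass the heap into &validate so there is something to print on that side) would be:

# In the add_button_press handler, print the heap's address and hand it on:
add_button_press => sub {
    my ($kernel, $session, $heap) = @_[KERNEL, SESSION, HEAP];
    print "heap in add_button_press: $heap\n";   # e.g. HASH(0x55f1c2a3b4d8)
    validate($heap);
},

# In validate, print whatever actually arrives:
sub validate {
    my $heap = shift;
    print "heap in validate:         ", (defined $heap ? $heap : '<undef>'), "\n";
    # ...
}

If the two addresses differ (or the second one is undefined), the widget stored in one $heap can never be found through the other.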