Query Jenkins for job list using a Perl script

I am not sure if this question is a duplicate or not, but I cannot find any example of how one would do this. Is there any way to query Jenkins for the list of jobs? I have tried the Jenkins::API module from CPAN, but $jenkins->current_status()->jobs() returns a list of hash values. I am not sure if I am supposed to somehow translate these into readable job names. Any tips?

Have a look at http://metacpan.org/pod/Jenkins::API.
$jenkins->current_status() does indeed return hash values. Each job hash contains the keys 'color', 'name', and 'url', but they are nested in a list several levels down. I found Data::Dumper helpful for seeing the full structure.
current_status
Returns the current status of the server as returned by the API. This is a hash containing a fairly comprehensive list of what's going on.
$jenkins->current_status();
# {
# 'assignedLabels' => [
# {}
# ],
# 'description' => undef,
# 'jobs' => [
# {
# 'color' => 'blue',
# 'name' => 'Jenkins-API',
# 'url' => 'http://jenkins:8080/job/Jenkins-API/'
# },
# ...
# ]
Example:
use strict;
use warnings;
use Jenkins::API;

my $jenkins  = Jenkins::API->new({ base_url => 'http://localhost:8080' });
my @statuses = $jenkins->current_status();
for my $i (0 .. $#{ $statuses[0]{'jobs'} }) {
    print $statuses[0]{'jobs'}[$i]{'name'}, "\n";
}
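If the nesting is still unclear, a quick Data::Dumper pass (a throwaway sketch, separate from the example above) prints the whole structure so you can see exactly where 'jobs' sits:

use strict;
use warnings;
use Data::Dumper;
use Jenkins::API;

my $jenkins = Jenkins::API->new({ base_url => 'http://localhost:8080' });
# Dump the full server status once before writing any loops over it.
print Dumper( $jenkins->current_status() );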

Related

MongoDB/Perl: find_one doesn't return data after unrelated code

mongodb is v4.0.5
Perl is 5.26.3
MongoDB Perl driver is 2.0.3
This Data::Dumper output shows what's driving me crazy
INFO - $VAR1 = [
'275369249826930689 1',
{
'conf' => {
'param' => 'argument'
},
'id' => '275369249826930689',
'lastmsg' => '604195211232139552',
'_id' => bless( {
'oid' => ']:\',&�h�GeR'
}, 'BSON::OID' )
}
];
352832438449209345 275369249826930689
INFO - $VAR1 = [
'275369249826930689 2'
];
The second INFO - $VAR1 should show the same content as the first one. This is the original code, which I have broken down (see below) to find the culprit.
ddump(["$userid 1",
$c_identities->find_one({
channel => 'chan1',
id => $userid,
})
]);
my @filtered = reverse
grep { $_->{author}->{id} == $userid } @{$answers};
ddump(["$userid 2",
$c_identities->find_one({
channel => 'chan1',
id => $userid,
})
]);
ddump is just a wrapper for Data::Dumper. If I remove the "my @filtered" line, the second find_one again returns the expected result (a MongoDB document). $answers is just a listref of hashes - no objects - from some API, completely unrelated to MongoDB.
So I broke the "reverse grep" code down to see where the culprit is. The say output is the two numbers you see between the dumpers above. This is what I can do and still get an answer from the second find_one:
for my $answer (@{$answers}) {
say $answer->{author}->{id}, ' ', $userid;
push @filtered, $answer;
}
As long as I do just this, the second find_one delivers a result. If, however, I do this:
for my $answer (@{$answers}) {
say $answer->{author}->{id}, ' ', $userid;
if ($answer->{author}->{id} == $userid) {
}
push @filtered, $answer;
}
I get the output from above (where the second dumper yields no return from the find_one). It's insane - the if clause containing the numeric comparison causes the second find_one to fail! This is also the grep body in the intended code.
What's going on here? How can this have possibly any effect on the MongoDB methods?
Using the numeric comparison operator == numifies the value of $userid. It is probably too large to fit into an integer and becomes a float; even if it stays an integer, it loses its double quotes when serialized to JSON or a similar format, so the subsequent find_one query no longer matches the string stored in the collection. Using eq instead of == keeps the value unchanged.
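A minimal sketch of that fix, reusing the question's own names ($answers, $userid, @filtered):

# String comparison: eq does not numify $userid, so the value later
# sent to MongoDB by find_one stays exactly as it was read.
my @filtered = reverse
               grep { $_->{author}->{id} eq $userid } @{$answers};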

How to fetch values that are hard coded in a Perl subroutine?

I have a perl code like this:
use constant OPERATING_MODE_MAIN_ADMIN => 'super_admin';
use constant OPERATING_MODE_ADMIN => 'admin';
use constant OPERATING_MODE_USER => 'user';
sub system_details
{
return {
operating_modes => {
values => [OPERATING_MODE_MAIN_ADMIN, OPERATING_MODE_ADMIN, OPERATING_MODE_USER],
help => {
'super_admin' => 'The system displays the settings for super admin',
'admin' => 'The system displays settings for normal admin',
'user' => 'No settings are displayed. Only user level pages.'
}
},
log_level => {
values => [qw(FATAL ERROR WARN INFO DEBUG TRACE)],
help => "http://search.cpan.org/~mschilli/Log-Log4perl-1.49/lib/Log/Log4perl.pm#Log_Levels"
},
};
}
How will I access the "values" fields and "help" fields of each key from another subroutine? Suppose I want the values of operating_modes alone or log_level alone?
system_details() returns a hashref, which has two keys whose values are hashrefs. So you can dereference the sub's return, assign it to a hash, and then extract what you need:
my %sys = %{ system_details() };
my @loglevel_vals = @{ $sys{log_level}->{values} };
my $help_msg = $sys{log_level}->{help};
The @loglevel_vals array contains FATAL, ERROR, etc., while $help_msg has the message string.
This makes an extra copy of the hash, while one can instead work with a reference, as in doimen's answer:
my $sys = system_details();
my @loglevel_vals = @{ $sys->{log_level}->{values} };
But as the purpose is to interrogate the data in another sub, it also makes sense to work with a local copy, which is generally safer (against accidentally changing data in the caller).
There are modules that help with deciphering complex data structures, by displaying them. This helps devising ways to work with data. Often quoted is Data::Dumper, which also does more than show data. Some of the others are meant to simply display the data. A couple of nice ones are Data::Dump and Data::Printer.
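For instance, a quick one-off with Data::Printer (assuming the module is installed from CPAN):

use Data::Printer;

# Assign first, then pretty-print the nested structure with indentation.
my $details = system_details();
p $details;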
my $sys = system_details;
my $log_level = $sys->{'log_level'};
my @values = @{ $log_level->{'values'} };
my $help = $log_level->{'help'};
If you need to introspect the type of structure stored in help (for example help in operating_mode is a hash, but in log_level it is a string), use the ref builtin func.
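A small sketch of that ref check, driven by the structure system_details() returns above:

my $sys = system_details();
for my $section (sort keys %$sys) {
    my $help = $sys->{$section}{help};
    if (ref $help eq 'HASH') {      # operating_modes: per-value help texts
        print "$section: $_ => $help->{$_}\n" for sort keys %$help;
    }
    else {                          # log_level: a single string (URL)
        print "$section: $help\n";
    }
}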

Attempt to access upserted_id property in perl MongoDB Driver returns useless HASH(0x3572074)

I have a Perl script that pulls a table from a SQL database ($row variable) and attempts to do a MongoDB update like so:
my $res = $users->update({"meeting_id" => $row[0]},
{'$set' => {
"meeting_id" => $row[0],
"case_id" => $row[1],
"case_desc" => $row[2],
"date" => $row[3],
"start_time" => $row[4],
"end_time" => $row[5],
#"mediator_LawyerID" => $row[6],
"mediator_LawyerIDs" => \#medLawIds,
"case_number" => $row[6],
"case_name" => $row[7],
"location" => $row[8],
"number_of_parties" => $row[9],
"case_manager" => $row[10],
"last_updated" => $row[11],
"meeting_result" => $row[12],
"parties" => \#partyList
}},
{'upsert' => 1}) or die "I ain't update!!!";
My client now wants ICS style calendar invites sent to their mediators. Thus, I need to know whether an update or insert happened. The documentation for MongoDB::UpdateResult implies that this is how you access such a property:
my $id = $res->upserted_id;
So I tried:
bless ($res,"MongoDB::UpdateResult");
my $id = $res->upserted_id;
After this code $id is like:
HASH(0x356f8fc)
Are these the actual IDs? If so, how do I convert them to a hexadecimal string that can be cast to Mongo's ObjectId type? It should be noted I know absolutely nothing about Perl; if more of the code is relevant, at request I will post any section ASAP. It's 300 lines, so I didn't want to include the whole file off the bat.
EDIT: I should mention before anyone suggests this that using update_one instead of update returns the exact same result.
HASH(0x356f8fc) is a Perl Hash reference. It's basically some kind of (internal) memory address of some data.
The easiest way to get the contents is Data::Dumper:
use Data::Dumper;
[...]
my $result = $res->upserted_id;
print Dumper($result);
HASH(0x356f8fc) is just the human readable representation of the real pointer. You must dump it in the same process and can't pass it from one to another.
You'll probably end up with something like
my $id = $result->{_id};
See the PerlRef manpage for details.
See also the MongoDB documentation about write concern.
PS: Also remember that you could use your own IDs for MongoDB. You don't need to work with the generated ones.
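For completeness, a hedged sketch of reading that field: upserted_id is only set when an insert actually happened, and whether you get a BSON::OID object directly or a hashref wrapping one depends on the driver version, so the sketch checks before calling hex() (the BSON::OID method that returns the 24-character string):

use Scalar::Util qw(blessed);

my $raw = $res->upserted_id;                         # undef when an existing doc was updated
my $oid = ref $raw eq 'HASH' ? $raw->{_id} : $raw;   # unwrap if it is a plain hashref
if (blessed($oid) && $oid->can('hex')) {
    printf "upserted new document with _id %s\n", $oid->hex;
}
elsif (defined $oid) {
    printf "upserted new document with _id %s\n", "$oid";   # fall back to stringification
}
else {
    print "no upsert - an existing document was matched and updated\n";
}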

Rose::DB::Object::Manager query with a list of object ids

I'm trying to write a Rose::DB::Object query using either an array or a hash, but I'm unsure how to do it. I'm trying to write an update function based off of certain IDs enumerated in a list. Unfortunately, I do not have any other unique key to filter on to build the query, so I need to query specific IDs.
Essentially I am trying to programmatically write the following:
my $list = My::DB::Manager->get_items(query => [
{id => 1},
{id => 14},
{id => 210},
{id => 1102},
{id => 3151},
]);
This is the code I have so far, but I haven't been able to successfully achieve what I am trying to do:
use My::DB::Manager;
my @ary;
foreach (@_) {
my %col = ("id", $_);
push (@ary, \%col);
}
my $list = My::DB::Manager->get_items(query => \@ary);
...
./test.pl
Now the script just hangs with no output indefinitely.
I'm trying to avoid iterating through the DB::Manager and making a DB call on a per record basis as this script will be run via cron every 60 seconds and has the potential to return large sets.
The query parameter takes a reference to an array of name/value pairs, not a reference to an array of hash references. If you want objects where the value of the id column is one of a list of values, then use the name id and a reference to an array of ids as the value. This code should work (assuming the id values are in @_):
$list = My::DB::Manager->get_items(query => [ id => \#_ ]);
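For illustration, here is how that form could be used end to end (My::DB::Manager, get_items and the id column are the question's own names, so they are assumed here):

my @ids  = (1, 14, 210, 1102, 3151);
my $list = My::DB::Manager->get_items(query => [ id => \@ids ]);

# Rose::DB::Object::Manager get_* methods return a reference to an array
# of objects, and each column gets an accessor method of the same name.
for my $item (@$list) {
    printf "found id %d\n", $item->id;
}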
You push strings into @ary when you need to push Perl structures:
use My::DB::Manager;
my @ary;
foreach (@_) {
push (@ary, { id => $_ });
}
my $list = My::DB::Manager->get_items(query => [@ary]);
...
However, I think you can use query => [ id => [$id1, $id2, ... ], ...]:
use My::DB::Manager;
my $list = My::DB::Manager->get_items(query => [ id => \@_ ]);
...
I've never used Rose; this is based on the module's docs.

How can I create a hash of hashes from an array of hashes in Perl?

I have an array of hashes, all with the same set of keys, e.g.:
my $aoa= [
{NAME=>'Dave', AGE=>12, SEX=>'M', ID=>123456, NATIONALITY=>'Swedish'},
{NAME=>'Susan', AGE=>36, SEX=>'F', ID=>543210, NATIONALITY=>'Swedish'},
{NAME=>'Bart', AGE=>120, SEX=>'M', ID=>987654, NATIONALITY=>'British'},
]
I would like to write a subroutine that will convert this into a hash of hashes using a given key hierarchy:
my $key_hierarchy_a = ['SEX', 'NATIONALITY'];
sub aoh_to_hoh {
my ($aoa, $key_hierarchy_a) = @_;
...
}
will return
{M=>
{Swedish=>{{NAME=>'Dave', AGE=>12, ID=>123456}},
British=>{{NAME=>'Bart', AGE=>120, ID=>987654}}},
F=>
{Swedish=>{{NAME=>'Susan', AGE=>36, ID=>543210}}
}
Note this not only creates the correct key hierarchy but also removes the now-redundant keys.
I'm getting stuck at the point where I need to create the new, most inner hash in its correct hierarchical location.
The problem is I don't know the "depth" (i.e. the number of keys). If I had a constant number, I could do something like:
$h{$inner_hash{$PRIMARY_KEY}}{$inner_hash{$SECONDARY_KEY}}{...} = filter_copy($inner_hash,[$PRIMARY_KEY,$SECONDARY_KEY])
so perhaps I can write a loop that will add one level at a time, remove that key from the hash, then add the remaining hash to the "current" location, but it's a bit cumbersome and also I'm not sure how to keep a 'location' in a hash of hashes...
use Data::Dumper;
my $aoa= [
{NAME=>'Dave', AGE=>12, SEX=>'M', ID=>123456, NATIONALITY=>'Swedish'},
{NAME=>'Susan', AGE=>36, SEX=>'F', ID=>543210, NATIONALITY=>'Swedish'},
{NAME=>'Bart', AGE=>120, SEX=>'M', ID=>987654, NATIONALITY=>'British'},
];
sub aoh_to_hoh {
my ($aoa, $key_hierarchy_a) = @_;
my $result = {};
my $last_key = $key_hierarchy_a->[-1];
foreach my $orig_element (@$aoa) {
my $cur = $result;
# song and dance to clone an element
my %element = %$orig_element;
foreach my $key (@$key_hierarchy_a) {
my $value = delete $element{$key};
if ($key eq $last_key) {
$cur->{$value} ||= [];
push @{$cur->{$value}}, \%element;
} else {
$cur->{$value} ||= {};
$cur = $cur->{$value};
}
}
}
return $result;
}
my $key_hierarchy_a = ['SEX', 'NATIONALITY'];
print Dumper(aoh_to_hoh($aoa, $key_hierarchy_a));
As per @FM's comment, you really want an extra array level in there.
The output:
$VAR1 = {
'F' => {
'Swedish' => [
{
'ID' => 543210,
'NAME' => 'Susan',
'AGE' => 36
}
]
},
'M' => {
'British' => [
{
'ID' => 987654,
'NAME' => 'Bart',
'AGE' => 120
}
],
'Swedish' => [
{
'ID' => 123456,
'NAME' => 'Dave',
'AGE' => 12
}
]
}
};
EDIT: Oh, BTW - if anyone knows how to elegantly clone contents of a reference, please teach. Thanks!
EDIT EDIT: @FM helped. All better now :D
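On the cloning question: one common way to deep-clone a reference, should the shallow %$orig_element copy above ever need to go deeper than one level, is dclone from the core Storable module. A tiny sketch:

use Storable qw(dclone);

# Fully independent copy of the nested structure, not just the top level.
my $element = dclone($orig_element);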
As you've experienced, writing code to create hash structures of arbitrary depth is a bit tricky. And the code to access such structures is equally tricky. Which makes one wonder: Do you really want to do this?
A simpler approach might be to put the original information in a database. As long as the keys you care about are indexed, the DB engine will be able to retrieve rows of interest very quickly: Give me all persons where SEX = female and NATIONALITY = Swedish. Now that sounds promising!
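As a rough sketch of that idea (DBI with an in-memory SQLite table; DBD::SQLite is assumed to be available), using the sample rows from the question's $aoa:

use DBI;

my $dbh = DBI->connect('dbi:SQLite:dbname=:memory:', '', '', { RaiseError => 1 });
$dbh->do('CREATE TABLE people (name TEXT, age INTEGER, sex TEXT, id INTEGER, nationality TEXT)');

my $ins = $dbh->prepare('INSERT INTO people VALUES (?, ?, ?, ?, ?)');
$ins->execute( @{$_}{qw(NAME AGE SEX ID NATIONALITY)} ) for @$aoa;

# "Give me all persons where SEX = F and NATIONALITY = Swedish"
my $rows = $dbh->selectall_arrayref(
    'SELECT name, age, id FROM people WHERE sex = ? AND nationality = ?',
    { Slice => {} }, 'F', 'Swedish',
);
print "$_->{name} ($_->{age}, id $_->{id})\n" for @$rows;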
You might also find this loosely related question of interest.