I've got the following code:
package MyPackage::ResultSet::Case;
use base 'DBIx::Class::ResultSet';
sub cases_last_fourteen_days {
    my ($self, $username) = @_;
    return $self->search({
        username => $username,
        date => { '>=' => 'DATE_SUB(CURDATE(),INTERVAL 14 DAY)' },
    });
}
But when I try to use it this way:
$schema->resultset('Case')->cases_last_fourteen_days($username)
I always get zero results. Can anyone tell me what I'm doing wrong?
Thanks!
The way you've written the SQL::Abstract condition results in this WHERE clause:
WHERE username = ? AND date >= 'DATE_SUB(CURDATE(),INTERVAL 14 DAY)'
When you wish to use database functions in a where clause you need to use a reference to a scalar, like this:
date => { '>=' => \'DATE_SUB(CURDATE(),INTERVAL 14 DAY)' },
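For reference, here is the whole method with the fix applied (the same code as in the question, only with the scalar reference added):

sub cases_last_fourteen_days {
    my ($self, $username) = @_;
    return $self->search({
        username => $username,
        # A reference to a scalar is inserted into the SQL verbatim,
        # instead of being bound and quoted as a string value.
        date     => { '>=' => \'DATE_SUB(CURDATE(),INTERVAL 14 DAY)' },
    });
}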
ProTip: if you set the environment variable DBIC_TRACE to 1, DBIx::Class prints the queries it generates to STDERR, so you can check whether they really do what you want.
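For example (the script name here is just a placeholder):

$ DBIC_TRACE=1 perl your_script.pl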
MongoDB is v4.0.5
Perl is 5.26.3
The MongoDB Perl driver is 2.0.3
This Data::Dumper output shows what's driving me crazy
INFO - $VAR1 = [
'275369249826930689 1',
{
'conf' => {
'param' => 'argument'
},
'id' => '275369249826930689',
'lastmsg' => '604195211232139552',
'_id' => bless( {
'oid' => ']:\',&�h�GeR'
}, 'BSON::OID' )
}
];
352832438449209345 275369249826930689
INFO - $VAR1 = [
'275369249826930689 2'
];
The second INFO - $VAR1 should show the same content as the first one. This is the original code, which I broke down (see below) to find the culprit.
ddump(["$userid 1",
$c_identities->find_one({
channel => 'chan1',
id => $userid,
})
]);
my @filtered = reverse
    grep { $_->{author}->{id} == $userid } @{$answers};
ddump(["$userid 2",
$c_identities->find_one({
channel => 'chan1',
id => $userid,
})
]);
ddump is just a wrapper for Data::Dumper. If I remove the "my @filtered" line, the second find_one again returns the expected result (a MongoDB document). $answers is just an arrayref of hashes - no objects - from some API, completely unrelated to MongoDB.
So I broke the "reverse grep" code down to see where the culprit is. The say output is the two numbers you see between the dumpers above. This is what I can do and still get an answer from the second find_one:
for my $answer (@{$answers}) {
    say $answer->{author}->{id}, ' ', $userid;
    push @filtered, $answer;
}
As long as I do just this, the second find_one delivers a result. If, however, I do this:
for my $answer (@{$answers}) {
    say $answer->{author}->{id}, ' ', $userid;
    if ($answer->{author}->{id} == $userid) {
    }
    push @filtered, $answer;
}
I get the output from above (where the second dumper yields no return from the find_one). It's insane: the if clause containing the numeric comparison causes the second find_one to fail! This is also the grep body in the intended code.
What's going on here? How can this have possibly any effect on the MongoDB methods?
Using the numeric comparison operator == numifies the value. If the number is too large to fit into an integer it becomes a float and loses precision; even when it does fit, the scalar is now flagged as a number and loses its double quotes when serialized to JSON or a similar format (here, BSON), so the query no longer matches the string stored in the collection. Using eq instead of == keeps the value unchanged.
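A minimal sketch of the fix, using the variables from the question:

my @filtered = reverse
    grep { $_->{author}->{id} eq $userid }   # string compare; $userid is not numified
    @{$answers};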
I have Perl code like this:
use constant OPERATING_MODE_MAIN_ADMIN => 'super_admin';
use constant OPERATING_MODE_ADMIN => 'admin';
use constant OPERATING_MODE_USER => 'user';
sub system_details
{
return {
operating_modes => {
values => [OPERATING_MODE_MAIN_ADMIN, OPERATING_MODE_ADMIN, OPERATING_MODE_USER],
help => {
'super_admin' => 'The system displays the settings for super admin',
'admin' => 'The system displays settings for normal admin',
'user' => 'No settings are displayed. Only user level pages.'
}
},
log_level => {
values => [qw(FATAL ERROR WARN INFO DEBUG TRACE)],
help => "http://search.cpan.org/~mschilli/Log-Log4perl-1.49/lib/Log/Log4perl.pm#Log_Levels"
},
};
}
How do I access the "values" fields and "help" fields of each key from another subroutine? Suppose I want the values of operating_modes alone, or log_level alone?
system_details() returns a hashref, which has two keys whose values are themselves hashrefs. So you can dereference the sub's return, assign it into a hash, and then extract what you need:
my %sys = %{ system_details() };
my @loglevel_vals = @{ $sys{log_level}->{values} };
my $help_msg = $sys{log_level}->{help};
The @loglevel_vals array contains FATAL, ERROR etc., while $help_msg has the message string.
This makes an extra copy of a hash while one can work with a reference, as in doimen's answer
my $sys = system_details();
my @loglevel_vals = @{ $sys->{log_level}->{values} };
But as the purpose is to interrogate the data in another sub, it also makes sense to work with a local copy, which is generally safer (against accidentally changing the data in the caller).
There are modules that help with deciphering complex data structures by displaying them, which helps in devising ways to work with the data. Often quoted is Data::Dumper, which also does more than show data. Some of the others are meant simply to display the data; a couple of nice ones are Data::Dump and Data::Printer.
my $sys = system_details;
my $log_level = $sys->{'log_level'};
my @values = @{ $log_level->{'values'} };
my $help = $log_level->{'help'};
If you need to introspect the type of structure stored in help (for example, help in operating_modes is a hash, but in log_level it is a string), use the ref builtin, for example:
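A short sketch against the structure above:

my $sys = system_details();
for my $key (keys %$sys) {
    my $help = $sys->{$key}{help};
    if (ref $help eq 'HASH') {
        # operating_modes: one message per mode
        print "$key: $_ => $help->{$_}\n" for keys %$help;
    }
    else {
        # log_level: a plain string (here, a URL)
        print "$key: $help\n";
    }
}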
(Similar to, but with more concrete details than, #11526999)
My Result Classes have been built using dbicdump, however I wish to overload the default accessor for a date field.
Works, but a bodge
To hackytest my idea, I simply added an accessor attribute to the created date key of the add_columns call:
__PACKAGE__->add_columns(
"stamp_id",
{
data_type => "integer",
is_auto_increment => 1,
is_nullable => 0,
sequence => "timestamp_stamp_id_seq",
},
"date",
{ data_type => "date", is_nullable => 0, accessor => '_date' },
);
... and created my accessor routine below the Schema::Loader checksum line:
# DO NOT MODIFY THIS OR ANYTHING ABOVE! md5sum:nB5koMYAhBwz4ET77Q8qlA
sub date {
my $self = shift;
warn "overloaded date\n"; # Added for debugging
my $date;
# The date needs to be just the date, not the time
if ( @_ ) {
$date = shift;
if ( $date =~ /^([\d\-]+)/ ) {
$date = $1
}
return $self->_date($date)
}
# Fetch the column value & remove the time part.
$date = $self->_date;
if ( $date =~ /^([\d\-]+)/ ) {
$date = $1
}
return $date;
}
This works, as it returns the expected 2014-10-04, but is a bodge.
Do it the right way
The problem is that I've hacked the checksum'd code, so I can't neatly re-generate my Class objects.
Reading ResultSource and the CookBook the correct approach appears to be:
Have the ResultSource built by dbicdump as standard:
__PACKAGE__->add_columns(
"stamp_id",
{
data_type => "integer",
is_auto_increment => 1,
is_nullable => 0,
sequence => "timestamp_stamp_id_seq",
},
"date",
{ data_type => "date", is_nullable => 0 },
);
.... add a change to the accessor below the line, using the + prefix to indicate it's an alteration to an existing column definition:
# DO NOT MODIFY THIS OR ANYTHING ABOVE! md5sum:nB5koMYAhBwz4ET77Q8qlA
__PACKAGE__->add_columns(
"+date", { accessor => '_date' },
);
.... use the overload method as before
Not working.
I've double-checked my spelling, I've tried add_column rather than add_columns, and I've tried putting the second add_columns directly below the first - all to no avail... the code uses the default accessor, and returns 2014-10-04T00:00:00
How do I override the default accessor, so I can use my own method?
Thankee...
What you need here is a col_accessor_map passed in as a loader option.
col_accessor_map => {
    table_name => {
        date => '_date',
    }
}
You can pass loader options to dbicdump with -o.
$ dbicdump -o col_accessor_map="{ table_name => { date => '_date' } }" ... other options ...
(Replace table_name above with the name of your table - that's obvious, right?)
Update: This was posted untested, and when I finally got round to testing it, I found it didn't work. After a conversation with the author on IRC I was told that the col_accessor_map option doesn't support this nested hash approach, so if you wanted to use this approach you would need to use a coderef.
However, the author also agreed that adding this support would be a good idea and I've just got back from lunch to find this Github commit which adds the feature. I don't know how soon it will get to CPAN though.
This may be the first time that CPAN has been updated to make a SO answer correct :-)
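For completeness, a sketch of the coderef form. The argument list shown is an assumption on my part; verify it against the DBIx::Class::Schema::Loader docs for your installed version:

col_accessor_map => sub {
    my ($column_name, $default_accessor) = @_;    # assumed arguments; check your DBICSL version
    # Only remap the 'date' column; everything else keeps its default accessor.
    return $column_name eq 'date' ? '_date' : $default_accessor;
},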
At a different level of abstraction I believe you could use a method modifier
use Class::Method::Modifiers; # or Moose/Moo
around date => sub {...};
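A fleshed-out sketch of that approach, assuming the stored value stringifies to an ISO timestamp like 2014-10-04T00:00:00 as in the question:

use Class::Method::Modifiers;

around date => sub {
    my ($orig, $self, @args) = @_;
    # Trim the time part off any incoming value before storing it.
    $args[0] =~ s/T.*// if @args;
    my $val = $self->$orig(@args);
    # Trim the time part off the value handed back, too.
    $val =~ s/T.*// if defined $val;
    return $val;
};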
I'm sure I'm overlooking something glaringly obvious, and I apologize for the newbie question, but I've spent several hours going back and forth through the documentation for DBIx::Class and Catalyst and am not finding the answer I need...
What I'm trying to do is automate creation of sub-menus based on the contents of my database. I have three tables in the database to do so: maps (in which sub-menu items are found), menus (contains names of top-level menus), maps_menus (assigns maps to top-level menus). I've written a subroutine to return a hash of resultsets, with the plan of using a Template Toolkit nested loop to build the top-level and sub-menus.
Basically, for each top-level menu in menus, I'm trying to run the following query and (eventually) build a sub-menu based on the result:
SELECT * FROM maps JOIN maps_menus ON maps.id_maps = maps_menus.id_maps WHERE maps_menus.id_menus = (current id_menus);
Here is the subroutine, located in lib/MyApp/Schema/ResultSet/Menus.pm
# Build a hash of hashes for menu generation
sub build_menu {
my ($self, $maps, $maps_menus) = @_;
my %menus;
while (my $row = $self->next) {
my $id = $row->get_column('id_menus');
my $name = $row->get_column('name');
my $sub = $maps_menus->search(
    { 'id_maps' => $id },
    {
        join      => 'maps',
        '+select' => [ 'maps.id_maps', 'maps.name', 'maps.map_file' ],
        '+as'     => [ 'id_maps', 'name', 'map_file' ],
    }
);
$menus{$name} = $sub;
# See if it worked...
print STDERR "$name\n";
while (my $m = $sub->next) {
my $m_id = $m->get_column('id_maps');
my $m_name = $m->get_column('name');
my $m_file = $m->get_column('map_file');
print STDERR "\t$m_id, $m_name, $m_file\n";
}
}
return \%menus;
}
I am calling this from lib/MyApp/Controller/Maps.pm thusly...
$c->stash(menus => [$c->model('DB::Menus')->build_menu($c->model('DB::Map'), $c->model('DB::MapsMenus'))]);
When I attempt to pull up the page, I get all sorts of exceptions, the top-most of which is:
[error] No such relationship maps on MapsMenus at /home/catalyst/perl5/lib/perl5/DBIx/Class/Schema.pm line 1078
Which, as far as I can tell, originates from the call to $sub->next. I take this to mean I'm doing my query incorrectly and not getting the results I think I should. However, I'm not sure what I'm missing.
I found the following lines, defining the relationship to maps, in lib/MyApp/Schema/Result/MapsMenus.pm
__PACKAGE__->belongs_to(
"id_map",
"MyApp::Schema::Result::Map",
{ id_maps => "id_maps" },
{ is_deferrable => 1, on_delete => "CASCADE", on_update => "CASCADE" },
);
...and in lib/MyApp/Schema/Result/Map.pm
__PACKAGE__->has_many(
"maps_menuses",
"MyApp::Schema::Result::MapsMenus",
{ "foreign.id_maps" => "self.id_maps" },
{ cascade_copy => 0, cascade_delete => 0 },
);
No idea why it's calling it "maps_menuses" -- that was generated by Catalyst. Could that be the problem?
Any help would be greatly appreciated!
I'd suggest using prefetch on the two relationships which form the many-to-many relationship helper, and maybe HashRefInflator if you don't need access to the row objects.
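A sketch of what that could look like. The 'id_map' relationship name is taken from the MapsMenus class shown in the question; the 'maps_menuses' relationship on the Menus class is an assumption, so check your generated Result classes:

my $menus = $c->model('DB::Menus')->search(
    {},
    {
        # Pull the link-table rows and their maps in a single query.
        prefetch     => { maps_menuses => 'id_map' },
        # Plain hashrefs instead of row objects.
        result_class => 'DBIx::Class::ResultClass::HashRefInflator',
    },
);
$c->stash(menus => [ $menus->all ]);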
Note that Catalyst doesn't generate a DBIC schema (DBIC is, by the way, the official abbreviation for DBIx::Class; DBIx is a whole namespace); SQL::Translator or DBIx::Class::Schema::Loader does. Look at the docs of the module you've used to find out how to influence its naming.
Also feel free to change the names if they don't fit you.
I have an array of hashes, all with the same set of keys, e.g.:
my $aoa= [
{NAME=>'Dave', AGE=>12, SEX=>'M', ID=>123456, NATIONALITY=>'Swedish'},
{NAME=>'Susan', AGE=>36, SEX=>'F', ID=>543210, NATIONALITY=>'Swedish'},
{NAME=>'Bart', AGE=>120, SEX=>'M', ID=>987654, NATIONALITY=>'British'},
];
I would like to write a subroutine that will convert this into a hash of hashes using a given key hierarchy:
my $key_hierarchy_a = ['SEX', 'NATIONALITY'];
sub aoh_to_hoh {
    my ($aoa, $key_hierarchy_a) = @_;
    ...
}
so that aoh_to_hoh($aoa, $key_hierarchy_a) returns:
{M=>
    {Swedish=>{{NAME=>'Dave', AGE=>12, ID=>123456}},
     British=>{{NAME=>'Bart', AGE=>120, ID=>987654}}},
 F=>
    {Swedish=>{{NAME=>'Susan', AGE=>36, ID=>543210}}}
}
Note this not only creates the correct key hierarchy but also removes the now-redundant keys.
I'm getting stuck at the point where I need to create the new, most inner hash in its correct hierarchical location.
The problem is I don't know the "depth" (i.e. the number of keys). If I had a constant number, I could do something like:
$h{$inner_hash{$PRIMARY_KEY}}{$inner_hash{$SECONDARY_KEY}}{...} = filter_copy($inner_hash, [$PRIMARY_KEY, $SECONDARY_KEY])
so perhaps I can write a loop that adds one level at a time, removes that key from the hash, and then adds the remaining hash at the "current" location - but that's a bit cumbersome, and I'm also not sure how to keep a 'location' in a hash of hashes...
use Data::Dumper;
my $aoa= [
{NAME=>'Dave', AGE=>12, SEX=>'M', ID=>123456, NATIONALITY=>'Swedish'},
{NAME=>'Susan', AGE=>36, SEX=>'F', ID=>543210, NATIONALITY=>'Swedish'},
{NAME=>'Bart', AGE=>120, SEX=>'M', ID=>987654, NATIONALITY=>'British'},
];
sub aoh_to_hoh {
    my ($aoa, $key_hierarchy_a) = @_;
    my $result = {};
    my $last_key = $key_hierarchy_a->[-1];
    foreach my $orig_element (@$aoa) {
        my $cur = $result;
        # song and dance to clone an element
        my %element = %$orig_element;
        foreach my $key (@$key_hierarchy_a) {
            my $value = delete $element{$key};
            if ($key eq $last_key) {
                $cur->{$value} ||= [];
                push @{$cur->{$value}}, \%element;
            } else {
                $cur->{$value} ||= {};
                $cur = $cur->{$value};
            }
        }
    }
    return $result;
}
my $key_hierarchy_a = ['SEX', 'NATIONALITY'];
print Dumper(aoh_to_hoh($aoa, $key_hierarchy_a));
As per @FM's comment, you really want an extra array level in there.
The output:
$VAR1 = {
'F' => {
'Swedish' => [
{
'ID' => 543210,
'NAME' => 'Susan',
'AGE' => 36
}
]
},
'M' => {
'British' => [
{
'ID' => 987654,
'NAME' => 'Bart',
'AGE' => 120
}
],
'Swedish' => [
{
'ID' => 123456,
'NAME' => 'Dave',
'AGE' => 12
}
]
}
};
EDIT: Oh, BTW - if anyone knows how to elegantly clone contents of a reference, please teach. Thanks!
EDIT EDIT: @FM helped. All better now :D
As you've experienced, writing code to create hash structures of arbitrary depth is a bit tricky. And the code to access such structures is equally tricky. Which makes one wonder: Do you really want to do this?
A simpler approach might be to put the original information in a database. As long as the keys you care about are indexed, the DB engine will be able to retrieve rows of interest very quickly: Give me all persons where SEX = female and NATIONALITY = Swedish. Now that sounds promising!
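A minimal sketch with DBI and an in-memory SQLite table (table and column names are illustrative, not from the question):

use DBI;

my $dbh = DBI->connect('dbi:SQLite:dbname=:memory:', '', '', { RaiseError => 1 });
$dbh->do('CREATE TABLE persons (name TEXT, age INTEGER, sex TEXT, id INTEGER, nationality TEXT)');
# Index the keys we care about so lookups stay fast.
$dbh->do('CREATE INDEX idx_sex_nat ON persons (sex, nationality)');

my $rows = $dbh->selectall_arrayref(
    'SELECT name, age, id FROM persons WHERE sex = ? AND nationality = ?',
    { Slice => {} },    # return each row as a hashref
    'F', 'Swedish',
);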
You might also find this loosely related question of interest.