Is DBIx::Class::Row::set_columns clever enough to update prefetched child rows?
I've tried it, and it doesn't seem to be. I'm probably expecting too much magic.
I did something like this:
my $data = {
    id   => 1,
    date => '2015-06-27',
    # etc.
    invoice_lines => [{
        id => 101,
        # etc.
    }],
};
my $rs = $schema->resultset('Invoice')->search(
    { 'me.id' => $id },
    { prefetch => 'invoice_lines' },
)->first;
$rs->set_columns($data);
and got something like this:
SELECT me.id, me.date, ..., invoice_lines.id, ...
  FROM invoices me
  LEFT JOIN invoice_lines invoice_lines ON invoice_lines.invoice_id = me.id
 WHERE ( me.id = ? )
 ORDER BY me.id: '1'
Mojo::Reactor::EV: Read failed: DBIx::Class::Row::get_column(): No such column 'invoice_lines' on MyProg::DB::Schema::Result::Invoice at /home/chris/stuff/Invoices.pm line 289
It thinks 'invoice_lines' is just another column, rather than a relationship. The relationships are working properly when inserting into or reading from the database, so I haven't included all the gory details.
This feature isn't core although it is planned.
Currently the DBIx::Class::ResultSet::RecursiveUpdate module implements this.
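A minimal sketch of its functional interface, reusing the column names from the question (see the RecursiveUpdate docs for the exact semantics of how related rows are matched and updated):
use DBIx::Class::ResultSet::RecursiveUpdate;

# recursive_update walks the nested data structure, matching existing
# child rows by primary key and updating them in place
my $invoice = DBIx::Class::ResultSet::RecursiveUpdate::Functions::recursive_update(
    resultset => $schema->resultset('Invoice'),
    updates   => {
        id            => 1,
        date          => '2015-06-27',
        invoice_lines => [
            { id => 101 },  # etc.
        ],
    },
);
Alternatively, per its docs the module can be set as the schema's default resultset class, after which $rs->recursive_update($data) is available directly.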
Related
(Similar to, but with more concrete details than, #11526999)
My Result Classes have been built using dbicdump; however, I wish to overload the default accessor for a date field.
Works, but a bodge
To hackytest my idea, I simply added an accessor attribute to the created date key of the add_columns call:
__PACKAGE__->add_columns(
    "stamp_id",
    {
        data_type         => "integer",
        is_auto_increment => 1,
        is_nullable       => 0,
        sequence          => "timestamp_stamp_id_seq",
    },
    "date",
    { data_type => "date", is_nullable => 0, accessor => '_date' },
);
... and created my accessor routine below the Schema::Loader checksum line:
# DO NOT MODIFY THIS OR ANYTHING ABOVE! md5sum:nB5koMYAhBwz4ET77Q8qlA
sub date {
    my $self = shift;
    warn "overloaded date\n"; # Added for debugging
    my $date;
    # The date needs to be just the date, not the time
    if (@_) {
        $date = shift;
        if ( $date =~ /^([\d\-]+)/ ) {
            $date = $1;
        }
        return $self->_date($date);
    }
    # Fetch the column value & remove the time part.
    $date = $self->_date;
    if ( $date =~ /^([\d\-]+)/ ) {
        $date = $1;
    }
    return $date;
}
This works, as it returns an expected 2014-10-04, but is a bodge.
Do it the right way
The problem is that I've hacked the checksum'd code, so I can't neatly re-generate my Class objects.
Reading ResultSource and the Cookbook, the correct approach appears to be:
Have the ResultSource built by dbicdump as standard:
__PACKAGE__->add_columns(
    "stamp_id",
    {
        data_type         => "integer",
        is_auto_increment => 1,
        is_nullable       => 0,
        sequence          => "timestamp_stamp_id_seq",
    },
    "date",
    { data_type => "date", is_nullable => 0 },
);
... then change the accessor below the line, using the + to indicate it's an alteration to an existing definition:
# DO NOT MODIFY THIS OR ANYTHING ABOVE! md5sum:nB5koMYAhBwz4ET77Q8qlA
__PACKAGE__->add_columns(
    "+date", { accessor => '_date' },
);
... use the overloaded method as before
Not working.
I've double-checked my spelling, I've tried add_column rather than add_columns, and I've tried putting the second add_columns directly below the first - all to no avail... the code uses the default accessor, and returns 2014-10-04T00:00:00
How do I over-ride the default accessor, so I can use my own method?
Thankee...
What you need here is a col_accessor_map passed in as a loader option.
col_accessor_map => {
    table_name => {
        date => '_date',
    },
},
You can pass loader options to dbicdump with -o.
$ dbicdump -o col_accessor_map="{ table_name => { date => '_date' } }" ... other options ...
(Replace table_name above with the name of your table - that's obvious, right?)
Update: This was posted untested, and when I finally got round to testing it, I found it didn't work. After a conversation with the author on IRC I was told that the col_accessor_map option doesn't support this nested hash approach, so if you wanted to use this approach you would need to use a coderef.
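A coderef along these lines should work (a hedged sketch: the argument list is my reading of the DBIx::Class::Schema::Loader::Base docs, so double-check it against your installed version):
col_accessor_map => sub {
    # assumed arguments: the column name, the accessor name the loader
    # would have generated, and a hashref of context including table_name
    my ( $col_name, $default_accessor, $info ) = @_;
    return '_date'
        if $info->{table_name} eq 'table_name' && $col_name eq 'date';
    return $default_accessor;
},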
However, the author also agreed that adding this support would be a good idea and I've just got back from lunch to find this Github commit which adds the feature. I don't know how soon it will get to CPAN though.
This may be the first time that CPAN has been updated to make a SO answer correct :-)
At a different level of abstraction, I believe you could use a method modifier:
use Class::Method::Modifiers; # or Moose/Moo
around date => sub {...};
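Fleshed out, that might look something like this (a sketch, assuming the modifier is installed after the column accessor exists, e.g. below the checksum line in the Result class):
use Class::Method::Modifiers 'around';

around date => sub {
    my ( $orig, $self, @args ) = @_;
    if (@args) {
        # setter: trim the incoming value to the date part before storing
        $args[0] = $1 if defined $args[0] && $args[0] =~ /^([\d\-]+)/;
        return $self->$orig(@args);
    }
    # getter: strip any time component from the stored value
    my $date = $self->$orig;
    $date = $1 if defined $date && $date =~ /^([\d\-]+)/;
    return $date;
};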
I'm sure I'm overlooking something glaringly obvious and I apologize for the newbie question, but I've spent several hours back and forth through documentation for DBIx::Class and Catalyst and am not finding the answer I need...
What I'm trying to do is automate creation of sub-menus based on the contents of my database. I have three tables in the database to do so: maps (in which sub-menu items are found), menus (contains names of top-level menus), maps_menus (assigns maps to top-level menus). I've written a subroutine to return a hash of resultsets, with the plan of using a Template Toolkit nested loop to build the top-level and sub-menus.
Basically, for each top-level menu in menus, I'm trying to run the following query and (eventually) build a sub-menu based on the result:
SELECT * FROM maps JOIN maps_menus ON maps.id_maps = maps_menus.id_maps WHERE maps_menus.id_menus = (current id_menus);
Here is the subroutine, located in lib/MyApp/Schema/ResultSet/Menus.pm
# Build a hash of hashes for menu generation
sub build_menu {
    my ($self, $maps, $maps_menus) = @_;
    my %menus;
    while (my $row = $self->next) {
        my $id   = $row->get_column('id_menus');
        my $name = $row->get_column('name');
        my $sub  = $maps_menus->search(
            { 'id_maps' => $id },
            {
                join      => 'maps',
                '+select' => [ 'maps.id_maps', 'maps.name', 'maps.map_file' ],
                '+as'     => [ 'id_maps', 'name', 'map_file' ],
            }
        );
        $menus{$name} = $sub;
        # See if it worked...
        print STDERR "$name\n";
        while (my $m = $sub->next) {
            my $m_id   = $m->get_column('id_maps');
            my $m_name = $m->get_column('name');
            my $m_file = $m->get_column('map_file');
            print STDERR "\t$m_id, $m_name, $m_file\n";
        }
    }
    return \%menus;
}
I am calling this from lib/MyApp/Controller/Maps.pm thusly...
$c->stash(menus => [$c->model('DB::Menus')->build_menu($c->model('DB::Map'), $c->model('DB::MapsMenus'))]);
When I attempt to pull up the page, I get all sorts of exceptions, the top-most of which is:
[error] No such relationship maps on MapsMenus at /home/catalyst/perl5/lib/perl5/DBIx/Class/Schema.pm line 1078
Which, as far as I can tell, originates from the call to $sub->next. I take this as meaning I'm doing my query incorrectly and not getting the results I think I should be. However, I'm not sure what I'm missing.
I found the following lines, defining the relationship to maps, in lib/MyApp/Schema/Result/MapsMenus.pm
__PACKAGE__->belongs_to(
    "id_map",
    "MyApp::Schema::Result::Map",
    { id_maps => "id_maps" },
    { is_deferrable => 1, on_delete => "CASCADE", on_update => "CASCADE" },
);
...and in lib/MyApp/Schema/Result/Map.pm
__PACKAGE__->has_many(
    "maps_menuses",
    "MyApp::Schema::Result::MapsMenus",
    { "foreign.id_maps" => "self.id_maps" },
    { cascade_copy => 0, cascade_delete => 0 },
);
No idea why it's calling it "maps_menuses" -- that was generated by Catalyst. Could that be the problem?
Any help would be greatly appreciated!
I'd suggest using prefetch of the two relationships which form the many-to-many relationship helper and maybe using HashRefInflator if you don't need access to the row objects.
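Something along these lines (a hedged sketch: 'maps_menus' is a placeholder for whatever your Menus result class actually names its has_many to MapsMenus, and 'id_map' is the belongs_to shown in the question):
my $menus_rs = $c->model('DB::Menus')->search(
    {},
    { prefetch => { maps_menus => 'id_map' } },  # bridge table, then the Map row
);

# plain hashrefs instead of row objects; the prefetch means one query in total
$menus_rs->result_class('DBIx::Class::ResultClass::HashRefInflator');

my %menus;
for my $menu ($menus_rs->all) {
    $menus{ $menu->{name} } =
        [ map { $_->{id_map} } @{ $menu->{maps_menus} || [] } ];
}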
Note that Catalyst doesn't generate a DBIC schema (DBIC is, by the way, the official abbreviation for DBIx::Class; DBIx is a whole namespace); SQL::Translator or DBIx::Class::Schema::Loader does. Look at the docs of the module you've used to find out how to influence its naming.
Also feel free to change the names if they don't fit you.
I tried to emulate this SQL in DBIx::Class using the update_or_new function.
UPDATE user SET lastseen = GREATEST( lastseen, ?::timestamp ) WHERE userid = ?
It gives an error in InflateColumn saying it is unable to invoke is_infinity on undef.
$schema->resultset('user')->update_or_new( {
    userid   => 'peter',
    lastseen => \[ 'GREATEST( lastseen, ?::timestamp )', DateTime->from_epoch(epoch => 1234) ],
} );
I guess this is because InflateColumn::DateTime does not expect a function there. Is there any clean workaround for this issue?
This is a bug in DBIx::Class ( addressed here: https://github.com/dbsrgits/dbix-class/pull/44 ) and the fix is merged. It should be fine on the next release.
That said, if you're using DBIx::Class <= 0.08270...
You're using update_or_new, but the function only makes sense if the row exists already: in GREATEST( lastseen, ?::timestamp ), lastseen is undefined if the row doesn't exist yet.
I read through the source+docs a bunch and cannot find a way to sidestep the InflateColumn code and still have bind values. You can pass in literal SQL with a scalar ref ( \'NOW()' ) but not an array ref.
Your best bet would be to use the ResultSet's update method instead, which does not 'process/deflate any of the values passed in. This is unlike the corresponding "update" in DBIx::Class::Row.'
my $dtf = $schema->storage->datetime_parser; # https://metacpan.org/pod/DBIx::Class::Storage::DBI#datetime_parser
my $user_rs = $schema->resultset('User')->search({ userid => 'peter' });
my $dt = DateTime->from_epoch(epoch => 1234);

# SELECT COUNT(*) FROM user WHERE userid = 'peter';
if( $user_rs->count ) {
    $user_rs->update({
        lastseen => \[ 'GREATEST( lastseen, ? )', $dtf->format_datetime($dt) ],
    });
} else {
    $user_rs->create({ lastseen => $dt });
}
Apologies and thanks in advance for what, even as I type, seems likely a silly question, but here goes anyway.
I have a basic Catalyst application using DBIx::Class with an 'Author' and associated 'Book' table. In addition I also use DBIx::Class::Cursor::Cached to cache data as appropriate.
The issue is that, following an edit, I need to clear cached data BEFORE it has actually expired.
1.) Author->show_author_and_books, which fetches and caches a resultset.
2.) Book->edit_do, which needs to clear the cached data from the Author->show_author_and_books request.
See basic/appropriate setup below.
-- MyApp.pm definition including backend 'Cache::FileCache' cache.
__PACKAGE__->config(
    name => 'MyApp',
    ...
    'Plugin::Cache' => {
        'backend' => {
            class              => 'Cache::FileCache',
            cache_root         => "./cache",
            namespace          => "dbix",
            default_expires_in => '8 hours',
            auto_remove_stale  => 1,
        },
    },
...
-- MyApp::Model::DB definition with 'Caching' traits set using 'DBIx::Class::Cursor::Cached'.
...
__PACKAGE__->config(
    schema_class => 'MyApp::Schema',
    traits       => [ 'Caching' ],
    connect_info => {
        dsn          => '<dsn>',
        user         => '<user>',
        password     => '<password>',
        cursor_class => 'DBIx::Class::Cursor::Cached',
    },
);
...
-- MyApp::Controller::Author.pm definition with 'show_author_and_books' method - resultset is cached.
...
sub show_author_and_books :Chained('base') :PathPart('') :Args(0)
{
    my ( $self, $c ) = @_;
    my $author_id = $c->request->params->{author_id};
    my $author_and_books_rs = $c->stash->{'DB::Author'}->search(
        { author_id => $author_id },
        { prefetch => 'book', cache_for => 600 } ); # Cache results for 10 minutes.
    # More interesting stuff, but no point calling $author_and_books_rs->clear_cache here, it would make no sense:s
    ...
}
...
-- MyApp::Controller::Book.pm definition with 'edit_do' method which updates book entry and so invalidates the cached data in show_author_and_books.
...
sub edit_do :Chained('base') :PathPart('') :Args(0)
{
    my ( $self, $c ) = @_;
    # Assume stash contains a book for some author, and that we want to update the description.
    my $book = $c->stash->{'book'}->update({ desc => $c->request->params->{desc} });
    # How do I now clear the cached DB::Author data to ensure the new desc is displayed on next request to 'Author->show_author_and_books'?
    # HOW DO I CLEAR CACHED DB::Author DATA?
    ...
}
Naturally I'm aware that $author_and_books_rs, as defined in Author->show_author_and_books, has a 'clear_cache' method, but obviously this is out of scope in Book->edit_do (not to mention whatever other problems there might be).
So, is the correct approach to make the DBIC request again, as per ...show_author_and_books, and then call 'clear_cache' on that, or is there a more direct way where I can just say something like $c->cache->('DB::Author')->clear_cache?
Thank you again.
PS. I'm sure when I look at this tomorrow, the full silliness of the question will hit me:s
Try
$c->model( 'DB::Author' )->clear_cache();
The solution I went for in the end was to NOT use 'DBIx::Class::Cursor::Cached', but instead directly use the Catalyst Cache plugin, defining multiple backend caches to handle the different namespaces I'm trying to manage in the real-world scenario.
I backed away from D::C::Cursor::Cached because all data was/is held in the same namespace, plus there doesn't appear to be a method to expire data in advance of the time already set.
So for completeness, from the code above, the MyApp::Model::DB.pm definition would lose the 'traits' and 'cursor_class' key/values.
Then...
The MyApp.pm 'Plugin::Cache' config would expand to contain multiple cache namespaces...
-- MyApp.pm definition including backend 'Cache::FileCache' cache.
...
'Plugin::Cache' => {
    'backends' => {
        Authors => {
            class              => 'Cache::FileCache',
            cache_root         => "./cache",
            namespace          => "Authors",
            default_expires_in => '8 hours',
            auto_remove_stale  => 1,
        },
        CDs => {
            class              => 'Cache::FileCache',
            cache_root         => "./cache",
            namespace          => "CDs",
            default_expires_in => '8 hours',
            auto_remove_stale  => 1,
        },
        ...
    },
},
...
-- MyApp::Controller::Author.pm definition with 'show_author_and_books' method - resultset is cached.
...
sub show_author_and_books :Chained('base') :PathPart('') :Args(0)
{
    my ( $self, $c ) = @_;
    my $author_id = $c->request->params->{author_id};
    my $author = $c->get_cache_backend('Authors')->get( $author_id );
    if( !defined($author) )
    {
        $author = $c->stash->{'DB::Author'}->search(
            { author_id => $author_id },
            { prefetch => 'book', rows => 1 } )->single;
        $c->get_cache_backend('Authors')->set( $author_id, $author, "10 minutes" );
    }
    # More interesting stuff, ...
    ...
}
...
-- MyApp::Controller::Book.pm definition with 'edit_do' method which updates book entry and so invalidates the cached data in show_author_and_books.
...
sub edit_do :Chained('base') :PathPart('') :Args(0)
{
    my ( $self, $c ) = @_;
    # Assume stash contains a book for some author, and that we want to update the description.
    my $book = $c->stash->{'book'}->update({ desc => $c->request->params->{desc} });
    # How do I now clear the cached DB::Author data to ensure the new desc is displayed on next request to 'Author->show_author_and_books'?
    # HOW DO I CLEAR CACHED DB::Author DATA? THIS IS HOW, EITHER...
    $c->get_cache_backend('Authors')->set( $c->stash->{'book'}->author_id, {}, "now" ); # Expire now.
    # ... OR ... expire the whole Authors namespace...
    $c->get_cache_backend('Authors')->clear;
    ...
}
NOTE: as you'll expect from the use of Authors and CDs, this isn't the real-world scenario I'm working on, but it should serve to show my intent.
As I'm relatively new to the wonders of DBIx::Class and indeed Catalyst, I'd be interested to hear if there's a better approach to this (I very much expect there is), but it will serve for the moment as I'm attempting to update a legacy application.
The plugin could probably be patched to make per-resultset caches easy to namespace and clear independently; alternatively, it would probably not be so hard to add a namespace to the attributes. If you want to work on that, hit #dbix-class and I'd be willing to mentor you - jnap
I have an array of hashes, all with the same set of keys, e.g.:
my $aoa = [
    {NAME=>'Dave',  AGE=>12,  SEX=>'M', ID=>123456, NATIONALITY=>'Swedish'},
    {NAME=>'Susan', AGE=>36,  SEX=>'F', ID=>543210, NATIONALITY=>'Swedish'},
    {NAME=>'Bart',  AGE=>120, SEX=>'M', ID=>987654, NATIONALITY=>'British'},
];
I would like to write a subroutine that will convert this into a hash of hashes using a given key hierarchy:
my $key_hierarchy_a = ['SEX', 'NATIONALITY'];

sub aoh_to_hoh {
    my ($aoa, $key_hierarchy_a) = @_;
    ...
}
will return
{M=>
    {Swedish=>{{NAME=>'Dave', AGE=>12, ID=>123456}},
     British=>{{NAME=>'Bart', AGE=>120, ID=>987654}}},
 F=>
    {Swedish=>{{NAME=>'Susan', AGE=>36, ID=>543210}}}
}
Note this not only creates the correct key hierarchy but also removes the now-redundant keys.
I'm getting stuck at the point where I need to create the new, most inner hash in its correct hierarchical location.
The problem is I don't know the "depth" (i.e. the number of keys). If I had a constant number, I could do something like:
$h{$inner_hash{$PRIMARY_KEY}}{$inner_hash{$SECONDARY_KEY}}{...} = filter_copy($inner_hash, [$PRIMARY_KEY, $SECONDARY_KEY])
so perhaps I can write a loop that will add one level at a time: remove that key from the hash, then add the remaining hash to the "current" location. But it's a bit cumbersome, and also I'm not sure how to keep a 'location' in a hash of hashes...
use Data::Dumper;

my $aoa = [
    {NAME=>'Dave',  AGE=>12,  SEX=>'M', ID=>123456, NATIONALITY=>'Swedish'},
    {NAME=>'Susan', AGE=>36,  SEX=>'F', ID=>543210, NATIONALITY=>'Swedish'},
    {NAME=>'Bart',  AGE=>120, SEX=>'M', ID=>987654, NATIONALITY=>'British'},
];

sub aoh_to_hoh {
    my ($aoa, $key_hierarchy_a) = @_;
    my $result = {};
    my $last_key = $key_hierarchy_a->[-1];
    foreach my $orig_element (@$aoa) {
        my $cur = $result;
        # song and dance to clone an element
        my %element = %$orig_element;
        foreach my $key (@$key_hierarchy_a) {
            my $value = delete $element{$key};
            if ($key eq $last_key) {
                $cur->{$value} ||= [];
                push @{$cur->{$value}}, \%element;
            } else {
                $cur->{$value} ||= {};
                $cur = $cur->{$value};
            }
        }
    }
    return $result;
}

my $key_hierarchy_a = ['SEX', 'NATIONALITY'];
print Dumper(aoh_to_hoh($aoa, $key_hierarchy_a));
As per @FM's comment, you really want an extra array level in there.
The output:
$VAR1 = {
          'F' => {
                   'Swedish' => [
                                  {
                                    'ID' => 543210,
                                    'NAME' => 'Susan',
                                    'AGE' => 36
                                  }
                                ]
                 },
          'M' => {
                   'British' => [
                                  {
                                    'ID' => 987654,
                                    'NAME' => 'Bart',
                                    'AGE' => 120
                                  }
                                ],
                   'Swedish' => [
                                  {
                                    'ID' => 123456,
                                    'NAME' => 'Dave',
                                    'AGE' => 12
                                  }
                                ]
                 }
        };
EDIT: Oh, BTW - if anyone knows how to elegantly clone contents of a reference, please teach. Thanks!
EDIT EDIT: @FM helped. All better now :D
As you've experienced, writing code to create hash structures of arbitrary depth is a bit tricky. And the code to access such structures is equally tricky. Which makes one wonder: Do you really want to do this?
A simpler approach might be to put the original information in a database. As long as the keys you care about are indexed, the DB engine will be able to retrieve rows of interest very quickly: Give me all persons where SEX = female and NATIONALITY = Swedish. Now that sounds promising!
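For instance, with DBI and an SQLite file (a sketch: the database file, table, and column names here are made up for illustration):
use DBI;

my $dbh = DBI->connect('dbi:SQLite:dbname=people.db', '', '', { RaiseError => 1 });

# with sex and nationality indexed, this lookup stays fast as the data grows
my $rows = $dbh->selectall_arrayref(
    'SELECT name, age, id FROM persons WHERE sex = ? AND nationality = ?',
    { Slice => {} },   # return each row as a hashref
    'F', 'Swedish',
);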
You might also find this loosely related question of interest.