I have a DBIx::Class schema where I have:
A Device that has many Interfaces.
An Interface has many Rules applied to it.
Each Rule has many Rule Entries.
I want to search for all of the Rule Entries for a particular device name and rule name.
I am still learning DBIx::Class, so I don't know if this is even the most efficient way.
I am doing this like so:
my $rs = $self->search( { devicename => 'DeviceA' } )
              ->search_related('interfaces')
              ->search_related('Rules', { rulename => 'RuleA' } )
              ->search_related('RuleEntries', {},
                  { columns      => [qw/source destination port/],
                    result_class => 'DBIx::Class::ResultClass::HashRefInflator' } );
What I am trying to do is get the rulename as a column of my result set.
At the moment I'm getting all of the Rule Entries for DeviceA where a rule named RuleA is applied to an interface. The columns returned are
source destination port
and I want this to look like
rulename source destination port
As you are already restricting the rule name, it doesn't make sense to query it from the database.
Besides that, you should always search for objects of the type you want to get back; in your case that's rule entries:
my $rs = $schema->resultset('Rule_Entries')->search(
    {
        'rel_device.name' => 'DeviceA',
        'rel_rule.name'   => 'RuleA',
    },
    {
        columns => [ 'rel_rule.name', 'me.source', 'me.destination', 'me.port' ],
        join    => { rel_rule => { rel_interface => 'rel_device' } },
    }
);
It seems you're doing something very similar to what I do: storing firewall rules. You might want to have the rule directly related to the device, with the interface being an optional attribute of the rule, because some vendors (Checkpoint, for example) don't have interface-specific rules.
I am trying to run a GroupBy() query against the Northwind database. This is my code:
using (var ctx = new TempContext())
{
    var customer = (from s in ctx.Customers
                    group s by s.LastName into custByLN
                    select custByLN);

    foreach (var val in customer)
    {
        Console.WriteLine(val.Key);
        foreach (var element in val)
        {
            Console.WriteLine(element.LastName);
        }
    }
}
It gives System.InvalidOperationException: 'Client side GroupBy is not supported'.
Apparently you are trying to make groups of Customers with the same value for LastName. Some database management systems don't support GroupBy, although this is very rare, as grouping is a very common database operation.
To see if your database management system supports grouping, try the GroupBy using method syntax. End with ToList to execute the query:
var customerGroupsWithSameLastName = dbContext.Customers.GroupBy(
    // Parameter keySelector: make groups of Customers with the same value for LastName:
    customer => customer.LastName)
    .ToList();
If this works, the DBMS that your DbContext communicates with accepts GroupBy.
The result is a List of groups. Every Group object implements IGrouping<string, Customer>, which means that every Group has a Key: the common LastName of all Customers in this group. The group IS (not HAS) a sequence of all Customers that have this LastName.
By the way: a more useful overload of GroupBy has an extra parameter: resultSelector. With the resultSelector you can influence the output: it is not a sequence of IGrouping objects, but a sequence of objects that you specify with a function.
This function has two input parameters: the common LastName, and all Customers with this LastName value. The return value of this function is one of the elements of your output sequence:
var result = dbContext.Customers.GroupBy(
    customer => customer.LastName,

    // parameter resultSelector: take the lastName and all Customers with this LastName
    // to make one new object:
    (lastName, customersWithThisLastName) => new
    {
        LastName = lastName,
        Count = customersWithThisLastName.Count(),
        FirstNames = customersWithThisLastName
            .Select(customer => customer.FirstName)
            .ToList(),
        ... // etc
    })
    .ToList();
Back to your question
If the above code showed you that GroupBy is not supported by your DBMS, you can let your local process do the grouping:
var result = dbContext.Customers
    // if possible: limit the number of customers that you fetch
    .Where(customer => ...)
    // if possible: limit the customer properties that you fetch
    .Select(customer => new { ... })
    // transfer the remaining data to your local process:
    .AsEnumerable()
    // now your local process can do the GroupBy:
    .GroupBy(customer => customer.LastName)
    .ToList();
Since you selected the complete Customer, all Customer data would have been transferred anyway, so it is not a big loss to let your local process do the GroupBy, except perhaps that the DBMS is probably better optimized to do the grouping than your local process.
Warning: database management systems are extremely optimized at selecting data. One of the slower parts of a database query is the transfer of the selected data from the DBMS to your local process. So if you have to use AsEnumerable(), realize that you will transfer all data that has been selected up to that point. Make sure that you don't transfer anything that you won't use after the AsEnumerable(); if you are only interested in the FirstName and LastName, don't transfer primary keys, foreign keys, addresses, etc. Let your DBMS do the Where and Select.
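For instance, a minimal sketch of that idea (the Where filter and the Country property are illustrative assumptions, not taken from your model):

var lastNameGroups = dbContext.Customers
    // let the DBMS do the filtering and the column selection ...
    .Where(customer => customer.Country == "Germany")   // hypothetical filter
    .Select(customer => new { customer.FirstName, customer.LastName })
    // ... transfer only those two columns to your local process ...
    .AsEnumerable()
    // ... and let the local process do the grouping:
    .GroupBy(customer => customer.LastName)
    .ToList();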
I am working on an Entity Framework Core 2.0 query. The query needs to sort two related tables by their Order field. So far this is what I have:
return await _context.FieldsetGroup
.Include(e => e.Fieldsets.OrderBy(o => o.Order))
.ThenInclude(e => e.FieldsetFields.OrderBy(o => o.Field.Order))
.ThenInclude(e => e.Field)
.FirstOrDefaultAsync(fsg => fsg.FieldsetGroupId == fieldSetGroupId);
This query returns an exception:
"The property expression 'e => {from Fieldset o in e.Fieldsets orderby [o].Order asc select [o]}' is not valid. The expression should represent a property access: 't => t.MyProperty'. For more information on including related data, see http://go.microsoft.com/fwlink/?LinkID=746393."
How can I sort the 2 tables?
One of the slower parts of database queries is the transport of your selected data from the DBMS to your local process. Hence it is wise to limit the amount of transferred data.
Apparently your FieldSetGroup has zero or more FieldSets. Each FieldSet belongs to exactly one FieldsetGroup. This is identified by the foreign key FieldSetGroupId. The value of this field equals the Id of the FieldSetGroup.
So if you have a FieldSetGroup with Id = 10, and this FieldSetGroup has 1000 FieldSets, then every FieldSet will have a foreign key FieldSetGroupId with value 10. There is no need to transfer this value 1000 times.
Advice: to limit the amount of transferred data, avoid transferring more data than needed. Use Select instead of Include and select only the data you actually plan to use. Use Include only if you plan to update the fetched data.
If you use Select, you can order whatever you want:
var result = dbContext.FieldsetGroup
    .Where(fieldSetGroup => fieldSetGroup.FieldsetGroupId == fieldSetGroupId)
    .Select(fieldSetGroup => new
    {
        ... // select the fieldSetGroup properties you plan to use

        FieldSets = fieldSetGroup.FieldSets
            .OrderBy(fieldSet => fieldSet.Order)
            .Select(fieldSet => new
            {
                ... // only select the fieldSet properties you plan to use

                FieldSetFields = fieldSet.FieldSetFields
                    .OrderBy(fieldSetField => fieldSetField.Order)
                    .Select(fieldSetField => new
                    {
                        ...
                    })
                    .ToList(),
            })
            .ToList(),
    })
    .FirstOrDefault();
You cannot do sorting (OrderBy) inside the Include method. Sort the data after the query.
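For completeness, a rough sketch of that approach (property names taken from the question; the data is loaded unsorted and the collections are then ordered in memory):

var group = await _context.FieldsetGroup
    .Include(e => e.Fieldsets)
        .ThenInclude(f => f.FieldsetFields)
            .ThenInclude(ff => ff.Field)
    .FirstOrDefaultAsync(fsg => fsg.FieldsetGroupId == fieldSetGroupId);

// sort in memory, after the query has run
var orderedFieldsets = group?.Fieldsets
    .OrderBy(f => f.Order)
    .Select(f => new
    {
        Fieldset = f,
        Fields = f.FieldsetFields.OrderBy(ff => ff.Field.Order).ToList(),
    })
    .ToList();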
I need to be able to pluck specific values from data that I receive from different 3rd parties. The data can be structured differently depending on the 3rd party. For example:
my $first = {
    email     => "joe\@example.com",
    firstname => "Joe",
    lastname  => "Regular",
};

my $second = {
    user => {
        'e-mail'  => "joe\@example.com",
        firstName => "Joe",
        lastName  => "Regular",
    }
};
I know what the data structure will be for each 3rd party, so I can define that as config. What I want to end up with is
my $email = _magic($first_config,$first);
my $other_email = _magic($second_config,$second);
Any ideas much appreciated.
Build a look-up table. You can use a dispatch table, a hash whose values are code references, so that when a party identifier is used as the key, the code for that party executes:
my %get_value = ( first => \&fetch_first, second => \&fetch_second );
my $party = 'first'; # input via command-line options, STDIN ...
my $email = $get_value{$party}->();
where \&fetch_first is a reference to the subroutine fetch_first. You can also enter it directly, first => sub { ... }, suitable for simple code. See perlreftut, perlref, and perlsub.
There are many ways to carry data in your program, and so to implement the lookup itself.
Here is an illustration, built in steps. It uses the (confirmed) fact that the data is in valid Perl data structures, and for simplicity it specifies the data right in each sub.
sub fetch_first {
    my $data = {
        email     => '...',
        firstName => '...',
    };
    return $data->{email};
}
This only delivers the email address, but we can do better.
Once you dereference a code reference you can also pass arguments
my $first_name = $get_value{$party}->('firstName');
where the subs are now written to use this input to return the required field
sub fetch_first {
    my ($query) = @_;
    my $data = {
        email     => '...',
        firstName => '...',
    };
    return $data->{$query};
}
A big weakness of the above is that the calling code must use valid key names, so it needs to know the implementation details of the data it is using.
This can be improved, for example by choosing an interface for the call which is then translated in the subs into key names (or via yet another look-up structure). Then you make calls such as
my $email = $get_value{$party}->('email'); # or: 'first', 'last'
and somewhere you have an association such as first => 'firstName' (etc.) which the subs can look up.
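A rough sketch of that translation (the key names are illustrative; the inner hashes map the caller-facing names to each party's actual field names):

my %field_map = (
    first  => { email => 'email',  first => 'firstname', last => 'lastname' },
    second => { email => 'e-mail', first => 'firstName', last => 'lastName' },
);

sub fetch_second {
    my ($query) = @_;
    my $data = {
        user => {
            'e-mail'  => '...',
            firstName => '...',
            lastName  => '...',
        },
    };
    my $key = $field_map{second}{$query};    # translate 'email' / 'first' / 'last'
    return $data->{user}{$key};
}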
The flexibility is greatly helped by data being set up in a consistent way. The whole thing can also be quite maintainable if the code is organized thoughtfully.
If this grows more complex the solution is to write a class. Then you can build a very nice system.
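For example, a bare-bones sketch of such a class (all names here are made up; it just stores a per-party map of key paths and walks them on demand):

package DataPlucker;
use strict;
use warnings;

sub new {
    my ($class, %args) = @_;
    # expects: data => $party_hashref, map => { email => ['user', 'e-mail'], ... }
    return bless { %args }, $class;
}

sub get {
    my ($self, $field) = @_;
    my $node = $self->{data};
    $node = $node->{$_} for @{ $self->{map}{$field} };   # walk the nested keys
    return $node;
}

1;

Used roughly like: my $email = DataPlucker->new(data => $second, map => $second_config)->get('email');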
I have two tables in my database and one of the tables is associated with my Accounts table.
So in my Schema Result for Account.pm I added the following line.
__PACKAGE__->has_many('subjects', 'MyApp::DBIC::Schema::Subject', {'foreign.account_id' => 'self.account_id'});
Then in my controller I make a search like this.
$c->stash->{search_results} = $c->model('DB::Account')->search(
    { -or => [
        firstname => {like => '%'.$search_term.'%'},
        'subjects.subject_title' => {like => '%'.$search_term.'%'},
      ]
    },
    {
        join => 'subjects',
        rows => '3',
    },
    {
        order_by => 'first name ASC',
        page => 1,
        rows => 10,
    }
);
It does not output any errors, but I can't figure out how to output the results on my view file. Is this a correct method of making relations between two tables?
My goal: provided a search_term, search two tables and output the result in view file. My SQL would look something like this:
SELECT FROM Accounts,Subjects WHERE Accounts.firstname=$search_term OR Subjects.subject_title=$search_term LEFT JOIN Subjects ON Accounts.account_id=Subject.account_id
And would want to output the result in view file, as I stated above.
I am fairly new to Perl and some of the documentation still doesn't make much sense to me, so any help and tips are appreciated.
The join looks OK to me, but it would make sense to try a simplified version without the join to check that everything else is OK.
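For instance, something like this (same resultset, no join), just to confirm the rest of the plumbing works:

$c->stash->{search_results} = $c->model('DB::Account')->search(
    { firstname => { like => '%' . $search_term . '%' } },
    { order_by  => 'firstname ASC', rows => 10 },
);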
The behaviour of DBIx::Class::ResultSet::search differs depending on the context in which it's called. If it's called in list context then it executes the database query and returns an array of MyApp::DBIC::Schema::Account objects. For example:
my @accounts = $c->model('DB::Account')->search();
In your case you're calling search in scalar context, which means that rather than returning an array it will return a DBIx::Class::ResultSet object (or a subclass thereof), and crucially it won't actually execute a db query. For that to happen you need to call the all method on your resultset. So, assuming you're using the default template toolkit view you probably want something like this:
[% FOREACH search_result IN search_results.all %]
[% search_result.firstname %]
[% END %]
This 'lazy' behaviour of DBIx::Class is actually very useful, and in my opinion somewhat undersold in the documentation. It means you can keep a resultset in a variable and keep executing different search calls on it without actually hitting the DB, it can allow much nicer code in cases where you want to conditionally build up a complex query. See the DBIx::Class::Resultset documentation for further details.
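For example, something along these lines ($search_subjects is a hypothetical flag; no SQL runs until all is called):

my $rs = $c->model('DB::Account')->search(
    { firstname => { like => '%' . $search_term . '%' } }
);

# refine the resultset further without touching the database
$rs = $rs->search(
    { 'subjects.subject_title' => { like => '%' . $search_term . '%' } },
    { join => 'subjects' },
) if $search_subjects;

my @accounts = $rs->all;   # the single SQL query is executed here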
You have an error in your query. Try:
$c->stash->{search_results} = $c->model('DB::Account')->search(
    { -or => [
        firstname => {like => '%'.$search_term.'%'},
        'subjects.subject_title' => {like => '%'.$search_term.'%'},
      ]
    },
    {
        join     => 'subjects',
        order_by => 'firstname ASC',
        page     => 1,
        rows     => 10,
    }
);
I have a model Item with an indexed field named _keys, which is an array of strings (keywords for search).
Now I need to do autocompletion for this model (through JSON) in another form. The problem is that instead of an exact search by all words input by the user, I need to do an exact search by all but the last word. So I made these scopes in this model:
scope :find_by_keywords, lambda { |keys| where(:_keys.all => keys) }
scope :for_autocomplete, lambda { |keys| where(:_keys.all => keys[0..-2], :_keys => /^#{keys[-1]}/i ) }
The first scope, for exact search, works well, but I have problems with the second scope for autocomplete. Mongoid optimises (or something like that) this query, so it becomes
db_development['items'].find({:_keys=>/^qwer/i}, {})
i.e. it always drops the first condition. That's not surprising, because it needs different criteria on the same field for different conditions.
So I have tried many, many options: different combinations of .all and .in, separating into different wheres, the all_in method, find(:conditions => ...), and so on. Could you please suggest how I can do this?
I've found the following solution:
scope :for_autocomplete, lambda { |keys| where(:_keys.all => keys[0..-2]+ [ /^#{keys[-1]}/ ] ) }
Seems to be working.