.NET EntityStoreSchemaFilterEntry filter patterns - entity-framework

First question to SO, I hope I'm doing this right. ;)
Regarding System.Data.Entity.Design.EntityStoreSchemaFilterEntry:
I'm looking for some detailed documentation on this class. The MSDN docs have nothing but an indication of what properties exist and their data types. I want to create a well-defined list of filters for
EntityStoreSchemaGenerator.GenerateStoreMetadata(
IEnumerable<EntityStoreSchemaFilterEntry> filters
)
Specifically:
Do we need to set all Excludes before the Allows so that Allow entries are the only ones that are returned?
What are the consequences of using null in any of the parameters? What about the empty string ""? Comments about this seem to conflict and don't match my experience with their usage.
Is the proper "all" wildcard a simple "%"?
My goal is to Exclude all Tables, Views, and Functions, then Allow just the ones that I want. If I try to do this I get an edmx file with no entities. It seems my Exclude All takes precedence over all of the tables that I tried to include. If I don't try to exclude the tables I don't want, I get the tables I've Allowed plus all other tables in the database, which rather defeats the purpose of filtering.
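For illustration, the failing attempt looks roughly like this (schema and table names are placeholders for my actual ones):
List<EntityStoreSchemaFilterEntry> filters = new List<EntityStoreSchemaFilterEntry>();
// Exclude everything first (any catalog, any schema, any name)...
filters.Add(new EntityStoreSchemaFilterEntry(null, "%", "%", EntityStoreSchemaFilterObjectTypes.All, EntityStoreSchemaFilterEffect.Exclude));
// ...then try to allow back just the tables I actually want.
filters.Add(new EntityStoreSchemaFilterEntry(null, "dbo", "MyTable", EntityStoreSchemaFilterObjectTypes.Table, EntityStoreSchemaFilterEffect.Allow));
// Result: an edmx with no entities, as described above.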
For reference, the only info I can find about proper wildcard patterns for filters is here:
http://msdn.microsoft.com/en-us/library/ms710171(VS.85).aspx
Note that I've gone way beyond EdmGen, noted bugs and limitations in EdmGen2, and am now trying to accomplish what I need with a heavily extended EdmGen2 base.
Thanks!
Related keywords to assist people searching on this topic:
AEF ADO.NET Entity Framework
Tables Views Functions
EntityStoreSchemaFilterObjectTypes EntityStoreSchemaFilterEffect
EntityStoreSchemaGenerator GenerateStoreMetadata
EntityModelSchemaGenerator
SSDL CSDL MSL EDMX
EdmGen EdmGen2

I found the following filters were sufficient to generate the SSDL for a single table.
List<EntityStoreSchemaFilterEntry> filters = new List<EntityStoreSchemaFilterEntry>();
// Just generate for the target table.
filters.Add(new EntityStoreSchemaFilterEntry(null, "dbo", "TargetTableNameHere", EntityStoreSchemaFilterObjectTypes.Table, EntityStoreSchemaFilterEffect.Allow));
filters.Add(new EntityStoreSchemaFilterEntry(null, "dbo", "%", EntityStoreSchemaFilterObjectTypes.Function, EntityStoreSchemaFilterEffect.Exclude));
// generate the SSDL
string ssdlNamespace = modelName + "Model.Store";
EntityStoreSchemaGenerator essg = new EntityStoreSchemaGenerator(provider, connectionString, ssdlNamespace);
essg.GenerateForeignKeyProperties = includeForeignKeys;
IList<EdmSchemaError> ssdlErrors = essg.GenerateStoreMetadata(filters, version);
I only needed to explicitly exclude the functions.
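In case it helps anyone assembling the full pipeline: the continuation that turns the store metadata into CSDL and MSL looks roughly like the following (a sketch from memory of the EdmGen2 source; verify the exact overloads against your EF version):
// Generate the conceptual model and mapping from the store metadata.
EntityModelSchemaGenerator emsg = new EntityModelSchemaGenerator(essg.EntityContainer, modelName + "Model", modelName + "Container");
IList<EdmSchemaError> csdlErrors = emsg.GenerateMetadata();
// Write out the three artifacts.
essg.WriteStoreSchema(modelName + ".ssdl");   // storage model
emsg.WriteModelSchema(modelName + ".csdl");   // conceptual model
emsg.WriteStorageMapping(modelName + ".msl"); // mapping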

Related

EFCore - HasDiscriminator - HasValue - Else option?

Using EF Core 6...
I have a variety of values in a column that I want to use as the discriminator column.
There are certain values (SERVER_STS) that I am mapping to entity classes:
modelBuilder.Entity<QueueEntry>()
.HasDiscriminator<string>("TaskType")
.HasValue<StatusEntry>(Constants.TaskServerStatus)
.IsComplete(false);
Ideally, I would want to map all the other values to a single entity class, QueueEntry. So something like:
modelBuilder.Entity<QueueEntry>()
.HasDiscriminator<string>("TaskType")
.HasValue<StatusEntry>(Constants.TaskServerStatus)
.HasOtherValue<QueueEntry>();
I thought the IsComplete might help, but I'm only getting back the StatusEntry rows.
I can't see a way of doing this. I see there is an Expression overload for the HasDiscriminator method, but can't quite get my head around whether this might give me what I need.
So is this possible or will I need to segregate the entities at a higher level?
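For reference, the Expression overload mentioned above maps a regular CLR property as the discriminator instead of a shadow property. A minimal sketch, assuming QueueEntry exposes a string TaskType property:
modelBuilder.Entity<QueueEntry>()
    .HasDiscriminator(e => e.TaskType)
    .HasValue<StatusEntry>(Constants.TaskServerStatus)
    .IsComplete(false);
It still maps one discriminator value per HasValue call, so by itself it does not give a catch-all "else" bucket.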

How to dynamically create an index on JSON object properties (the properties are also dynamic)

I have a scenario where I want to dynamically create an index on the keys of a JSON object (the object's attributes will vary). I am able to store the JSON object in the index (by implementing a FieldBridge).
eg1: preference:{"sport":"football", "music":"pop"}
eg2: preference:{"sport":"cricket", "music":"jazz", "cuisine":"mexican"}
But I am unable to query the individual fields like:
preference.sport
or preference.cuisine
Is there any way / configuration in Hibernate Search through which we can achieve that?
If your fields are dynamic, there is no pre-defined schema and Hibernate Search is unable to determine how to query these fields. There are significant differences in how a match query should be executed on a text field or a date field, for example.
For that reason, you cannot use the Hibernate Search Query DSL to build your queries.
However, you can use native APIs.
If you're using the Lucene integration, just creating the relevant queries yourself will work fine (as long as you create the right one):
new TermQuery(new Term("sport", "value"))
If you're using the experimental Elasticsearch integration, you can use org.hibernate.search.elasticsearch.ElasticsearchQueries.fromJson( ... ). You will have to write the whole query as JSON, though, and will not be able to take advantage of the Hibernate Search QueryBuilder at all, even for queries on statically defined fields. See https://docs.jboss.org/hibernate/search/5.11/reference/en-US/html_single/#_queries
Better support for native queries, as well as dynamic fields with pre-defined types, which would be targetable in the Query DSL, is planned for Hibernate Search 6, but it's not there yet. See HSEARCH-3273.

Support of the PostgreSQL-specific array_agg function in Scala frameworks?

Is there some Scala relational database framework (Anorm, Squeryl, etc.) using Postgres-like aggregators to produce lists after a group-by, or at least simulating their use?
I would expect two levels of implementation:
a "standard" one, where at least any SQL grouping with array_agg is translated to a List of the type which is being aggregated,
and a "scala ORM powered" one where some type of join is allowed so that if the aggregation is a foreign key to other table, a List of elements of the other table is produced. Of course this last thing is beyond the reach of SQL, but if I am using a more powerful language, I do not mind some steroids.
I find it especially intriguing that the documentation of Slick, which is built precisely around Scala's group-by notation, seems explicitly to rule out lists as the output of a group-by.
EDIT: use case
You have the typical many-to-many table of, say, products and suppliers: pairs (p_id, s_id). You want to produce a list of suppliers for each product. So the PostgreSQL query should be:
SELECT p_id, array_agg(s_id) from t1 group by p_id
One could expect some idiomatic way to do this in Slick, but I do not see how. Furthermore, if we go to some ORM, then we could also consider the join with the tables products and suppliers, on p_id and s_id respectively, and get as the answer a zip (product, (supplier1, supplier2, ..., supplierN)) containing the objects and not only the ids.
I am also not sure if I understand your question correctly; could you elaborate?
In Slick you currently cannot use Postgres "array_agg" or "string_agg" as a method on type Query. If you want to use these specific functions then you need to use custom SQL. But: I added an issue some time ago (https://github.com/slick/slick/issues/923, you should follow this discussion) and we have a prototype from cvogt ready for this.
I needed to use "string_agg" in the past and added a patch for it (see https://github.com/mobiworx/slick/commit/486c39a7ed90c9ccac356dfdb0e5dc5c24e32d63), so maybe this is helpful to you. Look at "AggregateTest" to learn more about it.
Another possibility is to encapsulate the usage of "array_agg" in a database view and just use this view with slick. This way you do not need "array_agg" directly in slick.
You can use slick-pg.
It supports array_agg and other aggregate functions.
Your question is intriguing, care to elaborate a little on how it might ideally look? When you group by you often have an additional column, such as count(*) over and above the standard columns from your case class, so what would the type of your List be?
Most of my (anorm) methods either return a singleton (perhaps Option) or a List of that class's type. For each case class, I have an sqlFields variable (e.g. m.id, m.name, m.forManufacturer) and a single parser variable that I reference as either .as(modelParser.singleOpt) or .as(modelParser *). For foreign keys, a lazy val at the case class level (or def if it needs to be) is pretty useful. E.g. if I had Model and Manufacturer entities, with a foreign key forManufacturer on Model, then I might define a lazy val manufacturer : Manufacturer = ... in the case class of the model, so that at any time I can refer to model.manufacturer. I can define joins as their own methods, either in this way, or as methods in the companion object.
Not 100% sure I am answering your question, but thought this was a bit long for a comment.
Edit: If your driver supported parsing of postgresql arrays, you could map them directly to a class like ProductSuppliers(id:Int, suppliers:List[Int]) (or even List[Supplier]?) In anorm that's about as idiomatic as one could get, I think? For databases that don't support it, it seems to me similar to an order by version, i.e. select p1, s1 from t1 order by p1, which you could groupBy p1 and similarly map to ProductSuppliers.

Change the generated file name in EF Power Tools Beta 3

I've searched but haven't been able to find an answer to this question. Currently our DB uses prefixes on table names, e.g. tblUsers. I've updated the EF templates to remove the "tbl" from the generated class names. However, I still can't figure out how to change the output file name to match.
Is it possible or am I asking for the moon? I’m using EF Power Tools Beta 3 in VS 2012. Any help would be GREATLY appreciated!
Patrick, what you need is to modify the T4 templates used by the EF Power Tools. When you want to create a code-first model with all the mappings, instead of the Reverse Engineer Code First option, choose Customize Reverse Engineer Template. You should get three files:
Context.tt
Entity.tt
Mapping.tt
For example, in Mapping.tt there are lines that read the MetadataProperties from a TableSet and extract the table name. They look like this:
var tableSet = efHost.TableSet;
var tableName = (string)tableSet.MetadataProperties["Table"].Value ?? tableSet.Name;
This is where you need to make changes and do something like:
var newTableName = tableName.Replace("tbl", String.Empty);
Of course, you should opt for a different strategy and use the Substring method or a regular expression to remove just the first three characters, since Replace will strip "tbl" wherever it occurs in the name. After that you have to go through the tt file and apply your logic: decide where you want to use the tableName variable and where the newTableName variable. You will keep tableName where the mapping is done against the table in the database, and use newTableName where you want that name for your POCO classes and file names.
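For instance, a variant that strips only a leading "tbl" (Replace would also remove "tbl" appearing in the middle of a name) could look like:
var newTableName = tableName.StartsWith("tbl") ? tableName.Substring(3) : tableName;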
Repeat the process for the other two files. For more information have a look at Rowan Miller's blog article. This should give you a pretty good idea how to proceed.

How to create a PostgreSQL table from an XML file...

I have an XML document file. Part of the file looks like this:
<attr>
  <attrlabl>COUNTY</attrlabl>
  <attrdef>County abbreviation</attrdef>
  <attrtype>Text</attrtype>
  <attwidth>1</attwidth>
  <atnumdec>0</atnumdec>
  <attrdomv>
    <edom>
      <edomv>C</edomv>
      <edomvd>Clackamas County</edomvd>
      <edomvds/>
    </edom>
    <edom>
      <edomv>M</edomv>
      <edomvd>Multnomah County</edomvd>
      <edomvds/>
    </edom>
    <edom>
      <edomv>W</edomv>
      <edomvd>Washington County</edomvd>
      <edomvds/>
    </edom>
  </attrdomv>
</attr>
From this XML file, I want to create a PostgreSQL table with columns of attrlabl, attrdef, attrtype, and attrdomv. I appreciate your suggestions!
While Erwin is right that this can be done with PostgreSQL tools, I would still suggest doing the custom translation yourself, for a few reasons.
The first is determining appropriate XML-to-PostgreSQL type conversions; you probably want to choose these yourself. But this example also highlights a very different problem: what to do with nested data structures. You could, for example, store XML fragments. You could store text, json, or the like. You could create other tables and add foreign keys into them. A sketch of the fragment approach follows.
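To illustrate the "store XML fragments" option, here is a rough C# sketch that flattens each attr element into one row's worth of values, keeping attrdomv as a raw fragment (the file name and output handling are hypothetical; loading the rows into PostgreSQL is up to you):
using System;
using System.Xml.Linq;

class AttrExtractor
{
    static void Main()
    {
        // Hypothetical input file containing <attr> elements like the one above.
        XDocument doc = XDocument.Load("metadata.xml");
        foreach (XElement attr in doc.Descendants("attr"))
        {
            string label = (string)attr.Element("attrlabl");
            string def = (string)attr.Element("attrdef");
            string type = (string)attr.Element("attrtype");
            // Keep the nested domain values as an XML fragment (e.g. for an xml column).
            string domv = attr.Element("attrdomv")?.ToString() ?? "";
            Console.WriteLine($"{label}\t{def}\t{type}\t{domv}");
        }
    }
}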
In general I have almost always found the best approach is to simply manually create the tables. This substitutes human judgement for automated mappings and allows you to create better matches than a computer will.