Pivot script file per part? - codefluent

Is it possible to generate one pivot script file per part in CFE?
In our model, we plan to use the pivot runner to update the database later on. We have one part that would be used to instantiate many structures (let's call it "Common"), and another named "Global" that is shared across all of them.
I would like my producer to generate one pivot file based on the Common part only, without any reference to the Global entities.
Is this achievable?
Thanks for your answer,

XML parts are storage units. They allow you to split a large model into multiple files, but this doesn't change the inferred model, and producers work on the inferred model.
What I would do is separate the entities into different schemas, so you'd have two schemas: "Common" and "Global". The file generated by the Pivot Script producer will still contain all the objects, but you can distinguish them by schema. Then you can use the PivotRunner and tweak its behavior slightly to keep only the objects in a specific schema:
// References: CodeFluent.Runtime.dll and CodeFluent.Runtime.Database.dll
// Requires: using System.Linq;
PivotRunner runner = new PivotRunner("pivot.xml");

// Keep only the objects that belong to the "Common" schema
foreach (var table in runner.Tables.Where(t => t.Schema != "Common").ToList())
{
    runner.Tables.Remove(table);
}
// TODO: do the same for stored procedures, functions, views, table types, etc.

runner.ConnectionString = "...";
runner.Run();
http://blog.codefluententities.com/2013/10/10/the-new-sql-server-pivot-script-producer/

Related

Lagom persistent read side and model evolution

To learn Lagom I created a simple application with some simple persistent entities and a persistent read side (as per the official documentation, using Cassandra).
The official doc contains a section about model evolution, describing how to change the model. However, there is no mention of evolution when it comes to the read side.
Assuming I have an entity called Item, with an ID and a name, and the read side creates a table like CREATE TABLE IF NOT EXISTS items (id TEXT, name TEXT, PRIMARY KEY (id))
I now want to change the item to include a description. This is trivial for the persistent entity, but the read side has to be changed as well.
I can see several approaches to (maybe) achieve that:
use a model evolution tool like Liquibase or Play evolutions to change the read-side tables.
somehow include ALTER TABLE statements in createTables that migrate the model
create additional tables containing the additional information, and keep the old tables unmodified
Which approach would be the most fitting? Is there something better?
Creating a new table and dropping the old table is an option too, IMHO.
It is as simple as modifying your "create table" command ("create table mytable_v2 ..." and "drop table mytable ..."), changing the offset name, and modifying your event handlers:
override def buildHandler(): ReadSideProcessor.ReadSideHandler[MyEvent] = {
  readSide.builder[MyEvent]("myOffset") // change it to "myOffset_v2"
  ...
}
This results in all events being replayed and your read-side table being rebuilt from scratch. This may not be an option if the current table is really huge, as the reconstruction may take a very long time.
Regarding what @erip says, I see it as perfectly normal to add a new column to your read-side table. Suppose there are lots of records in this table listing all entities, and you want to retrieve a list of entities based on some criteria, so you need some columns to be included in the WHERE clause. Retrieving the list of all entities and asking each of them whether it complies with the criteria is not an option at all: it would be very inefficient, as it needs more time, memory and network usage.
The point of a read side is to materialize views from the entity state changes in your service's event stream. In this respect, you, as the owner of the service, can decide what is important for your subscribers to know about. This is handled by creating read sides with an anti-corruption layer (or ACL).
Typically your subscribers will subscribe to API events, which should experience no evolution. Your internal events (or impl events) will likely need to evolve; because of this, there should be a transformation from the impl events to the API events.
This is why it's very important to consider your domain very carefully before designing: you really need to nail down what subscribers will need to know about. In the case of a description, it strikes me as unlikely that subscribers will need (or want!) to know about that.

Is it possible to use a single transaction (in EF) with two different contexts pointing to different schemas?

I'm currently designing an application where I need to use two different database schemas (on the same instance): one as the application base, the other to customize the application and the fields for every customer.
Since I've read something about the Repository pattern, and as I understand it, it's possible to use two different contexts without a loss of efficiency, I'm now asking whether I can use a single database transaction across two schemas with Entity Framework, as I currently do directly on the database (SQL Server 2008-2012).
Sorry for my English, and thanks in advance!
If your connection strings are the same (which in your case they will be, as you only have different schemas for the different contexts) then you are fine with this approach (on SQL Server 2008 and later, connections opened one after the other with the same connection string should not escalate the transaction to MSDTC).
Basically you will have two different contexts connected to the database via the same connection string, each representing a different schema.
// Requires a reference to System.Transactions
using (var scope = new TransactionScope())
{
    using (var contextSO = new ContextSchemaOne())
    {
        // Add, remove, change entities from context schema one
        contextSO.SaveChanges();
    }

    using (var contextST = new ContextSchemaTwo())
    {
        // Add, remove, change entities from context schema two
        contextST.SaveChanges();
    }

    scope.Complete();
}
I wasn't very successful with this approach in the past, and we switched to one context per database.
Further reading: Entity Framework: One Database, Multiple DbContexts. Is this a bad idea?
Maybe it's better to read something about the unit of work pattern before making a decision about this.
You will have to do something like this: Preparing for multiple EF contexts on a unit of work - TransactionScope
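If escalation to a distributed transaction ever becomes a problem, EF6 also lets both contexts share one connection and transaction. A rough sketch, assuming ContextSchemaOne and ContextSchemaTwo are given constructors that forward an existing connection to DbContext(DbConnection, bool):

using System.Data.SqlClient;

using (var connection = new SqlConnection("..."))
{
    connection.Open();
    using (var transaction = connection.BeginTransaction())
    {
        // contextOwnsConnection: false, so disposing a context leaves
        // the shared connection open for the next one
        using (var contextSO = new ContextSchemaOne(connection, contextOwnsConnection: false))
        {
            contextSO.Database.UseTransaction(transaction);
            // Add, remove, change entities from context schema one
            contextSO.SaveChanges();
        }
        using (var contextST = new ContextSchemaTwo(connection, contextOwnsConnection: false))
        {
            contextST.Database.UseTransaction(transaction);
            // Add, remove, change entities from context schema two
            contextST.SaveChanges();
        }
        transaction.Commit(); // both schemas committed atomically
    }
}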

OData REST API where table has columns unique to customer

We would like to create an OData REST API. Our data model is such that each customer has their own database. All database objects have the same definition across all customer databases, with the exception of a single table.
We will call the customer-specific table Contact. When a customer adds a column, the system creates a column with a standardised name, with a definition translated from options the user selected in the UI. The user only refers to the column data by a field name they have specified, enabling them to write friendly queries.
It seems to me that the following approaches could be used to enable OData for the model described:
1) Create an OData open type to cater for the dynamic properties (a sketch of such a type follows this list). This has the disadvantage that a user's requests give no indication of which dynamic properties can be queried, even though they will be known for the user (via token authentication). Also, because dynamic properties are a dictionary, some data pivoting and inefficient query writing would be required. I'm not sure how to implement the IQueryable handling of query options for the dynamic properties to enable our own custom field querying.
2) Create a POCO class with e.g. 50 properties: CustomField1, CustomField2... Then somehow control which fields are exposed for use in OData calls. We would then include a separate API call to expose the custom field mapping, e.g. a custom field friendly name of MobileNumber = CustomField12.
3) At runtime, check whether the column definitions of the table have changed since the last check. If they have, generate a class specific to the customer using CodeDom and register it with OData, aiming for a unique URL for each customer, e.g. http://domain.name/{customer guid}/odata
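For clarity, option 1 would look roughly like this (the Contact shape and the CustomFields property name are just placeholders, not our real model):

using System.Collections.Generic;
using System.Web.OData.Builder;

// An open type: the convention model builder treats a class with an
// IDictionary<string, object> property as open, and the dictionary's
// key/value pairs are exposed as OData dynamic properties
public class Contact
{
    public string Id { get; set; }
    public string Name { get; set; }
    public IDictionary<string, object> CustomFields { get; set; }
}

// Model registration:
// var builder = new ODataConventionModelBuilder();
// builder.EntitySet<Contact>("Contacts");
// var model = builder.GetEdmModel();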
I think option 2 is the ideal for us. However, the fact that CustomField1 could have an underlying SQL data type of nvarchar, int, decimal, datetime, etc. adds complications.
Does anyone have a working example of how to achieve what has been described?
Thanks in advance for any help.
Rik
We have run into a similar situation, but with our entire dataset being unknown until runtime. Using the ODataConventionModelBuilder and EdmModel classes, you can add properties to the model dynamically at runtime.
I'm not sure whether you will have to manually add all of the properties for this object type even though only some of them are unknown, or whether you can add your main object and then add your dynamic ones afterwards, but I guess either would be workable.
If you can get hold of which type of user it is on the server, you could then add only the properties that you are interested in (like option 3, but without having to use CodeDom).
There is an example of this kind of untyped OData server in the OData samples here that should get you started: https://github.com/OData/ODataSamples/tree/master/WebApi/v4/ODataUntypedSample
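For a rough idea, building such a model at runtime looks something like this (the names are placeholders; in practice you would read the entity and property definitions from the customer's column metadata):

using Microsoft.OData.Edm;
using Microsoft.OData.Edm.Library; // ODataLib 6.x; in later versions these types live in Microsoft.OData.Edm

// Build an EDM model whose properties are only known at runtime
var model = new EdmModel();

var contact = new EdmEntityType("CustomerModel", "Contact");
contact.AddKeys(contact.AddStructuralProperty("Id", EdmPrimitiveTypeKind.String));
contact.AddStructuralProperty("Name", EdmPrimitiveTypeKind.String);

// A customer-specific column exposed under its friendly name,
// e.g. CustomField12 surfaced as MobileNumber
contact.AddStructuralProperty("MobileNumber", EdmPrimitiveTypeKind.String);
model.AddElement(contact);

// Entity set and container so the type is addressable as /Contacts
var container = new EdmEntityContainer("CustomerModel", "Default");
container.AddEntitySet("Contacts", contact);
model.AddElement(container);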
The research we carried out actually showed option 1 to be the most suitable approach for some operations, i.e. create an SQL view that unpivots the data in a table to key/value pairs of column name/column value for each column in the table. This was suitable for queries returning small datasets, was far less effort than option 3, and was less confusing for the user than option 2. The unpivot query converts the field values to nvarchar (string) values, which means that filtering in the UI by the columns' value data types is not simple to achieve. (If we decide to implement this ability, I believe it can be achieved by creating a custom attribute that derives from EnableQueryAttribute, marking the controller action with it, and manipulating the IQueryable before execution.)
However, we wanted to expose a /Contacts/Export endpoint that, when called, would join the columns of a fixed-schema table to the client-specific-schema table and write the result to a CSV file, all the while supporting the OData filter syntax. One of our customer databases has more than 12 million rows of data and is made up of approximately 30 columns.
To achieve this, it looks like our best bet would have been to work with the Microsoft.OData.Core.UriParser.UriQueryExpressionParser class; unfortunately Microsoft, in their wisdom, have declared it internal, along with many of its dependencies.
Walking an abstract syntax tree built from the OData-supported query options and applying our own visitor to each node to build a dynamic LINQ query or SQL seems like a possible solution.
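For what it's worth, a rough sketch of that visitor idea against the public ODataLib 6.x types (namespaces differ in later versions; the operator mapping is truncated and the friendly-name lookup is omitted):

using System;
using Microsoft.OData.Core.UriParser.Semantic;
using Microsoft.OData.Core.UriParser.TreeNodeKinds;
using Microsoft.OData.Core.UriParser.Visitors;

// Walks a parsed $filter AST and emits a SQL-ish predicate string
class FilterToSqlVisitor : QueryNodeVisitor<string>
{
    public override string Visit(BinaryOperatorNode node)
    {
        string op;
        switch (node.OperatorKind)
        {
            case BinaryOperatorKind.Equal: op = "="; break;
            case BinaryOperatorKind.And: op = "AND"; break;
            case BinaryOperatorKind.Or: op = "OR"; break;
            // ... remaining operators ...
            default: throw new NotSupportedException(node.OperatorKind.ToString());
        }
        return "(" + node.Left.Accept(this) + " " + op + " " + node.Right.Accept(this) + ")";
    }

    public override string Visit(SingleValuePropertyAccessNode node)
    {
        // Here you would map the friendly name back to the physical
        // column, e.g. MobileNumber -> CustomField12
        return node.Property.Name;
    }

    public override string Visit(ConvertNode node)
    {
        return node.Source.Accept(this);
    }

    public override string Visit(ConstantNode node)
    {
        return node.LiteralText;
    }
}

// Usage (model built as above):
// var parser = new ODataUriParser(model, serviceRoot, requestUri);
// string predicate = parser.ParseFilter().Expression.Accept(new FilterToSqlVisitor());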
For the time-being we will simply implement a cut-down set of supported $filter criteria without the support for grouping parenthesis.

classes and data presentation

I hope someone can give me some guidance in how to best approach this situation.
I am using DbContext, WPF and SQL Server.
I am having situations where the presentation of the data requires more than just what is coming from a single table. For example, if I had a Person table but also wanted to show how many books each person had read from related data, the fields would be, say, Name, Address, NoOfBooks.
I currently create a new class, called say PersonBookPM, that I fill with data from a LINQ query combining the two tables to include the above three fields. I create an ObservableCollection of those and make that the ItemsSource of the grid/listbox.
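Roughly, the pattern looks like this (a sketch; the property names and the Books navigation property are assumed from the description above):

// A presentation-model class duplicating fields from the generated entities
public class PersonBookPM
{
    public string Name { get; set; }
    public string Address { get; set; }
    public int NoOfBooks { get; set; }
}

// Filled from a LINQ query combining the two tables
var people = new ObservableCollection<PersonBookPM>(
    context.People.Select(p => new PersonBookPM
    {
        Name = p.Name,
        Address = p.Address,
        NoOfBooks = p.Books.Count()
    }));
myGrid.ItemsSource = people;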
When I am then adding data, I need to take the SelectedItem, convert it back to the single Person entity, and attach it back into the context.
It feels like these classes have already been defined by the code gen and I am repeating the process only slightly differently.
Am I going round the houses here?
Thanks Scott

Is there a way of avoiding 2 repository objects for the same database table?

I'm currently working in a team that uses EF as the ORM of choice.
We have a common project that contains many EDMX files.
The reason for this is to keep the EDMX files small and manageable, while also allowing each to focus on a conceptual set of tables in the database.
Eg
Orders.edmx
Users.edmx
Trades.edmx
These all point to a different set of tables on the same db.
I now need to add the User table to the Trades.edmx file. Since the User table is already in Users.edmx, this creates the same User type twice under different namespaces, which means I would need two UserRepository objects.
Common.data.trade.User
Common.data.users.User
Is there a way of avoiding 2 repository objects for the same table?
Any suggestions would be greatly appreciated
If you are using a POCO generator, you can update the template for Trades.edmx to not generate a new User class, and update its context template to use the User class from the Users namespace. EF matches POCO classes to entities in the designer by class name only (the namespace is ignored), so it will work.
The disadvantage is that you have the User entity in two mapping files, and you must update it in both files or your application will throw an exception at runtime.
The reason for this problem is your architecture: at the beginning you wanted separate models, but now you want to combine entities from different models. Those are contradictory requirements. Either use truly separate models, where Trade knows only a user ID without any navigation property (as if the user were defined in another database), or move all entities into a single EDMX to support your new requirements.