Is it possible to relate two CodeFluent collections at run-time? - codefluent

I'm looking for a way to link seemingly unrelated CodeFluent collections at run-time without having defined a relation between the entities (typename and relationpropertyname of each property would not be set) in the CodeFluent modeler. Each entity would have a common key value.
I could convert the collections to DataTables, add them to a DataSet, and create the relations in the DataSet; but then I would have to convert the data back to their related CodeFluent entities. This would be a bit messy. I'd rather let the CodeFluent BOM handle everything.
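For reference, the kind of in-memory join I would otherwise end up writing by hand looks roughly like this (Customer, Order, the Key/CustomerKey properties and the LoadAll methods are placeholders, not my actual model):

// Hypothetical CodeFluent-generated collections; all names are placeholders.
CustomerCollection customers = CustomerCollection.LoadAll();
OrderCollection orders = OrderCollection.LoadAll();

// Plain LINQ-to-Objects join on the shared key value.
var linked = from customer in customers
             join order in orders on customer.Key equals order.CustomerKey
             select new { Customer = customer, Order = order };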

Related

Dynamic Data Model in Entity Framework Core

I have a database model that can be modified by users at runtime:
adding new columns to existing tables
adding new tables
I want to use Entity Framework Core to access such model.
I'm able to create the types for the new tables and fields using reflection, but I'm not able to create the DbSet members inside the DbContext class for these new types, as DbSet needs to know the type at compile time.
Does anyone know if this is something that can be achieved with EF Core?
A way of injecting the type to the DbSet member dynamically?
It sounds pretty weird to me that the users are the ones defining the tables and their columns, relationships, etc. at runtime. Probably what you actually need is a structure of tables to support dynamic data, which is much more manageable: a table that defines the user models, another table that defines the properties of those models, and so on. That will vary a lot depending on your needs.
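As a rough illustration of that idea (a sketch only; every name below is invented, not an established schema), the user-defined models become rows in a small set of fixed tables:

using System.Collections.Generic;

// A user-defined "table" is just a row in UserModel.
public class UserModel
{
    public int Id { get; set; }
    public string Name { get; set; }
    public List<UserModelProperty> Properties { get; set; }
    public List<UserModelRecord> Records { get; set; }
}

// A user-defined "column" of that model.
public class UserModelProperty
{
    public int Id { get; set; }
    public int UserModelId { get; set; }
    public string Name { get; set; }
    public string DataType { get; set; }   // e.g. "string", "int", "date"
}

// One "row" of user data for a model.
public class UserModelRecord
{
    public int Id { get; set; }
    public int UserModelId { get; set; }
    public List<UserModelValue> Values { get; set; }
}

// One cell: the value of one property for one record, stored as text.
public class UserModelValue
{
    public int Id { get; set; }
    public int UserModelRecordId { get; set; }
    public int UserModelPropertyId { get; set; }
    public string Value { get; set; }
}

The DbContext then only needs fixed DbSet properties for these four types; the "dynamic" tables and columns become data instead of schema.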
You could also consider using some special properties like XML data-type fields as suggested here: Dynamically adding a property to an entity framework object

Can I utilise JPA 2.1 @Converter with DB entities?

Maybe I'm a bit wrong here, but I'm trying to refactor my code right now by making use of the @Converter annotation from JPA 2.1 to move the attribute-to-database-data conversion out of the POJO class into a separate class. I'm mainly using a custom transformation for storing a kind of JSON blob in a database column. I have several cases where I need to rely on the order of child entities, i.e. I store the set of used child entities in a many-to-many table to keep the relationship between the items and, in addition, the order in a JSON array that just keeps the child entity identifiers. Then I have a resolving mechanism that keeps both sides up to date, i.e. the db data (string) is converted to an (ordered) list of child entities (which are also stored in the DB and available via the set of child entities in the many-to-many relationship).
So right now I'm wondering whether I can handle this with a @Converter (AttributeConverter) implementation, since I'll require the set of child entities to resolve the db data (string) into an (ordered) list of child entities (i.e. in the "convertToEntityAttribute" implementation), or whether I need to rely on my (a bit cumbersome) mechanism in the POJO class to convert between both sides.
AttributeConverter is for simple types only, not collections/maps, and as such it provides a mapping between a Java type and a database column. Some JPA implementations may allow mapping to multiple columns (the implementation I use, DataNucleus JPA, does, and some others may as well), but I doubt you'll find one that allows mapping to some other table entirely.
It would be better to look at your entity mappings and consider creating a dummy entity for this information somehow.

Get values in NotMapped property in model class, Entity Framework Code First, using LINQ

I have the below scenario. I am using EF 5 Code first, MVC 4 on VS 2010. I am using the Unit of Work and Repository pattern for my project.
I am not sure if this is possible or not. Kindly suggest.
I have a model class representing a database table. In the model class, I have a property that is decorated as [NotMapped]. I have a stored proc that returns data similar to the model class. However, when I get the data in a List from the SP, it does not contain a value for the [NotMapped] column (the SP does return data for that column). This may be logically correct with respect to EF.
All I want to know is whether there is a way to get data populated into the [NotMapped] column. I want to achieve CRUD using LINQ (excluding R - Read).
I would recommend creating a separate complex type for the stored procedure results. Otherwise sooner or later you will find yourself writing code to distinguish between entities coming from the DbSet and from the stored procedure. When they come from the stored procedure they can't be used in joins, for example; or you end up with checks for whether or not the unmapped property is set.
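A minimal sketch of that suggestion, assuming an EF 5 DbContext and a stored procedure (MyDbContext, dbo.GetOrders and the column names are invented for illustration):

using System.Collections.Generic;
using System.Data.SqlClient;
using System.Linq;

// Separate result class for the stored procedure; it is not a mapped entity.
public class OrderResult
{
    public int Id { get; set; }
    public string Number { get; set; }
    public decimal ComputedTotal { get; set; }  // the value the [NotMapped] property would carry
}

public class OrderReadRepository
{
    public List<OrderResult> GetOrders(MyDbContext context, int customerId)
    {
        // Database.SqlQuery<T> maps every returned column onto the result class,
        // so the extra computed column is populated as well.
        return context.Database
            .SqlQuery<OrderResult>("EXEC dbo.GetOrders @CustomerId",
                new SqlParameter("@CustomerId", customerId))
            .ToList();
    }
}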
A very dirty approach could be to have two different contexts. With code first it is possible to have different contexts with different mappings for the same types, with and without the column ignored (if you use fluent mapping; not with data annotations). But that only works if you tell EF not to check the database schema, so using migrations is ruled out as well. I would not do it, for the same reason I mentioned above: I would hate to have a type with a property that sometimes is and sometimes isn't set.

POCO entity without some fields (Free modeling of the entities)

I understand that, when working with POCO entities, you should work against your model (the POCO entities). I also suppose that part of the benefit of programming against such models is being able to define classes that don't match exactly what you see in the database.
However, there are simple operations that I don't know how to do and that I assume should be possible. For example, in some scenarios it can be useful to change the name of one column (attribute in the entity). I would also like to know if it's possible to generate POCO models that only represent some fields of the table that backs the object in the database.
Is there any documentation about this kind of operations?
Thanks a lot!
A POCO entity is just a mapped class. The model in your question means mapping. The point of mapping is to define the map between a class and a database table, including the mapping between properties and columns. So you can have different property and column names as long as it is correctly configured in the mapping.
So if you are using an EDMX file (designer) for generating the mapping, you can simply change the name of a property or entity and it will be reflected in your generated POCO entity; the EDMX file will also correctly update the mapping. If you are using code first you must manually define the mapping, either through data annotations or through the fluent API.
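For example, with code first a renamed property can be mapped back to the original column with a data annotation or with the fluent API (a sketch assuming EF 5 code first; the entity and column names are placeholders):

using System.ComponentModel.DataAnnotations.Schema;
using System.Data.Entity;

// Data annotation variant: the property name differs from the column name.
public class Customer
{
    public int Id { get; set; }

    [Column("CUSTOMER_NAME")]   // hypothetical legacy column name
    public string DisplayName { get; set; }
}

// Fluent API variant (an alternative to the attribute above), configured in the context.
public class MyContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        modelBuilder.Entity<Customer>()
            .Property(c => c.DisplayName)
            .HasColumnName("CUSTOMER_NAME");
    }
}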
An entity should represent a single data structure persisted to the database, and because of this each table can be mapped only once. Moreover, the EDMX designer demands that each non-nullable column without a default value be mapped to the entity. The POCO generator is not a tool for generating different views of your data. What you are looking for is called projection. There are ways to include mapped projections in the EDMX file (DefiningQuery and QueryView), but both require manual modifications of the EDMX file, and the first one also requires manual maintenance of the EDMX file.
If you need to remove some properties from an entity just to improve a query, or because you don't need all the data for some operation, you can always use a projection to an anonymous or custom class directly in the query:
var query = from x in context.XEntities
            select new XView
            {
                A = x.A,
                B = x.B
            };
The POCO generator is only a tool for generating classes for mapped entities and projections, not for generating every data-related class you will ever need.

Compose queries across Entity Data Models

Is there a way to compose queries from two different entity models if the models are hitting the same underlying database?
The scenario I have is this:
I have a framework that uses EF for data access. (EDM 1)
I have a client application that uses services of the framework and also uses EF for its own data access. (EDM 2)
There are situations where I need to compose queries and join on entities that span the 2 EDMs.
Is there a way to do this without getting the data in memory from the first EDM and then apply additional predicates/joins in memory from the entities of the 2nd EDM?
I hope I'm articulating this the right way
EDIT
@Ladislav Mrnka:
The first EDM is the data access layer for a reusable framework. It doesn't make sense to couple the EF-generated entities from this EDM with those of the consuming client. It would defeat the reusability of the API, and I'd have to carry around additional bloat (EF metadata and DB tables of the client) every time I wanted to redeploy the framework. It would also make managing the model in the designer unwieldy.
I'm currently using what you mention in item 7 as the solution and the performance is abysmal, because I end up returning more data (i.e. entities) than needed from the framework using EDM1 and then filtering out the ones not needed based on predicates/conditions on property values from entities in the second EDM. The end result is a huge performance degradation and an unhappy DBA. For this reason I ended up pushing the logic needed to retrieve the entities into a sproc in which I can access the tables that both EDMs use, apply the needed predicates, and have the entire query run in the DB as opposed to bringing the data into memory and then filtering out the unnecessary ones. The downside is that I can't use LINQ.
Item 8 that you mention sounds interesting, but from the description I doubt you get strong typing at design time, or do you? Can you upload your code sample someplace so that I can try it out?
Important edit
There is no built-in support for achieving this with two ObjectContext types. Your query must always be executed against a single ObjectContext.
Probably the best way to go: this was interesting enough for me to try it myself. I started with a very simple idea: two EDMX files (used with POCO T4 generators), each containing a single entity. I took the metadata description from the second connection string and added it to the first connection string. I used ObjectContext and ObjectSet directly. By doing this I was able to query and modify both entities from a single ObjectContext instance. I also tried to create a query joining entities from both models and it worked.
This obviously works only if both EDMX files map to the same database (same DB connection string).
The important part is the connection string:
<configuration>
<connectionStrings>
<add name="TestEntities" connectionString="metadata=res://*/FirstModel.csdl|res://*/FirstModel.ssdl|res://*/FirstModel.msl|res://*/SecondModel.csdl|res://*/SecondModel.ssdl|res://*/SecondModel.msl;provider=System.Data.SqlClient;provider connection string="Data Source=.;Initial Catalog=Test;Integrated Security=True;MultipleActiveResultSets=True"" providerName="System.Data.EntityClient" />
</connectionStrings>
</configuration>
This connection string contains metadata from two models - FirstModel.edmx and SecondModel.edmx.
Another problem is forcing EF to use the mapping from both of these files. Each EDMX file must define a unique container for SSDL and CSDL. ObjectContext offers a property called DefaultContainerName. This property can be set directly or through some constructor overloads. Once you set this property you bind your ObjectContext instance to a single EDMX, so for this scenario you must not set it. Omitting DefaultContainerName can have some consequences, because some features and declarations can stop working (you will get runtime errors). You should not have problems with POCO unless you want to use some advanced features, but you will most probably have problems if you are using EntityObject entities (heavy EF entities): all methods that take entity set names as strings are dependent on the container. For this reason I suggest using such a configuration only when necessary, for cross-model queries.
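A rough sketch of the kind of usage this enables (the container names, entity set names and entities below are placeholders for whatever your two models define):

using System;
using System.Data.Objects;
using System.Linq;

public static class CrossModelQuerySample
{
    public static void Run()
    {
        // DefaultContainerName is intentionally not set, so entity sets are
        // addressed with container-qualified names from both EDMX files.
        using (var context = new ObjectContext("name=TestEntities"))
        {
            ObjectSet<FirstEntity> firstSet =
                context.CreateObjectSet<FirstEntity>("FirstModelContainer.FirstEntities");
            ObjectSet<SecondEntity> secondSet =
                context.CreateObjectSet<SecondEntity>("SecondModelContainer.SecondEntities");

            // Cross-model join executed as a single database query.
            var query = from f in firstSet
                        join s in secondSet on f.Id equals s.FirstId
                        select new { f.Name, s.Description };

            foreach (var row in query)
            {
                Console.WriteLine(row.Name + " - " + row.Description);
            }
        }
    }
}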
The last problem is generating entities and the "strongly typed" derived ObjectContext. The way to go is to modify the T4 template so that one template reads data from multiple EDMX files and generates the context and entities for all of them - I'm already doing this in my project and it works. The default T4 implementation doesn't follow the approach described in the previous paragraph: the derived ObjectContext it generates is dependent on a single EDMX and entity container.
This part was written before the previous edit.
I'm leaving the rest of the information because some of it can be useful in other scenarios, including work with multiple databases.
An ORM like Entity Framework operates on top of a mapping between the object world and the database world. In EF the object world is described by CSDL, the database world is described by SSDL, and the mapping between them is described by MSL (all are just XML with well-known schemas). At design time these descriptions are part of the model stored in the EDMX file. During compilation these descriptions are extracted from the EDMX and by default included as resource files in the compiled assembly.
When you create an instance of ObjectContext it receives a connection string which contains references to the CSDL, SSDL and MSL resource files. SSDL and MSL do not offer an include element to add information from other files. CSDL offers a Using element which allows you to reuse an existing mapping, but this feature is not supported by the designer. The connection string is used to initialize an EntityConnection instance, which is in turn used to initialize the ObjectContext's MetadataWorkspace (runtime mapping information). Also, ObjectContext doesn't provide any functionality for nesting multiple contexts into a hierarchy. A connection string can't contain references to multiple instances of these files. Edit: It can. I just tested it. See the initial paragraphs.
When you run a LINQ or ESQL query on the instance of ObjectContext, it uses the MSL to map your entities or POCO classes (defined by CSDL) into a DB query (defined by the SSDL description of the database tables). If it doesn't have this information it will not work (and it can't have that information if it is stored in a separate EDMX).
So how to solve this problem? There are several ways:
Always consider: merge your mappings into one file (if multiple files are used for the same database). That is the intended way to use EF, and as you mentioned you are querying the same DB, so two EF models are not needed.
Duplicate the entity description in the second model. If you use EF4 and POCO you can map the same descriptions from multiple models onto one POCO class definition. I don't like this solution but sometimes it can help.
Define a DB view or stored procedure containing your query (or the core of your query) and map it in one model to a new entity.
Use a DefiningQuery in one model (you will probably need a third model if you use the Update model from database feature) and map it to a new entity. A DefiningQuery is a custom SQL query defined in SSDL instead of a table or view description.
Use a Function with a custom CommandText specifying the DB query. It is similar to using a DefiningQuery and has the same limitation. You must manually (in the EDMX) map the result of the function to a new complex type (another difference from DefiningQuery, which is mapped to a new entity).
Define a new type for the result of the query (the properties of the type must have the same names as the columns returned by the query) and use ObjectContext's ExecuteStoreQuery (only in EF4); see the sketch after this list.
Divide the query into two parts, each executed separately on its own context, and use LINQ-to-Objects to get the result. I don't like this solution.
This one is only a high-level idea - I didn't try it and I don't know if it works. As described above, the runtime mapping is dependent on the content of the MetadataWorkspace instance, which is filled from the EntityConnection. EntityConnection also provides a constructor which receives a MetadataWorkspace instance directly. So generally, if it were possible to fill the MetadataWorkspace from multiple EDMX files, you would not need multiple ObjectContext instances, but your mapping would still be separated into two EDMXs. This would hopefully allow you to write custom LINQ queries on top of two mapping files. Edit: It should be possible, because it is exactly what EF is doing if you define multiple mappings in the connection string.
Use the CSDL Using feature to break the model into multiple reused parts.
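For item 6 above, a minimal ExecuteStoreQuery sketch (the result type, table and column names are illustrative only):

// Plain class whose property names match the columns returned by the SQL.
public class OrderSummary
{
    public int OrderId { get; set; }
    public string CustomerName { get; set; }
    public decimal Total { get; set; }
}

// ExecuteStoreQuery bypasses the mapping layer and materializes rows
// from tables of both models into the result type (ObjectContext, EF4+).
var summaries = context.ExecuteStoreQuery<OrderSummary>(
    @"SELECT o.Id AS OrderId, c.Name AS CustomerName, o.Total
      FROM dbo.Orders o
      JOIN dbo.Customers c ON c.Id = o.CustomerId").ToList();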