We want to use MassTransit sagas in an existing EF Core 6 solution. For ease of use we want most sagas to be code-first; we don't really care about their table design as long as we can migrate the long-running ones if needed.
For the rest of the domain we use hand-written fluent mappings against an existing database that is migrated with a dacpac.
I am having a hard time finding information on how to let code-first coexist with existing hand-written mappings.
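One pattern that seems to work is to keep the code-first sagas in their own DbContext, point `dotnet ef migrations` at that context type only, and leave the dacpac in charge of the domain schema. A minimal sketch, assuming MassTransit v8-style types; the OrderState class, table and schema names here are hypothetical (MassTransit.EntityFrameworkCore also ships a SagaDbContext base class you can derive from instead):

```csharp
using System;
using MassTransit;
using Microsoft.EntityFrameworkCore;

// Code-first saga instance; MassTransit only requires a CorrelationId.
public class OrderState : SagaStateMachineInstance
{
    public Guid CorrelationId { get; set; }
    public string CurrentState { get; set; } = null!;
}

// Dedicated context for saga persistence. EF migrations are generated
// against this type only, so the hand-written mappings in the domain
// context (and the dacpac-owned schema) are never touched by them.
public class SagaPersistenceDbContext : DbContext
{
    public SagaPersistenceDbContext(DbContextOptions<SagaPersistenceDbContext> options)
        : base(options) { }

    public DbSet<OrderState> OrderStates => Set<OrderState>();

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<OrderState>(e =>
        {
            e.ToTable("OrderState", "saga");   // hypothetical schema keeps saga tables apart
            e.HasKey(s => s.CorrelationId);
        });
    }
}
```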
I wish to add custom SQL to my model creation.
(Right now I want to do that because I have used strongly typed ids in my domain model, so EF Core won't let me use .UseIdentityAlwaysColumn() on them. As of 2021 this is a still-open issue. Even if it did, I also want to add specific Postgres sequence options.)
A simple workaround would be a single line of ALTER TABLE ... ALTER COLUMN ... SQL run straight after the model creation.
I can see that MigrationBuilder.Sql() can run custom SQL. So:
Can ModelBuilder do custom SQL? I can't find it.
Alternatively, can I shoehorn a short Migration into OnModelCreating()?
I wish to keep all the data definition code in sync in one place, not have most of it in OnModelCreating but bits of it elsewhere.
The short answer to both your questions is no. Or, if I may use your phraseology, as of 2021 this is still not possible.
Seriously, EF Core is an ORM, so the main focus is on the M(apping). Physical database attributes are not a priority, given that one can use EF Core just to map to an existing database (a.k.a. database first). There is some limited support for indexes (not used by EF Core itself) and a small set of other physical attributes, but no views, synonyms, triggers etc. The only SQL supported is in fact HasDefaultValueSql.
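For reference, that single supported hook looks like this in the fluent mapping (the Order entity and context here are hypothetical; now() is the Postgres function):

```csharp
using System;
using Microsoft.EntityFrameworkCore;

public class Order   // hypothetical entity
{
    public int Id { get; set; }
    public DateTime CreatedAt { get; set; }
}

public class ShopDbContext : DbContext
{
    public DbSet<Order> Orders => Set<Order>();

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // HasDefaultValueSql stores the SQL as an annotation and emits it
        // as a column DEFAULT; it is the one raw-SQL hook the model has.
        modelBuilder.Entity<Order>()
            .Property(o => o.CreatedAt)
            .HasDefaultValueSql("now()");   // Postgres; use GETDATE() on SQL Server
    }
}
```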
I wish to keep all the data definition code in sync in one place, not have most of it in OnModelCreating but bits of it elsewhere.
OnModelCreating is creating the mappings. At the time it is called, there is no real database involved. The model could be created for generating a migration, but that's only one (a completely optional one) of the many usage scenarios. That's why you can't "execute" anything there. All you can do is to specify metadata (a.k.a. annotations) which then eventually are processed by the services responsible for different functionalities. The migration SQL generator is one of them, but it needs to understand these annotations when processing the corresponding operations. Which basically is the definition of supporting something or not.
In theory you could create your own annotations and provide custom metadata/fluent API for specifying them, but then you would also have to implement them for every database provider you want to support. This is a lot of work, practically impossible, as every database provider implements the migration SQL generator for its specific attributes and DDL dialect.
So, whether what you wish is better or not, the practical approach is to use what you get from the ORM. Which currently is MigrationBuilder.Sql(). No more, no less. That's all. Period.
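As a concrete sketch of that approach, a hand-edited migration can carry the ALTER TABLE from the question (the Orders table, column name and sequence options are hypothetical):

```csharp
using Microsoft.EntityFrameworkCore.Migrations;

public partial class MakeOrderIdIdentityAlways : Migration
{
    protected override void Up(MigrationBuilder migrationBuilder)
    {
        // Custom Postgres DDL that the fluent model cannot express for
        // a strongly typed id column, including sequence options.
        migrationBuilder.Sql(
            @"ALTER TABLE ""Orders""
              ALTER COLUMN ""Id""
              ADD GENERATED ALWAYS AS IDENTITY (START WITH 1000 INCREMENT BY 10);");
    }

    protected override void Down(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.Sql(
            @"ALTER TABLE ""Orders"" ALTER COLUMN ""Id"" DROP IDENTITY;");
    }
}
```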
To recap shortly: if the question is whether there is some hidden "magic" way which you just can't find, there isn't.
In my spare time, I'm trying to restart my effort to learn F#. I'm doing so by trying to create a simple application that will allow me to analyze my financial transactions.
My first attempt at creating this application failed due to the persistence step. I used SQL and the EntityFramework package, but the latter generated database entities, which I did not want to use throughout my application since they're all mutable (I think..). Instead I had to map these database entities to domain entities. Much manual glue code later it worked... until I found a bug and was forced to replace much of that glue code. That was the tipping point that made me quit.
On SO I found a question describing my situation, e.g. Saving F# types to a database. Mark Seemann suggested that the pain of mapping can be overcome if I'd not use objects for persistence. At work I have recently been introduced to MongoDB, which at least saves me the pain of mapping from database entities to domain entities. These entities all need some ID, and I chose to use an ObjectId from Mongo. Oops, there comes the déjà vu: in order not to have my domain entities depend on Mongo, I will once more have to create database and domain entities... as well as the mapping. Bah & ugh.
In C# I'm used to doing such mapping with tools like AutoMapper, but they don't really work for F#'s special types. So now I'm wondering what Mark Seemann meant by "using objects for persistence". How is this solved in F#? So far I haven't been able to find more info on this topic besides the aforementioned question on SO.
Working on a brand new project from the ground up. That means the data model is in constant flux, doubly so because things are, inevitably, not as well planned as they should be. Model classes are being created and changed fairly regularly.
The plan was to use the latest version of EF with all the neat code-first stuff in it. But we're constantly tripping over the limitations the framework has in terms of adding or updating tables. The initialization options seem to allow only the complete deletion and re-creation of the database, which isn't really ideal.
I've had a look at the migrations. But this seems like a sledgehammer to crack a nut: we don't need to detail every single small change and update with a new migration scaffold.
Are there some better strategies to deal with this? For instance, I started writing some unit tests to pre-populate one of the contexts with some test data, but because this causes the whole Db to drop and re-create, it causes problems with all the other contexts. Or perhaps making use of a custom initialiser to seed the data for us? How can we easily exclude these in production code?
We're also wondering about perhaps abandoning code-first and going back to EDMX diagrams. At least that way changes result in updated SQL commands which can be run directly against the database.
Any suggestions gratefully received.
I think, imho, that:
as the database schema must at least match your model, you should/must detail every single change, and code-first migrations allow that and let you trace the changes over time
code-first migrations can also migrate the database schema for you
code-first migrations can also produce SQL scripts that let you migrate the schema yourself
For these reasons code first is as good as (if not better than) the EDMX approach
Please take a few minutes to work through http://msdn.microsoft.com/en-us/data/jj591621.aspx
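As a sketch of how that might look in EF6 (the ShopContext and Customer types here are hypothetical), automatic migrations plus a Seed override cover both the frequent small model changes and the test-data seeding question without dropping the database:

```csharp
using System.Data.Entity;
using System.Data.Entity.Migrations;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class ShopContext : DbContext   // hypothetical context
{
    public DbSet<Customer> Customers { get; set; }
}

internal sealed class Configuration : DbMigrationsConfiguration<ShopContext>
{
    public Configuration()
    {
        // Lets EF apply small schema changes without a scaffold per change;
        // Add-Migration is still available for changes worth recording.
        AutomaticMigrationsEnabled = true;
    }

    // Seed runs after every migration; AddOrUpdate keeps it idempotent,
    // so test data can be maintained without drop-and-recreate.
    protected override void Seed(ShopContext context)
    {
        context.Customers.AddOrUpdate(c => c.Name, new Customer { Name = "Test customer" });
    }
}

// At startup, instead of a drop-and-recreate initializer:
// Database.SetInitializer(new MigrateDatabaseToLatestVersion<ShopContext, Configuration>());
```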
One other point, always imho and in a perfect world: if you unit test the business side of your model you should not need the DAL; use generic collections instead. But be aware of the different behaviour of LINQ to Objects vs LINQ to Entities, for example concerning case sensitivity.
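A small illustration of that difference (the names here are hypothetical):

```csharp
using System.Collections.Generic;
using System.Linq;

var names = new List<string> { "Alice" };

// LINQ to Objects: string comparison is ordinal and case-sensitive -> false.
bool foundInMemory = names.Any(n => n == "alice");

// LINQ to Entities translates the same predicate to SQL, where the
// comparison follows the column's collation; with the common
// case-insensitive SQL Server collations the query could match "Alice",
// so in-memory tests and database queries can disagree.
```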
I have a project with an existing database which was initially created for a legacy application. It works fine, but over time quite a few of the tables / fields have been lost or under-utilized, but the historical data MAY be useful someday so they're not going anywhere.
Enter 2012 ('13) and Entity Framework 5, an ORM with built-in POCO generation (nice add!). So bang.. get a connection to the Oracle database, gen up a context and some POCO's.. suh-weet!! But wait.. my POCO's aren't really the POCO's I would like to deal with... There's a bunch of fields which I don't need anymore (not to say I'll NEVER need them, but I can't know for sure), so now I've got these POCO's which are basically bloated table mappers... So what should I do?
I see a few solutions here..
1). I could throw them around and only use the fields that I need.
2). I could get into the Model Surface and start axing the unused fields.
3). "Code-First" approach and tie the objects into the existing DB, it's a large DB though (i'm pretty sure this is possible, right?)
4). Create my own POCO / DTO's in it's own model project and these will essentially become my "domain model", but the mapping back into the context could be painful..
Lastly, do these POCO's / DTO's need to be in their own project?? What is there REALLY to gain.. seeing things like "YAGNI", I feel like they can sit right under the .edmx and never bother anyone..
On a side note, I will be needing a few of these via JSON too, so serializability needs to be considered..
Can I just partial-class the generated POCO's and only "Attribute" the properties I'll be needing?
Anyhow, it'd be great to hear from past experience, or thoughts on the matter..
I could see this being in Programmers, but I figured I'd start it here.
We have a very similar situation, a large legacy DB2 database of which we need small portions of specific tables for our applications.
To do this we used Entity Framework code-first models for the relevant subsections of the data we were interested in. This meant we could do a few important things:
remove irrelevant data from the model to make code more discoverable
rename fields inside our model and map them to names that make sense in the app rather than the existing column names (see the sketch after this list)
reduce the volume of data pulled back by queries (i.e. our selects don't grab all the extra bits)
where two formats of data exist, use the modern standard rather than the historical format
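A minimal sketch of that renaming, assuming EF6 fluent mapping; the Customer entity and the legacy table/column names are hypothetical:

```csharp
using System.Data.Entity;

// Only the columns the app cares about are mapped, under names
// that make sense in the app rather than the legacy schema.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class AppContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        var customer = modelBuilder.Entity<Customer>();
        customer.ToTable("TCUSTMR");                        // hypothetical legacy table name
        customer.HasKey(c => c.Id);
        customer.Property(c => c.Id).HasColumnName("CUST_ID");
        customer.Property(c => c.Name).HasColumnName("CUST_NM");
        // Columns not mapped here are never selected by queries.
    }
}
```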
This works out really well for us, however a couple of things to note:
if you are writing (not just reading), make sure you include all required fields in the model
you can generate your CF classes, but you will have to trim them a bit
generating from non-MSSQL databases can sometimes be trickier
In terms of JSON serialisation, we do this too; however we use a different model for it and use AutoMapper to translate. In most cases you should be able to serialise without needing to add extra attributes, but if they are required you can just add them to your POCOs alongside any EF attributes.
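A sketch of that separation, assuming the Customer entity from the earlier sketch and a classic AutoMapper API; the CustomerDto type is hypothetical, and no extra configuration is needed when property names line up:

```csharp
using AutoMapper;

// Separate DTO so serialisation concerns never leak into the EF model.
public class CustomerDto
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class CustomerMapping
{
    // Property names match, so CreateMap needs no member configuration here.
    private static readonly IMapper Mapper =
        new MapperConfiguration(cfg => cfg.CreateMap<Customer, CustomerDto>())
            .CreateMapper();

    public static CustomerDto ToDto(Customer entity) => Mapper.Map<CustomerDto>(entity);
}
```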
I'm currently assigned to a project where the legacy system is designed in a horrible way, with too much focus on database design. I'm trying to put together a new design where the customer can migrate the legacy system bit by bit.
They are currently using EF 4.1, BUT not the code-first approach; the entity descriptions/mappings are located in an EDMX file. They reverse engineer every time they want to extend the model (first make changes in the database, then reflect them upwards into the model layer through a custom tool).
What I would like to know is if anyone has used BOTH the EDMX and the code-first approach with mapping classes. And are there drawbacks to know about?
You can use EDMX and code-first mapping together only if you have a separate context type for each approach (you cannot mix approaches in a single context type). That is probably the biggest disadvantage, because it leads to more complex code and maintenance.
For example, if you need to have some entity in both context types, to use it with both new and legacy code, you must maintain its mapping twice. You must also be very careful not to duplicate the entity class itself: your code-first context must use the class generated by the custom tool for the EDMX, but that will not be possible if they are not using POCOs in the current solution.
Another problem will be database integrity. If you need to save changes to both context types in a single transaction, you will have to use TransactionScope and a distributed transaction = MSDTC (each context instance handles its own database connection).
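A minimal sketch of that, assuming LegacyEdmxContext and NewCodeFirstContext are the two hypothetical context types described above:

```csharp
using System.Transactions;

public static class CrossContextSave
{
    // Because each context opens its own connection, the scope escalates
    // to a distributed transaction and needs MSDTC running.
    public static void SaveBoth()
    {
        using (var scope = new TransactionScope())
        using (var legacy = new LegacyEdmxContext())
        using (var modern = new NewCodeFirstContext())
        {
            // ... apply changes on both contexts ...
            legacy.SaveChanges();
            modern.SaveChanges();

            // Both saves commit together, or neither does.
            scope.Complete();
        }
    }
}
```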
If you are sure that the whole system will be migrated, you can probably think about using code first instead of EDMX (but be aware that code-first mapping and DbContext generally offer a more limited feature set). If you are not sure that you will be able to complete the whole migration, don't even think about using code first, because leaving the system in a state where half uses code first and half EDMX will only make everything worse and much more horrible.
Being sure is a little bit theoretical, because in software development the only thing you can be sure about is that requirements and circumstances will change. That means the migration should be very carefully considered.
I was also stuck with this problem. What I found was that you can model the database and "generate the database from the model" in an "ADO.NET Entity Data Model" project.
But you cannot create stored procedures in that project; all you can do is import existing stored procedures from the server.
But if you do not want to create the stored procedures on the server directly, you can create another project in VS, a "SQL CLR Database Project", code your stored procedures and triggers in that project, and deploy them to the server.
Then you can import these stored procedures back into the "ADO.NET Entity Data Model" project via "Update Model from Database".
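A minimal SQL CLR stored procedure sketch; the table and column names are hypothetical, and "context connection=true" is the in-process connection available inside SQL Server:

```csharp
using System.Data.SqlClient;
using Microsoft.SqlServer.Server;

public static class StoredProcedures
{
    // Deploying the project registers this on the server, after which it
    // can be imported into the model like any T-SQL stored procedure.
    [SqlProcedure]
    public static void GetCustomerCount()
    {
        using (var connection = new SqlConnection("context connection=true"))
        {
            connection.Open();
            var command = new SqlCommand("SELECT COUNT(*) FROM Customers", connection);
            SqlContext.Pipe.ExecuteAndSend(command);   // stream the result to the caller
        }
    }
}
```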
Likewise, you can develop your server project using both approaches (code first and model first).
Hope this will add something more :)