I am working on a very large application with over 100 modules, and almost 500 tables in the database. We are converting this application to WPF/WCF using Entity Framework 4.2 Code First. Our database is SQL Anywhere 11. Because of the size of the database, we are using an approach similar to Bounded DbContexts, as described here http://msdn.microsoft.com/en-us/magazine/jj883952.aspx by Julie Lerman.
Each of our modules creates its own DbContext, modeling only the subset of the database that it needs.
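Roughly, our bounded contexts look like this (simplified; the entity and context names here are illustrative):

public class ShippingContext : DbContext
{
    // Maps only the tables this module needs.
    public DbSet<Order> Orders { get; set; }
    public DbSet<Shipment> Shipments { get; set; }
}

public class BillingContext : DbContext
{
    public DbSet<Order> Orders { get; set; }
    public DbSet<Invoice> Invoices { get; set; }
}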
However, we have run into a serious problem with the way DbContexts are created. Our modules are not neatly self-contained, nor can they be. Some contain operations that are called from several other modules. And when they are, they need to participate in transactions started by the calling modules. (And for architectural reasons, DTC is not an option for us.) In our old ADO architecture, there was no problem passing an open connection from module to module in order to support transactions.
I've looked at various DbContext constructor overloads, and tried managing the transaction from EntityConnection vs. the StoreConnection, and as far as I can tell, there is no combination that allows ModuleA to begin a transaction, call a function in ModuleB, and have ModuleB's DbContext participate in the transaction.
It comes down to two simple things:
Case 1. If I construct DbContextB with DbContextA's EntityConnection, DbContextB is not built with its own model metadata; it reuses DbContextA's metadata. Since the Contexts have different collections of DbSets, all ModuleB's queries fail. (The entity type is not a part of the current context.)
Case 2. If I construct DbContextB with ModuleA's StoreConnection, DbContextB does not recognize the StoreConnection's open transaction at the EntityConnection level, so EF tries to start a new transaction when ModuleB calls SaveChanges(). Since the database connection in fact has an open transaction, this generates a database exception. (Connection does not support parallel transactions.)
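In code, the two attempts look roughly like this (context names are illustrative, and ModuleBContext is assumed to expose the corresponding DbContext constructor overloads):

// Case 1: pass ModuleA's EntityConnection -- DbContextB is built on A's model metadata.
var entityConn = ((IObjectContextAdapter)contextA).ObjectContext.Connection;
var contextB = new ModuleBContext(entityConn, contextOwnsConnection: false);

// Case 2: pass ModuleA's store connection -- EF starts its own transaction in SaveChanges().
var storeConn = ((EntityConnection)entityConn).StoreConnection;
var contextB2 = new ModuleBContext(storeConn, contextOwnsConnection: false);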
Is there any way to 1) force DbContextB to build its own model in Case 1, or 2) get DbContextB's ObjectContext to respect its StoreConnection's transaction state in Case 2?
(By the way, I saw some encouraging things in the EF6 alpha, but after testing it out, I found the only difference was that I could create DbContextB on an open connection. Even then, the above two problems still exist.)
I suggest you try using the TransactionScope object to manage this task for you. As long as all of your DbContexts use the same connection string (not connection object), the transaction should not try to enlist MS-DTC.
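A minimal sketch of that suggestion, assuming both contexts are configured with the same connection string (context and entity names are illustrative; requires a reference to System.Transactions):

using (var scope = new TransactionScope())
{
    using (var contextA = new ModuleAContext())
    {
        contextA.Orders.Add(new Order { Number = "A-1001" });
        contextA.SaveChanges(); // enlists in the ambient transaction
    }

    using (var contextB = new ModuleBContext())
    {
        contextB.AuditEntries.Add(new AuditEntry { Message = "order created" });
        contextB.SaveChanges(); // same connection string, so per the above no DTC enlistment
    }

    scope.Complete(); // both SaveChanges commit together; without this, both roll back
}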
I'm getting a PK violation exception when using an EF Core 2.1 DbContext in an Azure QueueTrigger function. My guess is that it's due to the nature of DbContext not being thread-safe and the Azure Function running different instances in parallel. I have read quite a few posts, but I can't find a good approach to solve this.
Here is my scenario (producer-consumer pattern):
I have a scheduled Azure Function that calls an API to get Projects from different external systems. To get all the required info for a project, I need to run different queries against other external services, so I'm decoupling this into another Azure Function: the scheduled function just queues a message per Project, such as "Sync Project ID 101".
Another QueueTrigger function fires every time a message is queued, which means different instances running in parallel. This function must gather all the data for a specific Project, and that means more calls to other external services / APIs to (somehow) aggregate all the info about a Project. IMHO it's good to do it that way, as I can process multiple Projects in parallel, and I can scale the function if I need to.
Once I have all this Project info, I want to persist it in a SQL DB using EF Core (and here comes the issue).
Project data includes the Users in the Project, and each user has a specific GUID as its PK (coming from the external system). That means I can have repeated user IDs across different function instances, and here is the problem: when I try to persist user info in a SQL table, I can get a PK duplication exception, as multiple function instances can try to insert the same user at the same time (when instance A checks whether the user exists, it gets false, but another instance B is actually adding this user, so when instance A tries the insert, it fails).
I guess I could lock the DbContext somehow, but I'm not sure that is a good idea, as I also have a website running queries against the SQL DB (read-only queries for now, but there could be updates in the future too).
Another idea could be to send the entire Project info to another queue / blob file, and have another function in Singleton mode that inserts the data into SQL.
I've created this project, which simplifies my scenario but is enough to reproduce the issue and understand the problem:
https://github.com/luismanez/queuetrigger-efcore-multithreading
Any other ideas or recommended approaches? (I'm open to changing the architecture if we find something better.)
Many thanks!
An easier way could be to do some kind of upsert in the database. There is a sample of how to do that with EF Core: https://www.flexlabs.org/2018/02/adding-upsert-support-for-entity-framework-core
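If the store is SQL Server, here is a hedged sketch of such an upsert via raw SQL from EF Core 2.1 (table and column names are illustrative; the FlexLabs library linked above wraps the same pattern in a fluent API):

// MERGE performs an atomic "insert or update", avoiding the
// check-then-insert race between function instances.
var sql = @"
MERGE INTO Users WITH (HOLDLOCK) AS target
USING (SELECT @id AS Id, @name AS Name) AS source
ON target.Id = source.Id
WHEN MATCHED THEN
    UPDATE SET Name = source.Name
WHEN NOT MATCHED THEN
    INSERT (Id, Name) VALUES (source.Id, source.Name);";

context.Database.ExecuteSqlCommand(sql,
    new SqlParameter("@id", user.Id),
    new SqlParameter("@name", user.Name));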
Our system manages N databases which all share the same tables (standard tables), but in addition each database has its own specific tables too.
The modules which access the standard tables are programmed in the core. To access the specific tables we load assemblies by reflection, with a specific assembly for every database.
How can we implement an operation that works with both standard tables (core code) and specific ones (loaded via reflection), with the whole operation in a single transaction?
We cannot use two EF contexts, because we are not able to use distributed transactions.
Thank you in advance
If you use the same connection string for both DbContexts, you do not need a distributed transaction with EF6, as you can pass a SqlTransaction that you own to each DbContext - see https://msdn.microsoft.com/en-us/library/dn456843(v=vs.113).aspx under the "Passing an existing transaction to the context" heading
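A minimal sketch of that approach, assuming your contexts expose the DbContext(DbConnection, bool) constructor (context names are illustrative):

using (var conn = new SqlConnection(connectionString))
{
    conn.Open();
    using (var transaction = conn.BeginTransaction())
    {
        // false = the contexts must not dispose the shared connection
        using (var coreContext = new CoreContext(conn, contextOwnsConnection: false))
        using (var pluginContext = new PluginContext(conn, contextOwnsConnection: false))
        {
            coreContext.Database.UseTransaction(transaction);
            pluginContext.Database.UseTransaction(transaction);

            // ... changes to standard and specific tables ...
            coreContext.SaveChanges();
            pluginContext.SaveChanges();
        }

        transaction.Commit(); // both contexts' changes commit or roll back as one
    }
}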
Our company has been using Telerik Open Access for years. We have multiple projects using it, including some in development and some in production that need to be updated. Because Telerik no longer updates or supports Open Access, we are having a variety of problems. We've got users who have to go to another workstation because we can't get Open Access onto their computers, and we've got projects where we can't add or update tables because the visual designer doesn't work in modern Visual Studio versions. So my question is: how do we convert these, and what do we convert them to?
I've heard of Microsoft Entity Framework, and we used to just call stored procedures instead of having a separate data project. Obviously our clients aren't going to pay us for the hours to switch, so we need something that works quickly. How do we convert our existing Telerik Open Access project to Entity Framework, straight SQL queries, or some other data layer option?
Here's an example of what we have currently.
A separate Visual Studio project that acts as our data layer where all the code was created by Telerik Open Access's visual designer:
We then have a DataAccess.cs class in our main project that creates the instance of the data layer:
Then we call it by using linq statements in the main project:
I don't know OA hands-on, so I can only put in my two cents.
I'm afraid this isn't going to be an easy transition. I've yet to see the first seamless transition from one data layer implementation to another (and I've seen a few). The main cause for this is that IQueryable is a leaky abstraction. That is, the data layer exposes IQueryables, but
it doesn't support all features of the interface, and
it adds its own features, and
it's got its own interpretation of how to implement the features that are supported.
This means that if you're going to port your data layer to EF, you may notice that some LINQ queries throw runtime errors because they contain unsupported .NET methods/properties (for instance DateTime.Date), or perform worse -- or better, or return data in subtly different shapes or sort orders.
Some important OA features that are missing in EF:
Runtime mappings (EF's mapping is static)
Bulk update/delete functions (with EF: only by using third-party libraries)
Second-level cache
Profiler and Tuning Advisor
Streaming of large objects
Mixing database-side and client-side evaluation of LINQ queries (EF6: database-side evaluation only)
On the other hand, the basic architectures of OA and EF don't seem to be too different. They both -
support POCOs
work with simple navigation properties
have a fluent mapping API
support LINQ through IQueryable<T>, where T is an entity class.
most importantly: both revolve around the Unit of Work and Repository patterns (for EF: DbContext and DbSet, respectively).
All in all, it's going to be a delicate process of converting, trying and testing. One good thing is that your current DAL is already abstracted to a certain extent. Another is that the syntax doesn't even look too different. Where you have ...
dbContext.Add(newDockReport);
dbContext.SaveChanges();
... using EF this would become ...
dbContext.DockReports.Add(newDockReport);
dbContext.SaveChanges();
With EF-core it wouldn't even have to change one bit.
But that's another important choice to make: EF6 or EF-core? EF6 is stable, mature, feature-rich, but at the end of its life cycle (a phrase you've probably come to hate by now). EF-core, on the other hand, is the future, but is presently struggling to get bug-free in its major functions, not yet as feature-rich as EF6 (and some features will never return, although other new features are great improvements compared to EF6). At the moment, I'd be wary of using EF-core for enterprise applications, but I'm pretty sure that a year from now this is not an issue any more.
Whichever way you go, I'd start the process by writing large amounts of integration tests, if you haven't done so already. The advantage of integration tests is that you can avoid the hassle of mocking either framework first (which isn't trivial).
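For instance, a hedged sketch of one such integration test (xUnit here; the context, entity, and connection-string names are illustrative):

public class DockReportDalTests
{
    [Fact]
    public void Add_then_query_roundtrips_a_DockReport()
    {
        using (var db = new AppDbContext("TestDbConnection"))
        {
            var report = new DockReport { Dock = "A1", CreatedOn = DateTime.UtcNow };
            db.DockReports.Add(report);
            db.SaveChanges();

            // Pin down current DAL behavior so the ported version can be verified against it.
            var fetched = db.DockReports.Single(r => r.Id == report.Id);
            Assert.Equal("A1", fetched.Dock);
        }
    }
}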
I have never heard of a tool that can do that. Expect it to take time.
You have to figure out how to do it by yourself. Here is the safest way to migrate that I know:
First, take a simple page that uses your data layer; it will be your test page, a simple one where you can adapt the LINQ logic.
Add a LINQ to SQL class: right click > Add > LINQ to SQL Classes.
Drop in only the tables useful for this page, and add the links between them if needed.
In the test page, create a new data context from the LINQ to SQL classes.
Use it, fixing the types and renaming what has to be renamed.
Fix errors until it compiles.
Stock up on coffee and anything else that can boost your brain.
Add tables to your context to match the ones you had in Telerik Data Access.
Debug for days.
Come back with new questions on how to fix a new issue twice a day.
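For the test-page step, the usage would look roughly like this (AppDataContext standing in for whatever name the designer generates):

// Query through the new LINQ to SQL context instead of the Telerik one.
using (var db = new AppDataContext())
{
    var reports = db.DockReports.Where(r => r.Dock == "A1").ToList();
}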
To help with the dbContext.Add() difference between the two frameworks, you could use this extension method in EF 6.x:
public static void Add<T>(this DbContext db, T entityToCreate) where T : class
{
    // Resolve the DbSet for T at runtime and add the new entity to it.
    db.Set<T>().Add(entityToCreate);
    db.SaveChanges(); // note: commits immediately, unlike a plain DbSet.Add
}
then you could do:
db.Add(obj);
See Gert Arnold's answer: In Entity Framework, how do I add a generic entity to its corresponding DbSet without a switch statement that enumerates all the possible DbSets?
I'm using EF (EF Core, actually, with ASP.NET Core on OSX, but I believe this is more of a general "newbie-style" EF question, so please read on...)
I built a little logging routine that uses EF to publish log entries to my database. Sort of like this, called from a repository class:
WebLog log = new WebLog(source, path, message);
Context.WebLogs.Add(log);
Context.SaveChanges();
Where WebLog is a simple model class, Context.WebLogs is a DbSet<WebLog> collection, and Context is obviously the DbContext. I believe this is quite straightforward.
But my question is this: if I continue to add new log entries to the Context.WebLogs collection and I never do anything like reboot my server, isn't the collection just going to grow without bounds? Is there some kind of "purge" or "flush" action I can take periodically to manage memory usage (without affecting the committed rows in the database, of course--I want those to persist)? Or is DbSet some sort of special collection that won't do this?
As mentioned by DevilSuichiro above, the recommended approach is to limit the lifetime of DbContext instances. E.g. in a web application you typically use a DbContext instance per request, so an unbounded number of added entities doesn't become a problem.
The closest thing to a "flush" operation is SaveChanges(), but that method will not try to remove references to tracked entities, as DbContext is designed to be reused after SaveChanges().
In previous versions of EF we had a Detach() API that you could use to get rid of an individual tracked reference, but we don't have that API in DbContext or anywhere in EF Core.
BTW, having an instance of DbContext that is shared between multiple requests is extremely problematic because DbContext is not thread-safe.
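A minimal sketch of the per-operation lifetime this implies for the logging routine above (WebLogContext is an illustrative name):

public void Log(string source, string path, string message)
{
    // A fresh, short-lived context per operation, so nothing accumulates.
    using (var context = new WebLogContext())
    {
        context.WebLogs.Add(new WebLog(source, path, message));
        context.SaveChanges();
    } // disposing the context releases every tracked WebLog entity
}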
I'm starting a new project and have decided to try to incorporate DDD patterns and also include LINQ to Entities. When I look at EF's ObjectContext, it seems to be performing the functions of both the Repository and Unit of Work patterns:
Repository in the sense that the underlying data-level interface is abstracted from the entity representation, and I can request and save data through the ObjectContext.
Unit of Work in the sense that I can write all my inserts/updates to the ObjectContext and execute them all in one shot when I do a SaveChanges().
It seems redundant to put another layer of these patterns on top of the EF ObjectContext, doesn't it? It also seems that the model classes can be incorporated directly on top of the EF-generated entities using 'partial class'.
I'm new at DDD so please let me know if I'm missing something here.
I don't think that the Entity Framework is a good implementation of Repository, because:
The object context is insufficiently abstract to do good unit testing of things which reference it, since it is bound to the DB access. Having an IRepository reference instead works much better for creating unit tests.
When a client has access to the ObjectContext, the client can do pretty much anything it cares to. The only real control you have over this at all is to make certain types or properties private. It is hard to implement good data security this way.
On a non-trivial model, the ObjectContext is insufficiently abstract. You may, for example, have both tables and stored procedures mapped to the same entity type. You don't really want the client to have to distinguish between the two mappings.
On a related note, it is difficult to write comprehensive and well-enforced business rules and entity code. Indeed, whether or not this is even a good idea is debatable.
On the other hand, once you have an ObjectContext, implementing the Repository pattern is trivial. Indeed, for cases that are not particularly complex, the Repository is something of a wrapper around the ObjectContext and the Entity types.
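A minimal sketch of the abstraction argued for in the first point (member names are illustrative):

public interface IRepository<T> where T : class
{
    T GetById(int id);
    IEnumerable<T> Find(Expression<Func<T, bool>> predicate);
    void Add(T entity);
}

// Clients depend on IRepository<T>, so unit tests can substitute an in-memory
// fake instead of hitting the database through the ObjectContext.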
I would say that you should look at the ObjectContext as your UnitOfWork, and not as a repository.
An ObjectContext cannot be a repository, imho, since it is 'too generic'.
You should create your own Repositories, which have specialized methods (like GetCustomersWithGoldStatus for instance) next to the regular CRUD methods.
So what I would do is create repositories (one for each aggregate root), and let those repositories use the ObjectContext.
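For instance, a sketch of such a specialized repository (the entity, context, and status names are illustrative):

public class CustomerRepository
{
    private readonly MyEntities _context; // the generated ObjectContext

    public CustomerRepository(MyEntities context)
    {
        _context = context;
    }

    // Regular CRUD
    public Customer GetById(int id)
    {
        return _context.Customers.Single(c => c.Id == id);
    }

    public void Add(Customer customer)
    {
        _context.Customers.AddObject(customer);
    }

    // Specialized method next to the regular CRUD ones
    public List<Customer> GetCustomersWithGoldStatus()
    {
        return _context.Customers.Where(c => c.Status == "Gold").ToList();
    }
}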
I like to have a repository layer for the following reasons:
EF gotchas
When you look at some of the current tutorials on EF (the Code First version), it is apparent there are a number of gotchas to be handled, particularly around object graphs (entities containing entities) and disconnected scenarios. I think a repository layer is great for wrapping these up in one place.
A clear picture of data access mechanisms
A repository gives a specific picture as to how the BL is accessing and updating the data store. It exposes methods that have a clear single purpose and can be tested independently of the BL. A standard example from the textbooks: Find() to find a single entity. A more application-specific example: Clear() to clear down a db table.
A place for optimizations
Inevitably you come up against performance hits when using vanilla EF. I use the repository to hide the optimization mechanisms from the BL.
Examples,
GetKeys() to project cached keys from the tables (for insert/update decisions). Reading only the key is faster and uses less memory than reading the full entity.
Bulk load via SqlBulkCopy. EF inserts by individual SQL statements. If you want a single statement to insert multiple rows, SqlBulkCopy is a good mechanism. The repository encapsulates this and provides metadata for SqlBulkCopy. As well as the Insert method, you need StartBatch() and EndBatch() methods, which is also an argument for a UnitOfWork layer.
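A hedged sketch of that batch API wrapped around SqlBulkCopy (the table, column, and type names are illustrative):

public class MeasurementRepository
{
    private readonly string _connectionString;
    private DataTable _batch;

    public MeasurementRepository(string connectionString)
    {
        _connectionString = connectionString;
    }

    public void StartBatch()
    {
        // Buffer rows client-side; the schema mirrors the target table.
        _batch = new DataTable("Measurements");
        _batch.Columns.Add("Id", typeof(Guid));
        _batch.Columns.Add("Value", typeof(double));
    }

    public void Insert(Guid id, double value)
    {
        _batch.Rows.Add(id, value);
    }

    public void EndBatch()
    {
        // One bulk round trip instead of an INSERT statement per row.
        using (var bulkCopy = new SqlBulkCopy(_connectionString))
        {
            bulkCopy.DestinationTableName = "Measurements";
            bulkCopy.WriteToServer(_batch);
        }
        _batch = null;
    }
}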