Entity Framework Order of Operations

I have a transaction-type system where an object is created and stored, sent off to another service for further processing, and then updated upon return. I have been doing some quality checks in my system and I noticed that a handful of records are logged in the secondary system but not in my main application. Is this possible? Does context.SaveChanges() move on before returning a result?

Related

Eclipselink disable cache for stored procedure

I have two stored procedure calls that return a User entity. One looks to see if the user is registered by two parameters not included in the user entity. If the procedure does not return any users, a second stored procedure is called to register that user.
The behavior I'm seeing is that when called in this order, the second stored procedure returns a User entity from the cache that has nearly all the fields as null. When I disable caching it returns the user object appropriately. It would seem that the first call is caching the user object.
In normal operation where a user is logging in, I want it to cache, so I do not want to disable caching for the first call. I want the second stored procedure call to not use the cache. After doing some research and testing, I've found only a few options.
This doesn't work on a stored procedure:
proc.setHint("javax.persistence.cache.retrieveMode", CacheRetrieveMode.BYPASS);
java.lang.IllegalArgumentException: Query linkUser, query hint javax.persistence.cache.retrieveMode is not valid for this type of query.
This looks like it evicts the cache entries for all Users:
em.getEntityManagerFactory().getCache().evict(User.class);
And the remaining options either disable the cache for all instances of the entity or across the whole application.
How can I not use cache for a single stored procedure call with Eclipselink?
Bonus: Why would a stored procedure call that returns a null user be cached?
The JPA specification requires that all entities returned from JPA queries (which includes native and stored procedure queries) be managed, which implies they are also cached to maintain object identity. If your first query returns an incomplete entity, that incomplete entity will be cached too. Applications need to be careful that queries returning entities return a complete set of data, or they can corrupt the cache. They should also note that their entities may be pulled from the cache instead of being rebuilt from the query's data, and may want to return plain Java objects (constructor queries) rather than JPA entities.
For the answer to the first part, see https://stackoverflow.com/a/4471109/496099
I found a Java EE tutorial that shows how to evict an individual cache entry. I'm still not sure why it's even being cached, because the first call never has the userId.
Cache cache = em.getEntityManagerFactory().getCache();
cache.evict(User.class, userId);

Entity Framework Transaction across multiple SaveChanges

I am creating an application which uses WCF (4.5), EF (6.1), Unity (3.5) and Unity3.Wcf (3.5)
The application needs to run a monthly process which checks for changes that have happened in the last month and creates a record for an approval process.
This process will be triggered by a call to a WCF service method.
This is the basic logic:
Get collection of Things
For each Thing:
  Get collection of ThingChanges
  Calculate changed Amount
  Create New ThingApproval
  Update each ThingChange in ThingChanges with ThingApproval.ID
Now, as far as I am aware, in order to get ThingApproval.ID I need to call SaveChanges after creating the new ThingApproval, which populates it with the ID from the DB. I then need to do a further SaveChanges, either after each update or once after the foreach completes, to commit all the updates.
If any part of this process fails, it needs to roll back ALL changes, back to before the first SaveChanges.
How can I implement this?
I ended up implementing the GNaP.Data.Scope.EntityFramework package, which gives full control of the DB context, including transactions.
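As a rough alternative sketch (not what the answer above used), EF 6.1's built-in Database.BeginTransaction() can also wrap multiple SaveChanges calls in a single transaction. The context and entity names below (MyDbContext, Things, ThingChanges, ThingApprovals and their properties) are assumed for illustration only:

// Rough sketch only: MyDbContext, Things, ThingChanges, ThingApprovals and
// their properties are assumed names, not from the original question.
using (var context = new MyDbContext())
using (var transaction = context.Database.BeginTransaction())
{
    try
    {
        foreach (var thing in context.Things.ToList())
        {
            // Get the collection of ThingChanges for this Thing.
            var changes = context.ThingChanges
                .Where(c => c.ThingID == thing.ID && c.ThingApprovalID == null)
                .ToList();

            // Calculate the changed amount and create the new ThingApproval.
            var approval = new ThingApproval
            {
                ThingID = thing.ID,
                Amount = changes.Sum(c => c.Amount)
            };
            context.ThingApprovals.Add(approval);

            // First SaveChanges: populates approval.ID from the identity column.
            context.SaveChanges();

            // Update each ThingChange with the new ThingApproval.ID.
            foreach (var change in changes)
            {
                change.ThingApprovalID = approval.ID;
            }

            // Second SaveChanges: still part of the same transaction.
            context.SaveChanges();
        }

        transaction.Commit();   // all changes become permanent only here
    }
    catch
    {
        transaction.Rollback(); // reverts everything, including the first SaveChanges
        throw;
    }
}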

Breeze JS: Entity Errors preventing patch-up on the client

In my client application I am calling entityManager.saveChanges to send all currently changed entities from the client up to the server. Then, in the BeforeSaveEntity event on the server, I am performing some server-side validation on each entity to see if it should be excluded from the save map. So, for example, my entity may have a value for description that is too long. I return false from BeforeSaveEntity and generate a new EntityError, which is then added to the saveResult.EntityErrors collection. All of the valid records that haven't been excluded from the save map then save successfully, and my saveResult is returned to the client.
But because of this single entity error, the auto-patch-up of the returned entities does not occur back on the client. I looked at the source, and there seems to be a check that says if there is anything in the saveResult.EntityErrors collection, don't bother with the patch-up. But there was only one entity that was purposefully not saved, so I still want to be able to patch up the others.
Is this behavior by design? I want to be able to exclude certain entities from the save (which I can do using the BeforeSaveEntity event), but there doesn't seem to be any way of then getting the entity errors back to the client using the built-in mechanism without the full patch-up being abandoned.
Saves in Breeze are transactional if at all possible (some backend providers, like MongoDB, are not, because they don't support it). This means that if any failure is experienced with any entity within a save bundle, the entire save is reverted and an error is returned to the client. This is by design.

WF4 TransactionScope containing several custom activities with EF4 database updates

I have created several custom activities that update tables in my DB (in this case SQL Server Compact), using Entity Framework 4 with POCOs.
If I put more than one of these inside a WF4 TransactionScope activity, I'm running into problems: EF disposes the DB connection after the first activity has finished, and when the next DB activity tries to do a DB update, a new connection is opened. At that moment an exception is thrown.
System.Activities.WorkflowApplicationAbortedException : The workflow has been aborted.
----> System.Data.EntityException : The underlying provider failed on Open.
----> System.InvalidOperationException : The connection object can not be enlisted in transaction scope.
Do I have to keep the EF connection open during the whole transaction scope? How can I do that? Create an explicit custom activity for that, or is there a standard way?
My current workaround goes like this: I created a new code activity that creates our ObjectContext and explicitly calls dbContext.Connection.Open(). It returns the ObjectContext, which is then saved in a workflow variable. That one is passed to all the DB-related activities as an InArgument<>. Inside my DB activities, I use this ObjectContext if it is passed in; otherwise I create a new one.
This does work, but I'm not satisfied with the solution: it needs the new InArgument for every DB-related activity. In the workflow designer, I have to insert that special OpenDatabaseConnection activity inside the transaction scope, and then make sure that the correct variable is passed into all DB activities. This seems very inelegant and error-prone, especially if other team members have to use these DB activities.
What would be a better way to handle this?
The problem is that when you open a second connection in the same transaction scope, an attempt is made to promote the transaction to a distributed transaction (even though there's nothing distributed about it since you connect to the same database). SQL Server CE doesn't support this scenario.
What I would do is create a custom 'container' activity that opens (and closes) the connection and makes it available to child activities. This is still not optimal, but at least you no longer need to pass InArguments around. You get the following activity tree:
TransactionScope
  InitializeConnection
    Sequence
      CustomDataActivity1
      CustomDataActivity2
      CustomDataActivity3
InitializeConnection is a NativeActivity that uses NativeActivityContext.Properties to expose the connection (or the ObjectContext) to child activities.
Make sure you implement proper error handling to ensure you close the connection at all times.
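A minimal sketch of what such a container activity might look like, assuming an EF ObjectContext type called MyEntities and the execution-property name "ObjectContext" (both are illustrative, not from the original answer):

// Illustrative sketch only: MyEntities is an assumed ObjectContext type and
// the property name "ObjectContext" is arbitrary.
public sealed class InitializeConnection : NativeActivity
{
    // The child activity tree (e.g. the Sequence of data activities).
    public Activity Body { get; set; }

    protected override void Execute(NativeActivityContext context)
    {
        // Open one ObjectContext/connection for the whole scope and expose it
        // as an execution property visible to the child activities.
        var objectContext = new MyEntities();
        objectContext.Connection.Open();
        context.Properties.Add("ObjectContext", objectContext);

        if (Body != null)
        {
            context.ScheduleActivity(Body, OnCompleted, OnFaulted);
        }
    }

    private void OnCompleted(NativeActivityContext context, ActivityInstance instance)
    {
        CleanUp(context);
    }

    private void OnFaulted(NativeActivityFaultContext context, Exception exception, ActivityInstance instance)
    {
        CleanUp(context);   // the fault itself still propagates to the TransactionScope
    }

    private static void CleanUp(NativeActivityContext context)
    {
        var objectContext = (MyEntities)context.Properties.Find("ObjectContext");
        if (objectContext != null)
        {
            objectContext.Connection.Close();
            objectContext.Dispose();
        }
    }
}

Inside a child NativeActivity, the shared context can then be retrieved with context.Properties.Find("ObjectContext").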
NOTE: Distributed transactions are supported by the full SQL Server only through a Windows service called MSDTC (Microsoft Distributed Transaction Coordinator). You can find this one in your 'Local Services'. Since SQL Server CE is a database that should be able to operate completely standalone, it makes sense that it has no dependency on MSDTC. Therefore it has no support for distributed transactions.

New entity ID in domain event

I'm building an application with a domain model using CQRS and domain events concepts (but no event sourcing, just plain old SQL). There was no problem with events of the SomethingChanged kind. Then I got stuck implementing SomethingCreated events.
When I create an entity that is mapped to a table with an identity primary key, I don't know the Id until the entity is persisted. The entity is persistence-ignorant, so when publishing an event from inside the entity, the Id is simply not known; it's magically set only after calling context.SaveChanges(). So how/where/when can I put the Id in the event data?
I was thinking of:
Including the reference to the entity in the event. That would work inside the domain but not necessarily in a distributed environment with multiple autonomous systems communicating by events/messages.
Overriding SaveChanges() to somehow update events enqueued for publishing. But events are meant to be immutable, so this seems very dirty.
Getting rid of identity fields and using GUIDs generated in the entity constructor. This might be the easiest but could hit performance and make other things harder, like debugging or querying (where id = 'B85E62C3-DC56-40C0-852A-49F759AC68FB', no MIN, MAX etc.). That's what I see in many sample applications.
Hybrid approach: leave the identity alone and use it mainly for foreign keys and faster joins, but use a GUID as the unique identifier by which I pull the entities from the repository in the application.
Personally I like GUIDs for unique identifiers, especially in multi-user, distributed environments where numeric ids cause problems. As such, I never use database generated identity columns/properties and this problem goes away.
Short of that, since you are following CQRS, you undoubtedly have a CreateSomethingCommand and a corresponding CreateSomethingCommandHandler that actually carries out the steps required to create the new instance and persist the new object using the repository (via context.SaveChanges). I would raise the SomethingCreated event there rather than in the domain object itself.
For one, this solves your problem because the command handler can wait for the database operation to complete, pull out the identity value, update the object, and then pass the identity in the event. But, more importantly, it also addresses the tricky question of exactly when the object is 'created'.
Raising a domain event in the constructor is bad practice as constructors should be lean and simply perform initialization. Plus, in your model, the object isn't really created until it has an ID assigned. This means there are additional initialization steps required after the constructor has executed. If you have more than one step, do you enforce the order of execution (another anti-pattern) or put a check in each to recognize when they are all done (ooh, smelly)? Hopefully you can see how this can quickly spiral out of hand.
So, my recommendation is to raise the event from the command handler. (NOTE: Even if you switch to GUID identifiers, I'd follow this approach because you should never raise events from constructors.)
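A minimal sketch of this approach, using hypothetical names (CreateSomethingCommand, Something, MyDbContext, IEventBus, SomethingCreated) that are not from the original answer:

// Illustrative sketch only: the command, context, event and bus types here
// are assumed names.
public class CreateSomethingCommandHandler
{
    private readonly MyDbContext _context;
    private readonly IEventBus _eventBus;

    public CreateSomethingCommandHandler(MyDbContext context, IEventBus eventBus)
    {
        _context = context;
        _eventBus = eventBus;
    }

    public void Handle(CreateSomethingCommand command)
    {
        var something = new Something(command.Name);

        _context.Somethings.Add(something);
        _context.SaveChanges();   // the identity Id is populated here

        // Raise the event only after the row is persisted and the Id is known.
        _eventBus.Publish(new SomethingCreated(something.Id, something.Name));
    }
}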