Breeze JS : Entity Errors preventing patch-up on the client - entity-framework

In my client application I am calling entityManager.saveChanges to send all currently changed entities from the client up to the server. In the BeforeSaveEntity event on the server I then perform some server-side validation on each entity to decide whether it should be excluded from the save map. For example, an entity may have a description value that is too long, so I return false from BeforeSaveEntity and generate a new EntityError, which is added to the saveResult.EntityErrors collection. All of the valid records that haven't been excluded from the save map then save successfully, and my saveResult is returned to the client.

But because of this single entity error, the automatic patch-up of the returned entities does not occur back on the client. I looked at the source, and there seems to be a check that says: if there is anything in the saveResult.EntityErrors collection, don't bother with the patch-up. Only one entity was deliberately not saved, so I still want to be able to patch up the others.

Is this behaviour by design? I want to be able to exclude certain entities from the save (which I can do using the BeforeSaveEntity event), but there doesn't seem to be any way of then getting the entity errors back to the client using the built-in mechanism without the full patch-up being abandoned.

Saves in Breeze are transactional wherever the backend supports it (some providers, such as MongoDB, do not because they lack transactions). This means that if any entity within a save bundle fails, the entire save is reverted and an error is returned to the client. This is by design.
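Given that design, the built-in way to get entity errors back to the client is to let the failed validation reject the whole save rather than quietly dropping one entity. A rough server-side sketch follows (not the asker's actual code; EFContextProvider, EntityInfo, EntityError and EntityErrorsException are Breeze server classes, while AppDbContext, Product and the 100-character limit are made-up placeholders, and the exact namespaces vary with the Breeze server package version):

using System.Collections.Generic;
using Breeze.ContextProvider;        // namespaces differ across Breeze server versions
using Breeze.ContextProvider.EF6;

public class AppContextProvider : EFContextProvider<AppDbContext>
{
    protected override bool BeforeSaveEntity(EntityInfo entityInfo)
    {
        var product = entityInfo.Entity as Product;
        if (product != null && product.Description != null && product.Description.Length > 100)
        {
            // Throwing EntityErrorsException aborts the whole (transactional) save and
            // carries the errors back to the client's fail handler, instead of quietly
            // excluding one entity and losing the patch-up for the rest.
            throw new EntityErrorsException(new List<EntityError>
            {
                new EntityError
                {
                    EntityTypeName = typeof(Product).FullName,
                    PropertyName = "Description",
                    ErrorName = "DescriptionTooLong",
                    ErrorMessage = "Description is too long."
                }
            });
        }
        return true; // keep this entity in the save map
    }
}

public class AppDbContext : System.Data.Entity.DbContext
{
    public System.Data.Entity.DbSet<Product> Products { get; set; }
}

public class Product
{
    public int Id { get; set; }
    public string Description { get; set; }
}

If partially successful saves are genuinely required, the usual approach is to partition the entities into separate entityManager.saveChanges(...) calls on the client, so that each bundle succeeds or fails, and is patched up, as a unit.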

Related

How to properly use EFCore with SignalR Core (avoid caching entities)

I just found some really strange behaviour, which turns out not to be so strange at all.
My select statement (a query against the database) worked only the first time; the second time, the result came from the context rather than the database.
Inside a hub method I read some data from the database every 10 seconds and return the result to all connected clients. But if some API changes this data, the hub's context does not read the actual, current data.
In this thread I found this:
When you use EF it by default loads each entity only once per context. The first query creates entity instance and stores it internally. Any subsequent query which requires entity with the same key returns this stored instance. If values in the data store changed you still receive the entity with values from the initial query. This is called Identity map pattern. You can force the object context to reload the entity but it will reload a single shared instance.
So my question is: how do I properly use EF Core inside a SignalR Core hub method?
I could use AsNoTracking, but I would prefer some global setting. A developer can easily forget to add AsNoTracking, and that could mean serving outdated data to the user.
I would like to write some code in my BaseHub class that tells the context not to track data. If I change entity properties, SaveChanges should still update the data. Can this be achieved? It is hard to remember to add AsNoTracking every time I query from a hub method.
I would like to write some code in my BaseHub class that tells the context not to track data.
The default query tracking behavior is controlled by the ChangeTracker.QueryTrackingBehavior property, whose default value is TrackAll (i.e. tracking).
You can change it to NoTracking and then use AsTracking() for queries that need tracking. It's a matter of which are more commonly needed.
If I change entity properties, SaveChanges should still update the data.
This is not possible if the entity is not tracked.
If you actually want tracking queries with a "database wins" strategy, I'm afraid that's not currently possible in EF Core. I think EF6 ObjectContext services had an option for specifying a "client wins" vs. "database wins" strategy, but EF Core does not provide such control and always implements the "client wins" strategy.
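A minimal sketch of that global setting, assuming a context class named AppDbContext, an Order entity, and an OrderService used from the hub (all placeholder names): no-tracking becomes the context-wide default, and the update path opts back in with AsTracking() so SaveChanges still works.

using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

public class AppDbContext : DbContext
{
    public AppDbContext(DbContextOptions<AppDbContext> options) : base(options)
    {
        // Context-wide default: queries do not track entities, so reads inside a
        // long-lived hub always materialize fresh values from the database instead
        // of returning instances cached by the identity map.
        ChangeTracker.QueryTrackingBehavior = QueryTrackingBehavior.NoTracking;
    }

    public DbSet<Order> Orders { get; set; }
}

public class Order
{
    public int Id { get; set; }
    public string Status { get; set; }
}

public class OrderService
{
    private readonly AppDbContext _db;
    public OrderService(AppDbContext db) { _db = db; }

    // Read path (e.g. the periodic broadcast from the hub): not tracked, current data.
    public Task<Order[]> GetOrdersAsync() => _db.Orders.ToArrayAsync();

    // Write path: opt back into tracking so SaveChanges can persist the change.
    public async Task MarkShippedAsync(int id)
    {
        var order = await _db.Orders.AsTracking().SingleAsync(o => o.Id == id);
        order.Status = "Shipped";
        await _db.SaveChangesAsync();
    }
}

The same default can also be set where the context is configured, via optionsBuilder.UseQueryTrackingBehavior(QueryTrackingBehavior.NoTracking), which keeps the choice out of the context's constructor.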

Can EF 6 Data Annotations be different for POST than PUT or GET?

We are building a RESTful web service where the required fields for a POST sometimes differ from those for a PUT. For example, a field like CustomerSinceDate is allowed to be set on an insert, but not on an update. Is there a way to set that up with Data Annotations?
EntityFramework does not (and should not) know anything about your web service. It deals only with what rules exist in the persistence layer.
What you are looking for is validation.
So in your REST service, you should check whether CustomerSinceDate has been changed and the entity is being updated. If so, you should throw an Exception with an appropriate message for the consumer.
Here is an article on writing your own DataAnnotations, if you prefer using those:
http://msdn.microsoft.com/en-us/data/jj819164#attributes
Otherwise, take a look at this article on how to write your own custom validation: http://msdn.microsoft.com/en-us/data/gg193959.aspx
(in particular, the section on IValidatableObject).
Your rule could be formulated as (pseudo code)
//if object exists in db AND CustomerSinceDate has changed
DataAnnotations will get you a long way, but can be tedious to write if you are writing business logic that will never be reused anywhere else.
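As a concrete illustration of the pseudo code above, here is a hedged sketch of that rule enforced in the update path of the service layer; Customer and CustomerSinceDate come from the question, while CustomerService and CrmContext are placeholder names, and the wiring is EF6-style.

using System;
using System.ComponentModel.DataAnnotations;
using System.Data.Entity;

public class Customer
{
    public int Id { get; set; }
    public DateTime CustomerSinceDate { get; set; }
    public string Name { get; set; }
}

public class CrmContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }
}

public class CustomerService
{
    public void Update(Customer incoming)
    {
        using (var db = new CrmContext())
        {
            var existing = db.Customers.Find(incoming.Id);
            if (existing == null)
                throw new ValidationException("Customer does not exist.");

            // The rule from the pseudo code: the object exists in the db AND
            // CustomerSinceDate has changed, so reject the update.
            if (existing.CustomerSinceDate != incoming.CustomerSinceDate)
                throw new ValidationException(
                    "CustomerSinceDate may be set on insert but not changed on update.");

            db.Entry(existing).CurrentValues.SetValues(incoming);
            db.SaveChanges();
        }
    }
}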

Add or update in Entity Framework, complex deserialized objects

We're creating a Web API using Entity Framework in MVC 4. Our client wants to send complex objects containing related objects, both new and updated. The root object may be a new or an existing one, too. The client generates the primary keys; we're using Guids for that. So on the server we can't really tell whether we received an update to an existing object or a new one. What would be the best way to handle this situation? We need some sort of add-or-update functionality, and it's not yet clear to us how to proceed with Entity Framework for this.
EF doesn't have any built-in support for discovering changes in a detached object graph. You either have to include a field in every object describing whether the object is new, unmodified, updated, or deleted (you will also need similar bookkeeping to track changes in many-to-many relationships), or, if you don't use such a field, you have no other option than querying the database and comparing the current DB state with the data received from the client to find out what has changed.
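As an illustration of the second approach, here is a hedged sketch for the root entity only, written against EF6-style namespaces with placeholder Invoice/BillingContext names; a real implementation has to repeat this check for every object in the graph and handle many-to-many links the same way.

using System;
using System.Data.Entity;
using System.Linq;

public class Invoice
{
    public Guid Id { get; set; }          // client-generated key
    public string Description { get; set; }
}

public class BillingContext : DbContext
{
    public DbSet<Invoice> Invoices { get; set; }
}

public class InvoiceService
{
    // Decide Added vs Modified by asking the database whether the client-generated
    // Guid already exists. Child objects would need the same treatment.
    public void AddOrUpdate(Invoice incoming)
    {
        using (var db = new BillingContext())
        {
            bool exists = db.Invoices.Any(i => i.Id == incoming.Id);
            db.Entry(incoming).State = exists ? EntityState.Modified : EntityState.Added;
            db.SaveChanges();
        }
    }
}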

Is it possible to tell if an entity is tracked?

I'm using Entity Framework 4.1. I've implemented a base repository using lots of the examples online. My repository get methods take a bool parameter to decide whether to track the entities. Sometimes I want to load an entity and track it; other times, for some entities, I simply want to read them and display them (i.e. in a graph). In this situation there is never a need to edit, so I don't want the overhead of tracking them. Also, graph entities are sent to a Silverlight client, so the entities are disconnected from the context. Hence my Get methods can return a list of entities that are either tracked or not. This is achieved by dynamically creating the query as follows:
DbQuery<E> query = Context.Set<E>();
// Track the entities in the context?
if (!trackEntities)
{
    query = query.AsNoTracking();
}
However, I now want to enable the user to interact with the graph and edit it. This will not happen very often, so I still want to get some entities without tracking them but to have the ability to save them. To do this I simply attach them to the context and set the state as modified. Everything is working so far.
I am auditing any changes by overriding the SaveChanges method. As explained above, I may occasionally need to save modified entities that were disconnected. So to audit, I have to retrieve the current values from the database and then compare them to work out what was changed while disconnected. If the entity has been tracked, there is no need to get the old values, as I have access to them via the state manager. I'm not using self-tracking entities, as that is overkill for my requirements.
QUESTION: In my auditing method I simply want to know if the modified entity is tracked or not, i.e. do I need to go to the db and get the original values?
Cheers
DbContext.ChangeTracker.Entries (http://msdn.microsoft.com/en-us/library/gg679172(v=vs.103).aspx) returns DbEntityEntry objects for all tracked entities. DbEntityEntry has an Entity property that you can use to find out whether the entity is tracked. Something like:
var isTracked = ctx.ChangeTracker.Entries().Any(e => Object.ReferenceEquals(e.Entity, myEntity));
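Wrapped into a small helper on the context, so the overridden SaveChanges audit can decide whether a database round trip for the original values is needed, this could look roughly like the following sketch (MyContext is a placeholder name for your DbContext subclass):

using System.Data.Entity;
using System.Linq;

public class MyContext : DbContext
{
    // True when this exact instance is already in the change tracker, i.e. original
    // values are available locally and no database round trip is needed for auditing.
    public bool IsTracked(object entity)
    {
        return ChangeTracker.Entries()
                            .Any(e => ReferenceEquals(e.Entity, entity));
    }

    public override int SaveChanges()
    {
        // ...audit logic can call IsTracked(entity) here before deciding to reload
        //    original values from the database for disconnected entities...
        return base.SaveChanges();
    }
}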

Saving a doctrine2 entity to cache to speed up the page load

Let's say I have an entity called Product, and this entity is loaded every time a user hits the product information page. Usually I'd save the object in Zend_Cache (memcache) for an hour to avoid hitting the db for each request, but as far as I understand that's not possible with Doctrine2 entities because of the Proxy objects.
So my question is, how can I avoid loading the same entity from the database for each request?
[EDIT]
I tried using Doctrine Cache like this
$categoryService = App_Service_Container::getService('\App\Service\Category');
$cache = $categoryService->getEm()->getConfiguration()->getResultCacheImpl();
$apple = $cache->fetch('apple');
But I get the following error
Warning: require(App/Entity/Proxy/_CG_/App/Entity/Category.php) [function.require]: failed to open stream: No such file or directory in /opt/vhosts/app/price/library/Doctrine/Common/ClassLoader.php on line 163
The same happens with Zend_Cache: you can't serialize the entity because of the Proxy class.
You've got several options:
Use Doctrine's built-in result caching (see the sketch after this list)
Try just sticking the entity in memcache via Zend_Cache. When you pull it out, you may need to merge() the Product back into the EM so proxies can be dereferenced. If you fetch-join any associations you need to display the product info, and you're only doing reads, this should work fine.
Don't cache the entity at all. Cache whatever output you generate instead.
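For the first option, a minimal sketch using Doctrine's result cache; the DQL, lifetime and cache id are illustrative, and a result cache driver (e.g. memcache) must already be configured on the EntityManager:

// Option 1 above: the db is hit at most once per hour per product; subsequent
// requests are served the hydrated result from the configured result cache.
$query = $em->createQuery('SELECT p FROM App\Entity\Product p WHERE p.id = :id')
            ->setParameter('id', $productId)
            ->useResultCache(true, 3600, 'product_' . $productId);

$product = $query->getOneOrNullResult();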
EDIT: If you don't care about the hydration overhead, you're using MySQL, and your Products and associated tables don't change very often, you might prefer to just rely on the MySQL query cache. It's a fairly blunt instrument, but useful enough to mention.
You might want to try implementing __sleep or __wakeup methods for your entity class, as Doctrine 2 has special requirements and limitations concerning serialization/deserialization of entities (which is what happens when storing them in Zend_Cache).
There is this guidance.
General information about limitations including serialization.
I find this extremely strange, since I just messed around with this myself and didn't have any issues with the proxy object being stored in the cache. So I'm guessing your configuration is not set up 100% correctly?
If you find the issue with your configuration, then be very aware of what timdev said: you MUST merge() the object back into the EntityManager, or you will have weird bugs down the line.
A fourth option available to you is to retrieve the data as an array instead of an object, but then of course you lose all the functionality connected to your model, which might not be exactly what you wanted.
It seems to me more like a configuration error. Either the proxies have not been generated, or there is something wrong with the proxy directory and namespace.
Depending on your configuration, proxies can be generated either automatically or manually. Have your proxies indeed been generated under App/Entity/Proxy? Is that indeed the right directory?
FYI proxies can be manually generated by executing doctrine orm:generate-proxies <dest-dir>
Seconding what timdev says: Doctrine has built-in caching; you want to use it.
I also wonder from your question if you are experiencing any performance issues or if you are a victim of overly eager optimisation.