When a user unknowingly attempts to delete an entity that has children, the save fails because Breeze does not currently support cascading deletes, which is expected.
But a side effect of this action is that it detaches entities from the local cache. So from the user interface it appears that those entities have been deleted.
Is this expected behavior?
Is there a straightforward way of checking whether the entity has children (e.g. hasChildren()) and preventing the user from getting into this state on the client side?
Is this expected behavior?
Yes, it is. When you delete a row, logically it should disappear from the user's point of view, even though the save may fail. The failure can be handled in the fail callback of saveChanges.
Is there a straightforward way of checking whether the entity has children (e.g. hasChildren()) and preventing the user from getting into this state on the client side?
Yes, there is.
Let's say you have a parent "customer" and its children "orders". Your code might look something like this:
if (customer.orders().length > 0) {
    return 'Your message';
} else {
    return customer.entityAspect.setDeleted();
}
However, I agree with PW Kad: I would go for the database option and sit back.
If you are using SQL Server, you might want to set a cascade delete rule there.
For the customer-orders example, expand the "Orders" table; you'll find the FK constraint under "Keys". Right-click it and choose Modify; a dialog will appear. Expand "INSERT And UPDATE Specification" and change the "Delete Rule" to "Cascade". You can set the update rule there as well.
I have a collection of items, and some of them may or may not be deletable, depending on some preconditions. If a user wants to delete a single resource (DELETE /collection/1) and there are external dependencies on that resource, the server will return an error. But what should happen if the user wants to delete the entire collection (DELETE /collection)?
Should all the resources which can be deleted be deleted and the server return a 2xx, or should the server leave everything intact and return a 4xx? Which would be the expected behavior?
As a REST API consumer, I'd expect the operation to be atomic and perhaps get back a 409 Conflict with details if one of the deletes fails. Plus, the DELETE method is theoretically idempotent, as @jbarrueta pointed out.
Now, if undeletable resources are a normal occurrence in your use case and happen frequently, you may want to stray from the norm a little: delete everything that can be deleted and return something like a 206 Partial Content (I don't know if that's legal for DELETE, though) with details about the undeleted resources.
However, if you need to manage error cases finely, you might be better off sending separate DELETE commands.
I think the proper result is 204 No Content on success and 409 Conflict on failure because of the dependencies (as others pointed out). I support atomicity as well.
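To make that concrete, here is a minimal sketch using JAX-RS (the framework choice and the CollectionResource/ItemRepository names are my assumptions, not from the question): the collection delete is atomic, answering 204 No Content on success and 409 Conflict when dependencies block it.

import javax.inject.Inject;
import javax.ws.rs.DELETE;
import javax.ws.rs.Path;
import javax.ws.rs.core.Response;

@Path("/collection")
public class CollectionResource {

    @Inject
    private ItemRepository items; // hypothetical repository abstraction

    @DELETE
    public Response deleteAll() {
        if (items.anyHaveExternalDependencies()) {
            // Leave everything intact and report why the delete was refused.
            return Response.status(Response.Status.CONFLICT)
                           .entity("Some items are still referenced and cannot be deleted.")
                           .build();
        }
        items.deleteAll(); // performed as a single, all-or-nothing unit of work
        return Response.noContent().build(); // 204 No Content
    }
}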
I think you are thinking about REST as SOAP/RPC, which it is clearly not. Your REST service MUST fulfill the uniform interface constraint, which includes the HATEOAS interface constraint, so you MUST send hyperlinks to the client.
If we are talking about a simple link, like DELETE /collection, then you send the link to the client only if the resource state transition it represents is available from the current resource state. So if you cannot delete the collection because of the dependencies, you don't send a link for this transition, because it is not possible.
If it is a templated link, then you have to attach the "removable" property to the items, and set the checkboxes to disabled if it is false.
This way a conflict happens only when the client got the link from a representation of a previous (stale) resource state, and in that case you must update the client state by querying the server again with GET.
Another possible solution (of course, in combination with the previous ones) is to show the link and automatically remove the dependencies.
I guess you can use PATCH for bulk updates, which includes bulk removal, so that can be another solution as well.
I'd say it depends on your domain (although I'd rather use DELETE /collection/all instead of DELETE /collection).
When you use delete-all but some items can't be deleted, it depends on your domain: if you are doing the delete-all to free up resources, and your business process suffers if you don't, then it makes sense to delete what can be deleted and put the rest into a retry queue. In that case the response should be OK.
Situations could also arise where there are two distinct operations:
Clean Up - only delete unused
Delete All - delete all
In either situation I'd rather use a specific URL than a plain DELETE on the root URL:
for Clean Up - DELETE /collection/unused
for Delete All - DELETE /collection/all
I am wondering what the best practice would be for updating a record using JPA. I have currently devised my own pattern, but I suspect it is by no means the best practice. What I do is essentially look to see if the record is in the db; if I don't find it, I call the entityManager.persist(object) method, and if it does exist, I call the entityManager.merge(object) method.
The reason I ask is that I found out that the merge method looks to see if the record is already in the database, and if it is not, it proceeds to add it; if it is, it makes the necessary changes. Also, do you need to nestle the merge call in getTransaction().begin() and getTransaction().commit()? Here is what I have so far...
try {
    // "emf" must actually be an EntityManager here; find(), merge() and persist() are EntityManager methods
    launchRet = emf.find(QuickLaunch.class, launch.getQuickLaunchId());
    if (launchRet != null) {
        launchRet = emf.merge(launch);
    } else {
        emf.getTransaction().begin();
        emf.persist(launch);
        emf.getTransaction().commit();
    }
} catch (Exception e) {
    // handle the failure
}
If the entity you're trying to save already has an ID, then it must exist in the database. If it doesn't exist, you probably don't want to blindly recreate it, because it means that someone else has deleted the entity, and updating it doesn't make much sense.
The merge() method persists an entity that is not persistent yet (doesn't have an ID or version), and updates the entity if it is persistent. You thus don't need to do anything other than calling merge() (and returning the value returned by this call to merge()).
A transaction is a functional atomic unit of work. It should be demarcated at a higher level (in the service layer). For example, transferring money from one account to another needs both account updates to be done in the same transaction, to make sure both changes either succeed or fail together. Removing money from one account and failing to add it to the other would be a major bug.
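As a minimal sketch of that advice (the service class, method name, and exception handling are my assumptions, not code from the question), the whole save/update boils down to a single merge() call demarcated by a transaction at the service layer:

import javax.persistence.EntityManager;

public class QuickLaunchService {

    // Hypothetical service-layer method; assumes an application-managed EntityManager
    // with resource-local transactions.
    public QuickLaunch save(EntityManager em, QuickLaunch launch) {
        em.getTransaction().begin();
        try {
            // merge() inserts the entity if it is new and updates it if it already exists;
            // it returns the managed copy, which is what callers should keep using.
            QuickLaunch managed = em.merge(launch);
            em.getTransaction().commit();
            return managed;
        } catch (RuntimeException e) {
            if (em.getTransaction().isActive()) {
                em.getTransaction().rollback();
            }
            throw e;
        }
    }
}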
I am a bit confused by this message sent by Xcode:
Setting the No Action Delete Rule on Passenger.taxi is an advanced setting
These are the specifications
When I delete a Taxi instance, it should also delete all its Passenger instances. Current Delete Rule: Cascade
When I delete a Passenger instance, it should just delete that particular instance. Even if it is the last Passenger instance of a Taxi instance. A Taxi can exist without Passengers (1:mc). Current Delete Rule: No Action
What delete rule do I need here to meet the requirements?
Set the delete rule to Nullify, which simply nils out the link. "No Action" is a bit weird in that you can think of it as leaving a pointer to something that no longer exists (I'm not sure if that's what it would really do).
Say that I have a User table in my read database (using SQL Server). In a regular read/write database I can put a unique index on the table to make sure that two users aren't added to the table with the same email address.
So if I try to add a user with an email address that already exists in my table for a different user, SQL Server will throw an exception back.
In CQRS I can't do that, since if I decouple the write to my read database from the domain model by putting it on an asynchronous queue, I won't get the exception thrown back to me. I will return "OK" to the UI and the user will think that he has been added to the database, when in fact he will never be added to the read database.
I can do a search in the read database to check whether there is already a user with that email address, and if there is one, throw an exception back to the UI. But if two users press the save button at the same time, I will do two checks against the database, see that there isn't any user with that email address, and send back that it's okay for both. I put both on my queue and later one will fail (by hitting the unique index).
Am I supposed to load all users from my event source (it's a SQL Server) and then do the check on that collection, to see if I have a user that already has this email address? That sounds a bit crazy to me...
How have you people solved it?
The only way I can see is to not use an asynchronous queue but a synchronous one, but that will hurt performance really badly, especially when you have many "read storages" to write to...
Need some help here...
Searching for "CQRS set-based validation" will give you solutions to this issue.
Greg Young posted about the business impact of embracing eventual consistency: http://codebetter.com/gregyoung/2010/08/12/eventual-consistency-and-set-validation/
Jérémie Chassaing posted about discovering missing aggregate roots in the domain: http://thinkbeforecoding.com/post/2009/10/28/Uniqueness-validation-in-CQRS-Architecture
Related stack overflow questions:
How to handle set based consistency validation in CQRS?
CQRS Validation & uniqueness
I'm using the Entity Framework to model a simple parent-child relationship between a document and its pages. The following code is supposed to (in this order):
make a few property updates to the document
delete any of the document's existing pages
insert a new list of pages passed into the method.
The new pages do have the same keys as the deleted pages because there is an index that consists of the document number and then the page number (1..n).
This code works. However, when I remove the first call to SaveChanges, it fails with:
System.Data.SqlClient.SqlException: Cannot insert duplicate key row in object
'dbo.DocPages' with unique index 'IX_DocPages'.
Here is the working code with two calls to SaveChanges:
Document doc = _docRepository.GetDocumentByRepositoryDocKey(repository.Repository_ID, repositoryDocKey);

if (doc == null) {
    doc = new Document();
    _docRepository.Add(doc);
}

_fieldSetter.SetDocumentFields(doc, fieldValues);

List<DocPage> pagesToDelete = (from p in doc.DocPages
                               select p).ToList();

foreach (DocPage page in pagesToDelete) {
    _docRepository.DeletePage(page);
}

_docRepository.GetUnitOfWork().SaveChanges(); // IF WE TAKE THIS OUT IT FAILS

int pageNo = 0;

foreach (ConcordanceDatabase.PageFile pageFile in pageList) {
    ++pageNo;
    DocPage newPage = new DocPage();
    newPage.PageNumber = pageNo;
    newPage.ImageRelativePath = pageFile.Filespec;
    doc.DocPages.Add(newPage);
}

_docRepository.GetUnitOfWork().SaveChanges(); // WHY CAN'T THIS BE THE ONLY CALL TO SaveChanges
If I leave the code as written, EF creates two transactions -- one for each call to SaveChanges. The first updates the document and deletes any existing pages. The second transaction inserts the new pages. I examined the SQL trace and that is what I see.
However, if I remove the first call to SaveChanges (because I'd like the whole thing to run in a single transaction), EF mysteriously does not do the deletes at all but rather generates only the inserts, which results in the duplicate key error. I wouldn't think that waiting to call SaveChanges should matter here.
Incidentally, the call to _docRepository.DeletePage(page) does an objectContext.DeleteObject(page). Can anyone explain this behavior? Thanks.
I think a more likely explanation is that EF does do the deletes, but probably it does them after the inserts, so you end up passing through an invalid state.
Unfortunately, you don't have low-level control over the order in which DbCommands are executed in the database.
So you need two SaveChanges() calls.
One option is to create a wrapping TransactionScope.
Then you can call SaveChanges() twice and it all happens inside the same transaction.
See this post for more information on the related techniques
Hope this helps
Alex
Thank you Alex. This is very interesting. I did indeed decide to wrap the whole thing up in a transaction scope, and it worked fine with the two SaveChanges() calls -- which, as you point out, appear to be needed due to the primary key conflict between the deletes and the subsequent inserts. A new issue now arises based on the article you linked to. It advises calling SaveChanges(false) -- instructing EF to hold onto its changes because the outer transaction scope will actually control whether those changes ever make it to the database. Once the controlling code calls scope.Complete(), the pattern is to then call EF's context.AcceptAllChanges(). But I think this will be problematic for me because I'm forced to call SaveChanges TWICE for the problem originally described. If both of those calls specify false for the accept-changes parameter, then I suspect the second call will end up repeating the SQL from the first. I fear I may be in a Catch-22.