Modeling editable lists in DTOs used by REST services

Say you have the following Contact DTO. The Address/PhoneNumber/EmailAddress/WebSiteAddress classes are simple DTOs as well (just data, no behavior):
public class Contact
{
    public Address[] Addresses { get; set; }
    public PhoneNumber[] PhoneNumbers { get; set; }
    public EmailAddress[] EmailAddresses { get; set; }
    public WebSiteAddress[] WebSiteAddresses { get; set; }
}
How should I model DTOs to allow implementing the following behavior?
The client can submit a request that will:
add a phone number, update two phone numbers, and delete two;
add two email addresses, update one email address, and delete three;
add three website addresses, update two website addresses, and delete two. You get the idea.
One option is to add an Action attribute to each Address / PhoneNumber / EmailAddress / WebSiteAddress.
Then the code to update addresses would look like this:
var addressesToUpdate = serviceContact.Addresses.Where(x => x.AddressAction.ToUpper() == "UPDATE");
var addressesToAdd = serviceContact.Addresses.Where(x => x.AddressAction.ToUpper() == "ADD");
var addressesToDelete = serviceContact.Addresses.Where(x => x.AddressAction.ToUpper() == "DELETE").Select(x => x.AddressId);
Repeating this for all other lists will probably create duplication.
My question is:
How should I model service DTOs with updatable lists while avoiding duplication?
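One way to cut that duplication, if you keep the Action-attribute approach, is to move the filtering into a generic helper. A sketch, assuming a hypothetical IActionItem interface that each list-item DTO (Address, PhoneNumber, ...) would implement:
using System;
using System.Collections.Generic;
using System.Linq;

public interface IActionItem
{
    string Action { get; }   // "ADD", "UPDATE" or "DELETE"
    int Id { get; }
}

public static class ActionItemExtensions
{
    public static IEnumerable<T> ToAdd<T>(this IEnumerable<T> items) where T : IActionItem
    {
        return items.Where(x => "ADD".Equals(x.Action, StringComparison.OrdinalIgnoreCase));
    }

    public static IEnumerable<T> ToUpdate<T>(this IEnumerable<T> items) where T : IActionItem
    {
        return items.Where(x => "UPDATE".Equals(x.Action, StringComparison.OrdinalIgnoreCase));
    }

    public static IEnumerable<int> IdsToDelete<T>(this IEnumerable<T> items) where T : IActionItem
    {
        return items.Where(x => "DELETE".Equals(x.Action, StringComparison.OrdinalIgnoreCase))
                    .Select(x => x.Id);
    }
}
With that in place, each list reduces to calls like serviceContact.Addresses.ToAdd(), serviceContact.Addresses.ToUpdate() and serviceContact.Addresses.IdsToDelete(). That said, the answers below avoid per-item actions altogether.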

Generally I'll try to keep my writes idempotent, which means a call should have the same side-effect (i.e. end result) whether you start with no records or all of them (i.e. Store or Update).
Basically this means that the client sends the complete state, i.e.:
entries that don't exist => get created,
entities that already exist => get updated,
whilst entities that aren't in the request DTO => get deleted.
OrmLite's db.Save() command has nice support for this: it detects whether a record already exists and issues an UPDATE if so, otherwise an INSERT.
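For a single child list, that reconciliation might look like the sketch below; LoadAddressesFor, SaveAddress and DeleteAddress are hypothetical stand-ins for your actual data-access calls:
// The client sends the complete list of addresses for a contact; the server
// diffs it against what's stored and converges on the state that was sent.
public void StoreAddresses(int contactId, IList<Address> requested)
{
    var existing = LoadAddressesFor(contactId);    // hypothetical helper
    var requestedIds = new HashSet<int>(requested.Select(a => a.AddressId));

    // Anything stored but absent from the request gets deleted.
    foreach (var address in existing.Where(a => !requestedIds.Contains(a.AddressId)))
        DeleteAddress(address.AddressId);          // hypothetical helper

    // Everything in the request is upserted: INSERT if new, UPDATE otherwise
    // (this is what OrmLite's db.Save() does per record).
    foreach (var address in requested)
        SaveAddress(contactId, address);           // hypothetical helper
}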

You can use ETags with conditional requests instead of providing the complete state. Use the ETag as a version of the list and change it each time the list changes. On the client side, send the ETag along with the update request in the If-Match HTTP header, and be prepared to receive a 412 Precondition Failed status if the list changed while the request was in flight.
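On the client, a conditional update with HttpClient might look like this sketch (the URL is a placeholder, and the ETag value must be the quoted string the server returned):
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

public static async Task UpdatePhoneNumbersAsync(HttpClient client, string etag, HttpContent body)
{
    var request = new HttpRequestMessage(HttpMethod.Put, "/contacts/1/phonenumbers")
    {
        Content = body
    };
    // Only apply the update if the server's list is still at the version we read.
    request.Headers.IfMatch.Add(new EntityTagHeaderValue(etag)); // e.g. etag == "\"v42\""
    var response = await client.SendAsync(request);
    if (response.StatusCode == HttpStatusCode.PreconditionFailed) // 412: list changed underneath us
    {
        // Re-fetch the list (and its new ETag), re-apply the edits, and retry.
    }
}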


EF Core update untracked entity with many to many collection

I'm facing a problem with EF Core and collections. I have persons who read books; books can be read by multiple people, and people can read multiple books (a many-to-many relationship). EF generates the 3 tables Books, Persons and BookPersons.
When I insert new persons with a set of books they read, there is no problem. But when I recreate one of the persons outside the db context (same id, but a mutated collection of read books) and try to save it, it fails on the many-to-many relation, because the relation between the existing entities already exists (unique constraint violation).
I've tried:
to attach the book collection to the context (same error),
to attach the person (no error, but no change either),
to change only the person details, not the collection (the untracked entity is saved, but the books read are not).
I'm not very fond of managing the BookPersons table or doing queries first to get existing entities. My goal is to update a person and its read books in one go. I do know how to write it in SQL, but EF seems to be quite a challenge.
If you want to view my code, visit: https://github.com/CasperCBroeren/EfCollectionsProblem/blob/master/Program.cs
Thanks for explaining what I'm missing or not getting
I would create a PersonBook model to handle that.
Book model
public class Book
{
    [DatabaseGenerated(DatabaseGeneratedOption.None)]
    public int Id { get; set; }

    public string Title { get; set; }

    [ForeignKey("BookId")]
    public virtual ICollection<PersonBook> PersonBooks { get; set; }
}
Person Model
public class Person
{
    [DatabaseGenerated(DatabaseGeneratedOption.None)]
    public int Id { get; set; }

    public string Name { get; set; }

    [ForeignKey("PersonId")]
    public virtual ICollection<PersonBook> PersonBooks { get; set; }
}
PersonBook Model
public class PersonBook
{
    public int Id { get; set; }
    public int PersonId { get; set; }
    public int BookId { get; set; }

    public virtual Person Person { get; set; }
    public virtual Book Book { get; set; }
}
Then you can get the IDs of all books read by a person using:
var personId = 15; // whatever you want
db.PersonBooks.Where(a => a.PersonId == personId);
or the IDs of all persons who read a given book:
var bookId = 11; // whatever you want
db.PersonBooks.Where(a => a.BookId == bookId);
Note:
you can reach the Book entity by using, for example:
db.PersonBooks.Where(a => a.PersonId == personId).FirstOrDefault().Book;
A key factor in EF is dealing with object references. Any reference that a DbContext isn't tracking will be treated as a new entity. The Update method on DbSets should actually be avoided as it can lead to inefficient and potentially dangerous data changes.
This option: "to attach the book collection to the context" works with singular references, but doesn't work with collections. The trouble is that what you want to say is "add any book the person isn't already associated with" however, the DbContext has no knowledge of what books that person is already associated with unless you fetch that information first.
... or doing queries first to get existing entities.
This is actually what you should do in most cases. In the case of a simple console application to test out ideas and learn how EF works it may look like overkill, but in real-world systems this is the recommended approach for a number of reasons.
Keeping payloads small. Take an API or web site where you allow a user to associate books with people. Sending entire representations of people, their books, etc. back and forth between server and client can get expensive in terms of data size. If I have an API that allows me to associate books with a person, and those books already reflect known data state (they already exist in the db), then all I need to pass are IDs. When passing data to views, the idea is to pass only what the view needs rather than entire entity graphs.
Keeping payloads safe. Passing entire entities around and using methods like Update can make your systems prone to tampering. Update will update all columns in an entity, whether or not you expect or allow them to change. By minimizing the data coming back, you ensure only the expected details can change and, by definition, validate that the provided values are safe.
For example, say I have a service that updates the books associated with a person. In the UI I had loaded that John had "Jungle Book (ID: 1)", and I want to update the associations so John now has "Jungle Book" and "Tom Sawyer". While my UI might not allow it, it is certainly possible for the client browser to intercept the call to my controller / web service and, seeing a Book { ID: 1, Name: "Jungle Book" }, tamper with that data to send Book { ID: 1, Name: "Hitchhiker's Guide to the Galaxy" }. If you had solved this issue in a way that attached entities and called Update, the consequence of this tampering would be that an attacker could rename a book, with a flow-on effect on every Person that references Book ID #1.
Instead if I want to have something like an "UpdateBooks" method that can reassign books for a person, I would have a method something like this:
private void UpdateBooks(int personId, params int[] bookIds)
{
    using (var context = new AppDbContext())
    {
        var person = context.Persons
            .Include(x => x.Books)
            .Single(x => x.PersonId == personId);

        var existingBookIds = person.Books.Select(x => x.BookId).ToList();
        var bookIdsToAdd = bookIds.Except(existingBookIds).ToList();
        var bookIdsToRemove = existingBookIds.Except(bookIds).ToList();

        foreach (var bookId in bookIdsToRemove)
        {
            var book = person.Books.Single(x => x.BookId == bookId);
            person.Books.Remove(book);
        }

        if (bookIdsToAdd.Any())
        {
            var booksToAdd = context.Books
                .Where(x => bookIdsToAdd.Contains(x.BookId))
                .ToList();
            if (booksToAdd.Count != bookIdsToAdd.Count)
            {
                // Handle scenario where one or more book IDs provided weren't found.
            }
            person.Books.AddRange(booksToAdd); // assumes Books is declared as List<Book>
        }

        context.SaveChanges();
    }
}
This assumes that EF is handling PersonBooks entirely behind the scenes, where PersonBook consists of just PersonId and BookId, so that Person can have a collection of Books rather than PersonBooks.
This example runs up to two SELECT queries: one to get the Person and its current books, and one to get any new books if any need to be added. There is no risk of tampering with books, and we can easily validate scenarios such as passing an unknown book ID. The temptation might be to avoid querying, seeing it as expensive, but in most cases EF can provide data quite quickly and efficiently. It is the exception rather than the norm that you might need to get creative to work around performance bottlenecks with data access.
A third consideration is to focus on keeping operations atomic, especially for things like web services / web applications. This doesn't apply when just getting familiar with the workings of EF, entities, and such, but it is a consideration for more real-world applications. Rather than having more complex methods like UpdateBooks(), using actions like "AddBook" and "RemoveBook" can keep operations faster and simpler. One argument for a larger method is that you might expect all of the operations to be committed (or not) as one unit, such as UpdateBooks being called as part of one big "SavePerson" method reflecting changes to the person and all of its associated details. In those cases having atomic actions is still recommended, except that instead of updating data state they can update server (session) state, waiting for a "Save" call to come through to persist the changes as one operation, or discarding them. Add/Remove methods can still provide the validation checks, ultimately setting things up for entities to be loaded, modified, and persisted; a sketch of one such action follows.
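A minimal sketch of one such atomic action, under the same assumptions as the UpdateBooks example above (AppDbContext, a Books collection on Person):
private void AddBook(int personId, int bookId)
{
    using (var context = new AppDbContext())
    {
        var person = context.Persons
            .Include(x => x.Books)
            .Single(x => x.PersonId == personId);

        // Idempotent: adding an already-associated book is a no-op.
        if (person.Books.Any(x => x.BookId == bookId))
            return;

        var book = context.Books.SingleOrDefault(x => x.BookId == bookId);
        if (book == null)
            throw new ArgumentException($"Unknown book ID {bookId}.");

        person.Books.Add(book);
        context.SaveChanges();
    }
}
A matching RemoveBook would do the inverse: load the person with books, find the book in the collection, remove it, and save.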

Adding Navigation property breaks breeze client-side mappings (but not Server Side EF6)

I have an application that I developed standalone and now am trying to integrate into a much larger model. Currently, on the server side, there are 11 tables and an average of three navigation properties per table. This is working well and stable.
The larger model has 55 entities and 180+ relationships, and includes most of my model (less the relationships to tables in the larger model). Once integrated, a very strange thing happens: the server sends the same data and the same number of entities is returned, but the exportEntities function returns a string of about 150 KB (rather than the 1.48 MB it returned before), and all queries show a tenth of the data they showed before.
I followed the troubleshooting information on the Breeze website. I looked through the Breeze metadata, and the entities and relationships seem to be defined correctly. I looked at the data that was returned, and nine out of ten entities did not appear as an object but as a function: function (){return e.refMap[t]} which, when I expand it, has an 'arguments' property: Exception: TypeError: 'caller', 'callee', and 'arguments' properties may not be accessed on strict mode functions or the arguments objects for calls to them.
For reference, here are the two entities involved in the breaking change.
The Repayments Entity
public class Repayment
{
    [Key, Column(Order = 0)]
    public int DistrictId { get; set; }

    [Key, Column(Order = 1)]
    public int RepaymentId { get; set; }

    public int ClientId { get; set; }
    public int SeasonId { get; set; }
    ...

    #region Navigation Properties

    [InverseProperty("Repayments")]
    [ForeignKey("DistrictId")]
    public virtual District District { get; set; }

    // The lines below are the ones I added to break the results.
    // If I remove them again, the results are correct again.
    [InverseProperty("Repayments")]
    [ForeignKey("DistrictId,ClientId")]
    public virtual Client Client { get; set; }

    [InverseProperty("Repayments")]
    [ForeignKey("DistrictId,SeasonId,ClientId")]
    public virtual SeasonClient SeasonClient { get; set; }
The Client Entity
public class Client : IClient
{
    [Key, Column(Order = 0)]
    public int DistrictId { get; set; }

    [Key, Column(Order = 1)]
    public int ClientId { get; set; }

    ....

    // This line was in the original (working) model
    [InverseProperty("Client")]
    public virtual ICollection<Repayment> Repayments { get; set; }

    ....
}
The relationship that I restored was simply the inverse of a relationship that was already there, which is one of the really weird things about it. I'm sure I'm doing something terribly wrong, but I'm not even sure at this point what information might be helpful in debugging this.
For defining foreign keys and inverse properties, I assume I must use either data annotations or the Fluent API, even if the tables follow all the EF conventions. Is either one better than the other? Is it necessary to consistently choose one approach and stay with it? Does the error above provide any insight into what I might be doing wrong? Is there any other information I could post that might be helpful?
Breeze is an excellent framework and has the potential to really increase our reach providing assistance to small farmers in rural East Africa, and I'd love to get this prototype working.
Thanks
Ok, some of what you are describing can be explained by Breeze's default behavior of compressing the payload of any query results that return multiple instances of the same entity. If you are using something like the default 'json.net' assembly for serialization, then each entity is sent with an extra '$id' property, and if the same entity is seen again it gets serialized via a simple '$ref' property with the value of the previously mentioned '$id'.
On the Breeze client during deserialization these '$refs' get resolved back into full entities. However, because the order in which deserialization is performed may not be the same as the order in which serialization was performed, Breeze internally creates deferred closure functions (with no arguments) that allow the deferred resolution of the compressed results regardless of the order of serialization. This is the
function (){return e.refMap[t]}
that you are seeing.
If you are seeing this value as part of the actual top-level query result, then we have a bug; but if you are seeing it while debugging the results returned from your server, before they have been handed back to the calling function, then this is completely expected (especially if you are viewing the contents of the closure before it should be executed).
So, a couple of questions and suggestions:
Are you actually seeing an error processing the result of your query, or are you simply surprised that the results are so small? If it's just a size issue, check whether you can identify data that should have been sent to the client and is missing. It is possible that the reference compression is simply very effective in your case.
Take a look at the 'raw' data returned from your web service. It should look something like this, with '$id' and '$ref' properties:
[{
    '$id': '1',
    'Name': 'James',
    'BirthDate': '1983-03-08T00:00Z',
},
{
    '$ref': '1'
}]
If so, then look at the data and make sure that a '$id' exists corresponding to each of your '$refs'. If not, something is wrong with your server-side serialization code. If the data does not look like this, then please post back with a small example of what the 'raw' data does look like.
After looking at your Gist, I think I see the issue. Your metadata is out of sync with the actual results returned by your query. In particular, if you look for the '$id' value of "17" in your actual results, you'll notice that it is first found in the 'Client' property of the 'Repayment' type, but your metadata doesn't have a 'Client' navigation property defined for the 'Repayment' type (there is a 'ClientId'). My guess is that you are reusing an 'older' version of your metadata.
The reason this results in incomplete data is that once Breeze determines that it is deserializing an 'entity' (i.e. a json object that has a $type property mapping to an actual entityType), it only attempts to deserialize the 'known' properties of that type, i.e. those found in the metadata. In your case, the 'Client' navigation property on the 'Repayment' type was never being deserialized, and any refs to the '$id' defined there are therefore not available.

Delete a child from an aggregate root

I have a common Repository with Add, Update, Delete.
We'll name it CustomerRepository.
I have an entity (POCO) named Customer, which is an aggregate root, with Addresses.
public class Customer
{
    public ICollection<Address> Addresses { get; set; }
}
I am in a detached Entity Framework 5 scenario.
Now, let's say that after getting the customer, I choose to delete a client address.
I submit the Customer aggregate root to the repository via the Update method.
How can I save the modifications made to the addresses?
If the address id is 0, I can suppose that the address is new.
For the rest of the addresses, I can choose to attach them all and mark them as updated no matter what.
For deleted addresses I can see no workaround...
We could say this solution is incomplete and inefficient.
So how should the updates of aggregate root children be done?
Do I have to complete the CustomerRepository with methods like AddAddress, UpdateAddress, DeleteAddress?
It seems like that would kind of break the pattern though...
Do I put a persistence state on each POCO:
public enum PersistenceState
{
    Unchanged,
    New,
    Updated,
    Deleted
}
And then have only one method in my CustomerRepository, Save?
In this case it seems that I am reinventing the Entity "non-POCO" objects and adding data-access-related attributes to a business object...
First, you should keep your repository with Add, Update, and Delete methods, although I personally prefer Add, an indexer setter, and Remove, so that the repository looks like an in-memory collection to the application code.
Secondly, the repository should be responsible for tracking persistence states. I don't even clutter up my domain objects with
object ID { get; }
like some people do. Instead, my repositories look like this:
public class ConcreteRepository : List<AggregateRootDataModel>, IAggregateRootRepository
The AggregateRootDataModel class is what I use to track the IDs of my in-memory objects as well as any persistence information. In your case, I would put a property of
List<AddressDataModel> Addresses { get; }
on my CustomerDataModel class which would also hold the Customer domain object as well as the database ID for the customer. Then, when a customer is updated, I would have code like:
public class ConcreteRepository : List<AggregateRootDataModel>, IAggregateRootRepository
{
    public Customer this[int index]
    {
        set
        {
            // Look up the data model.
            AggregateRootDataModel model = (from AggregateRootDataModel dm in this
                                            where dm.Customer == value
                                            select dm).SingleOrDefault();
            // Inside the setter for this property, run your comparison
            // and mark addresses as needing to be added, updated, or deleted.
            model.Customer = value;
            SaveModel(model); // Run your EF code to save the model back to the database.
        }
    }
}
The main caveat with this approach is that your domain model must be a reference type, and you shouldn't override GetHashCode(). The main reason is that when you perform the lookup for the matching data model, the hash code can't depend on the values of any changeable properties: it needs to remain the same even if the application code has modified property values on the domain model instance. Using this approach, the application code becomes:
IAggregateRootRepository rep = new ConcreteRepository([arguments that load the repository from the db]);
Customer customer = rep[0]; //or however you choose to select your Customer.
customer.Addresses = newAddresses; //change the addresses
rep[0] = customer;
The easy way is using Self-Tracking Entities (see What is the purpose of self tracking entities?). I don't like it, because tracking is a different responsibility.
The hard way: you take the original collection and you compare :-/ (see Update relationships when saving changes of EF4 POCO objects).
Another way may be event tracking?
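A sketch of that comparison approach in the detached EF 5 scenario; it assumes Customer and Address expose an Id property, that the context exposes an Addresses set, and that the repository holds the DbContext in a context field:
public void Update(Customer modified)
{
    // Load the stored aggregate so EF can track what changed.
    var stored = context.Customers
        .Include(c => c.Addresses)
        .Single(c => c.Id == modified.Id);

    var keptIds = new HashSet<int>(modified.Addresses.Select(a => a.Id));

    // Children in the database but missing from the submitted aggregate are deletes.
    foreach (var address in stored.Addresses.Where(a => !keptIds.Contains(a.Id)).ToList())
        context.Addresses.Remove(address);

    // Id == 0 means new; anything else is an update of an existing child.
    foreach (var address in modified.Addresses)
    {
        if (address.Id == 0)
            stored.Addresses.Add(address);
        else
            context.Entry(stored.Addresses.Single(a => a.Id == address.Id))
                   .CurrentValues.SetValues(address);
    }

    context.SaveChanges();
}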

How to prevent cyclic loading of related entities in Entity Framework Code First

I'm new to Entity Framework and am trying to learn how to use Code First to load entities from the database.
My model contains a user:
public class User
{
    public int UserID { get; set; }

    [Required]
    public string Name { get; set; }

    // Navigation Properties
    public virtual ICollection<AuditEntry> AuditEntries { get; set; }
}
Each user can have a set of audit entries each of which contains a simple message:
public class AuditEntry
{
    public int AuditEntryID { get; set; }

    [Required]
    public string Message { get; set; }

    // Navigation Properties
    public int UserID { get; set; }
    public virtual User User { get; set; }
}
I have a DbContext which just exposes the two tables:
public DbSet<User> Users { get; set; }
public DbSet<AuditEntry> AuditEntries { get; set; }
What I want to do is load a list of AuditEntry objects containing the message and the related User object containing the UserID and Name properties.
List<AuditEntry> auditEntries = db.AuditEntries.ToList();
Because I have my navigation properties marked as virtual and I haven't disabled lazy loading, I get an infinitely deep object graph (each AuditEntry has a User object, which contains a list of AuditEntries, each of which contains a User object, which contains a list of AuditEntries, and so on).
This is no good if I then want to serialize the object (for example to send as the result in a Web API).
I've tried turning off lazy loading (either by removing the virtual keywords from the navigation properties in the model, or by adding this.Configuration.LazyLoadingEnabled = false; to my DbContext). As expected, this results in a flat list of AuditEntry objects with User set to null.
With lazy loading off, I've tried to eager load the User like so:
var auditentries = db.AuditEntries.Include(a => a.User);
but this results in the same deep / cyclic result as before.
How can I load one level deep (e.g. include the user's ID and name) without also loading back-references / following navigation properties back to the original object and creating a cycle?
After much hacking, I've come up with the following potential solution using a dynamic return type and projection in my LINQ query:
public dynamic GetAuditEntries()
{
    var result = from a in db.AuditEntries
                 select new
                 {
                     a.AuditEntryID,
                     a.Message,
                     User = new
                     {
                         a.User.UserID,
                         a.User.Username
                     }
                 };
    return result;
}
This produces (internally) the following SQL, which seems sensible:
SELECT
    [Extent1].[AuditEntryID] AS [AuditEntryID],
    [Extent1].[Message] AS [Message],
    [Extent1].[UserID] AS [UserID],
    [Extent2].[Username] AS [Username]
FROM [dbo].[AuditEntries] AS [Extent1]
INNER JOIN [dbo].[Users] AS [Extent2] ON [Extent1].[UserID] = [Extent2].[UserID]
This produces the results that I'm after, but it seems a bit long-winded (especially for real-life models that would be significantly more complex than this example), and I question the impact this will have on performance.
Advantages
This gives me a lot of flexibility over the exact contents of my returned object. Since I generally do most of my UI interaction / templating on the client side, I frequently find myself having to create multiple versions of my model objects. I generally need some granularity over which users can see which properties (e.g. I might not want to send every user's email address to a low-privilege user's browser in an AJAX request).
It allows Entity Framework to intelligently build the query and select only the fields that I have chosen to project. For example, inside each top-level AuditEntry object, I want to see User.UserID and User.Username but not User.AuditEntries.
Disadvantages
The returned type from my Web API is no longer strongly typed so I couldn't create a strongly typed MVC view based on this API. As it happens this is not a problem for my particular case.
Projecting manually in this way from a large / complex model could result in a lot of code, seems like a lot of work and has the potential to introduce errors in the API. This would have to be carefully tested.
The API method becomes tightly coupled with the structure of the model and since this is no longer fully automated based on my POCO classes, any changes made to the model would have to be reflected in the code that loads them.
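If strong typing matters, the same projection can target named DTO classes instead of anonymous types, at the cost of maintaining the DTOs by hand. These DTO names are illustrative, and db is the same context as above:
public class UserDto
{
    public int UserID { get; set; }
    public string Username { get; set; }
}

public class AuditEntryDto
{
    public int AuditEntryID { get; set; }
    public string Message { get; set; }
    public UserDto User { get; set; }
}

public IQueryable<AuditEntryDto> GetAuditEntries()
{
    // EF builds the same narrow SELECT; only the projected fields are fetched.
    return from a in db.AuditEntries
           select new AuditEntryDto
           {
               AuditEntryID = a.AuditEntryID,
               Message = a.Message,
               User = new UserDto { UserID = a.User.UserID, Username = a.User.Username }
           };
}
This restores a strongly typed return value (so a typed MVC view becomes possible again) while keeping the narrow query and the control over what gets serialized.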
Include method?
I'm still a little confused about the use of the .Include() method. I understand that this method specifies that related entities should be "eager loaded" along with the specified entity. However, since the guidance seems to be that navigation properties should be placed on both sides of a relationship and marked as virtual, the Include method seems to result in a cycle being created, which has a significant negative impact on its usefulness (especially when serializing).
In my case the "tree" would look a little like:
AuditEntry
    User
        AuditEntries * n
            User * n
            etc.
I'd be very interested to hear any comments about this approach, the impact of using dynamic in this way or any other insights.

How to make exceptions to validation rules on the SL client side?

Suppose I have an entity Person(id, dept, EmailAddress, DOB, ...). When the model is created with EF, I then create a metadata class for it to hold the server-side validation rules, like:
[CustomValidation(typeof(MyValidator), "DOBValidator")]
public Nullable<DateTime> DOB { get; set; }

[RegularExpression("^([\\w-\\.]+)@((\\[[0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3}\\.)|(([\\w-]+\\.)+))([a-zA-Z]{2,4}|[0-9]{1,3})(\\]?)$", ErrorMessage = "Invalid email address")]
[StringLength(128)]
public string EmailAddress { get; set; }
With the validation rules in place, any data sent from the client side goes through validation, with no exceptions, when submitted for saving.
But now I want exceptions to the rules: when data for a Person entity comes in from the UI through binding, I want to skip some validation based on the data. For example, when Dept = A, do not check the EmailAddress validation; when Dept = B, do not check the DOB validation.
How can I resolve this?
I believe you need class-level validation. Have a look at this question.
Of course, your code needs to be compiled client-side. (If you're using WCF RIA Services, there are a couple of ways to achieve this.)
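A sketch of what the class-level rule could look like with CustomValidation in a shared (client-compiled) file. PersonValidator is an illustrative name, and it assumes the generated Person class is partial, as RIA Services entities typically are:
using System.ComponentModel.DataAnnotations;

[CustomValidation(typeof(PersonValidator), "ValidatePerson")]
public partial class Person { }

public static class PersonValidator
{
    public static ValidationResult ValidatePerson(Person person, ValidationContext context)
    {
        // Dept decides which property rules apply.
        if (person.Dept != "A" && !IsPlausibleEmail(person.EmailAddress))
            return new ValidationResult("Invalid email address", new[] { "EmailAddress" });

        if (person.Dept != "B" && person.DOB == null)
            return new ValidationResult("Date of birth is required", new[] { "DOB" });

        return ValidationResult.Success;
    }

    // Placeholder check; reuse your real regex here.
    private static bool IsPlausibleEmail(string email)
    {
        return !string.IsNullOrEmpty(email) && email.Contains("@");
    }
}
Note that the property-level attributes would need to be removed or relaxed for this to work, so that the conditional logic lives only in the class-level validator.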
HTH