How to design the persistence layer in a .NET and EF application with concurrency control? - entity-framework

I have read this post and I think the theory is clear. I have a DAL that only has methods to add, get, update and delete information in a database.
So suppose I have an application with clients, orders and client types. A client type has a percentage that sets the discount applied to that type of client.
The business layer asks the DAL for the client type to know the discount.
The business layer creates the order with the price and applies the discount according to the client type.
The business layer sends the DAL the command to add the new order, passing the new order.
In code I can have this:
DAL:
public async Task<ClientTypes> GetClientType(long paramIDClientType)
{
    using (Entities myDbContext = new Entities())
    {
        return await myDbContext.ClientTypes
            .SingleOrDefaultAsync(x => x.IDType == paramIDClientType);
    }
}
public async Task AddOrder(Orders paramNewOrder)
{
    using (Entities myDbContext = new Entities())
    {
        myDbContext.Orders.Add(paramNewOrder);
        await myDbContext.SaveChangesAsync();
    }
}
Business layer:
public async Task AddOrderToClient(Clients paramClient)
{
    ClientTypes myType = await myDAL.GetClientType(paramClient.IDType);
    Orders myNewOrder = new Orders();
    myNewOrder.IDClient = paramClient.IDClient;
    myNewOrder.Amount = 300;
    myNewOrder.Discount = myType.Discount;
    myNewOrder.Total = myNewOrder.Amount - myNewOrder.Amount * myNewOrder.Discount;
    await myDAL.AddOrder(myNewOrder);
}
But I have a problem with concurrency in this case, because I want to ensure that I use the correct discount; I want to avoid the discount of a client type being changed by another user in the middle of the process of adding the new order.
If I use optimistic concurrency, I have to have a timestamp column in my ClientTypes table, but this does not solve my problem, because in the AddOrder method of my DAL I only pass the new order as a parameter, so the method doesn't have the timestamp value that the business layer would need to check whether the client type has changed, to ensure that the discount used is the correct one.
So I am thinking of this solution:
public async Task AddOrder(Orders paramNewOrder)
{
    using (Entities myDbContext = new Entities())
    {
        // Parameterized to avoid SQL injection; join ClientTypes to the client's row.
        string sql = "select ct.* from ClientTypes as ct inner join Clients as c"
                   + " on ct.IdType = c.IdType where c.IdClient = @p0";
        ClientTypes myClientType = await myDbContext.Database
            .SqlQuery<ClientTypes>(sql, paramNewOrder.IDClient)
            .SingleOrDefaultAsync();
        if (paramNewOrder.Discount != myClientType.Discount)
        {
            throw new Exception("Discount incorrect.");
        }
        paramNewOrder.Total = paramNewOrder.Amount - paramNewOrder.Amount * myClientType.Discount;
        myDbContext.Orders.Add(paramNewOrder);
        await myDbContext.SaveChangesAsync();
    }
}
This is my business logic, but it uses EF to get the data, so I think this solution merges the DAL and the business layer. Is this true? If so, I guess it is not a good solution. But then, how could I control concurrency?
Thanks.

Yes, optimistic concurrency control doesn't help you prevent inserting a faulty new order, because you don't update the ClientType. Only updating a ClientType would raise an exception if the discount was changed in the meantime.
But carefully consider the requirements. Is it really of paramount importance that the correct discount is used milliseconds after it's modified? If so, you have to look for a locking mechanism. Otherwise, just fetch the current discount at the very last moment, do the calculation and commit the order.
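For the "fetch at the very last moment" option, a minimal sketch (assuming EF6; the RepeatableRead transaction and the ClientType navigation property are illustrative assumptions, not the poster's exact model):
public async Task AddOrder(Orders paramNewOrder)
{
    using (var myDbContext = new Entities())
    using (var tx = myDbContext.Database.BeginTransaction(System.Data.IsolationLevel.RepeatableRead))
    {
        // Re-read the discount inside the transaction, so the total is computed
        // from the discount that is current at commit time.
        decimal discount = await myDbContext.Clients
            .Where(c => c.IDClient == paramNewOrder.IDClient)
            .Select(c => c.ClientType.Discount) // assumed navigation property
            .SingleAsync();
        paramNewOrder.Discount = discount;
        paramNewOrder.Total = paramNewOrder.Amount - paramNewOrder.Amount * discount;
        myDbContext.Orders.Add(paramNewOrder);
        await myDbContext.SaveChangesAsync();
        tx.Commit();
    }
}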
You could implement a locking/calculation/insert mechanism in a stored procedure that is mapped to the insert action of an Order. EF can map CUD actions to stored procedures.
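For reference, a hedged sketch of that mapping in EF6 Code First (the procedure name Order_Insert is hypothetical; the locking and discount calculation would live inside the procedure itself):
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    // Map the insert of Orders to a stored procedure that can lock the
    // ClientTypes row, compute the total and insert atomically on the server.
    modelBuilder.Entity<Orders>()
        .MapToStoredProcedures(s => s.Insert(i => i.HasName("Order_Insert")));
}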

Related

EF Cannot be translated

I have a problem with a query that cannot be translated (ToList(), AsEnumerable, etc.).
I need to construct a query which is shared:
Branches -> Customer -> some collection -> some collection
Customer -> some collection -> some collection.
Can you help me find the best way to do this and share the query?
I access the repository via GraphQL, use projection, etc.
public IQueryable<CustomerTableGraphQL> BranchTableReportTest(DateTime actualTime, long userId)
{
    var r =
    (
        from b in _dbContext.Branches
        let t = Customers(b.Id).ToList()
        select new CustomerTableGraphQL
        {
            Id = b.Id,
            Name = b.Name,
            Children =
            (
                from c in t
                select new CustomerTableGraphQL
                {
                    Id = c.Id,
                    Name = c.Name
                }
            )
            .AsEnumerable()
        }
    );
    return r;
}
public IQueryable<Customer> Customers(long branchId) =>
    _dbContext.Customers.Where(x => x.BranchId.Value == branchId).ToList().AsQueryable();
Can someone show an example of how to do it and share an IQueryable between queries?
Using ToList / AsEnumerable etc. entirely defeats the potential benefits of using IQueryable. If your code needs to do this rather than return an IQueryable<TEntity> then you should be returning IEnumerable<TResult> where TResult is whatever entity or DTO/ViewModel you want to return.
An example of an IQueryable<TEntity> repository pattern would be something like this:
public IQueryable<Customer> GetCustomersByBranch(long branchId) =>
    _dbContext.Customers.Where(x => x.BranchId.Value == branchId);
Normally I wouldn't really even have a repository method for that, I'd just use:
public IQueryable<Customer> GetCustomers() =>
    _dbContext.Customers.AsQueryable();
... as the "per branch" is simple enough for the consumer to request without adding methods for every possible filter criteria. The AsQueryable in this case is only needed because I want to ensure the result matches the IQueryable type casting. When your expression has a Where clause then this is automatically interpreted as being an IQueryable result.
So a caller calling the Repository's "GetCustomers()" method would look like:
// get customer details for our branch.
var customers = _Repository.GetCustomers()
    .Where(x => x.BranchId == branchId)
    .OrderBy(x => x.LastName)
    .ThenBy(x => x.FirstName)
    .Select(x => new CustomerSummaryViewModel
    {
        CustomerId = x.Id,
        FirstName = x.FirstName,
        LastName = x.LastName,
        // ...
    })
    .Skip(pageNumber * pageSize)
    .Take(pageSize)
    .ToList();
In this example the repository exposes a base query to fetch data, but without executing/materializing anything. The consumer of that call is then free to:
Filter the data by branch,
Sort the data,
Project the data down to a desired view model
Paginate the results
... before the query is actually run. This pulls just that page of data needed to populate the VM after filters and sorts as part of the query. That Repository method can serve many different calls without needing parameters, code, or dedicated methods to do all of that.
Repositories returning IQueryable that just expose DbSets aren't really that useful. The only purpose they might provide is making unit testing a bit easier, as mocking the repository is simpler than mocking a DbContext and its DbSets. Where the Repository pattern does start to help is in enforcing standardized rules/filters on data: examples include soft-delete flags, or multi-tenant systems where rows might belong to different clients, so a user should only ever search/pull one tenant's data. This also extends to details like authorization checks before data is returned. Some of this can be managed by things like global query filters, but wherever there are common rules to enforce about what data can be retrieved, the Repository can serve as a boundary that ensures those rules are applied consistently. For example, with a soft-delete check:
public IQueryable<Customer> GetCustomers(bool includeInactive = false)
{
    var query = _context.Customers.AsQueryable();
    if (!includeInactive)
        query = query.Where(x => x.IsActive);
    return query;
}
A repository can be given a dependency for locating the currently logged-in user and retrieving their roles, tenant information, etc., then use that to ensure that (see the sketch after this list):
A user is logged in.
The only data retrieved is data available to that user.
An appropriate exception is raised if specific data is requested that this user should never be able to access.
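A minimal sketch of that idea (ICurrentUser, AppDbContext and TenantId are hypothetical names, not from the question):
public class CustomerRepository
{
    private readonly AppDbContext _context;
    private readonly ICurrentUser _currentUser; // hypothetical auth abstraction

    public CustomerRepository(AppDbContext context, ICurrentUser currentUser)
    {
        _context = context;
        _currentUser = currentUser;
    }

    public IQueryable<Customer> GetCustomers()
    {
        if (!_currentUser.IsAuthenticated)
            throw new UnauthorizedAccessException("A logged-in user is required.");
        // Callers can filter/sort/project further, but can never escape the tenant.
        return _context.Customers.Where(c => c.TenantId == _currentUser.TenantId);
    }
}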
An IQueryable repository does require a Unit of Work scope pattern to work efficiently within an application. IQueryable queries do not execute until something like ToList, Single, Any, Count, etc. is called. This means that the caller of the repository ultimately needs to manage the scope of the DbContext that the repository is using, and this sometimes rubs developers the wrong way because they feel the Repository should be a layer of abstraction between the callers (Services, Controllers, etc.) and the data access "layer" (EF). Having that abstraction means adding a lot of complexity that ultimately has to conform to the rules of EF (or even more complexity to avoid that), or significantly hampers performance. In cases where there is a clear need or benefit to tightly standardizing a common API-like approach for a Repository that all systems will conform to, an IEnumerable-typed result is recommended over an IQueryable pattern. The benefit of IQueryable is flexibility and performance: consumers decide and optimize for how the data coming from the Repository is consumed. This flexibility extends to cover both synchronous and asynchronous use cases.
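As a rough sketch of that caller-managed scope (reusing the hypothetical CustomerRepository above):
// The caller owns the DbContext lifetime (the unit of work); the repository
// only composes the query, which must execute while the context is alive.
using (var context = new AppDbContext())
{
    var repository = new CustomerRepository(context, currentUser);
    var page = repository.GetCustomers()
        .OrderBy(c => c.LastName)
        .Skip(0)
        .Take(25)
        .ToList(); // the query runs here, inside the unit of work
}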
EF Core will translate only inlined query code. This query will work:
public IQueryable<CustomerTableGraphQL> BranchTableReportTest(DateTime actualTime, long userId)
{
    var r =
    (
        from b in _dbContext.Branches
        select new CustomerTableGraphQL
        {
            Id = b.Id,
            Name = b.Name,
            Children =
            (
                from c in _dbContext.Customers
                where c.BranchId == b.Id
                select new CustomerTableGraphQL
                {
                    Id = c.Id,
                    Name = c.Name
                }
            )
            .AsEnumerable()
        }
    );
    return r;
}
If you plan to reuse query parts, you have to deal with LINQKit and its ExpandableAttribute (will show sample on request)
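As an illustration (not the exact sample offered above), sharing a projection with LINQKit might look like this; AsExpandable and Invoke are LINQKit's documented extension methods, while the entity names follow the question:
using System;
using System.Linq;
using System.Linq.Expressions;
using LinqKit;

public static class CustomerProjections
{
    // Declared once, shared by any query that needs this shape.
    public static readonly Expression<Func<Customer, CustomerTableGraphQL>> ToGraphQL =
        c => new CustomerTableGraphQL { Id = c.Id, Name = c.Name };
}

public IQueryable<CustomerTableGraphQL> BranchTableReport()
{
    // AsExpandable lets LINQKit splice the Invoke'd expression into the tree
    // before EF Core tries to translate it.
    return _dbContext.Branches.AsExpandable()
        .Select(b => new CustomerTableGraphQL
        {
            Id = b.Id,
            Name = b.Name,
            Children = _dbContext.Customers
                .Where(c => c.BranchId == b.Id)
                .Select(c => CustomerProjections.ToGraphQL.Invoke(c))
                .AsEnumerable()
        });
}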

Paging and sorting Entity Framework on a field from Partial Class

I have a GridView which needs to page and sort data which comes from a collection of Customer objects.
Unfortunately my customer information is stored separately...the customer information is stored as a Customer ID in my database, and the Customer Name in a separate DLL.
I retrieve the ID from the database using Entity Framework, and the name from the external DLL through a partial class.
I am getting the ID from my database as follows:
public class DAL
{
    public IEnumerable<Customer> GetCustomers()
    {
        Entities entities = new Entities();
        var customers = (from c in entities.Customers
                         select c);
        // CustomerID is a field in the Customer table
        return customers;
    }
}
I have then created a partial class, which retrieves the data from the DLL:
public partial class Customer
{
    private string name;
    public string Name
    {
        get
        {
            if (name == null)
            {
                DLLManager manager = new DLLManager();
                name = manager.GetName(CustomerID);
            }
            return name;
        }
    }
}
In my business layer I can then call something like:
public class BLL
{
    public List<Customer> GetCustomers()
    {
        DAL customersDAL = new DAL();
        var customers = customersDAL.GetCustomers();
        return customers.ToList();
    }
}
}
...and this gives me a collection of Customers with ID and Name.
My problem is that I wish to page and sort by Customer Name which, as we have seen, is populated from a DLL. This means I cannot page and sort in the database, which is my preferred solution. I am therefore assuming I am going to have to pull all of the database records into memory, and perform paging and sorting at this level.
My question is - what is the best way to page and sort an in-memory collection. Can I do this with my List in the BLL above? I assume the List would then need to be stored in Session.
I am interested in people's thoughts on the best way to page and sort a field that does not come from the database in an Entity Framework scenario.
Very grateful for any help!
Mart
p.s. This question is a development of this post here:
GridView sorting and paging Entity Framework with calculated field
The only difference here is that I am now using a partial class, and hopefully this post is a little clearer.
Yes, you can page and sort within your list in the BLL. As long as it's fast enough, I wouldn't care too much about caching something in the session. Another way would be to extend your database with the data from your DLL.
I posted this question slightly differently on a different forum, and got the following solution.
Basically I return the data as an IQueryable from the DAL which has already been forced to execute using ToList(). This means that I am running my sorting and paging against an object which consists of data from the DB and DLL. This also allows Scott's dynamic sorting to take place.
The BLL then performs OrderBy(), Skip() and Take() on the returned IQueryable and then returns this as a List to my GridView.
It works fine, but I am slightly bemused that we are performing IQueryable to List to IQueryable to List again.
1) Get the results from the database as an IQueryable:
public class DAL
{
    public IQueryable<Customer> GetCustomers()
    {
        Entities entities = new Entities();
        var customers = (from c in entities.Customers
                         select c);
        // CustomerID is a field in the Customer table
        return customers.ToList().AsQueryable();
    }
}
2) Pull the results into my business layer:
public class BLL
{
    public List<Customer> GetCustomers(int startRowIndex, int maximumRows, string sortParameter)
    {
        DAL customersDAL = new DAL();
        return customersDAL.GetCustomers()
            .OrderBy(sortParameter)
            .Skip(startRowIndex)
            .Take(maximumRows)
            .ToList();
    }
}
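Note that OrderBy(sortParameter) with a string relies on the dynamic sorting helper mentioned above. If you would rather avoid that dependency, a plain LINQ sketch (assuming you sort by the DLL-populated Name) could be:
public List<Customer> GetCustomersPage(int startRowIndex, int maximumRows)
{
    DAL customersDAL = new DAL();
    return customersDAL.GetCustomers()
        .OrderBy(c => c.Name) // runs in memory, so the DLL-backed Name works
        .Skip(startRowIndex)
        .Take(maximumRows)
        .ToList();
}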
Here is the link to the other thread.
http://forums.asp.net/p/1976270/5655727.aspx?Paging+and+sorting+Entity+Framework+on+a+field+from+Partial+Class
Hope this helps others!

How can I create a generic update method for One to Many structures in Entity Framework 5?

I am writing a web application in which I get different objects back from the web that need to be either updated or added to the database. On top of this, I need to check that the owner is not modified, since a hacker could potentially get an account and send an update that modifies the foreign key to the user model. I don't want to have to manually code all of these methods; instead I want to make a simple generic call.
Maybe something as simple as this
ctx.OrderLines.AddOrUpdateSet(order.OrderLines, a => a.Order)
Based on the old persisted records that have a foreign key to Order, and on the new incoming records:
Delete old records that are not in the new records list.
Add new records that are not in the old records list.
Update new records that exist in both lists.
ctx.Entry(orderLine).State=EntityState.Deleted;
...
ctx.Entry(orderLine).State=EntityState.Added;
...
ctx.Entry(orderLine).State=EntityState.Modified;
This gets a bit complicated when the old record is loaded to verify that ownership did not change. I get an error if I don't do:
oldorder.OrderLines.remove(oldOrderLine); //for deletes
oldorder.OrderLines.add(oldOrderLine); //for adds
ctx.Entry(header).CurrentValues.SetValues(header); //for modifications
With Entity Framework 5 there is a new extension function called AddOrUpdate. And there was a very interesting (please read) blog entry on how to create this method before it was added.
I'm not sure if this is too much to ask as a question on StackOverflow; any clues on how to approach the problem may be sufficient. Here are my thoughts so far:
a) Leverage AddOrUpdate for some of the functionality.
b) Create a secondary context, hoping to avoid loading the order into the context and avoid extra calls.
c) Set the state of all the saved objects to initially deleted.
Since you have linked to this question from my own question, I thought I'd throw in some newly acquired Entity Framework experience of my own.
To achieve a common save method in my generic repository with Entity Framework, I do this. (Please note that the Context is a member of my repository, as I am implementing the Unit of Work pattern as well.)
public class EFRepository<TEntity> : IRepository<TEntity> where TEntity : class
{
    internal readonly AwesomeContext Context;
    internal readonly DbSet<TEntity> DbSet;

    public EFRepository(AwesomeContext context)
    {
        if (context == null) throw new ArgumentNullException("context");
        Context = context;
        DbSet = context.Set<TEntity>();
    }

    // Rest of implementation removed for brevity

    public void Save(TEntity entity)
    {
        var entry = Context.Entry(entity);
        if (entry.State == EntityState.Detached)
            DbSet.Add(entity);
        else entry.State = EntityState.Modified;
    }
}
Honestly, I can't tell you why this works, because I just kept changing the state conditions - however I do have unit (integration) tests to prove that it works. Hopefully someone more into EF than myself can shed some light on this.
Regarding the "cascading updates", I was curious myself as if it would work using the Unit of Work pattern (my question I linked to was when I did not know it existed, and my repositories would basically create a unit of work whenever I wanted to save/get/delete, which is bad), so I threw in a test case in a simple relational DB. Here is a diagram to give you an idea.
IMPORTANT In order for test case number 2 to work, you need to make your POCO reference properties virtual, in order for EF to provide lazy loading.
The repository implementation is just derived from the generic EFRepository<TEntity> as shown above, so I'll leave out that implementation.
These are my test cases, both pass.
public class EFResourceGroupFacts
{
    [Fact]
    public void Saving_new_resource_will_cascade_properly()
    {
        // Recreate a fresh database and add some dummy data.
        SetupTestCase();
        using (var ctx = new LocalizationContext("Localization.CascadeTest"))
        {
            var cultureRepo = new EFCultureRepository(ctx);
            var resourceRepo = new EFResourceRepository(cultureRepo, ctx);
            var existingCulture = cultureRepo.Get(1); // First and only culture.
            var groupToAdd = new ResourceGroup("Added Group");
            var resourceToAdd = new Resource(existingCulture, "New Resource",
                "Resource to add to existing group.", groupToAdd);
            // Verify we got a single resource group.
            Assert.Equal(1, ctx.ResourceGroups.Count());
            // Saving the resource should also add the group.
            resourceRepo.Save(resourceToAdd);
            ctx.SaveChanges();
            // Verify the group was added without explicitly saving it.
            Assert.Equal(2, ctx.ResourceGroups.Count());
        }
        // Try creating a new Unit of Work to really verify it has been persisted.
        using (var ctx = new LocalizationContext("Localization.CascadeTest"))
        {
            Assert.DoesNotThrow(() => ctx.ResourceGroups.First(rg => rg.Name == "Added Group"));
        }
    }

    [Fact]
    public void Changing_existing_resources_group_saves_properly()
    {
        SetupTestCase();
        using (var ctx = new LocalizationContext("Localization.CascadeTest"))
        {
            ctx.Configuration.LazyLoadingEnabled = true;
            var cultureRepo = new EFCultureRepository(ctx);
            var resourceRepo = new EFResourceRepository(cultureRepo, ctx);
            // This resource already has a group.
            var existingResource = resourceRepo.Get(2);
            Assert.NotNull(existingResource.ResourceGroup); // IMPORTANT: Property must be virtual!
            // Verify there is only one resource group in the datastore.
            Assert.Equal(1, ctx.ResourceGroups.Count());
            existingResource.ResourceGroup = new ResourceGroup("I am implicitly added to the database. How cool is that?");
            // Make sure there are 2 resources in the datastore before saving.
            Assert.Equal(2, ctx.Resources.Count());
            resourceRepo.Save(existingResource);
            ctx.SaveChanges();
            // Make sure there are STILL only 2 resources in the datastore AFTER saving.
            Assert.Equal(2, ctx.Resources.Count());
            // Make sure the new group was added.
            Assert.Equal(2, ctx.ResourceGroups.Count());
            // Refetch from store, verify relationship.
            existingResource = resourceRepo.Get(2);
            Assert.Equal(2, existingResource.ResourceGroup.Id);
            // Let's change the group to an existing group.
            existingResource.ResourceGroup = ctx.ResourceGroups.First();
            resourceRepo.Save(existingResource);
            ctx.SaveChanges();
            // Assert no change in groups.
            Assert.Equal(2, ctx.ResourceGroups.Count());
            // Refetch from store, verify relationship.
            existingResource = resourceRepo.Get(2);
            Assert.Equal(1, existingResource.ResourceGroup.Id);
        }
    }

    private void SetupTestCase()
    {
        // Delete everything first. Database.SetInitializer does not work very well for me.
        using (var ctx = new LocalizationContext("Localization.CascadeTest"))
        {
            ctx.Database.Delete();
            ctx.Database.Create();
            var culture = new Culture("en-US", "English");
            var resourceGroup = new ResourceGroup("Existing Group");
            var resource = new Resource(culture, "Existing Resource 1",
                "This resource will already exist when starting the test. Initially it has no group.");
            var resourceWithGroup = new Resource(culture, "Existing Resource 2",
                "Same for this resource, except it has a group.", resourceGroup);
            ctx.Cultures.Add(culture);
            ctx.ResourceGroups.Add(resourceGroup);
            ctx.Resources.Add(resource);
            ctx.Resources.Add(resourceWithGroup);
            ctx.SaveChanges();
        }
    }
}
It was interesting to learn this, as I was not sure if it would work.
After working on this for a while I found an open-source project called GraphDiff; here is its blog entry, 'Introducing GraphDiff for Entity Framework Code First – allowing automated updates of a graph of detached entities'. I only began using it but it looks impressive, and it does solve the problem of issuing update/delete/insert for many-to-one relationships. It actually generalizes the problem to graphs and allows arbitrary nesting.
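To give an idea of the usage (a hedged sketch based on GraphDiff's documented API; MyContext and the order variable stand in for your own types):
using RefactorThis.GraphDiff;

using (var ctx = new MyContext())
{
    // Diffs the detached order and its lines against the database and issues
    // the necessary inserts, updates and deletes in one call.
    ctx.UpdateGraph(order, map => map.OwnedCollection(o => o.OrderLines));
    ctx.SaveChanges();
}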
Here is the generic method I concocted. It does use AddOrUpdate from the System.Data.Entity.Migrations namespace, which may be reloading records from the db; I'll be checking on that later. The usage is:
ctx.OrderLines.AddOrUpdateSet(l => l.orderId == neworder.Id,
    l => l.Id, order.orderLines);
Here is the code:
public static class UpdateExtensions
{
    public static void AddOrUpdateSet<TEntity>(this IDbSet<TEntity> set,
        Expression<Func<TEntity, bool>> predicate,
        Func<TEntity, int> selector, IEnumerable<TEntity> newRecords) where TEntity : class
    {
        List<TEntity> oldRecords = set.Where(predicate).ToList();
        IEnumerable<int> keys = newRecords.Select(selector);
        foreach (TEntity newRec in newRecords)
            set.AddOrUpdate(newRec);
        oldRecords.FindAll(old => !keys.Contains(selector(old)))
                  .ForEach(detail => set.Remove(detail));
    }
}

Having a hard time with Entity Framework detached POCO objects

I want to use EF DbContext/POCO entities in a detached manner, i.e. retrieve a hierarchy of entities from my business tier, make some changes, then send the entire hierarchy back to the business tier to persist back to the database. Each BLL call uses a different instance of the DbContext. To test this I wrote some code to simulate such an environment.
First I retrieve a Customer plus related Orders and OrderLines:-
Customer customer;
using (var context = new TestContext())
{
    customer = context.Customers.Include("Orders.OrderLines").SingleOrDefault(o => o.Id == 1);
}
Next I add a new Order with two OrderLines:-
var newOrder = new Order { OrderDate = DateTime.Now, OrderDescription = "Test" };
newOrder.OrderLines.Add(new OrderLine { ProductName = "foo", Order = newOrder, OrderId = newOrder.Id });
newOrder.OrderLines.Add(new OrderLine { ProductName = "bar", Order = newOrder, OrderId = newOrder.Id });
customer.Orders.Add(newOrder);
newOrder.Customer = customer;
newOrder.CustomerId = customer.Id;
Finally I persist the changes (using a new context):-
using (var context = new TestContext())
{
    context.Customers.Attach(customer);
    context.SaveChanges();
}
I realise this last part is incomplete, as no doubt I'll need to change the state of the new entities before calling SaveChanges(). Do I Add or Attach the customer? Which entities' states will I have to change?
Before I can get to this stage, running the above code throws an Exception:
An object with the same key already exists in the ObjectStateManager.
It seems to stem from not explicitly setting the ID of the two OrderLine entities, so both default to 0. I thought it was fine to do this as EF would handle things automatically. Am I doing something wrong?
Also, working in this "detached" manner, there seems to be a lot of work required to set up the relationships - I have to add the new order entity to the customer.Orders collection, set the new order's Customer property, and its CustomerId property. Is this the correct approach or is there a simpler way?
Would I be better off looking at self-tracking entities? I'd read somewhere that they are being deprecated, or at least being discouraged in favour of POCOs.
You basically have 2 options:
A) Optimistic.
You can proceed pretty close to the way you're proceeding now, and just attach everything as Modified and hope. The code you're looking for instead of .Attach() is:
context.Entry(customer).State = EntityState.Modified;
Definitely not intuitive. This weird looking call attaches the detached (or newly constructed by you) object, as Modified. Source: http://blogs.msdn.com/b/adonet/archive/2011/01/29/using-dbcontext-in-ef-feature-ctp5-part-4-add-attach-and-entity-states.aspx
If you're unsure whether an object has been added or modified you can use the last segment's example:
context.Entry(customer).State = customer.Id == 0 ?
    EntityState.Added :
    EntityState.Modified;
You need to take these actions on all of the objects being added/modified, so if this object is complex and has other objects that need to be updated in the DB via FK relationships, you need to set their EntityState as well.
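For the question's Customer/Orders/OrderLines graph, a minimal sketch of that (Add first, so the new lines' duplicate temporary keys don't collide, then correct the states of the pre-existing objects; this assumes Id == 0 means new):
using (var context = new TestContext())
{
    // Add attaches the whole graph as Added; Added entities may share key 0.
    context.Customers.Add(customer);

    // The customer itself already exists, so it is an update, not an insert.
    context.Entry(customer).State = EntityState.Modified;
    foreach (var order in customer.Orders)
    {
        if (order.Id != 0)
            context.Entry(order).State = EntityState.Modified;
        foreach (var line in order.OrderLines)
            if (line.Id != 0)
                context.Entry(line).State = EntityState.Modified;
    }
    context.SaveChanges();
}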
Depending on your scenario you can make these kinds of don't-care writes cheaper by using a different Context variation:
public class MyDb : DbContext
{
    . . .
    public static MyDb CheapWrites()
    {
        var db = new MyDb();
        db.Configuration.AutoDetectChangesEnabled = false;
        db.Configuration.ValidateOnSaveEnabled = false;
        return db;
    }
}
using (var db = MyDb.CheapWrites())
{
    db.Entry(customer).State = customer.Id == 0 ?
        EntityState.Added :
        EntityState.Modified;
    db.SaveChanges();
}
You're basically just disabling some extra calls EF makes on your behalf that you're ignoring the results of anyway.
B) Pessimistic. You can actually query the DB to verify the data hasn't changed/been added since you last picked it up, then update it if it's safe.
var existing = db.Customers.Find(customer.Id);
// Some logic here to decide whether updating is a good idea, like
// verifying selected values haven't changed, then
db.Entry(existing).CurrentValues.SetValues(customer);

MVC 2 and EF4 Self-tracking entities models have bad state on post back

I've got standard Create(), Edit() and Delete() methods on my controllers, and I am using the EF4 Self-Tracking Entities.
When the edit is posted back, model.ChangeTracker.ChangeTrackingEnabled = false and model.ChangeTracker.State = ObjectState.Added, even though I made sure those were set when retrieving the record initially.
Are the self-tracking entities not persisting the ChangeTracker state when the form is submitted? If so, how do I fix that?
public virtual ActionResult Edit(int personId)
{
    IContext context = ContextFactory.GetContext();
    EntityRepo Repo = new EntityRepo(context);
    Person d = Repo.Person.GetById(personId);
    d.ChangeTracker.ChangeTrackingEnabled = true;
    return View(d);
}
[HttpPost]
public virtual ActionResult Edit(int personId, Person item)
{
    try
    {
        if (ModelState.IsValid)
        {
            IContext context = ContextFactory.GetContext();
            EntityRepo Repo = new EntityRepo(context);
            // the item is coming back with these properties wrong:
            //item.ChangeTracker.ChangeTrackingEnabled = false;
            //item.ChangeTracker.State = ObjectState.Added;
            Repo.Person.Update(item);
            Repo.Person.SaveChanges();
            return RedirectToAction("Index");
        }
    }
    catch
    {
    }
    return View();
}
Let's start at the beginning.
What are Self-Tracking Entities, exactly?
A Self-Tracking Entity is an entity which can do change tracking even when it is not connected to an ObjectContext. They are useful when you must change the entity but cannot have it connected to an ObjectContext.
So when would I want one, really?
Mostly, when you must have distributed objects. For example, one use case is when you are making a web service which talks to a Silverlight client. However, other tools, like RIA Services may be a better fit here. Another possible use case is for a long-running task. Since an ObjectContext is intended to be a unit of work and should typically not be long-lived, having a disconnected entity might make sense here.
Do they make any sense for MVC?
Not really, no.
Let's look at this a little deeper, and examine what happens when you update an entity in MVC. The general process is like this:
The browser issues a GET request for an update page.
The MVC app fetches an entity, and uses it to build an update HTML page. The page is served to the browser, and most C# objects, including your entity, are disposed. At this point, you can restart the Web server, and the browser will never know the difference.
The browser issues a POST request to update the entity.
The MVC framework uses the data in the POST in order to materialize an instance of an edit model which is passed to the update action. This might happen to be the same type as the entity, but it is a new instance.
The MVC app can update the entity and pass those changes back to the database.
Now, you could make self-tracking entities work by also including the full state of the STE in the HTML form and POSTing that back to the MVC app along with the scalar values on the entity. Then the Self-Tracking Entity might at least work.
But what benefit does this give you? The browser obviously cannot deal with your entity as a C# object. So it cannot make any changes to the entity worth tracking in terms that a Self-Tracking Entity would understand.
You should keep the original STE in a hidden field; it's like your own custom ViewState. In the submit method you must merge the original STE with the new values.
Use an ActionFilterAttribute for it, like:
public class SerializeOriginalModelAttribute : ActionFilterAttribute
{
    public override void OnActionExecuted(ActionExecutedContext filterContext)
    {
        var viewResult = filterContext.Result as ViewResult;
        if (viewResult == null)
            return;
        var viewModel = viewResult.ViewData.Model as ViewModel;
        if (viewModel == null || viewModel.SteObject == null)
            return;
        byte[] bytes;
        using (var stream = new MemoryStream())
        {
            var serializer = new DataContractSerializer(viewModel.SteObject.GetType());
            serializer.WriteObject(stream, viewModel.SteObject);
            bytes = stream.ToArray();
        }
        var compressed = GZipHelper.Compress(bytes);
        viewModel.SerializedSteObject = Convert.ToBase64String(compressed);
    }

    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        if (filterContext.ActionParameters == null || filterContext.ActionParameters.Count == 0)
            return;
        var viewModel = filterContext.ActionParameters.First().Value as ViewModel;
        var serialized = filterContext.HttpContext.Request.Form["SerializedSteObject"];
        if (viewModel == null || String.IsNullOrEmpty(serialized))
            return;
        var type = filterContext.ActionParameters.First().Value.GetType().BaseType.GetGenericArguments()[0];
        var bytes = GZipHelper.Decompress(Convert.FromBase64String(serialized));
        using (var stream = new MemoryStream(bytes))
        {
            var serializer = new DataContractSerializer(type);
            viewModel.SteObject = serializer.ReadObject(stream);
        }
    }
}
STEs have one very big drawback: you have to store them in session or view state (WebForms), so they are nothing more than a "new version of the DataSet". If you don't store the STE you will have one instance for getting data and a different one for posting = no change tracking.
I think you are missing the idea of the Repository. You should not have an Update method in the repository: after submitting, you should get the item again, apply the modifications and then save.
I prefer having a service layer between the client and the repository. We can always change the strategy with which we merge.
And yes, if you need to persist your STEs between requests, use session or viewstate.
It should be
Repo.Person.ApplyChanges(item);
Repo.Person.SaveChanges();
instead of
Repo.Person.Update(item);
Repo.Person.SaveChanges();
Self-tracking works with the ApplyChanges extension method.