Maximum number of related records - entity-framework

Is there a way to specify the maximum number of related records allowed for an entity? For example, for each Order entity I want to specify a constraint that it has a maximum of five OrderItems.
Would I have to use SQL, or is there something in the Fluent API or EF attributes that can help?

I think that "a maximum of five orderItems per order" is a business requirement. Such requirements should not be implemented by infrastructure (mapping) or sql (although I'm not sure what you mean by that, I read it as database logic). An attribute that causes validation might be OK, but I don't think there is any attribute for it.
You should implement it in a way that validation and feedback occur similar to all other business rules. Rules implemented in mapping (if it were possible) or database constraints would require a second validation mechanism, probably catching exceptions, which is ugly.
Besides that, it is a rule that could change one day, maybe even temporarily (Christmas?). Then you don't want the implementation of this rule to be scattered over various application layers.
I would implement the rule in some AddItem method in a service class or repository or in the Order class itself and make the maximum configurable.
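A minimal sketch of the last option (the class shapes here are illustrative, not from your model; OrderItem stands in for your entity):
public class Order
{
    // The maximum could be read from configuration instead of being passed in.
    private readonly int maxItems;
    private readonly List<OrderItem> items = new List<OrderItem>();

    public Order(int maxItems)
    {
        this.maxItems = maxItems;
    }

    public void AddItem(OrderItem item)
    {
        // The business rule lives in one place and fails with a meaningful message.
        if (items.Count >= maxItems)
            throw new InvalidOperationException(
                string.Format("An order may contain at most {0} items.", maxItems));
        items.Add(item);
    }
}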

I would approach this something like what was done here:
Limit size of Queue<T> in .NET?
Also: How do I override List<T>'s Add method in C#?
Handle this by overriding whatever is handling your list of returned entities with an extended type that implements your business-logic requirements. This also makes it easy to control the limit from a settings file if you want to change it in the future.
I know of no way to do this in either EF/Fluent or SQL, and it would seem counterintuitive anyway, since this is business logic and not relevant to how you persist the data. (Not to say there isn't a way I don't know of.)
Something like this should work:
public class LimitedList<T> : Collection<T> {
    private readonly int limit;

    public int Limit {
        get { return limit; }
    }

    public LimitedList(int limit) {
        this.limit = limit;
    }

    // Collection<T>.Add funnels through InsertItem, so the limit cannot be
    // bypassed the way a hidden (non-virtual) List<T>.Add could be.
    protected override void InsertItem(int index, T item) {
        if (Count >= limit) {
            throw new InvalidOperationException(
                "LimitedList is capped at " + limit + " items.");
        }
        base.InsertItem(index, item);
    }
}
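Hypothetical usage, with the five-item cap from the question (OrderItem stands in for your entity type):
var orderItems = new LimitedList<OrderItem>(5);
orderItems.Add(new OrderItem()); // fine for the first five, throws on the sixth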

Related

Zend - Design Pattern DataMapper & Table Gateway

This is directly out of the Zend Quick Start guide. My question is: why would you need the setDbTable() method when the getDbTable() method assigns a default Zend_Db_Table object? If you know this mapper uses a particular table, why even offer the possibility of potentially using the "wrong" table via setDbTable()? What flexibility do you gain by being able to set the table if the rest of the code (find(), fetchAll() etc.) is specific to Guestbook?
class Application_Model_GuestbookMapper
{
    protected $_dbTable;

    public function setDbTable($dbTable)
    {
        if (is_string($dbTable)) {
            $dbTable = new $dbTable();
        }
        if (!$dbTable instanceof Zend_Db_Table_Abstract) {
            throw new Exception('Invalid table data gateway provided');
        }
        $this->_dbTable = $dbTable;
        return $this;
    }

    public function getDbTable()
    {
        if (null === $this->_dbTable) {
            $this->setDbTable('Application_Model_DbTable_Guestbook');
        }
        return $this->_dbTable;
    }

    // ... GUESTBOOK SPECIFIC CODE ...
}

class Application_Model_DbTable_Guestbook extends Zend_Db_Table_Abstract
{
    protected $_name = 'guestbook_table';
}
Phil is correct; this is known as the lazy-loading design pattern. I just implemented this pattern in a recent project because of these benefits:
When I call the getMember() method, I get a return value regardless of whether it has been set before. This is great for method chaining: $this->getCar()->getTires()->getSize();
This pattern offers flexibility in that outside calling code is still able to set member values: $myClass->setCar(new Car());
-- EDIT --
Use caution when implementing the lazy-loading design pattern. If your objects are not properly hydrated, a query will be issued for every piece of data that is NOT available. The best thing to do is tail your DB query log during the dev phase to ensure the number and type of queries are what you expect. A project I was working on was issuing over 27 queries for a single "detail" page, and I had no idea until I saw the queries.
This method is called lazy loading. It allows a property to remain null until it is requested, unless it was set earlier.
One use for setDbTable() would be testing. This way you could set a mock DB table or something like that.
One addition: if setDbTable() is solely for lazy loading, wouldn't it make more sense to make it private? That would avoid accidental assignment to the wrong table, as Sam originally mentioned.
Should we be compromising the design for the sake of testability?

Entity Framework in n-layered application - Lazy loading vs. Eager loading patterns

This question doesn't let me sleep: I've been trying to find a solution for a year now, but still nothing has come to mind. Perhaps you can help me, because I think this is a very common issue.
I have an n-layered application: presentation layer, business logic layer, model layer. Suppose for simplicity that my application contains, in the presentation layer, a form that allows a user to search for a customer. The user fills in the filters through the UI and clicks a button; the request then arrives at a business logic layer method like CustomerSearch(CustomerFilter myFilter). The business logic layer keeps it simple: it creates a query on the model and gets back the results.
Now the question: how do you face the problem of loading data? The business logic layer doesn't know that that particular method will be invoked only by that form, so it doesn't know whether the requesting form needs just the Customer objects or the Customer objects with the linked Order entities.
Let me explain better:
Our form just wants to list Customers, searching by surname. It has nothing to do with orders. So the business logic query will be something like:
(from c in ctx.CustomerSet
 where c.Name.Contains(strQry)
 select c).ToList();
Now this works correctly. Two days later your boss asks you to add a customer-search form like the other one, except that it must also show the total count of orders created by each customer. Now I'd like to reuse that query and add the piece of logic that includes the orders and returns them too.
How would you handle this request?
Here is the best idea I've had so far (I think). I'd like to hear yours:
my CustomerSearch method in the BLL doesn't create the query directly but passes through private static extension methods that compose the ObjectQuery, like:
private static ObjectQuery<Customer> SearchCustomers(this ObjectQuery<Customer> qry, CustomerFilter myFilter)
and
private static ObjectQuery<Customer> IncludeOrders(this ObjectQuery<Customer> qry)
but this doesn't convince me, as it seems too complex.
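For clarity, here is roughly what I mean by composing them (a sketch; the Surname filter property and the "Orders" include path are assumptions):
// In a static helper class; extension methods must be static and
// visible to the business logic layer that composes them.
public static ObjectQuery<Customer> SearchCustomers(
    this ObjectQuery<Customer> qry, CustomerFilter myFilter)
{
    return (ObjectQuery<Customer>)qry.Where(c => c.Name.Contains(myFilter.Surname));
}

public static ObjectQuery<Customer> IncludeOrders(this ObjectQuery<Customer> qry)
{
    return qry.Include("Orders"); // eager-load the Orders navigation property
}

// In the BLL:
public List<Customer> CustomerSearch(CustomerFilter myFilter, bool includeOrders)
{
    ObjectQuery<Customer> qry = ctx.CustomerSet.SearchCustomers(myFilter);
    if (includeOrders)
        qry = qry.IncludeOrders(); // only pay for Orders when the caller needs them
    return qry.ToList();
}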
Thanks,
Marco
Consider moving to DTOs for the interface between the presentation layer and the business layer; see for example: http://msdn.microsoft.com/en-us/magazine/ee236638.aspx
Something like AutoMapper can relieve much of the pain associated with moving to DTOs, and the move will make explicit what you can and cannot do with the results of a query: if it's on the DTO it's loaded; if it's not, you need a different DTO.
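For illustration, a sketch of a read-only DTO and its map using AutoMapper's classic static API (the DTO shape and property names are assumptions):
// If OrderCount is on the DTO, it has been loaded; otherwise use a different DTO.
public class CustomerSummaryDto
{
    public int Id { get; set; }
    public string Surname { get; set; }
    public int OrderCount { get; set; }
}

// Configured once at startup:
Mapper.CreateMap<Customer, CustomerSummaryDto>()
    .ForMember(d => d.OrderCount, opt => opt.MapFrom(c => c.Orders.Count));

// In the business layer:
List<CustomerSummaryDto> dtos =
    Mapper.Map<List<Customer>, List<CustomerSummaryDto>>(customers);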
Your current plan sounds rather too tightly coupled between the presentation layer and the data layer.
I would agree with the comment from Hightechrider in reference to using DTOs; however, you have a valid question with regard to business entities.
One possible solution (I'm using something along these lines in a project I'm developing) is to use DTOs that are read-only, at least from the presentation layer's perspective. Your query/get operations would only return DTOs; this would give you the lazy-loading capability.
You could set up your business layer to return an Editable object that wraps the DTO when an object/entity is updated/created. Your editable object could enforce any business rules, and then when it was saved/passed to the business layer, the DTO it wrapped (with the updated values) could be passed to the data layer.
public class Editable
{
    // ...initialize this; other properties/methods...

    public bool CanEdit<TRet>(Expression<Func<Dto, TRet>> property)
    {
        // do something to determine whether this property can be edited
        return true;
    }

    public bool Update<TRet>(Expression<Func<Dto, TRet>> property, TRet updatedValue)
    {
        if (CanEdit(property))
        {
            // set the value on the property of the DTO (somehow)
            return true;
        }
        return false;
    }

    public Dto ValueOf { get; private set; }
}
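Hypothetical usage (GetEditableCustomer and the Name property are assumed names):
Editable editable = businessLayer.GetEditableCustomer(customerId);
if (editable.CanEdit(dto => dto.Name))
{
    editable.Update(dto => dto.Name, "New name");
}
Dto current = editable.ValueOf; // read access stays open to everyone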
This gives you the ability to control whether the user can get editable objects from the business layer at all, as well as allowing the business object to enforce whether the user has permission to edit specific properties of an object. A common problem I run into in the domain I work in is that some users can edit all of the properties and others cannot, while anyone can view the values of the properties. Additionally, the presentation layer gains the ability to determine what to expose as editable to the user, as dictated and enforced by the business layer.
Another thought I had: couldn't your business layer expose IQueryable, or take standard expressions as arguments that it passes on to your data layer? For example, I have a paging query built something like this:
public class PageData
{
    public int PageNum;
    public int TotalNumberPages;
    public IEnumerable<Dto> DataSet;
}

public class BL
{
    public PageData GetPagedData(int pageNum, int itemsPerPage, Expression<Func<Dto, bool>> whereClause)
    {
        var dataCt = dataContext.Dtos.Where(whereClause).Count();
        var dataSet = dataContext.Dtos.Where(whereClause).Skip(pageNum * itemsPerPage).Take(itemsPerPage);
        var ret = new PageData
        {
            // init this
        };
        return ret;
    }
}
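Hypothetical usage from the presentation layer (the Name property on Dto is assumed):
var bl = new BL();
PageData page = bl.GetPagedData(0, 20, dto => dto.Name.Contains("Smith"));
// page.DataSet holds at most 20 rows; the Count() drives TotalNumberPages.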

DDD, handling dependencies

Boring intro:
I know - DDD isn't about technology. As I see it, DDD is all about creating a ubiquitous language with the product owner and reflecting it in code in such a simple and structured manner that it just can't be misinterpreted or lost.
But here a paradox comes into play: in order to get the technical side of the application out of the domain model, the model gets kind of technical itself, at least from a design perspective.
Last time I tried to follow DDD, it ended up with the whole logic outside the domain objects, in 'magic' services all around, and an anemic domain model.
I've learnt some new ninja tricks and am wondering if I can handle Goliath this time.
Problem:
class Store : AggregateRoot {
    private IList<Product> products;

    public void AddProduct(Product product) {
        if (new FreshSpecification().IsSatisfiedBy(product))
            products.Add(product);
    }
}

class Product : Entity {
    public ProductType ProductType { get; set; }
    public DateTime ProducedOn { get; set; }
}

class ProductTypeValidityTerm : AggregateRoot {
    public ProductType ProductType { get; set; }
    public int Days { get; set; }
}
FreshSpecification is supposed to specify whether a product does not smell. In order to do that, it should check the type of the product, look up by that type how many days such a product stays fresh, and compare that term with ProducedOn. Kind of simple.
But here comes the problem: ProductTypeValidityTerm and ProductType are supposed to be managed by the client. He should be able to freely add/modify those. Because I can't traverse from Product to ProductTypeValidityTerm directly, I need to somehow query the terms by ProductType.
Previously, I would have created something like a ProductService that receives the necessary repositories through its constructor, queries the terms, performs some additional voodoo, and returns a boolean (taking relevant logic further away from the object itself and scattering it who knows where).
I thought that it might be acceptable to do something like this:
public void AddProduct(Product product, IProductTypeValidityTermRepository termRepository) { ... }
But then again, I couldn't freely compose the specification from multiple specifications underneath it, which is one of their main advantages.
So the question is: where should that live? How can the Store be aware of the terms?
With the risk of oversimplifying things: why not make the fact whether a Product is fresh something the product "knows"? A Store (or any other kind of related object) should not have to know how to determine whether a product is still fresh; in other words, the fact that something like FreshSpecification or ProductTypeValidityTerm even exists should not be known to Store. It should simply check Product.IsFresh (or possibly some other name that aligns better with the real world, like ShouldBeSoldBy, ExpiresAfter, etc.). The product could then be aware of how to actually retrieve the ProductTypeValidityTerm by injecting the repository dependency.
It sounds to me like you are externalizing behavior that should be intrinsic to your domain aggregates/entities, eventually leading (again) to an anemic domain model.
Of course, in a more complicated scenario where freshness depends on context (e.g., what's acceptable in a budget store is not deemed worthy of sale at a premium outlet), you'd need to externalize the entire behavior, both from Product and from Store, and create a different type altogether to model this particular behavior.
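A sketch of such a separate type (every name here is illustrative; ExpiresAfter is the hypothetical Product method mentioned above):
public enum StoreCategory { Budget, Premium }

public class FreshnessPolicy
{
    private readonly StoreCategory category;

    public FreshnessPolicy(StoreCategory category)
    {
        this.category = category;
    }

    public bool IsSellable(Product product)
    {
        // A premium outlet demands a safety margin; a budget store does not.
        int safetyMarginDays = category == StoreCategory.Premium ? 3 : 0;
        return product.ExpiresAfter() >= DateTime.Today.AddDays(safetyMarginDays);
    }
}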
Added after comment
Something along these lines for the simple scenario I mentioned: make the FreshSpecification part of the Product aggregate, which allows the ProductRepository (constructor-injected here) to lazy-load it when needed.
public class Product {
    public ProductType ProductType { get; set; }
    public DateTime ProducedOn { get; set; }

    private readonly IProductRepository productRepository;
    private FreshSpecification freshSpecification;

    public Product(IProductRepository productRepository) {
        this.productRepository = productRepository;
    }

    public bool IsFresh() {
        // Lazy-load the specification for this product type on first use.
        // (GetFreshSpecification is an assumed repository method.)
        if (freshSpecification == null)
            freshSpecification = productRepository.GetFreshSpecification(ProductType);
        return freshSpecification.IsSatisfiedBy(ProductType, ProducedOn);
    }
}
The store doesn't know about these internals: all it cares about is whether or not the product is fresh:
public class Store {
    private List<Product> Products = new List<Product>();

    public void AddProduct(Product product) {
        if (product.IsFresh()) {
            Products.Add(product);
        }
    }
}

Soft Deletes (IsHistorical column) with Entity Framework

I'm working with a database where the designers decided to mark every table with an IsHistorical bit column. There was no consideration for proper modeling, and there is no way I can change the schema.
This is causing some friction when developing CRUD screens that interact with navigation properties. I cannot simply take a Product and edit its EntityCollection; I have to write IsHistorical checks manually all over the place, and it's driving me mad.
Additions are also horrible, because so far I've written manual checks to see whether an addition already exists as a soft-deleted record, so that instead of adding a duplicate entity I can just toggle IsHistorical.
The three options I've considered are:
Modifying the T4 templates to include IsHistorical checks and synchronization.
Intercepting deletions and additions in the ObjectContext, toggling the IsHistorical column, and then syncing the object state.
Subscribing to the AssociationChanged event and toggling the IsHistorical column there.
Does anybody have any experience with this or could recommend the most painless approach?
Note: Yes, I know, this is bad modeling. I've read the same articles about soft deletes that you have. It stinks that I have to deal with this requirement, but I do. I just want the most painless method of dealing with soft deletes without writing the same code for every navigation property in my database.
Note #2: LukLed's answer is technically correct, although it forces you into a rather poor man's ORM, graph-less pattern. The problem lies in the fact that now I'm required to rip all the "deleted" objects out of the graph and then call the Delete method on each one. That's not really going to save me much ceremonial coding: instead of writing manual IsHistorical checks, now I'm gathering deleted objects and looping through them.
I am using a generic repository in my code. You could do it like this:
public class Repository<T> : IRepository<T> where T : EntityObject
{
    public void Delete(T obj)
    {
        if (obj is ISoftDelete)
            ((ISoftDelete)obj).IsHistorical = true;
        else
            _ctx.DeleteObject(obj);
    }
}
Your List() method would filter by IsHistorical too.
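For instance, the List() filter could reuse the same interface check (a sketch; it assumes _ctx is the same ObjectContext used by Delete):
public IQueryable<T> List()
{
    ObjectSet<T> set = _ctx.CreateObjectSet<T>();
    if (typeof(ISoftDelete).IsAssignableFrom(typeof(T)))
    {
        // Entity SQL query-builder method, so the filter runs in the database.
        return set.Where("it.IsHistorical = false");
    }
    return set;
}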
EDIT:
ISoftDelete interface:
public interface ISoftDelete
{
    bool IsHistorical { get; set; }
}
Entity classes can easily be marked as ISoftDelete because they are partial; the partial class definition just needs to go in a separate file:
public partial class MyClass : EntityObject, ISoftDelete
{
}
As I'm sure you're aware, there is not going to be a great solution to this problem when you cannot modify the schema. Given that you don't like the repository option (though I wonder if you're not being just a bit hasty to dismiss it), here's the best I can come up with:
Handle ObjectContext.SavingChanges.
When that event fires, trawl through the ObjectStateManager looking for objects in the Deleted state. If they have an IsHistorical property, set it, and change the state of the object to Modified.
This could get tricky when it comes to associations/relationships, but I think it more or less does what you want.
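A sketch of that handler, assuming an EF4 ObjectContext named MyEntities and the ISoftDelete interface from the other answer:
public partial class MyEntities
{
    partial void OnContextCreated()
    {
        SavingChanges += delegate
        {
            // Find soft-deletable entities queued for deletion.
            var deleted = ObjectStateManager
                .GetObjectStateEntries(EntityState.Deleted)
                .Where(entry => entry.Entity is ISoftDelete)
                .ToList();

            foreach (var entry in deleted)
            {
                // Flip the flag and turn the delete into an update.
                ((ISoftDelete)entry.Entity).IsHistorical = true;
                entry.ChangeState(EntityState.Modified);
            }
        };
    }
}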
I also use the repository pattern, with code similar to LukLed's, but I use reflection to see whether the IsHistorical property is there (since it's an agreed-upon naming convention):
public class Repository<TEntityModel> where TEntityModel : EntityObject, new()
{
    public void Delete(TEntityModel entity)
    {
        // See if the entity has an "IsHistorical" flag.
        var historicalProperty = typeof(TEntityModel).GetProperty("IsHistorical");
        if (historicalProperty != null)
        {
            // Perform a soft delete.
            historicalProperty.SetValue(entity, true, null);
        }
        else
        {
            // Perform a real delete.
            EntityContext.DeleteObject(entity);
        }
        EntityContext.SaveChanges();
    }
}
Usage is then simply:
using (var fubarRepository = new Repository<Fubar>())
{
    fubarRepository.Delete(someFubar);
}
Of course, in practice you extend this to allow deletes by passing the PK instead of an instantiated entity, etc.

How can I make a C# decimal match a SQL decimal for EF change tracking?

To avoid touching changeless records in EF, it's important that the original and current entity values match. I am seeing a problem with decimals, where the EF entity holds the SQL representation of the value and that is compared to a full-precision C# decimal.
The debug output from entities with changes detected shows the problem pretty clearly: even though both the entity value and the source data are of type decimal, the values are considered different even though they are logically equal.
How can I ensure that original and current values match when using a C# decimal?
Maybe there is a way to convert the C# decimal into an entity (SQL) decimal before the update?
Another example:
I would expect EF to ignore the truncation that happens because the incoming precision is higher than the SQL scale.
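To illustrate the mismatch (values assumed, with a decimal(18,2) column in mind):
decimal fromDb = 123.46M;    // what SQL Server hands back for decimal(18,2)
decimal incoming = 123.456M; // what the application computed

Console.WriteLine(fromDb == incoming);                    // False: EF flags a change
Console.WriteLine(fromDb == decimal.Round(incoming, 2));  // True: no spurious update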
You could implement a proxy property that handles the conversion from code precision to DB precision:
public class MoneyClass
{
    [Column("Money")]
    public decimal MyDbValue { get; set; } // your existing db-mapped property

    [NotMapped]
    public decimal MyCodeValue // the property to access within your code
    {
        get
        {
            return this.MyDbValue;
        }
        set
        {
            decimal newDbValue = decimal.Round(value, 2);
            if (this.MyDbValue != newDbValue)
            {
                Console.WriteLine("Change! Old: {0}, New: {1}, Input: {2}", this.MyDbValue, newDbValue, value);
                this.MyDbValue = newDbValue;
            }
        }
    }
}

static void Main(params string[] args)
{
    MoneyClass dbObj = new MoneyClass()
    {
        MyCodeValue = 123.456M
    };
    Console.WriteLine(dbObj.MyDbValue);

    dbObj.MyCodeValue = 123.457M; // won't change anything: also rounds to 123.46
    Console.WriteLine(dbObj.MyDbValue);

    dbObj.MyCodeValue = 123.454M; // will change: rounds to 123.45
    Console.WriteLine(dbObj.MyDbValue);

    dbObj.MyCodeValue = 123.46M; // will change: back to 123.46
    Console.WriteLine(dbObj.MyDbValue);
}
This answer is not meant to fix exactly the issue you have, but to work around it.
I suggest putting the logic that decides whether an object needs to be saved in a higher application layer (in that respect, I consider the EF-generated classes low-level objects).
The code that retrieves and stores data could be implemented in a repository class, i.e. a class that manages your data access logic. Your application would then use this repository class rather than the EF code directly, and whether the repository internally uses EF or something else would no longer matter to your application.
If you define an interface for your repository class, you could even replace it easily with some other technology for saving and retrieving data.
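A minimal sketch of such an interface (the entity and method names are illustrative):
public interface ICustomerRepository
{
    Customer GetById(int id);
    IList<Customer> FindBySurname(string surname);
    void Save(Customer customer); // internally decides whether anything changed
}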
See here for an article from Microsoft about the repository pattern.
There is also related information in a question here at Stack Overflow.
I generally would not recommend using the EF-generated classes in normal application code. It might be tempting at first, but it can also cause problems later, as in your case.