Maintainable and extensible navigation properties - entity-framework

Which one is better in terms of pros and cons (maintenance, extensibility)?
1. A Request entity has 3 different references (navigation properties) to Signature.
2. A Request entity has a collection of Signature, where each Signature has a SignatureType.

Model 1
This model pretty much carves your business rules in stone. There can only be three types of signatures and there can only be one each. Any change in these rules requires a database change, which is always a pervasive change.
Apart from that, there's a potential pitfall when you work with disconnected entities that you want to re-attach to a context. If one (or two) of the Signatures are the same (business rules permitting), you must watch out for the exception that an entity with the same key is being attached twice.
Model 2
This gives more freedom (extensibility), but that means that you need coded business rules to enforce the rules I mentioned for Model 1. On the other hand, it would allow more people to approve a request, for example.
Maybe an even better model would be to make SignatureType a flagged enum, so a signature can have multiple types and the signatures don't have to be repeated (again: the business rules permitting). This would avoid the potential multiple key exception I mentioned with Model 1.
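As a rough illustration, a minimal sketch of Model 2 with a flagged enum could look like the following (the enum members and property names are placeholders, not taken from the question):

using System;
using System.Collections.Generic;

[Flags]
public enum SignatureType
{
    None = 0,
    Requester = 1,
    Approver = 2,
    Witness = 4       // placeholder roles; define whatever the business actually needs
}

public class Signature
{
    public int Id { get; set; }
    public SignatureType Type { get; set; }   // one signature can carry several roles at once
    public int RequestId { get; set; }
}

public class Request
{
    public int Id { get; set; }
    public virtual ICollection<Signature> Signatures { get; set; }   // one collection instead of three fixed references
}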
So I would prefer model 2 (with flagged enum), because I like to express business rules in code, not in database constraints.

Related

Are 2 additional tables better than one with metadata?

I have a question about architecture: I have 2 subjects, DocumentLetter and DocumentOther, and both should be approved by managers.
What would be better: to use 2 additional models, DocumentLetterApprove and DocumentOtherApprove, with entity relations, OR one additional table without relations that contains info about the model identity (columns ModelName and ModelID)?
Or another example: attachments for different documents.
Letter and contract are 2 different tables, and each should have its own attachments.
Should I use an additional table for each model (one for letters and one for contracts), or create one table with the fields ModelName and ModelID?
Personally, I would favor keeping the separate entities with the relationships if there is any possibility that the related entities (approvals) could be in any way different depending on what they are applied to. I avoid ambiguously linked tables unless they represent a large one-to-many entity that might be associated with one of a number of other entities.
The problem with using something like a "ParentType" + "ParentId" is that you cannot leverage any form of FK constraint between the related tables. This also means you cannot leverage EF relationships, given there will probably be times when you load one of the documents and want to know whether it is approved, plus details from the approval.
If an Approval for the different document types is expected to be identical then I would sooner declare a common Approval table/Entity and put an ApprovalId on each of the document type tables to establish a many-to-1 relationship from the document to the approval.
If an approval is identical and can form a many to many, then a suitable many-to-many relation table DocumentLetter - DocumentLetterApproval (FKs) - Approval (Approval details) can be employed.
If a Letter approval vs. other approval could be different then: DocumentLetter - DocumentLetterApproval (Approval details)
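As a rough sketch of the first option above (a shared Approval entity referenced from each document type; the property names are illustrative only, not a prescribed schema):

using System;

public class Approval
{
    public int Id { get; set; }
    public string ApprovedBy { get; set; }
    public DateTime ApprovedOn { get; set; }
}

public class DocumentLetter
{
    public int Id { get; set; }
    public int? ApprovalId { get; set; }            // real FK, enforced by the database
    public virtual Approval Approval { get; set; }  // navigable in EF, so it can be Include()d
}

public class DocumentOther
{
    public int Id { get; set; }
    public int? ApprovalId { get; set; }
    public virtual Approval Approval { get; set; }
}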
Design decisions like this usually come from considerations around DRY (Don't Repeat Yourself). What advice I can give is that KISS (Keep It Stupidly Simple) should trump DRY, and that DRY should only apply to logic/structure that is proven to be identical. (not merely expected to be identical, or worse, expected to be similar) DRY should be a re-factoring consideration for constant improvement, not an up-front design decision. Coding for DRY too early ends up costing you time when you paint yourself into corners. By keeping code fluid these relationships can be proven, then if they are proven to be identical, re-factored into a single entity. Time is still spent re-factoring, but re-factoring to make code structure better rather than making code worse when having to work around design assumptions.
An example where I might consider an ambiguous, loosely linked table would be something like File Attachments. I might have several entities that can hold references to 1 or more attachments. Attachments are not something I would need to link to often, but rather through an explicit action for which I could fire off an additional query anyway, since I'm not about to pre-load attachment details when loading a document. In this case an attachment table might have a ParentType and ParentId indexed so that I can quickly get details for a particular document or other entity. I would never try to do something like Context.Documents.Include(x => x.Attachments) or the like; there would be no such reference available. Attachments would always be accessed for a single document, so I would resort to Context.Attachments.Where(x => x.ParentType == ParentTypes.DocumentLetter && x.ParentId == documentLetterId).ToList();
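For contrast, a sketch of the loosely linked attachment table described above (the ParentTypes values and property names are made up for illustration):

public enum ParentTypes { DocumentLetter = 1, Contract = 2 }

public class Attachment
{
    public int Id { get; set; }
    public ParentTypes ParentType { get; set; }   // no FK constraint is possible on this pair
    public int ParentId { get; set; }
    public string FileName { get; set; }
}

// Always fetched explicitly for a single parent, never eagerly loaded via navigation properties:
// var attachments = context.Attachments
//     .Where(x => x.ParentType == ParentTypes.DocumentLetter && x.ParentId == documentLetterId)
//     .ToList();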
I have experience working on systems that were designed solely with these types of ambiguously linked tables. They are not only extremely slow as they scale to any reasonable size, but they are also extremely error prone as systems evolve and the nature of the relationships change. Records have a tendency to get out of sync with the expected rules.

Structuring nested rest API

I'm writing an API with Spring Boot, trying to keep it RESTful, but the structure is quite nested. So say I have:
/api/examboard/{ebid}/qualification/{qid}/subject/{sid}/module/{mid}/
I have a controller for every noun that will take in all the IDs. The problem with this is that I don't really need an ebid or a qid for modules; they only really need to be concerned with subjects most of the time. The mapping between them all is quite simple: an examboard will have many qualifications, a qualification will have many subjects, etc.
Now the problem is, say I go for a simpler API design where I only need the parent ID, so the Subject controller will also have:
api/subject/{sid}/module
then I need to include multiple services in my controller based on the way JPA works, as I need to include SubjectEntity-based calls and ModuleEntity-based calls. However, I want to maintain a one-to-one relationship between my controllers/services and services/repositories. This is why I opted for the longer URL as mentioned above, but it does seem like overkill. Does anyone have any advice on how I should structure an API like this? Most examples are quite small and don't really fit.
Without knowing more about your models and the relations between them, this answer will have to stay a bit diffuse.
First of all - "it depends". I know, but it really does. The way you should design an API depends heavily on your use cases that will define required access patterns. Do you often need all modules for a subject? Then introduce /subjects/{sid}/modules, if you need the details for a module of a subject in a qualification in an examboard - by all means have a /examboards/{ebid}/qualifications/{qid}/subjects/{sid}/modules/{mid}
As you say there are many relations between your entities. That is fine, but it does not mean that you need your API to capture each of these relations in a dedicated endpoint. You should distiguish between retrieving and modifying entities here. Find below examples for certain operations you might want to have (not knowing your models, this may not apply - let's consider this an illustration)
Retrieve qualifications for an examboard
GET /examboards/{ebid}/qualifications plain and simple
GET /qualifications?ebid={ebid} if you feel you might need sophisticated filtering later on
or create a new qualification for an examboard
POST /examboards/{ebid}/qualifications with the details submitted in the body
POST /qualifications with the details submitted in the body and making the associated examboard ebid part of the submitted data
or update an existing qualification
PUT /qualifications/{qid} (if this operation is idempotent)
POST /qualifications/{qid} (if it should not be considered idempotent)
or delete qualifications
DELETE /qualifications/{qid} deletes entities, cascade-deletes associations
DELETE /examboards/{ebid}/qualifications clears all qualifications from an examboard, without actually deleting the qualification entities
There are certainly more ways to let an API do all these things, but this should demonstrate that you need to think of your use cases first and design your API around them.
Please note the pluralisation of collection resources in the previous examples. This comes down to personal preference, but I tend to follow the argument of Sam Ruby in RESTful Web Services (available as PDF) that collections should be first-class citizens in an API.
Usually, there should not be a reason to have 1:1:1 relationships between controllers, services and repositories. Usually, this is not even possible. Now, I don't know the reason why you might want to do this, but following through with it will force you to put a lot of logic into your database queries and models. While this (depending on your setup and skills) may or may not be easily testable, it certainly shifts the required test types from unit tests (simpler, usually faster, more fine-grained) to integration tests (more setup, more complex, usually slower), when instead of having the bulk of your business logic in your services you put it into many joins and subselects in your repositories.
I will only address your REST API structure question.
As you already pointed out
The problem with this is that I don't really need an ebid or a qid for modules, they only really need to be concerned with subjects most of the time
You need to think of your entities as resources. If your entity can stand on its own, give it its own top-level resource. If instead your entity exists only as part of another entity, build a subresource below its parent. This should correspond to the association types aggregation and composition in your object model design.
Otherwise every entity that is part of a many relationship should also be accessible via a subresource on the other side of the relationship.
As I understand it, you have a OneToMany relationship between examboard and qualification, so we get:
api/examboards/{eid}/qualifications
api/qualifications/{qid}/examboard
You could also remove the examboard subresource and include it in the qualification response.
For ManyToMany relationships you need two subresources:
api/foos/{fid}/bars
api/bars/{bid}/foos
And another resource to manipulate the relationship itself.
api/foosToBars/{fid}+{bid}
Or likewise.

ORM Entities vs. Domain Entities under Entity Framework 6.0

I stumbled upon the following two articles First and Second in which the author states in summary that ORM Entities and Domain Entities shouldn't be mixed up.
I face exactly this problem at the moment as I code with EF 6.0 using the Code First approach. I use the POCO classes as entities in the EF as well as my domain/business objects. But I find myself frequently in the situation where I define a property as public or a navigation property as virtual only because the EF Framework forces me to do so.
I don't know what to take as the bottom line of the two articles. Should I really create, for example, a CustomerEF class for the Entity Framework and a CustomerD class for my domain, and then create a repository which consumes a CustomerD, maps it to a CustomerEF, does some queries, and then maps the received CustomerEF back to a CustomerD? I thought EF was all about mapping my domain entities to the data.
So please give me some advice. Am I overlooking an important thing that EF is able to provide me with? Or is this a problem which cannot be completely solved by EF? In the latter case, what is a good way to manage this problem?
I agree with the general idea of these posts. An ORM class model is part of a data access layer first and foremost (even if it consists of so-called POCOs). If any conflict of interests arises between persistence and business logic (or any other concern), decisions should always be made in favor of persistence.
However, as software developers we always have to balance between purism and pragmatism. Whether or not to use the persistence model as a domain model depends on a number of factors:
The size/coherence of the development team. When the whole team knows that properties can be public just because of ORM requirements, but should not be set all over the place, it may not be a big deal. If everybody knows (and obeys) that an ID property is not to be used in business logic, having IDs may not be a big deal. A scattered, inexperienced or undisciplined team may need more stringent segregation of code.
The overlap between business logic concerns and persistence concerns. Object oriented design thrives when a class model sticks to SOLID principles. But these principles are not necessarily at odds with persistence concerns. I mean that although the concerns are different, in the end their resultant requirements may be quite similar. For instance, both concerns may require valid object state and correct associations.
There can be use cases, however, in which objects temporarily need to be in a state that absolutely shouldn't be stored. This may be a reason to work with dedicated domain classes. Another reason may be that the entity model just can't fulfill the best segmentation of responsibilities. For instance, a business process "blacklisting customer" may require data that is scattered over so many entity objects that new domain classes must be designed that can encapsulate the data and the methods working on them. In other words: doing this by entities would violate the Tell Don't Ask principle.
The need for layering. For instance, if the data access layer targets different database vendors it may have to consist of interchangeable parts that are vendor-specific (e.g. to account for subtle differences in data types between Oracle and Sql Server or to exploit vendor-specific features). Using the persistence model as domain model would probably bleed vendor-specific implementations into the business logic. That would be really bad. There the data access layer should be precisely that, a layer.
(Very trivial) The amount of data. Creating objects takes time and resources. When "many" objects are involved in a business case it may just be too expensive to build both entity objects and domain objects.
And more, undoubtedly.
So I would always try to be a pragmatist. If entity classes do a decent job, go for it. If the mismatch is too large, create a business domain for appropriate parts of the business logic. I would not slavishly follow a (any) design pattern just because it is a good pattern. Contrary to what is said in the post, it requires a lot of maintenance to map an entity model onto a business model. When you find yourself creating myriads of business classes that are almost identical to entity classes it's time to rethink what you're doing.
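To make that mapping overhead concrete, here is a minimal sketch along the lines the question describes, reusing the CustomerEF/CustomerD names from the question (the properties and the mapper are invented for illustration); code like this has to be written and kept in sync for every entity:

// Persistence model: shaped by EF (public setters, parameterless construction)
public class CustomerEF
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Domain model: shaped by business rules
public class CustomerD
{
    public int Id { get; private set; }
    public string Name { get; private set; }
    public CustomerD(int id, string name) { Id = id; Name = name; }
}

public static class CustomerMapper
{
    public static CustomerD ToDomain(CustomerEF source) =>
        new CustomerD(source.Id, source.Name);

    public static CustomerEF ToEntity(CustomerD source) =>
        new CustomerEF { Id = source.Id, Name = source.Name };
}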

On observing an execution tree of interdependent models in MVC

I've developed on the Yii Framework for a while now (4 months), and so far I have encountered some issues with MVC that I want to share with experienced developers out there. I'll present these issues by listing their levels of complexity.
[Level 1] CR (create/update) form. First off, we have a lot of forms. Each form itself is a model, so each has some validation rules, some attributes, and some operations to perform on the attributes. In a lot of cases, each of these forms does both updating and creating records in the db using a single active record object.
-> So at this level of complexity, a form has to
when opened,
be able to display the db-friendly data from the db in a human-friendly way
be able to display all the form fields with the attributes of the active record object. Adding, removing, altering columns from the db table has to affect the display of the form.
when saving, be able to format the human-friendly data into db-friendly data before persisting it
when validating, be able to perform the basic validations enforced by the active record object; it also has to perform other validations to fulfill some business rules.
when validation fails, be able to roll back changes made to the attributes as well as changes made to the db, and present the user with their originally entered data.
[Level 2] Extended CR form. A form that can perform creation/update of records from different tables at once. Not just that: whether a form creates/updates one of its records can sometimes depend on other conditions (more business rules), so a form can sometimes update records in tables A and B but not D, and sometimes update records in A and D but not B.
-> So at this level of complexity, we see a form has to:
be able to satisfy [Level 1]
be able to conditionally create/update certain records, and conditionally create/update certain columns of certain records.
[Level 3] The Tree of Models. The role of a form in an application is, in many ways, a port that lets users interact with your application. To satisfy requests, this port will interact with many other objects which, in turn, interact with many more objects. Some of these objects can be seen as models. Active Record is a model, but a Mailer can also be a model, and so is a RobotArm. These models use one another to satisfy a user's request. Each model can perform its own operation, and the whole tree has to be able to roll back any changes made in the case of error/failure.
Has anyone out there come across or been able to solve these problems?
I've come up with many ideas, like encapsulating model attributes in ModelAttribute objects to handle their existence throughout the client, server, and db tiers.
I've also thought we should give the tree of models an Observer to observe and notify the observed models to roll back changes when errors occur. But what if multiple observers can exist? What if a node uses its parent's observer but gives its children other observers?
Engineers, developers, Rails, Yii, Zend, ASP, JavaEE, any MVC guys, please join this discussion for the sake of science.
--Update to teresko's response:---
#teresko I actually intended to incorporate the services into the execution inside a unit of work and have the Unit of Work not worry about new/updated/deleted. Each object inside the unit of work will be responsible for its state and be required to implement its own commit() and rollback(). Once an error occurs, the unit of work will roll back all changes from the newest registered object to the oldest, since we're not only dealing with a database; we can have mailers, publishers, etc. If, on the other hand, the tree executes successfully, we call commit() from the oldest registered object to the newest. This way the mailer can save the mail and send it on commit.
Using a data mapper is a great idea, but we still have to make sure the columns in the db match the data mapper and the domain object. Moreover, an extended CR form, or a model whose attributes depend on other models, has to match their attributes in terms of validation and datatype. So maybe an attribute can be an object that is shipped from model to model? An attribute can also tell whether it has been modified, what validation should be performed on it, and how it can be made human-friendly, application-friendly, and db-friendly. Any update to the db schema will affect this attribute, thereby throwing exceptions that require developers to make changes to the system to satisfy the change.
The cause
The root of your problem is misuse of the active record pattern. AR is meant for simple domain entities with only basic CRUD operations. When you start adding a large amount of validation logic and relations between multiple tables, the pattern starts to break apart.
Active record, at its best, is a minor SRP violation, for the sake of simplicity. When you start piling on responsibilities, you start to incur severe penalties.
Solution(s)
Level 1:
The best option is to separate the business and storage logic. Most often this is done by using domain objects and data mappers:
Domain objects (in other materials also known as business objects or domain model objects) deal with validation and specific business rules, and are completely unaware of how (or even whether) the data in them is stored and retrieved. They also let you have objects that are not directly bound to storage structures (like DB tables).
For example: you might have a LiveReport domain object, which represents current sales data. But it might have no specific table in the DB. Instead it can be serviced by several mappers that pool data from Memcache, an SQL database and some external SOAP service. And the LiveReport instance's logic is completely unrelated to storage.
Data mappers know where to put the information from domain objects, but they do not perform any validation or data integrity checks. They can, though, handle exceptions that come from low-level storage abstractions, like the violation of a UNIQUE constraint.
Data mappers can also perform transactions, but if a single transaction needs to span multiple domain objects, you should be looking to add a Unit of Work (more about it below).
In more advanced/complicated cases data mappers can interact with and utilize DAOs and query builders. But that is more for situations when you aim to create ORM-like functionality.
Each domain object can have multiple mappers, but each mapper should work only with a specific class of domain objects (or a subclass of one, if your code adheres to the LSP). You should also recognize that a domain object and a collection of domain objects are two separate things and should have separate mappers.
Also, each domain object can contain other domain objects, just like each data mapper can contain other mappers. But in the case of mappers it is much more a matter of preference (I dislike it vehemently).
Another improvement that could alleviate your current mess would be to prevent application logic from leaking into the presentation layer (most often the controller). Instead you would largely benefit from using services that contain the interaction between mappers and domain objects, thus creating a public-ish API for your model layer.
Basically, with services you encapsulate complete segments of your model that can (in the real world, with minor effort and adjustments) be reused in different applications. For example: Recognition, Mailer or DocumentLibrary would all be services.
Also, I should note that not all services have to contain domain objects and mappers. A quite good example would be the previously mentioned Mailer, which could be used either directly by a controller, or (what's more likely) by another service.
Level 2:
If you stop using the active record pattern, this becomes quite a simple problem: you need to make sure that you save only the data from those domain objects which have actually changed since the last save.
As I see it, there are two ways to approach this:
Quick'n'Dirty
If something changed, just update it all ...
The way that I prefer is to introduce a checksum variable in the domain object, which holds a hash of all the domain object's variables (with the exception, of course, of the checksum itself).
Each time the mapper is asked to save a domain object, it calls a method isDirty() on that domain object, which checks whether the data has changed. Then the mapper can act accordingly. This can also, with some adjustments, be used for object graphs (if they are not too extensive, in which case you might need to refactor anyway).
Also, if your domain object actually gets mapped to several tables (or even different forms of storage), it might be reasonable to have several checksums, one for each set of variables. Since mappers are already written for specific classes of domain objects, it would not strengthen the existing coupling.
For PHP you will find some code examples in this answer.
Note: if your implementation is using DAOs to isolate domain objects from data mappers, then the logic of checksum-based verification would be moved to the DAO.
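The question is about Yii/PHP, so treat this only as an illustration of the idea in another language: a minimal C# sketch of the checksum approach, where the property names and the naive checksum are placeholders:

public class Customer
{
    public string Name { get; set; }
    public string Email { get; set; }

    private string _checksum;

    // Called by the mapper right after loading or saving, to remember the persisted state.
    public void MarkClean() => _checksum = ComputeChecksum();

    // The mapper asks this before deciding whether an UPDATE is needed at all.
    public bool IsDirty() => _checksum != ComputeChecksum();

    // Naive "hash": a concatenation of all persisted fields (except the checksum itself).
    private string ComputeChecksum() => Name + "|" + Email;
}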
Unit of Work
This is the "industry standard" for your problem and there is a whole chapter (11th) dealing with it in PoEAA book.
The basic idea is this: you create an instance that acts like a controller (in the classical, not the MVC, sense of the word) between your domain objects and data mappers.
Each time you alter or remove a domain object, you inform the Unit of Work about it. Each time you load data into a domain object, you ask the Unit of Work to perform that task.
There are two ways to tell Unit of Work about the changes:
caller registration: object that performs the change also informs the Unit of Work
object registration: the changed object (usually from a setter) informs the Unit of Work that it was altered
When all the interaction with the domain objects has been completed, you call the commit() method on the Unit of Work. It then finds the necessary mappers and stores all the altered domain objects.
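A bare-bones C# sketch of the caller-registration variant (the interface and method names are invented; a real implementation would locate a suitable mapper per domain object type rather than use a single one):

using System.Collections.Generic;

public interface IDataMapper
{
    void Save(object domainObject);
    void Delete(object domainObject);
}

public class UnitOfWork
{
    private readonly IDataMapper _mapper;
    private readonly List<object> _dirty = new List<object>();
    private readonly List<object> _removed = new List<object>();

    public UnitOfWork(IDataMapper mapper) { _mapper = mapper; }

    // Caller registration: whoever changes an object tells the Unit of Work about it.
    public void RegisterDirty(object domainObject) { _dirty.Add(domainObject); }
    public void RegisterRemoved(object domainObject) { _removed.Add(domainObject); }

    // Finds the necessary mapper(s) and persists everything that changed, in one go.
    public void Commit()
    {
        foreach (var item in _dirty) _mapper.Save(item);
        foreach (var item in _removed) _mapper.Delete(item);
        _dirty.Clear();
        _removed.Clear();
    }
}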
Level 3:
At this stage of complexity the only viable implementation is to use Unit of Work. It also would be responsible for initiating and committing the SQL transactions (if you are using SQL database), with the appropriate rollback clauses.
P.S.
Read the "Patterns of Enterprise Application Architecture" book. It's what you desperately need. It also would correct the misconception about MVC and MVC-inspired design patters, that you have acquired by using Rails-like frameworks.

Business logic in Entity Framework POCOs using partial classes?

I have business logic that could either sit in a business logic/service layer or be added to new members of an extended domain class (EF T4 generated POCO) that exploits the partial class feature.
So I could have:
a) bool OrderBusiness.OrderCanBeCancelledOnline(Order order) .. or (IOrder order)
or
b) bool order.CanBeCancelledOnline() .. i.e. the order itself knows whether or not it can be cancelled.
For me option b) is more OO. However option a) allows more complex logic to be applied e.g. using other domain objects or services.
At the moment I have a mix of both and this doesn't seem elegant.
Any guidance on this would be much appreciated!
The key thing about OO for me is that you tell objects to do things for you. You don't pull attributes out and make the decisions yourself (in a helper class or other).
So I agree with your assertion about option b). Since you require additional logic, there's no harm in performing an operation on the object whilst passing references to additional helper objects such that they collaborate. Whether you do this at the time of the operation itself, or pre-populate your order object with those collaborating entities is very much dependent upon your current situation.
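As a rough sketch of what that can look like (the Order properties and the IOrderCancellationPolicy collaborator are hypothetical, just to show the shape of option b):

public enum OrderStatus { Pending, Shipped, Cancelled }

public interface IOrderCancellationPolicy
{
    bool AllowsOnlineCancellation(Order order);
}

public class Order
{
    public OrderStatus Status { get; set; }

    // Option b): the order itself decides, consulting a collaborator the caller passes in.
    public bool CanBeCancelledOnline(IOrderCancellationPolicy policy)
    {
        return Status == OrderStatus.Pending && policy.AllowsOnlineCancellation(this);
    }
}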
You can also use extension methods on the POCOs to wrap your BLL methods.
That way you can keep using your current BLLs.
In C#, something like:
public static class OrderBusiness   // everything must be static: the class and the method
{
    public static bool CanBeCancelledOnline(this Order order)   // notice the 'this' modifier
    {
        // logic ...
        return true;   // placeholder result; the actual rule goes above
    }
}
And now you can do order.CanBeCancelledOnline()
This is likely to depend on the complexity of your application and does require some judgement that comes with experience. The short answer is that if your project is anything more than a pretty simple one then you are best off putting your logic in the domain classes.
The longer answer:
If you place your logic within a service layer you are effectively following the transaction script pattern, and ending up with an anaemic domain model. This can be a valid route, but it generally works best with simple and small projects. The problem is that the transaction script layer (your service layer) becomes more complicated to maintain as it grows.
So the alternative is to create a rich domain model that contains the logic within it. Keeping logic together with the class it applies to is a key part of good OO design, and in a complex project pretty essential. It usually requires a bit more thought and effort initially, which is why for very simple projects people sometimes use the transaction script pattern.
If you are unsure about which to go with it is not normally a too difficult job to refactor your logic to move it from your service layer to the domain, but you need to make the call early enough that the job is not too large.
Contrary to one of the answers, using POCO classes does not mean you can't have business logic in your domain classes. POCO is about not applying framework specific structures to your domain classes, such as methods and interfaces specific to a particular ORM. A class with some functions to apply business logic is clearly still a Plain-Old-CLR-Object.
A common question, and one that is partially subjective.
IMO, you should go with Option A.
POCO's should be exactly that, "plain-old-CLR" objects. If you start applying business logic to them, they cease to be POCO's. :)
You can certainly put your business logic in the same assembly as your POCOs; just don't add methods directly to them. Create helper classes to facilitate business rules. The only things your POCOs should have are properties mapping to your domain model.
It really depends on how complex your business rules are. In our application, the business rules are very straightforward, so we use Option A.
But if your business rules start to get messy, consider using the Specification Pattern.
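For reference, a tiny sketch of the Specification Pattern applied to the example from this question (Order is the question's entity; the Pending-status rule is made up purely for illustration):

public interface ISpecification<T>
{
    bool IsSatisfiedBy(T candidate);
}

public class OrderCanBeCancelledOnlineSpec : ISpecification<Order>
{
    public bool IsSatisfiedBy(Order order)
    {
        // each business rule lives in its own small, independently testable class
        return order.Status == OrderStatus.Pending;
    }
}

// Usage: var canCancel = new OrderCanBeCancelledOnlineSpec().IsSatisfiedBy(order);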