I have been reading up on multiple PHP frameworks, especially the Zend Framework, but I am getting confused about the proper way to go forward.
Zend Framework does not use ActiveRecord but instead uses the Table Data Gateway and Row Data Gateway patterns, with a DataMapper to map the contents of the Row Data Gateway to the model, because ActiveRecord breaks down when your models don't have a 1:1 mapping to your database tables. There is an example of this in the Zend Quickstart guide.
To me, their example looks very bloated with a ton of getters and setters all over the place. I came across various blog posts about Domain Driven Design arguing that using so many getters and setters is bad practice because it exposes all the inner model data to the outside, so it has no advantage over public attributes. Here is one example.
My question: If you remove those getters and setters, how will you render your views? At some point the data has to hit the view so you can actually show something to the user. Following the DDD advice seems to break the separation between M and V in MVC. Following the MVC and Zend example seems to break DDD and leaves me typing up a whole lot of getters, setters and DataMappers for all my models. Aside from being a lot of work it also seems to violate DRY.
I would really appreciate some (links to) good examples or more information about how it all fits together. I'm trying to improve my architecture and design skills here.
Using Value Objects, you can eliminate some of those public setter methods. Here's a description of the difference between Entity and Value Objects. Value Objects are immutable and often tied to an Entity. If you pass all values in with the constructor, you don't need to set these properties from external code.
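As a rough sketch of what that can look like (the Money class here is purely illustrative, not something from the Zend examples):

class Money
{
    private $amount;
    private $currency;

    // all state is supplied once, through the constructor
    public function __construct($amount, $currency)
    {
        $this->amount = $amount;
        $this->currency = $currency;
    }

    // read-only accessors; there are no setters, so the object is immutable
    public function getAmount()
    {
        return $this->amount;
    }

    public function getCurrency()
    {
        return $this->currency;
    }
}

An Entity that needs a different amount gets a new Money instance instead of mutating the old one.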
Something extra, not directly related to an answer, but more focused on DDD:
(Disclaimer: The only thing I know about the Zend Framework is what I read in the linked article.) The Zend Framework is using DataMappers instead of Repositories. Is this really DDD-ish? Well, Fowler's interpretation of a Repository might say no. However, Eric Evans states that a DDD Repository can be very simple. At its simplest, a Repository is a DataMapper (See DDD book). For something more complex and still DDD, see the Fowler article. DDD has a conceptual Repository that may differ from the pattern definition.
I urge you to continue reading about Domain-Driven Design. I think there's a flaw in the assumption that getters and setters violate DDD. DDD is about focusing on the domain model and best practices to accomplish that. Accessors are just a minor detail.
You don't need to implement all the getters/setters; you can use __get() and __set(). What's the problem then?
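For example, a minimal sketch (the backing array and property names are just placeholders):

class Model
{
    private $data = array();

    // invoked when reading an inaccessible property, e.g. $model->title
    public function __get($name)
    {
        return isset($this->data[$name]) ? $this->data[$name] : null;
    }

    // invoked when writing an inaccessible property, e.g. $model->title = 'Foo'
    public function __set($name, $value)
    {
        $this->data[$name] = $value;
    }
}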
From my reading of the post, the question is more philosophical rather than practical.
I don't have the time to write in depth, but here are my two cents. While I agree that you want to limit the number of get and set methods because a class should hide its internals, you also need to take into account that Java and PHP are different tools and have different purposes. In the web environment your classes are built and torn down with each request, so the code you write should not depend on huge classes. In the article you pointed out, the author suggests placing the view logic in the class. This probably does not make sense on the web, since I will likely want to present the view in multiple formats (RSS, HTML, etc.). Using accessor methods (get & set) is therefore a necessary evil. You still want to use them thoughtfully so that you don't shoot yourself in the foot. The key is to have your classes do the work for you instead of trying to force them to do the work externally. By accessing your properties with a method instead of directly, you hide the internals, which is what you want.
Again, this post could use some examples, but I don't have the time right now.
Can someone else provide a few examples of why accessor methods aren't evil?
There are two approaches here: what I call the "tell, don't ask" approach, and the ViewModel/DTO approach.
Essentially the question revolves around what the "model" in your view is.
Tell, don't ask requires that the only way an object can be externalized is by the object itself. In other words, to render an object you would have a render method, but that render method would need to talk to an interface.
Something like this:
class DomainObject {
    // ...

    public function render(DomainObjectRenderer $renderer) {
        // hand the object's internal state ("the gory details") to the renderer
        return $renderer->renderDomainObject($this->theGoryDetails);
    }
}
In the context of Zend Framework, you can subclass Zend_View and have your subclass implement this interface.
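Roughly like this, as a sketch only, with the interface taken from the example above and a made-up view script name:

interface DomainObjectRenderer
{
    public function renderDomainObject(array $details);
}

class My_View extends Zend_View implements DomainObjectRenderer
{
    public function renderDomainObject(array $details)
    {
        // expose the details as view variables and render a view script for them
        $this->assign($details);
        return $this->render('domain-object.phtml');
    }
}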
I've done this before, but it's a bit unwieldy.
The second option is to convert your domain model into a ViewModel object, which is like a simplified, flattened-out, "stringed out" view of your data, customized for each specific view (with one ViewModel per view), and on the way back, convert the POST data into an EditModel.
This is a very popular pattern in the ASP.NET MVC world, but it's also similar to the classic "DTO" pattern used to transfer data between "layers" in an application. You would need to create mappers to do the dirty work for you (not unlike a DataMapper, actually). In PHP 5.3 you can use reflection to read private properties, so your DomainObject doesn't even need to expose itself either!
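As a sketch of that mapping step in PHP (the Product/ViewModel names and properties are invented for illustration):

class ProductViewModelMapper
{
    // build a flat, view-specific object from the domain object's private state
    public function map($product)
    {
        $viewModel = new stdClass();
        $reflection = new ReflectionObject($product);

        $name = $reflection->getProperty('name');
        $name->setAccessible(true); // PHP 5.3+: read the private property directly
        $viewModel->name = $name->getValue($product);

        $price = $reflection->getProperty('price');
        $price->setAccessible(true);
        $viewModel->price = number_format($price->getValue($product), 2); // "stringed out" for the view

        return $viewModel;
    }
}

An EditModel mapper would do the reverse on the way back in, copying validated POST data onto the domain object.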
Implementing getters and setters has two advantages, in my eyes:
You can choose which properties to make public, so you don't necessarily have to expose all of the model's internals
If you use an IDE with autocomplete, all the available properties will be a TAB away when you start typing "get" or "set"—this alone is reason enough for me.
Related
I am new to domain-driven design and trying to learn and implement it in my project. My project structure so far is similar to this:
Maintainance folder
  Maintainance.Data (class library)
  Maintainance.Domain (class library)
  Maintainance.Domain.Tests (test project)
MovieBooking folder
  MovieBooking.Data (class library)
  MovieBooking.Domain (class library)
  MovieBooking.Domain.Tests (test project)
SharedKernel
  Common things
Web Application
  MovieBooking MVC web application (which has a reference to MovieBooking.Domain)
In the Maintainance bounded context I am keeping all the CRUD/GetAll type things for the Movie, Country, Category and Subcategory entities in the Maintainance DbContext.
Now, in the MovieBooking data layer, I will also need to use these entities (mostly to display a name or fill dropdowns in a view; only a subset is needed - not all properties, just a few like Id and Name).
There are a few ways I can access these entities in the MovieBooking bounded context:
Via web services - create a web API for common entities like Movie, Country, Category and Subcategory, and call that web API in the web project (to fill dropdowns or get names from entities).
Via a reference context (a separate DbContext) - configure a DbSet and then map a database view (with only the required fields) to that DbSet.
Example:
modelBuilder.Entity<Movie>().ToTable("ViewName");
For (1), it could be a long-term implementation solution for me.
For (2), I would have to create a view (with only the few required properties) for each table, and that will drastically increase the number of views in my DB, as this is an enterprise-level application.
Is there any other way I can achieve this? Is there anything in DDD I am missing that I should look at?
Option 2, while it will save you time, is actually a very bad idea from the DDD perspective, as it allows for violations of the transactional boundary guarantees that each aggregate is meant to enforce/represent.
Option 1 seems the better option, although there is still quite a bit of wiggle room for interpretation based on your brief description of your proposed solution. If I understood correctly, it is generally recommended to follow the below:
Do not expose your aggregate state directly, since this exposes internals and increases coupling. Simply create meaningful DTOs and use something like AutoMapper to map your aggregates to DTOs easily and with little effort before sending them over.
Have a duplicate of the DTO definition in your client. This will reduce coupling and allow for easier deployments.
I strongly recommend reading the DDD orange book although I have to say that I cannot recall specifically on which chapter this is discussed. You will also benefit a lot by reading about hexagonal architecture (and I would search for that term in the orange book to find more info about your question).
There is actually one alternative that I can think of: if you're publishing events from your BC's you can create a workflow to translate the domain events to "public" events and then in the other BC listen for the public events that you need to and store the data that you need somewhere inside there. The difficulty of this ranges from very easy to quite problematic depending on your infrastructure. Be aware that it is not a very good idea to re-use your domain events for transmitting data to other BC's since this closely couples the two BC's.
I hope this helps. Please do not hesitate to elaborate if I did not understand the question well enough.
So, I'm working on new software, but I have no choice but to brownfield the database. I would like to use Entity Framework where it makes sense.
Here's my dilemma:
Since the tables are very wide, and I can't change this, I will probably make heavy use of projection to limit the width of the datasets that I query.
I do want to make use of navigation properties where it makes sense
From what I've seen, a lot of people use a model where there is a single DbContext class for the whole project.
So,
I'm weighing these pros and cons, and I'm wondering what the established best practices might be:
Use 1 DbContext.
There could be A LOT of "pollution" here, with bunches of projections of the data inside of the 1 context class. This sounds like it could become a maintenance nightmare.
Don't make my projections DbSets at all - just make them plain old objects and select new MyProject {..} into them.
This offers the benefit of keeping my projections in module-specific assemblies and namespaces, but now I get NO navigation properties, lazy loading, etc.
Be evil?? and use multiple DbContexts?
I'm not really sure what the maintenance story looks like here, but I'm kind of starting to lean in this direction. My biggest problem with it is that it feels like I'm swimming against the current -- not many people seem to do this, but for a large system, it seems like it could be the best option.
Thoughts?
I think you must use POCOs or DTOs for data transfer between the different layers of the application, and use ViewModels to send data to the view.
Consider using the Repository pattern and Unit of Work to get a better, more efficient architecture in this scenario. Limit the use of navigation properties to the repository layer; otherwise they make the entities heavy when you transfer them across layers (use POCOs or DTOs instead).
If you are doing as above, then I do not think using multiple DbContexts would give you any benefit. Thanks.
I am struggling to understand the correct usage of models. Currently I extend Db_Table directly and declare all the business logic there. I know this is not the correct way to do it.
One solution would be to use the Doctrine ORM, but that has a learning curve, and all the components I currently use (paginator and auth) would need to be rewritten. Doctrine 1 also adds another dozen classes which need to be loaded.
The cleanest implementation I have seen so far is to use Data Mapper classes between the so-called model and the DbTable. I haven't implemented this yet, as it seems to head towards writing another ORM, but an example for an SQL table User could be something like this:
a class with setters, getters and business logic: /model/User.php
a data mapper: /model/mapper/UserMapper.php - the functionality is basically all the update and save actions (see the sketch below)
the data source: /model/DbTable/User.php, which extends Db_Table_Abstract
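A rough sketch of what I have in mind for the mapper, loosely following the Quickstart style (the column names and helper methods are only illustrative):

class UserMapper
{
    private $dbTable;

    public function __construct(Zend_Db_Table_Abstract $dbTable)
    {
        $this->dbTable = $dbTable;
    }

    // persist the model through the table gateway
    public function save(User $user)
    {
        $data = array(
            'email' => $user->getEmail(),
            'name'  => $user->getName(),
        );

        if (null === $user->getId()) {
            $user->setId($this->dbTable->insert($data));
        } else {
            $this->dbTable->update($data, array('id = ?' => $user->getId()));
        }
    }
}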
The problems are with the relationships between models.
I have found it beneficial to not have my models extend Db_Table, but to use composition instead. That means my model 'has a' Db_Table rather than 'is a' Db_Table.
That way I find it much easier to reference multiple tables in the same model, which is a common requirement. This is enough for a simple project. I am currently developing a more complex application and have used the Data Mapper pattern and have found that it has simplified my code more than I would have believed.
Specifically, I have created a class which provides all access to the database and exposes methods such as getUser() etc. That way, if the DB changes, or my client wants something daft like storing records in XML, or we split the servers or something, I only have to rewrite one class.
Again, my models do not extend this class, but have an instance of it assigned as a property during construction.
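As a sketch of that arrangement (the class and method names are made up for illustration):

class DataAccess
{
    private $table;

    public function __construct(Zend_Db_Table_Abstract $table)
    {
        $this->table = $table;
    }

    // the single point of access to the database
    public function getUser($id)
    {
        return $this->table->find($id)->current();
    }
}

class UserModel
{
    private $dataAccess;

    // the model 'has a' data-access object rather than 'is a' Db_Table
    public function __construct(DataAccess $dataAccess)
    {
        $this->dataAccess = $dataAccess;
    }

    public function load($id)
    {
        return $this->dataAccess->getUser($id);
    }
}

If the storage ever changes (XML, a different server, etc.), only DataAccess has to be rewritten.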
I would say the 'correct' way depends on the situation. Following the YAGNI and KISS principles, it is not good to over-complicate your model setup unless you really believe that it will benefit you in the long run.
What is the application you are developing? How is your current setup of extending Db_Table holding you back?
I have business logic that could either sit in a business logic/service layer or be added to new members of an extended domain class (EF T4 generated POCO) that exploits the partial class feature.
So I could have:
a) bool OrderBusiness.OrderCanBeCancelledOnline(Order order) .. or (IOrder order)
or
b) bool order.CanBeCancelledOnline() .. i.e. it is the order itself knows whether or not it can be cancelled.
For me option b) is more OO. However option a) allows more complex logic to be applied e.g. using other domain objects or services.
At the moment I have a mix of both and this doesn't seem elegant.
Any guidance on this would be much appreciated!
The key thing about OO for me is that you tell objects to do things for you. You don't pull attributes out and make the decisions yourself (in a helper class or other).
So I agree with your assertion about option b). Since you require additional logic, there's no harm in performing an operation on the object whilst passing references to additional helper objects such that they collaborate. Whether you do this at the time of the operation itself, or pre-populate your order object with those collaborating entities is very much dependent upon your current situation.
You can also use extension methods on the POCOs to wrap your BLL methods.
That way you can keep using your current BLLs.
In C#, something like:
public static class OrderBusiness // everything must be static: the class and the method
{
    public static bool CanBeCancelledOnline(this Order order) // notice the 'this'
    {
        // logic ...
        return true; // placeholder result
    }
}
And now you can do order.CanBeCancelledOnline()
This is likely to depend on the complexity of your application and does require some judgement that comes with experience. The short answer is that if your project is anything more than a pretty simple one then you are best off putting your logic in the domain classes.
The longer answer:
If you place your logic within a service layer you are effectively following the Transaction Script pattern, and ending up with an anaemic domain model. This can be a valid route, but it generally works best with simple and small projects. The problem is that the transaction script layer (your service layer) becomes more complicated to maintain as it grows.
So the alternative is to create a rich domain model that contains the logic within it. Keeping logic together with the class it applies to is a key part of good OO design, and in a complex project pretty essential. It usually requires a bit more thought and effort initially, which is why for very simple projects people sometimes use the transaction script pattern.
If you are unsure about which to go with, it is normally not too difficult a job to refactor your logic from your service layer into the domain, but you need to make the call early enough that the job is not too large.
Contrary to one of the answers, using POCO classes does not mean you can't have business logic in your domain classes. POCO is about not applying framework specific structures to your domain classes, such as methods and interfaces specific to a particular ORM. A class with some functions to apply business logic is clearly still a Plain-Old-CLR-Object.
A common question, and one that is partially subjective.
IMO, you should go with Option A.
POCO's should be exactly that, "plain-old-CLR" objects. If you start applying business logic to them, they cease to be POCO's. :)
You can certainly put your business logic in the same assembly as your POCO's, just don't add methods directly to them, create helper classes to facilitate business rules. The only thing your POCO's should have is properties mapping to your domain model.
It really depends on how complex your business rules are. In our application, the business rules are very straightforward, so we use Option A.
But if your business rules start to get messy, consider using the Specification Pattern.
Stephan Walters' video on MVC and models is a very good, light discussion of the various topics listed in this question's title. The one question listed in the notes that went unanswered was:
If you create an Interface/Repository pattern for Linq2SQL, do Linq2SQL's classes still cause a dependency on LINQ, even though you pass the classes out as a ToList?
The answer is probably an easy "yes"; however, what standard mechanism would you use to represent the data?
Let's say you have a Product entity that is made up of three tables (Prices, Text, and Photos): you could have sets of prices for different regions, different text for localization, and different photos. (Sounds like a Builder pattern.) Would you create a slice of these tables, grabbing the right prices, text, and photos, into a single List? Since Lists may be proprietary, would you use a Dictionary object?
I thank you for your answers. I am very interested in the "standard and proper" way to do it rather than 101 possibilities.
Another quick question: is Entity Framework ready for a complicated database yet? There are a lot of constructs that Linq2SQL likes that EF does not. EF seems to require identity fields as primary keys (HAHA), but it seems like every demo does this. I want to use EF, but I constantly fail to make it work, falling back to Linq2SQL.
If you keep the L2S classes on the other side of the Repository facade (remember, that's all a Repository is - a facade), then you decouple the rest of your application from L2S. This means that the job of the code behind your repository is to turn the L2S objects into "domain" objects (custom classes), and then the Repository returns those.
In this sense, the Repository is returning fully formed "Product" objects with all their related Price, Text, and Photo data. This is called an Aggregate Root.
There shouldn't be a problem with Lists, since they are CLR objects.
As far as EF for advanced scenarios, my advice would be not yet, for the reasons you note.
The standard mechanism I'd use to represent the data is a Data Transfer Object. I would never return a LINQ to SQL or Entity Framework object across a service boundary, and I would hesitate to return one across a layer boundary of any kind. This is because these objects will serialize implementation-dependent data.