Domain Events - Inheritance or use of enums - event-handling

So suppose I have these kinds of domain events:
class BookChangedName...
class BookType1ChangedName extends BookChangedName...
class BookType2ChangedName extends BookChangedName...
Is that better or:
class BookChangedName {
    BookType bookType; // enum field: BOOK_TYPE_1, BOOK_TYPE_2, ...
}
Since they say to use inheritance only if there is different behavior between classes, I assume I would go with the enum sample (case #2), since domain events are just plain DTOs.
But again, in my domain these different types of events have different meanings (different processing paths). So if I use example #1, I would end up with a lot of:
if(event instanceof BookType1ChangedName){
//do smth in domain
}
else if(event instanceof BookType2ChangedName){
//do smth in domain
}
and with example #2 I could not be as explicit as:
when(BookType1ChangedName event){...
I would have to do some kind of pre-processing like:
@EventHandler(matcher_pattern = event -> event.bookType == BOOK_TYPE_1)
when(BookChangedName event){...

Do your domain experts use the phrase "BookType1"? You said that there are different logic paths for each book type - I'd suggest interrogating the rationale for that; perhaps there is a more domain-oriented name for each event? Try to listen for the language used by the domain experts to describe the two situations.
If you can find more descriptive language, I would go with separate events based on inheritance. With some domain event mechanisms, you can subscribe to handle either the abstract superclass or one of the specialisations, which may be helpful if you want that flexibility.
On the other hand, if it really is just booktype1 and booktype2 (and in the future maybe booktype3 and booktype4), I'd consider either an enum or even just an integer, implement the different logic paths in different strategies, and have your domain event handler use a factory to return the appropriate strategy for the given book type, then delegate to the strategy to execute the logic, as sketched below.
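A minimal sketch of that strategy-plus-factory idea; all names here (BookType, BookChangedName, the strategy classes) are invented for illustration and are not tied to any particular framework:

enum BookType { BOOK_TYPE_1, BOOK_TYPE_2 }

class BookChangedName {
    final BookType bookType;
    final String newName;
    BookChangedName(BookType bookType, String newName) {
        this.bookType = bookType;
        this.newName = newName;
    }
}

interface BookNameChangeStrategy {
    void handle(BookChangedName event);
}

class Type1Strategy implements BookNameChangeStrategy {
    public void handle(BookChangedName event) { /* type-1 specific logic */ }
}

class Type2Strategy implements BookNameChangeStrategy {
    public void handle(BookChangedName event) { /* type-2 specific logic */ }
}

class BookNameChangeStrategyFactory {
    // Return the strategy implementing the logic path for the given book type.
    BookNameChangeStrategy forType(BookType type) {
        switch (type) {
            case BOOK_TYPE_1: return new Type1Strategy();
            case BOOK_TYPE_2: return new Type2Strategy();
            default: throw new IllegalArgumentException("Unknown book type: " + type);
        }
    }
}

// The single event handler stays free of if/else chains on the type code.
class BookChangedNameHandler {
    private final BookNameChangeStrategyFactory factory = new BookNameChangeStrategyFactory();

    void when(BookChangedName event) {
        factory.forType(event.bookType).handle(event);
    }
}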

Event names are an important part of DDD (in fact, any "name" is important), so I suggest using different event names for different events, as those names carry a lot of information. Also, if you use a type code (with an enum) you increase the CRAP index because of the additional ifs.

You should just have a single BookChangedName event and have a BookType property for that event. The handler/subscriber of the event should take care of the logic.
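In code, that suggestion amounts to something like the sketch below (the field and subscriber names are invented, and the BookType enum is the same hypothetical one as in the earlier strategy sketch):

class BookChangedName {
    BookType bookType;   // single event, with the book type as a plain property
    String newName;
}

class BookChangedNameSubscriber {
    void on(BookChangedName event) {
        // The subscriber, not the event hierarchy, decides how each book type is processed.
        if (event.bookType == BookType.BOOK_TYPE_1) { /* type-1 processing */ }
        else if (event.bookType == BookType.BOOK_TYPE_2) { /* type-2 processing */ }
    }
}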

Should service layer accept a DTO or a custom request object from the controller?

As the title suggests, what is the best practice when designing service layers? I do understand that a service layer should always return a DTO so that domain (entity) objects are preserved within the service layer. But what should be the input to the service layer from the controllers?
I put forward three of my own suggestions below:
Method 1:
In this method the domain object (Item) is preserved within the service layer.
class Controller
{
    @Autowired
    private ItemService service;

    public ItemDTO createItem(ItemDTO dto)
    {
        // service layer accepts a DTO object and returns a DTO object
        return service.createItem(dto);
    }
}
Method 2:
This is where the service layer receives a custom request object. I have seen this pattern extensively in the AWS Java SDK and also in the Google Cloud Java API.
class Controller
{
    @Autowired
    private ItemService service;

    public ItemDTO createItem(CreateItemRequest request)
    {
        // service layer accepts a custom request object and returns a DTO object
        return service.createItem(request);
    }
}
Method 3:
The service layer accepts a request object and returns a domain object. I am not a fan of this method, but it has been used extensively at my workplace.
class Controller
{
    @Autowired
    private ItemService service;

    public ItemDTO createItem(CreateItemRequest request)
    {
        // service layer accepts a request object and returns a domain object,
        // which the controller then maps to a DTO
        Item item = service.createItem(request);
        return ItemDTO.fromEntity(item);
    }
}
If all 3 of the above methods are incorrect or not the best way to do it, please advise me on the best practice.
Conceptually speaking, you want to be able to reuse the service/application layer across presentation layers and through different access ports (e.g. console app talking to your app through a web socket). Furthermore, you do not want every single domain change to bubble up into the layers above the application layer.
The controller conceptually belongs to the presentation layer. Therefore, you wouldn't want the application layer to be coupled upon a contract defined in the same conceptual layer the controller is defined in. You also wouldn't want the controller to depend upon the domain or it may have to change when the domain changes.
You want a solution where the application layer method contracts (parameters & return type) are expressed either in Java native types or in types defined within the service layer boundary.
If we take an IDDD sample from Vaughn Vernon, we can see that his application service method contracts are defined in Java native types. His application service command methods also do not yield any result, given that he used CQRS, but we can see that query methods do return a DTO defined in the application/service layer package.
In the above listed 3 methods which ones are correct/wrong?
Both #1 and #2 are very similar and could be right from a dependency standpoint, as long as ItemDTO and CreateItemRequest are defined in the application layer package. I would favor #2, though, since the input data type is named after the use case rather than simply the kind of entity it deals with: entity-focused naming fits better with CRUD, and you might find it difficult to come up with good names for the input data types of other use case methods operating on the same kind of entity. #2 has also been popularized through CQRS (where commands are usually sent to a command bus), but it is not exclusive to CQRS; Vaughn Vernon uses this approach in the IDDD samples as well. Please note that what you call a request is usually called a command.
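As a rough sketch (the package and field names are invented, not taken from the IDDD samples), such a use-case-named input type could look like this:

// Application-layer input type named after the use case, not after the entity.
// In CQRS terminology this would typically be called CreateItemCommand.
package com.example.application.item;

public final class CreateItemRequest
{
    private final String name;
    private final int quantity;

    public CreateItemRequest(String name, int quantity)
    {
        this.name = name;
        this.quantity = quantity;
    }

    public String getName() { return name; }
    public int getQuantity() { return quantity; }
}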
However, #3 would not be ideal given it couples the controller (presentation layer) with the domain.
For example, some methods receive 4 or 5 args. According to Eric Evans in Clean Code, such methods must be avoided.
That's a good guideline to follow, and I'm not saying the samples couldn't be improved, but keep in mind that in DDD the focus is put on naming things according to the Ubiquitous Language (UL) and following it as closely as possible. Therefore, forcing new concepts into the design just for the sake of grouping arguments together could potentially be harmful. Ironically, the process of attempting to do so may still offer some good insights and allow you to discover overlooked & useful domain concepts that could enrich the UL.
PS: Robert C. Martin wrote Clean Code, not Eric Evans, who is famous for the blue book.
I'm from a C# background, but the concept remains the same here.
In a situation like this, where we have to pass parameters/state from the application layer to the service layer and then return a result from the service layer, I tend to follow separation of concerns. The service layer does not need to know about the request parameter of your application layer/controller. Similarly, what you return from the service layer should not be coupled to what you return from your controller. These are different layers with different requirements and separate concerns; we should avoid tight coupling.
For the above example, I would do something like this:
class Controller
{
    @Autowired
    private ItemService service;

    public ItemResponse createItem(CreateItemRequest request)
    {
        var createItemDto = getDto(request);          // map the request to the DTO the service layer expects
        var itemDto = service.createItem(createItemDto);
        return getItemResponse(itemDto);              // map the service result to the controller's response
    }
}
This may feel like more work, since you now need to write additional code to convert between the different objects. However, it gives you a great deal of flexibility and makes the code much easier to maintain. For example, CreateItemDto may have additional/computed fields compared to CreateItemRequest. In such cases, you do not need to expose those fields in your request object. You only expose your data contract to the client and nothing more. Similarly, you only return the relevant fields to the client, as opposed to whatever you return from the service layer.
If you want to avoid manual mapping between DTO and request objects, C# has libraries like AutoMapper. In the Java world, I'm sure there is an equivalent; maybe ModelMapper can help.
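For instance, a rough sketch of the mapping with ModelMapper, assuming the org.modelmapper:modelmapper dependency is on the classpath and that the field names of the hypothetical CreateItemRequest/CreateItemDto and ItemDto/ItemResponse pairs from the example above line up:

import org.modelmapper.ModelMapper;

class ItemMappings
{
    private final ModelMapper modelMapper = new ModelMapper();

    // Map the incoming request to the DTO the service layer expects.
    CreateItemDto toCreateItemDto(CreateItemRequest request)
    {
        return modelMapper.map(request, CreateItemDto.class);
    }

    // Map the DTO returned by the service layer to the response exposed to the client.
    ItemResponse toItemResponse(ItemDto itemDto)
    {
        return modelMapper.map(itemDto, ItemResponse.class);
    }
}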

Preconditions and postconditions in interfaces and abstract methods

I'm trying to implement my own programming language, and I'm currently doing lexing and parsing. I'm nearly done and want to add native support for class invariants, preconditions and postconditions.
public withdraw (d64 amount) : this {
require amount > 0;
require this.balance - amount > this.overdraft;
# method code
d64 newBalance = this.balance - amount;
ensure this.balance == newBalance;
}
You would also be able to define class invariants at the top of the class.
class BankAccount {
invariant this.balance > this.overdraft;
# class body
}
These are my questions:
Would it make sense to include class invariants in abstract classes or interfaces?
Would it make sense to include preconditions in abstract methods and interface methods?
Would it make sense to include postconditions in abstract methods or interface methods?
Thinking about it myself, I don't think it makes sense to include invariants or postconditions in interfaces, but I don't really see a problem with preconditions.
It would be possible to include pre- and postconditions in abstract and interface methods like below.
public interface BankAccount {
public withdraw (d64 amount) : this {
require amount > 0;
require this.balance - amount > this.overdraft;
# no other statements (implementation)
d64 newBalance = this.balance - amount;
ensure this.balance == newBalance;
}
}
It really depends on whether your interface is stateful or stateless. It can be perfectly fine to include pre- and/or postconditions for interface methods; in fact, we do this all the time. Any time you write a piece of Javadoc (or documentation for any other tool), you are creating a contract. Otherwise, how could you test anything? It's important to realize that test-driven development and design by contract have much in common. Defining a contract is essential to proper TDD: you first design an interface and create an informal contract for it (in human-readable language), then you write a test to ensure the contract is satisfied. If we follow the TDD classicists (https://www.thoughtworks.com/insights/blog/mockists-are-dead-long-live-classicists), we always write tests against contracts.
Now, to be more specific: if an interface is stateful, we can easily express its invariants in terms of its other methods. Let's take the Java List interface as an example.
If you read the Javadoc carefully, you will see there are a lot of invariants. For instance, the add method has the following contract:
Preconditions: the element cannot be null (if the list doesn't support null elements - a design smell in my opinion, by the way, but let's set that aside for now)
Postconditions: ordering is preserved, i.e. the ordering of the other elements cannot be changed
Since the List interface is definitely stateful, we can reason about the state of the list using query methods like get, subList, etc. Therefore, you can express all the invariants in terms of the interface's own methods.
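As a rough illustration (my own sketch, not part of the original answer), the add contract can be checked purely through the List interface's query methods; run with -ea so the assertions are enabled:

import java.util.ArrayList;
import java.util.List;

class ListAddContractCheck
{
    // Express add()'s pre- and postconditions using only query methods of List.
    static <T> void addWithContract(List<T> list, T element)
    {
        if (element == null) throw new IllegalArgumentException("precondition: element must not be null");
        List<T> before = new ArrayList<>(list);                  // snapshot for postcondition checks
        list.add(element);
        assert list.size() == before.size() + 1;                 // the list grew by exactly one element
        assert list.get(list.size() - 1).equals(element);        // the element was appended at the end
        assert list.subList(0, before.size()).equals(before);    // ordering of existing elements preserved
    }

    public static void main(String[] args)
    {
        List<String> names = new ArrayList<>();
        addWithContract(names, "Alice");
        addWithContract(names, "Bob");
        System.out.println(names);   // [Alice, Bob]
    }
}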
In the case of a stateless interface, such as a Calculator, we also define a contract, but its invariants do not include any state. So, for example, the sum method can have the following contract:
int sum(int a, int b)
Preconditions: a and b are integers (which is automatically guaranteed by static type checking in Java)
Postconditions: the result is an integer (again - type safety) which is equal to a + b
Our Calculator is a stateless interface, therefore we don't include any state in our invariants.
Now, let's get back to your BankAccount example:
The way you describe it, BankAccount is definitely a stateful interface. In fact, it's a model example of what we call an Entity in domain-driven design terms: BankAccount has its own lifecycle and state, and it can (and will) change during its lifetime. It is therefore perfectly fine to express your contracts in terms of the state methods of your class. All you need to do is move amount, balance and overdraft to the top of the interface, either as properties (if your language supports them) or as methods - it doesn't really matter. What's important is that amount, balance and overdraft are now part of your interface and form the ubiquitous language of your interface. These methods/properties are an integral part of your entire BankAccount interface, which means they can be used as part of your interface's contract.
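Sketched in Java rather than the question's own language, and with invented method names, the idea is roughly:

// Hypothetical Java rendering: the state the contract needs (balance, overdraft)
// is part of the interface itself, so pre- and postconditions can be phrased
// entirely in terms of the interface.
public interface BankAccount
{
    double balance();
    double overdraft();

    // Informal contract:
    //   requires amount > 0 and balance() - amount > overdraft()
    //   ensures  balance() == old(balance()) - amount
    void withdraw(double amount);

    // A default method can even enforce the preconditions for every implementation.
    default void withdrawChecked(double amount)
    {
        if (amount <= 0) throw new IllegalArgumentException("amount must be positive");
        if (balance() - amount <= overdraft()) throw new IllegalStateException("would exceed overdraft limit");
        withdraw(amount);
    }
}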
Some time ago I implemented a very simple prototype of Java contracts, written as a set of annotations supported by aspect-oriented programming. I tried to achieve a goal similar to yours: to integrate contracts with the language and make them more formal. It was just a very simple prototype, but I think it expressed the idea quite well. If you are interested, I should probably upload it to GitHub soon (I've been using Bitbucket most of the time so far).

When are object interfaces useful in PHP? [duplicate]

From php.net:
Object interfaces allow you to create code which specifies which methods
a class must implement, without having to define how these methods are handled.
Why should I need to do that? Could it be a kind of 'documentation'?
When I'm thinking about a class I have to implement, I know exactly which methods I should code.
What are some situations where interfacing a class is a "best practice"?
Short answer: uniform interfaces and polymorphism.
Longer answer: you can obviously just create a class that does everything, and indeed you'd know what methods to write. The problem with using only concrete classes, however, is your lack of ability to change. Say you have a class that stores your users in a MySQL database; let's call it a UserRepository. Imagine the following code:
<?php
class UserRepositoryMysql {
public function save( User $user ) {
// save the user.
}
}
class Client {
public function __construct( UserRepositoryMysql $repos ) {
$this->repos = $repos;
}
public function save( User $user ) {
$this->repos->save( $user );
}
}
Now, this is all good, as it would actually work and save the User to the database. But imagine your application becomes popular and soon there is a question of supporting PostgreSQL as well. You'll have to write a UserRepositoryPostgresql class and pass that along instead of UserRepositoryMysql. But you've typehinted on UserRepositoryMysql, and you're not certain both repositories use the same methods. As an aside, there is little documentation for a potential new developer on how to implement his own storage.
When you rewrite the Client class to be dependent upon an interface, instead of a concrete class, you'll have an option to "swap them out". This is why interfaces are useful, obviously, when applied correctly.
First off, my PHP object coding is way behind my .NET coding; however, the principles are the same. The advantages of using interfaces in your classes are manifold. Take, for example, the case where you need to return data from a search routine. This search routine may have to work across many different classes with completely different data structures. In 'normal' coding, it would be a nightmare trying to marry up the variety of different return values.
By implementing interfaces, you add a responsibility to the classes that use them to produce a uniform set of data, no matter how disparate they may be. Another example would be the case where you are pulling data from different 'providers' (for example XML, JSON, CSV, etc.). By implementing an interface on each class type, you open up the possibility of extending your data feeds painlessly by adding new classes that implement the interface, rather than having a mash-up of switch statements attempting to figure out what your intentions are.
In a word, think of an interface as a 'contract' that the class 'must' honour. Knowing that means you can code with confidence for that given scenario, with only the implementation detail varying.
Hope this helps.
[edit] - see this example on SO for a fairly simple explanation:
An interface is a concept in object-oriented programming that enables polymorphism. Basically, an interface is like a contract by which classes that implement it agree to provide certain functionality, so that they can be used in the same way as other classes that implement the interface.
purpose of interface in classes
The first case that comes to my mind is when you have a class that uses certain methods of another class. You don't care how this second class works, but expect it to have particular methods.
Example:
interface IB {
public function foo();
}
class B implements IB {
public function foo() {
echo "foo";
}
}
class A {
private $b;
public function __construct( IB $b ) {
$this->b = $b;
}
public function bar() {
$this->b->foo();
}
}
$a = new A( new B() );
$a->bar(); // echos foo
Now you can easily pass a different object to the instance of class A:
class C implements IB {
public function foo() {
echo "baz";
}
}
$a = new A( new C() );
$a->bar(); // echos baz
Please notice that the same bar method is called.
You can achieve similar results using inheritance, but as PHP does not support multiple inheritance, interfaces are better - a class can implement more than one interface.
You can also review one of the PHP design patterns - Strategy.
Say you're creating a database abstraction layer. You provide one DAL object that provides generic methods for interfacing with a database and adapter classes that translate these methods into specific commands for specific databases. These adapters themselves need to have a generic interface, so the DAL object can talk to them in a standardized way.
You can specify the interface the adapters need to have using an interface. Of course you could simply write some documentation that specifies what methods an adapter needs to have, but writing it in code enables PHP to enforce this interface for you. It enables PHP to throw helpful error messages before a single line of code is executed. Otherwise, missing methods could only be found at runtime, and only if you actually try to call them, which makes debugging a lot harder and the code much more unreliable.

Need suggestions regarding Interface refactoring

I have inherited a project that has an awkwardly big interface declared (let's call it IDataProvider). There are methods for all aspects of the application bunched up inside the file. Not that it's a huge problem, but I'd rather have them split into smaller files with descriptive names. Refactoring the interface and breaking it up into multiple interfaces (let's say IVehicleProvider, IDriverProvider, etc.) would require massive code changes, because there are a lot of classes that implement the interface. I'm thinking of two other ways of sorting things out: 1) create multiple files for each individual aspect of the application and make the interface partial, or 2) create multiple interfaces like IVehicleProvider and IDriverProvider and have the IDataProvider interface inherit from them.
Which of the above would you rather do, and why? Or if you can think of a better way, please tell.
Thanks
This book suggests that interfaces belong, not to the provider, but rather to the client of the interface. That is, that you should define them based on their users rather than the classes that implement them. Applied to your situation, users of IDataProvider each use (probably) only a small subset of the functionality of that big interface. Pick one of those clients. Extract the subset of functionality that it uses into a new interface, and remove that functionality from IDataProvider (but if you want to let IDataProvider extend your new interface to preserve existing behavior, feel free). Repeat until done - and then get rid of IDataProvider.
This is difficult to answer without any tags or information telling us the technology or technologies in which you are working.
Assuming .NET, the initial refactoring should be very minimal.
The classes that implement the original interface already implement it in its entirety.
Once you create the smaller interfaces, you just change:
public class SomeProvider : IAmAHugeInterface { … }
with:
public class SomeProvider : IProvideA, IProvideB, IProvideC, IProvideD { … }
…and your code runs exactly the way it did before, as long as you haven't added or removed any members from what was there to begin with.
From there, you can whittle down the classes on an as-needed or as-encountered basis and remove the extra methods and interfaces from the declaration.
Is it correct that most if not all of the classes which implement this single big interface have lots of methods which either don't do anything or throw exceptions?
If that isn't the case, and you have great big classes with lots of different concerns bundled into it then you will be in for a painful refactoring, but I think handling this refactoring now is the best approach - the alternatives you suggest simply push you into different bad situations, deferring the pain for little gain.
One thing you can do is apply multiple interfaces to a single class (in most languages), so you can just create your new interfaces and replace the single big interface with the multiple smaller ones:
public class BigNastyClass : IBigNastyInterface
{
}
Goes to:
public class BigNastyClass : ISmallerInterface1, ISmallerInterface2 ...
{
}
If you don't have huge classes which implement the entire interface, I would tackle the problem on a class by class basis. For each class which implements this big interface introduce a new specific interface for just that class.
This way you only need to refactor your code base one class at a time.
DriverProvider for example will go from:
public class DriverProvider : IBigNastyInterface
{
}
To:
public class DriverProvider : IDriverProvider
{
}
Now you simply remove all the unused methods that weren't doing anything beyond satisfying the big interface, and fix up any methods where DriverProviders need to be passed in.
I would do the latter. Make the individual, smaller interfaces, and then make the 'big' interface an aggregation of them.
After that, you can refactor the big interface away in the consumers of it as applicable.

Single Responsibility Principle: do all public methods in a class have to use all class dependencies?

Say I have a class that looks like the following:
internal class SomeClass
{
IDependency _someDependency;
...
internal string SomeFunctionality_MakesUseofIDependency()
{
...
}
}
And then I want to add functionality that is related but makes use of a different dependency to achieve its purpose. Perhaps something like the following:
internal class SomeClass
{
IDependency _someDependency;
IDependency2 _someDependency2;
...
internal string SomeFunctionality_MakesUseofIDependency()
{
...
}
internal string OtherFunctionality_MakesUseOfIDependency2()
{
...
}
}
When I write unit tests for this new functionality (or update the unit tests that I have for the existing functionality), I find myself creating a new instance of SomeClass (the SUT) whilst passing in null for the dependency that I don't need for the particular bit of functionality that I'm looking to test.
This seems like a bad smell to me, but the very reason I'm going down this path is that I found myself creating a new class for each piece of new functionality I introduced. That seemed like a bad thing as well, so I started attempting to group similar functionality together.
My question: should all dependencies of a class be consumed by all its functionality i.e. if different bits of functionality use different dependencies, it is a clue that these should probably live in separate classes?
When every instance method touches every instance variable, the class is maximally cohesive. When no instance method shares an instance variable with any other, the class is minimally cohesive. While it is true that we like cohesion to be high, it's also true that the 80-20 rule applies: getting that last little increase in cohesion may require a mammoth effort.
In general if you have methods that don't use some variables, it is a smell. But a small odor is not sufficient to completely refactor the class. It's something to be concerned about, and to keep an eye on, but I don't recommend immediate action.
Does SomeClass maintain an internal state, or is it just "assembling" various pieces of functionality? Can you rewrite it that way:
internal class SomeClass
{
...
internal string SomeFunctionality(IDependency _someDependency)
{
...
}
internal string OtherFunctionality(IDependency2 _someDependency2)
{
...
}
}
In this case, you may not break SRP if SomeFunctionality and OtherFunctionality are somehow (functionally) related, which is not apparent from the placeholders.
And you have the added value of being able to select the dependency to use from the client, not at creation/DI time. Maybe some tests defining use cases for those methods would help clarify the situation: if you can write a meaningful test case where both methods are called on the same object, then you don't break SRP.
As for the Facade pattern, I have seen it go wild too many times to like it - you know, when you end up with a 50+ method class... The question is: why do you need it? For efficiency reasons à la old-timer EJB?
I usually group methods into classes if they use a shared piece of state that can be encapsulated in the class. Having dependencies that aren't used by all methods in a class can be a code smell but not a very strong one. I usually only split up methods from classes when the class gets too big, the class has too many dependencies or the methods don't have shared state.
My question: should all dependencies of a class be consumed by all its functionality i.e. if different bits of functionality use different dependencies, it is a clue that these should probably live in separate classes?
It is a hint, indicating that your class may be a little incoherent ("doing more than just one thing"), but like you say, if you take this too far, you end up with a new class for every piece of new functionality. So you would want to introduce facade objects to pull them together again (it seems that a facade object is exactly the opposite of this particular design rule).
You have to find a good balance that works for you (and the rest of your team).
Looks like overloading to me.
You're trying to do something and there are two ways to do it. At the SomeClass level, I'd have one dependency to do the work, then have that single dependent class support the two (or more) ways to do the same thing, most likely with mutually exclusive input parameters.
In other words, I'd have the same code you have for SomeClass, but define it as SomeWork instead, and not include any other unrelated code.
HTH
A Facade is used when you want to hide complexity (like an interface to a legacy system) or you want to consolidate functionality while being backwards compatible from an interface perspective.
The key in your case is why you have the two different methods in the same class. Is the intent to have a class which groups together similar types of behavior, even if it is implemented through unrelated code, as in aggregation? Or are you attempting to support the same behavior with alternative implementations depending on the specifics, which would be a hint for an inheritance/overloading type of solution?
The problem will be whether this class continues to grow, and in what direction. Two methods won't make a difference, but if this repeats with more than three, you will need to decide whether you want to declare it as a facade/adapter or whether you need to create child classes for the variations.
Your suspicions are correct, but the smell is just the wisp of smoke from a burning ember. You need to keep an eye on it in case it flares up, and then you need to decide how you want to quench the fire before it burns out of control.