Models as Scala case classes interacting with DAO?

I was starting a design involving Scala case classes as models, and I was wondering about a design decision.
Let's imagine we have two models, a User model and an Order model. The Order model references the User model as a foreign key.
case class User(id: UserId, [Other fields...], password: String)
case class Order(id: OrderId, [Other fields...], userId: UserId)
Then, given this design, we would have an Orders DAO with a findByUser method.
My question is: would it be good design to have an orders method on User that calls this DAO method, making the system more OO, or is it better to keep the layers isolated and not include this method?
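For concreteness, a rough sketch of the coupled option (OrderDao here is a hypothetical object standing in for the DAO):

object OrderDao {
  def findByUser(userId: UserId): List[Order] = ??? // would query the orders table
}

case class User(id: UserId, password: String) {
  // the model reaches into the DAO layer
  def orders: List[Order] = OrderDao.findByUser(id)
}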
Thanks!

If I understand you correctly, you're asking about the Active Record pattern. Like any pattern, it has its pros and cons; you can find more about it online. Here are some resources:
http://www.mehdi-khalili.com/orm-anti-patterns-part-1-active-record
https://softwareengineering.stackexchange.com/questions/70291/what-are-the-drawbacks-to-the-activerecord-pattern
Active Record Design Pattern?
In a Play 2 project, I initially used the pattern, mainly because of its Ebean support. However, once I needed a bit more logic for persisting some models, extending it became cumbersome. In the end, I separated everything: separate service, separate model, separate DAO. That also let me switch easily between repository-layer implementations (in the end I could experiment with Spring Data JPA, Hibernate, etc. more freely).
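Sketched in Scala, that separation might look like this (all names are hypothetical):

// model: a plain case class, unaware of persistence
case class User(id: Long, email: String)

// DAO: the only layer that talks to the data store
trait UserDao {
  def findById(id: Long): Option[User]
  def save(user: User): Unit
}

// service: business logic depending only on the DAO abstraction,
// which is what makes repository implementations swappable
class UserService(dao: UserDao) {
  def findOrFail(id: Long): User =
    dao.findById(id).getOrElse(sys.error(s"no user with id $id"))
}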


Exposed domain model in Java microservice architecture

I'm aware that copying entity classes and properties into DTOs is considered an anti-pattern, so with the Exposed Domain Model pattern the same @Entity can be used both as the database entity class and as the DTO for the service and MVC layers (see https://codereview.stackexchange.com/questions/93511/data-transfer-objects-vs-entities-in-java-rest-server-application).
But suppose we have a microservice architecture where the same set of properties is used as an entity in one project with persistence, and as a DTO in another project that uses the first one as a service. What's the proposed pattern in such a situation?
The second project doesn't need the @Entity-related functionality, and if we put that class in a shared library, it will be tied unnecessarily to JPA-specific APIs and libraries. The alternative is to fall back to the separate-DTO-classes anti-pattern.
When your requirements for a DTO model exactly match your entity model you are either in a very early stage of the project or very lucky that you just have a simple model. If your model is very simple, then DTOs won't give you many immediate benefits.
At some point, the requirements for the DTO model and the entity model will diverge though. Imagine you add some audit aspects, statistics or denormalization to your entity/persistence model. That kind of data is usually never exposed via DTOs directly, so you will need to split the models. It is also often the case that the main driver for DTOs is the fact that you don't need all the data all the time. If you display objects in e.g. a dropdown you only need a label and the object id, so why would you load the whole entity state for such a use case?
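To illustrate that last point, here is a small sketch (hypothetical names): a dropdown projection carries only a fraction of the entity state.

// full entity state, including fields never exposed directly via DTOs
case class ProductEntity(
  id: Long,
  name: String,
  description: String,
  price: BigDecimal,
  lastModifiedBy: String) // audit field

// all a dropdown needs: an id and a label
case class ProductOption(id: Long, label: String)

def toOption(p: ProductEntity): ProductOption = ProductOption(p.id, p.name)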
The fact that you have annotations on your DTO models shouldn't bother you that much; what is the alternative? An XML-like mapping? Manual object wiring?
If your model is used by third parties directly, you could use subclassing, i.e. keep the main model free of annotations and have annotated subclasses in your project that extend the main model.
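A rough Scala sketch of that subclassing idea, assuming JPA annotations with field access (a real mapping needs more care around constructors and identity):

import javax.persistence.{Entity, Id}

// main model, free of annotations, safe for third parties to depend on
class UserModel(var email: String) {
  def this() = this(null) // no-arg constructor, required by JPA
}

// annotated subclass living in the persistence project
@Entity
class UserEntity extends UserModel {
  @Id var id: java.lang.Long = _
}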
Having implemented a DTO approach correctly myself, I created Blaze-Persistence Entity Views, which will not only simplify the way you define DTOs, but will also improve the performance of your queries.
If you are interested, I even have an example for an external model that uses entity view subclasses to keep the main model clean.
Thank you for the answers, but the emphasis in the question is on microservice (MS) architecture and on reusing entity POJOs defined in one MS as POJOs in another. From what I've read on microservices, this is closely related to another question: should MSs share any common functionality and classes at all, or be completely independent? It seems there is no definite agreement on it, and also no definite answer or widely accepted pattern for this.
From my recent experience here is what I adopted, and it works well so far.
Have common functionality across MSs: yes, in the form of a commons project added as a dependency to all MSs, with its dependencies marked optional. Share entity classes (expose them in commons): no.
The main reason is that entity classes are closely tied to the data store of a particular MS. And since the established rule is that MSs shouldn't share data stores, it makes sense not to share the entity classes for those data stores either. It helps MSs stay more independent and gives them the freedom to manage their data stores in their own way. It does mean some more typing to add additional DTO classes and conversions between them, but it's a trade-off worth taking to retain MS independence. The reasons Christian Beikov and Maksim Gumerov mentioned apply as well.
What we do share (put in commons) is some common functionality and helper classes (for cloud, discovery, error handling, REST and JSON configuration...), and pure DTOs, where the T stands for transfer between MSs (REST entities or message payloads).
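A minimal sketch of that split, with hypothetical names:

// commons project: a pure DTO shared between MSs
case class UserDto(id: Long, email: String)

// inside one MS: its own entity class, private to its data store
case class UserEntity(id: Long, email: String, passwordHash: String)

// explicit conversion at the service boundary
def toDto(e: UserEntity): UserDto = UserDto(e.id, e.email)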

Refactoring domain model with mutability and cyclical dependencies to work for Scala with good FP practices?

I come from an OO background (C#, JavaScript), and Scala is my first foray into FP.
Because of my background, I am having trouble realizing a domain model that fits my domain problem well and also complies with good FP practices such as minimal mutability in code.
First, a short description of my domain problem as it is now.
Main domain objects are: Event, Tournament, User, and Team
Teams are made up of Users
Both Teams and Users can attend Tournaments which take place at an Event
Events consist of Users and Tournaments
Scores, stats, and rankings for Teams and Users who compete across Tournaments and Events will be a major feature.
Given this description of the problem, my initial idea for the domain is to create objects where bidirectional, cyclic relationships are the norm -- something akin to a graph. My line of thinking is that being able to access all associated objects from any given object will offer the easiest path for programming views of my data, as well as for manipulating it.
case class User(
  email: String,
  teams: List[TeamUser],
  events: List[EventUser],
  tournaments: List[TournamentUser])

case class TournamentUser(
  tournament: Tournament,
  user: User,
  isPresent: Boolean)

case class Tournament(
  game: Game,
  event: Event,
  users: List[TournamentUser],
  teams: List[TournamentTeam])
However, as I have dived further into FP best practices, I have found that my thought process is incompatible with FP principles. Circular references are frowned upon and seem to be almost an impossibility with immutable objects.
Given this, I am now struggling with how to refactor my domain to meet the requirements for good FP while still maintaining a common sense organization of the "real world objects" in the domain.
Some options I've considered
Use lazy val and by-name references -- my qualm with this is that it seems to become unmanageable once the domain becomes non-trivial.
Use uni-directional relationships instead -- with this method, though, I am forced to relegate some domain objects to second-class objects which can only be accessed through other objects. How would I choose? They all seem equally important to me. Plus, this would require building queries "against the grain" just to get a simple list of the second-class objects.
Use indirection and store a list of identifiers for relationships (sketched below) -- this removes the cyclic dependencies, but then creates more complexity, because I would have to write extra business logic to emulate relationship updates and make extra trips to the DB to get any relationship.
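Here is what I mean by that third option (simplified shapes):

case class UserId(value: Long) extends AnyVal
case class TeamId(value: Long) extends AnyVal
case class TournamentId(value: Long) extends AnyVal

case class User(id: UserId, email: String)

// relationships held as ids instead of object references:
// no cycles, but every traversal needs an extra lookup
case class Tournament(
  id: TournamentId,
  userIds: List[UserId],
  teamIds: List[TeamId])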
So I'm struggling with how to alter either my implementation or my original model to achieve the coupling I think I need but in "the right way" for Scala. How do I approach this problem?
TL;DR -- How do I model a domain using good FP practices when the domain seems to call for bidirectional access and mutability at its core?
Assuming that your domain model is backed by a database, in the case you highlighted above, I would make the "teams," "events," and "tournaments" properties of your User class defs that retrieve the appropriate objects from the database (you could implement a caching strategy if you're concerned about excessive db calls). It might look something like:
case class User(email: String) {
  def teams = TeamService.getAllTeams.filter(t => t.users.contains(this))
  // similar for events and tournaments
}
Another way of saying this might be that your cyclic dependencies have a single "authoritative" direction, while references in the other direction are calculated from this. This way, for example, when you add a user to a tournament, your function only has to return a new tournament object (with the added user), rather than a new tournament object and a new user object. Also, rather than explicitly modeling the TournamentUser linking table, Tournament could simply contain a list of User/Boolean tuples.
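A minimal sketch of that idea, assuming simplified shapes for User and Tournament:

case class User(email: String)

// the Boolean plays the role of TournamentUser.isPresent
case class Tournament(name: String, users: List[(User, Boolean)])

// adding a user yields a new Tournament; no User object has to change
def addUser(t: Tournament, u: User): Tournament =
  t.copy(users = (u, false) :: t.users)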
Another option might be to use Lenses to modify your domain model, but I haven't implemented them in a situation like this. Maybe someone with more experience in FP could speak to their applicability here.

Persistence ignorance and DDD reality

I'm trying to implement fully valid persistence ignorance with little effort. I have many questions though:
The simplest option
It's really straightforward: is it okay to have Entities annotated with Spring Data annotations, just like in SOA (but making them really do the logic)? What are the consequences, other than having to use persistence annotations in the Entities, which doesn't really follow the PI principle? I mean, is that really a problem with Spring Data? It provides nice repositories which do what repositories in DDD should do. The problem is with the Entities themselves, then...
The harder option
In order to make an Entity unaware of where the data it operates on came from, it is natural to inject that data as an interface through the constructor. Another advantage is that we could always perform lazy loading, which we have by default in the Neo4j graph database, for instance. The drawback is that Aggregates (which are composed of Entities) will be aware of all the data even if they don't use it; possibly this could lead to debugging difficulties, as the data is totally exposed (DAOs would be hierarchical, just like Aggregates). This would also force us to use adapters for the repositories, as they don't store real Entities anymore... and any translation is ugly... Another thing is that we cannot instantiate an Entity without such a DAO, though there could be in-memory implementations in the domain... again, more layers. Some say that injecting DAOs breaks PI too.
The hardest option
The Entity could be wrapped in a lazy-loader which decides where the data should come from. It could be both in-memory and in-database, and it could handle any operations that need transactions and so on. A complex layer, though it might be made generic to some extent perhaps...? Have a read about it here.
Do you know any other solution? Or maybe I'm missing something in mentioned ones. Please share your thoughts!
I achieve persistence ignorance (almost) for free, as a side effect of proper domain modeling.
In particular:
if you correctly define each context's boundary, you will obtain small entities without any need for lazy loading (which actually becomes an antipattern/code smell in a DDD project)
if you can't simply use SQL in your repository, map a set of DTOs to your db schema, and use them in factories to initialize entity classes.
In DDD projects, persistence ignorance is relevant for the domain model itself, not for repositories, factories and other applicative code. Indeed you are very unlikely to change the ORM and/or the DB in the future.
The only (but very strong) rationale behind persistence ignorance of the domain model is separation of concerns: in the domain model you should express business invariants only! Persistence is an infrastructural concern!
For example, without persistence ignorance (and with lazy loading) the domain model has to handle possible exceptions from the db; its complexity grows, and business rules get buried under technological details.
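For illustration, a minimal sketch of the DTO-plus-factory idea mentioned above (all names are hypothetical):

// DTO mapped 1:1 to the db schema
case class OrderRow(id: Long, status: String)

// domain model expressing business concepts only
sealed trait OrderStatus
object OrderStatus {
  case object Open    extends OrderStatus
  case object Shipped extends OrderStatus
}
case class Order(id: Long, status: OrderStatus)

// factory translating persistence shapes into domain shapes
object OrderFactory {
  def fromRow(row: OrderRow): Order = {
    val status = row.status match {
      case "OPEN"    => OrderStatus.Open
      case "SHIPPED" => OrderStatus.Shipped
    }
    Order(row.id, status)
  }
}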
Personally I find it near impossible to achieve a clean domain model when trying to use the same entities as the ORM.
My solution is to model my domain entities as I see fit and ensure that any ORM entities don't leak outside of the repositories. This means that my repositories accept and return domain entities.
This means you lose "most of your ORM goodness" and end up "using your ORM for simple CRUD operations".
Both of these trade-offs are fine for me, I would rather have a clean domain model that I can use, rather than one polluted with artefacts from my DB or ORM. It also cuts down the amount of time I spend "wrestling with my ORM" to zero.
As a side-note, I find document databases a much better fit for DDD.
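A minimal sketch of that repository boundary, using an in-memory map as a stand-in for the ORM (all names are hypothetical):

// domain entity and a separate ORM-mapped record
case class Customer(id: Long, name: String)
case class CustomerRecord(id: Long, name: String)

// the repository speaks domain types only; ORM types never leak out
trait CustomerRepository {
  def find(id: Long): Option[Customer]
  def save(customer: Customer): Unit
}

class InMemoryCustomerRepository extends CustomerRepository {
  private val records = scala.collection.mutable.Map.empty[Long, CustomerRecord]
  def find(id: Long): Option[Customer] =
    records.get(id).map(r => Customer(r.id, r.name)) // record -> domain
  def save(customer: Customer): Unit =
    records(customer.id) = CustomerRecord(customer.id, customer.name)
}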
Once you provide persistence mapping in your domain model:
your code depends on the framework. If you decide to change this framework, you have to change both the persistence-layer and the model-layer source code: more work, more changes, more merging of code, etc.
your domain model jar file depends on Spring/NHibernate jars, etc.
your classes become larger and larger as the business code and the persistence-related code grow
I have to admit that I don't understand the harder and hardest options.
We used separate interfaces and implementations for domain entities, and provided separate Hibernate mapping files along with the repositories.
Entities are created using a factory (or a repository, later); the identifier is generated within the persistence layer, and the entity does not need it until it is persisted.
Lazy loading is provided by a special implementation of List once:
the mapping of an entity contains it, and
the entity/aggregate is fetched from the persistence layer.
The only issue is related to transactions: when you use a lazy-loaded collection outside the transaction scope, it fails.
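A minimal sketch of such a lazily loaded collection, which also shows where the transaction issue bites:

// loads its contents from the persistence layer on first access;
// if `load` needs an open transaction, a later first access fails
class LazyLoadedList[A](load: () => List[A]) {
  private lazy val items: List[A] = load()
  def toList: List[A] = items
}

// the loader typically closes over a session or query
val users = new LazyLoadedList(() => List("alice", "bob"))
users.toList // triggers the load now, not at construction time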
I would follow the simplest option unless I ran into a stone wall. There are also pitfalls such as this when you adopt the PI principle.
Sometimes some compromises are acceptable:
public class Order {
    private String status; // my ORM does not support enums

    public Status status() {
        return Status.of(this.status);
    }

    public boolean is(Status status) {
        // use status() instead of getStatus() in the domain model
        return status() == status;
    }
}

Should all our Model classes be case classes?

I am very new to Scala and trying to understand its various constructs and their use cases. So, coming down to case classes: they are great for pattern matching, etc.
Looking at it from an MVC point of view, should all our Models be case classes to leverage this feature?
I looked at the play framework's sample code snippets and found an example where a model class was defined as a case class.
If you have models as in Swing component models (e.g., table models) in mind, then case classes might not be the best choice. Case classes are a good choice when they are (observationally) immutable, which is usually the case if you use them to represent data retrieved from a database. For Swing models, however, this might not be the case, e.g., if the user is allowed to change the table data.
It's not uncommon. There are various libraries, e.g. Salat for MongoDB, that will store and retrieve case class instances from your datastore of choice.
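For illustration, a small sketch of why (observational) immutability and case classes work well together: "updates" are copies, and pattern matching comes for free.

case class Account(id: Long, owner: String, balance: BigDecimal)

val before = Account(1L, "alice", BigDecimal(100))
// an "update" returns a new value; `before` itself never changes
val after  = before.copy(balance = before.balance + 50)

after match {
  case Account(_, owner, b) if b > 100 => println(s"$owner is in credit")
  case _                               => println("low balance")
}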

Should I use partial classes as the business layer when using Entity Framework?

I am working on a project using Entity Framework. Is it okay to use partial classes of the EF-generated classes as the business layer? I am beginning to think that this is how EF is intended to be used.
I have attempted to use a DTO pattern and soon realized that I am just creating a bunch of mapping classes, which duplicates my effort and is also a cause of more maintenance work and an additional layer.
I want to use self-tracking entities and pass the EF entities to all the layers. Please share your thoughts and ideas. Thanks!
I had a look at using partial classes and found that exposing the database model up towards the UI layer would be restrictive.
For a few reasons:
The entity model created includes a deep relational object model which, depending on your schema, would get exposed to the UI layer (say the presenter of MVP or the ViewModel in MVVM).
The business logic layer typically exposes operations that you can code against. If you see a save method on the BLL, look at the parameters needed to do the save, and see a model that requires the construction of other entities (because of the relational nature of the entity model) just to do the save, then it is not keeping the operation simple.
If you have a bunch of web services then the extra data will need to be sent across for no apparent gain.
You can create more immutable DTOs for your operations' parameters, rather than encountering side effects because the same instance was modified in some other part of the application.
If you do TDD and follow YAGNI, then you will tend to have a structure specifically designed for the operation you are writing, which is easier to construct tests against (not requiring you to create other objects unrelated to the test just because they are on the model). In this case you might have...
public class Order
{
    ...
    public Guid CustomerID { get; set; }
    ...
}
Instead of using the Entity model generated by the EF which have references exposed...
public class Order
{
    ...
    public Customer Customer { get; set; }
    ...
}
This way the id of the customer is only needed for an operation that takes an order. Why would you need to construct a Customer (and potentially other objects as well) for an operation that is concerned with taking orders?
If you are worried about the duplication and mapping, then have a look at AutoMapper.
I would not do that, for the following reasons:
You lose the clear distinction between the data layer and the business layer
It makes the business layer more difficult to test
However, if you have some data-model-specific code, place it in a partial class to avoid it being lost when you regenerate the model.
I think partial classes would be a good idea. If the model is regenerated, then you will not lose the business logic in the partial classes.
As an alternative, you can also look into EF4 Code Only so that you don't need to generate your model from the database.
I would use partial classes. There is no such thing as a data layer in DDD-ish code. There is a data tier, and it resides on SQL Server. The application code should contain only the business layer and some mappings which allow persisting business objects in the mentioned data tier.
Entity Framework is your data access code, so you shouldn't build your own. In most cases the database schema is modified because the model has changed, not the other way around.
That being said, I would discourage you from sharing your entities across all the layers. I value the separation of the UI and the domain layer. I would use DTOs to transfer data in and out of the domain. If I had the necessary freedom, I would even use the CQRS pattern to get rid of mapping entities to DTOs: I would simply create a second EF data access project meant only for reading data for the UI, built on top of the same database. You read data through the read (anemic, i.e. without business logic) model, but you modify it by issuing commands that are executed against the real model implemented using EF and partial methods.
Does this answer your question?
I wouldn't do that. Try to keep the layers as independent as possible, so that a tiny change in your database schema does not affect all your layers.
Entities can be used for the data layer, but they should not be used beyond it.
If anything, provide interfaces to be used and let your entities implement them (in the partial file); the BL should not know the entities, only the interfaces.