We are building a multi-component, Akka-Cluster-based system. Every component is a separate Play! project; on start-up it joins the Akka cluster, and the components then look up each other's actors to do their work.
Problem
I have two problems with this setup:
Writing test code is very hard (we haven't figured this out yet), especially for tests that rely on multiple actors coming from different components. How can we resolve the dependencies and create a proper cluster in the test code (between two Play applications!)?
During development, every developer has to start multiple sbt instances to boot the system (one per Play project); this eats the entire system memory and development becomes incredibly slow.
What I'm Looking For
I was thinking of using the cluster "roles" property to do selective boot-up: there would be only a single Play project, the components would be modules, and on boot of the Play project I would inspect the current "roles" property of the instance and, based on that, start or stop certain components/actors.
This would make testing easier, but I don't know exactly how to do this in Play, especially since some components actually offer a RESTful API (Play controllers) and I don't know how to enable/disable routes and controllers at Play boot time.
Is there any document or other resource that shows the right way to build a modular distributed system, or any clues (ideally something that also relates to how sbt should be set up)?
Edit 1: The project lives in a single repository and has a single sbt build (multiple projects).
This is a good question, and I’ll answer it in parts, although I am not a Play expert.
1 – Writing Tests
I would recommend testing modules in isolation to avoid the exponential explosion of necessary test cases. To this end actors are a very nice abstraction because you can trivially mock any actor by injecting a TestProbe instead of the real ActorRef. In a cluster you will typically want to look up services on other nodes, which means that in a test you construct your probe and inject its path (probe.ref.path) instead of the path you would look up in the production system.
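As a minimal sketch of this (akka-testkit and ScalaTest assumed; the OrderHandler actor and its string protocol are made up for illustration), the component under test receives the path of its collaborator, and in the test that path is simply the probe's:

```scala
import akka.actor.{Actor, ActorSystem, Props}
import akka.testkit.{TestKit, TestProbe}
import org.scalatest.wordspec.AnyWordSpecLike

// Hypothetical component under test: it is given the path of its collaborator
// instead of hard-coding a cluster-wide lookup.
class OrderHandler(userServicePath: String) extends Actor {
  def receive = {
    case order: String =>
      // forward the work to whatever lives at the configured path
      context.actorSelection(userServicePath) ! s"validate:$order"
  }
}

class OrderHandlerSpec extends TestKit(ActorSystem("test")) with AnyWordSpecLike {

  "OrderHandler" should {
    "ask the user service to validate orders" in {
      val userService = TestProbe()                      // stands in for the real remote actor
      val handler = system.actorOf(
        Props(new OrderHandler(userService.ref.path.toString)))

      handler ! "order-42"
      userService.expectMsg("validate:order-42")         // assert on the outgoing message
    }
  }
}
```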
The second aspect concerns integration tests for which you want multiple services to participate. In this case you don’t need to start a “proper” cluster involving multiple JVMs, you can just spin up multiple ActorSystems within your test and have them communicate on "localhost".
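A rough sketch of such a fixture (the config keys shown are for classic Akka remoting and differ between Akka versions):

```scala
import akka.actor.ActorSystem
import akka.cluster.Cluster
import com.typesafe.config.ConfigFactory

// Two ActorSystems in one JVM forming a small cluster on localhost.
// Both systems must share the same name to join the same cluster.
object TwoNodeFixture {
  private def nodeConfig(port: Int, role: String) = ConfigFactory.parseString(
    s"""
       |akka.actor.provider = "akka.cluster.ClusterActorRefProvider"
       |akka.remote.netty.tcp.hostname = "127.0.0.1"
       |akka.remote.netty.tcp.port = $port
       |akka.cluster.roles = ["$role"]
       |""".stripMargin).withFallback(ConfigFactory.load())

  val frontend = ActorSystem("app", nodeConfig(2551, "frontend"))
  val backend  = ActorSystem("app", nodeConfig(2552, "backend"))

  // The first node joins itself to form the cluster; the second joins the first.
  Cluster(frontend).join(Cluster(frontend).selfAddress)
  Cluster(backend).join(Cluster(frontend).selfAddress)
}
```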
2 – Development Deployments
It is not necessary to run multiple instances of sbt; you can just create a suitable Main class which starts all required ActorSystems within the same process, just as in the tests mentioned above.
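For example, a hypothetical development launcher along these lines (component roles and boot objects are made up):

```scala
import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory

// Development launcher: one JVM, one ActorSystem per component, instead of
// one sbt/Play instance per component.
object DevMain extends App {
  private def nodeConfig(port: Int, role: String) = ConfigFactory.parseString(
    s"""
       |akka.remote.netty.tcp.port = $port
       |akka.cluster.roles = ["$role"]
       |""".stripMargin).withFallback(ConfigFactory.load())

  val frontend = ActorSystem("app", nodeConfig(2551, "frontend"))
  val billing  = ActorSystem("app", nodeConfig(2552, "billing"))

  // Each component's own (hypothetical) bootstrap code starts its actors here:
  // FrontendBoot.start(frontend)
  // BillingBoot.start(billing)
}
```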
3 – Node Role Management
The ActorSystem managed by Play will typically have a "frontend" role. In addition to that one you can start more systems with different roles, which are not Play applications by themselves. Triggering different behavior based on the node's role (starting different services and initiating different activities) makes sense; we do that ourselves in tests and in real applications.
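A sketch of such role-driven startup (the component actors here are placeholders):

```scala
import akka.actor.{Actor, ActorSystem, Props}
import akka.cluster.Cluster

// Placeholder actors standing in for real components.
class FrontendGateway extends Actor { def receive = { case _ => () } }
class BillingService  extends Actor { def receive = { case _ => () } }

// Role-driven startup: create a component's actors only when the current
// node carries the matching cluster role (from akka.cluster.roles).
object RoleBasedBoot {
  def start(system: ActorSystem): Unit = {
    val roles = Cluster(system).selfRoles

    if (roles.contains("frontend"))
      system.actorOf(Props(new FrontendGateway), "frontend-gateway")

    if (roles.contains("billing"))
      system.actorOf(Props(new BillingService), "billing-service")
  }
}
```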
On the question of disabling certain routes for certain roles I do not know enough to answer.
Related
Our team is trying to decouple a monolithic Spring MVC administrative application (create, update, delete) and we want to adopt an architecture based on microservices.
After a bit of research, it seems the best approach is to create microservices according to the problem that a specific part of the software solves, for example, managing clients.
The problem comes when we read some definitions, like the following from Wikipedia:
In software engineering, a monolithic application describes a single-tiered software application in which the user interface and data access code are combined into a single program from a single platform.
Based on that definition, my application is not monolithic, because it is cleanly separated into layers, but it does not follow a microservices architecture either, which is confusing to me since on the web everything is about monolithic vs. microservices.
So, should the microservices architecture be designed based on the business problem it solves?
Should the microservices architecture be designed based on the way in which the application is organized in layers?
Thanks.
I like to view each microservice as a self-contained, smaller monolith. When you force yourself to split up your legacy application into, um, smaller monoliths, you'll find:
60% of your code is scaffolding and will need to be repeated across multiple services.
It's easier to split things (and maintain them that way) if you've established a what-goes-where rule upfront.
The most common approach is to split the application by functionality area. So to answer your question, I'd agree more with the image at the top-right, assuming you intended to show multiple containers there.
And about the first point above: there's often a whole bunch of scaffolding modules that you can avoid writing by hand after all.
From my experience, the most obvious advantage of a microservice is the ability to scale horizontally. User analysis takes too long? Just add 10 more workers. Done. Remove them afterwards. No need to add more RAM/CPU/whatever to the already costly server that runs your monolith.
Do not plan ahead and try to separate out a ClientManager microservice - this should be just a class.
You are thinking about migrating to microservices for a reason: something is using up too many resources. Find the most problematic process that slows everything down and create a microservice for it. It can be, for example, report generation, user creation, or data aggregation. Start by planning the API. It will state clearly what responsibilities the service has and how many resources it will use. When you know what it should do, name it properly.
Agile software methodologies are your greatest friend in this process. Take the processes one by one. Experiment, iterate and evaluate. With time, it will become obvious what the microservices should look like.
There is also a hot debate about how to organize code with microservices - I lean towards a monorepo, a single repository with all the code.
Pros: one pull request can span many services, easy sharing of utilities, common dependencies, a common deployment procedure, and easier automation.
Cons: you can easily break the API contract and do too much work within one microservice (meaning it can take over another service's responsibility).
Sam Newman states in his book Building Microservices
The evils of too much coupling between services are far worse than the problems caused by code duplication
I just don't understand how the shared code between the services is evil. Does the author mean the service boundaries themselves are poorly designed if a need for a shared library emerges, or does he really mean I should duplicate the code in the case of a common business-logic dependency? I don't see what that solves.
Let's say I have a shared library of entities common to two services. Common domain objects for two services may smell, but in my case one service is the GUI for tweaking the state of those entities, and the other is an interface for other services to poll that state for their own purposes. Same domain, different function.
Now, if the shared knowledge changes, I would have to rebuild and deploy both services regardless of whether the common code is an external dependency or duplicated across the services. The same concern applies in general whenever two services depend on the same piece of business logic. In this case, I see only harm in duplicating the code, as it reduces the cohesion of the system.
Of course, diverging from the shared knowledge may cause headaches in the case of a shared library, but even this could be solved with inheritance, composition and clever use of abstractions.
So, what does Sam mean by saying code duplication is better than too much coupling via shared libraries?
The evils of too much coupling between services are far worse than the problems caused by code duplication
The author is very unspecific when he uses the generic word "coupling". I would agree with certain types of coupling being a strict no-no (like sharing databases or using internal interfaces). However, the use of common libraries is not one of those. For example, if you develop two microservices using Golang, you already have a shared dependency (on Golang's standard libraries). The same applies to libraries that you develop yourself for sharing purposes. Just pay attention to the following points:
Treat libraries that are shared like you would dependencies to 3rd party entities.
Make sure each component / library / service has a distinct business purpose.
Version them correctly and leave the decision of which version of the library to use to the corresponding microservice teams (see the build sketch after this list).
Set up responsibilities for development and testing of shared libraries separately from the microservice teams.
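For illustration, a shared library then appears in a consumer's sbt build like any other pinned third-party dependency (artifact names and versions below are made up):

```scala
// build.sbt of one microservice (sketch)
libraryDependencies ++= Seq(
  "com.mycompany"     %% "billing-contracts" % "1.4.2",  // internal shared library, versioned like a 3rd-party one
  "com.typesafe.play" %% "play-json"         % "2.9.4"   // ordinary 3rd-party dependency
)
```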
Don't forget - the microservices architectural style is not so much about code organization or internal design patterns as about the larger organizational and process-relevant aspects that allow scaling application architectures, organizations and deployments. See this answer for an overview.
Short
The core concept of the microservice architecture is that microservices have their own independent development and release cycles. "Shared libraries" undermine this.
Longer
From my own experience, it's very important to keep microservices as isolated and independent as possible. Isolation is basically about being able to release and deploy the service independently of any other services most of the time.
In other words, it's something like:
you build a new version of a service
you release it (after tests)
you deploy it into production
and you have not caused a deployment cascade across your whole environment.
"Shared libraries" in my definition those libraries, do hinder you to do so.
It's "funny" how "Shared Libraries" poison your architecture:
Oh we have a User object! Let's reuse it everywhere!
This leads to a "shared library" for the whole enterprise and starts to undermine Bounded Contexts (DDD), forces you to dependent on one technology
we already have this shared library with TDOs you need, written in
java...
To repeat myself: a new version of this kind of shared lib will affect all services and complicate your deployments, up to the point of very fragile setups. The consequence is that, at some point, nobody trusts themselves to develop the next release of the common shared library, or everyone fears the big-bang releases.
All of this just for the sake of "Don't repeat yourself"? It's not worth it (my experience proves it).
The shared compromised "User" object is very seldom better than several focused User objects in the particular Microservices in the praxis.
However, there is never a silver bullet and Sam gives us only a guideline and advice (a heuristic if you like) based on his projects.
My take
I can give you my experience. Don't start a microservice project by reasoning about shared libraries. Just don't create them in the beginning and accept some code repetition between services. Invest time in DDD and the quality of your Domain Objects and Service Boundaries. Learn along the way which parts are stable and which evolve fast.
Once you or your team have gained enough insight, you can refactor some parts into libraries. Such refactoring is usually very cheap in comparison to the reverse approach.
And these libraries should probably cover some boilerplate code and be focused on one task - have several of them, not one common-lib-for-everything. In the comment above, Oswin Noetzelmann gave some advice on how to proceed. Taking his approach to the maximum would lead to good, focused libraries and not toxic "shared libraries".
A good example of tight coupling where duplication would be acceptable is a shared library defining the interface/DTOs between services - in particular, using the same classes/structs to serialize/deserialize data.
Let's say you have two services - A and B - that both accept slightly different but overall almost identical-looking JSON input.
It would be tempting to make one DTO describing the common keys, plus the very few keys specific to service A and service B, and put it into a shared library.
For some time the system works fine. Both services add the shared library as a dependency, and they build and run properly.
With time, though, service A requires some additional data that changes the structure of the JSON where it was the same before. As a result you can't use the same classes/structs to deserialize the JSON for both services at the same time - the change is needed for service A, but then service B won't be able to deserialize the data.
You must change the shared library, add the new feature to service A and rebuild it, then rebuild service B to adjust it to the new version of the shared library even though no logic has changed there.
Now, had you defined the DTOs separately and internally for both services from the very beginning, their contracts could later evolve separately and safely in any direction you could imagine. Sure, at first it might have looked smelly to keep almost the same DTOs in both services, but in the long run it gives you the freedom to change.
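Here is a minimal sketch of the "own DTOs per service" variant (play-json is assumed here; field names are made up): service A adds its new field without touching service B, whose reader simply ignores the extra key.

```scala
import play.api.libs.json.{Json, Reads}

// Service A's view of the payload: it now needs the extra "priority" field.
object ServiceA {
  case class OrderDto(id: String, amount: BigDecimal, priority: Int)
  object OrderDto { implicit val reads: Reads[OrderDto] = Json.reads[OrderDto] }
}

// Service B keeps its own, unchanged view of the same JSON; nothing has to be
// rebuilt or redeployed here when service A evolves.
object ServiceB {
  case class OrderDto(id: String, amount: BigDecimal)
  object OrderDto { implicit val reads: Reads[OrderDto] = Json.reads[OrderDto] }
}

object Demo extends App {
  val payload = """{"id":"o-1","amount":12.50,"priority":3}"""
  println(Json.parse(payload).as[ServiceA.OrderDto])  // parses, including the new field
  println(Json.parse(payload).as[ServiceB.OrderDto])  // extra "priority" key is simply ignored
}
```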
At the end of the day, (micro)services don't differ that much from a monolith. Separation of concerns and isolation are critical. Some dependencies can't be avoided (language, framework, etc.), but before you introduce any additional dependency yourself, think twice about the future implications.
I'd rather follow the given advice - duplicate DTOs and avoid shared code unless you really can't avoid it. It has bitten me in the past. The above scenario is a trivial one, but it can be much more nuanced and affect many more services. Unfortunately, it hits you only after some time, so the impact can be big.
There is no absolute answer to this. You'll always find an example of a reasonable exception to the rule. We should take this as a 'guideline'.
With that being said, yes, coupling between services is something to avoid, and a shared library is a warning sign for coupling.
As other answers have explained, microservice lifecycles should be independent.
And as for your example, I think it strongly depends on what kind of logic / responsibilities the library has.
If it is business logic, something is odd. Maybe you need to split the library into different libraries with different responsibilities; if that responsibility is unique and can't be split, you should wonder whether those two services should really be one. And if the library holds business logic that feels out of place in those two services, most likely that library should be a service in its own right.
Each microservice is autonomous, so each executable has its own copy of the shared libraries - so is there really coupling through a shared library?
Spring Boot even packages the language runtime inside the microservice's package.
Nothing is shared, not even the runtime, so I don't see a problem in using a library or a common package in a microservice.
If a shared library creates coupling between microservices, then isn't using the same language in different microservices also a problem?
I was also confused while reading "Building Microservices" by Sam Newman.
In keeping with the spirit of a microservice architecture, I'm pondering using a git repository for each service of my Scala+Akka-based system. The build for each service produces an artifact that is published to a package repository (e.g. Maven). These artifacts are the mechanism used for sharing common code.
Now, since we use case classes for message passing between services, the same class version needs to be available everywhere. Would it be advantageous to separate each service into interface and implementation artifacts using a multi-project build, and then only import the interface artifact from the dependent projects?
Some alternatives would be to include both the interface and the implementation in the same artifact and import that, or to have separate repos for the interface and the implementation, which seems overkill and likely too much overhead.
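For concreteness, the multi-project split I have in mind would look roughly like this (a sketch only; organization, names and versions are made up):

```scala
// build.sbt (sketch): each service publishes a thin "api" artifact containing
// only the message case classes, plus a separate implementation artifact.
lazy val userServiceApi = (project in file("user-service-api"))
  .settings(
    organization := "com.mycompany",
    version      := "1.2.0",
    libraryDependencies := Nil            // keep the message contract dependency-free
  )

lazy val userServiceImpl = (project in file("user-service-impl"))
  .dependsOn(userServiceApi)
  .settings(
    libraryDependencies ++= Seq(
      "com.typesafe.akka" %% "akka-actor" % "2.6.20"
    )
  )

// A dependent service would then pull in only the published api artifact:
// libraryDependencies += "com.mycompany" %% "user-service-api" % "1.2.0"
```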
Here's where you will get the two viewpoints on microservice-based designs: share everything and share nothing. I'm on the share-nothing side. Agree on a communication interface (like JSON or some other serialization mechanism) and allow each service to handle the domain-object representations separately. Here's why:
If one service updates its code base, another is free not to update until it absolutely has to in order to keep interfacing properly. It also means that your parsing libs can interpret the objects as needed and ignore fields they don't care about.
Logic tends to find its way into things. Worse, business logic tends to find its way into little "helper" methods on classes, even case classes. This can couple services in seemingly benign ways... right up until it's no longer benign.
Let's assume that I have a couple of microservices, each exposing a set of REST endpoints. Assume that microservice A is communicating with microservice B and they exchange JSON data.
This JSON data needs to be serialized and deserialized by both microservice A and B. The serialization logic and the models are going to be the same in both microservice code bases.
I can reduce this duplication by moving the model classes into a small dependency and using it in both microservices. Not a problem! This might go against the "share nothing" goal of a microservice architecture, but I feel the even bigger potential problem to address is code duplication. What do you guys think?
I do not see how 'share nothing' applies in this scenario. As long as you keep your de/serializer as an artifact in some Nexus, you do not "share" anything; instead you are using a (somewhat) external library.
If you use logging, for example, both of your projects will use e.g. slf4s, but they do not share it, as each uses it separately.
There are a number of things to bear in mind when separating functionality into communicating microservices:
Tying of scala versions between server and client
If your server requires a specific version of Scala (because, for example, you use a library that only exists for version 2.10), this should not impact your choice of Scala version in the client. This points towards having the classes that represent your communication protocol live in a separate project which can be cross-compiled separately.
Tying of libraries between server and client
The fewer requirements your shared library places on your client code, the better. Even forcing a particular choice of Play server imposes a level of rigidity and coupling between client and server that is best avoided.
The best option is for this library to depend on zero other libraries.
Supporting protocol changes over time
One of the advantages of having separate services is that they can be upgraded and improved at separate points in time. You should always try to have the server support the previous version of your communication protocol whenever it changes. This allows you to roll back an update easily, and also to update the client at a different point in time.
Not allowing backwards compatibility means you need to update both services in lock-step. This not only reduces a lot of the advantages of using micro-services, it also makes it a huge pain to deal with rollbacks, if that becomes necessary.
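As a small sketch of what backwards compatibility can look like at the message level (play-json assumed here; field names are made up), a newly added field can be optional so that payloads from clients still speaking the previous protocol version keep deserializing on the server:

```scala
import play.api.libs.json.{Json, Reads}

// Version 2 of the message adds "locale", but as an Option so that version 1
// payloads (which lack the key) still deserialize on the server.
case class CreateUser(name: String, email: String, locale: Option[String])

object CreateUser {
  implicit val reads: Reads[CreateUser] = Json.reads[CreateUser]
}

object CompatDemo extends App {
  val v1 = """{"name":"Ada","email":"ada@example.org"}"""
  val v2 = """{"name":"Ada","email":"ada@example.org","locale":"en-GB"}"""

  println(Json.parse(v1).as[CreateUser])  // locale = None: old clients keep working
  println(Json.parse(v2).as[CreateUser])  // locale = Some("en-GB")
}
```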
The universal story here is to impose as little as possible on the client in the way of choices made on the server (Scala version, library version, the point in time when protocol changes must happen).
If you can follow this approach, I don't see a problem with using code to enhance the accessibility of talking to a service.
I am working with an application that has some Wicket pages, divided into several Applications. We are expanding the Wicket development to replace other legacy content. Right now, there is no clear path as to whether to write a new Wicket Application for each workflow, or whether we should have one big Application with many URL mappings. I have not found any information about this either.
So far, we see the following issues:
Many Wicket Applications pattern:
Each Application (workflow) can be mounted without much of a hassle.
Even if it's not more time-consuming, you end up writing more Java classes (for each Application you need at least some basic structure).
Each Application's default URL is accessed via its home page, so no further config is necessary.
One big Application pattern:
Each workflow needs a Page, which has to be mapped in the Application class. As far as I've seen, there is no XML-file configuration to achieve this, but it should be possible to develop a schema that allows this to be structured in an XML file. Disadvantage: more time-consuming the first time.
For further additions, it should be somewhat easier than with the multiple-Applications pattern, but not enough to make a real difference, considering that the workflow development is always much bigger than the initial config.
Each workflow's default URL is accessed via the URL mapping and can be changed easily; it seems a little easier than with the multiple-Applications approach, but doesn't make a big difference either.
Now, what I'm looking for:
Opinions based on experience, or arguments for deciding one way or the other.
Is there any documentation from Apache or another source on this? If so, a reference would be greatly appreciated.
As I understand it, you would still deploy all of your Wicket Applications within one single Web Archive.
Doing that, in my opinion, you lose the only real advantage of separating your code into different Wicket Applications. If you separate your code into multiple Wicket Application classes:
you have to remember to configure each Wicket Application the same way and not forget a single one (include it in web.xml, apply the same settings in the init() method, ...)
you are writing more boilerplate code, as you already said yourself
The configuration and code would be more complex than with the "single application" approach. With a single application:
you only have to mount the start page of each workflow in your single application class... which is one line of code, compared to a new class and some lines of web.xml config with the multiple-applications approach.
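For illustration, the single Application class then looks roughly like this (a sketch in Scala against Wicket's Java API; the page classes are placeholders without markup):

```scala
import org.apache.wicket.Page
import org.apache.wicket.markup.html.WebPage
import org.apache.wicket.protocol.http.WebApplication

// Placeholder pages standing in for the real workflow start pages.
class DashboardPage extends WebPage
class OrderWorkflowPage extends WebPage
class BillingWorkflowPage extends WebPage

// The single Application class: adding a workflow is one mountPage(...) line.
class PortalApplication extends WebApplication {

  override def getHomePage: Class[_ <: Page] = classOf[DashboardPage]

  override def init(): Unit = {
    super.init()
    mountPage("/orders",  classOf[OrderWorkflowPage])
    mountPage("/billing", classOf[BillingWorkflowPage])
  }
}
```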
So, if you don't want to deploy your workflows separately, I'd go with a single application. It makes things so much easier. Especially once you have accumulated more than a couple of workflows, the single-application approach will probably be much easier to maintain.
How much shared code are you likely to have?
Are there different performance/load tolerance/availability requirements for the different workflows?
These are the questions I use in general to decide whether two things should go in one application or not, and that's pretty much independent from Wicket.
Obviously, a lot of shared code points towards a single application. Of course, you can still use separate applications that all depend on a set of shared modules, but in practice you'll spend a lot of time trying to keep your modules in sync.
Similarly, wildly different availability requirements might steer you in the direction of separate applications as you'd probably want to deploy them separately.
The most difficult scenario is when you have a lot of shared code AND you still want to deploy separately; in that case a multi-tiered approach (multiple frontends connecting to a common backend) might be worth considering.