Is my understanding of the different ways commands can be delivered to a CQRS-based application correct?
1) A CQRS application can receive commands in two ways:
a) it implements a Command Bus, in which case the client puts a command onto the Command Bus, which delivers it to the server; or
b) it implements "regular" Application Services, which the client can then call directly?
2) If, instead of using a Command Bus, the client can send a command by simply calling an Application Service, does that suggest the Command Bus is just an implementation detail of CQRS, and thus that CQRS may be implemented without it?
3) If a CQRS application uses a Command Bus, couldn't we argue that in that case the Application Services exist in the form of Command Handlers?
4) If the client doesn't use a Command Bus, but instead calls a regular Application Service, is it then the responsibility of the called Application Service to create a command object and delegate it to the appropriate Command Handler?
thanks
You're confusing things. CQRS simply means having at least two models: one for writes (commands) and at least one for reads (queries). That's it. If you want to use a service bus, that's OK; calling a service directly is OK as well. CQRS is the concept; how you implement it is up to you.
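A minimal Java sketch of that idea, with illustrative names of my own (both sides share one in-memory store here only to keep the example short):

```java
import java.util.Map;

// Write model: the domain entity enforces the business rules.
class Customer {
    private final String id;
    private String name;
    Customer(String id, String name) { this.id = id; this.name = name; }
    void rename(String newName) {
        if (newName == null || newName.isBlank())
            throw new IllegalArgumentException("name must not be blank");
        this.name = newName;
    }
    String id() { return id; }
    String name() { return name; }
}

// A command targets the write model...
record RenameCustomer(String customerId, String newName) {}

class CustomerWriteService {
    private final Map<String, Customer> store; // stand-in for a real repository
    CustomerWriteService(Map<String, Customer> store) { this.store = store; }
    void handle(RenameCustomer cmd) {
        store.get(cmd.customerId()).rename(cmd.newName());
    }
}

// ...while a query goes to a separate read model that returns plain DTOs
// shaped for the screen, with no business logic on this side.
record CustomerSummary(String id, String displayName) {}

class CustomerReadService {
    private final Map<String, Customer> store;
    CustomerReadService(Map<String, Customer> store) { this.store = store; }
    CustomerSummary summary(String id) {
        Customer c = store.get(id);
        return new CustomerSummary(c.id(), c.name());
    }
}
```

Whether the write side is fronted by a bus or called directly changes nothing about this split; that's the sense in which the bus is optional.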
Just a guess:
The command bus is a technology strategy used to decouple the client from the command handlers. In this case, all the client needs on its side is a single, simple interface.
Command handlers behind a bus and application services are just two flavors of the application-layer API.
The application service is the "classic" approach, while command handlers are a design created with a distributed environment in mind (you can add multiple nodes to handle heavy or heavily used commands).
Neither is directly related to CQRS.
The command bus is just a layer of abstraction; it makes things simpler for the client, which only has to use one interface: $commandBus->dispatch($command);
An example of an application service without a command bus: https://github.com/VaughnVernon/IDDD_Samples/tree/master/iddd_collaboration/src/main/java/com/saasovation/collaboration/application/forum
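To make the two flavors concrete, here is a hedged Java sketch (the interface and class names are mine, not from any framework). The client either calls the application service directly, or dispatches the same command through a bus that routes it to a registered handler:

```java
import java.util.HashMap;
import java.util.Map;

// The command object is the same in both flavors.
record PlaceOrder(String orderId, String productId) {}

// Flavor 1: the client calls an application service directly.
class OrderApplicationService {
    void placeOrder(PlaceOrder cmd) {
        System.out.println("placing order " + cmd.orderId());
        // load aggregate, invoke behavior, persist...
    }
}

// Flavor 2: the client only sees a bus; handlers are registered behind it.
interface Handler<C> { void handle(C command); }

class CommandBus {
    private final Map<Class<?>, Handler<?>> handlers = new HashMap<>();
    <C> void register(Class<C> type, Handler<C> handler) {
        handlers.put(type, handler);
    }
    @SuppressWarnings("unchecked")
    <C> void dispatch(C command) {
        ((Handler<C>) handlers.get(command.getClass())).handle(command);
    }
}

public class TwoFlavors {
    public static void main(String[] args) {
        // Direct call:
        new OrderApplicationService().placeOrder(new PlaceOrder("o-1", "p-9"));

        // Via the bus; the handler plays the application-service role:
        CommandBus bus = new CommandBus();
        bus.register(PlaceOrder.class,
                cmd -> System.out.println("handling order " + cmd.orderId()));
        bus.dispatch(new PlaceOrder("o-2", "p-9"));
    }
}
```

Note how in the second flavor the handler does exactly what the application service method did, which is the point made in question 3 above.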
The classic EDA example involves a command triggering events, like a chain of dominoes.
PlaceOrder -> OrderPlaced -> PaymentSucceeded -> OrderShipped
Typically the Order Service listens to events along the way to keep the status of the order updated, presumably (and this is the part that every article skips!) because at some point the Order Service will receive a ViewOrder command, which will require a response beyond "OK".
So my question is: in an EDA, do at least some of your services have to react to both events and commands?
If not, what architecture could separate the "command world" (required for supporting an HTTP API) from the "event world" of services performing async processing?
In my experience, every microservice we've built does both things. Participating in the messaging plane (publishing and/or subscribing) is always a requirement, and in most cases exposing at least one API endpoint is also a requirement. In fact, I don't believe we have any live services that don't expose an API endpoint, although we have a few that probably could be that way.
So far, we've not run into a case where there was value in splitting a service into separate parts for API serving vs. event-bus interaction. I wouldn't say that's impossible, but we are very focused on services encapsulating a (functional) domain without much concern for implementation. That has allowed us to take a very formulaic approach to creating the services themselves, which is a big part of why we chose this architecture style.
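As a hedged sketch of what such a dual-facing service might look like (the class and method names are hypothetical, and the HTTP and bus wiring is elided), one object can back both the synchronous API and the event subscriptions:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical OrderService that serves a synchronous command/query API
// and also subscribes to events from the messaging plane.
class OrderService {
    private final Map<String, String> statusByOrderId = new ConcurrentHashMap<>();

    // Command/query side: invoked by the HTTP layer, returns a real response.
    String handleViewOrder(String orderId) {
        return statusByOrderId.getOrDefault(orderId, "UNKNOWN");
    }

    void handlePlaceOrder(String orderId) {
        statusByOrderId.put(orderId, "PLACED");
        // ...publish an OrderPlaced event to the bus here...
    }

    // Event side: invoked by the message-bus subscription.
    void onPaymentSucceeded(String orderId) { statusByOrderId.put(orderId, "PAID"); }
    void onOrderShipped(String orderId)     { statusByOrderId.put(orderId, "SHIPPED"); }
}
```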
I have an aggregate root with the business logic in a C# project. Also in the solution is a REST Web API project that passes commands/requests to the aggregate root to do work and handle queries. This is my microservice. Now I want some of my events/commands/requests to come off a message queue. I'm considering this:
Put a console app in the solution to listen for messages from a message queue, and reference the aggregate-root project in the console app.
Is it a bad pattern to share "microservice business logic" between two services? Because now I have two "services", an API and a console app, doing the work, and I would have to ensure that when the business logic changes, both services are deployed.
Personally I think it is fine to do what I suggest; a good CI/CD pipeline should mitigate that. But are there any other cons I might have missed?
For some background I would suggest watching DDD & Microservices: At Last, Some Boundaries! by Eric Evans.
A bounded context is the microservice. How you surface it is another matter. What you describe seems to be what I actually do quite frequently. I have an Identity & Access open-source project that I'm working on (so depending on when you read this it may be in a different state) that demonstrates this structure.
Internal to an organization, one may access the BC either via a service bus or via the web API. External parties would use only the web API, as messaging should not be exposed.
The web API either returns data from the query layer or sends commands via the service bus (messaging) to the BC's functional endpoint. Depending on the complexity of the system, I may introduce an orchestration concern that interacts with multiple BCs. It is probably a BC in its own right, much along the lines of a reporting BC.
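The shape being described, which also answers the question above, is one shared core with two thin transport adapters. A minimal sketch, written in Java for illustration (the same structure applies to the C# solution; all names are hypothetical):

```java
// Shared "business logic" project: one handler, no transport concerns.
class ShipOrderHandler {
    void handle(String orderId) {
        System.out.println("shipping " + orderId); // aggregate-root work here
    }
}

// Host 1: thin web API adapter (HTTP plumbing elided) delegating to the core.
class WebApiHost {
    private final ShipOrderHandler handler = new ShipOrderHandler();
    void onHttpPost(String orderId) { handler.handle(orderId); }
}

// Host 2: thin queue-listener adapter delegating to the same core.
class QueueListenerHost {
    private final ShipOrderHandler handler = new ShipOrderHandler();
    void onMessage(String orderId) { handler.handle(orderId); }
}
```

Because both hosts are only adapters, "business logic changed, redeploy both" reduces to rebuilding two thin shells around the one project that actually changed.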
We are embarking on a new project in which multiple microservices will communicate with each other to provide information in a cloud-native system. Our application will be decomposed into services such as Text Cleaner, Entities Extractor, Entities Resolver, and Output Converter. As you can see in the diagram, we have some forking, where the input to one service is required by another service, and so forth.
Only one service is going to be exposed externally; the others will be internal. And we have to provide a synchronous response to clients.
I wanted to check whether someone can guide me to the best patterns here:
1- Should we have one wrapper class that holds the model classes for all the projects, since all of the details are needed in the final Output Converter? Or how should the data flow so that it ends up assembled in the last microservice? We want to keep the systems loosely coupled and are thinking about how to orchestrate this flow without a middle layer that composes all of this data.
2- How should we orchestrate this flow? A service mesh? An API gateway?
This looks like a workflow-based solution. When so many steps are involved, the only response you can give the consumer is that the request was accepted, and the process then starts in the background. You cannot let the consumer wait for very long, because they will get a connection timeout.
If all these services are deployed on different servers (which should be the case, given the microservices emphasis on scalability), you can communicate via HTTP or via some messaging solution like JMS; or, if you are deployed in the cloud, the cloud providers offer workflow-based services.
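If the pipeline is fast enough to answer within the client's timeout, the one externally exposed service can act as the orchestrator itself, chaining synchronous calls. A hedged Java sketch, assuming each internal stage exposes an HTTP endpoint (the URLs are hypothetical):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// The exposed service calls each internal stage in order, feeding the
// output of one stage into the next. Only the orchestrator knows the
// overall flow; each stage knows only its own contract, which keeps
// the stages loosely coupled.
public class PipelineOrchestrator {
    private final HttpClient http = HttpClient.newHttpClient();

    // Hypothetical internal endpoints; adjust to your deployment.
    private static final String[] STAGES = {
        "http://text-cleaner/clean",
        "http://entities-extractor/extract",
        "http://entities-resolver/resolve",
        "http://output-converter/convert",
    };

    public String process(String input) throws Exception {
        String payload = input;
        for (String stage : STAGES) {
            HttpRequest request = HttpRequest.newBuilder(URI.create(stage))
                    .POST(HttpRequest.BodyPublishers.ofString(payload))
                    .build();
            payload = http.send(request, HttpResponse.BodyHandlers.ofString()).body();
        }
        return payload; // synchronous response for the client
    }
}
```

If the pipeline is too slow for this, fall back to the accept-then-process-in-background pattern described above.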
Is there a conventional way to write a program such that commands can be issued to it from the command line, without a REPL? For example, the way you can send commands to a running nginx server using sudo /etc/init.d/nginx restart (or any other valid command besides restart).
One idea I had was having the long-running program create and monitor a Unix socket that other programs can write to in order to send it commands. Another was to create a local server with a REST interface that can be sent commands that way, though that seems a bit gross.
What's the right way to do this?
Both ways are OK, and you could even consider using some RPC machinery, such as making your application serve JSONRPC on some unix(7) socket. Or use a fifo(7). Or use D-Bus.
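As a hedged sketch of the Unix-socket approach in Java (it requires JDK 16+ for java.net.UnixDomainSocketAddress; the socket path and command names are my own):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.StandardProtocolFamily;
import java.net.UnixDomainSocketAddress;
import java.nio.channels.Channels;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.nio.file.Files;
import java.nio.file.Path;

// Long-running daemon that accepts one-line commands on a Unix socket.
// A client can then send a command with e.g.:
//   echo reload | nc -U /tmp/mydaemon.sock
public class CommandSocketDaemon {
    public static void main(String[] args) throws Exception {
        Path socketPath = Path.of("/tmp/mydaemon.sock"); // hypothetical path
        Files.deleteIfExists(socketPath);
        try (ServerSocketChannel server =
                 ServerSocketChannel.open(StandardProtocolFamily.UNIX)) {
            server.bind(UnixDomainSocketAddress.of(socketPath));
            while (true) {
                try (SocketChannel client = server.accept();
                     BufferedReader in = new BufferedReader(
                         new InputStreamReader(Channels.newInputStream(client)))) {
                    String command = in.readLine();
                    if ("reload".equals(command)) {
                        System.out.println("reloading configuration...");
                    } else if ("stop".equals(command)) {
                        return; // shut down cleanly
                    }
                }
            }
        } finally {
            Files.deleteIfExists(socketPath);
        }
    }
}
```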
A common habit on Unix is to have applications reload their configuration files on e.g. the SIGHUP signal, and save some persistent state (before terminating) on SIGTERM. Read signal(7) (notice that only async-signal-safe routines can be called from signal handlers; a good approach is to only set some volatile sig_atomic_t variable inside the handler and test it outside). See also the POSIX signal.h documentation.
You might make your application become a specialized HTTP server (e.g. using some HTTP server library like libonion) and give it some web interface (or REST, or SOAP ...); the user (or sysadmin) will then use a browser to interact with your application.
You could make your server systemd-compatible. (I don't know exactly what that requires; it is perhaps D-Bus related.)
You could embed some command interpreter (like Guile or Lua) in your app and have some limited kind of REPL running over some IPC channel like a socket or a fifo. Beware of nasty code injection.
I had a similar issue: I have a plethora of services running on any number of machines, and each needs to communicate with several others.
My main problem was not so much the communication between the services. That can be done with a simple message sent over a connection (as Basile mentioned, it can be TCP, UDP, Unix sockets, FIFOs...). However, when you have over 20 services, many of which need to communicate with several others, you start having a headache over how to get all the connections right (I have such a system, and although it has a relatively limited number of services, just 10 or so, it's already very complicated).
So I created a process (yet another service) called Communicator. All services connect to the Communicator service, and when they need to send a message, they include the name of the service they want to reach. The Communicator service is in charge of sending the message to the right place, i.e. possibly to another Communicator service running on a different computer. Communicator has a graph of all the services available on your network and knows how to send messages to them without your service having to know anything about that. Computing such a graph can be really complex.
For this purpose, I created the eventdispatcher project. It is in C++, which may not be what you're interested in, although you could use it from other languages that interface with C/C++. The structure of the messages is "proprietary" (specific to the Communicator), but you can create any message you want. A message includes a name and parameters (param-name=value). The first version has a simple one-line text communication system; the newer version accepts JSON as well (still one line of text per message).
The system supports TCP, UDP, Unix sockets, and FIFOs, and between threads you can have thread-safe FIFOs. It also understands signals (like SIGHUP, SIGTERM, etc.) and has a specific connection to listen for the death of a thread. It supports encryption over TCP via OpenSSL. Messages can be dispatched automatically (hence the current name of the library). Connections are assigned a timer. And there are CUI and GUI (Qt) extensions as well.
The one main point here is that all your connections can be polled (see poll()), and thus you can implement a system that reacts to events instead of one that sleeps and checks for events, sleeps and checks, and so on, or worse, has a single blocking connection on which everything has to happen, or your service gets stuck. This is one reason Unix has been using signals: early versions of Unix did not have select() or poll().
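The answers above are C/C++ flavored; for comparison, the closest analogue of poll() in Java is an NIO Selector. A minimal sketch of that react-to-events loop (port number and dispatch logic are placeholders):

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

// Single-threaded event loop: the thread blocks in select() (poll()/epoll
// under the hood on Linux) and wakes only when a connection has work.
public class EventLoop {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(9000));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buffer = ByteBuffer.allocate(4096);
        while (true) {
            selector.select(); // sleeps until at least one event is ready
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    buffer.clear();
                    if (client.read(buffer) < 0) {
                        client.close(); // peer disconnected
                    } else {
                        // dispatch the received message to a handler here
                    }
                }
            }
        }
    }
}
```

No connection ever blocks the others, which is exactly the property the Communicator design relies on.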
I would like to send all data received by Fuse on a specific topic to a Business Process in BPM Studio. Is there any way to do this?
Example:
I send a value to 'testTopic' in Fuse. Fuse then sends this value to a Business Process (or the Business Process retrieves it), and the Business Process does things based on the value received, like sending another value to another topic.
Is something of this kind possible?
Yes, it most definitely is possible, although you would need to route from 'testTopic' to one of the JMS queues that jBPM can listen on, and transform the message to reflect a valid jBPM command. The general principle is described in the documentation at http://docs.jboss.org/jbpm/v6.0/userguide/jBPMRemoteAPI.html#d0e12149. The real power becomes clear when you look at all the jBPM commands you can send, in the packages
org.drools.core.command.runtime.process (Maven: org.drools:drools-core)
and
org.jbpm.services.task.commands (Maven: org.jbpm:jbpm-human-task-core).
When talking to it from the outside world, it would typically be necessary to identify a correlationKey in the process, which is basically the "business key" that can be used to identify a process instance uniquely, e.g. an 'ApplicationNumber' for an application process. This can then be used to identify which process you want to signal, abort, etc.
Since you are working in Fuse, you should probably also consider routing that message to the jBPM REST API described at http://docs.jboss.org/jbpm/v6.0/userguide/jBPMRemoteAPI.html#d0e10088. This may simplify your code a bit, because it is a more synchronous API. The drawback, however, is that a REST-over-HTTP invocation typically does not respect the local transaction.
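For illustration, here is a hedged sketch of what that route might look like in the Camel Java DSL used by Fuse. The host, deployment id, process id, and REST path are assumptions based on the jBPM 6 remote API docs linked above; verify them (and add URL-encoding of the body) against your installation:

```java
import org.apache.camel.builder.RouteBuilder;

// Forwards each message arriving on 'testTopic' to the jBPM 6 remote
// REST API, starting a new process instance per message.
public class TopicToJbpmRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("activemq:topic:testTopic")
            .setHeader("CamelHttpMethod", constant("POST"))
            // map_value passes the received body as the 'value' process
            // variable; deployment and process ids below are hypothetical.
            .toD("http://jbpm-host:8080/jbpm-console/rest/runtime/"
               + "com.example:my-kjar:1.0/process/my.process.id/start"
               + "?map_value=${body}");
    }
}
```

The process can then read the 'value' variable and react to it, including publishing a result to another topic, as in your example.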