I have recently watched several videos about ES and CQRS models, as well as a few talks about Akka Persistence. I know what they are about, but I have trouble writing actual code that will execute.
I have a few questions though.
How should I make the view and the event stack communicate?
Will events be passed between a view and a persistent actor that share the same persistence id?
What are the persistent actor and the view responsible for according to the model?
Edit: Where should I place my business logic? According to the model I should do that on the write side, but what if I need to check something on the read side to validate a command?
You shouldn't need to check something in your read model to validate a command - your command would execute against your write model.
Your business logic would go on your write side; if using Akka, then inside an actor.
You haven't said what your view is, so there could be multiple ways of communicating - for example, if it's a web page you could do request/response, or use something like SignalR.
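For what it's worth, here is a minimal sketch of keeping the business logic on the write side with a classic Akka PersistentActor; the RegisterUser command, UserRegistered event and the duplicate-name rule are made up purely for illustration:

```scala
import akka.persistence.PersistentActor

// Hypothetical command and event for the example
case class RegisterUser(name: String)
case class UserRegistered(name: String)

class UserActor extends PersistentActor {
  override def persistenceId: String = "user-actor-1"

  private var registered = Set.empty[String]          // in-memory write-model state

  override def receiveCommand: Receive = {
    case RegisterUser(name) if registered.contains(name) =>
      sender() ! "rejected: already registered"        // validate against write-model state
    case RegisterUser(name) =>
      persist(UserRegistered(name)) { event =>         // persist the event, then update state
        registered += event.name
        sender() ! "accepted"
      }
  }

  override def receiveRecover: Receive = {
    case UserRegistered(name) => registered += name    // rebuild state from the journal on recovery
  }
}
```

The point is that validation happens against state the actor rebuilt from its own journal, so the read side never needs to be consulted.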
I started with reading about CQRS and I'm a little confused.
Is it allowed to call the read side within the write side to get additional information?
At http://cqrs.nu/Faq/command-handlers they say it is not allowed, but in the CQRS Journey code I found that they call a service 'IPricingService' which internally uses a DAO service class.
So what must I do to get additional information inside my aggregate root?
CQRS Journey should not be seen as a manual. It is just the story of one team fighting their way to CQRS within the limitations of using only the Microsoft stack. Per se, you should not use your read model in command handlers or domain logic. But you can query your read model from the client to fetch the data you need for your command and to validate the command.
Since I got some downvotes on this answer, I need to point out that what I wrote is the established practice within the pattern. Neither does the read side access the write side, nor does the write side get data from the read side.
However, the definition of "client" could be a subject of discussion. For example, I would not trust a public-facing JS browser application to be a proper "client". Instead, I would use my REST API layer as the "client" in CQRS, and the web application would be just a UI layer for this client. In this case, the REST API service call processing is a legitimate read-side reader, since it needs to validate everything the UI layer sends to prevent forgery and to validate some business rules. When this work is done, the command is formed and sent over to the write side. The validation and everything else are synchronous, and command handling is then asynchronous.
UPDATE: In light of some disagreements below, I would like to point to Udi's article from 2009 talking about CQRS in general, and commands and validation in particular.
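To make the "REST API layer as the client" idea concrete, here is a hedged sketch; ReadModelDao, PlaceOrder and the existence check are hypothetical names, not part of any particular framework:

```scala
// Hypothetical read-model DAO and command, purely for illustration
trait ReadModelDao { def customerExists(id: String): Boolean }
case class PlaceOrder(customerId: String, amount: BigDecimal)

class OrderEndpoint(readModel: ReadModelDao, commandBus: PlaceOrder => Unit) {
  // The REST layer acts as the CQRS "client": it validates the request
  // against the read model synchronously, then dispatches the command,
  // which the write side handles asynchronously.
  def placeOrder(customerId: String, amount: BigDecimal): Either[String, String] =
    if (!readModel.customerExists(customerId))
      Left("unknown customer")                     // reject before a command is ever formed
    else {
      commandBus(PlaceOrder(customerId, amount))   // fire-and-forget to the write side
      Right("command accepted")
    }
}
```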
The CQRS FAQ (http://cqrs.nu/Faq) suggests:
"How can I communicate between bounded contexts?
Exclusively in terms of their public API. This could involve subscribing to events coming from another bounded context. Or one bounded context could act like a regular client of another, sending commands and queries."
So although within one BC it's not possible to use the read side from the write side and vice versa, another bounded context or service could. In essence, this would be acting like a human using the user interface.
Yes, assuming you have accepted the eventual consistency of the read side. Now the question is where. Although there is no hard rule on this, it is preferable to pass the data to the command handler as opposed to retrieving it inside. From my observation there are two ways (see the sketch after this list):
Do it in a domain service
Basically, create a layer where you execute the necessary queries to build the data. This is as straightforward as doing API calls. However, if your microservices run on Lambda/Serverless it's probably not a great fit, as we tend to avoid a situation where one Lambda is calling another Lambda.
Do it on the client side
Have the client query the data then pass it to you. To prevent tampering, encrypt it. You can implement the decryption in the same place you validate and transform the DTO to a command. To me this is a better alternative as it requires fewer moving parts.
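Here is the sketch referred to above; CurrentPrices, PlaceBid and the price rule are invented for the example, the point being only that the command carries the pre-fetched read-side data:

```scala
// Invented types: the command carries the data the client or domain service already fetched
case class CurrentPrices(prices: Map[String, BigDecimal])
case class PlaceBid(itemId: String, bid: BigDecimal, prices: CurrentPrices)

object BidHandler {
  // The handler validates against the data handed to it; it never queries the read side itself
  def handle(cmd: PlaceBid): Either[String, String] =
    cmd.prices.prices.get(cmd.itemId) match {
      case Some(current) if cmd.bid > current => Right("bid accepted")
      case Some(_)                            => Left("bid below current price")
      case None                               => Left("unknown item")
    }
}
```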
I think it depends.
If in your architecture the "command side" updates the projections in real time (synchronously), you could do that by calling the query API (although that seems strange).
But if your projections (query side) are updated asynchronously, it would be a bad idea to do it. There would be a possibility of getting stale ("unreal") data.
Maybe this situation suggests a design problem that you should solve.
For instance: if from one context/domain you think you need information from another, it could be a domain definition problem.
I assume this because reading data from itself (the same domain) during a command operation doesn't make much sense (in that case it could be an API design problem).
I am trying to understand the CQRS architecture style. I found a sample image of a CQRS architecture.
http://blog.trifork.com//wp-content/uploads/2010/01/cqrs_architecturehighlevel.png
If command handling persists data to the database, why does event handling update storage too?
Example:
If I have a CreateUserCommand, where does persistence take place: in command handling or in event handling?
Thank You
From the diagram, it looks to me like a CQRS and event-sourced architecture. This means that the domain model will generate events in response to commands. Unlike DTOs or view models, you store them in an event store. The event store holds the state transitions for the domain but is not used for the front end. For the front end you need a read model, which you generate from the events. Hence the need for the event handlers to write to the database. Of course, they are writing to the read model, not the event store. I have a similar diagram with a more detailed explanation on my blog. You can find the post here: CQRS: A Step by Step Guide to The Flow of a Typical Application. I hope you find it helpful.
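As a rough sketch of that split (UserCreated, the event store and the read model below are placeholder names, not a real framework API):

```scala
// Placeholder event and stores for illustration
case class UserCreated(id: String, name: String)

class EventStore { // command side: append-only log of domain events
  private var events = Vector.empty[Any]
  def append(e: Any): Unit = events :+= e
}

class UserReadModel { // query side: a denormalised view the UI reads from
  private var usersByName = Map.empty[String, String]
  def project(e: UserCreated): Unit = usersByName += (e.name -> e.id)
  def findIdByName(name: String): Option[String] = usersByName.get(name)
}

// The "event handler" in the diagram: it updates the read model, not the event store
class UserProjection(readModel: UserReadModel) {
  def handle(e: Any): Unit = e match {
    case uc: UserCreated => readModel.project(uc)
    case _               => () // ignore events this projection doesn't care about
  }
}
```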
I'm looking for a reference to a good implementation of event-driven architecture based on Zend Framework. Could you share your experience on this topic?
I've found two solutions, but haven't used them yet:
http://framework.zend.com/wiki/display/ZFPROP/Zend_Event+-+Alvar+Vilu
http://components.symfony-project.org/event-dispatcher/
Edit:
Example:
http://www.slideshare.net/beberlei/towards-the-cloud-eventdriven-architectures-in-php
I don't have much practical experience in this topic, but since no one else seems to be replying, I suppose I'll share what I think of this...
This is perhaps a bit of a tricky thing in PHP apps, since they typically only run for the duration of a request, so the benefit of being able to subscribe to and listen for generic events during that short phase may not be very large.
However, I think there can be some benefits in allowing you to decouple your code more.
From what I can tell, the Symfony dispatcher looks better - mainly because it looks simpler.
I've used a sort of dojo pubsub type system myself: Basically you have an event publisher, to which classes can publish events. This is a sort of global event handling, where you don't specifically subscribe to the class itself - instead you subscribe to a specific event, and it doesn't matter what class publishes the event.
The benefit of this vs. subscribing to a specific class is that the code is more decoupled: in my case it's a ZF app, and classes which subscribe to events can simply do it in the bootstrap, vs. having to do subscriptions in controllers (or wherever the publishers are created).
The downside of this approach is that it can make dependencies between things harder to track. For example, you only see an event publish call, but you have no idea what sort of things listen for it without digging further into the code.
In my case I don't really know if the application has gained any benefit from using this architecture - in fact I've considered several times removing it entirely and just using the objects which listen to the events directly.
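To make that publish/subscribe idea concrete, here is a minimal, language-agnostic sketch (written in Scala only for brevity; the event names are invented): subscribers register for an event name rather than for a particular publisher.

```scala
import scala.collection.mutable

// A minimal sketch of "subscribe to an event name, not to a publisher"
object EventHub {
  private val listeners = mutable.Map.empty[String, List[Any => Unit]]

  def subscribe(event: String)(handler: Any => Unit): Unit =
    listeners(event) = handler :: listeners.getOrElse(event, Nil)

  def publish(event: String, payload: Any): Unit =
    listeners.getOrElse(event, Nil).foreach(_(payload))  // publisher doesn't know who listens
}

// Usage: wiring can happen at bootstrap time, far away from the publisher
// EventHub.subscribe("user.registered")(p => println(s"send welcome mail to $p"))
// EventHub.publish("user.registered", "alice@example.com")
```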
Most examples (if not all) that I see are of a function that does some sort of computation and finishes. In that respect, FP shines. However, I have trouble seeing how to apply it in the context of an enterprise application environment where there aren't many algorithms going on, but there is a lot of data transfer and services.
So I'd like to ask how to implement the following problem in FP style.
I want to implement an events bus service. The service has a register method for registering listeners and publish for publishing events.
In an OO setting this is done by creating an EventBus interface with both methods. Then an implementation can use a list to hold listeners that is updated by register and used in publish. Of course this means register has a side effect. Spring can be used to create the class and pass its instance to publishers or subscribers of events.
How do I model this in FP, given that clients of the event bus service are independent (e.g., not all are created in a "test" method)? As far as I can see this rules out making register return a new instance of EventBus, since other clients already hold a reference to the old instance (and, e.g., publishing to it will only publish to the listeners it knows of).
I prefer a solution to be in Scala.
I think you should have a closer look at functional reactive programming techniques. Since you want something in Scala, I suggest reading the "Deprecating the Observer Pattern" paper by Ingo Maier, Tiark Rompf and Martin Odersky.
The sketch of the solution is that publish should return IO[Unit]. Listeners should be iteratees. Registration also returns IO[Unit].
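As a hedged sketch of that shape using cats-effect 3 (my assumption, since the answer names no library), the listener list lives in a Ref and both register and publish return IO[Unit]; the Ref-based listener list stands in for the iteratees mentioned above:

```scala
import cats.effect.{IO, Ref}

// Hypothetical event type for the example
final case class Event(name: String)

final class EventBus(listeners: Ref[IO, List[Event => IO[Unit]]]) {
  // Registration is an effect: it returns IO[Unit] instead of mutating in place
  def register(listener: Event => IO[Unit]): IO[Unit] =
    listeners.update(listener :: _)

  // Publishing reads the current listeners and runs each one's effect in sequence
  def publish(event: Event): IO[Unit] =
    listeners.get.flatMap { ls =>
      ls.foldLeft(IO.unit)((acc, l) => acc.flatMap(_ => l(event)))
    }
}

object EventBus {
  // Clients share the same EventBus value; the mutable cell is hidden behind the Ref
  def create: IO[EventBus] =
    Ref.of[IO, List[Event => IO[Unit]]](Nil).map(new EventBus(_))
}
```

Independent clients can all hold the same EventBus value, so there is no need for register to return a new bus; the shared state is confined to the Ref and only touched through IO actions.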
I am working on a web application which needs to get data from some local and some non-local resources and then display it. As it could take an arbitrary amount of time to get the data from these resources, I am thinking of using the actor concept so that each actor is responsible for getting data from its respective resource. The request thread will wait for each actor to finish its task and then use AJAX to update only the portion of the web page that depends on that data. This way the user will start seeing the data as soon as it is received rather than waiting for everything to finish before getting a first look at the data.
I am planning to look into the Scala/Lift framework for this. I have read some articles on the web about Scala/Lift and want to explore whether this is the correct way to approach this problem and also whether Scala/Lift is a good platform choice. I have worked in Java and C# previously. Any opinions, comments and suggestions are welcome.
Thanks,
gary
Take a look at message queue technology like Java's JMS. Message queues allow you to handle long-running background tasks asynchronously and reliably. This is the technique sites like Flickr and YouTube use to do media transcoding asynchronously. You can use a Java EE server, or a JMS technology like Apache's ActiveMQ, and then layer your Scala/Lift code on top of it.
Richard Monson-Haefel's book on JMS covers it well.
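If it helps to see the shape of it, here is a minimal sketch of enqueuing a background task on an ActiveMQ queue from Scala through the classic javax.jms API; the broker URL and queue name are placeholders:

```scala
import javax.jms.Session
import org.apache.activemq.ActiveMQConnectionFactory

object EnqueueTask extends App {
  // Placeholder broker URL and queue name
  val factory = new ActiveMQConnectionFactory("tcp://localhost:61616")
  val connection = factory.createConnection()
  connection.start()

  val session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE)
  val queue = session.createQueue("transcode-jobs")
  val producer = session.createProducer(queue)

  // The web request only enqueues the work; a background consumer picks it up later
  producer.send(session.createTextMessage("transcode video 42"))

  session.close()
  connection.close()
}
```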
For more general help with web site scaling and construction, take a look at Todd Hoff's excellent blog, highscalability.com/. There are some good pointers to using message queues to offload long-running tasks this way.
BTW, Twitter uses Scala for something much like what you're considering. Here's an interview with some of their developers; they describe one way they use Scala:
Robey Pointer: A lot of our architecture is based on letting Rails do what it does best, which is the AJAX, the web front ends, the website—what the user sees. Anything we can offload out of the request/response cycle, we do. So we queue those tasks into a messaging system and have back-end daemons handle them.
If the non-local resources originate at some other service or system, Event Driven Architecture might work for you. Instead of pulling from the non-local resources you could set up this web-application as a subscriber to the events published by these services. Upon receiving a message regarding part of its functionality it would cache locally the data it's interested in. This should let you escape the issue of asynchronous update of parts of the page (all data would be accessible locally).
Udi Dahan blogs about this approach a lot and is also an author of a .NET message bus (NServiceBus) that can be used in such scenarios. See for example http://msdn.microsoft.com/en-us/architecture/aa699424.aspx
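A hedged sketch of that subscriber-plus-local-cache idea (the event type and cache below are invented for illustration):

```scala
// Invented event and cache types for illustration
case class ProductPriceChanged(productId: String, newPrice: BigDecimal)

class LocalPriceCache {
  private var prices = Map.empty[String, BigDecimal]
  def update(id: String, price: BigDecimal): Unit = prices += (id -> price)
  def get(id: String): Option[BigDecimal] = prices.get(id)
}

// The web application subscribes to events from the other service and keeps
// the data it cares about locally, so page rendering never has to call out
class PriceSubscriber(cache: LocalPriceCache) {
  def onMessage(event: ProductPriceChanged): Unit =
    cache.update(event.productId, event.newPrice)
}
```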
Actors would be a way to go. You're essentially setting up a lightweight version of JMS. And Lift does the comet stuff very well.
In addition to the Scala actors and the Lift actors, you also have Akka actors. When Scala Swarm becomes production-ready, you'll be ready for that too.
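As a rough sketch of the "one actor per resource" idea with classic Akka actors (the resource names and the fake fetch are placeholders):

```scala
import akka.actor.{Actor, ActorSystem, Props}

// Placeholder messages: one actor fetches a resource, independently of the others
case class Fetch(resource: String)
case class Fetched(resource: String, data: String)

class ResourceFetcher extends Actor {
  def receive: Receive = {
    case Fetch(resource) =>
      val data = s"data from $resource"     // stand-in for the real (possibly slow) call
      sender() ! Fetched(resource, data)    // reply as soon as this resource is ready
  }
}

class PageCoordinator extends Actor {
  private val fetcher = context.actorOf(Props[ResourceFetcher](), "fetcher")
  List("local-db", "remote-service-a", "remote-service-b").foreach(r => fetcher ! Fetch(r))

  def receive: Receive = {
    case Fetched(resource, data) =>
      // push just this fragment to the browser (e.g. via comet/AJAX) instead of waiting for all
      println(s"update page section for $resource with $data")
  }
}

object Main extends App {
  val system = ActorSystem("page")
  system.actorOf(Props[PageCoordinator](), "coordinator")
}
```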
If the delayed information is distinct from the information that needs to display immediately, with Lift you can use a LazyLoad snippet that has the longer-running computation (calling the web service) as part of its logic. Lift will take care of inserting it into the page when it's ready.