My scenario is this:
I have multiple webservers that:
need to communicate with the backend (IBus.Publish/IBus.Subscribe)
need to communicate with each other (IBus.Publish/IBus.Subscribe)
Aside from the webservers, I have a number of Windows services that consume the same messages.
In order to make this work, I have the webservers send messages to a central hub, whose sole responsibility is to wrap each message in a new message type and publish it to all subscribers.
Can I somehow avoid this, so I can publish the messages directly from the webservers?
EDIT (Added some code) - Current situation:
... WebServer
_bus.Send(new Message{Body="SomethingChanged"});
... Hub
public void Handle(Message message){
_bus.Publish(new WrappedMessage{Message = message});
}
... Handlers (WebServers, WindowsServices etc)
public void Handle(WrappedMessage message){
//Actually do important stuff
}
Wanted situation:
... WebServer
_bus.Publish(new Message{Body="SomethingChanged"});
... Handlers (WebServers, WindowsServices etc)
public void Handle(Message message){
//Do important stuff
}
Well, there isn't anything that technically prevents you from publishing messages inside your web application, and likewise there's nothing that prevents you from subscribing to those messages in all instances of the same web application. The question is whether you should :)
Without knowing the details of your problem, my immediate feeling is that you would be better off using some kind of shared persistent storage for whatever it is that you're trying to synchronize (a cache?), possibly using some kind of read replication if you'd like to scale out and make reads really fast.
Again, without knowing the details of your problem, I'll try and suggest something, and then you can see if that could inspire you into an even better solution... here goes:
Use MongoDB (possibly as a replica set if you want to scale out your read operations) as the persistent storage of the thing you're caching
Whenever something happens in the web application, bus.Send a message to your backend
In your backend message handler, you update Mongo, which will automatically replicate to the read slaves (see the sketch after this list)
Whenever you need to query your data, you just query your Mongo set (using slaveOk=true whenever you can accept slightly stale values)
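To make this more concrete, here is a rough sketch of the bus.Send step and the backend handler in C#. The SomethingChanged message and CacheItem document are invented names, and I'm assuming an NServiceBus/Rebus-style IHandleMessages<T> together with the string-based builders from the official MongoDB C# driver:

// In the web application: fire-and-forget send to the backend
_bus.Send(new SomethingChanged { Id = itemId, NewValue = newValue });

// In the backend, the handler updates Mongo, which then replicates to the read slaves
// (using MongoDB.Driver; using MongoDB.Driver.Builders;)
public class SomethingChangedHandler : IHandleMessages<SomethingChanged>
{
    readonly MongoCollection<CacheItem> _items;

    public SomethingChangedHandler(MongoCollection<CacheItem> items)
    {
        _items = items;
    }

    public void Handle(SomethingChanged message)
    {
        // Upsert, so the first event for a given id creates the document
        _items.Update(Query.EQ("_id", message.Id),
                      Update.Set("Value", message.NewValue),
                      UpdateFlags.Upsert);
    }
}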
The reason I'm suggesting this alternative solution is that web applications (at least in .NET land) have this funny transient nature where IIS dictates their lifecycle, and at any given time you can have n instances of them. This complicates matters if you keep state in the application, which makes me think of the web application as a client, not a publisher.
A simpler solution is to keep state in something that does not come & go, e.g. a database. And the reason I'm suggesting Mongo is that my guess is you're worried about being able to serve web requests fast, and since MongoDB is fairly easy to install as a replica set where read operations will be pretty fast (and, more importantly, horizontally scalable), my guess is that this setup would make everything much simpler.
How does that sound?
In my Play app, I do this in Module.configure():
bind(classOf[GadgetsReader]).toInstance(GadgetsCsvReader)
bind(classOf[Gadgets]).asEagerSingleton()
Then, I do this:
@Singleton
class Gadgets @Inject()(reader: GadgetsReader) {
  val all: Seq[Gadget] = reader.readGadgets()
}
That synchronously loads a large collection of gadgets from a CSV file into memory on startup, on one of Play's rendering threads.
I did not see a similar scenario implemented anywhere in Play examples. I would like to know whether what I am doing is idiomatic Scala & Play.
Is it OK to load a very large file synchronously like this, given that I don't want any requests served until the data is fully loaded?
Is it a good thing that I created a Gadgets class and then injected it, as opposed to a static/object method Gadget.all?
Should Gadget and Gadgets classes live under model?
Any other comments would be appreciated, too.
I guess it depends how large the file is, how fast you want your startup to be, etc. In general, I'd say yes; even Akka's cluster sharding has (or at least, last I read, had) a blocking call that waits for initialisation to complete before returning. In your case it's probably fine, but one gotcha with blocking calls like this is that blocking generally means doing IO, and IO can fail (e.g., what if you're reading from a network filesystem, and the network fails while you're starting up?). So sometimes it's better to design your app so that it's capable of responding (perhaps with a "not available" status) before the operation has completed, and to do that operation asynchronously, with retries in case it fails. But perhaps this is overkill in your case.
To answer your other questions: yes, it is definitely better to dependency-inject Gadgets than to use a static singleton, since this means you can control how Gadgets is created (perhaps you might want to initialise it differently in tests).
It's probably fine to be in the model package, but this is greatly dependent on your domain and what it looks like.
I have implemented a NOTIFY/LISTEN mechanism, so when a special request is sent to the web server, I can use NOTIFY to tell the workers (written in Python) that there's a pending request waiting to be processed.
The implementation works fine, but the problem is that if the worker server is restarting, the notification gets lost, since at that moment there is no listener.
I could set up a service like RabbitMQ or similar, but my needs are so simple that deploying such a monster is too much.
Is there any way, a configuration variable perhaps, that can give some persistence to the notification mechanism?
Thanks in advance
I don't think there is a way to persist notification channels, but you can simply store the pending requests to a table, and have the worker check for any missed work on startup.
Either a timestamp or a pending/completed flag would work, depending on what kind of work it's doing.
For consistency, you can have the NOTIFY fire from an INSERT trigger on the queue table, and have the worker always check for any remaining work (not just a specific request) when notified.
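A minimal sketch of the whole arrangement follows, shown in C# with Npgsql purely for illustration (the same shape works from Python with psycopg2). The table, channel, and trigger names are all made up, and conn.Wait() assumes a reasonably recent Npgsql:

using Npgsql;
using System;

class Worker
{
    static void Main()
    {
        // One-time SQL setup (hypothetical names):
        //   CREATE TABLE pending_requests (id serial PRIMARY KEY, payload text,
        //                                  done boolean NOT NULL DEFAULT false);
        //   CREATE FUNCTION notify_pending() RETURNS trigger AS $$
        //     BEGIN PERFORM pg_notify('pending', NEW.id::text); RETURN NEW; END;
        //   $$ LANGUAGE plpgsql;
        //   CREATE TRIGGER pending_insert AFTER INSERT ON pending_requests
        //     FOR EACH ROW EXECUTE PROCEDURE notify_pending();
        using (var conn = new NpgsqlConnection("Host=localhost;Database=app;Username=worker"))
        {
            conn.Open();
            using (var cmd = new NpgsqlCommand("LISTEN pending", conn))
                cmd.ExecuteNonQuery();

            while (true)
            {
                // Drain the queue table first: this also picks up anything
                // inserted while the worker was down
                ProcessPending(conn);
                conn.Wait(); // block until the next NOTIFY arrives
            }
        }
    }

    static void ProcessPending(NpgsqlConnection conn)
    {
        // Simplified: claim and report all unfinished rows in one statement
        using (var cmd = new NpgsqlCommand(
            "UPDATE pending_requests SET done = true WHERE NOT done RETURNING id, payload", conn))
        using (var reader = cmd.ExecuteReader())
            while (reader.Read())
                Console.WriteLine("processing request {0}: {1}",
                                  reader.GetInt32(0), reader.GetString(1));
    }
}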
I am attempting to learn and apply the CQRS design approach (pattern and architecture) to a new project but seem to be missing a key piece.
My client application executes a query and retrieves a list of light-weight, read-only DTOs from the read model. The user selects an item and clicks a button to initiate some action. The action is performed by creating and sending the corresponding command object to the write model (where the command handler carries out the action, updates the data store, etc.). At some point, however, I need to update the UI to reflect changes to the state of the application resulting from the action.
How does the UI know when it is time to refresh the original list?
Additional Info
I have noticed that most articles/blogs discussing CQRS use MVC client apps in their examples. I am working on a Silverlight client right now and am beginning to wonder if the pattern simply doesn't work in that case.
Follow-Up Question
After thinking more about Bartlomiej's response and subsequent discussion, I am wondering about error handling in CQRS. Given that commands are basically fire-and-forget asynchronous operations, how do we report an error condition to the UI?
I see 'refreshing the UI' to take one of two forms:
The operation succeeds, data has changed and the UI should be updated to reflect these changes
The operation fails, data has not changed but the user should be notified of the failure and potential corrective actions.
Even with a Post-Redirect-Get pattern in MVC, you can't really redirect until you know the outcome of the operation. None of the examples I've seen so far address these real-world concerns.
I've been struggling with similar issues for a WPF client. The re-query trigger for any data depends on the data you're updating; commands tend to fall into categories:
The command is true fire-and-forget: it informs the back-end of a state change, but the change either does not need to be reflected in the UI or simply isn't important to the UI.
The command will alter the result of a single query
The command will alter the results of multiple queries, usually (in my domain at least) in a cascading fashion; that is, changing the state of a single "high-level" piece of data will likely affect many "low-level" caches.
My first trigger is the page load; very few items are exempt from this, as most pages must assume data has been updated since they were last visited. Though some systems may be able to get away with updating only financial and other critical data this way.
For short commands I also update data when 'success' is returned from a command, though this is mostly laziness, as IMHO all CQRS commands should be fired asynchronously. It's still an option I couldn't live without, but one you may have to give up if your implementation expects high latency between command and query.
One pattern I'm starting to make use of is the mediator (most MVVM frameworks come with one). When I fire a command, I also fire a message to the mediator specifying which command was launched. Each cache (a view-model property Retriever<T>) listens for the commands that affect it and then updates appropriately. I try to minimise the number of messages while also minimising the number of caches that update unnecessarily from a single message, so I'll (hopefully) eventually end up with a shortlist of update reasons, with each 'reason' updating a list of caches.
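For illustration, here's a bare-bones version of that wiring in C#, using an MVVM Light-style Messenger as the mediator (the CommandFired message, the 'PriceChanged' reason, and the Retriever<T> internals are all invented):

// using System.Collections.Generic;
// using GalaSoft.MvvmLight.Messaging;

// A mediator message announcing which command was just launched
public class CommandFired
{
    public CommandFired(string reason) { Reason = reason; }
    public string Reason { get; private set; }
}

// Wherever a command is sent to the back-end, announce it locally as well
_bus.Send(new ChangePriceCommand(productId, newPrice));
Messenger.Default.Send(new CommandFired("PriceChanged"));

// Each cache registers for the reasons that affect it and refreshes itself
public class Retriever<T>
{
    readonly HashSet<string> _updateReasons;

    public Retriever(params string[] updateReasons)
    {
        _updateReasons = new HashSet<string>(updateReasons);
        Messenger.Default.Register<CommandFired>(this, msg =>
        {
            if (_updateReasons.Contains(msg.Reason))
                Refresh(); // re-run the query behind this cache
        });
    }

    void Refresh() { /* query the read side and raise PropertyChanged */ }
}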
Another approach is simple honesty: I find that exposing graphically how the system updates itself makes users more willing to be patient with it. On firing a command, show some UI indicating you're waiting for the successful response; on error you could offer to retry / show the error; on success you start the update of the relevant fields. Bear in mind that the command could have been fired from another terminal (of which you have no knowledge), so data will need to time out eventually to avoid missing state changes invoked by other machines.
Noting the irony that the only efficient method of updating caches and values on a client is to un-separate the commands and queries again, be it through hardcoding or something like a hashmap.
My two cents.
I think MVVM actually fits into CQRS quite well. The ViewModel simply becomes an observable ReadModel.
1 - You initialize your ViewModel state via a query on the ReadModel.
2 - Changes on your ViewModel are automatically reflected on any Views that are bound to it.
3 - Certain changes on your ViewModel trigger a command that is placed on a message queue; an object responsible for sending those commands to the server takes them off the queue and sends them to the WriteModel (see the sketch after this list).
4 - Clients should be well formed, meaning the ViewModel should have performed appropriate validation before it ever triggered the command. Once the command has been triggered, any event notifications can be published onto an event bus for the client to communicate changes to other ViewModels or components in the system interested in those changes. These events should carry the relevant information necessary. Typically, this means that other view models usually don't have to re-query the read model as a result of the change unless they are dependent on other data that needs to be retrieved.
5 - There is an object that connects to the message bus on the server for real-time push notifications when other clients make changes that this client is interested in knowing about, falling back to long-polling if necessary. It propagates those to the internal message bus that ties the components on the client together.
6 - The last part to handle is the fact that clients can be occasionally connected, which should be the only reason a command fails (they don't have internet access at the moment), which is when the client should be notified of problems.
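As a rough illustration of points 3 and 4, here is how a ViewModel setter might enqueue a command for the sender object to pick up (all names are invented, and _commands is a BlockingCollection<object> standing in for whatever queue you use):

// In the ViewModel: the setter updates local state (the bound Views follow
// via data binding, per point 2) and enqueues a command for the sender
public string Title
{
    get { return _title; }
    set
    {
        if (_title == value) return;
        _title = value;                     // assume it already passed validation
        OnPropertyChanged("Title");
        _commands.Add(new ChangeTitleCommand(ItemId, value));
    }
}

// The sender object drains the queue and forwards commands to the WriteModel
foreach (var command in _commands.GetConsumingEnumerable())
    _writeModelClient.Send(command);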
In my ASP.NET MVC 3 applications I use two techniques, depending on the use case:
the already well-known Post-Redirect-Get pattern, which fits nicely with CQRS: the MVC action that triggers the command returns a redirect to an action that performs a query (see the sketch after this list).
in some cases, like real-time updates for other clients, I rely on domain events/messages. I create an event handler that uses SignalR to push changes to all connected and interested clients.
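Sketches of both techniques (the controller, command, hub, and handler names are all invented, and the second snippet assumes SignalR's GlobalHost API):

// 1. Post-Redirect-Get: the POST action sends the command, then redirects to a query
[HttpPost]
public ActionResult Rename(RenameItemCommand command)
{
    _bus.Send(command);
    return RedirectToAction("Details", new { id = command.ItemId });
}

// 2. A domain event handler pushes changes to connected clients through SignalR
public class ItemRenamedHandler : IHandleMessages<ItemRenamed>
{
    public void Handle(ItemRenamed evt)
    {
        GlobalHost.ConnectionManager.GetHubContext<UpdatesHub>()
                  .Clients.All.itemRenamed(evt.ItemId, evt.NewName);
    }
}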
There are two major approaches you can take, as far as I know:
1) Design your UI so that the user does not see the changes right away. For instance, show a message telling them the action was a success, and offer different choices to continue their work. This should buy you enough time to update your read model.
2) More complex, but you might keep the information you have sent to the server and show it in the interface.
Most important of all, I guess: educate your users if you can, so that they know why the data is not there... yet!
I am only thinking about it now, but all this applies to synchronous command handling, not asynchronous; with async, things get much harder on the brain... the client interface becomes an event eater too.
I'm very new to Node.js and asynchronous programming and have a challenging question. I want to fork a process from Node and then shoot that output back to the browser with WebSockets, specifically the socket.io library. What is the best and most robust way to handle this?
The data isn't mission critical; it's just for updating the user on status. So if they leave the page, the socket can close and the child process can continue to run. It'd also be neat if there was some way to access the socket via a specific URL in Express and come back to it later (but that may be another day's work).
Use the Redis Store support of socket.io:
var RedisStore = require('socket.io').RedisStore;
var io = require('socket.io').listen(app);

// With no arguments, RedisStore connects to a Redis server on localhost:6379
io.set('store', new RedisStore());
With this store, socket.io uses the Redis server to store its data and to relay events between processes.
I have an iPad app that works both online and offline, but when I am offline there are web service calls that will need to be made once connectivity is available again.
Example:
A new client is added in the app; this needs to be sent to the web service, but since we are offline we don't want to slow the user down, so we let them add the client locally and keep going. We just need to remember that the call still has to be made to the web service when we can. The same goes for placing orders and such.
Is there some sort of queue that can be set up that will fire once we have connectivity?
I don't think the overhead of a heavyweight tool like MSMQ is needed for a simple action like this. You can use Core Data: persist managed objects holding the data needed to call the web service, and only delete each managed object after a successful post. There might or might not be a way to capture an event when connectivity returns, but you can certainly create a repeating NSTimer when the first message is queued and stop it once there are no messages left in the queue.
This library handles offline persistent message queueing for situations like the one you describe. It has said alpha for a year now, but I have confirmed it is used in production apps:
https://github.com/gcamp/IPOfflineQueue