I wonder why there is no callback defined on the publish method in AutobahnJS. I think it would be useful for a client that publishes something to know whether the publish call succeeded and to react accordingly. I also wonder whether other frameworks that support pub/sub have such a callback for publishing.
How do you define "success" for a publish? Depending on the answer, PubSub looks very different. Today, WAMP PubSub is a best-effort service; WAMPv2 might add other QoS levels in addition.
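For what it's worth, later WAMP v2 based releases of Autobahn|JS do let the publisher ask for an acknowledgement by passing an options object, in which case publish returns a promise. A rough sketch, with a made-up topic, realm and router URL:

import * as autobahn from 'autobahn';

const connection = new autobahn.Connection({ url: 'ws://127.0.0.1:8080/ws', realm: 'realm1' });

connection.onopen = (session) => {
  // With acknowledge: true the router confirms (or rejects) the publication.
  session.publish('com.example.topic', ['hello'], {}, { acknowledge: true }).then(
    (publication) => console.log('publish acknowledged', publication),
    (error) => console.error('publish failed', error)
  );
};

connection.open();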
I am using Axon 3.1.1 and want to know how I can get a list of the event processors in my configuration file, so that I can pass my events to the appropriate event handler on the query side. I went through the SpringAMQPMessageSource class but am still not sure how exactly to do it.
// the field inside SpringAMQPMessageSource that holds the subscribed event processors
List<Consumer<List<? extends EventMessage<?>>>> eventProcessors = new CopyOnWriteArrayList<>();
Updated
I was retrieving messages from a Kafka topic and wanted to wire them to a specific event handler, but since I am not able to get the event processors, I cannot do that.
Can you please tell me how to do it if I am using Axon 3.0.5.
If you're using the SpringAMQPMessageSource, you will not need to retrieve the list of eventProcessors you've shared, as Axon will automatically subscribe all the event handling components to it for you.
Subsequently, the events the Message Source receives will automatically be pushed to all the listeners in your query side.
As this is all handled by Axon's infrastructure under the hood, there is no ready-made way to pull the processors out for your own use (other than potentially wiring them yourself).
Hence, you shouldn't have to do this yourself.
But, maybe I'm missing an obvious point here.
Could you elaborate a little more why you need the list of handlers in the first place?
Having background workers is a pretty common requirement for a decent-sized app, but there doesn't seem to be any way to do this out of the box in Sails.js. What's the best way to implement it?
I don't know if this is the BEST way to do it, but this is how I have implemented it in our current system.
We're using sails.js in Elastic Beanstalk on AWS. This allows me to deploy a worker environment which listens to a queue. The configuration requires an endpoint which receives the messages from the queue.
With this in mind, I have created a WorkerController which has only an index action. This index action behaves as a router for the functions required to be executed via the queue.
You will need to lock down this controller+action route for security purposes.
To trigger the worker, I send a message to the queue with a JSON body containing the required parameters. An example message would be:
{
  "task": "processTransaction",
  "id": 1207
}
These parameters are available to the WorkerController.index action as request parameters, which are then used to trigger a function and pass the arguments.
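As a rough illustration of what that controller might look like (not the actual code; the task name and the TransactionService helper in api/services are made up for the example):

// api/controllers/WorkerController.js -- routes queue messages to the right function.
module.exports = {
  index: function (req, res) {
    const task = req.param('task');
    const id = req.param('id');

    switch (task) {
      case 'processTransaction':
        // Hypothetical service in api/services doing the actual background work.
        return TransactionService.process(id)
          .then(() => res.ok())
          .catch((err) => res.serverError(err));
      default:
        return res.badRequest('Unknown task: ' + task);
    }
  }
};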
I'm interested in watching a stream of Events from Kubernetes, to determine whether a deployment was successful, or if any of the Pods were unable to be scheduled.
I could call the endpoint /api/v1/watch/events, or I could call /api/v1/events?watch=true. Is there a difference between those two? I'm confused about the purpose of them.
Thanks.
We're making watch a query param and removing it from the path (legacy form). You should call /api/v1/events?watch=true. See more discussions here if you're interested.
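To illustrate the recommended form, here is a quick sketch of consuming the watch stream from Node 18+ (or a browser), assuming kubectl proxy is exposing the API server on localhost:8001; each line of the chunked response body is one JSON-encoded watch event:

async function watchEvents() {
  const response = await fetch('http://localhost:8001/api/v1/events?watch=true');
  const reader = response.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = '';

  for (;;) {
    const { value, done } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });

    let newline;
    while ((newline = buffer.indexOf('\n')) >= 0) {
      const line = buffer.slice(0, newline);
      buffer = buffer.slice(newline + 1);
      if (!line.trim()) continue;
      // Each event looks like { type: 'ADDED' | 'MODIFIED' | 'DELETED', object: {...} }
      const event = JSON.parse(line);
      console.log(event.type, event.object.reason, event.object.message);
    }
  }
}

watchEvents().catch(console.error);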
As far as I understand, in a CQRS-oriented API exposed through a RESTful HTTP API, the commands and queries are expressed through the HTTP verbs: the commands are asynchronous and usually return 202 Accepted, while the queries get the information you need. Someone asked me the following: supposing they want to change some information, they would have to send a command and then a query to get the resulting state. Why force the client to make two HTTP requests when you could simply return what they want in the HTTP response of the command, in a single HTTP request?
We had a long conversation on the DDD/CQRS mailing list a couple of months ago (link). One part of the discussion was the "one-way command", and this is what I think you are assuming. You will find that Greg Young is opposed to this pattern. A command changes the state and is therefore prone to failure, meaning it can fail and you should support this. A REST API with POST/PUT requests provides perfect support for this, but you should not just return 202 Accepted; give some meaningful result back. Some people return 200 success together with an object that contains a URL to retrieve the newly created or updated object. If the command handler fails, it should return 500 and an error message.
Having fire-and-forget commands is dangerous since it can give a consumer wrong ideas about the system state.
My team also recently had a very heated discussion about this very thing. Thanks for posting the question. I have usually been the defender of "fire and forget" style commands. My position has always been that, if you want to be able to move to an async command dispatcher some day, then you cannot allow commands to return anything. Doing so would kill your chances, since an async command doesn't have much of a way to return a value to the original HTTP call. Some of my teammates really challenged this thinking, so I had to consider whether my position was really worth defending.
Then I realized that async or not async is JUST an implementation detail. This led me to realize that, using our frameworks, we can build in middleware to accomplish the same thing our async dispatchers are doing. So, we can build our command handlers the way we want to, returning what ever makes sense, and then let the framework around the handlers deal with the "when".
Example: my team is currently building an HTTP API in Node.js. Instead of requiring a POST command to only return a blank 202, we are returning details of the newly created resource, which helps the front-end move on. The front-end POSTs a widget and opens a channel to the server's web socket, using the same command as the channel name. The request comes to the server and is intercepted by middleware, which passes it to the service bus. When the command is eventually processed synchronously by the handler, it "returns" via the web socket and the front-end is happy. The middleware can be disabled easily, making the API synchronous again.
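To make the flow a bit more concrete, here is a rough sketch of the same idea (not our actual code): an Express endpoint hands the command to a bus, immediately returns the details it already knows, and the eventual result is pushed over a web socket channel named after the command. The ws package, the CreateWidget command and the trivial in-process bus are all assumptions for illustration.

import express from 'express';
import { WebSocketServer, WebSocket } from 'ws';
import { randomUUID } from 'crypto';

// Trivial stand-in for whatever dispatcher/middleware actually runs the handlers.
const commandBus = {
  dispatch(command: { id: string; type: string; payload: unknown },
           onHandled: (result: unknown) => void) {
    // The sketch handles the command synchronously; a real bus could defer it.
    onHandled({ commandId: command.id, status: 'created', widget: command.payload });
  },
};

// Web socket channels keyed by command id; the front-end subscribes to "its" command.
const channels = new Map<string, Set<WebSocket>>();
const wss = new WebSocketServer({ port: 8081 });
wss.on('connection', (socket, req) => {
  const channel = (req.url ?? '/').slice(1);
  if (!channels.has(channel)) channels.set(channel, new Set());
  channels.get(channel)!.add(socket);
});

const app = express();
app.use(express.json());

app.post('/widgets', (req, res) => {
  const commandId = randomUUID();
  commandBus.dispatch({ id: commandId, type: 'CreateWidget', payload: req.body }, (result) => {
    // When the handler has run, "return" the result via the web socket channel.
    for (const socket of channels.get(commandId) ?? []) {
      socket.send(JSON.stringify(result));
    }
  });
  // Return the details we already know instead of a blank 202.
  res.status(201).json({ commandId, channel: commandId, widget: req.body });
});

app.listen(8080);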
There is nothing stopping you from doing that. If you execute your commands synchronously and create your projections synchronously, then it is easy to just run a query directly after executing the command and return that result. If you do this asynchronously via the REST API, then you have no query result to send back. If you do it asynchronously within your system, then you can wait for the projection to be created and then send the response to the client.
The important thing is that you separate your write and read models in classic CQRS style. That does not mean that you cannot do a read in the same request as the command. Sure, you can send a command to the server and then, with SignalR (or something similar), wait for a notification that your projection has been created/updated. I do not see a problem with waiting for the projection to be created on the server side instead of on the client.
How you do this will affect your infrastructure and error handling. Also, you will hold the HTTP request open for longer if you return the result at once.
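A minimal sketch of the fully synchronous variant, assuming an in-memory event list and projection (the RenameUser command, the route and the field names are made up):

import express from 'express';

const app = express();
app.use(express.json());

// Write model: the command handler appends an event.
type UserRenamed = { type: 'UserRenamed'; userId: string; name: string };
const events: UserRenamed[] = [];
function handleRenameUser(userId: string, name: string): UserRenamed {
  const event: UserRenamed = { type: 'UserRenamed', userId, name };
  events.push(event);
  return event;
}

// Read model: a projection updated synchronously from the events.
const userProjection = new Map<string, { userId: string; name: string }>();
function project(event: UserRenamed) {
  userProjection.set(event.userId, { userId: event.userId, name: event.name });
}

app.put('/users/:id/name', (req, res) => {
  // 1. Execute the command synchronously.
  const event = handleRenameUser(req.params.id, req.body.name);
  // 2. Update the projection synchronously.
  project(event);
  // 3. Query the read model and return the result in the same request.
  res.json(userProjection.get(req.params.id));
});

app.listen(3000);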
I've got a Backbone web application that talks to a RESTful PHP server. For PUT and POST it matters in which order the requests arrive at the server and for GET it matters in which order the responses arrive at the client.
The web application does not need to support concurrent use by multiple users, but what might happen is that a user changes their name twice really fast. Then the order in which the server processes PUT /name/Ann and PUT /name/Bea determines whether the name ends up as Ann or Bea.
Backbone.Safesync and Backbone.Sync.AjaxQueue are two libraries that try to solve this problem. Doesn't Safesync only solve the problem with GET? Sync.AjaxQueue is outdated, but might serve as inspiration to implement a custom queued sync function. Making sync synchronous would solve the problem. If a request is only sent after the previous response is received, then only one request is processed at a time.
Any advice on how to proceed?
BTW: I don't think using PATCH requests would solve anything, because in my example the same attribute is changed twice.
There are a few ways to solve this; here are two:
add a timestamp to all requests, store it in the DB as "modified", and let the server check whether the timestamp of the new request is later than the one in the DB for the request to be valid
use Promises to delay the second request until the first one has been responded to; there's a promise/deferred mechanism built into jQuery, but you can also use a 3rd-party one, for instance Q or when (a sketch follows after this list)
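For the promise-based option, here is a rough sketch of chaining saves so the second PUT is only sent after the first has completed. Backbone's save() returns a jqXHR, which is promise-like and can be wrapped; the model and attribute names are made up.

import * as Backbone from 'backbone';

const User = Backbone.Model.extend({ urlRoot: '/users' });
const user = new User({ id: 1 });

// The tail of the queue: every new save is chained onto the previous one.
let pending: Promise<unknown> = Promise.resolve();

function queuedSave(model: Backbone.Model, attrs: Record<string, unknown>) {
  const next = pending.then(() => Promise.resolve(model.save(attrs)));
  // Keep the queue alive even if one request fails.
  pending = next.catch(() => undefined);
  return next;
}

// Two rapid name changes now reach the server strictly in order:
queuedSave(user, { name: 'Ann' });
queuedSave(user, { name: 'Bea' });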
If you can afford the delay, an easy approach is to set the async option to false when you call whatever method results in the Backbone.sync call. For example, in the appropriate model(s), simply override the default sync method to include the additional option.
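A sketch of that override, assuming jQuery is doing the transport (note that async: false blocks the browser while the request is in flight, which is the delay mentioned above); the model and URL are made up:

import * as Backbone from 'backbone';

const User = Backbone.Model.extend({
  urlRoot: '/users',
  sync(method: string, model: Backbone.Model, options: any = {}) {
    // Force every request made by this model to be synchronous.
    options.async = false;
    return Backbone.sync(method, model, options);
  },
});

const user = new User({ id: 1 });
user.save({ name: 'Ann' }); // does not return until the server has responded
user.save({ name: 'Bea' });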