Mule: What's the difference between a multicasting-router and a static-recipient-list-router?

I can't really see a difference between a multicasting-router and a static-recipient-list-router. Why would I use one over the other?
According to the Mule 2.x user guide:
Recipient List
the Recipient list router can be used
to send the same event to multiple
endpoints over the same endpoint or to
implement routing-slip behaviour where
the next destination for the event is
determined from the event properties
or payload. Mule provides an abstract
Recipient list implementation
org.mule.routing.outbound.AbstractRecipientList
that provides a thread-safe base for
specialised implementations. Mule also
provides a Static recipient list that
takes a configured list of endpoints
from the current event or statically
declared on the endpoint.
<outbound>
    <static-recipient-list-router>
        <payload-type-filter expectedType="javax.jms.Message"/>
        <recipients>
            <spring:value>jms://orders.queue</spring:value>
            <spring:value>jms://tracking.queue</spring:value>
        </recipients>
    </static-recipient-list-router>
</outbound>
Multicasting Router
The Multicasting router can be used to
send the same event over multiple
endpoints. When using this router care
must be taken to configure the correct
transformers on the endpoints to
handle the event source type.
<outbound>
    <multicasting-router>
        <jms:endpoint queue="test.queue" transformer-refs="StringToJmsMessage"/>
        <http:endpoint host="10.192.111.11" transformer-refs="StringToHttpClientRequest"/>
        <tcp:endpoint host="10.192.111.12" transformer-refs="StringToByteArray"/>
        <payload-type-filter expectedType="java.lang.String"/>
    </multicasting-router>
</outbound>
Remember that care should be taken to
ensure that the message being routed
is transformed to a format that the
endpoint understands.

Straight from the horse's mouth (Mule in Action, by David Dossot and John D'Emic, pp. 98-100):
The static-recipient-list router lets you simultaneously send the same message to multiple endpoints. You'll usually use a static recipient list when each endpoint is using the same transport. This is often the case with VM and JMS endpoints.
Use static recipient lists when sending the same message to endpoints using identical transports
The multicasting router is similar to the static recipient list in that it simultaneously sends the same message across a set of outbound endpoints. The difference is that the multicasting router is used when the endpoint list contains different types of transports.
Use the multicasting router when sending the same message to endpoints using different transports

This is how I understand these:
The static-recipient-list router will send the payload to each recipient in the order that they are listed. This gives you the ability to modify the payload before proceeding to the next endpoint. This also gives you the ability to stop processing in the event of an error.
The multicasting-router sends the same payload to all endpoints at the same time. You will not be able to change the payload for each endpoint, and you will not be able to stop the other endpoints from processing if one of them fails.
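That contrast can be sketched in plain Python (hypothetical helper names, not the Mule API): a recipient list dispatches in order and can stop on error, while a multicast fans the identical payload out to every endpoint independently.

```python
# Sketch of the two dispatch styles described above (illustration only,
# not Mule's implementation).

def recipient_list_dispatch(payload, recipients):
    """Send to each recipient in order; a failure (None) stops the chain."""
    delivered = []
    for send in recipients:
        result = send(payload)
        if result is None:        # treat None as a delivery error
            break                 # remaining recipients are skipped
        delivered.append(result)
        payload = result          # the next recipient may see a modified payload
    return delivered

def multicast_dispatch(payload, endpoints):
    """Send the identical payload to every endpoint, independently."""
    return [send(payload) for send in endpoints]

ok = lambda p: p + "!"     # endpoint that acknowledges by transforming
down = lambda p: None      # endpoint that fails
```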

Related

Handling multiple requests with same body - REST API

Let's say I have a microservice that just registers a user in the database, and we expose it to our client. I want to understand the better way of handling the following scenario.
What if the user sends multiple requests in parallel (say 10 requests within 1 second) with the same request body? Should I keep the requests in a queue, register the very first user, and deny the other 9 requests? Or should I compare the request bodies, pick one request per distinct body, and reject the rest? Or what's the best thing I can do to handle this scenario?
One more thing I would like to understand: is it recommended to have rate limiting (say n requests per minute) at the global API level or at the microservice level?
Thanks in advance!
The best way is to use an idempotent call. Instead of exposing an endpoint like this:
POST /users + payload
expose an endpoint like this:
PUT /users/{id} + payload
You let the caller generate the id, and you ask for a UUID. With a UUID, it doesn't matter who generates it. This way, if the caller invokes your endpoint multiple times, the first time you will create the user, and the following times you will just update the user with the same payload, which means you'll do nothing. At least you won't generate duplicates.
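A minimal in-memory sketch of that idempotent PUT (hypothetical store and handler names; a real service would do a database upsert keyed on the caller-supplied UUID):

```python
import uuid

# In-memory stand-in for the user table (illustration only).
users = {}

def put_user(user_id, payload):
    """PUT /users/{id}: create or overwrite -- safe to retry."""
    created = user_id not in users
    users[user_id] = payload          # same payload -> same state, no duplicate
    return 201 if created else 200    # 201 on first create, 200 on replays

user_id = str(uuid.uuid4())           # the *caller* generates the id
first = put_user(user_id, {"name": "Ada"})
replay = put_user(user_id, {"name": "Ada"})   # duplicate request, harmless
```

Replaying the same request any number of times leaves exactly one user behind, which is the whole point of the idempotent design.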
It's always a good practice to protect your services with rate limiting. You have to set it at the API level. If you define it at the microservice level, you will authorize N times the rate when you have N instances, because the requests will be distributed across them.
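The API-level limit can be sketched with a token bucket (a hypothetical class, not a specific library; in production a shared store such as Redis would back it so that N instances enforce one global limit):

```python
import time

class TokenBucket:
    """Allow `rate_per_sec` requests on average, with bursts up to `capacity`."""

    def __init__(self, rate_per_sec, capacity):
        self.rate = rate_per_sec
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate_per_sec=1, capacity=3)
results = [bucket.allow() for _ in range(5)]   # a burst of 5 immediate calls
```

The burst drains the bucket: the first three calls pass, the rest are rejected until tokens refill.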

Which ActiveMQ properties/settings should be stated in an interface specification document?

A customer wants to exchange data between his application and our application via ActiveMQ, so we want to create an Interface Specification Document which describes the settings and properties that both applications must use so that they can communicate. We don't know which programming language or API the customer will use; so if the specification is incomplete they might implicitly use settings that we don't expect.
So I'm wondering which settings must be the same on both sides, and which settings can be decided by each application on its own. This is what I have so far:
Must be specified in document:
connector type (openwire, stomp, ...)
connector settings (host name where broker runs, TCP port, user name, password)
message type (TextMessage, BytesMessage...)
payload details (XML with XSDs, JSON with schema, ...)
message encoding (UTF-8), for text payload
use queues, or topics, or durable topics
queue names
is any kind of request/response protocol being used
use single queue for requests and responses (with selectors being used to get correct messages), or use separate queues for requests and responses
how to transfer correlation ID used for correlating requests and responses
message expiration
Must not be specified in document:
ActiveMQ broker version (all versions are compatible, right?)
message compression (it should be transparent?)
What did I miss? Which things should be stated in such a document to ensure that two applications can communicate via ActiveMQ?
What did I miss?
You missed message headers. These can be broken into two categories:
Built-in (JMS) headers
Custom headers
Examples of the built-in headers are things such as JMSMessageID, JMSXGroupID, etc. In some cases, your interface definition will need to include details of whether and how these values will be set. For example, if messages need to be grouped, then any message producer or consumer using the definition will need to be aware of this.
Similarly, any custom headers (common uses include bespoke ordering, source-system identification, authorization tokens, etc.) attached to the messages need to be part of any interface definition.
In fact, I would argue that the interface definition only needs to include two things:
a schema definition for the message body, and
any headers, and whether each is required or optional
Everything else you have listed above is either a deployment or a management concern.
For example, whether a consumer or producer should connect to a queue or topic is a management concern, not an interface concern. The address of the queue/topic is a deployment concern, not an interface concern.
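The two-part definition argued for above can be sketched as plain data plus a check (hypothetical field and function names, not an ActiveMQ artifact): a schema reference for the body, and each header marked required or optional.

```python
# A sketch of a two-part interface definition (illustrative names only).
interface_definition = {
    "body_schema": "orders-v1.xsd",          # schema definition for the message body
    "headers": {
        "JMSCorrelationID": "required",      # built-in JMS header
        "JMSXGroupID":      "optional",      # built-in, used for message grouping
        "X-Source-System":  "required",      # custom header
    },
}

def validate_headers(message_headers, definition):
    """Return the names of required headers missing from a message."""
    return [name for name, need in definition["headers"].items()
            if need == "required" and name not in message_headers]
```

Either side of the interface can run the same check against an incoming message, which is exactly the shared contract the document is meant to capture.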

Can ZMQ publish message to specific client by pub-sub socket?

I am using PUB/SUB sockets. Currently the server subscribes with byte[0] (all topics),
while each client subscribes with byte[16] - a specific header as its topic.
However, I cannot stop a client from subscribing with byte[0], which lets it receive all the other messages.
My application is like an app game that has one single server using ZMQ for connections,
and many clients that each have a ZMQ socket to talk with the server.
What pattern or socket should I use in this case?
Thanks
" ... cannot stop client to subscribe byte[0] which can receive all other messages."
Stopping a "subscribe to all" mode of the SUB client
For the ZMQ PUB/SUB Formal Communication Pattern archetype, the SUB client has to submit its subscription request (via zmq_setsockopt()).
The PUB side (a Game Server) has no option to do that from its side.
A newly created SUB socket starts in a no-subscription state, i.e. with an absolutely restrictive filter that lets no message pass through. (For further details on the methods for SUBSCRIBE / UNSUBSCRIBE, see below.)
The ZeroMQ specification details the setting for this:
int zmq_setsockopt ( void *socket,
                     int option_name,
                     const void *option_value,
                     size_t option_len
                   );
Caution: only ZMQ_SUBSCRIBE, ZMQ_UNSUBSCRIBE, and ZMQ_LINGER take effect immediately; other options are active only for subsequent socket bind/connects.
ZMQ_SUBSCRIBE: Establish message filter
The ZMQ_SUBSCRIBE option shall establish a new message filter on a ZMQ_SUB socket. Newly created ZMQ_SUB sockets shall filter out all incoming messages, therefore you should call this option to establish an initial message filter.
An empty option_value of length zero shall subscribe to all incoming messages.
A non-empty option_value shall subscribe to all messages beginning with the specified prefix.
Multiple filters may be attached to a single ZMQ_SUB socket, in which case a message shall be accepted if it matches at least one filter.
ZMQ_UNSUBSCRIBE: Remove message filter
The ZMQ_UNSUBSCRIBE option shall remove an existing message filter on a ZMQ_SUB socket. The filter specified must match an existing filter previously established with the ZMQ_SUBSCRIBE option. If the socket has several instances of the same filter attached the ZMQ_UNSUBSCRIBE option shall remove only one instance, leaving the rest in place and functional.
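The quoted filter semantics can be sketched in plain Python (an illustration, not the libzmq implementation): a message is delivered iff it matches at least one subscribed prefix, and the empty prefix matches everything.

```python
def sub_accepts(message: bytes, filters: list[bytes]) -> bool:
    """A SUB socket delivers a message iff it matches at least one
    subscribed prefix; an empty prefix (b"") matches every message."""
    return any(message.startswith(f) for f in filters)
```

With no filters at all (a freshly created SUB socket) nothing passes; subscribing to `b""` reproduces the "receive everything" behaviour the question wants to prevent.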
How to enforce ad-hoc, server-dictated ZMQ_SUBSCRIBE restrictions?
This is possible by extending the messaging layer and adding a control-mode socket that carries server-initiated settings for the client's ZMQ_SUB message filtering.
Upon receiving a new, server-dictated ZMQ_SUBSCRIBE/ZMQ_UNSUBSCRIBE setting, the ZMQ_SUB client-side code simply handles that request and calls zmq_setsockopt() accordingly.
FSA-driven grammars for this approach are rich in further possibilities, and will allow any Game Server / Game Community to go this way smoothly.
What pattern or socket should I use?
ZeroMQ is rather a library of LEGO-style elements to be assembled into a bigger picture.
Expecting such a smart library to have a one-size-fits-all ninja-element is, on closer look, an oxymoron.
So, to avoid a never-ending story of adding "although this ..." and "also that ...":
1. Review all requirements and list the features for the end-to-end scalable solution.
2. Design a messaging concept and validate that it meets all the listed requirements and covers all the features in [1].
3. Implement [2].
4. Test [3] and correct it until it meets the end-to-end specification [1] one-to-one.
5. Enjoy it. You have done it end-to-end right.

Forward message to next Round Robin routee

I have this Play app that connects to a distant server in order to consume a given API. In order to load balance my requests to the distant server, I connect multiple accounts to that same server. Each account can query the API a given number of times. Each account is handled by an Akka actor and these actors are behind an Akka Round Robin router. Thus when wanting to consume the distant API, I "ask" the RR router for the wanted info.
This implementation runs fine until one account gets disconnected. Basically, when an account is disconnected, its actor returns a given object that says "something went wrong with the connection", and then I send a second request to the RR router to be handled by another account.
My question is, instead of having to have the "retry" logic outside the router-routee group, is there a way to do it inside? I am thinking that for example at router level, define a logic that handles these "something was wrong with the connection" messages by automatically forwarding the request to the next routee to be handled by it, and only return a final response once all routees have been tried and none worked?
Does Akka provide a simple way of achieving this or should I just carry on with my implementation?
I'm not sure if I fully understand your design, but I think you should try the first-completed model supported by the ScatterGatherFirstCompleted routing logic.
router.type-mapping {
  ...
  scatter-gather-pool = "akka.routing.ScatterGatherFirstCompletedPool"
  scatter-gather-group = "akka.routing.ScatterGatherFirstCompletedGroup"
  ...
}
in the simple form:
RR -> SGFC-1 ---> Route
or possibly combined with a round-robin router:
      --> SGFC-1 ---> Route
RR ->
      --> SGFC-2 ---> Route
The same as in your proposal, the connections are represented by the routees. SGFC-1 and SGFC-2 should have access to the same pool of routees (connections), or to a part of the pool.
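The first-completed idea can be sketched in plain Python (hypothetical connection functions, not the Akka API): the same request is scattered to several connections at once and the first non-error reply wins, so a disconnected account no longer forces an explicit client-side retry.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def first_completed(request, connections):
    """Scatter `request` to all connections; return the first non-error reply."""
    with ThreadPoolExecutor(max_workers=len(connections)) as pool:
        futures = [pool.submit(conn, request) for conn in connections]
        for future in as_completed(futures):
            result = future.result()
            if result is not None:    # None stands for "connection was down"
                return result
    return None                       # every connection failed

down = lambda req: None               # disconnected account
up = lambda req: f"reply:{req}"       # healthy account
```

A dead connection simply loses the race; only if every connection fails does the caller see a failure, which is the "final response once all routees have been tried" behaviour the question asks for.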

Same routing behavior on different nodes/routers

I know that if I use a consistent-hashing group router, it will always route to the same registered routees.
So I wrote my application with a few routees on their own routee-nodes, and a public-node with a router, which is reachable by the client.
If the client sends a message, it is routed as it should be and it works fine.
Now what I want to do is add more public-nodes, each with its own router that provides the same sending/routing behavior as every other public-node.
What I mean is that it should not matter whether a client sends message XYZ to public-node A, B, or C. It should always go to the same routee-node.
At first I thought that Akka might provide this behavior by default, because:
I used a group and not a pool router, so every router knows the same routees
I learned that cluster nodes can be ordered
So I assumed that the routees list is ordered and that the different routers map the same keys to the same routees. But testing showed me that I was wrong.
So, is there a way in Akka to achieve this behavior? Thanks.
As I expected, this behavior should be the standard for consistent-hashing group routers, and it is a bug in the akka-cluster package (current version 2.3.0-RC1).
See this ticket and this google-group post for more details.
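The invariant the question expects can be illustrated with a plain-Python sketch (simple modulo hashing over a sorted list, not Akka's actual consistent-hash ring): as long as every public-node hashes over the same ordered routee list, a given message key always maps to the same routee-node.

```python
import hashlib

def pick_routee(key: str, routees: list[str]) -> str:
    """Deterministically map a message key to one routee.
    Sorting gives every router an identical ordering; a real consistent-hash
    ring would additionally limit re-mapping when routees join or leave."""
    ordered = sorted(routees)
    digest = hashlib.sha256(key.encode()).digest()
    index = int.from_bytes(digest[:8], "big") % len(ordered)
    return ordered[index]

# Two public-nodes may discover the same routees in different orders.
routees_seen_by_a = ["routee-2", "routee-1", "routee-3"]
routees_seen_by_b = ["routee-3", "routee-2", "routee-1"]
```

Because the mapping depends only on the key and the sorted membership, node A and node B agree on the destination for message XYZ without coordinating.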