Scala Akka | How to use the allButSelf property inside a Cluster? - scala

Based on the Akka documentation for Cluster, I would like to publish a broadcast message to all nodes in the cluster except myself. Currently it always sends the broadcast message to me as well.
val mediator = DistributedPubSub(context.system).mediator
mediator ! Subscribe("content", self) // subscribe to the topic named "content"
mediator ! Publish("content", "msg") // broadcasts the msg to all nodes, including myself
How exactly can I set the allButSelf property mentioned in the documentation?
https://doc.akka.io/docs/akka/current/distributed-pub-sub.html

You want to register your actor with the mediator via Put and use SendToAll instead of Publish:
mediator ! DistributedPubSubMediator.Put(testActor)
mediator ! DistributedPubSubMediator.SendToAll(path, msg, allButSelf = true) // defaults to false, which would also deliver to the local node
See the example here https://github.com/akka/akka/blob/0e4d41ad33dbeb00b598cb75b4d29899371bdc8c/akka-cluster-tools/src/test/scala/akka/cluster/pubsub/DistributedPubSubMediatorRouterSpec.scala#L56
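For context, a minimal sketch of how those pieces fit together; the actor and path names (BroadcastingActor, /user/broadcaster) are illustrative, not from the question:
import akka.actor.{Actor, Props}
import akka.cluster.pubsub.DistributedPubSub
import akka.cluster.pubsub.DistributedPubSubMediator.{Put, SendToAll}

class BroadcastingActor extends Actor {
  private val mediator = DistributedPubSub(context.system).mediator

  // Register this actor with the mediator under its own actor path.
  mediator ! Put(self)

  def receive = {
    case "broadcast" =>
      // Deliver to the actor registered at this path on every node except this one.
      mediator ! SendToAll("/user/broadcaster", "msg", allButSelf = true)
    case msg: String =>
      println(s"received: $msg")
  }
}

// The path passed to SendToAll must match the name the actor was started with:
// system.actorOf(Props[BroadcastingActor], "broadcaster")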

Related

Send message to dynamic kafka topic in helidon

In Quarkus/SmallRye, we can send messages to a dynamic topic. Please check the link below for an example.
https://beyondvelocity.blog/2022/01/05/dynamic-kafka-topics-in-quarkus/
Kindly suggest how we can implement the same in Helidon.
I could not find an equivalent API or classes in Helidon for sending messages to a dynamic topic.
There's nothing preventing you from using the KafkaProducer and ProducerRecord classes directly and calling the send method with any topic String parameter.
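For instance, a minimal Scala sketch of that direct approach; the broker address and topic name are placeholders:
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}
import org.apache.kafka.common.serialization.StringSerializer

val props = new Properties()
props.put("bootstrap.servers", "localhost:9092") // placeholder broker
props.put("key.serializer", classOf[StringSerializer].getName)
props.put("value.serializer", classOf[StringSerializer].getName)

val producer = new KafkaProducer[String, String](props)
// The topic is just a String, so it can be computed at runtime.
val dynamicTopic = s"orders-${java.time.LocalDate.now}"
producer.send(new ProducerRecord[String, String](dynamicTopic, "key", "value"))
producer.close()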
Otherwise, just create the Channel with the topic name when you need a dynamic value:
KafkaConnector kafkaConnector = KafkaConnector.create();
messaging = Messaging.builder()
        .publisher(
                Channel.<String>builder()
                        .subscriberConfig(KafkaConnector.configBuilder()
                                .bootstrapServers(kafkaServer)
                                .topic("some random string")
                                .keySerializer(StringSerializer.class)
                                .valueSerializer(StringSerializer.class)
                                .build())
                        .build(),
                Multi.just("test1", "test2").map(Message::of) // example messages
        )
        .connector(kafkaConnector)
        .build()
        .start();
https://helidon.io/docs/v2/#/se/reactivemessaging/04_kafka

How to limit the number of actors of a particular type?

I've created an actor to send messages to a chat server. However, the chat server only permits 5 connections per user. If I hammer my Scala server, I get error messages because my chat clients get disconnected.
So how can I configure Akka so that my XmppSenderActors use a maximum of 5 threads? I don't want to restrict the rest of the actor system, only this actor (at the path /XmppSenderActor/).
I'm trying this config since I think it's the dispatcher I need to configure, but I'm not sure:
akka.actor.deployment {
  /XmppSenderActor {
    dispatcher = xmpp-dispatcher
  }
  xmpp-dispatcher {
    fork-join-executor.parallelism-min = 2
    fork-join-executor.parallelism-max = 3
  }
}
This gives me an error though: akka.ConfigurationException: Dispatcher [xmpp-dispatcher] not configured for path akka://sangria-server/user/XmppSenderActor
I would probably try to configure a Router instead.
http://doc.akka.io/docs/akka/2.0/scala/routing.html
A dispatcher seems to deal with delivering messages to the inbox rather than the actual number of Actor targets.
This configuration in particular could work for you:
akka.actor.deployment {
  /router {
    router = round-robin
    nr-of-instances = 5
  }
}
The nr-of-instances setting will create 5 children from the get-go and should therefore meet your needs.
You might need to find the right Router implementation though.
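(As an aside, the ConfigurationException above usually means the dispatcher block has to sit at the top level of the config file, not nested inside akka.actor.deployment.) For the router approach, a minimal sketch of wiring the deployment block above to actual actors, assuming XmppSenderActor is the actor class from the question:
import akka.actor.{ActorSystem, Props}
import akka.routing.FromConfig

val system = ActorSystem("sangria-server")
// FromConfig tells Akka to take the router type and nr-of-instances
// from the deployment section matching this actor's path ("/router").
val router = system.actorOf(Props[XmppSenderActor].withRouter(FromConfig()), "router")

// Messages are distributed round-robin over the 5 children,
// capping the number of concurrent senders at 5.
router ! "some chat message"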

vertX eventBus consumer listens to all addresses

I'd like to write a catch-all eventBus consumer. Is this possible?
eB = vertx.eventBus();
MessageConsumer<JsonObject> consumer = eB.consumer("*"); // What is the catch-all address???
consumer.handler(message -> {
    Log.info("Received: " + message.body().toString());
});
A solution to your problem might be an interceptor.
vertx.eventBus().addInterceptor(context -> {
    System.out.println("LOG: " + context.message().body().toString());
    context.next(); // pass the message on, otherwise delivery stops here
});
This handler will log every message that travels over the event bus in Vert.x.
Reference is here:
http://vertx.io/docs/apidocs/io/vertx/rxjava/core/eventbus/EventBus.html#addInterceptor-io.vertx.core.Handler-
Also, the version of vertx-core I'm using is 3.3.2; I think the interceptor functionality is not available in older versions (e.g. 3.0.0).
Having looked through the Java code, I don't think this is possible.
Vert.x stores event bus consumers in a MultiMap looking like:
AsyncMultiMap<String, ServerID>
where the String key is the consumer address.
And as you'd guess, Vert.x just does a map.get(address) to find the relevant consumers.
Update after OP comment
While I think your use case is valid, I think you're going to have to roll something yourself.
As far as I can see, Vert.x doesn't store consumers of send and publish separately. It's all in one MultiMap. So it would be inadvisable to try to register consumers for all events.
If someone does an eventBus.send(), and Vert.x selects your auditing consumer, it will be the only consumer receiving the event, and I'm going to guess that's not what you want.
I don't know if that's possible, but according to the documentation you can attach a listener to the bridge events to know when a publish, send, socket open, or socket close is invoked:
sockJSHandler.bridge(options, be -> {
    if (be.type() == BridgeEvent.Type.PUBLISH || be.type() == BridgeEvent.Type.RECEIVE) {
        Log.info("Received: " + be.getRawMessage()); // the event itself carries the raw message
    }
    be.complete(true);
});

Broadcast messages using JMSComponent

I have a problem where I have to broadcast messages to different output locations. I am using JmsComponent to configure my output queues. My output queue configuration looks something like this:
ConnectionFactory factory = createOrGetConnectionFactory(brokerUrl);
JmsConfiguration jmsConfiguration = new JmsConfiguration();
jmsConfiguration.setPreserveMessageQos(true);
jmsConfiguration.setConnectionFactory(factory);
counter++;
outputLocations = new StringBuilder("hubOutput" + counter + ":queue://queueName");
JmsComponent component = new JmsComponent();
component.setConcurrentConsumers(5);
component.setConfiguration(jmsConfiguration);
component.setConnectionFactory(factory);
// Add the new JMS component to the context. This is done so that output locations
// sharing the same queue name can be differentiated by component name in the Camel registry.
getContext().addComponent("hubOutput" + counter, component);
JmsEndpoint endpoint = (JmsEndpoint) component.createEndpoint(outputLocations.toString());
endpoint.setConfiguration(jmsConfiguration);
I have a camel route for broadcasting the messages to the output queues.
from(fromLocation)
    .setHeader("hubRoutesList", constant(hubUrl))
    .log(urlToLog)
    .setExchangePattern(ExchangePattern.InOnly)
    .multicast()
    .parallelProcessing()
    .to(hubUrl.split(","));
All the output queues have different broker URL but same queue name.
The code works normally, but if one of the queues is down, the message is not broadcast to the other queues either.
Kindly help me with this.
Thanks,
Richa
You can use recipientList instead of .to(hubUrl.split(",")).
With the option stopOnException=false, which is the default value, forwarding of the messages to the other endpoints will not stop even if one of your queues is down.
See http://camel.apache.org/recipient-list.html for more information.
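For illustration, a minimal sketch of that suggestion in a Scala RouteBuilder; fromLocation and hubUrl are the values from the question and are assumed to be in scope:
import org.apache.camel.ExchangePattern
import org.apache.camel.builder.RouteBuilder

class BroadcastRoute(fromLocation: String, hubUrl: String) extends RouteBuilder {
  override def configure(): Unit = {
    from(fromLocation)
      .setExchangePattern(ExchangePattern.InOnly)
      // recipientList splits hubUrl on "," and sends a copy to each endpoint;
      // with the default stopOnException=false, one dead broker does not
      // prevent delivery to the remaining queues.
      .recipientList(constant(hubUrl), ",")
      .parallelProcessing()
  }
}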

Consumer Poll Rate with Akka, SQS, and Camel

A project I'm working on requires the reading of messages from SQS, and I decided to use Akka to distribute the processing of these messages.
Since SQS is supported by Camel, and Akka has built-in support for Camel via the Consumer class, I imagined it would be best to implement the endpoint and read messages this way, though I had not seen many examples of people doing so.
My problem is that I cannot poll my queue quickly enough to keep it empty, or near empty. What I originally thought was that I could get a Consumer to receive messages over Camel from SQS at a rate of X/s, and from there I could simply create more Consumers to get up to the rate at which I needed messages processed.
My Consumer:
import akka.camel.{CamelMessage, Consumer}

class MyConsumer() extends Consumer {
  def endpointUri = "aws-sqs://my_queue?delay=1&maxMessagesPerPoll=10&accessKey=myKey&secretKey=RAW(mySecret)"

  var count = 0

  def receive = {
    case msg: CamelMessage =>
      count += 1
    case _ =>
      println("Got something else")
  }

  override def postStop() {
    println("Count for actor: " + count)
  }
}
As shown, I've set delay=1 as well as maxMessagesPerPoll=10 to improve the rate of messages, but I'm unable to spawn multiple consumers with the same endpoint.
I read in the docs that "By default endpoints are assumed not to support multiple consumers," and I believe this holds true for SQS endpoints as well: spawning multiple consumers gives me only one working consumer. After running the system for a minute, one actor outputs Count for actor: x while the others output Count for actor: 0.
In case it's useful: I'm able to read approximately 33 messages/second with the current implementation on a single consumer.
Is this the proper way to read messages from an SQS queue in Akka? If so, is there a way I can scale this outward to bring my rate of message consumption closer to 900 messages/second?
Sadly Camel does not currently support parallel consumption of messages on SQS.
http://camel.465427.n5.nabble.com/Amazon-SQS-listener-as-multi-threaded-td5741541.html
To address this, I've written my own Actor to poll batches of messages from SQS using the aws-java-sdk.
// (requires java.util.ArrayList, scala.collection.JavaConverters._, and the
// DeleteMessageBatch* classes from com.amazonaws.services.sqs.model)
def receive = {
  case BeginPolling =>
    // re-queue the poll message asynchronously before processing this batch
    self ! BeginPolling
    // traverse the response
    val deleteMessageList = new ArrayList[DeleteMessageBatchRequestEntry]
    val messages = sqs.receiveMessage(receiveMessageRequest).getMessages
    messages.asScala.foreach { node =>
      deleteMessageList.add(new DeleteMessageBatchRequestEntry(node.getMessageId, node.getReceiptHandle))
      //log.info("Node body: {}", node.getBody)
      filterSupervisor ! node.getBody
    }
    if (deleteMessageList.size() > 0) {
      val deleteMessageBatchRequest = new DeleteMessageBatchRequest(queueName, deleteMessageList)
      sqs.deleteMessageBatch(deleteMessageBatchRequest)
    }
  case _ =>
    log.warning("Unknown message")
}
Though I'm not certain this is the best implementation, and it could of course be improved so that requests don't constantly hit an empty queue, it does suit my current need of being able to poll messages from the same queue.
Getting about 133 (messages/second)/actor from SQS with this.
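Since throughput here scales per actor, one way to go faster is simply to run several of these pollers side by side. A rough sketch, where SqsPollingActor is a hypothetical name for the polling actor above and BeginPolling is its poll message:
import akka.actor.{ActorSystem, Props}

val system = ActorSystem("sqs-pollers")
// Each actor polls the same queue independently; four of them would put
// the theoretical rate near 4 x 133 = 532 messages/second.
(1 to 4).foreach { i =>
  val poller = system.actorOf(Props[SqsPollingActor], s"sqs-poller-$i")
  poller ! BeginPolling
}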
Camel 2.15 supports concurrentConsumers, though I'm not sure how useful this is: I don't know whether akka-camel supports 2.15, and I don't know whether having one Consumer actor makes a difference even if there are multiple consumers.
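If akka-camel does end up pulling in Camel 2.15+, the option would presumably just go on the endpoint URI; a hypothetical, untested sketch:
import akka.camel.{CamelMessage, Consumer}

class MyConcurrentConsumer extends Consumer {
  // concurrentConsumers is the Camel 2.15+ aws-sqs option mentioned above
  def endpointUri = "aws-sqs://my_queue?concurrentConsumers=10&maxMessagesPerPoll=10" +
    "&accessKey=myKey&secretKey=RAW(mySecret)"

  def receive = {
    case msg: CamelMessage => // process the message
  }
}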