Combining a RabbitMQ consumer and a Mojolicious websocket server - perl

I've got a Mojolicious server inspired by this example, and a RabbitMQ consumer inspired by this example. My plan is to combine them so that a web client can connect to the Mojolicious server and subscribe to some kind of updates. The Mojo server will check a RabbitMQ queue from time to time to see if there are any updates, and if there are, it will send the data to the connected websocket clients.
I'm struggling to see how this can be done. Do I put the RabbitMQ stuff inside the Mojo server stuff, or the other way around? How do I prevent the RabbitMQ consume from blocking incoming websocket connections? I guess I have to use a timeout on the consume, but then I might have to run it in a loop, which might block the websocket. Or have I misunderstood something? Maybe Mojolicious is not the right library to use?
I'm thinking that the server would check the RabbitMQ queue every 10 seconds while at the same time accepting websocket connections.
Anyone have some ideas on how to solve this? Some pseudo code or anything would be appreciated.
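One approach is to run both in the same process and let Mojolicious's event loop do the scheduling: serve the websocket as usual, and poll the queue with a recurring timer using basic.get, which returns immediately when the queue is empty, instead of a blocking consume. Here is a minimal, untested sketch; the Net::AMQP::RabbitMQ module and the queue name "updates" are assumptions, and any AMQP client with a non-blocking get would do:
use Mojolicious::Lite;
use Net::AMQP::RabbitMQ;

my $mq = Net::AMQP::RabbitMQ->new;
$mq->connect('localhost', { user => 'guest', password => 'guest' });
$mq->channel_open(1);

my %clients;    # currently connected websocket transactions

websocket '/updates' => sub {
    my $c  = shift;
    my $tx = $c->tx;
    $clients{$tx} = $tx;                 # hash key is the stringified ref
    $c->inactivity_timeout(300);
    $c->on(finish => sub { delete $clients{$tx} });
};

# Poll every 10 seconds; get() does a basic.get and returns undef when the
# queue is empty, so it never blocks the event loop waiting for messages.
Mojo::IOLoop->recurring(10 => sub {
    while (my $msg = $mq->get(1, 'updates')) {
        $_->send($msg->{body}) for values %clients;
    }
});

app->start;
The key point is that nothing in the timer callback blocks for long, so websocket connections keep being accepted between polls. If the per-message work is heavy, move it to a separate process and bridge the results back in.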

Related

Resource announcement - Message queuing registry on Kubernetes

I have an application which makes use of RabbitMQ messages; it sends messages. Other applications can react to these messages, but they need to know which messages are available on the system and what they semantically mean.
My message queuing system is RabbitMQ and RabbitMQ as well as the applications are hosted and administered using Kubernetes.
I am aware of
https://kubemq.io/: That seems to be an alternative to RabbitMQ?
https://knative.dev/docs/eventing/event-registry/: also an alternative to RabbitMQ, but with a meta-layer approach to integrate existing event sources? The documentation is not clear to me.
Is there a general-purpose "MQ interface" service available, a solution where I can register which messages are sent by an application, how the payload is technically and semantically set up, which serialization format is used, and under what circumstances errors will be sent?
Can I do this in Kubernetes YAML-files?
RabbitMQ does not have this in any kind of generic fashion, so you would have to write it yourself. Rabbit messages are just a series of bytes; there is no schema registry.
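One way to "write it yourself" is to make every message self-describing by stamping it with standard AMQP properties, and to keep the schema documents somewhere consumers can resolve them. A rough Java sketch; the exchange name, message type, and schema-version header are illustrative assumptions, not a definitive design:
import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import java.nio.charset.StandardCharsets;
import java.util.Map;

public class DescribedPublisher {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {
            // Standard AMQP properties carry the serialization format and the
            // semantic message type; a custom header pins the schema version.
            AMQP.BasicProperties props = new AMQP.BasicProperties.Builder()
                .contentType("application/json")
                .type("user.created")
                .headers(Map.of("schema-version", "1"))
                .build();
            channel.basicPublish("events", "user.created", props,
                "{\"id\":42}".getBytes(StandardCharsets.UTF_8));
        }
    }
}
Whether the registry itself is then a wiki page, a Git repository of JSON Schemas, or ConfigMaps in Kubernetes is an organizational choice; RabbitMQ will transport the metadata either way.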

Standalone Kafka Producer

I'm thinking about creating a standalone Kafka producer that runs as a daemon, takes messages via a socket, and sends them reliably to Kafka.
But I can't be the first one to think of this idea. The point is to avoid writing a Kafka producer in, for example, PHP or Node, and instead just deliver messages via a socket to a standalone daemon from these languages, which takes care of the delivery while the main application keeps doing its thing.
This daemon should take care of retrying delivery in case of outages and act as a single delivery point for all programs that run on the server.
Is this a good idea, or is writing producers in every language used the common approach? That can't be the case, right?
You should have a look at Kafka connectors.
Here is one of them:
Kafka Connect Socket Source
Here you can find how to use it:
https://www.baeldung.com/kafka-connectors-guide
Sample Configuration connect-socket-source.properties:
name=socket-connector
connector.class=org.apache.kafka.connect.socket.SocketSourceConnector
tasks.max=1
topic=topic
schema.name=socketschema
port=12345
batch.size=100
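If you go the standalone-worker route, launching it looks roughly like this (paths are relative to the Kafka distribution, and the worker properties file is the stock one that ships with Kafka):
bin/connect-standalone.sh config/connect-standalone.properties connect-socket-source.properties
Each message received on port 12345 should then land on the configured topic.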

Use Kafka in AWS EBS microservice docker environment to avoid losing user requests and handle more concurrent hits

Currently, I am using the AWS EBS microservice Docker environment to deploy microservices written in Scala and Akka. If any one of the microservice Docker containers crashes and is restarted, we lose the user requests and the service does not return any response for those cases. My current architecture can handle up to 1000 concurrent requests without any issues. To avoid this issue, I am planning to store and retrieve all the requests and responses using Kafka.
So I want to use Kafka to manage the requests and responses of all my web services, and include a separate service or web socket to process all the requests and store the responses back to Kafka. In this case, if my core processing container crashes or is restarted, it won't lose any requests or responses at any point in time; it will start reading the requests from Kafka again and process them.
All the web services will store the request in the relevant Kafka topic, get the response from the relevant response topic, and return it as the API response. I have found the following library for using Kafka from Scala web services.
https://github.com/akka/reactive-kafka/
Please check the attached architecture diagram, which I am going to use to efficiently handle a large number of concurrent requests from client apps. Is it a good approach to proceed with? Do I need to change anything in my architecture?
I created this architecture after doing more research on Kafka and microservice Docker deployments. Please let me know if anything is wrong with it.
This is Kafka's bread and butter, so I don't think you're going to run into any architectural issues with this. Just be aware that there is a pretty large amount of operational overhead with Kafka. A good resource for getting started is Kafka: The Definitive Guide, from Confluent (https://www.confluent.io/wp-content/uploads/confluent-kafka-definitive-guide-complete.pdf). It goes into detail on a lot of common operational issues that the official documentation doesn't mention.

Can I use a Kafka queue in my REST web service

I have a REST-based application deployed on a server (Tomcat).
Every request that comes to the server takes 1 second to serve. Now I have an issue: sometimes the server receives more requests than it is capable of serving, which makes the server unresponsive. I was thinking I could store the requests in a queue so that the server can pull a request, serve it, and so handle the peak-time issue.
Now I was wondering: can Kafka be helpful for this? If yes, any pointers on where I can start?
You can use Kafka (or any other messaging system for this, e.g. ActiveMQ, RabbitMQ, etc.).
When the web service receives a request, add the request (with all the details required to process it) to a Kafka topic using a Kafka producer.
A separate service (holding a Kafka consumer) will read from the topic (the queue) and process the requests.
If you need to send a message to the client when its request has been processed, the server can push the information to the client using a WebSocket (or the client can poll for the request status; however, this needs a request-status endpoint and will put load on that endpoint).
Apache Kafka would be helpful in your case. If you use a Kafka broker, it will allow you to cope with peaks of requests: the requests will be stored in a queue, as you mentioned, and be processed by your server at its own speed.
As you are using Tomcat, I guess you developed your server in Java. Apache Kafka provides a Java API which is quite easy to use.
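As a rough sketch of that flow (the topic name, broker address, and class name are illustrative assumptions, not a definitive implementation), the servlet layer hands each request to a Kafka producer and returns immediately, while a separate consumer service works through the topic at its own pace:
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class RequestQueue {
    private final Producer<String, String> producer;

    public RequestQueue() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("acks", "all");  // wait until the broker has persisted the request
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        producer = new KafkaProducer<>(props);
    }

    // Called from the REST endpoint: enqueue the request and return at once;
    // the consumer service picks it up from the "incoming-requests" topic.
    public void enqueue(String requestId, String payload) {
        producer.send(new ProducerRecord<>("incoming-requests", requestId, payload));
    }
}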

Storm cluster not working in Production mode

I have a Storm topology which runs on two nodes. One is the nimbus and the other is the supervisor.
A proxy which is not part of storm accepts an HTTP request from a client and passes it to the storm topology.
The topology is like this:
1. The proxy passes data to a storm spout.
2. The spout passes data to multiple bolts.
3. The result is passed back to the proxy by the last bolt.
I am running the proxy and passing data to Storm. I am able to connect a socket to the listener on the topology side, but the data emitted by the spout is shown as 0 in the UI. The same topology works fine in local mode.
I thought it was a problem with the supervisor, but the supervisor seems to be running fine, because I am able to see the supervisor description and the individual spouts and bolts. But none of them emit anything.
Now I am confused whether the problem is the data being passed to the wrong machine or something else. In order to communicate with the spout, I'm creating the socket from the proxy as follows:
InetAddress stormInetAddr = InetAddress.getByName("198.18.17.16");
int stormPort = 4321;
Socket stormSocket = new Socket(stormInetAddr, stormPort);
Here 198.18.17.16 is the nimbus IP, and 4321 is the port where data is expected.
I tried giving the supervisor IP here, and it didn't connect; however, this one does.
Now the proxy waits for the output on a specific port.
On the other side, after processing, data is read from the last bolt, and there seems to be no activity from the cluster. Yet I am getting a response, which is basically the same request I had sent with some jumbled-up data, and this response is supposed to be sent by the last bolt to a specific port which I had defined. So I do get data back, but the cluster shows no activity. I know this is very vague, but does anyone have any idea what's happening?
It sounds like Storm is working fine, but your proxy/network settings are not. If it were a Storm error, you should see exceptions in the Nimbus UI and/or in the Storm supervisor logs.
Consider temporarily shutting down Storm and using nc -l 4321 on the supervisor machines to verify that your proxy is working as expected.
However...
You may have a fundamental flaw in your model. Storm's spouts are pull-based, so it seems odd to have incoming requests pushed to them. This is possible, of course, if you have your spouts start listening when they spin up and simply queue the requests. However, this presents another challenge for your model: you will likely have multiple spouts running on a single machine and they cannot share the same port (4321).
If you want to meld these two worlds of push and pull, consider using a Kafka Spout.
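For instance, with the storm-kafka-client module the proxy would publish each request to a Kafka topic and the topology would pull from it. A rough sketch, where the broker address, topic name, and ProcessBolt are illustrative assumptions:
import org.apache.storm.Config;
import org.apache.storm.StormSubmitter;
import org.apache.storm.kafka.spout.KafkaSpout;
import org.apache.storm.kafka.spout.KafkaSpoutConfig;
import org.apache.storm.topology.TopologyBuilder;

public class ProxyTopology {
    public static void main(String[] args) throws Exception {
        KafkaSpoutConfig<String, String> spoutConf =
            KafkaSpoutConfig.builder("localhost:9092", "proxy-requests").build();

        TopologyBuilder builder = new TopologyBuilder();
        // Each spout instance is an ordinary Kafka consumer in one group, so
        // parallel spouts no longer compete for a single listening port.
        builder.setSpout("proxy-spout", new KafkaSpout<>(spoutConf), 2);
        builder.setBolt("process", new ProcessBolt())  // ProcessBolt: your existing bolt
               .shuffleGrouping("proxy-spout");

        StormSubmitter.submitTopology("proxy-topology", new Config(),
                                      builder.createTopology());
    }
}
The last bolt can then write its result to a response topic that the proxy consumes, which also removes the hard-coded reply port.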