Akka TCP server-client heartbeat message blocked by scheduler processing - Scala

I am using an Akka cluster (server) and exchanging a heartbeat message with a client over Akka TCP every 5 seconds.
The heartbeat works fine as long as I am not using the scheduler.
But when I start 4-5 schedulers, the server stops receiving the heartbeat messages from the client (TCP connection) while they run; after the scheduler processing finishes, I get 4-5 heartbeat messages at the same time.
The Akka scheduler is blocking the actor's other processing (buffer reading etc.).
I have already tried the following, but I am still facing the same issue:
different dispatchers
creating a new actor and moving the scheduler call into that separate actor
running on an 8-core machine
fork-join-executor and thread-pool-executor
changing Tcp-SO-ReceivedBufferSize and Tcp-SO-SendBufferSize to 1024 or 2048, which didn't work
Tcp-SO-TcpNoDelay
Kindly help.
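For illustration, one way to keep the TCP connection actor responsive is to wrap the heavy scheduled job in a Future running on a dedicated blocking dispatcher, so the tick handler itself returns immediately. This is only a minimal sketch: the dispatcher name my-blocking-dispatcher, the "tick" message and doHeavyWork() are assumed placeholders, not part of the actual code.

import akka.actor.{Actor, ActorSystem, Props}
import scala.concurrent.{ExecutionContext, Future}
import scala.concurrent.duration._

// Assumed to exist in application.conf:
// my-blocking-dispatcher {
//   type = Dispatcher
//   executor = "thread-pool-executor"
//   thread-pool-executor { fixed-pool-size = 8 }
// }

class SchedulerWorker extends Actor {
  // Heavy work runs on the dedicated dispatcher, so the default dispatcher
  // (and the actor reading Tcp.Received heartbeats) keeps draining its mailbox.
  private implicit val blockingEc: ExecutionContext =
    context.system.dispatchers.lookup("my-blocking-dispatcher")

  def receive = {
    case "tick" =>
      Future { doHeavyWork() } // placeholder for the long-running scheduler processing
  }

  private def doHeavyWork(): Unit = Thread.sleep(3000) // simulated blocking job
}

object SchedulerDemo extends App {
  val system = ActorSystem("demo")
  val worker = system.actorOf(Props[SchedulerWorker], "scheduler-worker")
  import system.dispatcher
  system.scheduler.schedule(0.seconds, 5.seconds, worker, "tick")
}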

Related

Message count is not zero even after all messages are consumed and acknowledged

We have containerized ActiveMQ Artemis 2.16.0 and deployed it as a K8s deployment for KEDA.
We use STOMP via the stomp.py Python module. The ACK mode is set to client-individual and consumerWindowSize = 0 on the connection. We acknowledge each message promptly as soon as we read it.
The problem is that sometimes the message count in the web console does not drop to zero even after all the messages have actually been consumed and acknowledged. When I browse the queue, I don't see any messages in it. This causes KEDA to spin up pods unnecessarily. Please refer to the screenshots attached to the JIRA issue.
I fixed the issue in my application code. My requirement was that one queue listener should consume only one message and then exit gracefully. So, as soon as I sent the ACK for the consumed message, I disconnected the connection instead of waiting for the sleep duration before disconnecting.
Thanks, Justin, for spending time on this.

Lenses MQTT Source Connector doesn't send PINGREQ to MQTT Broker when idle in its keep-alive time

PROBLEM -
I have created an MQTT Source Connector using Lenses. The connector works seamlessly as long as data is being published to my MQTT Mosquitto broker. But when I stop publishing data and nothing is sent to the MQTT source connector for about 4-5 minutes, and I then start publishing data again, the data does not reach my source connector even though the connector is still in the running state. To resolve this I have to restart my connector every time, which is bad.
METHODS I HAVE ALREADY TRIED -
Even though the client id is unique, I still changed it every time to see if that was the issue. It didn't work.
I tried increasing the keep-alive interval to 10 minutes, but that didn't work either.
There were no error logs in the Kafka Connect logs for a long time, but once in 15-20 tries I received a socket connection error.
UPDATE
Upon digging more into the issue, I found that my source connector (acting as an MQTT client) was not sending any PINGREQ packets to my Mosquitto MQTT broker when it was idle within its keep-alive interval, while other clients connected to the broker were sending their PINGREQ packets when idle. Hence the connection between the source connector and the MQTT broker was being dropped.
Do I need to specify a property explicitly in my MQTT Source Connector properties file to make it send a PINGREQ packet to the MQTT broker within the keep-alive time, or does the connector handle that itself?
After a lot of research, I found that the default keep-alive time the Lenses connector was using (5000) is in seconds. 5000 seconds was far too long, so the MQTT broker was disconnecting the client before the first PINGREQ was even due. I reduced the keep-alive time to 5 seconds by adding the line connect.mqtt.keep.alive=5 to my connector properties file. This resolved the issue.
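For reference, the relevant part of the connector properties file then looks roughly like the sketch below. The connector class, broker host, client id, and KCQL mapping are illustrative placeholders; only connect.mqtt.keep.alive=5 is the actual fix described above.

name=mqtt-source
connector.class=com.datamountaineer.streamreactor.connect.mqtt.source.MqttSourceConnector
connect.mqtt.hosts=tcp://mosquitto-host:1883
connect.mqtt.client.id=my-unique-client-id
connect.mqtt.kcql=INSERT INTO kafka_topic SELECT * FROM mqtt/topic
# keep-alive is interpreted in seconds, so 5 means a PINGREQ roughly every 5 seconds when idle
connect.mqtt.keep.alive=5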

How many Amazon SQS ReceiveMessageRequests can I perform on a single SQS queue using multiple consumers?

I have an SQS queue "someQueue" and I have implemented 100 consumers for it.
I set wait_time_seconds to 20 seconds.
Now I am getting
"com.amazonaws.SdkClientException: Unable to execute HTTP request: Timeout waiting for connection from pool"
Is there any limit on receive message requests?
Or am I missing something?
The AWS default connection pool size is 50 connections per client.
When I reduced the number of concurrent receive requests to fewer than 50, it worked for me.
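If you really need ~100 concurrent pollers on one client, the alternative is to raise the client's connection pool size. A rough sketch with the AWS SDK for Java v1; the queue name and numbers are taken from the question, everything else is illustrative:

import com.amazonaws.ClientConfiguration
import com.amazonaws.services.sqs.AmazonSQSClientBuilder
import com.amazonaws.services.sqs.model.ReceiveMessageRequest

object SqsPollerSketch extends App {
  // Raise the HTTP connection pool above the default of 50 so that
  // 100 long-polling consumers sharing this client do not exhaust it.
  val clientConfig = new ClientConfiguration().withMaxConnections(100)

  val sqs = AmazonSQSClientBuilder.standard()
    .withClientConfiguration(clientConfig)
    .build()

  val queueUrl = sqs.getQueueUrl("someQueue").getQueueUrl

  // Long-poll for up to 20 seconds per request, as in the question.
  val request = new ReceiveMessageRequest(queueUrl)
    .withWaitTimeSeconds(20)
    .withMaxNumberOfMessages(10)

  val messages = sqs.receiveMessage(request).getMessages
  messages.forEach(m => println(m.getBody))
}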

Scala Akka TCP Actors

I have a question about the Akka 2.4 TCP API.
I am running a server with two TCP listeners in Akka TCP: one for incoming clients and one for my server's worker nodes (which are on other computers/IPs). I currently have one connection to a client and one connection to a worker node.
When receiving a message from a client, I want to pass some of that information on to the worker node, but the TCP Akka actor representing the worker node connection doesn't seem to like it when I send it messages from the thread running the client Akka actor.
So, as an example, if the client sends a message to delete a file, and partitions of that file are on a worker node, I want to send a TCP message to that worker node telling it to delete the partitions.
How can I, from the client actor, send a message to the worker node actor that it should pass on to the worker node server through TCP? When I just do the regular workerActorRef ! msg, it doesn't receive it at all and no logging is shown.
I hope this question isn't unclear, but essentially I want the workerActorRef to in some way have functionality similar to "send this through the TCP socket".
Cheers,
Johan
Have you looked at Akka Remoting at all? If used properly it should be able to achieve what you want. You might want to look into Clustering too as it's built on top of Remoting.
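If you stay on the plain Akka I/O TCP API instead of Remoting, the usual pattern is a small handler actor that owns the worker connection and translates application messages into Tcp.Write. A minimal sketch under that assumption; DeletePartitions and the wire format are made up for illustration:

import akka.actor.{Actor, ActorRef}
import akka.io.Tcp
import akka.util.ByteString

// Application-level message, placeholder for illustration.
case class DeletePartitions(file: String)

// `connection` is the connection ActorRef received in Tcp.Connected and
// registered with via Tcp.Register(self).
class WorkerConnectionHandler(connection: ActorRef) extends Actor {
  def receive = {
    case DeletePartitions(file) =>
      // Translate the message into bytes and push it through the TCP socket.
      connection ! Tcp.Write(ByteString(s"DELETE_PARTITIONS $file\n"))
    case Tcp.Received(data) =>
      // Replies from the worker node server arrive here.
      println(s"worker replied: ${data.utf8String}")
    case _: Tcp.ConnectionClosed =>
      context.stop(self)
  }
}

The client actor then only needs the handler's ActorRef and can send it DeletePartitions("someFile") like any other message; if workerActorRef ! msg produces no logging at all, it is also worth checking that the ref actually points at this handler (or the registered connection actor) rather than at a dead or unregistered actor.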

Storm cluster not working in Production mode

I have a Storm topology running on two nodes. One is the nimbus and the other is the supervisor.
A proxy which is not part of storm accepts an HTTP request from a client and passes it to the storm topology.
The topology is like this:
1. The proxy passes data to a storm spout.
2. The spout passes data to multiple bolts.
3. The result is passed back to the proxy by the last bolt.
I am running the proxy and passing data to Storm. I am able to connect a socket to the listener on the topology side. The data emitted by the spout is shown as 0 in the UI. The same topology works fine in local mode.
I thought it was a problem with the supervisor, but the supervisor seems to be running fine because I can see the supervisor description and the individual spouts and bolts. But none of them emit anything.
Now I am confused whether the problem is the data being passed to the wrong machine or something else. In order to communicate with the spout, I'm creating the socket from the proxy as follows:
InetAddress stormInetAddr = InetAddress.getByName("198.18.17.16"); // nimbus IP
int stormPort = 4321;                                              // port where the spout is expected to listen
Socket stormSocket = new Socket(stormInetAddr, stormPort);         // connect from the proxy
Here 198.18.17.16 is the nimbus IP. And 4321 is the port where data is being expected.
I tried giving the supervisor IP here, and it didn't connect. However, the nimbus IP does.
Now the proxy waits for the output on a specific port.
On the other side, after processing, data is read from the last bolt, and there seems to be no activity from the cluster. Yet I am getting a response, which is basically the same request I had sent with some of the data jumbled up, and this response is supposed to be sent by the last bolt to a specific port which I had defined. So I DO get data back, but the cluster shows NO ACTIVITY. I know this is very vague, but does anyone have any idea as to what's happening?
It sounds like Storm is working fine, but your proxy/network settings are not. If it were a Storm error, you would see exceptions in the Nimbus UI and/or in the Storm supervisor logs.
Consider temporarily shutting down Storm and using nc -l 4321 on the supervisor machines to verify that your proxy is working as expected.
However...
You may have a fundamental flaw in your model. Storm's spouts are pull-based, so it seems odd to have incoming requests pushed to them. It is possible, of course, if you have your spouts start listening when they spin up and simply queue the requests. However, this presents another challenge for your model: you will likely have multiple spouts running on a single machine, and they cannot share the same port (4321).
If you want to meld these two worlds of push and pull, then consider using a Kafka spout, as in the sketch below.
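To make that concrete, the proxy would publish each request to a Kafka topic and the topology would pull from it with a Kafka spout, roughly like the sketch below (storm-kafka-client API; the broker address, topic name, and the trivial bolt are placeholders):

import org.apache.storm.{Config, StormSubmitter}
import org.apache.storm.kafka.spout.{KafkaSpout, KafkaSpoutConfig}
import org.apache.storm.topology.base.BaseBasicBolt
import org.apache.storm.topology.{BasicOutputCollector, OutputFieldsDeclarer, TopologyBuilder}
import org.apache.storm.tuple.Tuple

// Placeholder bolt that just logs the request payload pulled from Kafka.
class ProcessBolt extends BaseBasicBolt {
  override def execute(tuple: Tuple, collector: BasicOutputCollector): Unit =
    println(s"processing request: ${tuple.getStringByField("value")}")
  override def declareOutputFields(declarer: OutputFieldsDeclarer): Unit = ()
}

object RequestTopology extends App {
  // The proxy pushes each HTTP request onto the "requests" topic; the spout pulls it.
  val spoutConfig = KafkaSpoutConfig.builder("kafka-broker:9092", "requests").build()

  val builder = new TopologyBuilder
  builder.setSpout("request-spout", new KafkaSpout[String, String](spoutConfig), 2)
  builder.setBolt("process-bolt", new ProcessBolt, 4).shuffleGrouping("request-spout")

  StormSubmitter.submitTopology("request-topology", new Config, builder.createTopology())
}

This also sidesteps the port-sharing problem: the spouts compete for Kafka partitions instead of all trying to bind the same socket.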