I'm sending and receiving messages on an MSMQ queue. It works fine under low load, but as soon as I pump it up to 100 messages a minute, after a few minutes I get the following when I try to read off the queue:
System.Exception: Stream was not readable.
at System.IO.BinaryReader..ctor(Stream input, Encoding encoding)
at System.IO.BinaryReader..ctor(Stream input)
Any ideas on how I go about resolving an issue like this?
100 messages per minute is trivial for MSMQ... are you 100% sure the exception isn't coming from somewhere else? More code would help. How exactly are you reading from the queue? Are you writing to it from the same process or thread?
I'm sending data daily to my ELK stack via https://metacpan.org/pod/Search::Elasticsearch::Client::7_0::Bulk
Sometimes, and more often recently, I receive a "Data too large" error. The first part of my data is received, but after this error my sending script stops and I end up with incomplete data.
As far as I understand (correct me if I'm wrong), this happens when my stack runs into memory issues while processing the data it has already received. I assumed that after some time I could send the rest of the data, but the next day the same issue occurs: the first batch of my data is processed, the rest rejected with "Data too large".
I saw that I can add an "on_error" callback, but I have no clue what I can do in it. My idea would be to implement a delay and retry after some time.
Can anyone give me a hint on how to achieve that?
Are there any ideas on how to avoid the issue in the first place? I already increased the heap space some time ago, but after 2 months the issue reoccurred.
You'd need to check your Elasticsearch logs and the full response that Elasticsearch sends back (e.g. was it a 429?). However, heap pressure can cause this, and you'd probably need to dig into why you are experiencing it.
The other option is to reduce the size of the requests you are sending.
Update: Remembering my "experience" with Java, I simply restarted my ELK stack and the next import went through smoothly.
So even though 512m of memory seems a bit low, it worked after a restart. Will check again today and then report back.
Increase memory
Schedule a nightly restart
Since the event bus in Vert.x is made for asynchronous message passing, is it possible to throttle the rate at which these messages get processed? If so, can we achieve this using worker verticles, or do we have to create a separate thread group?
The point is that the event bus might be capable of queueing a million messages (I am guessing the number), whereas the subsequent operations happening in workers/threads should not get flooded and ultimately bring something down.
Please shed some light.
Workers won't ever be flooded by EventBus, since the handlers will process only one message at a time.
What may happen, though, is that you may run out of memory, if you produce literally millions of unprocessed messages.
This usually wouldn't be the case, but you could attempt to solve that using the Counter in SharedData:
https://vertx.io/docs/apidocs/io/vertx/core/shareddata/Counter.html
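A rough sketch of that idea, where the producer verticle, the address "work.address", the counter name "in-flight", the 1000-message limit and the polling interval are all made-up values for illustration: the producer only sends while the counter is below the limit.

    import io.vertx.core.AbstractVerticle;
    import io.vertx.core.shareddata.Counter;

    // Illustrative only: the address, counter name, limit and interval are assumptions.
    public class ThrottledProducer extends AbstractVerticle {

        private static final long MAX_IN_FLIGHT = 1000;

        @Override
        public void start() {
            vertx.sharedData().getCounter("in-flight", res -> {
                if (res.succeeded()) {
                    Counter counter = res.result();
                    // Check every 10 ms whether there is room to send more work.
                    vertx.setPeriodic(10, id -> trySend(counter));
                }
            });
        }

        private void trySend(Counter counter) {
            counter.get(get -> {
                if (get.succeeded() && get.result() < MAX_IN_FLIGHT) {
                    counter.incrementAndGet(inc ->
                            vertx.eventBus().send("work.address", nextPayload()));
                }
                // Otherwise skip this tick; the next periodic run tries again.
            });
        }

        private String nextPayload() {
            return "payload"; // stand-in for whatever the application actually produces
        }
    }

The consuming verticle (worker or not) would call counter.decrementAndGet(...) after it finishes handling each message, which is what keeps the number of unprocessed messages bounded.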
I'm pretty new to this queue service and I don't know what a poisoned message really means.
I read that it is a message you can't consume, but does that mean you can Peek() and see the details but not Receive(), or what?
From my point of view, I would say a poisoned message is a message at the top of the queue that, because of its format (or because it is corrupted), cannot be consumed: the business logic in charge of handling it can't do so and may throw an exception that, in a transactional scenario, is caught and handled with a rollback, so the message stays at the top forever.
What do you think? Am I totally wrong?
I've had to deal with poison MSMQ messages before, ugh! I'd say your definition is close.
A poison message is basically a message that is repeatedly read from a queue when the service reading it cannot process the message because of an exception or some other issue and terminates the transaction under which the message is read. In such cases, the message remains in the queue and is retried again upon the next read. This can theoretically go on forever if there is a problem with the message.
For example, the message contained data that would violate a database constraint. I sometimes would create an error queue and have the service processing the messages throw the "poison" message into that queue if an exception occurred during processing. This would at least remove the message from the main queue and give me an opportunity to view it later without affecting the main production queues.
Here is some advice and information on poison message handling.
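The answer above is about MSMQ/.NET, but the error-queue idea itself is generic; a minimal sketch of it in Java/JMS (the listener, queue names and message handling are all made up for illustration) could look like this:

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.JMSException;
    import javax.jms.Message;
    import javax.jms.MessageListener;
    import javax.jms.Queue;
    import javax.jms.Session;

    // Illustrative only: a listener that parks unprocessable messages on an error
    // queue instead of letting a rollback put them straight back on the main queue.
    public class PoisonAwareListener implements MessageListener {

        private final ConnectionFactory factory;
        private final Queue errorQueue;

        public PoisonAwareListener(ConnectionFactory factory, Queue errorQueue) {
            this.factory = factory;
            this.errorQueue = errorQueue;
        }

        @Override
        public void onMessage(Message message) {
            try {
                handle(message); // normal business processing
            } catch (Exception e) {
                // The message cannot be processed: move it aside so it can be
                // inspected later without blocking the main production queue.
                moveToErrorQueue(message);
            }
        }

        private void moveToErrorQueue(Message message) {
            Connection connection = null;
            try {
                connection = factory.createConnection();
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                session.createProducer(errorQueue).send(message);
            } catch (JMSException jmsEx) {
                jmsEx.printStackTrace(); // last resort: at least record the failure
            } finally {
                if (connection != null) {
                    try { connection.close(); } catch (JMSException ignored) { }
                }
            }
        }

        private void handle(Message message) throws Exception {
            // application-specific processing goes here
        }
    }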
JMS messages are sometimes moving to the DLQ without any exception being thrown.
The JBoss server instance used is 4.3.0.GA_CP04_EAP.
We are using an MDB that listens for incoming messages on a queue A; when it receives a message, it updates the database and sends an email in one transaction. The transaction is CMT.
Now, what is happening is that sometimes messages are not picked up by the consumer and they end up in the DLQ. From the JMX console message count I could see that the message did arrive at queue A but then went to the DLQ.
This happens intermittently and does not leave any exceptions in the logs either.
What seems to work most of the time is restarting the servers. No idea what happens behind the scenes, though.
And after 29 days, the same problem has returned.
This follows a pattern but varies with every restart.
There are 2 clustered servers, P1 and P2, which also do load balancing.
First two email messages go to and are processed by P1 - email sent
Next email message request goes to P2 - email sent
Next two email messages go to and are processed by P1 - email sent
Next email message request goes to P2 - email NOT SENT
and the cycle repeats
I have found a workaround to this nagging problem thanks to the helpful info found at http://leakfromjavaheap.blogspot.in/2013/05/when-dead-letter-queue-becomes-zombie.html
A DLQ listener is set up to listen for any incoming messages and put them back on their intended destination if any are found on the DLQ.
Also, to cover the situation where a message travels from the DLQ to the queue and back to the DLQ in an endless loop, a counter checks how many times the message has been to the DLQ before; if it exceeds the limit, the message is put on a permanent DLQ (a DLQ for the DLQ).
Application has been running smoothly ever since.
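A minimal sketch of that workaround as an MDB, assuming text messages plus made-up queue names, property name and retry limit (the linked blog post describes the real setup in detail):

    import javax.annotation.Resource;
    import javax.ejb.ActivationConfigProperty;
    import javax.ejb.MessageDriven;
    import javax.jms.*;

    // Illustrative only: drains the DLQ, re-sends each message to its intended queue,
    // and moves it to a "permanent" DLQ once it has bounced too many times.
    @MessageDriven(activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
        @ActivationConfigProperty(propertyName = "destination", propertyValue = "queue/DLQ")
    })
    public class DlqRedeliveryBean implements MessageListener {

        private static final int RETRY_LIMIT = 3;               // assumed threshold
        private static final String COUNT_PROP = "appDlqCount"; // custom bounce counter

        @Resource(mappedName = "ConnectionFactory")
        private ConnectionFactory connectionFactory;

        @Resource(mappedName = "queue/A")            // the original destination from the question
        private Queue originalQueue;

        @Resource(mappedName = "queue/PermanentDLQ") // DLQ for the DLQ
        private Queue permanentDlq;

        @Override
        public void onMessage(Message message) {
            Connection connection = null;
            try {
                int bounces = message.propertyExists(COUNT_PROP)
                        ? message.getIntProperty(COUNT_PROP) : 0;

                connection = connectionFactory.createConnection();
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

                // Received messages are read-only, so copy the body into a fresh
                // message before stamping the updated bounce counter on it.
                TextMessage copy = session.createTextMessage(((TextMessage) message).getText());
                copy.setIntProperty(COUNT_PROP, bounces + 1);

                Destination target = bounces < RETRY_LIMIT ? originalQueue : permanentDlq;
                session.createProducer(target).send(copy);
            } catch (JMSException e) {
                throw new RuntimeException(e); // let the container redeliver the DLQ message
            } finally {
                if (connection != null) {
                    try { connection.close(); } catch (JMSException ignored) { }
                }
            }
        }
    }

In a real setup you would look up each message's original destination from whatever your provider records on DLQ'd messages rather than hard-coding queue/A.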
If you can provide the log details from when a message goes to the DLQ, it would be easier to dig into this issue.
The logs did not contain any useful info; not even an exception to give a hint.
Finally, I changed the local-tx data source to an XA data source and it was a success. Still wondering if there is a reason behind it.
I have a Microsoft Message Queue that gets populated with messages. If there is a problem with the processing of a message, I would like to retry it, but I do not want to retry it immediately.
Is there a way to add a delay to a message in MSMQ so that it is not available for a certain amount of time?
The other alternative is to have another queue (a retry queue) and read that queue every 15 minutes, but I would rather not do this.
What you are looking for is "poison message handling" (even if it's not the message's fault but a temporary environment problem).
There are lots of articles on that. Here are some:
Poison Message Handling in MSMQ 3.0
Poison Message Handling in MSMQ 4.0
Surviving poison messages in MSMQ
In short: you have to move them to a retry queue.
So I've seen some code recently that handles this in the exception logic: the code has a built-in retry step that attempts the operation again after a delay. It fails, waits for a specific amount of time, then tries again.
Essentially it recursively retries a set number of times (lengthening the delay each time). Fairly neat, and no reason to have another queue. There are a lot of generics and delegates used to execute the methods. I don't know whether something like this could be done in your case or not. I would suspect you would still want another queue to handle the case where the message cannot be delivered at all, though.
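That description sounds like a generic retry-with-backoff helper; a minimal sketch of the same idea (written iteratively in Java rather than the original recursive C#, with the attempt count and delays left to the caller) might look like this:

    import java.util.concurrent.Callable;

    // Illustrative only: retries an action a fixed number of times,
    // doubling the delay between attempts each time it fails.
    public final class Retry {

        private Retry() { }

        public static <T> T withBackoff(Callable<T> action, int maxAttempts, long initialDelayMs)
                throws Exception {
            if (maxAttempts < 1) {
                throw new IllegalArgumentException("maxAttempts must be at least 1");
            }
            long delay = initialDelayMs;
            Exception last = null;
            for (int attempt = 1; attempt <= maxAttempts; attempt++) {
                try {
                    return action.call();        // try to process the message
                } catch (Exception e) {
                    last = e;
                    if (attempt < maxAttempts) {
                        Thread.sleep(delay);     // wait before the next attempt
                        delay *= 2;              // lengthen the delay each time
                    }
                }
            }
            throw last;                          // give up; the caller can park the message elsewhere
        }
    }

Processing would then be wrapped in something like Retry.withBackoff(() -> process(message), 5, 1000), and only after the final failure would the message be moved to the retry/error queue mentioned above.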