How do I increase the performance of MSMQ Receive() without using multiple threads

I am not able to use parallel threads in my application (a C# application), as each MSMQ message is the data for a SQL statement.
The C# application consumes messages from MSMQ, then processes them.
I am using msg.Recoverable = true to prevent data loss when the server powers down.
Currently, the messages are transactional.
The MSMQ messages (SQL statements) must be processed in sequence (FIFO).
Using multiple threads to consume is a no-no, as I cannot control which thread will consume and perform which SQL statement.
Currently, the speed of Receive() for a 490 KB MSMQ message is about 0.016-0.032 sec, which works out to about 30-50 transactions/sec, and that is unacceptable.
Currently, MSMQ receive is the bottleneck of the whole application.
The server is on Windows Server 2008 R2,
and the application is receiving (Receive()) from the local server's MSMQ, not a remote one.

I am not an expert, but I think you can try to use batching - http://msdn.microsoft.com/en-us/library/ms788973.aspx
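The link above is about WCF's transacted batching, but the same idea can be sketched directly against System.Messaging: receive several messages inside one MessageQueueTransaction and process them as a single unit (for example, one SQL round trip), so the commit overhead is paid once per batch rather than once per message. The queue path, formatter, and batch size below are assumptions; FIFO order is preserved because one thread does all the receiving.

```csharp
using System;
using System.Collections.Generic;
using System.Messaging; // reference System.Messaging.dll

class BatchReceiver
{
    // Hypothetical local transactional queue; adjust to your setup.
    const string QueuePath = @".\private$\sqlStatements";
    const int BatchSize = 100;

    static void Main()
    {
        using (var queue = new MessageQueue(QueuePath))
        {
            queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });

            using (var tx = new MessageQueueTransaction())
            {
                tx.Begin();
                var batch = new List<Message>();
                for (int i = 0; i < BatchSize; i++)
                {
                    try
                    {
                        // Short timeout: stop early once the queue is drained.
                        batch.Add(queue.Receive(TimeSpan.FromMilliseconds(50), tx));
                    }
                    catch (MessageQueueException e)
                        when (e.MessageQueueErrorCode == MessageQueueErrorCode.IOTimeout)
                    {
                        break;
                    }
                }

                // Process the whole batch here (e.g. one SQL command with
                // all statements). If processing throws before Commit(),
                // Dispose() aborts the transaction and the messages go
                // back on the queue, so nothing is lost.
                tx.Commit();
            }
        }
    }
}
```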

This blog discusses creating multiple "QueuePeekCompleted" event handlers. It worked for me.
http://code-ronin.blogspot.com/2008/09/msmq-transactional-message-processing.html
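The pattern from that post, roughly: arm an asynchronous peek, and when the PeekCompleted event fires, do the destructive Receive() inside a transaction, then re-arm. A sketch with an assumed queue path (the queue must be transactional):

```csharp
using System;
using System.Messaging;

class AsyncPeekConsumer
{
    static void Main()
    {
        var queue = new MessageQueue(@".\private$\sqlStatements");
        queue.PeekCompleted += OnPeekCompleted;
        queue.BeginPeek();      // arm the first asynchronous peek
        Console.ReadLine();     // keep the process alive
    }

    static void OnPeekCompleted(object sender, PeekCompletedEventArgs e)
    {
        var queue = (MessageQueue)sender;
        queue.EndPeek(e.AsyncResult); // complete the peek; message stays queued

        using (var tx = new MessageQueueTransaction())
        {
            tx.Begin();
            Message msg = queue.Receive(tx);
            // ... process msg (e.g. execute the SQL statement) ...
            tx.Commit();
        }

        queue.BeginPeek();      // re-arm for the next message
    }
}
```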

Related

BizTalk - How to throttle a streaming disassemble pipeline

I need to limit the number of orchestration instances spawned while debatching a large message in a streaming disassemble receive pipeline. Let's say that I have a large XML coming in that contains 100,000 separate "Order" messages. The receive pipeline would then debatch it and create 100,000 "ProcessOrder" orchestrations. This is too much and I need to limit that.
Requirements
The debatching needs to be done in a streaming manner so that I only load one "Order" message in memory at a time before sending it to the messagebox;
The debatching needs to be throttled based on the number of current running "ProcessOrder" orchestration instances (say if I already have 100 running instances, the debatching would wait till one is over to send another "Order" message to the messagebox).
Where I'm at
I have the receive pipeline that does the debatching and functional modifications to my messages. It does what it should in a streaming manner and puts individual messages in VirtualStreams;
I have an orchestration and helper methods that can limit the number of “ProcessOrder” orchestration instances.
The problem
I know that I can run a receive pipeline inside an orchestration (and that would solve my problem, since on every "GetNext" call to the pipeline I could simply hold on if there are too many running orchestration instances). But, digging in the BizTalk DLLs, I noticed that using Microsoft.XLANGs.Pipeline.XLANGPipelineManager still loads all the messages into memory instead of enumerating them the way Microsoft.BizTalk.PipelineOM.PipelineManager does. I know they put every message in a VirtualStream, but this is still inadequate, memory-wise, for such a large number of messages.
Question
My next step would be to run the receive pipeline directly in the receive port (so it would use Microsoft.BizTalk.PipelineOM.PipelineManager) without the orchestration that limits the number of "ProcessOrder" instances, but to meet the requirements I would need to add delay logic in my pipeline. Is this a viable option? If not, why not, and what other alternatives do I have?
You should debatch all the messages once in the pipeline and store the individual messages in MSMQ before they are even processed by an orchestration. Use a standard pipeline to debatch the messages, as the standard pipelines handle large-file debatching efficiently. MSMQ is available for free through Turn Windows Features On; using it is very easy and does not require any development. Sending to MSMQ will be very fast; 100K messages is not an issue at all.
Then have a receive location read from MSMQ. Depending on your orchestration throughput, you can control message flow by using BizTalk receive-host throttling, by receiving the messages from MSMQ in order, or by a combination of both. Make sure you have separate host instances for the MSMQ receive, the MSMQ send, and your orchestration processing.
This can all be done through configuration without any extra code, simplifying your design. Also make sure your orchestration has a minimum number of persistence points.

With MSMQ how do I avoid Insufficient Resources when importing a large number of messages into a queue?

What?
I have a private MSMQ transactional queue from which I need to export all (600k) messages, purge the queue, then import the messages back into it. When importing these messages I'm currently using a single transaction and getting an insufficient-resources error. I can switch to using multiple transactions, but I need a way to work out how many messages I can process in a single transaction. Any ideas?
Why ?
If we don't periodically perform this operation the .mq files become bloated and fragmented. If there is another way to fix this problem let me know.
We had the same problem with MQ files when we reached 7500 MQ files totaling about 30 gigabytes.
The solution is very easy.
Purge the transactional dead-letter queue on the machine, then restart the MSMQ service.
When the service starts, it runs a defragmentation procedure that compacts used space and removes unused MQ files.
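For the import itself, MSMQ has no documented per-transaction message limit (it depends on available resources), so the practical approach is to commit in fixed-size chunks and tune the chunk size down if the insufficient-resources error reappears. A sketch; the chunk size of 10,000 is an assumption, not a known safe bound:

```csharp
using System;
using System.Messaging;

class ChunkedImporter
{
    const int ChunkSize = 10000; // tunable; shrink if resources run short

    static void ImportAll(Message[] exported, string queuePath)
    {
        using (var queue = new MessageQueue(queuePath))
        {
            for (int start = 0; start < exported.Length; start += ChunkSize)
            {
                // One transaction per chunk instead of one for all 600k
                // messages, keeping each transaction's resource usage bounded.
                using (var tx = new MessageQueueTransaction())
                {
                    tx.Begin();
                    int end = Math.Min(start + ChunkSize, exported.Length);
                    for (int i = start; i < end; i++)
                        queue.Send(exported[i], tx);
                    tx.Commit();
                }
            }
        }
    }
}
```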

Consuming MSMQ at a faster speed using multiple threads in asp.net C#

I currently have a console application that has this algorithm:
Peek at the next message in MSMQ
Get the message content and execute the SQL insert statement
If the insert is successful, remove the message from MSMQ using Receive()
The above consumes 20 messages/sec from MSMQ (20 SQL transactions per second) with one consuming thread.
Now I want to increase the number of threads consuming from MSMQ, but I discovered that messages get inserted twice instead of once, because two threads can Peek() the same message.
I use Peek() to read the message and run the SQL statement first in order to cope with the server powering down suddenly.
If I used Receive() first and the SQL statement was then not performed, the data would be lost.
Please advise.
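One way to avoid both the double insert and the data-loss risk is to drop the Peek-then-Receive dance and make the receive and the insert atomic by enlisting both in one distributed (DTC) transaction; the queue must be transactional. A crash between the two steps then can neither lose a message nor insert it twice, and multiple consumer threads become safe. A sketch; the connection-string handling and the assumption that the message body is the SQL text are illustrative:

```csharp
using System;
using System.Data.SqlClient;
using System.Messaging;
using System.Transactions;

class ExactlyOnceConsumer
{
    static void ConsumeOne(MessageQueue queue, string connectionString)
    {
        using (var scope = new TransactionScope())
        {
            // MessageQueueTransactionType.Automatic enlists the receive
            // in the ambient TransactionScope.
            Message msg = queue.Receive(MessageQueueTransactionType.Automatic);

            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand((string)msg.Body, conn))
            {
                conn.Open(); // auto-enlists in the same ambient transaction
                cmd.ExecuteNonQuery();
            }

            // Commits both the receive and the insert together; if anything
            // throws first, both roll back and the message stays queued.
            scope.Complete();
        }
    }
}
```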

MSMQ multiple readers

This is my proposed architecture. Process A would create items and add them to queue A on the local machine, and I plan to have multiple instances of a Windows service (running on different machines) reading from queue A. Each Windows service would read a set of messages and then process that batch.
What I want to make sure of is that a particular message will not get processed multiple times (by different Windows services). Does MSMQ guarantee single delivery by default?
Should I make the queue transactional, or would a regular queue suffice?
If you need to make sure each message is delivered exactly once, you would want to use a transactional queue. Note, however, that even with a regular queue, when a service reads a message with Receive() it is removed from the queue, so it can only be received once.

Slow Subscriber

I am a newbie to ZMQ
ZMQ Version - 2.2.1
Ubuntu - 10.04
I am using the PUB-SUB pattern for communication between multiple publishers and multiple subscribers. A forwarder subscribes to data from the multiple publishers and republishes it to all the subscribers.
Currently, three publishers are running, and each publisher sends 1000 messages per second via its PUB channel. The subscriber receives the data, stores it, and writes it to a database every second.
Because of the database involvement, the rate at which the subscriber consumes data falls behind; as a result, memory usage (RAM) increases by 6-7 MB every second. Eventually the subscriber gets killed by the OS due to OOM.
I tried using the ZMQ_HWM and ZMQ_SWAP options on both sockets of the forwarder, but the issue persists.
Is there any solution for this?
Overall your problem is that your database cannot keep up with your publisher. 0MQ cannot solve this for you. You need an architectural solution based on changing the behavior of your system, presumably the way you do inserts.
You have a few options:
Use a faster database
Use a faster database insert method
Write to a log which is processed asynchronously by another process
Change to a socket pattern that lets the receivers tell the senders that they are backed up, so the senders pause (if that's possible)
I think in your case the spool-to-disk-file option is the best.
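The third option (spool to a buffer or log that another worker drains) can be sketched with a bounded producer/consumer buffer: bounding the buffer caps memory so the process cannot grow until the OS kills it, at the cost of blocking the subscriber when the writer falls too far behind. Shown here in C# for consistency with the rest of the page; the capacity is an assumption.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

class SpooledWriter
{
    // Bounded buffer: when full, Add() blocks the producer instead of
    // letting RAM grow without limit. 100,000 entries is an assumption.
    static readonly BlockingCollection<string> Buffer =
        new BlockingCollection<string>(100000);

    static void Main()
    {
        var writer = new Thread(DrainLoop) { IsBackground = true };
        writer.Start();

        // Stand-in for the subscriber's receive loop.
        for (int i = 0; i < 10; i++)
            Buffer.Add("message " + i);

        Buffer.CompleteAdding();
        writer.Join();
    }

    static void DrainLoop()
    {
        // In a real system, accumulate N messages and do one batched
        // database write per batch; here we just print each one.
        foreach (string msg in Buffer.GetConsumingEnumerable())
            Console.WriteLine(msg);
    }
}
```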