Debatching a large number of XML messages in MSMQ

I am able to debatch the XML messages by loading the whole document first and then searching for the nodes to be debatched. But this is a poor choice from a performance point of view: my solution is fine for 10 to 100 messages, but it is not suitable for handling 50,000. So I need a faster approach. Please help.
Note: please do not suggest any BizTalk-based solution. I need simple logic in C# or Java.

Take a look at https://www.rabbitmq.com/ because it does not have a message size limitation.
It also supports C# and Java.
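Besides switching brokers, the load-everything-then-search bottleneck itself can be avoided with a streaming parser, which keeps only one message in memory at a time. Here is a minimal sketch in Java using StAX; the element name "Message" and the file name are assumptions, so adjust them to your schema:

    import java.io.FileInputStream;
    import javax.xml.stream.XMLInputFactory;
    import javax.xml.stream.XMLStreamConstants;
    import javax.xml.stream.XMLStreamReader;

    public class Debatcher {
        public static void main(String[] args) throws Exception {
            XMLInputFactory factory = XMLInputFactory.newInstance();
            try (FileInputStream in = new FileInputStream("batch.xml")) {
                XMLStreamReader reader = factory.createXMLStreamReader(in);
                while (reader.hasNext()) {
                    // Pull events one at a time; the whole document is never in memory.
                    if (reader.next() == XMLStreamConstants.START_ELEMENT
                            && "Message".equals(reader.getLocalName())) {
                        // Assumes each <Message> element contains only text;
                        // for nested XML, copy the subtree with a Transformer instead.
                        dispatch(reader.getElementText());
                    }
                }
                reader.close();
            }
        }

        static void dispatch(String body) {
            // Hand each individual message to MSMQ or downstream processing.
            System.out.println(body);
        }
    }

The C# equivalent would use XmlReader in much the same pull-based way.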

Related

Is there a way to programmatically get the NOTIFY payload size limit?

I'm trying to figure out the size limit of the NOTIFY payload in PostgreSQL, in order to split my messages into smaller packages.
This has to be done programmatically, as my script is intended to run on other people's servers.
Is there a way to achieve this?
I looked around in the documentation and on the web and didn't find anything.
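For what it's worth, the PostgreSQL documentation states that a NOTIFY payload must be shorter than 8000 bytes in the default configuration, and the limit is a compile-time constant, so there is no server setting to read at runtime. If you still need to determine it programmatically, one option is to probe it empirically. A rough sketch in Java/JDBC (connection details are placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class NotifyLimitProbe {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/mydb", "user", "pass")) {
                long lo = 0, hi = 1 << 20; // assume the real limit is below 1 MiB
                while (lo < hi) {
                    long mid = (lo + hi + 1) / 2;
                    if (fits(conn, mid)) lo = mid; else hi = mid - 1;
                }
                System.out.println("Largest accepted payload: " + lo + " bytes");
            }
        }

        // Try a NOTIFY with a payload of the given size; with autocommit on,
        // a failed statement does not poison the ones that follow.
        static boolean fits(Connection conn, long size) {
            try (Statement st = conn.createStatement()) {
                st.execute("SELECT pg_notify('probe', repeat('x', " + size + "))");
                return true;
            } catch (SQLException e) {
                return false; // payload too long (or otherwise rejected)
            }
        }
    }

Listeners on the 'probe' channel would see the test notifications, so use a channel name nothing else subscribes to.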

Is there a way for Apama to read files line by line?

I'm new to Apama. I see that a com.apama.file library exists, but I am unsure how to actually use it to read a file. I want to send each line as an event to be parsed and then, depending on its contents, forwarded as a different event from there. Googling suggests that I'd need a transport (not sure what that is either) to do so, but my project lead is under the impression that this can all be done in Apama EPL. How true is this, and if it has some validity, how can I go about achieving it?
Yes, this is certainly possible. To help you do it, though, please can you provide a little more information about your setup? For example, what is the file type and is the file local to where the correlator will be running? Will there only be one file to process at a time? How large is the file, and are there any specific performance requirements?
You may find this helpful:
https://github.com/SoftwareAG/apama-streaming-analytics-connectivity-FileTransport
You don't quite say what you are trying to achieve, but if you are new to Apama, I will say that reading files this way is not something that is done frequently, especially in simpler solutions when you are just starting out.
Depending on what you are trying to achieve, are you aware of the "engine_send" tool and the ability to use it to send in a text file of Apama events (normally a .evt file), with BATCH tags if you want to spread them over time?
http://www.apamacommunity.com/documents/10.5.3.0/apama_10.5.3.0_webhelp/apama-webhelp/apama-webhelp/re-DepAndManApaApp_sending_events_to_correlators.html
http://www.apamacommunity.com/documents/10.5.3.0/apama_10.5.3.0_webhelp/apama-webhelp/apama-webhelp/co-DepAndManApaApp_event_file_format.html
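For reference, a tiny, hypothetical example of what such an event file might look like; the event type name is made up, and the exact BATCH semantics are described in the event file format page linked above:

    BATCH 0
    com.mycorp.LineEvent("first line")
    com.mycorp.LineEvent("second line")
    BATCH 500
    com.mycorp.LineEvent("third line")

Each BATCH tag gives a time offset in milliseconds, so the events after it are injected later, which lets you replay a file's worth of events spread over time rather than all at once.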

Single request to multiple asynchronous responses

So, here's the problem. iPhones are awesome, but bandwidth and latency are serious issues for apps with server-side requirements. My initial plan to solve this was to make multiple requests for bits of data (pun unintended) and have that be how the volume of incoming/outgoing data was handled. This is a bad idea for a lot of reasons, the most obvious to me being that my poor database (MySQL) can't handle it very well. From what I understand, it's better to request large chunks all at once, especially if I'm going to ask for all of it anyway.
The problem is that now I'm waiting again for a large amount of data to get through. I was wondering if there's a way to basically send the server a bunch of IDs to fetch from the database, and have that SINGLE request produce a lot of little responses, each one containing all the information about a single DB entry. Order is irrelevant, and ideally I'd be able to send another request telling the server to stop sending because I have what I need.
I realize this is probably NOT a simple thing to do, so if you (awesome) guys could point me in the right direction, that would also be incredible.
Current system is iPhone (Cocoa/Objective-C) -> PHP -> MySQL
Thanks a ton in advance.
AFAIK, a single request cannot get multiple responses. From what you are asking, it seems that you need to do this in two parts.
Part 1: Send a single call with the IDs.
Your server responds with a single message that contains the URLs or the information needed to call the unique "smaller" answers.
Part 2: Working from that list of responses, fire off multiple requests that run on their own threads.
I am thinking of this as similar to how a web page works. You load the HTML URL in a web browser. The HTML tells the browser all the places/URLs it needs to hit for the additional pieces (images, CSS, JS, etc.) that build the full page.
Hope this helps.
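To make Part 2 concrete, here is a rough sketch in Java (the asker's stack is Objective-C/PHP, but the pattern is the same with NSURLSession); the URLs are hypothetical stand-ins for whatever the first call returns:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.List;
    import java.util.concurrent.CompletableFuture;
    import java.util.stream.Collectors;

    public class ParallelFetch {
        public static void main(String[] args) {
            // URLs returned by the single "here are my IDs" call (hypothetical)
            List<String> urls = List.of(
                    "https://example.com/item/42",
                    "https://example.com/item/43",
                    "https://example.com/item/44");

            HttpClient client = HttpClient.newHttpClient();

            // Fire all requests concurrently and handle each response as it
            // arrives, in whatever order it completes.
            List<CompletableFuture<Void>> pending = urls.stream()
                    .map(u -> client.sendAsync(
                                HttpRequest.newBuilder(URI.create(u)).build(),
                                HttpResponse.BodyHandlers.ofString())
                            .thenAccept(r -> System.out.println(u + " -> " + r.statusCode())))
                    .collect(Collectors.toList());

            // Block until everything has finished.
            CompletableFuture.allOf(pending.toArray(new CompletableFuture[0])).join();
        }
    }

The "stop sending" requirement maps naturally onto cancelling whichever futures have not completed yet.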

Serialization or a Database?

I'm developing an application through which one can send SMS messages by directing them to an SMS server. My problem is that I'm supposed to store the messages sent, along with the date and time and the names of the users to whom each message was sent. What should I use to save that, a database, or should I think of serialization? Later on I'll have to display the records, containing the names of the users and the SMS messages, by the date and time at which they were sent.
Any suggestions? Thanks.
It depends.
A database is all your eggs in one basket, and a bit more work to get the data in.
Writing the SMSs to a daily log file is much simpler, and puts your eggs in many daily files.
If you often have to produce fancy, complicated reports, then go with a database.
Or go log-style now; you can always migrate your data and interface to a database later if it becomes necessary.
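A minimal sketch of the log-style option in Java, appending one line per SMS to a per-day file (the file naming and tab-separated layout are just one choice):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;
    import java.time.LocalDate;
    import java.time.LocalDateTime;

    public class SmsFileLog {
        // Appends one tab-separated line per SMS to e.g. sms-2024-01-15.log
        static void log(String recipient, String body) throws IOException {
            Path file = Path.of("sms-" + LocalDate.now() + ".log");
            String line = LocalDateTime.now() + "\t" + recipient + "\t" + body + "\n";
            Files.writeString(file, line,
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        }
    }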
A database is your best bet for that kind of record, in my opinion. When you have dates, names, and other data, and the need to relate them, an RDBMS generally works best.
If you need to do any kind of querying of your records, the database will win out over a simple serialized-object file; I'd only use the latter approach if you only ever need all of your data at once.
If you want a simple, lightweight DB, I'd suggest looking at SQLite; for small apps like the one you're describing, the convenience and ease of use are a major win over a full-scale 'production' DB engine like MySQL or Postgres. See this answer for more on that.
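To illustrate the SQLite route, a minimal sketch in Java (assumes the xerial sqlite-jdbc driver on the classpath; the table layout is just a starting point):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.Statement;

    public class SmsDb {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:sqlite:sms.db")) {
                try (Statement st = conn.createStatement()) {
                    st.execute("CREATE TABLE IF NOT EXISTS sms ("
                            + "id INTEGER PRIMARY KEY AUTOINCREMENT,"
                            + "recipient TEXT NOT NULL,"
                            + "body TEXT NOT NULL,"
                            + "sent_at TEXT DEFAULT CURRENT_TIMESTAMP)");
                }
                try (PreparedStatement ps = conn.prepareStatement(
                        "INSERT INTO sms (recipient, body) VALUES (?, ?)")) {
                    ps.setString(1, "Alice");
                    ps.setString(2, "Hello from the SMS server");
                    ps.executeUpdate();
                }
                // Reporting later is a query away:
                // SELECT recipient, body, sent_at FROM sms ORDER BY sent_at;
            }
        }
    }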

Best practices for following/reading large mailing lists?

Many of you are probably subscribed to various mailing lists, some busier than others.
What are your best practices for keeping up with all the information flowing through these lists?
What are the best clients you've used to manage that?
I'm sure I'm not the only one trying to get the best signal out of this noisy communication channel :)
I like gmail because of the way it groups messages by conversation so I can just page down through a thread.
Use a rule in GMail to slap a label on and archive all of them. Then they are easily sortable, searchable, and threaded.
I just use Thunderbird: for some lists in flat mode, for others (the Lua mailing list) in threaded mode. Following is natural for mailing lists, since the messages are pushed to your client.
At first, I just received the messages and routed them to the right folder with some rules.
Now I read them as newsgroups using Gmane, which also allows catching up on history (including mails sent before my subscription started and those sent during a temporary unsubscription).
Sometimes, when a thread holds no interest for me, I just right-click the first message and select Mark all messages of this thread as read.
On KDE, I am using Kontact for my mail and RSS feeds. That gives me a nice command center.