MSMQ: What is the best way to read a queue in a non-blocking fashion?

If the queue is empty and you call Receive with a short timeout so you don't block for too long, you get a System.Messaging.MessageQueueException when the timeout expires.
How do you tell the difference between a timeout while waiting for a message and a real error?

Try this:
MessageQueueException.MessageQueueErrorCode == MessageQueueErrorCode.IOTimeout
See http://msdn.microsoft.com/en-us/library/t5te2tk0.aspx for a sample similar to what you're trying to do.
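A minimal sketch of the pattern (assuming a System.Messaging.MessageQueue named queue is already open, and a hypothetical Process helper):

```csharp
try
{
    // Wait at most one second for a message.
    Message msg = queue.Receive(TimeSpan.FromSeconds(1));
    Process(msg); // hypothetical handler
}
catch (MessageQueueException e)
{
    if (e.MessageQueueErrorCode == MessageQueueErrorCode.IOTimeout)
    {
        // The queue was simply empty within the timeout -- not an error.
    }
    else
    {
        throw; // A real failure: access denied, queue deleted, etc.
    }
}
```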


ReactiveX Retry with Multiple Consumers

Quick question, because I feel like I must be missing something.
I'm using rxjs here because it's what I've got in front of me; this is a general ReactiveX question, I believe.
Let's say I have a set of Observables like so:
network_request = some_thing // An observable that produces the result of a network call
event_stream = network_request.flatMapLatest(function(v) {
return connectToThing(v) // This is another observable that needs v
}) // This uses the result of the network call to form a long-term event-based connection
So, this works ok.
Problem, though.
Sometimes the connection thing fails.
So, if I do event_stream.retry() it works great. When it fails, it redoes the network call and gets a new v to use to make a new connection.
Problem
What happens if I want two things chained off of my network_request?
Maybe I want the UI to do something every time the network call completes, like show something about v in the UI?
I can do:
shared = network_request.share() // Other implementations call this refCount
event_stream = shared.flatMapLatest(...) // same as above
ui_stream = shared.flatMapLatest(...) // Other transformation on network response
If I didn't do share then it would have made two requests, which isn't what I want. But with share, when event_stream later has an error, the retry doesn't redo the network request: the refcount is still at 1 (due to ui_stream), so the resubscription immediately returns completed.
What I want
This is obviously a small example I've made up to explain my confusion.
What I want is that every time the result of event_stream (that long term connection) has an error all of the following happens:
the network request is made again
the new response of that request is used to build a new connection and event_stream goes on with new events like nothing happened
that same response is also emitted in ui_stream to lead to further processing
This doesn't feel like a complicated thing, so I must just be misunderstanding something fundamental when it comes to splitting / fanning out RX things.
Workarounds I think I could do but would like to avoid
I'm looking to export these observables, so I can't just build them again and then say "Hey, here's the new thing". I want event_stream and all the downstream processing to not know there's been a disconnection.
Same for ui_stream. It just got a new value.
I could probably work something out using a Subject as a generation counter that I ping every time I want everything to restart, and put the network_request into a flatMap based on that, so that I can break the share...
But that feels like a really hacky solution, so I feel there has to be a better way than that.
What have I fundamentally misunderstood?
As I've been thinking about this more I've come to the same realization as ionoy, which is that retry just disconnects and reconnects, and upstream doesn't know it was due to an error.
When I thought about what I wanted, I realized I really wanted something like a chain, and also a spectator, so I've got this for now:
network_request = some_thing
network_shadow = new Rx.Subject()
event_stream = network_request.do(network_shadow).flatMapLatest(...)
ui_stream = network_shadow.whatever
This has the property that a retry in event_stream or downstream will cause the whole thing to restart, whereas ui_stream is its own thing.
Any errors over there don't do anything, since network_shadow isn't actually a subscriber to event_stream, but it does peel the values off so long as the main event chain is running.
I feel like this isn't ideal, but it is better than what I was concerned I would have to do, which is to have a restartEverything.onNext() in a doOnError, which would have been gross.
I'm going to work with this for now, and we'll see where it bites me...
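Spelled out a little more fully (RxJS 4 API, names taken from the question; connectToThing and some_thing are assumed), the shadow-subject pattern might look like this. Note that forwarding only onNext into the subject, rather than passing the subject itself to do, keeps the subject alive when network_request completes, so later retries still reach ui_stream:

```javascript
var network_request = some_thing;       // cold: each subscribe = one request
var network_shadow = new Rx.Subject();  // spectator: sees each response

var event_stream = network_request
    .do(function (v) { network_shadow.onNext(v); }) // mirror values only
    .flatMapLatest(function (v) { return connectToThing(v); })
    .retry(); // resubscribes the whole chain, which redoes the network call

var ui_stream = network_shadow.map(function (v) {
    return v; // further UI-side processing of each fresh response
});
```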
You need to make your cold observable hot by using Publish. Read http://www.introtorx.com/Content/v1.0.10621.0/14_HotAndColdObservables.html#HotAndCold for a great explanation.

Continuous operation using services

I would like to read data from a server and write it to file every 5 minutes.
In case there is a network problem I need to report it to the MainActivity.
What is the best way to do it? Can I use IntentService?
Is it a good idea to put a while() loop inside onStart() method?

Keeping stream open with AnyEvent::Twitter::Stream

I'm using AnyEvent::Twitter::Stream to write a Twitter bot. I'm new to event-based programming, so I'm relying heavily on the docs.
Since this is a bot, I want the stream to keep going, unsupervised, when there's an error (such as a broken pipe, or a timeout error).
If I don't include an error handler at all, the entire program dies on error. Similarly, if I use an error handler like what's in the sample docs:
on_error => sub {
    my $error = shift;
    warn "Error: $error";
    $done->send;
},
The program dies. If I remove the "$done->send;" line, the stream is interrupted and the program hangs.
I've looked at the (sparse) docs for AE::T::S, and for AnyEvent, but I'm not sure what I need to do to keep things going. The stream can give me 5,000 events a minute, and I can't lose this on random network hiccups.
Thanks.
Solution for anyone else using this:
I spoke to someone more knowledgeable about event-based stuff. Basically, each ::Stream object represents a single HTTP request to Twitter; when that request dies, that's the end of a stream. The on_error method is called at this point, so it's just giving you a place to say that the stream ended, for logging purposes or whatever. This method doesn't let you restart the stream.
If you want the bot to keep going, you have to create a new stream, either by having the OS restart the program or by wrapping the entire program in a while(1) {} loop.
Hope this helps someone else!
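A minimal sketch of that outer loop (constructor arguments abbreviated; the 5-second back-off is an assumption, not from the docs):

```perl
use AnyEvent;
use AnyEvent::Twitter::Stream;

while (1) {
    my $done = AnyEvent->condvar;

    my $listener = AnyEvent::Twitter::Stream->new(
        # ... credentials, method, track, on_tweet handler, etc. ...
        on_error => sub {
            my $error = shift;
            warn "Error: $error";
            $done->send;    # unblock recv() below so we can reconnect
        },
        on_eof => sub { $done->send },
    );

    $done->recv;            # blocks until this stream dies
    sleep 5;                # crude back-off before opening a new stream
}
```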

How to request a replay of an already received fix message

I have an application that could potentially throw an error on receiving an ExecutionReport (35=8) message.
This error is thrown at the application level and not at the fix engine level.
The fix engine records the message as seen and therefore will not send a ResendRequest (35=2). However the application has not processed it and I would like to manually trigger a re-processing of the missed ExecutionReport.
Forcing a ResendRequest (35=2) does not work as it requires modifying the expected next sequence number.
I was wondering if FIX supports replaying messages without requiring a sequence number reset?
When processing an execution report, you should not throw any exceptions and expect the FIX library to handle them. You either process the report or you have a system failure (i.e. call abort()). Therefore, if the code that handles the execution report throws an exception and you know how to handle it, catch it in that very same function, eliminate the cause of the problem, and try processing again. For example (pseudo-code):
// This function is called by FIX library. No exceptions must be thrown because
// FIX library has no idea what to do with them.
void on_exec_report(const fix::msg &msg)
{
    for (;;) {
        try {
            // Handle the execution report however you want.
            handle_exec_report(msg);
            return; // Success: stop retrying.
        } catch (const try_again_exception &) {
            // Oh, some resource was temporarily unavailable? Try again!
            continue;
        } catch (const std::exception &) {
            // This should never happen, but it did. Call 911.
            abort();
        }
    }
}
Of course, it is possible to make the FIX library issue a re-transmission request and pass you that message again when an exception is thrown. But it makes no sense at all: what is the point of asking the sender, over the network, to re-send a message that you already have (up your stack :)) and just need to process? Even if you did, what's the guarantee the failure won't happen again? Re-transmission here isn't just logically wrong; the other side (i.e. the exchange) may call you up and ask you to stop, because unnecessary re-transmits put too much load on their servers. In real life TCP/IP does not lose messages, and the FIX sequence-sync process happens only when connecting (unless some non-reliable transport is used, which is theoretically possible but doesn't happen in practice).
When aborting, however, it is the FIX library's responsibility not to increment the RX sequence number unless it knows for sure that the user has processed the message. That way, the next time the application starts, it actually performs synchronization and receives the missing messages. If QuickFIX is not doing this, then you need to either fix it, take care of it manually (i.e. go edit the file where it stores the RX/TX sequence numbers), or use some other library that handles this correctly.
This is the wrong thing to do.
A ResendRequest tells the other side that there was some transmission error. In your case, there wasn't, so you shouldn't do that. You're misusing the protocol to cover your app's mistakes. This is wrong. (Also, as Vlad Lazarenko points out in his answer, if they did resend it, what's to say you won't have the error again?)
If an error occurs in your message handler, then that's your problem, and you need to find it and fix it, or alternately you need to catch your own exception and handle it accordingly.
(Based on past questions, I bet you are storing ExecutionReports to a DB store or something, and you want to use Resends to compensate for DB storage exceptions. Bad idea. You need to come up with your own redundancy solution.)

Boost Async UDP Client

I've read through the boost::asio documentation (which appears silent on async clients), and looked through here, but can't seem to see the forest for the trees.
I've got a simulation that has a main loop that looks like this:
for (;;)
{
    a = do_stuff1();
    do_stuff2(a);
}
Easy enough.
What I'd like to do, is modify it so that I have:
for (;;)
{
    a = do_stuff1();
    check_for_new_received_udp_data(&b);
    modify_a_with_data_from_b(a, b);
    do_stuff2(a);
}
Where I have the following requirements:
I cannot lose data just because I wasn't actively listening, i.e. I don't want to lose packets because I was in do_stuff2() instead of check_for_new_received_udp_data() at the time the server sent the packet.
I can't have check_for_new_received_udp_data() block for more than about 2ms, since the main for loop needs to execute at 60Hz.
The server will be running elsewhere and has a completely erratic schedule. Sometimes there will be no data, other times I may get the same packet repeatedly.
I've played with the async UDP, but that requires calling io_service.run(), which blocks indefinitely, so that doesn't really help me.
I thought about timing out a blocking socket read, but it seems you have to cheat and get out of the boost calls to do that, so that's a non-starter.
Is the answer going to involve threading? Either way, could someone kindly point me to an example that is somewhat similar? Surely this has been done before.
To avoid blocking in the io_service::run() you can use io_service::poll_one().
Regarding losing UDP packets, I think you are out of luck. UDP does not guarantee delivery, and any part of the network may decide to drop UDP packets if there is too much traffic. If you need to ensure delivery, you need to either implement some sort of flow control or just use TCP.
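A sketch of how poll() (or poll_one()) drops into the existing 60 Hz loop without blocking. This assumes an io_service with a pending async_receive_from() on a UDP socket, whose completion handler copies the datagram into b and re-arms itself; those pieces are not shown:

```cpp
for (;;)
{
    a = do_stuff1();

    io_service.poll();   // run every handler that is ready *right now*;
                         // returns immediately if nothing is pending

    modify_a_with_data_from_b(a, b);
    do_stuff2(a);
}
```

Because the receive handler only runs inside poll(), datagrams that arrive while the loop is busy in do_stuff2() sit in the OS socket buffer until the next iteration, which addresses the "don't lose packets while not listening" requirement up to the size of that buffer.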
I think your problem is that you're still thinking synchronously. You need to think asynchronously.
Async read on UDP socket - will call handler when data arrives.
Within that handler do your processing on the incoming data. Keep in mind that while you're processing, if you have a single thread, nothing else dispatches. This can be perfectly OK (UDP messages will still be queued in the network stack...).
As a result of this you could be starting other asynchronous operations.
If you need to do work in parallel that is essentially unrelated or offline that will involve threads. Create a thread that calls io_service.run().
If you need to do periodic work in an asynch framework use timers.
In your particular example we can rearrange things like this (pseudo-code):
read_handler( ... )
{
    modify_a_with_data_from_b(a, b);
    do_stuff2(a);
    a = do_stuff1();
    udp->async_read( ..., read_handler );
}

periodic_handler( ... )
{
    // do periodic stuff
    timer.async_wait( ..., periodic_handler );
}

main()
{
    ...
    a = do_stuff1();
    udp->async_read( ..., read_handler );
    timer.async_wait( ..., periodic_handler );
    io_service.run();
}
Now I'm sure there are other requirements that aren't evident from your question, but you'll need to figure out an asynchronous answer to them; this is just an idea. Also ask yourself whether you really need an asynchronous framework, or whether the synchronous socket APIs would do.