MongoDB tailable cursor - Isn't it bad practice to have a continually spinning loop? - mongodb

I'm looking into the best way to have my app get notified when a collection is updated in Mongo. From everything I read on the interwebs, the standard practice is to use a capped collection with a tailable cursor; here's a snippet from MongoDB's docs on tailable cursors.
I notice in their snippet that they have a continuous while loop that never stops. Doesn't this seem like bad practice? I can't imagine this would perform well at scale.
Does anyone have any insight as to how this could possibly scale and still be performant? Is there something I'm not understanding?
EDIT
So this is a good example where I see the stream is just kept open, and since the stream is never closed, it just has a listener listening. That makes sense to me, I guess.
I'm also looking at this mubsub implementation where they use a setTimeout with a 1-second pause.
Aren't these typically bad practices - to leave a stream open or to use a setTimeout like that? Am I just being old school?

I notice in their snippet that they have a continuous while loop that never stops. Doesn't this seem like bad practice?
Looks like it to me as well, yes.
Does anyone have any insight as to how this could possibly scale and still be performant?
You can set the AwaitData flag and make the more() call block for some time - it won't block indefinitely until data is available, but it will block for a while. This requires server support (from v. 1.6 or so). That is also what's being done in the Node.js example you posted ({awaitdata:true}).
where they use a setTimeout with a 1-second pause.
The way I read it, they retry at regular intervals to get the cursor back when it is lost, and return an error only if that has failed for a full second.
Aren't these typically bad practices - to leave a stream open [...]?
You're not forgetting the stream (that would be bad), you keep using it - that's pretty much the definition of a tailable cursor.
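To make the awaitData point concrete, here is a minimal sketch of the tailing loop in Python. A `queue.Queue` stands in for the tailable cursor on a capped collection (the names `tail`, `cursor_source`, and `handle` are illustrative, not from any driver): the blocking `get(timeout=...)` plays the role of awaitData, so the loop waits instead of busy-spinning.

```python
import queue

def tail(cursor_source, handle, max_iterations=None):
    """Tail-loop sketch: block (with a timeout, like awaitData) for new
    documents instead of spinning at full speed."""
    seen = []
    i = 0
    while max_iterations is None or i < max_iterations:
        try:
            # Blocks up to 1s waiting for data -- the awaitData behaviour.
            doc = cursor_source.get(timeout=1.0)
        except queue.Empty:
            i += 1
            continue  # no data yet; loop again without burning CPU
        handle(doc)
        seen.append(doc)
        i += 1
    return seen

# Usage: a queue stands in for the capped collection.
q = queue.Queue()
for n in range(3):
    q.put({"n": n})

received = tail(q, handle=lambda d: None, max_iterations=3)
```

Against a real capped collection, the equivalent in pymongo would be a cursor opened with `cursor_type=pymongo.CursorType.TAILABLE_AWAIT`; the loop shape stays the same.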

Related

Manually check requests on port in kdb

From what I understand, the main q thread monitors its socket descriptors for requests and responds to them.
I want to use a while loop in my main thread that will go on for an indefinite period of time. This would mean that I will not be able to use hopen on the process port and perform queries.
Is there any way to manually check for requests within the while loop?
Thanks.
Are you sure you need to use a while loop? Is there any chance you could, for instance, instead use the timer functionality of KDB+?
This could allow you to run a piece of code periodically instead of looping over it continually. Depending on your use case, this may be more appropriate as it would allow you to repeatedly run a piece of code (e.g. that could be polling something periodically), without using the main thread constantly.
KDB+ is by default single-threaded, which makes it tricky to do what you want to do. There might be something you can do with slave threads.
If you're interested in using timer functionality, but the built-in timer is too limited for your needs, there is a more advanced set of timer functionality available free from AquaQ Analytics (disclaimer: I work for AquaQ). It is distributed as part of the TorQ KDB framework; the specific script you'd be interested in is timer.q, which is documented here. You may be able to use this code without the full TorQ if you like, though you may need some of the other "common" code from TorQ to provide functions used within timer.q.
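The timer-versus-loop idea can be sketched outside of q (in KDB+ itself you would set `\t` and define `.z.ts`; the Python class below is only an analogy, and `PeriodicTask` is an invented name): the work runs in a timer callback at an interval, so nothing holds the main thread in a blocking while loop.

```python
import threading

class PeriodicTask:
    """Timer-driven polling sketch (the analogue of kdb+'s \\t / .z.ts):
    the work runs on a timer callback instead of a blocking while loop."""
    def __init__(self, interval, fn, repeats):
        self.interval = interval
        self.fn = fn
        self.remaining = repeats
        self.done = threading.Event()

    def start(self):
        self._schedule()

    def _schedule(self):
        if self.remaining == 0:
            self.done.set()  # all repeats finished
            return
        timer = threading.Timer(self.interval, self._run)
        timer.start()

    def _run(self):
        self.fn()            # the periodic work, e.g. polling something
        self.remaining -= 1
        self._schedule()     # re-arm the timer

results = []
task = PeriodicTask(0.01, lambda: results.append(len(results)), repeats=3)
task.start()
task.done.wait(timeout=2)  # main "thread" stays free until we choose to wait
```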

Since redis is single threaded, does wrapping these calls in a Future help?

Since redis is single threaded, making a call like the one below will block until it returns:
redis.hgetall("some_key")
Now say I was to wrap all my calls in Futures, for example if I had to make 100K of these types of calls all at once:
Future.sequence(redis_calls)
Would doing something like this help in terms of performance or failure tracking, or would it potentially cause a problem if the calls get backed up?
You'll find that the slowest part is getting commands to Redis and reading the results back again, rather than waiting for Redis to carry out the requests.
To avoid this, you can use pipelines to send a bunch of commands at once and receive the results back together.
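A toy model of why pipelining helps: the cost is dominated by network round trips, and a pipeline pays for one round trip per batch instead of one per command. Everything below (`FakeRedisTransport` and both helper functions) is an invented stand-in to count round trips, not real Redis client code.

```python
class FakeRedisTransport:
    """Counts network round trips to show the pipelining win:
    N commands cost 1 round trip instead of N."""
    def __init__(self):
        self.round_trips = 0
        self.store = {"k1": "a", "k2": "b", "k3": "c"}

    def send(self, commands):
        # One round trip carries the whole batch of commands.
        self.round_trips += 1
        return [self.store.get(key) for _, key in commands]

def get_one_by_one(transport, keys):
    # One round trip per command.
    return [transport.send([("GET", k)])[0] for k in keys]

def get_pipelined(transport, keys):
    # All commands in a single round trip.
    return transport.send([("GET", k) for k in keys])

t1 = FakeRedisTransport()
r1 = get_one_by_one(t1, ["k1", "k2", "k3"])   # 3 round trips

t2 = FakeRedisTransport()
r2 = get_pipelined(t2, ["k1", "k2", "k3"])    # 1 round trip
```

With a real client the same idea appears as, e.g., redis-py's `pipeline()` / `execute()`: the results come back identical, only the number of round trips changes.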

In drools is there a way to detect endless loops and halt a session programmatically?

In short, my questions are:
Is there anything built-in into drools that allows/facilitates detection of endless loops?
Is there a way to programmatically halt sessions (e.g. for the case of a detected endless loop)?
More details:
I'm planning to have Drools (6.2 or higher) run within a server/platform where users will create and execute their own rules. One of the issues I'm facing is that careless/faulty rule design can easily result in endless loops (whether it's just a forgotten "no-loop true" statement or a more complex cycle where rule1 triggers rule2, which triggers rule3, which (re)triggers rule1).
If this happens, Drools basically slows my server/platform to a halt.
I'm currently looking into how to detect and/or terminate sessions that run in an endless loop.
Now, as a (seemingly) endless loop is nothing that is per se invalid, and in certain cases may even be desired, I can imagine that there aren't many built-in detection mechanisms for this case (if any). But as I am not an expert, I'd be happy to know: is there anything built in to detect endless loops?
In my use case I would be ok to determine a session as "endlessly looped" based on a threshold of how often any rule might have been activated.
As I understand it, I could maybe use AgendaEventListeners that keep track of how often each rule has been fired, and if a threshold is met, either insert a control fact or somehow trigger a rule that contains drools.halt() for this session.
I wonder (and couldn't find a lot of details) if it is possible to programmatically halt/terminate sessions.
I've only come across a fireUntilHalt() method, but that didn't seem like the way to go (or I didn't really understand it).
Also, at this point I was only planning to use stateless sessions (but if it's well encapsulated I could also work with stateful sessions if that makes my goal easier to achieve).
Any answers/ideas/feedback to my initial approach is highly welcome :)
Thanks!
A fundamental breaking point of any RBS implementation is created where the design lets users "create and design their own rules". I don't know why some marketing hype opens the door for non-programmers to write what is, in fact, program code, without any safeguards.
Detecting whether a session halts is theoretically impossible. Google "Halting problem".
For certain contexts you might come up with a limit of the number of rules that might be executed at most or something similar. And you can use listeners to count and raise an exception, etc etc.
Basically you have very bad cards once you succumb to the execution of untested code created by amateurs.
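The counting idea from the answer can be sketched abstractly. In Drools this logic would live in an AgendaEventListener (e.g. its afterMatchFired callback) and the halt would be the session's halt() call; the Python class below, including the name `FireLimitListener`, is only an illustration of the threshold mechanism.

```python
class FireLimitListener:
    """Threshold sketch: count rule firings and signal a halt once the
    limit is crossed (in Drools: an AgendaEventListener that eventually
    calls session.halt() or inserts a control fact)."""
    def __init__(self, max_firings):
        self.max_firings = max_firings
        self.firings = 0
        self.halted = False

    def after_match_fired(self, rule_name):
        self.firings += 1
        if self.firings >= self.max_firings:
            self.halted = True  # stand-in for halting the session

listener = FireLimitListener(max_firings=5)

# Simulate two rules endlessly re-triggering each other; the listener
# cuts the "session" off at the threshold instead of looping forever.
rules = ["rule1", "rule2"]
step = 0
while not listener.halted:
    listener.after_match_fired(rules[step % 2])
    step += 1
```

The threshold is necessarily a heuristic: as noted above, true loop detection runs into the halting problem, so all you can bound is "more activity than this workload should ever need".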

How cursor.observe works and how to avoid multiple instances running?

Observe
I was trying to figure out how cursor.observe runs inside Meteor, but found nothing about it.
The docs say:
Establishes a live query that notifies callbacks on any change to the query result.
I would like to understand better what live query means.
Where will my observer function be executed? By Meteor or by mongo?
Multiple runs
When we have more than one user subscribing to an observer, one instance runs for each client, leading to performance and race-condition issues.
How can I implement my observe so that it behaves like a singleton? Just one instance running for all clients.
Edit: There was a third question here, but now it is a separated question: How to avoid race conditions on cursor.observe?
Server side, as of right now, observe works as follows:
1. Construct the set of documents that match the query.
2. Regularly poll the database with the query and take a diff of the changes, emitting the relevant events to the callbacks.
3. When matching data is changed/inserted into mongo by Meteor itself, emit the relevant events, short-circuiting step #2 above.
There are plans (possibly in the next release) to automatically ensure that calls to subscribe that have the same arguments are shared. So basically taking care of the singleton part for you automatically.
Certainly you could achieve something like this yourself, but I believe it's a high priority for the meteor team, so it's probably not worth the effort at this point.
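The "shared observer" idea — one live query per distinct set of arguments, reused by every subscriber — can be sketched as a registry keyed by the query arguments. The names here (`ObserverRegistry`, `start_observer`) are invented for illustration; Meteor's eventual built-in sharing works on `subscribe` calls, not this API.

```python
class ObserverRegistry:
    """Singleton-per-query sketch: subscriptions with identical arguments
    share one underlying observer instead of each starting their own."""
    def __init__(self, start_observer):
        self._start = start_observer  # factory: args dict -> observer handle
        self._observers = {}          # normalized args -> [handle, refcount]

    def subscribe(self, args):
        key = tuple(sorted(args.items()))  # normalize args into a dict key
        if key not in self._observers:
            self._observers[key] = [self._start(args), 0]
        entry = self._observers[key]
        entry[1] += 1                      # track how many clients share it
        return entry[0]

    def active_observers(self):
        return len(self._observers)

started = []  # record each time a real observer would be started
registry = ObserverRegistry(lambda args: started.append(args) or len(started))

h1 = registry.subscribe({"room": "lobby"})
h2 = registry.subscribe({"room": "lobby"})   # reuses h1's observer
h3 = registry.subscribe({"room": "other"})   # distinct args, new observer
```

A refcount like the one above also tells you when the last subscriber leaves, so the shared observer can be stopped rather than polling forever.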

ADO.NET CommandTimeout property in SqlCommand?

In a web application, I put
sqlcommand.CommandTimeout=0;
Is this statement recommended or not [good programming style]? And if it is not good, what is a better value?
From the SqlCommand.CommandTimeout documentation:
A value of 0 indicates no limit, and should be avoided
It could cause the request, and the thread processing it, to hang indefinitely. This is a waste of resources, if nothing else.
It would also make it harder to identify if you have commands that are not completing in a reasonable time.
is this statement recommended or not
No, it is not.
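The general principle — give up on a blocked call after a finite timeout instead of waiting forever — can be sketched outside of ADO.NET. This Python snippet (the helper `run_with_timeout` is an invented name, not a SqlCommand API) shows the behaviour a finite CommandTimeout buys you: the slow call is abandoned instead of hanging the request.

```python
import concurrent.futures
import time

def run_with_timeout(fn, timeout):
    """Run fn, but give up after `timeout` seconds -- the opposite of
    CommandTimeout = 0, which waits indefinitely."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(fn)
        try:
            return future.result(timeout=timeout)
        except concurrent.futures.TimeoutError:
            return None  # caller can log/retry instead of hanging forever

fast = run_with_timeout(lambda: 42, timeout=1.0)                 # completes
slow = run_with_timeout(lambda: time.sleep(0.5) or 99, timeout=0.1)  # cut off
```

With a bounded timeout, a query that is never going to finish surfaces as a detectable error, which is exactly what makes slow commands identifiable.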