Difference between cursors opened within a session vs outside a session - mongodb

I have read quite a lot of documentation on the server session concept, but I just can't understand what it is. What does this sentence mean?
By default, cursors not opened within a session automatically timeout
after 10 minutes of inactivity. Cursors opened under a session close
with the end or timeout of the session.
(From the MongoDB documentation.)
What is a server session?
What is the difference between cursors not opened within a session vs cursors opened under a session?
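Purely to illustrate what "opened under a session" means in practice, here is a minimal sketch using the official Go driver (go.mongodb.org/mongo-driver, v1); the question isn't tied to any particular driver, and the URI, database and collection names below are placeholders:

```go
package main

import (
	"context"
	"log"

	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

func main() {
	ctx := context.Background()
	// Placeholder URI; adjust for your deployment.
	client, err := mongo.Connect(ctx, options.Client().ApplyURI("mongodb://localhost:27017"))
	if err != nil {
		log.Fatal(err)
	}
	defer client.Disconnect(ctx)
	coll := client.Database("test").Collection("docs")

	// A plain find, with no explicitly started session: per the quoted docs,
	// such a cursor is reaped by the server after 10 minutes of inactivity.
	cur, err := coll.Find(ctx, bson.D{})
	if err != nil {
		log.Fatal(err)
	}
	defer cur.Close(ctx)

	// A cursor opened under an explicitly started session: per the quoted
	// docs, it closes when that session ends or times out.
	sess, err := client.StartSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.EndSession(ctx)

	err = mongo.WithSession(ctx, sess, func(sc mongo.SessionContext) error {
		sCur, err := coll.Find(sc, bson.D{})
		if err != nil {
			return err
		}
		defer sCur.Close(sc)
		// ... iterate sCur here ...
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
}
```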

Related

How can I know if a mgo session is closed

I'm using *mgo.Session from the MongoDB driver labix_mgo for Go; however, I don't know whether a session has been closed. When I use a closed session, a runtime error is raised. I want to skip copying the session if I know it is closed.
First, the mgo driver you are using, gopkg.in/mgo.v2 (hosted at https://github.com/go-mgo/mgo), is not maintained anymore. Instead, use the community-supported fork github.com/globalsign/mgo, which has a backward-compatible API.
mgo.Session does not provide a way to detect if it has been closed (using its Session.Close() method).
But you shouldn't depend on others closing the session you are using. The same code that obtains a session should be responsible for closing it. Follow this simple principle and you won't run into problems with using a closed session.
For instance, taking a web server as an example: obtain a session using Session.Copy() (or Session.Clone()) at the beginning of the request, and close the session (preferably with defer) in the same handler, in the same function. Just pass this session along to whoever needs it. They don't have to close it; in fact they mustn't, as that is the responsibility of the function that created it.
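As a minimal sketch of this pattern (assuming github.com/globalsign/mgo, a MongoDB on localhost, and made-up database/collection names and document type):

```go
package main

import (
	"log"
	"net/http"

	"github.com/globalsign/mgo"
	"github.com/globalsign/mgo/bson"
)

// Person is a made-up document type, just for the sketch.
type Person struct {
	Name string `bson:"name"`
}

var rootSession *mgo.Session // dialed once at startup

func usersHandler(w http.ResponseWriter, r *http.Request) {
	// Obtain a session for this request and close it in the same function.
	s := rootSession.Copy()
	defer s.Close()

	// Pass the session down; callees never close it themselves.
	names, err := listNames(s)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	for _, n := range names {
		w.Write([]byte(n + "\n"))
	}
}

// listNames uses the session it is given, but does not own (or close) it.
func listNames(s *mgo.Session) ([]string, error) {
	var people []Person
	// "test" / "people" are placeholder database / collection names.
	if err := s.DB("test").C("people").Find(bson.M{}).All(&people); err != nil {
		return nil, err
	}
	names := make([]string, len(people))
	for i, p := range people {
		names[i] = p.Name
	}
	return names, nil
}

func main() {
	var err error
	rootSession, err = mgo.Dial("localhost") // adjust to your MongoDB address
	if err != nil {
		log.Fatal(err)
	}
	defer rootSession.Close()

	http.HandleFunc("/users", usersHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Because Copy() hands out a session backed by the driver's connection pool and Close() returns that resource to the pool, scoping the copy to the handler keeps the pool healthy and makes it impossible to accidentally use an already-closed session further down the call chain.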

how does CONN_MAX_AGE work in Django

Can someone ELI5 what CONN_MAX_AGE does? I thought it worked like this:
1) Request #1 comes in, opens connection 1 to database
2) Request #1 uses connection 1 to do some work
3) Request #1 completes. Because CONN_MAX_AGE is non-zero (and the age has not been reached), the connection is left open.
4) Request #2 comes in, and Django re-uses connection #1 to the database.
But that doesn't seem to be happening. I have a page on my site that does an AJAX poll every 15 seconds. In my development environment, I see the number of open connections (select count(*) from pg_stat_activity) slowly grow, until eventually I get:
OperationalError: FATAL: sorry, too many clients already
So I'm wondering where I've gone awry. Is CONN_MAX_AGE only used to keep connections open within a single HTTP request?
UPDATE:
Looking more carefully at the docs, I see this:
The development server creates a new thread for each request it
handles, negating the effect of persistent connections. Don’t enable
them during development.
Ah, so that seems to imply that a connection "belongs to" a thread. (And the thread may open/close the connection, based on the value of CONN_MAX_AGE).

PhantomJS not killing webserver client connections

I have a kind of proxy server running on a WebServer module, and I noticed that this server is being killed because of its memory consumption.
Every time the server gets a new request it creates a child client process; the problem I see is that these processes remain alive indefinitely.
Here is the server I'm using:
server.js
I thought response.close() was closing and killing client connections, but it is not.
Here is the list of child processes displayed on htop:
(There are even more of these processes; this is just a fragment of the list.)
I really need to kill those processes because they are using all the free memory. Am I missing something?
I could simply restart the server, but the memory will still be wasted.
Thank you!
EDIT:
The processes I mentioned before are threads, not independent processes as I thought (check this).
Every HTTP request creates a new thread, and that's OK, but the thread is not being killed after the script ends.
Also, I found out that no new threads are created if the request handler doesn't run casper (I mean casper.run(..)).
So new threads are created only if the server runs a casper instance; the problem is that this instance doesn't end after the run function does.
I tried casper.done() as mentioned below, but it kills the whole process instead of the currently running thread (I did not find any documentation for this function).
When I execute other casper scripts outside the server, on the same machine, the spawned threads and the whole phantom process end successfully. What could be happening?
I am using Phantom 2.1.1 and Casper 1.1.1.
Please ask if you want more or more specific information.
Thanks again for reading!
This is a well-known issue with casper:
https://github.com/casperjs/casperjs/issues/1355
It has not been fixed by the casper guys and is currently marked as an enhancement. I guess it's not on their priority list.
Anyway, the workaround is to write a server-side component, e.g. a node.js server, that handles the incoming requests and, for every request, runs a casper script in a new child process to do the scraping. That child process is closed when casper finishes its job. While this is a workaround, it is not an optimal solution, as spawning a child process for every request is not cheap and it will be hard to scale such an approach heavily. It is, however, a sufficient workaround. More on this approach can be found in the link above.
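The answer proposes a node.js front server, but the same one-child-process-per-request idea works from any language. Purely as an illustrative sketch, here it is as a small Go HTTP server (scrape.js is a hypothetical casper script, and the casperjs binary is assumed to be on the PATH):

```go
package main

import (
	"context"
	"log"
	"net/http"
	"os/exec"
	"time"
)

// scrapeHandler runs one casperjs child process per incoming request. The
// child (and the PhantomJS instance it drives) exits when the script ends,
// so its memory is returned to the OS instead of accumulating inside one
// long-lived server process.
func scrapeHandler(w http.ResponseWriter, r *http.Request) {
	target := r.URL.Query().Get("url")

	// Guard against hung scrapes; 60 seconds is an arbitrary choice.
	ctx, cancel := context.WithTimeout(r.Context(), 60*time.Second)
	defer cancel()

	// "scrape.js" is a hypothetical casper script that prints its result to stdout.
	out, err := exec.CommandContext(ctx, "casperjs", "scrape.js", target).Output()
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadGateway)
		return
	}
	w.Write(out)
}

func main() {
	http.HandleFunc("/scrape", scrapeHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```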

Lifetime of a sugar session

1) What is the default lifetime of the session returned by the SugarCRM login REST call?
2) Can storing the session be considered good practice?
Please advise.
The session life is the same as the PHP session life on the server, which can be controlled somewhat via the session.gc_maxlifetime directive in your php.ini file (1440 seconds, i.e. 24 minutes, by default).
When you say "storing" the session, do you mean trying to use it across multiple scripts. Not sure if there is a good reason to do that, mainly because of the weirdness of how PHP sessions GC. I would initialize a session for each script, or at the very least check to see if you session is valid on each call to see if you need to re-init or not.

Gracefully updating server code

I have exactly one Node server, which is currently running some code. This code is now outdated. How can I switch to the new code without any server downtime? Do I need to get another server to act as a buffer?
Basically you "kill" the old process and immediately start the server again, read the following article for more details & code sample:
http://codegremlins.com/28/Graceful-restart-without-downtime