Our .NET web app uses ODP.NET for its connections, and the Oracle user it connects with is "webuserOracle". The web app always closes and disposes its connections.
However, on our Oracle 10g database we see that the session and process counts for the user "webuserOracle" stay high, as if the connections never close or die.
We have decided to set up an Oracle profile for "webuserOracle" in order to limit connect time to 5 minutes:
CREATE PROFILE profile_webuserOracle LIMIT CONNECT_TIME 5;
ALTER USER webuserOracle PROFILE profile_webuserOracle;
Question:
For a web app, does limiting the connection to 5 minutes mean that a user could still interact with the web app for, say, 2 hours? Is the 5-minute limit only for the database work triggered by events (like clicking a button), i.e. 5 minutes for everything that happens between con.Open and con.Dispose?
Dim con As OracleConnection = oraConexion()
con.Open()
''' There'll be a limit of 5 minutes to run the code here
con.Close()
con.Dispose()
Setting a CONNECT_TIME in a profile for a web application is likely to be a very bad idea.
First off, a three-tier application generally makes use of connection pools in the middle tier: the middle-tier servers open a pool of database connections that remain open for a long time and are handed out to web sessions as needed. A single web user clicking around the site is therefore likely to get a different database session with each click, and a single database session will be used by a very large number of web users.
If you set a CONNECT_TIME for your connection pool connections:
- The middle tier is likely to constantly get errors that the particular connection it obtained from the connection pool has exceeded its allowed connection time. You can mitigate some of that by having the middle tier execute a dummy query (e.g. select 1 from dual) on every connection it gets from the pool to verify that the 5 minutes haven't elapsed before the interaction starts, but there is no guarantee that the timeout won't be reached when you run the first query on the page.
- The middle tier will constantly be opening new physical connections to the database (a rather expensive process) to replace the connections that have been closed because they've been open for 5 minutes. These connection storms are likely to put a substantial load on the database. They will also create performance problems for the application, as users are constantly waiting for new physical connections to be opened rather than reusing connections from the pool.
- The number of sessions and processes is likely to be much higher if you make this change. The middle tier will maintain however many real physical connections it needs to service the users, plus a number of expired connections that have to remain simply to inform the next caller that they are expired.
What is the real problem you are trying to solve? It is perfectly normal and healthy for the middle tier to maintain a pool of database connections that do not close. If you want to reduce the number of connections open at any one time, adjust the connection pool settings on your middle-tier servers.
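With ODP.NET the pool is tuned through connection string attributes such as Min Pool Size, Max Pool Size and Connection Lifetime. As a rough sketch of the same knobs, here is what pool configuration looks like with the Node.js oracledb driver; the host, password handling and the numbers themselves are assumptions for illustration, not recommendations:

import oracledb from "oracledb";

// Create one pool at application start-up. A handful of long-lived sessions
// for webuserOracle sitting in this pool is the expected steady state, not a leak.
async function initPool() {
  await oracledb.createPool({
    user: "webuserOracle",
    password: process.env.WEBUSER_PASSWORD, // assumed to come from the environment
    connectString: "dbhost:1521/ORCL",      // hypothetical host and service name
    poolMin: 5,        // connections kept open permanently
    poolMax: 20,       // upper bound on concurrent sessions for this user
    poolIncrement: 1,  // grow the pool one connection at a time
    poolTimeout: 300,  // seconds an idle connection may sit in the pool before it is closed
  });
}

// Each web request borrows a connection and gives it back when done.
async function handleRequest(sql: string, binds: any[] = []) {
  const conn = await oracledb.getConnection(); // borrowed from the default pool
  try {
    return await conn.execute(sql, binds);
  } finally {
    await conn.close(); // returns the connection to the pool; it is not physically closed
  }
}

Shrinking poolMax (or Max Pool Size in ODP.NET) is the intended way to cap how many sessions webuserOracle holds, rather than killing them with CONNECT_TIME.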
Related
I want to create a web application for restaurants, and for business-model reasons it should be an online web application (in the cloud). A restaurant can have an account on the app, create its own menu, and add its waiters and cooks.
The waiters should be able to access the menu and place orders at all times. The main issue I have to decide how to handle is:
"How can I guarantee full-time availability to the waiters or the cook, even when the internet connection is lost for several seconds, several minutes, or even hours during the day?"
I was thinking of installing the app on a server in the local network of the restaurant, which takes over the responsibility of the cloud server when there is no internet connection, meaning all orders are saved in the DB of the local server. As soon as the connection is back, the local DB is synced to the cloud DB (I was told PostgreSQL might have plugins supporting this, via on-premise or something similar). In other words, local DB records should be pushed to the cloud DB.
Can someone give me a hint on what technology (open source, no enterprise solutions please) to use to accomplish the DB syncing when the internet connection goes on and off?
Am I on the right track, or completely off with what I suggested above?
In order to secure our database, we create a schema for each new customer. We then create a user for this schema, and when a customer logs in via the web we connect as that user, which prevents them from gaining access to other areas of the database.
Our issue is with connection pooling as it is a bit inefficient to keep creating/dropping new connections for these users. We would like to have a solution that can work across many hundreds of different database users.
We've looked at pg_bouncer, but the issue here is that we have to create a text record in an ini file for each user and restart pg_bouncer every time we set up a customer. This is not a great solution.
Is there an alternative solution that works in real time, so that a customer's connection(s) would stay in the pool while they are active?
According to the latest release notes, pgbouncer might actually do this, but I haven't tried it:
Pooling mode can be configured both per-database and per-user.
As for the use case in general: we had this kind of issue a while ago and went with connection pooling using one user/database and multiple schemas. Before running a query we simply issued SET search_path TO schemaName. As for logging, we had a compliance mode in which we could log activity per customer and save it in the appropriate schema.
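A minimal sketch of that pattern with node-postgres, assuming one pooled database user and one schema per customer (the connection string, pool size and schema naming are placeholders):

import { Pool } from "pg";

// One shared pool for a single database user; tenants are separated by schema.
const pool = new Pool({
  connectionString: "postgres://app_user:secret@db-host:5432/appdb", // hypothetical
  max: 20, // assumed upper bound on pooled connections
});

// Run a query in a specific customer's schema by switching search_path
// on the borrowed connection before executing the real query.
async function queryForTenant(schemaName: string, sql: string, params: any[] = []) {
  const client = await pool.connect();
  try {
    // Quote the identifier; never interpolate untrusted input directly.
    await client.query(`SET search_path TO "${schemaName.replace(/"/g, '""')}"`);
    return await client.query(sql, params);
  } finally {
    // Reset so the next borrower does not inherit this tenant's search_path.
    await client.query("RESET search_path").catch(() => {});
    client.release();
  }
}

The reset before release matters because the same physical connection will be handed to a different customer next.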
Let's imagine that my node.js + express + socket.io server with express-session middleware is using MongoDB as session storage ('connect-mongo'), with the session's maxAge set to null (i.e. the cookie lasts as long as the user's browser is open), and that this server is now completely down.
Ages pass, and in a new century, while Earth is being torn apart by Zombies, Werewolves and Alien Invaders, a bunch of insanely brave scientists discover the intact remnants of my server and boot it up.
By this time many (if not all) clients' browsers have been closed and their cookies cleared. If one of those clients connects to my server, the server will discover that the client isn't presenting a valid cookie and will create a new one for them.
Now, the part I'm interested in: what happens to those old sessions stored in the connect-mongo store? Obviously the server wasn't able to clean them up while it was down, so will they just hang around as dead cargo in the DB? Or is there some mind-blowing magic that, after the server reboots, somehow 'knows' those users ended their sessions long ago and cleans everything up accordingly?
express-session doesn't enforce any clean-up behavior for its stores (at least I didn't see any evidence of that in the source code). However, stores may certainly clean up stale sessions. For example, from the connect-mongo documentation:
By default, connect-mongo uses MongoDB's TTL collection feature (2.2+) to have mongod automatically remove expired sessions. But you can change this behavior.
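For reference, wiring that up looks roughly like this (a sketch assuming the connect-mongo v4+ API; the secret, connection string and TTL value are placeholders):

import express from "express";
import session from "express-session";
import MongoStore from "connect-mongo";

const app = express();

app.use(
  session({
    secret: "keyboard cat", // placeholder secret
    resave: false,
    saveUninitialized: false,
    // No cookie.maxAge here, so the cookie lives only as long as the browser,
    // as in the question.
    store: MongoStore.create({
      mongoUrl: "mongodb://localhost:27017/myapp", // hypothetical connection string
      ttl: 14 * 24 * 60 * 60, // seconds a stored session is kept before it counts as expired
      autoRemove: "native",   // let MongoDB's TTL index purge expired documents
    }),
  })
);

Because the TTL cleanup is performed by mongod itself against the session's expiry field, sessions that went stale while the Node process was down are purged by the database once it is running again; nothing in the application has to 'remember' them.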
Running postgresql 9.x (9.1 - 9.3)
I have a custom web app built using PHP's PDO library. Every query in our app uses prepared statements (via our internal PDO wrapper library).
Our production system uses AWS EC2 small instances for the web server and RDS for the database.
For local development, my local machine serves as the web-server, and an office machine running Mac OSX (Mavericks) serves up the DB.
When I'm at work, I can typically ping the office DB server and get 1-5ms ping times. Everything performs well: page load times are very speedy, and my internal timer shows that PHP runs the page from start to end in about 12ms.
The issue comes in when I take my work laptop home: from home I get about 50-60ms ping times to the office DB server. If I run my development machine at home, pages now take 5-10 seconds to load, every time. Granted, there are 4 DB queries running per page load, but it's very, very little data. I've tried TCP_NOWAIT settings, and I've tried running pgbouncer on my local machine with persistent connections to the remote DB; nothing has helped so far.
When timing the queries, a simple query that returns 100 bytes of data runs in 0.0006 seconds locally but takes around 1 second remotely. It appears to be related to latency only: no matter how much data a query returns, it takes around 1 second longer than it would locally, give or take.
I was wondering if anyone could help me work out what this delay might be. Every single query, no matter how much data, seems to impart a delay of around a second. The odd thing is that when I run pgAdmin on my machine connecting to the remote DB, it takes nowhere near that long to run simple queries.
Does anyone have any idea of other things to try? I'm not running the DB connection over SSL or using any compression; I'm willing to try them if necessary, but SSL is one thing I haven't gotten to work before, and I doubt it would help with latency anyway.
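As a point of reference, a measurement like the following separates connection setup time from per-query round-trip time (sketched here with node-postgres rather than PDO purely for brevity; the host, credentials and trivial query are placeholders):

import { Client } from "pg";

// Time the connection handshake and a few trivial queries separately, so
// network round trips can be told apart from connection setup cost.
async function timeRoundTrips() {
  const t0 = Date.now();
  const client = new Client({ connectionString: "postgres://dev:dev@office-db:5432/devdb" });
  await client.connect();
  console.log(`connect: ${Date.now() - t0} ms`);

  for (let i = 0; i < 4; i++) {      // roughly the 4 queries a page issues
    const t = Date.now();
    await client.query("SELECT 1");  // trivial query, so the measured time is mostly latency
    console.log(`query ${i + 1}: ${Date.now() - t} ms`);
  }

  await client.end();
}

timeRoundTrips().catch(console.error);

If the connect step dominates, the pages are paying connection setup on every request; if each trivial query already costs far more than the 50-60ms ping, something is adding extra round trips per query.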
I am noticing latency in REST data the first time I visit a web site that is being served via Azure Mobile Services. Is there a cache, or does a connection time out after a set amount of time? I am worried about the user experience of waiting 7-8 seconds for the data to load (and there is not a lot of data; I am testing with 10 records returned). Once the first connection is made, subsequent visits appear to load quickly... but if I don't visit the site for a while, we are back to 7-8 seconds on the first load.
Reason: The latency is caused by the "shared" mode. When the first call to the service is made after a period of inactivity, it performs a "cold start" (initializing and starting the virtual server, etc.).
As you described in your question, after the service has not been used for a while, it is put back into "sleep mode".
Solution: If you do not want this waiting time, you can set your service to "reserved" mode, which forces the service to stay active all the time, even when you do not access it for a while. Be aware that this requires you to pay some extra fees.
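A quick way to see the effect described above is to hit the same endpoint twice in a row and compare timings (a minimal sketch using the global fetch API; the URL is a placeholder for your Azure Mobile Services endpoint):

// Hit the endpoint twice: the first request pays the cold-start penalty if the
// service was asleep, the second should return in normal time.
const url = "https://your-service.azure-mobile.net/tables/items?$top=10"; // placeholder

async function timedFetch(label: string) {
  const t0 = Date.now();
  const res = await fetch(url);
  await res.text(); // read the body so the full transfer is included in the timing
  console.log(`${label}: ${Date.now() - t0} ms (status ${res.status})`);
}

(async () => {
  await timedFetch("first request");
  await timedFetch("second request");
})();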