SQLAlchemy: connections aren't closed when the pool overflows - PostgreSQL

When I run ab (Apache Benchmark) against my site (SQLAlchemy with PostgreSQL, hosted on an Apache web server), SQLAlchemy opens many connections to PostgreSQL and I get a "too many connections" error.
I traced the problem and found that the culprit is the pool (specifically QueuePool).
The documentation at http://www.sqlalchemy.org/docs/core/pooling.html#sqlalchemy.pool.Pool says that when the pool is full, returned connections (the extra ones that were opened because max_overflow allowed them) are discarded and disconnected.
But it seems the connections aren't actually closed! They are silently dropped from the pool without being closed.
So SQLAlchemy continuously opens new connections, abandons them (without closing them!), and opens more.
Increasing the pool size is not a real solution; the problem is that the extra connections are never closed.
(The defaults for QueuePool are pool_size=5 and max_overflow=10.)
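For reference, here is a minimal sketch of the pool settings being discussed, assuming a psycopg2-backed PostgreSQL URL (the URL and credentials are placeholders):

from sqlalchemy import create_engine

engine = create_engine(
    "postgresql://user:password@localhost/mydb",
    pool_size=5,      # connections kept open in the pool (the default)
    max_overflow=10,  # extra connections allowed beyond pool_size (the default)
    pool_timeout=30,  # seconds to wait for a free connection
)

# An overflow connection is supposed to be really closed when it is returned
# to a full pool; the behaviour described above is that it gets dropped instead.
conn = engine.connect()
conn.close()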

This looks like a bug in SQLAlchemy that was fixed two weeks ago: http://hg.sqlalchemy.org/sqlalchemy/rev/aff95843c12f#l2.17
There has been no release containing the fix yet, so you have to patch it manually.
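Not part of the original answer, but until you can run a fixed version, one possible stopgap is to bypass QueuePool entirely with NullPool, so every connection SQLAlchemy hands back is genuinely closed instead of leaked (at the cost of opening a fresh connection per checkout):

from sqlalchemy import create_engine
from sqlalchemy.pool import NullPool

# NullPool opens and closes a real connection for every checkout, trading
# throughput for the guarantee that nothing is left dangling.
engine = create_engine("postgresql://user:password@localhost/mydb",
                       poolclass=NullPool)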

I think it's a bug and it's already fixed... install from source and have fun ;)

Related

Opening DB connections to Postgres taking long

Some of our applications are facing issues with the connection pool. I run one of them: a JEE application on Payara 4.1 that uses PostgreSQL 9.5.8.
I have practically no problems when running the application locally against a local DB instance. On the remote environment, however, I saw the application become unresponsive roughly every 10 minutes (well, it actually answered everything with HTTP status 503). Guessing this was related to connection opening taking long, we set the parameter idleTimeoutInSeconds="0" on the jdbc-resource. Now the same issue happens about 4 times a day, which is an improvement, but - well - neighbouring systems are still complaining.
We usually run with 5 steady connections and a maximum of 30. The application typically uses 1 to 2 connections to handle its traffic. With a TCP dump I have seen that at a certain point in time the connection pool tries to open many connections at once (it realizes that the connections it holds were closed by the DB without any notification such as a TCP FIN, and opening each connection takes about 1 second). During this window of about 30 seconds not all requests can be queued safely and some 503s happen.
Locally everything is fine: opening a connection takes ~50 ms and everyone is happy. Our Postgres team is not helping at all and I am stuck with the problem. Since I don't see any way to improve the connection pool in JEE, I have radical ideas going in the direction of:
Refreshing the connections myself. All the time. Constantly. (Which would be hard to implement in JEE, where I cannot simply look into the connection pool and tell each connection to refresh itself just in case.)
Replacing the not-helping-at-all JEE implementation of the connection pool with something that works better. (Future generations of developers maintaining our app will hate me...)
Replacing the DB with something managed by myself. (An even dumber idea.)
Does anyone:
have any idea how I could implement 1 or 2 above?
have any other ideas about what could help?
Here is my current JDBC resource definition, if needed:
<jdbc-resource poolName="<poolName>" jndiName="<jndiName>" isConnectionValidationRequired="true"
connectionValidationMethod="table" validationTableName="version()" maxPoolSize="30"
validateAtmostOncePeriodInSeconds="30" statementTimeoutInSeconds="30" isTimerPool="true" steadyPoolSize="5"
idleTimeoutInSeconds="0" connectionCreationRetryAttempts="100000" connectionCreationRetryIntervalInSeconds="30"
maxWaitTimeInMillis="2000"/>
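Not part of the question itself, but one cheap way to check whether the ~1 second figure really is raw connection-establishment latency (rather than something inside Payara) is to time connections from outside the JEE stack. A small sketch assuming Python with psycopg2 and a placeholder DSN pointing at the same PostgreSQL 9.5 server:

import time
import psycopg2

DSN = "host=remote-db-host dbname=mydb user=myuser password=secret"  # placeholders

for attempt in range(5):
    start = time.monotonic()
    conn = psycopg2.connect(DSN)  # full TCP + authentication handshake
    elapsed = time.monotonic() - start
    conn.close()
    print(f"attempt {attempt + 1}: connected in {elapsed * 1000:.0f} ms")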

Heroku Postgres Connection Limit?

I'm building a website attached to a Heroku Postgres database and am using the free hobby dev plan. Per Heroku, this means there's a "Maximum of 20 connections." Does this mean that a maximum of 20 people can be using the website with data being collected by the database on the back end? Any idea what happens if connections go above that level? The paid plans go up to a maximum connection limit of 500, but even that seems low to me if people are using this at the enterprise level. Any color on this would be greatly appreciated. There was a prior question on this but the answer wasn't quite clear to me.
Thanks!
What does database connection limit mean?
PostgreSQL can be configured to limit the number of simultaneous connections to the database, and Heroku's plans come with connection limits: the 'Hobby' plans allow 20 connections, whereas Standard plans start at 120 connections. Once we start developing and testing, especially with automated tests, the hobby plans quickly raise the error PG::Error (FATAL: too many connections for role "xxxxxxx"). If we check the connections with the Heroku CLI, we get:
(screenshot of Heroku CLI output showing the connection count)
The immediate solution is to kill all connections with the command:
$ heroku pg:killall --app <app name>
This is not a permanent solution, though. We had the same issue with this website as well, and we tried many of the solutions available on the internet, especially on Stack Overflow.
It is very important to know how to calculate the number of connections required. The Heroku documentation says:
Assuming that you are not manually creating threads in your application code, you can use your web server settings to guide the number of connections that you need. The Unicorn web server scales out using multiple processes; if you aren't opening any new threads in your application, each process will take up 1 connection. So in your Unicorn config file, if you have worker_processes set to 3 like this:
worker_processes 3
Then your app will use 3 connections for workers. This means each dyno will require 3 connections. If you’re on a “Dev” plan, you can scale out to 6 dynos which will mean 18 active database connections, out of a maximum of 20. However, it is possible for a connection to get into a bad or unknown state.
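To make the arithmetic in that quote explicit (the numbers are the example values from the docs, not measurements from a real app):

worker_processes = 3         # Unicorn workers per dyno, from the config above
dynos = 6                    # maximum dynos on the old "Dev" plan
connections_per_process = 1  # one connection per worker if no extra threads

total = worker_processes * dynos * connections_per_process
print(total)  # 18 of the 20-connection hobby limit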
Solution - Limit connections with PgBouncer
The easiest fix is to limit the connections with PgBouncer. For many frameworks, you must disable prepared statements in order to use PgBouncer. Then add the PgBouncer buildpack to your app:
$ heroku buildpacks:add https://github.com/heroku/heroku-buildpack-pgbouncer
The output will be something like:
Buildpack added. Next release on will use:
heroku/python
https://github.com/heroku/heroku-buildpack-pgbouncer
Run git push heroku master to create a new release using these buildpacks.
Now you must modify your Procfile to start PgBouncer. In your Procfile, add the command bin/start-pgbouncer-stunnel to the beginning of your web entry. So if your Procfile was:
web: gunicorn .wsgi:application --worker-class gevent
Change it to:
web: bin/start-pgbouncer-stunnel gunicorn .wsgi:application --worker-class gevent
Commit the results to git, test on a staging app, and then deploy to production.
On deployment, you will see the buildpacks (including PgBouncer) being used in the build output.
Depending on the web-framework you are using this can be different, but:
Typically you will have a maximum of one database connection per server process. This could be one per running web or worker dyno, or more if your framework runs multiple threads / worker processes per dyno (most do).
These connections are then only used when there is an actual request to your application, not while the user is just viewing a page.
When you're running an async framework (Node.js for example, or greenlets in Python) this gets a little more complicated.
The easy way: just test it. You'll see the current connection count in the Heroku interface. There are frameworks and services in the wild that let you test with concurrent users.
The even easier way (since this runs on a hobby plan, it seems like a hobby application): just see when it breaks :)
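If you want to watch the count yourself rather than rely on the Heroku dashboard, here is a minimal sketch assuming psycopg2 and a DATABASE_URL environment variable (non-superuser roles may not see every backend's details, but the count still works):

import os
import psycopg2

conn = psycopg2.connect(os.environ["DATABASE_URL"])
with conn.cursor() as cur:
    cur.execute("SELECT count(*) FROM pg_stat_activity;")
    print("current connections:", cur.fetchone()[0])
conn.close()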

Connected To XEPDB1 From SQL Developer [duplicate]

I am using an Oracle database in a Windows environment and running a JSP/servlet web application in Tomcat. After I do some operations with the application, it gives me the following error:
ORA-12518, TNS: listener could not hand off client connection
Can anyone help me identify the reason for this problem and propose a solution?
The solution to this question is to increase the number of processes:
1. Open a command prompt
2. sqlplus / as sysdba   -- log in as the sysdba user
3. startup force;
4. show parameter processes;   -- shows the currently allocated processes (150 by default); increase the count to 800
5. alter system set processes=800 scope=spfile;
(Restart the instance afterwards so the spfile change takes effect.)
Tried and tested.
In my case I found that it was because I hadn't closed the database connections properly in my application. Too many connections were open and Oracle could not create more; it is a resource limitation. Later, when I checked the Oracle forum, I saw some reasons mentioned there for this problem. Some of them are:
In most cases this happens due to a network problem.
Your server is probably running out of memory and needs to swap memory to disk. One cause can be an Oracle process consuming too much memory.
If it is the second one, verify that large_pool_size is big enough, or check whether the dispatchers are sufficient for all connections.
You can refer to the link below for further details.
https://community.oracle.com/message/1874842#1874842
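Not from the original answer; just to illustrate the "always close your connections" point in a compact way, here is a small Python sketch using cx_Oracle (the credentials and connect string are placeholders; the same try/finally discipline applies to JDBC in a JSP/servlet app):

import cx_Oracle  # assumes the Oracle client libraries are installed

conn = cx_Oracle.connect("user", "password", "dbhost/XE")  # placeholder credentials
try:
    cur = conn.cursor()
    cur.execute("SELECT 1 FROM dual")
    print(cur.fetchone())
finally:
    conn.close()  # always release the server process, even when an error occurs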
I ran across the same problem. In my case it was a new install of the Oracle client on a new desktop that gave the error; other clients were working, so I knew the fix wouldn't be in the database configuration. tnsping worked properly, but sqlplus failed with the ORA-12518 listener error.
My tnsnames.ora entry had a SID instead of a SERVICE_NAME; once I fixed that, I still got the same error and found I had the wrong service name as well. Once I fixed that too, the error went away.
If the issue appears from one day to the next for no apparent reason, add the following lines at the bottom of the listener.ora file. If your ORACLE_HOME environment variable is set like this:
(ORACLE_HOME = C:\oracle11\app\oracle\product\11.2.0\server)
The lines to add are:
ADR_BASE_LISTENER = C:\oracle11\app\oracle\
DIRECT_HANDOFF_TTC_LISTENER=OFF
I had the same problem when executing queries in my application. I'm using the Oracle client with Ruby on Rails.
The problem started when I accidentally opened several connections to the DB and didn't close them.
When I fixed this, everything started to work fine again.
Hope this helps someone else with the same problem.
I experienced the same error after upgrading to Windows 10. I solved it by starting the Oracle services that had been stopped.
Start all of the Oracle services from the Windows Services panel.
I had the same issue. After restarting all Oracle services it worked again.
I encountered the same problem.
The Oracle server's listener log showed more information.
I found that the SERVICE_NAME did not match the service name configured in tnsnames.ora, so I changed the application's data source configuration from the SID value to the SERVICE_NAME value, and that fixed it.
23-MAY-2019 02:44:21 * (CONNECT_DATA=(CID=(PROGRAM=JDBC Thin Client)(HOST=__jdbc__)(USER=XXXXXX$))(SERVICE_NAME=orclaic)) * (ADDRESS=(PROTOCOL=tcp)(HOST=::1)(PORT=50818)) * establish * orclaic * 12518
TNS-12518: TNS:listener could not hand off client connection
TNS-12560: TNS:protocol adapter error
TNS-00530: Protocol adapter error
64-bit Windows Error: 203: Unknown error
I had the same issue in a real-time application, and it went away by itself the next day. Upon checking, it was found that the server had run out of memory because of additional processes running.
So in my case, the reason was that the server ran out of memory.
First of all:
check the listener log
compare show parameter processes with select count(*) from v$process;
increase the processes parameter; this may also require increasing the SGA

Intermittent connection failures with Heroku Postgres while using play-slick

I have a Play app on Heroku connecting to a Postgres instance with play-slick. Around 30% of the time when I deploy a new application I get this in my logs:
java.sql.SQLTransientConnectionException: db - Connection is not available, request timed out after 1007ms.
When I restart the application it will usually start again, though sometimes it takes a few tries.
Any advice for what I can do to debug this?
Most likely, there is a period of time where both the old app and the new app are trying to get connections to the database, which means you have double your maximum allowed connections active.
There are two solutions:
Upgrade your database plan to allow for more connections
Reduce your max db connections by half
play-slick uses HikariCP to pool connections, so you can probably configure your max connections with maximumPoolSize.
I believe I've figured out what the issue was. I used the default Heroku Play Procfile, which contains -Ddb.default.url=${DATABASE_URL}, and additionally had the Slick db URL specified in my conf. Removing the former solved the problem.

Grails shareable vs unshareable connection pools for postgres datasource

My problem is that I have two apps that are both getting these exceptions:
Caused by: org.apache.tomcat.jdbc.pool.PoolExhaustedException:
[pool-2-thread-273] Timeout: Pool empty. Unable to fetch a connection
in 30 seconds, none available[size:100; busy:99; idle:0;
lastwait:30000].
There are two apps:
a Grails app WAR running in Tomcat, connecting to Postgres data source A
a standalone JAR connecting to data source B, which is a different database on the same Postgres server as data source A
Both apps seem to use org.apache.tomcat.jdbc.pool.ConnectionPool by default (I didn't configure the default pool anywhere and both apps use this). Also, my max connection limit is 200 and I'm only using fewer than 130 connections, so I'm not hitting a max-connection issue. Since the two apps use separate data sources, I read that this should mean they cannot share the same connection pool.
When I log in to my Postgres server, I can see that app 2 has 100 idle connections, and the max idle size of the pool is 100, so that is fine. However, what I was not expecting is that app 1 would use connections from app 2's pool - or rather, since it appears the apps share a connection pool, that app 1 would try to take from this common pool which already has 100 connections allocated. I would not really have expected that, because they use a Tomcat connection pool and app 2 doesn't even run in Tomcat, so why would the pools be shared...
So my questions are (since I really am having a hard time finding docs about this):
Is it accurate that by default different apps would use the same connection pool?
If they are using the same connection pool, how can they share it while using different data sources?
Is it possible in Grails to specify a shared vs. unshared connection pool? https://tomcat.apache.org/tomcat-7.0-doc/jndi-datasource-examples-howto.html mentions Postgres specifically and seems to describe a shared vs. unshared concept (though I can't find any good documentation about it), but it is configured outside of Grails. Is there any way to do it in Grails?
Notes: using Grails 2.4.5 and Postgres server 90502 (i.e. 9.5.2)