Large number of Sleep connections to database in TYPO3

Our TYPO3 application is experiencing downtime issues, with the logs displaying the error:
Core: Exception handler (WEB): Uncaught TYPO3 Exception: An exception occured in driver: Too many connections
If I connect to the MySQL database and run SHOW PROCESSLIST, all I see are lots of connections with the command "Sleep". This seems like a red flag to me, but this is not my area of expertise. Is there a good reason for this, and if not, what might the fix be?

The PHP mysqli driver allows for persistent connections,
which basically means a connection is kept open and put to "Sleep" until it is needed again.
This is a trade-off: an already open connection does not need to be re-established, so you can query faster, but at the cost that both systems spend some resources (memory, CPU) on keeping the connection alive.
See the PHP documentation for more details:
https://www.php.net/manual/de/mysqli.persistconns.php
and the configuration options (php.ini):
https://www.php.net/manual/de/mysqli.configuration.php
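For what it's worth, here is a minimal sketch of how you could inspect those sleeping connections and the server's idle timeout programmatically. It is written as a standalone JDBC client rather than anything TYPO3/PHP-specific, and the URL, credentials, and the MySQL Connector/J driver on the classpath are assumptions:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SleepingConnections {
    public static void main(String[] args) throws Exception {
        // Placeholder URL and credentials; requires MySQL Connector/J on the classpath.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/information_schema", "root", "secret");
             Statement st = conn.createStatement()) {

            // List the connections the server currently reports as sleeping.
            try (ResultSet rs = st.executeQuery(
                    "SELECT id, user, host, time FROM processlist "
                    + "WHERE command = 'Sleep' ORDER BY time DESC")) {
                while (rs.next()) {
                    System.out.printf("id=%d user=%s host=%s idle=%ds%n",
                            rs.getLong("id"), rs.getString("user"),
                            rs.getString("host"), rs.getLong("time"));
                }
            }

            // The server only reclaims sleeping connections after wait_timeout seconds.
            try (ResultSet rs = st.executeQuery("SHOW VARIABLES LIKE 'wait_timeout'")) {
                if (rs.next()) {
                    System.out.println("wait_timeout = " + rs.getString("Value"));
                }
            }
        }
    }
}

If the idle times you see are much larger than you would expect between requests, that points at persistent connections (or at connections that are never closed) rather than at normal request traffic.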

Related

Connected To XEPDB1 From SQL Developer [duplicate]

I am using ORACLE database in a windows environment and running a JSP/servlet web application in tomcat. After I do some operations with the application it gives me the following error.
ORA-12518, TNS: listener could not hand off client connection
Can anyone help me identify the reason for this problem and propose a solution?
The solution to this question is to increase the number of processes:
1. Open a command prompt
2. sqlplus / as sysdba // log in as the SYSDBA user
3. startup force;
4. show parameter processes; // this shows the allocated processes (a default such as 150); increase the count, e.g. to 800
5. alter system set processes=800 scope=spfile;
Tried and tested.
In my case, I found that it was because I hadn't closed the database connections properly in my application. Too many connections were open and Oracle could not make any more; this is a resource limitation. Later, when I checked the Oracle forum, I found some reasons mentioned there for this problem. Some of them are:
In most cases this happens due to a network problem.
Your server is probably running out of memory and needs to swap memory to disk. One cause can be an Oracle process consuming too much memory.
If it is the second one, verify that large_pool_size or the dispatchers are sufficient for all connections.
You can refer to the link below for further details:
https://community.oracle.com/message/1874842#1874842
I ran across the same problem. In my case it was a new install of the Oracle client on a new desktop that was giving the error; other clients were working, so I knew it wouldn't be a fix to the database configuration. tnsping worked properly, but sqlplus failed with the ORA-12518 listener error.
My tnsnames.ora entry had a SID instead of a service_name; once I fixed that, I still got the same error and found I had the wrong service_name as well. Once I fixed that too, the error went away.
If the issue shows up from one day to the next for no apparent reason, add the following lines at the bottom of the listener.ora file. If your ORACLE_HOME environment variable is set like this:
(ORACLE_HOME = C:\oracle11\app\oracle\product\11.2.0\server)
The lines to add are:
ADR_BASE_LISTENER = C:\oracle11\app\oracle\
DIRECT_HANDOFF_TTC_LISTENER=OFF
I had the same problem when executing queries in my application. I'm using the Oracle client with Ruby on Rails.
The problem started when I accidentally opened several connections to the DB and didn't close them.
When I fixed this, everything started to work fine again.
Hope this helps someone else with the same problem.
I experienced the same error after upgrading to Windows 10. I solved it by starting all of the Oracle services that had been stopped.
I had the same issue. After restarting all Oracle services it worked again.
I encountered the same problem, and the Oracle server listener log (excerpt below) showed more information: the SERVICE_NAME did not match the service name configured in tnsnames.ora. I changed the application's data source configuration from the SID value to the SERVICE_NAME value, and that fixed it.
23-MAY-2019 02:44:21 * (CONNECT_DATA=(CID=(PROGRAM=JDBC Thin Client)(HOST=__jdbc__)(USER=XXXXXX$))(SERVICE_NAME=orclaic)) * (ADDRESS=(PROTOCOL=tcp)(HOST=::1)(PORT=50818)) * establish * orclaic * 12518
TNS-12518: TNS:listener could not hand off client connection
TNS-12560: TNS:protocol adapter error
TNS-00530: Protocol adapter error
64-bit Windows Error: 203: Unknown error
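For anyone making the same SID-to-SERVICE_NAME change in a JDBC data source, here is a minimal sketch of the two thin-driver URL forms. The host, port, and credentials are placeholders; the service name is taken from the log above, and the Oracle JDBC (ojdbc) driver is assumed to be on the classpath:

import java.sql.Connection;
import java.sql.DriverManager;

public class OracleServiceNameConnect {
    public static void main(String[] args) throws Exception {
        // SID form (note the colon before the SID) -- the variant that did not match here:
        //   jdbc:oracle:thin:@dbhost:1521:orcl
        // SERVICE_NAME form (note the leading // and the slash before the service name):
        String url = "jdbc:oracle:thin:@//dbhost:1521/orclaic";

        try (Connection conn = DriverManager.getConnection(url, "app_user", "secret")) {
            System.out.println("Connected as " + conn.getMetaData().getUserName());
        }
    }
}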
I had the same issue in a real-time application, and the issue went away by itself the next day. Upon checking, it was found that the server had run out of memory due to additional processes running.
So in my case, the reason was that the server ran out of memory.
First of all:
1. Check the listener log.
2. Compare show parameter processes with select count(*) from v$process (see the sketch below).
3. Increase the processes parameter; this may also require increasing the SGA.
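A minimal sketch of the comparison in step 2, done over JDBC; the connection details are placeholders and the account needs SELECT access on the V$ views:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ProcessHeadroom {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/orcl", "system", "secret");
             Statement st = conn.createStatement()) {

            long used;
            try (ResultSet rs = st.executeQuery("select count(*) from v$process")) {
                rs.next();
                used = rs.getLong(1);
            }

            long limit;
            try (ResultSet rs = st.executeQuery(
                    "select value from v$parameter where name = 'processes'")) {
                rs.next();
                limit = rs.getLong(1);
            }

            // If 'used' is close to 'limit', the listener starts refusing
            // hand-offs and clients see ORA-12518.
            System.out.printf("processes in use: %d of %d%n", used, limit);
        }
    }
}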

Intermittent connection failures with heroku postgres while using play-slick

I have a play app on heroku connecting to a postgres instance with play-slick. Around 30% of the time when I deploy a new application I get this in my logs:
java.sql.SQLTransientConnectionException: db - Connection is not available, request timed out after 1007ms.
When I restart the application it will usually start again, though sometimes it takes a few tries.
Any advice for what I can do to debug this?
Most likely, there is a period of time where both the old app and the new app are trying to get connections to the database, which means you have double your maximum allowed connections active.
There are two solutions:
1. Upgrade your database plan to allow for more connections.
2. Reduce your max db connections by half.
play-slick uses HikariCP to pool connections, so you can probably configure your max connections with maximumPoolSize.
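play-slick normally picks up these settings from application.conf, so treat the following plain-Java HikariCP snippet as a sketch of the underlying pool option rather than the Play configuration syntax; the URL, credentials, and the pool size of 10 are assumptions:

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class PoolSizing {
    public static void main(String[] args) {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:postgresql://ec2-host:5432/mydb"); // placeholder DATABASE_URL
        config.setUsername("user");
        config.setPassword("secret");

        // During a deploy the old and new dynos hold pools at the same time,
        // so keep 2 x maximumPoolSize below the Heroku plan's connection limit,
        // e.g. 10 per app for a 20-connection plan.
        config.setMaximumPoolSize(10);

        try (HikariDataSource ds = new HikariDataSource(config)) {
            System.out.println("Pool ready: " + ds.getPoolName());
        }
    }
}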
I believe I've figured out what the issue was. I used the default heroku play Procfile which contains -Ddb.default.url=${DATABASE_URL} and additionally had the slick db url specified in my conf. Removing the former solved the problem.

PostgreSQL connection issue - Dropping idle connections

Brief Background:
We have a cloud-based Warehouse Management System that uses Glassfish to serve the Java interface. The Warehouse Management System consists of a Dashboard and a mobile application, both of which talk constantly with the Glassfish server (using a web browser).
Issue:
Recently our PostgreSQL database server's HDD failed. After restoring from a backup and moving the database to an Amazon Web Services server, idle connections seem to be dropping out. This causes the entire Warehouse Management System to fail. Restarting the Glassfish server seems to fix the issue until an idle connection causes it to fail again.
It happens around 3-4 times per day, after approximately 20 minutes of inactivity, i.e. our customers' lunch breaks, after hours, etc.
Question:
Is there a setting that I'm missing in the postgresql.conf file? What else could be causing this?
Attachments:
I've attached a screenshot containing the output of running 'select * from pg_stat_activity;' and also the postgresql.conf file.
Log:
postgresql-8.4-main.log shows this occasionally, although it doesn't seem to coincide with when it cuts out:
2015-10-19 07:51:41 NZDT [9971-1] postgres#customerName LOG: unexpected EOF on client connection
glassfish server.log is riddled with these lines:
[#|2015-10-19T07:46:49.715+1300|SEVERE|glassfish3.1.1|javax.enterprise.system.container.web.com.sun.enterprise.web|_ThreadID=25;_ThreadName=Thread-2;|WebModule[/pns-CustomerName]Received InterruptedException on request thread
[#|2015-10-20T09:34:42.351+1300|WARNING|glassfish3.1.1|com.sun.grizzly.config.GrizzlyServiceListener|_ThreadID=17;_ThreadName=Thread-2;|GRIZZLY0023: Interrupting idle Thread: http-thread-pool-8080(2).|
[#|2015-10-20T07:33:55.414+1300|WARNING|glassfish3.1.1|javax.enterprise.system.container.web.com.sun.enterprise.web|_ThreadID=14;_ThreadName=Thread-2;|Response Error during finishResponse java.lang.NullPointerException
Thanks in advance

Sqlalchemy: Connections aren't closed when pool is overflowed

When I run ab (Apache Benchmark) against my site (with SQLAlchemy and PostgreSQL behind an Apache web server), SQLAlchemy makes many connections to PostgreSQL and I get a "too many connections" error.
I traced the problem and found that the cause is the pool (actually QueuePool).
The documentation at http://www.sqlalchemy.org/docs/core/pooling.html#sqlalchemy.pool.Pool says that when the pool is full, connections being returned (the extra ones opened because max_overflow allowed their creation) will be discarded and disconnected.
But it seems the connections actually aren't closed! They are silently dropped from the pool without being closed.
So SQLAlchemy continuously opens new connections, ignores the old ones (without closing them!) and opens new ones.
Increasing the pool size is not the real solution; the problem is that the additional connections aren't closed.
(The default settings for QueuePool are pool_size=5 and max_overflow=10.)
Looks like a bug in SQLAlchemy, fixed 2 weeks ago: http://hg.sqlalchemy.org/sqlalchemy/rev/aff95843c12f#l2.17
There was no release with this fix, so you have to patch it manually.
I think it's a bug and it's fixed... install from source and have fun ;)

Postgres: "ERROR: cached plan must not change result type"

This exception is being thrown by the PostgreSQL 8.3.7 server to my application.
Does anyone know what this error means and what I can do about it?
ERROR: cached plan must not change result type
STATEMENT: select code,is_deprecated from country where code=$1
I figured out what was causing this error.
My application opened a database connection and prepared a SELECT statement for execution.
Meanwhile, another script was modifying the database table, changing the data type of one of the columns being returned in the above SELECT statement.
I resolved this by restarting the application after the database table was modified. This reset the database connection, allowing the prepared statement to execute without errors.
I'm adding this answer for anyone landing here by googling ERROR: cached plan must not change result type when trying to solve the problem in the context of a Java / JDBC application.
I was able to reliably reproduce the error by running schema upgrades (i.e. DDL statements) while my back-end app that used the DB was running. If the app was querying a table that had been changed by the schema upgrade (i.e. the app ran queries before and after the upgrade on a changed table), the Postgres driver would return this error because it apparently caches some schema details.
You can avoid the problem by configuring your pgjdbc driver with autosave=conservative. With this option, the driver will be able to flush whatever details it is caching and you shouldn't have to bounce your server or flush your connection pool or whatever workaround you may have come up with.
Reproduced on Postgres 9.6 (AWS RDS) and my initial testing seems to indicate the problem is completely resolved with this option.
Documentation: https://jdbc.postgresql.org/documentation/head/connect.html#connection-parameters
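A minimal JDBC sketch with the parameter set on the connection URL; the host, database, and credentials are placeholders, and the query is the one from the error above:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class AutosaveConservative {
    public static void main(String[] args) throws Exception {
        // autosave=conservative lets pgjdbc recover from "cached plan must not
        // change result type" by re-preparing the statement instead of failing.
        String url = "jdbc:postgresql://dbhost:5432/mydb?autosave=conservative";

        try (Connection conn = DriverManager.getConnection(url, "app_user", "secret");
             PreparedStatement ps = conn.prepareStatement(
                     "select code, is_deprecated from country where code = ?")) {
            ps.setString(1, "NZ"); // placeholder country code
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("code") + " " + rs.getString("is_deprecated"));
                }
            }
        }
    }
}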
You can look at the pgjdbc Github issue 451 for more details and history of the issue.
JRuby ActiveRecords users see this: https://github.com/jruby/activerecord-jdbc-adapter/blob/master/lib/arjdbc/postgresql/connection_methods.rb#L60
Note on performance:
As per the reported performance issues in the above link - you should do some performance / load / soak testing of your application before switching this on blindly.
On doing performance testing on my own app running on an AWS RDS Postgres 10 instance, enabling the conservative setting does result in extra CPU usage on the database server. It wasn't much, though: I could only see the autosave functionality show up as a measurable amount of CPU after I had tuned every single query my load test was using and started pushing the load test hard.
We were facing a similar issue. Our application works with multiple schemas, and whenever we made schema changes this issue started occurring.
Setting the prepareThreshold=0 parameter in the JDBC connection parameters stops the driver from using server-side prepared statements, so there is no cached plan to go stale. This solved it for us.
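A minimal sketch of the same idea with the parameter on the connection URL; the connection details and query values are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class NoServerSidePrepares {
    public static void main(String[] args) throws Exception {
        // prepareThreshold=0 keeps pgjdbc from ever switching to server-side
        // prepared statements, so no cached plan can go stale after a schema change.
        String url = "jdbc:postgresql://dbhost:5432/mydb?prepareThreshold=0";

        try (Connection conn = DriverManager.getConnection(url, "app_user", "secret");
             PreparedStatement ps = conn.prepareStatement(
                     "select code, is_deprecated from country where code = ?")) {
            ps.setString(1, "NZ"); // placeholder country code
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("code"));
                }
            }
        }
    }
}

The trade-off compared to autosave=conservative is that queries are no longer server-prepared at all, so they are parsed and planned on every execution.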
I got this error; I manually ran the failing SELECT query and that fixed the error.