I have an application powered by the Moodle 3.3 API. When I enable the idle_in_transaction_session_timeout parameter in postgresql.conf and restart the service, my application goes down because it loses its connection to the database. I tried setting it to 300000 milliseconds, but no luck: as soon as I restart PostgreSQL, the application goes down. Any advice?
You have to fix your application code so that it closes all transactions immediately after they are done. Then the problem will disappear.
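If you want to keep the timeout as a safety net while you track down the offending code, you can first see which sessions it would kill, and scope it to Moodle's database role instead of the whole cluster. A minimal sketch, assuming a role named moodleuser (the name is illustrative):

-- list sessions currently sitting in an open transaction
SELECT pid, usename, xact_start, state, query
FROM pg_stat_activity
WHERE state = 'idle in transaction';

-- apply the timeout (in milliseconds) to one role only
ALTER ROLE moodleuser SET idle_in_transaction_session_timeout = 300000;

Any session that stays idle inside a transaction longer than that is terminated, which is exactly why an application that leaves transactions open looks like it has "lost its connection".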
I am running into an issue where several different client apps (DataGrip, DBeaver, Looker) have their queries cancelled after exactly 15 minutes, but no termination message or connection error is ever sent to the app. As far as the app is concerned, the query is still running, even though it has been terminated in Postgres.
For example, if I run the following query, according to the client app it just runs forever. If I check pg_stat_activity, it shows the query no longer running after 15 minutes.
SELECT pg_sleep(16 * 60);
Does anyone know of a Postgres or AWS setting that would cause this? I've checked the configuration and couldn't find any settings set to a value of 15 minutes (or 900 seconds).
There is probably an ill-configured firewall closing your session.
Assuming that the clients you are mentioning use libpq to connect to PostgreSQL, include this in the connection string:
keepalives_idle=300
See the documentation for details.
You could of course also configure the TCP stack on your operating system to use that value, so the problem will never surface again.
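For example (host, database and user are placeholders), a libpq connection string with all the keepalive knobs set might look like this:

host=mydb.example.com port=5432 dbname=mydb user=myuser keepalives=1 keepalives_idle=300 keepalives_interval=30 keepalives_count=3

And the OS-wide equivalent on Linux:

sysctl -w net.ipv4.tcp_keepalive_time=300

With keepalives_idle=300, the client sends a TCP keepalive probe after five minutes of silence, so a firewall that drops connections idle for 15 minutes never sees this one go quiet that long.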
Your DB log might be able to tell you what happened.
In addition, check your statement_timeout setting. The units are milliseconds, so you should be looking for 900000, not 900.
If it's not that, there are firewalls that kill idle connections. Setting tcp_keepalives_idle could help avoid that kind of problem.
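To rule the first one out, check the effective value and where it comes from (a session, a role, the database, or the server config):

SHOW statement_timeout;
SELECT name, setting, source FROM pg_settings WHERE name = 'statement_timeout';

On AWS RDS, also look at the parameter group attached to the instance, since that is where server-wide values usually live.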
I have a Play app on Heroku connecting to a Postgres instance with play-slick. Around 30% of the time when I deploy a new version of the application, I get this in my logs:
java.sql.SQLTransientConnectionException: db - Connection is not available, request timed out after 1007ms.
When I restart the application it will usually start again, though sometimes it takes a few tries.
Any advice for what I can do to debug this?
Most likely, there is a period of time during which both the old app and the new app are trying to get connections to the database, which means you have double your max allowed connections active.
There are two solutions:
Upgrade your database plan to allow more connections
Reduce your max db connections by half
play-slick uses HikariCP to pool connections, so you can probably configure your max connections with maximumPoolSize.
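A sketch of what that could look like in application.conf; the key path is an assumption that varies by play-slick version, and some versions use Slick's own maxConnections name instead of HikariCP's maximumPoolSize:

# application.conf (illustrative key path; check your play-slick version's docs)
slick.dbs.default.db.maximumPoolSize = 10

The number follows from the deploy overlap above: old pool plus new pool must stay within your Heroku plan's connection limit, so cap each pool at half of it.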
I believe I've figured out what the issue was. I used the default heroku play Procfile which contains -Ddb.default.url=${DATABASE_URL} and additionally had the slick db url specified in my conf. Removing the former solved the problem.
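For reference, that means going from a Procfile line like this (the app name and other options are illustrative):

web: target/universal/stage/bin/myapp -Dhttp.port=${PORT} -Ddb.default.url=${DATABASE_URL}

to one without the -Ddb.default.url override:

web: target/universal/stage/bin/myapp -Dhttp.port=${PORT}

leaving the slick URL in the conf as the single source of truth. Having both set likely created two database configurations competing for connections from the same plan limit.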
My SQL Developer had been working fine for a long time. Suddenly, since yesterday, when I try to reconnect it gives me the following error:
Status : Failure -Test failed: IO Error: Got minus one from a read call, connect lapse 62991 ms., Authentication lapse 0 ms.
Things I tried to solve the issue:
Made corresponding changes in the sqlnet.ora, listener.ora and tnsnames.ora files
Checked whether there are too many open connections to the database service
Restarted the Oracle service and listener
Firewall-related changes
Reinstalled Oracle
In my case, the issue was due to malware: whenever I tried to connect to the DB port, the port was being changed automatically by the malware.
You can identify the problem by running the lsnrctl status command; for further information, check the listener's log file.
Run a full scan on your machine and restart the listener. Hope that helps.
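For example (the listener name and paths depend on your install), the relevant lines are in the status output:

lsnrctl status

The "Listener Log File" line tells you where the log lives, and the "Listening Endpoints Summary" shows the port actually in use. That port should match the one configured in SQL Developer; a mismatch there is what the malware symptom above looks like.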
Brief Background:
We have a cloud-based Warehouse Management System that uses Glassfish to serve the Java interface. The Warehouse Management System consists of a Dashboard and a mobile application, both of which talk constantly to the Glassfish server (through a web browser).
Issue:
Recently our PostgreSQL database server's HDD failed. After restoring from a backup and moving the database to an Amazon Web Services server, idle connections seem to be dropping out. This causes the entire Warehouse Management System to fail. Restarting the Glassfish server fixes the issue until an idle connection causes it to fail again.
It happens around 3-4 times per day, after approximately 20 minutes of inactivity, i.e. our customers' lunch breaks, after hours, etc.
Question:
Is there a setting that I'm missing in the postgresql.conf file? What else could be causing this?
Attachments:
I've attached a screenshot containing the output of running 'select * from pg_stat_activity;' and also the postgresql.conf file.
select * from pg_stat_activity
postgresql.conf
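For anyone comparing against their own system, this is roughly the query behind that screenshot, restricted to the columns that matter for diagnosing dropped idle connections (column names are for the 8.4 release named in the log below; newer versions use pid, state and query instead):

SELECT procpid, usename, client_addr, backend_start, query_start, current_query
FROM pg_stat_activity;

A connection that a firewall or NAT device has silently dropped typically still shows as idle here until the server next tries to use it, which would match the "unexpected EOF on client connection" entries.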
Log:
postgresql-8.4-main.log shows this occasionally, although it doesn't seem to coincide with the cut-outs.
2015-10-19 07:51:41 NZDT [9971-1] postgres#customerName LOG: unexpected EOF on client connection
glassfish server.log is riddled with these lines:
[#|2015-10-19T07:46:49.715+1300|SEVERE|glassfish3.1.1|javax.enterprise.system.container.web.com.sun.enterprise.web|_ThreadID=25;_ThreadName=Thread-2;|WebModule[/pns-CustomerName]Received InterruptedException on request thread
[#|2015-10-20T09:34:42.351+1300|WARNING|glassfish3.1.1|com.sun.grizzly.config.GrizzlyServiceListener|_ThreadID=17;_ThreadName=Thread-2;|GRIZZLY0023: Interrupting idle Thread: http-thread-pool-8080(2).|
[#|2015-10-20T07:33:55.414+1300|WARNING|glassfish3.1.1|javax.enterprise.system.container.web.com.sun.enterprise.web|_ThreadID=14;_ThreadName=Thread-2;|Response Error during finishResponse java.lang.NullPointerException
Thanks in advance
I'm working on an experiment for a course I'm taking on tuning DB2. I'm using EC2 from Amazon (AWS) to conduct the experiment.
My problem, however, is that I have to test no compression against row compression in DB2, and to do that I've created a bash file that runs those experiments. But when I reach the compression part I get the error "Transaction log is full", and no matter how low I set the number of inserts, it complains about my transaction log.
I've scoured Google for a day now trying to find some way to flush/clear the log or just get rid of it; I don't need it. I've tried to increase the size, but nothing has helped.
Please, I hope someone has an answer to this frustrating problem.
Thanks
- Mestika
There is no need to "clear the log" in DB2. When a transaction is rolled back, DB2 releases the log space used by the transaction.
If you've increased the log size and it has not helped, please post more information about what you're trying to do.
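When you do, include the current log configuration; the total active log space is roughly LOGFILSIZ × (LOGPRIMARY + LOGSECOND) in 4 KB pages, so all three matter, not just the file size. A quick way to pull them, assuming the database is called SAMPLE:

db2 get db cfg for sample | grep -i log

If a single test transaction writes more than that total, increasing LOGFILSIZ alone may still not be enough.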
There's no need to restart. Just force off the connected applications with db2 force applications all.
Increase the active log file size, force off the application connections, and terminate the CLP session, as in the sequence below.
Then try to run the job again.
db2 force applications all                           # disconnect all applications so the cfg change can take effect
db2 update db cfg for sample using logfilsiz 5125    # grow each log file to 5125 4 KB pages
db2 force applications all
db2 terminate                                        # end the CLP back-end process
db2 connect to sample
Run your job and monitor.
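To confirm the new size took effect and to watch log consumption while the job runs, two read-only checks (same SAMPLE database as above; the snapshot needs the database monitor available):

db2 get db cfg for sample | grep -i logfilsiz
db2 get snapshot for database on sample | grep -i "log space"

The snapshot lines report log space used and log space available, so you can see whether the job is approaching the limit before it actually fails.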
Just restart the instance; it will release the pending logs and you should be fine.