I have PostgreSQL 13 and PgBouncer.
How can I test, using connections and queries against the database, whether PgBouncer is actually useful?
In other words, how do I run a test that generates many connections (or similar load),
so that at the end I can see that query times are lower with PgBouncer,
or that the load on the database without the bouncer is higher?
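One straightforward way to see the effect is to benchmark connection churn: open and close many short-lived connections, each running a trivial query, first directly against PostgreSQL and then through PgBouncer, and compare the totals. Connection establishment is exactly the cost a pooler amortizes, so the difference shows up clearly. Below is a minimal JDBC sketch; the host, database, credentials, iteration count, and the assumption that PgBouncer listens on its usual port 6432 are all placeholders to adjust.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PoolerBenchmark {
    // Placeholder connection settings: adjust host, db, user, password, ports.
    static final String DIRECT = "jdbc:postgresql://localhost:5432/mydb";
    static final String POOLED = "jdbc:postgresql://localhost:6432/mydb"; // PgBouncer's usual port

    public static void main(String[] args) throws Exception {
        System.out.printf("direct : %d ms%n", run(DIRECT, 500));
        System.out.printf("pooled : %d ms%n", run(POOLED, 500));
    }

    // Opens `iterations` fresh connections, runs a trivial query on each, and
    // returns the total wall-clock time. With PgBouncer in front, each "new"
    // connection is handed a reused server backend, so this loop should be
    // noticeably faster than going to PostgreSQL directly.
    static long run(String url, int iterations) throws Exception {
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            try (Connection c = DriverManager.getConnection(url, "myuser", "mypassword");
                 Statement s = c.createStatement();
                 ResultSet rs = s.executeQuery("SELECT 1")) {
                rs.next();
            }
        }
        return (System.nanoTime() - start) / 1_000_000;
    }
}

The same comparison can be made with pgbench's -C flag, which opens a new connection for every transaction; run it once against port 5432 and once against 6432 and compare the reported tps.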
Related
I am about to move my production PostgreSQL DB to a managed DB from DigitalOcean, so there is a possibility of downtime during maintenance/backup procedures, which may last up to 10 seconds. How can I handle this DB downtime gracefully without crashing the Rails application? I read about 'reconnect: true' in database.yml, but I am not sure it will work for PostgreSQL. Any suggestions?
I am using a Hikari data source in my application, and the database kills all connections that have been idle for more than 15 minutes, so I want to set a connection test query. How do I know when this query is fired, and how can I log it when it is?
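For what it's worth, here is a sketch of the relevant HikariCP settings (the setter names are real HikariCP API; the URL, credentials, and timings are placeholders). Hikari validates a connection when it is checked out of the pool, and it only uses connectionTestQuery when the driver does not support JDBC4's Connection.isValid(); to see the query fire, one option is to enable statement logging on the database side (e.g. PostgreSQL's log_statement) or raise the com.zaxxer.hikari loggers to DEBUG.

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class PoolSetup {
    public static HikariDataSource dataSource() {
        HikariConfig cfg = new HikariConfig();
        cfg.setJdbcUrl("jdbc:postgresql://localhost:5432/mydb"); // placeholder
        cfg.setUsername("myuser");
        cfg.setPassword("mypassword");

        // Fired on checkout when the driver lacks JDBC4 Connection.isValid();
        // with a modern driver, Hikari validates via isValid() instead.
        cfg.setConnectionTestQuery("SELECT 1");

        // Retire connections before the server's 15-minute idle kill,
        // so the pool never hands out an already-dead connection.
        cfg.setIdleTimeout(10 * 60 * 1000L); // 10 minutes
        cfg.setMaxLifetime(14 * 60 * 1000L); // 14 minutes, under the server limit

        return new HikariDataSource(cfg);
    }
}

Keeping maxLifetime just under the server-side limit is usually the more robust fix, since it prevents the pool from ever holding a connection long enough for the server to kill it.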
I am inconsistently facing the error below after enabling a query timeout for an XA datasource in my xxx-ds.xml:
ERROR: canceling statement due to user request
I added the following to my ds XML file:
<query-timeout>180</query-timeout>
The query timeout is set to 180 seconds, which means any SQL query
that takes more than 180 seconds should be cancelled from the
application server side.
But the issue I am facing is inconsistent: queries get timed out now and then without having run for anything like 180 seconds.
We are also using connection pooling.
While searching Stack Overflow I found this question, which discusses the possible causes of this issue when using connection pooling.
The solution suggested there was to set the statement_timeout setting in postgresql.conf. But that is difficult to do in my environment, because the DB server is shared by multiple applications.
I would like a way to terminate timed-out queries from the client side effectively and consistently while using connection pooling. I am using:
JBoss 4.2.2-GA
postgresql 9.2 (64 bit)
java 1.7
postgresql-9.2-1002.jdbc4.jar
It looks like the issue was with the PostgreSQL 9.2 JDBC driver. When I upgraded to 9.3, the issue was fixed.
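For anyone who cannot upgrade the driver: one client-controlled approach that avoids touching the shared postgresql.conf is to set statement_timeout at the session level, since SET only affects the current session. A minimal JDBC sketch (the 180-second value mirrors the question; the DataSource wiring and SQL are assumed):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import javax.sql.DataSource;

public class TimedQuery {
    // Runs a query with a 180-second cap enforced twice: once by the server
    // (statement_timeout, scoped to this session only) and once by the JDBC
    // driver (setQueryTimeout, which cancels the statement from the client).
    static void runWithTimeout(DataSource ds, String sql) throws Exception {
        try (Connection conn = ds.getConnection();
             Statement st = conn.createStatement()) {
            st.execute("SET statement_timeout = '180s'"); // session-local, not server-wide
            st.setQueryTimeout(180); // driver-side safety net
            try (ResultSet rs = st.executeQuery(sql)) {
                while (rs.next()) {
                    // process row
                }
            }
        }
    }
}

One caution with pooling: a plain SET sticks to the physical connection after it goes back to the pool, so either reset it before close or use SET LOCAL inside a transaction so it expires at commit.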
I added the SlowQueryReport JDBC interceptor to my application, and it has been logging a query failure for every single query made. The queries execute normally and return data almost instantly (definitely < 100 ms), yet every query is logged with a message like:
Failed Query Report SQL=select status from user where user_.id=?; time=1456183877324 ms;
I suspect the reason SlowQueryReport is marking the query as a failure is that it somehow logs the query as taking 1456183877324 ms, which is roughly 46 years: the current time measured from the Unix epoch (1970).
Anyone know what's going on?
The SlowQueryReport threshold has been set to 100 ms.
System:
Application is on AWS EC2 using Amazon Linux.
Application is running on Tomcat 7.
Database is PostgreSQL 9.3.10 running on AWS RDS.
The date on the EC2 instance is: "Tue Feb 23 00:16:09 UTC 2016" via "date".
The date on the DB is "2016-02-23 00:16:06.321741+00" via "SELECT NOW();"
The JDBC version is tomcat-jdbc, version 7.0.50.
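For reference, the interceptor is wired roughly like this (a programmatic sketch; the actual pool may be configured in server.xml or context.xml instead, and the URL and credentials are placeholders):

import org.apache.tomcat.jdbc.pool.DataSource;
import org.apache.tomcat.jdbc.pool.PoolProperties;

public class PoolFactory {
    public static DataSource create() {
        PoolProperties p = new PoolProperties();
        p.setUrl("jdbc:postgresql://myhost:5432/mydb"); // placeholder
        p.setDriverClassName("org.postgresql.Driver");
        p.setUsername("myuser");
        p.setPassword("mypassword");
        // Log any query slower than 100 ms; a healthy report should show the
        // per-query elapsed time here, not an absolute epoch timestamp.
        p.setJdbcInterceptors(
            "org.apache.tomcat.jdbc.pool.interceptor.SlowQueryReport(threshold=100)");
        DataSource ds = new DataSource();
        ds.setPoolProperties(p);
        return ds;
    }
}

The epoch-sized number suggests the report is computing elapsed time against a start timestamp of zero (now - 0), which points at a bug in this interceptor/driver combination rather than at genuinely slow queries.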
I have installed PostgreSQL on an Azure VM and am running tests to see if it can support the expected load. I have increased max_connections to 1000, but when I run ab -c 300, PostgreSQL stops responding. Are there any other settings I should change?
Thanks, Kate.
PostgreSQL performs best with far fewer than 1000 connections on most hardware, usually fewer than 100. If your application cannot queue work using a connection pool of its own, you should put an external connection pooler such as PgBouncer between your application and PostgreSQL.
See: https://wiki.postgresql.org/wiki/Number_Of_Database_Connections
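To illustrate the application-side half of that advice, a bounded pool caps how many sessions ever reach PostgreSQL and makes excess requests queue for a free connection instead of piling onto the server. A sketch using HikariCP (the sizes are starting points to tune, not recommendations for this exact workload):

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class BoundedPool {
    public static HikariDataSource create() {
        // At most 20 sessions reach PostgreSQL; callers beyond that wait
        // (up to 30 s) for a free connection instead of forcing
        // max_connections = 1000 on the server.
        HikariConfig cfg = new HikariConfig();
        cfg.setJdbcUrl("jdbc:postgresql://localhost:5432/mydb"); // placeholder
        cfg.setMaximumPoolSize(20);       // far below max_connections
        cfg.setConnectionTimeout(30_000); // wait time for a pooled connection
        return new HikariDataSource(cfg);
    }
}

With ab -c 300 against an app configured this way, PostgreSQL only ever sees 20 sessions; the remaining requests wait in the pool's queue.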