I was creating a dashboard in Pentaho PUC which uses a Postgres connection as the data source. Most of the time this causes Postgres to complain:
too many clients already
SHOW max_connections; shows a maximum of 200 connections.
I ran select * from pg_stat_activity; and about 90% of the connections are from the Pentaho server to the database I use as the datasource in my new dashboard. In most of these connections waiting is f and state is idle. It looks like Pentaho is creating too many connections. How can I limit or control this? I have already increased the connection limit on the Postgres side from the default 100 to 200, but the issue is still there.
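A small JDBC sketch that groups pg_stat_activity by application and state makes that breakdown easier to read. It assumes Postgres 9.2+ (where pg_stat_activity exposes application_name and state); the connection details are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PgConnectionBreakdown {
    public static void main(String[] args) throws Exception {
        // Placeholder URL/credentials; point this at the database the dashboard uses.
        String url = "jdbc:postgresql://localhost:5432/mydb";
        try (Connection c = DriverManager.getConnection(url, "postgres", "secret");
             Statement st = c.createStatement();
             ResultSet rs = st.executeQuery(
                 "SELECT application_name, state, count(*) AS n "
                 + "FROM pg_stat_activity GROUP BY 1, 2 ORDER BY n DESC")) {
            while (rs.next()) {
                System.out.printf("%-30s %-10s %d%n",
                    rs.getString("application_name"), rs.getString("state"), rs.getLong("n"));
            }
        }
    }
}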
From the comments thread on the original question it seems you're using SQL over JDBC connections on your dashboard. This creates a separate database connection for each query that needs to run, and if the queries are somewhat slow you may reach the limit on the number of concurrent connections.
Instead, you should set up a JNDI connection: in the datasource management window add a new connection and set up the correct credentials. Under advanced options set up a connection pool and give it a meaningful name. From that point on, refer to that name in your dashboard queries and use SQL over JNDI instead of SQL over JDBC. This way each SQL query gets a connection from the connection pool, and the database only sees one connection at a time, despite running multiple queries.
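On the Java side, the difference boils down to looking up the pooled DataSource instead of opening a fresh connection per query. A minimal sketch, assuming the pool was registered under an illustrative JNDI name (use whatever name you gave it in the datasource window):

import javax.naming.InitialContext;
import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public class DashboardQuery {
    public void run() throws Exception {
        // SQL over JDBC opens a brand new physical connection per query;
        // SQL over JNDI borrows one from the pool registered under the JNDI name.
        DataSource pool = (DataSource) new InitialContext()
                .lookup("java:comp/env/jdbc/DashboardPool"); // illustrative name
        try (Connection c = pool.getConnection();            // borrowed from the pool
             Statement st = c.createStatement();
             ResultSet rs = st.executeQuery("SELECT 1")) {   // placeholder query
            rs.next();
        } // close() returns the connection to the pool instead of dropping it
    }
}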
I'm using Postgres with row-level security to lock down all queries across tables to a particular tenancy. I do this by calling a SET app.tenant_id = x; statement each time my service opens a connection, and my RLS policies use this session-level setting to limit the return data. Basically the approach described here under 'Alternative Approach'.
If this service is deployed in AWS with RDS Proxy between it and the database, then I understand it will be subject to 'connection pinning', since I'm using a SET statement. I'm trying to get a feel for how big an issue this actually is. A few questions:
Are SET LOCAL statements also going to cause pinning?
If my service connections to RDS Proxy are short-lived and a single transaction (which they will be 99% of the time), does this lessen the impact?
Does service connection pooling (service -> RDS Proxy) help or hinder?
Basically any advice on how much of an issue this is, how I can make this work, or any workarounds, would be appreciated.
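For reference, the pattern being discussed looks roughly like this on the service side. This is only a sketch of the setup described above, using plain JDBC with placeholder connection details and tenant id:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class SessionTenant {
    public static void main(String[] args) throws Exception {
        // Placeholder proxy endpoint and credentials.
        String url = "jdbc:postgresql://my-rds-proxy.example.com:5432/mydb";
        try (Connection c = DriverManager.getConnection(url, "app_user", "secret");
             Statement st = c.createStatement()) {
            // Session-level setting: lives until the session ends, and is the kind of
            // statement RDS Proxy treats as a reason to pin the server connection.
            st.execute("SET app.tenant_id = '123'");
            // ... run tenant-scoped queries; RLS policies read current_setting('app.tenant_id')
        }
    }
}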
I had to delete my previous answer because I was originally using pgAdmin, which is apparently very keen on pinning connections. This meant I couldn't trust my data. I have redone this experiment with a better-behaved tool, psql.
I understand where you're coming from. According to the documentation, SET will cause pinning, but it's unclear if that includes SET LOCAL. Since I need this information too, I will run a test today and post the results here.
I will do the following:
Step 1: Open one connection through our proxy and use a regular SET to ensure that the DatabaseConnectionsCurrentlySessionPinned metric increases to 1. I will use the following queries:
SET search_path TO blah;
SET app.tenant_id TO 123;
Step 2: I will close that connection and see that the metric decreases back down to 0.
Step 3: I will open a new connection, but this time I will use the following query:
SET LOCAL search_path TO blah;
SET LOCAL app.tenant_id TO 123;
Step 4: I will close the connection, and if the connection is pinned, I will monitor the metric to see if and when it decreases back to 0.
Let's do this
Caveat: don't look at the metrics in the RDS Management Console. See: https://repost.aws/questions/QUfPWoiFxmR7-lios5NrFwBA/fix-database-connections-currently-session-pinned-metric-on-rds-proxy-dashboard
Step 1
The connection between proxy and server was pinned immediately when I ran SET commands, as expected.
Step 2
The pinned connection between proxy and server was closed immediately when I closed the connection between client and proxy.
Step 3
The connection between proxy and server was not pinned when I ran SET LOCAL commands.
Step 4
The connection was not pinned, so this step is superfluous.
Conclusion
SET LOCAL does circumvent pinning in RDS Proxy, with the caveat that it must be done within a transaction.
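In JDBC terms that means the SET LOCAL has to run inside the same transaction as the tenant-scoped queries. A minimal sketch, assuming plain JDBC, placeholder connection details, and a hypothetical RLS-protected orders table:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SetLocalTenant {
    public static void main(String[] args) throws Exception {
        // Placeholder proxy endpoint and credentials.
        String url = "jdbc:postgresql://my-rds-proxy.example.com:5432/mydb";
        try (Connection c = DriverManager.getConnection(url, "app_user", "secret")) {
            c.setAutoCommit(false); // SET LOCAL only applies to the current transaction
            try (Statement st = c.createStatement()) {
                st.execute("SET LOCAL app.tenant_id = '123'");
                // "orders" is a hypothetical RLS-protected table.
                try (ResultSet rs = st.executeQuery("SELECT count(*) FROM orders")) {
                    rs.next();
                }
                c.commit(); // the setting is discarded at commit/rollback, so nothing stays pinned
            } catch (Exception e) {
                c.rollback();
                throw e;
            }
        }
    }
}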
In my previous attempt to answer this question, pgAdmin's behavior made me conclude that pinning does occur in both cases. This is wrong.
To answer your other questions, if pinning does occur, it doesn't matter that your transactions are short. The server connection will remain pinned until the client connection is gone. Your only resort is to make sure you close client connections once they're pinned.
The documentation states that "when a connection is pinned, each later transaction uses the same underlying database connection until the session ends. Other client connections also can't reuse that database connection until the session ends. The session ends when the client connection is dropped."
I have a problem when querying Active Directory or a MySQL database as linked servers.
The problem occurs when running the query through SSMS on a server other than the database server where AD is mounted.
If I run these queries on the actual DB server through SSMS, I get results from the linked server.
If I run these queries on a 'Management' machine on a separate VLAN, they return error 7390:
The requested operation could not be performed because OLE DB provider "ADsDSOObject" for linked server "ACTIVEDIR" does not support the required transaction interface.
This only affects the linked servers; I can query any table on the DB server from the management machine, so it's not ports or networking (as far as I can see).
I have tried changing the settings for RPC, RPC Out and Promotion of Distributed Transactions in the properties sheet of the linked servers, in various combinations, but I still get no results, just the error.
For good measure I have also tried setting the TRANSACTION ISOLATION LEVEL to READ UNCOMMITTED in the SQL blocks executed.
It used to work before I migrated from SQL Server 2008 R2 to 2016.
I would appreciate any guidance and wisdom.
I noticed the error below during a load test with multiple users, but not in the case of a single SOAP request.
Could not open JDBC Connection for transaction; nested exception is java.sql.SQLException: javax.resource.ResourceException: IJ000453: Unable to get managed connection for java:
This could be due to any of the following:
The datasource connection pool has not been tuned correctly (e.g. max-pool-size and blocking-timeout-millis) for the maximum load on the application.
The application is leaking connections because it is not closing them and thereby not returning them to the pool.
Threads with connections to the database are hanging and holding on to the connections.
Make sure that the min-pool-size and max-pool-size values for the respective datasource are set according to your application load testing, and that connections are closed after use inside the application code (see the sketch below).
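On the application side, the usual culprit for the leaking-connections case is a code path that skips close() when an exception is thrown. A minimal sketch of the close-it-in-all-cases pattern, assuming the datasource is bound under an illustrative JNDI name:

import javax.naming.InitialContext;
import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class PooledQuery {
    public void runQuery() throws Exception {
        DataSource ds = (DataSource) new InitialContext()
                .lookup("java:/jdbc/MyAppDS"); // illustrative JNDI name
        // try-with-resources guarantees close() even when an exception is thrown,
        // which returns the managed connection to the pool.
        try (Connection c = ds.getConnection();
             PreparedStatement ps = c.prepareStatement("SELECT 1"); // placeholder query
             ResultSet rs = ps.executeQuery()) {
            rs.next();
        }
    }
}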
Most likely you've found the bottleneck in your application; it seems that it cannot handle that many virtual users. The easiest solution would be to raise an issue in your bug tracker system and let the developers investigate it.
If you need to provide the root cause of the failure, I can think of at least two reasons for this:
Your application or application server configuration is not suitable for high loads (i.e. the number of connections in your JBoss JDBC connection pool configuration is lower than is required given the number of virtual users you're simulating). Try amending the min-pool-size and max-pool-size values to match the number of virtual users.
Your database is overloaded and hence cannot accept that many queries. In this case you can consider load testing the database separately, i.e. firing requests at the database directly via JMeter's JDBC Request sampler, without hitting the SOAP endpoint of your application (a rough JDBC sketch follows below). See The Real Secret to Building a Database Test Plan With JMeter article to learn more about database load testing.
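Not a replacement for a proper JMeter JDBC Request test plan, but a throwaway multi-threaded JDBC loop can quickly show whether the database on its own keeps up with the same concurrency. The connection details and query are placeholders; swap in your database's JDBC URL and driver:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class DbOnlyLoadSketch {
    public static void main(String[] args) throws Exception {
        int virtualUsers = 50; // match the concurrency used in the SOAP load test
        ExecutorService pool = Executors.newFixedThreadPool(virtualUsers);
        for (int i = 0; i < virtualUsers; i++) {
            pool.submit(() -> {
                // Placeholder JDBC URL, credentials and query.
                try (Connection c = DriverManager.getConnection(
                             "jdbc:postgresql://dbhost:5432/mydb", "user", "secret");
                     Statement st = c.createStatement();
                     ResultSet rs = st.executeQuery("SELECT 1")) {
                    rs.next();
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.MINUTES);
    }
}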
We are using Spring Data JPA, and in one place we have injected the entity manager. For all other functionality we use JPA repositories, but for this one piece of functionality we use the injected entity manager. This works fine for some time after the application is started, but then starts giving the error below. It seems that after some time the entity manager drops the database connection. I am not able to reproduce this. Is there any timeout setting in the entity manager after which it drops the DB connection?
PersistenceException:
javax.persistence.PersistenceException: org.hibernate.exception.GenericJDBCException: could not inspect JDBC autocommit mode
    at org.hibernate.jpa.spi.AbstractEntityManagerImpl.convert(AbstractEntityManagerImpl.java:1763)
    at org.hibernate.jpa.spi.AbstractEntityManagerImpl.convert(AbstractEntityManagerImpl.java:1677)
    at org.hibernate.jpa.internal.QueryImpl.getResultList(QueryImpl.java:458)
There is nothing like this in the entity manager itself. But your database might close connections after some time, especially when they are idle or doing a lot of work (a database can have a quota on how much CPU time a connection or a transaction may use).
Your connection pool is another possible source of such surprises.
The interesting question is: why does it only happen with the entity manager you use directly? One possible reason is that you are not properly releasing the connections, which might make the connection pool consider them stale and close them after some timeout.
In order to debug this, I would start by inspecting your connection pool configuration and activating logging for it, so you can see when connections are handed out, when they get returned, and any other special events the pool might trigger.
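As a concrete starting point, make sure the code path that uses the injected entity manager runs inside a transaction, so Spring acquires and releases the underlying connection for you. A sketch assuming container-managed injection via @PersistenceContext and a hypothetical Order entity:

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class ReportService {

    @PersistenceContext
    private EntityManager em; // container-managed; do not cache or close it yourself

    @Transactional(readOnly = true)
    public long countOrders() {
        // Inside @Transactional, the connection is borrowed when needed and
        // returned to the pool when the transaction ends.
        // "Order" is a hypothetical entity.
        return ((Number) em.createQuery("select count(o) from Order o")
                .getSingleResult()).longValue();
    }
}

If the pool happens to be HikariCP, for example, enabling its leak detection (on Spring Boot: spring.datasource.hikari.leak-detection-threshold=30000) together with DEBUG logging for com.zaxxer.hikari will report connections that are handed out but never returned; DBCP and Tomcat JDBC have comparable abandoned-connection logging settings.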
My problem is that I have two apps that are both getting these exceptions:
Caused by: org.apache.tomcat.jdbc.pool.PoolExhaustedException:
[pool-2-thread-273] Timeout: Pool empty. Unable to fetch a connection
in 30 seconds, none available[size:100; busy:99; idle:0;
lastwait:30000].
There are two apps:
a Grails app WAR running in Tomcat, connecting to Postgres data source A
a standalone JAR connecting to data source B, which is a different database on the same server as Postgres data source A.
Both apps seem to use org.apache.tomcat.jdbc.pool.ConnectionPool by default (I didn't configure the default pool anywhere and both apps use this). Also, my max connection limit is 200 and I'm only using < 130 connections, so I'm not hitting a max-connection issue. Since the two apps use separate data sources, I read that this would mean they cannot be using the same connection pool.
When I log in to my Postgres server, I can see that app 2 has 100 idle connections, and the max idle size of the pool is 100, so this is fine. However, what I was not expecting is that app 1 would use connections from app 2's pool - or rather, since it appears that the apps share a connection pool, that app 1 is trying to take from this common pool which already has 100 connections allocated. I would not really have expected that, because they use a Tomcat connection pool and app 2 (the standalone JAR) doesn't even use Tomcat, so why would they be shared?
So my questions are (since I really am having a hard time finding docs about this):
Is it accurate that by default different apps would use the same conn pool?
If they're using the same conn pool then how can they share a conn pool if they're using different data sources?
Is it possible in Grails to specify a shared vs. unshared connection pool? https://tomcat.apache.org/tomcat-7.0-doc/jndi-datasource-examples-howto.html This link mentions Postgres specifically, and it seems there is a shared vs. unshared concept (though I can't find any good documentation about this), but it's configured outside of Grails. Is there any way to do it in Grails?
Notes: Using Grails 2.4.5 and Postgres server 90502 (i.e. 9.5.2).
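For reference, when a standalone JAR builds its own tomcat-jdbc pool programmatically, the sizing knobs live on that DataSource instance. A minimal sketch with placeholder connection details (the limits shown are only illustrative):

import org.apache.tomcat.jdbc.pool.DataSource;
import org.apache.tomcat.jdbc.pool.PoolProperties;

public class DataSourceBFactory {
    public static DataSource build() {
        PoolProperties p = new PoolProperties();
        p.setUrl("jdbc:postgresql://dbserver:5432/datasource_b"); // placeholder
        p.setDriverClassName("org.postgresql.Driver");
        p.setUsername("app");
        p.setPassword("secret");
        p.setMaxActive(50);  // hard cap on connections this pool will open
        p.setMaxIdle(10);    // don't keep 100 idle sessions parked on the server
        p.setMinIdle(2);
        p.setTestOnBorrow(true);
        p.setValidationQuery("SELECT 1");
        DataSource ds = new DataSource();
        ds.setPoolProperties(p);
        return ds;
    }
}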