How to limit the number of PostgreSQL JDBC pool connections? - postgresql

The PostgreSQL JDBC driver provides several classes for connection pooling, and only PGConnectionPoolDataSource is recommended for use. With this class, if the requested connection is busy, the library creates another one.
PGPoolingDataSource (with setMaxConnections called) waits until some connection becomes free (if all of them are busy), which is what I want. But this class is marked as @Deprecated.
In the source code I see it uses PGPooledConnection, which in turn uses BaseDataSource, and there is no mention of any limit.
Is there a correct way to limit the number of pool connections?

You should use a third-party connection pool library like HikariCP or DBCP, or the one included with your application server (if any).
This is also documented in the deprecation note of PGPoolingDataSource (see the source on GitHub):
Since 42.0.0, instead of this class you should use a fully featured
connection pool like HikariCP, vibur-dbcp, commons-dbcp, c3p0, etc.
The class PGConnectionPoolDataSource does not implement a connection pool; it is intended to be used by a connection pool as the factory of connections.
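For illustration, the blocking behavior the deprecated PGPoolingDataSource offered (and that dedicated pools like HikariCP implement far more robustly) can be sketched with a semaphore-bounded pool. This is a minimal, hypothetical sketch, not a real pool implementation; the String "connections" stand in for JDBC connections:

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.Semaphore;
import java.util.function.Supplier;

// Minimal sketch of a bounded pool: acquire() blocks once all
// maxConnections are in use; release() hands the object back.
class BoundedPool<T> {
    private final Semaphore permits;                       // counts free slots
    private final Queue<T> idle = new ConcurrentLinkedQueue<>();
    private final Supplier<T> factory;                     // creates a new "connection"

    BoundedPool(int maxConnections, Supplier<T> factory) {
        this.permits = new Semaphore(maxConnections);
        this.factory = factory;
    }

    T acquire() throws InterruptedException {
        permits.acquire();                                 // blocks if the pool is exhausted
        T conn = idle.poll();
        return conn != null ? conn : factory.get();        // reuse idle, else create
    }

    void release(T conn) {
        idle.offer(conn);                                  // return to the idle set
        permits.release();                                 // wake one waiting acquirer
    }
}

public class PoolDemo {
    public static void main(String[] args) throws Exception {
        int[] created = {0};
        BoundedPool<String> pool = new BoundedPool<>(2, () -> "conn-" + (++created[0]));
        String a = pool.acquire();
        String b = pool.acquire();
        // A third acquire() would now block until release() is called.
        pool.release(a);
        String c = pool.acquire();                         // reuses the released connection
        System.out.println(created[0] + " " + c);
    }
}
```

A real pool (HikariCP, DBCP, c3p0) adds connection validation, timeouts, and leak detection on top of this basic bounded-checkout idea, which is why rolling your own is discouraged.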

Related

How can I use a connection pool in perl-redis

I'm currently using the Perl language, the Mojo framework, and the Redis-1.999 library (https://metacpan.org/pod/Redis).
If I want to use a connection pool, do I have to implement the connection pool objects myself? I want an option to create a connection and define the maximum number of clients, as in the Jedis connection pool.

How WildFly resets properties set on a connection when it is returned to the connection pool

I am doing a JNDI lookup for a datasource configured in JBoss:
DataSource dataSource = (DataSource) new InitialContext().lookup(dataSourceStr);
return dataSource.getConnection();
The connection is closed using try-with-resources.
Once I get the connection object I set the isolation property on it, which I need for my functionality:
connection.setTransactionIsolation(Connection.TRANSACTION_READ_UNCOMMITTED);//1
Once my operation is done, I want to check which isolation value the connection holds. To do that, I created a connection object using the same mechanism as above and tested its value, which I found to be TRANSACTION_READ_COMMITTED (2), the default, and not the one I had overridden. This is actually the behavior I want. I never reset the value to TRANSACTION_READ_COMMITTED (2) after my operation, yet it is reset to the original TRANSACTION_READ_COMMITTED (2) when the connection is returned to the pool. I am interested in knowing how this happens and where I can look for more details.
I kept only one connection in the pool, so I know that when I accessed a connection again I got the same connection object on which I had previously overridden the isolation value with TRANSACTION_READ_UNCOMMITTED. I double-checked this by not closing the connection, which caused an error when I tried to access it again.
My question is: how is the overridden value on the connection reset when it goes back to the pool?
Could you please post the configuration of your DataSource?
This behaviour is not specified by JBoss/WildFly; it depends on the DataSource implementation you are using. So the behavior you are seeing can change between vendor-specific DataSource implementations.
For example, if you are using Postgres you could have a look at github.com/pgjdbc/pgjdbc/blob/… — this is the listener that is fired when a pooled connection is closed. But it seems Postgres doesn't have such a "reset" behavior for its pooled connections.
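The mechanism behind the observed reset is typically the application server's own connection wrapper: managed pools such as WildFly's commonly snapshot session defaults at checkout and restore them when close() returns the connection to the pool. A self-contained sketch of that pattern, using a hypothetical Conn interface standing in for java.sql.Connection (the real interface is too large to stub here):

```java
// Hypothetical stand-in for java.sql.Connection, to keep the sketch self-contained.
interface Conn {
    int getIsolation();
    void setIsolation(int level);
}

class SimpleConn implements Conn {
    private int isolation = 2; // TRANSACTION_READ_COMMITTED, the default
    public int getIsolation() { return isolation; }
    public void setIsolation(int level) { isolation = level; }
}

// Pool wrapper: snapshots the default at checkout and restores it on "close",
// which is how an overridden isolation level vanishes after returning to the pool.
class PooledConn {
    private final Conn physical;
    private final int defaultIsolation;

    PooledConn(Conn physical) {
        this.physical = physical;
        this.defaultIsolation = physical.getIsolation();
    }

    void setIsolation(int level) { physical.setIsolation(level); }

    void close() { physical.setIsolation(defaultIsolation); } // reset on return
}

public class ResetDemo {
    public static void main(String[] args) {
        Conn physical = new SimpleConn();
        PooledConn handle = new PooledConn(physical);
        handle.setIsolation(1);                        // TRANSACTION_READ_UNCOMMITTED
        handle.close();                                // back to the pool
        System.out.println(physical.getIsolation());   // default restored
    }
}
```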

pg-promise: Recommended pattern for passing connections to different libraries

This question is about pg-promise and its recommended usage pattern, based on the following assumption:
It does not make sense to create more than a single pgp instance if they connect to the same DB (this is also enforced by the helpful warning message "Creating a duplicate database object for the same connection.").
Given:
I have two individual packages that need a DB connection. Currently they take a connection string in their constructor from outside and create a connection object internally, which leads to the duplicate-connection-object warning. That is fair, as they both talk to the same DB, and there is room for optimization here (since I am in control of those packages).
Then: to prevent this, I thought of implementing dependency injection, where I pass a resolve function into the libraries' constructors that gives them the DB connection object.
Issue: there are some top-level settings, like parsers, helpers, and transaction modes, which may differ for each of these packages. What is the recommendation for such settings, or is there a better pattern to address these issues?
E.g.:
const pg = require('pg-promise');
const instance = pg({"schema": "public"});
instance.pg.types.setTypeParser(1114, str => str); // UTC date, which one library requires and the other doesn't
const constring = "";
const resolveFunctionPackage1 = ()=>instance(constring);
const resolveFunctionPackage2 = ()=>instance(constring);
To sum up: what is the best way to implement dependency injection for pg-promise?
I have 2 individual packages which need DB connection, currently they take connection string in constructor from outside and create connection object inside them
That is a serious design flaw, and it is never going to work well. Any independent package that uses a database must be able to reuse an existing connection pool, which is the most valuable resource when it comes to connection usage. Duplicating a connection pool inside an independent module will use up available physical connections and hinder the performance of all other modules that need the same physical connections.
If a third-party library supports pg-promise, it should be able to accept instantiated db object for accessing the database.
And if the third-party library supports only the base driver, it should at least accept an instantiated Pool object. In pg-promise, the db object exposes the underlying Pool object via db.$pool.
What happens when they want to set conflicting type parsers?
There will be a conflict, because pg.types is a singleton in the underlying driver, so it can only be configured one way. It is an unfortunate limitation.
The only way to avoid it is for reusable modules never to re-configure the parsers; that should only be done within the actual client application.
UPDATE
Strictly speaking, one should avoid splitting the database-access layer of an application into multiple modules; a number of problems can follow from that.
But specifically for the separation of type parsers, the library supports setting custom type parsers at the pool level. See the example here. Note that the update is just for TypeScript, i.e. in JavaScript clients it has been working for a while.
So you can still have your separate module create its own db object, but I would then advise limiting its connection pool size to the minimum, like 1:
const moduleDb = pgp({
    // ...connection details...
    max: 1, // set the pool size to just 1 connection
    types: /* your custom type parsers */
});
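A minimal dependency-injection sketch along the lines the answer recommends (all names here are hypothetical): the application creates the single db object and passes it into each package, instead of each package creating its own pool from a connection string. A stub db is used below so the sketch is self-contained:

```javascript
// Hypothetical reusable package: accepts an existing db object instead of
// creating its own connection pool from a connection string.
function createUserRepo(db) {
    return {
        findUser: id => db.oneOrNone('SELECT * FROM users WHERE id = $1', [id])
    };
}

// In the real application, a single pgp/db pair would be created and shared:
//   const pgp = require('pg-promise')();
//   const db = pgp(connectionString);
//   const users = createUserRepo(db);
// Demonstrated here with a stub db that records calls:
const calls = [];
const stubDb = {
    oneOrNone: (query, values) => {
        calls.push([query, values]);
        return Promise.resolve(null);
    }
};

const users = createUserRepo(stubDb);
users.findUser(42);
console.log(calls.length, calls[0][1][0]); // 1 42
```

With this shape, both packages share one pool, and per-package settings (like transaction modes) can be passed alongside the db object rather than configured globally.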

Can I reuse a connection in MongoDB? How do these connections actually work?

Trying to do some simple things with MongoDB, my mind got stuck on something that feels kind of strange to me.
client = MongoClient(connection_string)
db = client.database
print(db)
client.close()
I thought that when I make a connection, only that one is used throughout the rest of the code until the close() method. But it doesn't seem to work that way... I don't know how I ended up with 9 connections when there was supposed to be a single one, and even if each 'request' is a connection, there are too many of them.
For now it's not a big problem, just bothers me the fact that I don't know exactly how this works!
When you do new MongoClient(), you are not establishing just one connection. You are in fact creating the client, which holds a connection pool. When you make one or more requests, the driver uses an available connection from the pool. When the use is complete, the connection goes back to the pool.
Calling the MongoClient constructor every time you need to talk to the db is a very bad practice and incurs a handshake penalty. Use dependency injection or a singleton to hold the MongoClient.
According to the documentation, you should create one client per process.
Your code seems correct if it is a single-threaded process. If you don't need any more connections to the server, you can limit the pool size by explicitly specifying the number:
client = MongoClient(host, port, maxPoolSize=<num>)
On the other hand, if the code might later use the same connection, it is better to simply create the client once at the beginning and use it across the code.
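The "one client per process" advice can be followed with a small module-level singleton. A sketch with an injectable factory so it runs without pymongo; in real code the factory would be something like `lambda: MongoClient(host, port, maxPoolSize=10)`:

```python
_client = None

def get_client(factory):
    """Create the client on first use, then always return the same instance."""
    global _client
    if _client is None:
        _client = factory()
    return _client

# Stub factory standing in for: lambda: MongoClient(host, port, maxPoolSize=10)
calls = []
def stub_factory():
    calls.append(1)
    return object()

a = get_client(stub_factory)
b = get_client(stub_factory)
print(len(calls), a is b)  # 1 True
```

Every part of the code then calls get_client() and shares the same pool instead of opening fresh connections.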

Server-side Swift with a MongoDB manager singleton

I am working on a project using Vapor and MongoDB.
Let's say that at a specific route
drop.get("user", String.self) { request, user in
    // ... query MongoDB
}
I want to query the database and see if an input user already exists.
Is it wise to have a singleton MongoManager class that handles all the connections with the database?
drop.get("user", String.self) { request, user in
    MongoManager.sharedInstance.findUser(user)
}
Do I create a bottleneck with this implementation?
No, you will not create a bottleneck unless you have a single-threaded mechanism standing between your Vapor handler and MongoDB.
MongoKitten (the underlying driver for Swift + MongoDB projects) manages the connection pool internally. You can blindly fire queries at MongoKitten and it will figure out which connection to use, or create a new one if necessary.
Users of MongoKitten 3 get a single connection per request. If multiple requests are handled simultaneously, additional connections are opened.
Users of MongoKitten 4 get a single connection for every 3 requests; this is configurable. The connection pool expands by opening more connections when too many requests are in flight.
Users of the upcoming Meow ORM (which works similarly to what you're building) will get a single connection per thread. The connection pool expands if all connections are reserved.