How can I use a connection pool in perl-redis? - perl

I'm currently using Perl with the Mojolicious framework and the Redis 1.999 library (https://metacpan.org/pod/Redis).
If I want to use a connection pool, do I have to implement the pool objects myself? I would like to be able to create connections and define a maximum number of clients, similar to the Jedis connection pool.
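For reference, Redis.pm itself does not ship a connection pool, so one option is a small hand-rolled wrapper. Below is a minimal, untested sketch; the RedisPool package and its method names are made up for illustration, only Redis->new(server => ...) comes from the Redis module itself.

package RedisPool;
use strict;
use warnings;
use Redis;

# Hypothetical minimal pool: hands out up to max_clients connections
# and reuses the ones that are returned via release().
sub new {
    my ($class, %args) = @_;
    return bless {
        server      => $args{server}      // '127.0.0.1:6379',
        max_clients => $args{max_clients} // 10,
        idle        => [],   # connections waiting to be reused
        in_use      => 0,    # connections currently handed out
    }, $class;
}

sub get {
    my ($self) = @_;
    if (my $conn = pop @{ $self->{idle} }) {
        $self->{in_use}++;
        return $conn;
    }
    die "connection pool exhausted\n"
        if $self->{in_use} >= $self->{max_clients};
    $self->{in_use}++;
    return Redis->new(server => $self->{server});
}

sub release {
    my ($self, $conn) = @_;
    $self->{in_use}--;
    push @{ $self->{idle} }, $conn;
    return;
}

1;

Usage would then look like:

my $pool  = RedisPool->new(server => '127.0.0.1:6379', max_clients => 20);
my $redis = $pool->get;
$redis->set(foo => 'bar');
$pool->release($redis);

Note that under a preforking server such as hypnotoad each worker is a separate process, so in practice you end up with a pool (or even a single cached connection) per worker.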

Related

pg-promise: Recommended pattern for passing connections to different libraries

This question is about pg-promise and its recommended usage pattern, and is based on the following assumption:
It does not make sense to create more than a single pgp instance if they are connecting to the same DB (this is also suggested by the helpful warning message "Creating a duplicate database object for the same connection.").
Given:
I have two individual packages which need a DB connection. Currently they take a connection string in their constructors and create the connection object internally, which leads to the duplicate-connection warning. That is fair, as they both talk to the same DB, and there is room for optimisation here (since I am in control of those packages).
Then: To prevent this, I thought of implementing dependency injection: I pass a resolve function into each library's constructor, which gives them the DB connection object.
Issue: Some settings live at the top level, such as type parsers, helpers, and transaction modes, and they may differ between these packages. What is the recommendation for such settings, or is there a better pattern to address these issues?
E.g.:
const pg = require('pg-promise');
const instance = pg({"schema": "public"});
instance.pg.types.setTypeParser(1114, str => str); // UTC date, which one library requires and the other doesn't
const constring = "";
const resolveFunctionPackage1 = () => instance(constring);
const resolveFunctionPackage2 = () => instance(constring);
To sum up: What is the best way to implement dependency injection for pg-promise?
I have 2 individual packages which need DB connection, currently they take connection string in constructor from outside and create connection object inside them
That is a serious design flaw, and it is never going to work well. Any independent package that uses a database must be able to reuse an existing connection pool, which is the most valuable resource when it comes to connection usage. Head-on duplication of a connection pool inside an independent module will use up the available physical connections and hinder the performance of every other module that needs to use the same physical connections.
If a third-party library supports pg-promise, it should be able to accept an instantiated db object for accessing the database.
And if the third-party library supports only the base driver, it should at least accept an instantiated Pool object. In pg-promise, the db object exposes the underlying Pool object via db.$pool.
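For illustration, a rough sketch of that injection pattern could look like the following; the file layout, connection string, and Package1 class are placeholders, not part of pg-promise itself.

// db.js - the only place that initialises pg-promise for the whole app
const pgp = require('pg-promise')();
const db = pgp('postgres://user:pass@localhost:5432/mydb'); // placeholder connection string

module.exports = { db, pgp };

// package1.js - an independent package that accepts the shared db object
class Package1 {
    constructor(db) {
        this.db = db; // reuses the application's single connection pool
    }
    findUser(id) {
        return this.db.oneOrNone('SELECT * FROM users WHERE id = $1', [id]);
    }
}
module.exports = Package1;

// app.js - wires everything together
const { db } = require('./db');
const Package1 = require('./package1');
const pkg1 = new Package1(db); // the pg-promise-aware package receives db
// a package that only knows the base driver could be given db.$pool instead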
What happens when they want to set conflicting type parsers?
There will be a conflict, because pg.types is a singleton from the underlying driver, so it can only be configured in one way. It is an unfortunate limitation.
The only way to avoid it is for reusable modules to never reconfigure the parsers; that should only be done within the actual client application.
UPDATE
Strictly speaking, one should avoid splitting the database-access layer of an application into multiple modules; a number of problems can follow from that.
But specifically for separating type parsers, the library supports setting custom type parsers at the pool level. See the example here. Note that the update is just for TypeScript, i.e. in JavaScript clients it has been working for a while.
So you can still have your separate module create its own db object, but I would advise that you then limit its connection pool size to the minimum, like 1:
const moduleDb = pgp({
    // ...connection details...
    max: 1, // set pool size to just 1 connection
    types: /* your custom type parsers */
});

How to limit the PostgreSQL JDBC pool connection count?

PostgreSQL JDBC provides several classes for connection pooling. Only PGConnectionPoolDataSource is recommended for use. With this class, if the received connection is busy, the library creates another one.
PGPoolingDataSource (with setMaxConnections called) waits until some connection becomes free (if all of them are busy), which is what I want. But this class is marked as @Deprecated.
In the source code I see that it uses PGPooledConnection, which in turn uses BaseDataSource, and there is no mention of any limit.
Is there any correct way to limit pool connections?
You should use a third-party connection pool library like HikariCP or DBCP, or the one included with your application server (if any).
This is also documented in the deprecation note of PGPoolingDataSource (see the source on GitHub):
Since 42.0.0, instead of this class you should use a fully featured
connection pool like HikariCP, vibur-dbcp, commons-dbcp, c3p0, etc.
The class PGConnectionPoolDataSource does not implement a connection pool; it is intended to be used by a connection pool as a factory of connections.
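As a sketch of the recommended approach (assuming HikariCP and the PostgreSQL JDBC driver are on the classpath; the URL and credentials are placeholders), capping the pool size and getting the blocking behaviour described above looks roughly like this:

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

import java.sql.Connection;
import java.sql.SQLException;

public class PooledDataSourceExample {
    public static void main(String[] args) throws SQLException {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:postgresql://localhost:5432/mydb"); // placeholder URL
        config.setUsername("postgres");                             // placeholder credentials
        config.setPassword("secret");
        config.setMaximumPoolSize(10);       // hard upper limit on connections
        config.setConnectionTimeout(30_000); // callers block up to 30s for a free connection

        try (HikariDataSource ds = new HikariDataSource(config);
             Connection conn = ds.getConnection()) {
            // when all 10 connections are busy, getConnection() waits
            // (up to connectionTimeout) instead of opening an 11th one
            System.out.println("connected: " + conn.getMetaData().getURL());
        }
    }
}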

Server-side Swift with a MongoDB manager singleton

I am working on a project using Vapor and MongoDB.
Let's say that at a specific route:
drop.get("user", String.self) { request, user in
// ... query Mongodb
}
I want to query the database and see if an input user already exists.
Is it wise to have a singleton MongoManager class that handles all the connection with the database?
drop.get("user", String.self) { request, user in
MongoManager.sharedInstance.findUser(user)
}
Do I create a bottleneck with this implementation?
No, you will not create a bottleneck unless you have a single-threaded mechanism that stands between your Vapor Handler and MongoDB.
MongoKitten (the underlying driver for Swift + MongoDB projects) manages the connection pool internally. You can blindly fire queries at MongoKitten and it'll figure out what connection to use or will create a new one if necessary.
Users of MongoKitten 3 will use a single connection per request. If multiple requests are being handled simultaneously, additional connections will be opened.
Users of MongoKitten 4 will use a single connection for every 3 requests; this is configurable. The connection pool will expand by opening more connections if too many requests are in flight.
Users of the upcoming Meow ORM (which works similarly to what you're building) will use a single connection per thread. The connection pool will expand if all connections are reserved.

How does Database.forDataSource work in Slick?

I am currently setting up a project that uses Play Slick with Scala. From the docs I find that to get a session we should do:
val db = Database.forDataSource(dataSource: javax.sql.DataSource)
So I followed the pattern and used this in every repository layer (a layer on top of the model, similar to a DAO).
I have a couple of repositories, and I have duplicated this line.
My question is: does this connect to the database every time, or is there a common pool from which we get the connection?
From the Slick documentation:
Using a DataSource
You can provide a DataSource object to forDataSource. If you got it from the connection pool of your application framework, this plugs the pool into Slick.
val db = Database.forDataSource(dataSource: javax.sql.DataSource)
When you later create a Session, a connection is acquired from the pool and when the Session is closed it is returned to the pool.
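As a sketch of what that can look like in practice (assuming the Slick 2.x Session API from the quoted docs and HikariCP as the pool implementation; the connection details, the DB object and the UserRepository class are placeholders): create the DataSource and the Database once, and hand the shared db to every repository instead of calling forDataSource in each one.

import com.zaxxer.hikari.{HikariConfig, HikariDataSource}
import scala.slick.jdbc.JdbcBackend.{Database, Session}

object DB {
  private val config = new HikariConfig()
  config.setJdbcUrl("jdbc:postgresql://localhost:5432/mydb") // placeholder connection details
  config.setUsername("user")
  config.setPassword("pass")
  config.setMaximumPoolSize(10) // upper bound on physical connections

  // one Database object wrapping the pool; sessions are borrowed from it
  val db = Database.forDataSource(new HikariDataSource(config))
}

// each repository receives the shared db instead of creating its own
class UserRepository(db: Database) {
  def withConnection[T](work: Session => T): T =
    db.withSession { session => work(session) } // connection taken from the pool, returned on close
}

object Main extends App {
  val users  = new UserRepository(DB.db)
  val orders = new UserRepository(DB.db) // same db, therefore the same underlying pool
}

This way each withSession call borrows a connection from the same pool and returns it when the session closes; calling forDataSource with a freshly created DataSource in every repository would instead create a separate pool per repository.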

Connection pool in the Go mgo package

The article running-mongodb-queries-concurrently-with-go says that mgo.DialWithInfo creates a session which maintains a pool of socket connections to MongoDB, but when I look in the documentation of the DialWithInfo function I do not find anything about a connection pool. I only find something in the Dial function that says: "This method is generally called just once for a given cluster. Further sessions to the same cluster are then established using the New or Copy methods on the obtained session. This will make them share the underlying cluster, and manage the pool of connections appropriately."
Can someone tell me how the connection pool works in mgo, and whether it is possible to configure this pool?
Is it true that DialWithInfo creates a connection pool, or is it only the Dial function that creates this pool?
Thanks in advance.
Looking into the source code for the Dial function calls, you can see that the Dial function calls the DialWithTimeout function, which calls the DialWithInfo function. So, to answer your question about the differences between the functions: Dial is a convenience wrapper for DialWithTimeout, which in turn is a convenience wrapper for DialWithInfo, so they all result in the same connection pool.
As to how to manage that connection pool, you've got it right in your question.
Further sessions to the same cluster are then established using the New or Copy methods on the obtained session. This will make them share the underlying cluster, and manage the pool of connections appropriately.
So a single call to Dial, DialWithTimeout, or DialWithInfo will establish the connection pool. If you require more than one session, use the session.New() or session.Copy() methods on the session returned from whichever Dial function you chose to use.
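A short sketch of that pattern (the URL, database and collection names are placeholders), including the pool-size knob that mgo exposes via Session.SetPoolLimit:

package main

import (
	"log"

	mgo "gopkg.in/mgo.v2"
	"gopkg.in/mgo.v2/bson"
)

func main() {
	// Dial once for the whole application; this establishes the cluster
	// topology and the underlying connection pool.
	rootSession, err := mgo.Dial("mongodb://localhost:27017") // placeholder URL
	if err != nil {
		log.Fatal(err)
	}
	defer rootSession.Close()

	// Optional: cap the number of sockets per server (the default is 4096).
	rootSession.SetPoolLimit(50)

	// Per request / goroutine: copy the root session. The copy shares the
	// cluster and pool, and Close() returns its socket to the pool.
	s := rootSession.Copy()
	defer s.Close()

	var user bson.M
	if err := s.DB("mydb").C("users").Find(bson.M{"name": "alice"}).One(&user); err != nil {
		log.Println("query failed:", err)
	}
}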