I am connecting to a legacy RDBMS using System.Data.Odbc. I would like to know whether connection pooling is supported when connecting to this legacy system. Suppose I use a connection as shown below:
using (OdbcConnection con = new OdbcConnection("ConnStr"))
{
con.Open();
//perform db operation
}
Does it establish a new connection and disconnect each time it is called, or does the connection come from a connection pool?
It all depends on how connection pooling is configured on the machine. If pooling is enabled, the snippet shown will use a connection from the pool (if one is available) and avoid the cost of creating a new [RDBMS] connection.
That's the nice thing about connection pooling: it is transparent to the application, i.e. you do not need to do anything different or call a separate API.
There's a distinction to be made: connection pooling typically deals with connections to the DBMS server ("SQL sessions", if you will), not with the objects which encapsulate such a connection. Consequently, the SQL sessions (which are, relatively, the most costly elements to produce) are effectively cached, but the ADO objects (or whatever wrappers) are created anew each time.
Also, connection pooling ensures that connections to the SQL server are used efficiently, but it doesn't guarantee that a new connection never gets created (for example, after a period of relative idleness some connections may time out, and they are then dropped and re-created).
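The session-vs-wrapper distinction above can be illustrated with a deliberately minimal sketch (Python for brevity; this is not the ODBC implementation, and the `Session`/`SessionPool` names are made up for the example). The expensive server-side session is cached and reused; "closing" just returns it to the pool:

```python
import queue

class Session:
    """Stand-in for an expensive server-side SQL session."""
    _created = 0
    def __init__(self):
        Session._created += 1

class SessionPool:
    """Minimal pool: caches Sessions and hands them out transparently."""
    def __init__(self):
        self._idle = queue.Queue()

    def open(self):
        try:
            return self._idle.get_nowait()   # reuse a cached session
        except queue.Empty:
            return Session()                 # none idle: create a new one

    def close(self, session):
        self._idle.put(session)              # "closing" just returns it

pool = SessionPool()
s1 = pool.open()
pool.close(s1)
s2 = pool.open()   # same underlying session; no new one was created
```

This is exactly why the using block in the question is cheap when pooling is on: the wrapper object is new each time, but the session underneath usually is not.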
Edit (on the support for legacy RDBMS etc. [Comment from Amitabh])
With ODBC, connection pooling is a feature of the ODBC layer, not of the various drivers ODBC uses to connect to the underlying data stores. Therefore, as long as you have ODBC version 3.0 or later (and as long as the underlying driver is accessible to ODBC), ODBC can manage a connection pool for you (provided you supply the necessary configuration details).
With ODBC, the connection pool can also be configured/enabled programmatically. This doesn't invalidate the statement that connection pooling is transparent to the program; it's just that you may need to make a few calls in the initialization section of the program to set up the pooling, while the rest of the logic that actually uses connections is kept unchanged.
See for example this MSDN article.
It uses a connection pool. You can (and should) wrap the connection in a using block; that will "soft close" the connection, and the next time you open one you are likely to get an already established connection from the pool.
The issue was that even if I target just one node of my replica set in my connection string, the mongo-go-driver always wants to discover and connect to the other nodes.
I found a solution here that basically says I should add the connect option to the connection string.
mongodb://host:27017/authDb?connect=direct
My question is: how good or bad a practice is this, why hasn't MongoDB documented it, and are there other values this option can take?
That option only exists for the Go driver. For all other drivers it is unrecognized, so it is not documented as a general connection string option.
It is documented for the Go Driver at https://godoc.org/go.mongodb.org/mongo-driver/mongo#example-Connect--Direct
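For completeness, based on the Go driver's URI option parsing (worth verifying against the documentation linked above), the option takes two values:

```
mongodb://host:27017/authDb?connect=direct      # connect only to the specified host
mongodb://host:27017/authDb?connect=automatic   # default: discover the whole replica set topology
```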
How good or bad a practice is this, why hasn't MongoDB documented it, and are there other values this option can take?
As pointed out in the accepted answer, this is documented in the driver documentation. Now for the other part of the question.
Generally speaking, in a replica set context you would want to connect to the topology rather than directly to a specific replica set member, except for administrative purposes. Replication is designed to provide redundancy, and connecting directly to one member (e.g. the primary) is not recommended because of fail-over.
All of the official MongoDB drivers follow the MongoDB specifications. In regard to direct connections, the current requirement is in server-discovery-and-monitoring.rst#general-requirements:
Direct connections: A client MUST be able to connect to a single
server of any type. This includes querying hidden replica set members,
and connecting to uninitialized members (see RSGhost) in order to run
"replSetInitiate". Setting a read preference MUST NOT be necessary to
connect to a secondary. Of course, the secondary will reject all
operations done with the PRIMARY read preference because the slaveOk
bit is not set, but the initial connection itself succeeds. Drivers
MAY allow direct connections to arbiters (for example, to run
administrative commands).
It only specifies that a driver MUST be able to do so, but not how. The MongoDB Go driver is not the only driver that currently supports the direct-connection option; .NET/C# and Ruby do as well.
Currently there is an open PR for the specifications to unify this behaviour. In the future, all drivers will establish a direct connection in the same way.
Sometimes you have static data that is used by all customers. I am looking for a solution that fetches this from localhost (127.0.0.1) using some sort of database.
I have done some tests using Golang fetching from a local PostgreSQL database, and it works perfectly. But how does this scale to 1000+ users?
I noticed that only one session was started on the local server, regardless of which computer made the request (as I used 127.0.0.1 in Golang to call Postgres). At some point, might this single session become a bottleneck for 1000 users?
My questions are:
How many concurrent users can PostgreSQL handle per session before it becomes a bottleneck? Or is this handled by the calling language (Golang)?
Is it even possible to handle many queries per session from different users?
Are there better ways to manage static lookup data for all customers than a local PostgreSQL database (Redis?)
I hope this question fits this forum. Otherwise, please point me in the right direction.
Every session creates a new postgres process, which gets forked from the "main" postgres process listening on the port (default 5432).
By default, 100 sessions can be open in parallel, but this can easily be changed via the max_connections setting in postgresql.conf.
Within one session, queries are never executed in parallel; a session runs one query at a time.
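For reference, the cap mentioned above lives in postgresql.conf (the value below is illustrative; changing it requires a server restart):

```
# postgresql.conf -- one backend process per session
max_connections = 200    # default: 100
```

Since one session runs only one query at a time, the usual way to serve many users over a bounded number of sessions is a client-side pool; Go's database/sql, which the asker is already using, maintains such a pool behind a single handle.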
I have an API running in AWS Lambda and AWS Gateway using Up. My API creates a database connection on startup, and therefore Lambda does this when the function is triggered for the first time. My API is written in node using Express and pg-promise to connect to and query the database.
The problem is that Lambda creates new instances of the function as it sees fit, and sometimes it appears as though there are multiple instances of it at one time.
I keep running out of DB connections as my Lambda function is using up too many database handles. If I log into Postgres and look at the pg_stat_activity table I can see lots of connections to the database.
What is the recommended pattern for solving this issue? Can one limit the number of simultaneous instances of a function in Lambda? Can you share a connection pool across instances of a function? (I doubt it.)
UPDATE
AWS now provides a product called RDS Proxy which is a managed connection pooling solution to solve this very issue: https://aws.amazon.com/blogs/compute/using-amazon-rds-proxy-with-aws-lambda/
There are a couple of ways that you can run out of database connections:
You have more concurrent Lambda executions than you have available database connections. This is certainly possible.
Your Lambda function is opening database connections but not closing them. This is a likely culprit, since web frameworks tend to keep database connections open across requests (which is more efficient), but on Lambda they have no opportunity to close them since AWS will silently terminate the instance.
You can solve 1 by controlling the number of available connections on the database server (the max_connections setting on PostgreSQL) and the maximum number of concurrent Lambda function invocations (as documented here). Of course, that just trades one problem for another, since Lambda will return 429 errors when it hits the limit.
Addressing 2 is trickier. The traditional (and right) way of dealing with database connection exhaustion is connection pooling. But with Lambda you can't do that on the client, and with RDS you don't have the option to do it on the server. You could set up an intermediary persistent connection pooler, but that makes for a more complicated setup.
In the absence of pooling, one option is to create and destroy a database connection on each function invocation. Unfortunately that will add quite a bit of overhead and latency to your requests.
Another option is to carefully control your client-side and server-side connection parameters. The idea is first to have the database close connections after a relatively short idle time (on PostgreSQL this is controlled by the tcp_keepalives_* settings). Then, to make sure that the client never tries to use a closed connection, you set a connection timeout on the client (how to do so will be framework dependent) that is shorter than that value.
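One way to combine the last two paragraphs is a reconnect-if-stale pattern: cache the connection across warm invocations, but verify it before use and reconnect when the server has dropped it. A sketch (Python for brevity; `db_connect` and `ping` are stand-ins for your driver's connect call and a cheap liveness check such as `SELECT 1` -- the author's stack is Node with pg-promise, where the same idea applies):

```python
conn = None  # module-level: survives warm invocations of the same instance

def db_connect():
    """Stand-in for a real driver call, e.g. psycopg2.connect(...)."""
    class Conn:
        closed = False
        def ping(self):
            if self.closed:
                raise ConnectionError("connection closed by the server")
    return Conn()

def get_connection():
    """Reuse the cached connection; reconnect if the server dropped it."""
    global conn
    if conn is not None:
        try:
            conn.ping()          # cheap liveness check, e.g. SELECT 1
            return conn
        except ConnectionError:
            conn = None          # server timed it out; fall through
    conn = db_connect()
    return conn

def handler(event, context):
    c = get_connection()
    # ... run queries with c ...
    return {"statusCode": 200}
```

With the server-side idle timeout set shorter than the client's expectations, stale connections are detected at the top of each invocation instead of failing mid-query.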
My hope is that AWS will give us a solution for this at some point (such as server-side RDS connection pooling). You can see various proposed solutions in this AWS forum thread.
You have two options to fix this:
You can tweak Postgres to disconnect those idle connections. This is the best way, but it may require some trial and error.
You have to make sure that you connect to the database inside your handler and disconnect before your function returns or exits. In express, you'll have to connect/disconnect while inside your route handlers.
I have a task ahead of me that requires the use of local temporary tables. For performance reasons I can't use transactions.
Temporary tables, much like transactions, require that all queries come from one connection, which must not be closed or reset. How can I accomplish this using the Enterprise Library Data Access Application Block?
Enterprise Library will use a single database connection if a transaction is active. However, there is no way to force a single connection for all Database methods in the absence of a transaction.
You can definitely use the Database.CreateConnection method to get a database connection. You could then use that connection along with the DbCommand objects to perform the appropriate logic.
Other approaches would be to modify Enterprise Library source code to do exactly what you want or create a new Database implementation that does not perform connection management.
I can't see a way of doing that with the DAAB. I think you are going to have to drop back to ADO.NET connections and manage them yourself; but even then, manipulating temporary tables on the server from a client-side app doesn't strike me as an optimal solution to the problem.
I'm using the Django ORM with Postgres.
After any operation on models (e.g. a simple select), a new open connection in IDLE state appears in Postgres.
I've tried all possible transaction manipulations, and I've tried calling
connection.close()
manually. Nothing helps.
And sooner or later, I'm receiving a "FATAL: connection limit exceeded for non-superusers" message.
What could I have done wrong?
Well, false alarm. It's normal behavior for PgPool, which was in use for this DB.