Could Apache DBCP validate connections on each use? - apache-commons-dbcp

Our production team claims that the Apache DBCP connection pool is validating a connection on each use, that is, every time before it issues a query over that connection. But the DBCP configuration reference at http://commons.apache.org/dbcp/configuration.html does not seem to offer any such option by default; the only validation hooks appear to be on borrowing a connection from the pool or on returning it.
The team claims that they determined this using a tool called DynaTrace.
Could someone throw some light on this please?
Thanks.

I've seen something similar when using testOnBorrow (which issues the validation query every time a connection is requested from the pool) combined with the application requesting a new connection for every statement. Sharing the exported dynaTrace session (if you have access to it) would help with diagnosing this.
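For reference, the pool settings that produce this behavior might look like the following (property names are from the Commons DBCP configuration page; the validation query itself is an assumption, adjust it for your database):

```properties
# Validate on every borrow from the pool. Combined with one connection
# requested per statement, this runs the validation query before every
# application query - which would match what DynaTrace shows.
testOnBorrow=true
validationQuery=SELECT 1

# Less aggressive alternatives: validate idle connections in the
# background instead of on every borrow.
testOnReturn=false
testWhileIdle=true
timeBetweenEvictionRunsMillis=30000
```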

Related

MongoDB Connection Pooling Shutdown

We use MongoDB as our data store, with a MongoClient handling connection pooling.
The question is whether to explicitly call MongoClient.close() to shut down the connection pool or not.
Here's what I have explored on this so far.
The documentation for the close API says
Closes all resources associated with this instance, in particular any open network connections. Once called, this instance and any databases obtained from it can no longer be used.
But other questions on this topic say you can simply perform your operations and don't need to explicitly call MongoClient.close(), since this object manages connection pooling automatically.
Java MongoDB connection pool
These two contradict each other. If I were to follow the second, what would the downsides be?
Will the connections in the pool be closed once the MongoClient object is no longer referenced in the JVM,
or will they stay open for a particular period of time and then expire?
I would like to know the actual downsides of this approach. Any pointers on this are highly appreciated.
IMO, using close on server shut down seems to be the clean way to do it.
But I would like to get an expert opinion on this.
Update: There is no need to explicitly close the connection pool via API. Mongo driver takes care of it.
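If you do want the clean shutdown anyway, a common pattern is to register close() in a JVM shutdown hook. A minimal sketch, using a stand-in class where the real MongoClient (which also implements Closeable) would go:

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class ShutdownSketch {
    // Stand-in for the driver's MongoClient, which also implements Closeable.
    static class PooledClient implements AutoCloseable {
        final AtomicBoolean closed = new AtomicBoolean(false);
        @Override public void close() { closed.set(true); }
    }

    static final PooledClient client = new PooledClient();

    public static void main(String[] args) {
        // Mirror MongoClient.close() on orderly server shutdown: the hook
        // runs once when the JVM exits, closing the pool exactly once.
        Runtime.getRuntime().addShutdownHook(new Thread(client::close));
    }
}
```

This costs nothing at runtime and guarantees sockets are released on an orderly exit, even if the driver would eventually reclaim them anyway.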

Npgsql with Pgbouncer on Kubernetes - pooling & keepalives

I'm looking for more detailed guidance / other people's experience of using Npgsql in production with Pgbouncer.
Basically we have the following setup using GKE and Google Cloud SQL:
Right now I've got Npgsql configured as if pgbouncer weren't in place, using a local connection pool. I've added pgbouncer as a deployment in my GKE cluster because Google Cloud SQL has very low max-connection limits, and to be able to scale my application horizontally inside Kubernetes I need to protect against overwhelming it.
My problem is one of reliability when one of the pgbouncer pods dies (due to a node failure or as I'm scaling up/down).
When that happens, the existing open connections in the client-side connection pools of the application pods don't close immediately, and commands executed over them fail with exceptions in my application. Not ideal!
As I see it (and looking at the advice at https://www.npgsql.org/doc/compatibility.html) I have three options.
Live with it, and handle retries of SQL commands within my application. Possible, but seems like a lot of effort and creates lots of possible bugs if I get it wrong.
Turn on keepalives and let Npgsql itself fail out the bad connections relatively quickly when the checks fail. I'm not even sure if this will work or if it will cause further problems.
Turn off client-side connection pooling entirely. This seems to be the official advice, but I am loath to do this for performance reasons; it seems very wasteful for Npgsql to have to open a connection to pgbouncer for each session, and it runs counter to all of my experience with other RDBMSs like SQL Server.
Am I on the right track with one of those options? Or am I missing something?
You are generally on the right track and your analysis seems accurate. Some comments:
Option 2 (turning on keepalives) will help remove idle connections in Npgsql's pool which have been broken. As you've noted, your application will still see some failures (as some bad idle connections may not be removed in time). There is no particular reason to think this would cause further problems - this should be pretty safe to turn on.
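For illustration, keepalives are enabled via the connection string in Npgsql; the Keepalive value is in seconds, and the host, database and credential values here are placeholders:

```
Host=pgbouncer;Database=appdb;Username=app;Password=secret;Keepalive=30
```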
Option 3 is indeed problematic for perf, as a TCP connection to pgbouncer would have to be established every single time a database connection is needed. It will also not provide a 100% fail-proof mechanism, since pgbouncer may still drop out while a connection is in use.
At the end of the day, you're asking about resiliency in the face of arbitrary network/server failure, which isn't an easy thing to achieve. The only 100% reliable way to deal with this is in your application, via a dedicated layer which retries operations when a transient exception occurs. You may want to look at Polly, and note that Npgsql helps out a bit by exposing an IsTransient property on its exceptions, which can be used as a trigger to retry (Entity Framework Core also includes a similar "retry strategy"). If you do go down this path, note that transactions are particularly difficult to handle correctly.
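The shape of such a retry layer is language-agnostic; a minimal sketch (in Java here, though the same structure applies to a .NET client, where the transient check would consult the exception's IsTransient property):

```java
import java.util.function.Predicate;
import java.util.function.Supplier;

// Generic retry layer: re-run an operation when the caller classifies
// the thrown exception as transient; rethrow immediately otherwise,
// or once the attempt budget is exhausted.
public class Retry {
    public static <T> T withRetries(Supplier<T> op,
                                    Predicate<RuntimeException> isTransient,
                                    int maxAttempts) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return op.get();
            } catch (RuntimeException e) {
                if (!isTransient.test(e) || attempt == maxAttempts) throw e;
                // transient failure: loop and try again
            }
        }
        throw new IllegalStateException("unreachable");
    }
}
```

A production version would add backoff between attempts; as the answer notes, operations inside a transaction need extra care because the whole transaction, not just one statement, must be retried.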

Monitoring JDBC connection pool

I sometimes get the following exception
SQLNestedException: Cannot get a connection, pool error Timeout waiting for idle object
while using the Play framework and ScalikeJDBC to connect to a MariaDB instance.
Googling around showed it can either be that connections aren't being properly closed or that I should configure my connection pool to be bigger.
Now onto the actual question:
I'd like to investigate further, but I need a way to monitor said connection pool, ideally in the form of a graph of sorts - but how?
I have no idea how to configure JMX and MBeans for Netty (Play uses Netty, right?) or if it's possible at all, and Google is not helping. I don't even know if this is the right approach, so I'm giving a bounty-sized amount of points (or even more) to whoever can provide a sweet set of steps on how to proceed.
A bit of googling leads me to think that ScalikeJDBC uses Commons DBCP as the underlying connection pool.
More googling links back to Monitoring for Commons DBCP?, which I believe proposes exactly the monitoring you want!
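As a first sanity check, you can list everything the JVM's platform MBean server currently exposes; a DBCP pool registered for JMX would appear here under its configured domain (set via the pool's JMX name property in DBCP 2.x, which is an assumption about your setup), alongside the built-in java.lang beans:

```java
import java.lang.management.ManagementFactory;
import java.util.Set;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Dump every MBean registered in this JVM. If the DBCP pool is
// JMX-enabled, its bean (with attributes like active/idle counts)
// will show up in this list and can then be graphed via JConsole,
// VisualVM, or a JMX exporter.
public class ListMBeans {
    public static Set<ObjectName> allBeans() throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        return server.queryNames(new ObjectName("*:*"), null);
    }

    public static void main(String[] args) throws Exception {
        for (ObjectName name : allBeans()) {
            System.out.println(name);
        }
    }
}
```

Once you see the pool's ObjectName, any JMX-capable tool can poll its attributes on an interval to build the graph you're after.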

Loopback.io backup server and server to server replication

I am thinking of adopting Loopback.io to create a REST API. I may need the following approach: an inTERnet server (run by me) to which clients connect, plus a fallback inTRAnet server to which clients connect only when the internet connection is down. This secondary fallback server should then replicate data to the main server once the internet connection is up and running again. As clients are on the same inTRAnet, they should be able to switch automatically to the fallback server. Is this possible as an idea, and if so, what do you recommend I start digging into?
Thank you all!
Matteo
Simon from my other account. I believe what you want is possible as you can use whatever client side technology you want with LoopBack. As for easy solutions, I'm not familiar enough with Cordova to give any insight there.
It is definitely possible, but I suggest going through the getting-started tutorial first. You'd probably create two application servers and have another proxy in front to route requests to server A or B based on a heartbeat from the main server. You would have to code all the logic and set up the infrastructure yourself, though.
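The heartbeat-based routing described above could be sketched roughly like this (the server addresses and the shape of the health check are assumptions, not anything LoopBack provides):

```java
import java.util.function.BooleanSupplier;

// Pick the primary (internet) server while its heartbeat succeeds,
// otherwise fall back to the intranet server. A real proxy would run
// the heartbeat on a timer and drain in-flight requests on failover.
public class Failover {
    private final String primary;
    private final String fallback;
    private final BooleanSupplier primaryHealthy;

    public Failover(String primary, String fallback, BooleanSupplier healthy) {
        this.primary = primary;
        this.fallback = fallback;
        this.primaryHealthy = healthy;
    }

    public String target() {
        return primaryHealthy.getAsBoolean() ? primary : fallback;
    }
}
```

The hard part the answer alludes to is not this routing decision but the replication back to the main server afterwards, which needs conflict handling of its own.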

Connection Pooling in SQL Azure

I'm working on an ASP.NET MVC app. This app relies on data stored in a SQL Azure database. When I logged into the SQL Azure management interface, I noticed that I had 17 active connections to my database. I vaguely remember the concept of database connection pooling from long ago. For some reason, I thought that to use connection pooling you needed to add a setting to your connection string in your web.config file. For the life of me though, I can't remember or find documentation on such a setting.
For reference's sake, I'm using System.Data.SqlClient as the provider in my connection string settings. Can someone please tell me how to use connection pooling with SQL Azure? Considering I'm the only one hitting the database, 17 active connections seemed high. I figured that if connection pooling was turned on, only 1 active connection should appear.
Thank you.
Connection pooling is default client-side behavior; there is nothing to configure for SQL Azure specifically. Your app should be getting the benefits of connection pooling automatically. Ensure that your connection strings are identical throughout the app, as a new pool is created for each distinct connection string. This article in MSDN specifies:
When a new connection is opened, if the connection string is not an exact match to an existing pool, a new pool is created. Connections are pooled per process, per application domain, per connection string and, when integrated security is used, per Windows identity. Connection strings must also be an exact match; keywords supplied in a different order for the same connection will be pooled separately.
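That "exact match" rule is textual, not semantic, which is easy to demonstrate (hypothetical connection strings; the keywords are identical, only their order differs):

```java
// Two logically identical connection strings that the client would
// pool separately, because pool lookup compares the raw string.
public class PoolKeys {
    public static boolean samePool(String a, String b) {
        return a.equals(b); // exact textual match only
    }

    public static void main(String[] args) {
        String a = "Server=myserver;Database=app;User Id=me;";
        String b = "Database=app;Server=myserver;User Id=me;";
        System.out.println(samePool(a, a)); // true
        System.out.println(samePool(a, b)); // false: separate pools
    }
}
```

So 17 active connections can simply mean several distinct connection strings (or several app domains), each holding its own small pool.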
Now with regards to a setting that you don't remember. You may have been talking about MARS (Multiple Active Result Sets). This feature is now available on Sql Azure.
I would suggest reading the following article, which explains how to correctly design a connection to SQL Azure and also has C#-based sample code showing how:
http://blogs.msdn.com/b/sqlazure/archive/2010/05/11/10011247.aspx