Monitoring JDBC connection pool - Scala

I sometimes get the following exception while using the Play framework and ScalikeJDBC to connect to a MariaDB instance:
SQLNestedException: Cannot get a connection, pool error Timeout waiting for idle object
Googling around suggests the cause is either that connections aren't being closed properly or that the connection pool should be configured to be bigger.
Now onto the actual question:
I'd like to investigate further, but I need a way to monitor said connection pool, ideally in the form of a graph of some sort. But how?
I have no idea how to configure JMX and MBeans for Netty (Play uses Netty, right?), or whether it's even possible, and Google is not helping. I don't even know if this is the right approach, so I'm giving a bounty-sized amount of points (or even more) to whoever can provide a sweet set of steps on how to proceed.

A bit of googling leads me to think that ScalikeJDBC uses Commons DBCP as the underlying connection pool.
More googling links back to Monitoring for Commons DBCP?, which I believe proposes monitoring exactly what you want!
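If ScalikeJDBC is indeed sitting on top of a Commons DBCP BasicDataSource in your setup (worth verifying), a minimal sketch of the kind of thing you can poll and graph looks like this. The assumption here is that you can get a reference to the underlying BasicDataSource; how exactly depends on how the pool is created in your app.

```scala
import java.util.concurrent.{Executors, TimeUnit}

import org.apache.commons.dbcp.BasicDataSource

// Rough sketch: periodically read the counters Commons DBCP already exposes
// and log them (or push them to whatever graphing/metrics system you use).
// Note: on Commons DBCP 2.x the package and the "max" accessor are named differently.
object PoolMonitor {
  def start(ds: BasicDataSource): Unit = {
    val scheduler = Executors.newSingleThreadScheduledExecutor()
    scheduler.scheduleAtFixedRate(new Runnable {
      def run(): Unit =
        // numActive = connections currently handed out, numIdle = connections sitting in the pool
        println(s"dbcp active=${ds.getNumActive} idle=${ds.getNumIdle} max=${ds.getMaxActive}")
    }, 0, 10, TimeUnit.SECONDS)
  }
}
```

Graphing those two numbers over time against the configured maximum usually makes it obvious whether connections are leaking (active climbs and never comes back down) or the pool is simply too small under load. The same counters can also be exposed over JMX and watched with jconsole or VisualVM if you prefer that route.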

Related

MongoDB Connection Pooling Shutdown

We use MongoDB as our data store, and there is a MongoClient that we use for connection pooling.
The question is whether or not to explicitly call MongoClient.close to shut down the connection pool.
Here's what I have explored on this so far.
The documentation for the close API says
Closes all resources associated with this instance, in particular any open network connections. Once called, this instance and any databases obtained from it can no longer be used.
But when I looked at other questions on this topic, they say you can just perform your operations and don't need to explicitly call MongoClient.close, as this object manages connection pooling automatically.
Java MongoDB connection pool
The two are contradictory. If I were to follow the second, what would the downsides be?
Will the connections in the pool be closed when the MongoClient object is dereferenced by the JVM?
Or will the connections stay open for a certain period of time and then expire?
I would like to know the actual downsides of this approach. Any pointers on this are highly appreciated.
IMO, calling close on server shutdown seems to be the clean way to do it.
But I would like to get an expert opinion on this.
Update: there is no need to explicitly close the connection pool via the API; the Mongo driver takes care of it.
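For anyone who still prefers deterministic cleanup (closing the pool when the server shuts down, as the question suggests), doing so is cheap and harmless. A minimal sketch using the MongoDB Java driver from Scala; the connection string is a placeholder:

```scala
import com.mongodb.client.{MongoClient, MongoClients}

object MongoLifecycle {
  // One client (and therefore one connection pool) for the whole application.
  val client: MongoClient = MongoClients.create("mongodb://localhost:27017")

  // Close the pool explicitly when the JVM shuts down. The driver would
  // eventually release the sockets anyway, so this is belt-and-braces.
  sys.addShutdownHook {
    client.close()
  }
}
```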

Npgsql with Pgbouncer on Kubernetes - pooling & keepalives

I'm looking for more detailed guidance / other people's experience of using Npgsql in production with Pgbouncer.
Basically we have the following setup using GKE and Google Cloud SQL:
Right now I've got Npgsql configured as if pgbouncer weren't in place, using a local connection pool. I've added pgbouncer as a deployment in my GKE cluster because Cloud SQL has very low max-connection limits, and to be able to scale my application horizontally inside Kubernetes I need to protect it from being overwhelmed.
My problem is one of reliability when one of the pgbouncer pods dies (due to a node failure or as I'm scaling up/down).
When that happens, all of the existing open connections in the client-side connection pools of the application pods don't immediately close, and they basically surface as exceptions in my application as it tries to execute commands. Not ideal!
As I see it (and looking at the advice at https://www.npgsql.org/doc/compatibility.html) I have three options.
Live with it, and handle retries of SQL commands within my application. Possible, but seems like a lot of effort and creates lots of possible bugs if I get it wrong.
Turn on keepalives and let Npgsql itself 'fail out' the bad connections relatively quickly when they break. I'm not even sure if this will work or whether it will cause further problems.
Turn off client-side connection pooling entirely. This seems to be the official advice, but I am loath to do this for performance reasons: it seems very wasteful for Npgsql to have to open a connection to pgbouncer for each session, and it runs counter to all of my experience with other RDBMSs like SQL Server.
Am I on the right track with one of those options? Or am I missing something?
You are generally on the right track and your analysis seems accurate. Some comments:
Option 2 (turning on keepalives) will help remove idle connections in Npgsql's pool which have been broken. As you've written, your application will still see some failures (as some bad idle connections may not be removed in time). There is no particular reason to think this would cause further problems; it should be pretty safe to turn on.
Option 3 is indeed problematic for perf, as a TCP connection to pgbouncer would have to be established every single time a database connection is needed. It will also not provide a 100% fail-proof mechanism, since pgbouncer may still drop out while a connection is in use.
At the end of the day, you're asking about resiliency in the face of arbitrary network/server failure, which isn't an easy thing to achieve. The only 100% reliable way to deal with this is in your application, via a dedicated layer which retries operations when a transient exception occurs. You may want to look at Polly, and note that Npgsql helps out a bit by exposing an IsTransient flag on its exceptions, which can be used as a trigger to retry (Entity Framework Core also includes a similar "retry strategy"). If you do go down this path, note that transactions are particularly difficult to handle correctly.
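The retry layer itself is not much code; the hard part is deciding what is actually safe to retry (especially around transactions). A minimal sketch of the shape of such a helper, written in Scala only to match the rest of this page (in .NET you would typically reach for Polly plus Npgsql's transient-error flag instead; the predicate and usage below are placeholders):

```scala
import scala.util.{Failure, Success, Try}

object TransientRetry {
  // Retry `op` up to `maxAttempts` times, but only when the failure is judged
  // transient by the caller-supplied predicate. The predicate is where you would
  // plug in your driver's notion of "transient".
  def retryOnTransient[A](maxAttempts: Int)(isTransient: Throwable => Boolean)(op: => A): A = {
    def attempt(n: Int): A = Try(op) match {
      case Success(result) => result
      case Failure(e) if isTransient(e) && n < maxAttempts =>
        Thread.sleep(100L * n) // crude linear backoff between attempts
        attempt(n + 1)
      case Failure(e) => throw e
    }
    attempt(1)
  }

  // Hypothetical usage:
  // retryOnTransient(3)(_.isInstanceOf[java.io.IOException]) { runQuery() }
}
```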

Pooling in Phoenix for OrientDB database

I want to use Phoenix/Elixir with OrientDB. I decided to build a little demo app to get a good understanding of it.
As the database driver I will use MarcoPolo and not use Ecto at all. MarcoPolo is very low-level (a binary driver) and doesn't support pooling.
Do I have to use pooling? Does Phoenix have a way to deal with this? Or do I have to implement it myself using something like Poolboy? Or something else?
I want to share the demo app to make life easier for others, so I want to go about it the right way. But maybe my approach is overkill.
MarcoPolo is a non-blocking client, which means that when a process asks the MarcoPolo connection to send a command to OrientDB, MarcoPolo sends the command right away but doesn't wait for the response (which it later receives as an Erlang message, because it uses :active on :gen_tcp). What this means in practice is that a single MarcoPolo connection should be capable of handling several client processes, thus eliminating the need for pooling if your application doesn't have to handle lots of requests to OrientDB.
In case you want to use pooling, the simplest solution is probably poolboy, as you already figured out. I have no OrientDB-specific setup, but you can find some information on how to set up a pool of connections to a database in the documentation for Redix (a Redis client for Elixir). The principles are the same. This is the section in the documentation for Redix that covers pooling.

Could Apache DBCP validate connections on each use?

Our production people claim they are pretty sure that the Apache DBCP connection pool is validating a connection on each use, that is, every time before it issues a query over that connection. But the DBCP configuration at http://commons.apache.org/dbcp/configuration.html does not seem to provide any such option by default. It seems that the only two options are validating when getting a connection or when returning it.
The team claims that they determined this using a tool called DynaTrace.
Could someone throw some light on this please?
Thanks.
I've seen something similar when using testOnBorrow (which will issue a validation query every time a connection is requested from the pool) and requesting a new connection for every statement. Sharing the exported dynaTrace session (if you have access to it) could help with diagnosing this.
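For reference, this is roughly what such a setup looks like when configured programmatically on a DBCP 1.x BasicDataSource (most apps set the same properties in a config file instead; the driver class, URL and credentials below are placeholders):

```scala
import org.apache.commons.dbcp.BasicDataSource

// Sketch of a DBCP 1.x pool that validates a connection every time one is
// borrowed. If the application also borrows a fresh connection for every
// statement, that adds up to one validation query per statement, which would
// look exactly like "validation on each use" in a trace.
object DbcpValidationExample {
  val ds = new BasicDataSource()
  ds.setDriverClassName("com.mysql.jdbc.Driver")
  ds.setUrl("jdbc:mysql://localhost:3306/mydb")
  ds.setUsername("user")
  ds.setPassword("secret")
  ds.setValidationQuery("SELECT 1")
  ds.setTestOnBorrow(true)   // validate whenever a connection is handed out
  ds.setTestOnReturn(false)  // the other documented option: validate on return
}
```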

TCP connection management

I asked this question on the Go mailing list, but I think it is general enough that I will get a better response on SO.
When working with the Java/.NET platforms, I never had to manage database connections manually, as the drivers handle it. Now, trying to connect to a NoSQL database with very basic driver support, it is my responsibility to manage the connection. The driver lets me connect, close, and reconnect to a TCP port, but I'm not sure how I should manage it (see the link). Do I have to create a new connection for each database request? Can I use third-party connection pooling libraries?
Thanks.
I don't know enough about MongoDB to answer this question directly, but do you know how MongoDB handles requests over TCP? For example, one problem with a single TCP connection can be that the database handles each request serially, potentially causing high latency even though the server isn't actually saturated and could handle more capacity.
Are the machines all running on a local network? If so, the cost of opening a new connection won't be too high, and might even be insignificant from a performance perspective regardless.
My two cents: Do one TCP connection per request and just profile it and see what happens. It is very easy to add pooling later if you're DoSing yourself, but it may never be a problem. That'll work right now, and you won't have to mess around with a third party library that may cause more problems than it solves.
Also, TCP programming is really easy. Don't be intimidated by it; detecting a closed socket and reconnecting synchronously or asynchronously is simple.
Most MongoDB drivers (clients) will create and use a connection pool when connecting to the server. Each socket (connection) can do one operation at a time on the server; because of how data is read off the socket, you can issue many requests and the server will just process them one after another and return data as each one completes.
There is a Go MongoDB driver, but it doesn't seem to do connection pooling: http://github.com/mikejs/gomongo
In addition to the answers here: if you find you do need some kind of connection pooling, redis.go is a decent example of a database driver that pools connections. Specifically, look at the Client.popCon and Client.pushCon methods in the source.
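To make the pop/push idea concrete without tying it to any particular driver, here is a minimal generic blocking-pool sketch, written in Scala like the earlier examples on this page; Conn, openTcpConnection and sendRequest are placeholders for whatever your driver provides:

```scala
import java.util.concurrent.LinkedBlockingQueue

// Minimal generic connection pool: pre-open N connections, hand them out and
// take them back. Callers block when the pool is empty, which gives you the
// same back-pressure the popCon/pushCon pair in redis.go provides.
class SimplePool[Conn](size: Int, openConn: () => Conn) {
  private val queue = new LinkedBlockingQueue[Conn]()
  (1 to size).foreach(_ => queue.put(openConn()))

  def withConnection[A](f: Conn => A): A = {
    val conn = queue.take()   // blocks until a connection is free
    try f(conn)
    finally queue.put(conn)   // always return the connection to the pool
  }
}

// Hypothetical usage:
// val pool = new SimplePool(10, () => openTcpConnection())
// pool.withConnection(c => sendRequest(c, "..."))
```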