Release a Connection borrowed from ConnectionPool - scala

ScalikeJDBC's ConnectionPool docs page says:
Borrowing Connections
Simply just call #borrow method.
import scalikejdbc._
val conn: java.sql.Connection = ConnectionPool.borrow()
val conn: java.sql.Connection = ConnectionPool('named).borrow()
Be careful. The connection object should be released by yourself.
However, there's no mention of how to do that.
I can always call Connection.close(), but by 'releasing' a Connection
I understand that I'm supposed to return it to the ConnectionPool, not close it (otherwise the purpose of having a ConnectionPool would be defeated).
My questions are:
In general, what does 'releasing' a Connection (that has been borrowed from ConnectionPool) mean?
In ScalikeJDBC, how do I 'release' a Connection borrowed from ConnectionPool?

Calling close is fine. As per the Oracle docs: "Closing a connection instance that was obtained from a pooled connection does not close the physical database connection." The DBConnection in ScalikeJDBC just wraps the java.sql.Connection and delegates calls to close. The usual way of doing this with ScalikeJDBC is the using function, which is essentially an implementation of Java's try-with-resources.
See Closing JDBC Connections in Pool for a similar discussion on JDBC.
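For illustration, here is a minimal sketch of that pattern (assuming a default pool has already been initialized, e.g. via ConnectionPool.singleton; the query is just an example):

import scalikejdbc._

// using() calls close() on the DB wrapper when the block exits, which
// returns the borrowed connection to the pool instead of destroying it.
using(DB(ConnectionPool.borrow())) { db =>
  db.readOnly { implicit session =>
    sql"SELECT 1".map(_.int(1)).single.apply()
  }
}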

Upon a second look into the docs, ScalikeJDBC does provide a using method implementing the loan pattern that automatically returns the connection to the ConnectionPool.
So you can borrow a connection, use it, and return it to the pool as follows:
import scalikejdbc.{ConnectionPool, using}
import java.sql.Connection

using(ConnectionPool.get("poolName").borrow()) { (connection: Connection) =>
  // use the connection (only once) here
}
// connection automatically returned to the pool


How can I convert my mgo sessions to mongo-go-driver clients using connection pooling?

Long, long ago, when we were using mgo.v2, we created some wrapper functions that copied the session, set the read pref and returned that for consumption by other libraries, e.g.
func NewMonotonicConnection() (conn *Connection, success bool) {
	// conn is a named return value, so plain assignment is used here
	// (a := here would fail to compile with "no new variables").
	conn = &Connection{
		session: baseSession.Copy(),
	}
	conn.session.SetMode(mongo.Monotonic, true)
	return conn, true
}
We now just pass the default client (initialized using mongo.Connect and stored in a connection singleton) in an init function, and then consume it like this:
func NewMonotonicConnection() (conn *Connection, success bool) {
	conn = defaultConnection
	return conn, true
}
My understanding is that to leverage connection pooling, you need to use the same client (which is contained in defaultConnection), and sessions are now implicitly handled inside the .All()/cursor teardown. Please correct me if I'm wrong here.
It would be nice if we could still set the readpref on these connections (e.g. set NearestMode on this connection before returning), but what's the community/standard way of doing that?
I know I could call mongo.Connect over and over again, but is that expensive?
I could create different clients - each client with a different readpref - but I was thinking that if a write occurred on that connection, it wouldn't ever go back to reading from a slave.
It looks like I can create sessions explicitly, but I'm not certain I should, or whether there are any implications around managing those explicitly in the new driver.
There are a couple of things I learned on this quest through the mongo-go-driver codebase that I thought I should share with the world before closing this question. If I'm wrong here, please correct me.
You should not call Connect() over and over if you want to leverage connection pooling. It looked like each time Connect() was called, a new socket was created, which means there's a risk of socket exhaustion over time unless you manually defer Close() each time.
In mongo-go-driver, sessions are now handled automatically under the covers when you execute the query (e.g. All()). You can explicitly create and tear down a session, but you can't consume it using the singleton approach I proposed above without changing all the caller functions, because you can no longer call query operations on the session; instead you have to consume it with a WithSession function at the DB operation itself.
I realized that writeconcern, readpref and readconcern can all be set at the:
client level (these are the defaults that everything will use if not overridden)
session level
database level
query level
So what I did was create Database options and overload *mongo.Database, e.g.:
// Database is a meta-helper that allows us to wrap and overload
// the standard *mongo.Database type
type Database struct {
	*mongo.Database
}

// NewEventualConnection returns a new instantiated Connection
// to the DB using the 'Nearest' read preference.
// Per https://github.com/go-mgo/mgo/blob/v2/session.go#L61
// Eventual is the same as Nearest, but may change servers between reads.
// Nearest: The driver reads from a member whose network latency falls within
// the acceptable latency window. Reads in the nearest mode do not consider
// whether a member is a primary or secondary when routing read operations;
// primaries and secondaries are treated equivalently.
func NewEventualConnection() (conn *Connection, success bool) {
	conn = &Connection{
		client: baseConnection.client,
		dbOptions: options.Database().
			SetReadConcern(readconcern.Local()).
			SetReadPreference(readpref.Nearest()).
			SetWriteConcern(writeconcern.New(
				writeconcern.W(1))),
	}
	return conn, true
}
// GetDB returns an overloaded Database object
func (conn Connection) GetDB(dbname string) *Database {
	return &Database{conn.client.Database(dbname, conn.dbOptions)}
}
This allows me to leverage connection pooling and maintain backwards compatibility with our codebase. Hopefully this helps someone else.

Using a separate ExecutionContext for Slick

I'm using Play 2.5 with Slick. The docs on this topic simply state that everything is managed by Slick and Play's Slick module. This example, however, prints Dispatcher[akka.actor.default-dispatcher]:
class MyDbioImpl @Inject()(protected val dbConfigProvider: DatabaseConfigProvider)(implicit ec: ExecutionContext)
    extends HasDatabaseConfigProvider[JdbcProfile] {
  import profile.api._

  def selectSomeStuff(): Future[MyResult] = db.run {
    println(ec)
    [...]
  }
}
Since the execution context is printed inside db.run, it seems like all of my database access will also be executed on the default execution context.
I found this answer to an older question which, at the time, solved the problem. But that solution has since been deprecated; it is now suggested to use dependency injection to acquire the application context. When I try to do this, I get an error saying that play.akka.actor.slick-context does not exist...
class MyDbioProvider @Inject()(actorSystem: ActorSystem,
                               protected val dbConfigProvider: DatabaseConfigProvider)
    extends Provider[MyDbioImpl] {
  override def get(): MyDbioImpl = {
    val ec = actorSystem.dispatchers.lookup("play.akka.actor.slick-context")
    new MyDbioImpl(dbConfigProvider)(ec)
  }
}
Edit:
Is Slick's execution context a "normal" execution context which is defined in a config file somewhere? Where does the context switch take place? I assumed the entry point to the "database world" is at db.run.
According to Slick:
Every Database contains an AsyncExecutor that manages the thread pool for asynchronous execution of Database I/O Actions. Its size is the main parameter to tune for the best performance of the Database object. It should be set to the value that you would use for the size of the connection pool in a traditional, blocking application (see About Pool Sizing in the HikariCP documentation for further information). When using Database.forConfig, the thread pool is configured directly in the external configuration file together with the connection parameters. If you use any other factory method to get a Database, you can either use a default configuration or specify a custom AsyncExecutor.
Basically, it says you don't need to create an isolated ExecutionContext, since Slick already maintains an isolated thread pool internally. Any call you make to Slick is non-blocking, so you should use the default ExecutionContext for working with the Futures it returns.
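To make the division of labor concrete, here is a sketch (hypothetical query, written as if inside MyDbioImpl above, where profile.api._ and the injected implicit ExecutionContext are in scope):

// db.run executes the I/O action on Slick's internal AsyncExecutor pool;
// only the .map continuation runs on the injected ExecutionContext.
def countUsers(): Future[String] =
  db.run(sql"SELECT count(*) FROM users".as[Int].head)
    .map(n => s"$n users")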
Slick's implementation of this can be seen in the BasicBackend.scala file: the runInContextSafe method. The code is as follows:
val promise = Promise[R]
val runnable = new Runnable {
  override def run() = {
    try {
      promise.completeWith(runInContextInline(a, ctx, streaming, topLevel, stackLevel = 1))
    } catch {
      case NonFatal(ex) => promise.failure(ex)
    }
  }
}
DBIO.sameThreadExecutionContext.execute(runnable)
promise.future
As shown above, a Promise is used here: the action is handed off to Slick's internal thread pool for execution, and the Promise's Future is returned immediately. So by the time Await.result/ready is called, the Promise has probably already been completed by one of Slick's internal threads, and the call merely retrieves the result. This is why executing Await.result/ready in an environment such as Play remains effectively non-blocking.
For details, please refer to Scala's documentation on Future and Promise: https://docs.scala-lang.org/overviews/core/futures.html

PGPoolingDataSource does not honor default autocommit

The documentation says to use the data source via its getConnection function:
https://jdbc.postgresql.org/documentation/94/ds-ds.html
When the data source has autocommit disabled, we have:
ds.isDefaultAutoCommit
res0: Boolean = false
Getting a connection:
val conn = ds.getConnection
Autocommit is enabled:
conn.getAutoCommit
res1: Boolean = true
Looking at the code, we see that getConnection uses the parent class's implementation without changing the commit mode:
https://github.com/pgjdbc/pgjdbc/blob/master/pgjdbc/src/main/java/org/postgresql/ds/PGConnectionPoolDataSource.java
Is that just an implementation anomaly/limitation, or is there some other reasoning behind it?
It looks like a bug. The class PGPooledConnection (the handle to the connection in the pool) takes a boolean argument autoCommit in its constructor, but it doesn't do anything with it (like resetting the auto commit status before handing out the logical connection).
You should create an issue on their GitHub.
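Until it is fixed, one possible workaround (a sketch, not an official recommendation) is to re-apply the data source's configured default yourself after borrowing a connection:

val conn = ds.getConnection()
try {
  // The pool hands out connections with autocommit re-enabled, so
  // restore the data source's configured default manually.
  conn.setAutoCommit(ds.isDefaultAutoCommit)
  // ... execute statements; with autocommit off, commit explicitly ...
  conn.commit()
} finally {
  conn.close() // returns the connection to the pool
}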

HikariPool vs HikariDataSource

I'm going to use HikariCP instead of c3p0 in my web application. It seems super. But one place in the HikariCP interface is still questionable to me: it contains two classes, HikariPool and HikariDataSource, that have almost the same functionality. Looking into the sources, I found that HikariDataSource is essentially a wrapper for HikariPool. For instance, please find below the interesting part of the code:
HikariConfig config = new HikariConfig();
config.setJdbcUrl("jdbc:mysql://127.0.0.1:3306/mydb?user=aaa&password=xxx&autoReconnectForPools=true&autoReconnect=true&allowMultiQueries=true&useUnicode=true&characterEncoding=UTF-8");
config.setMaximumPoolSize(20);
config.setMinimumIdle(2);
HikariPool pool = new HikariPool(config); // using the HikariPool class
// HikariDataSource pool = new HikariDataSource(config); // using the HikariDataSource class
try (Connection conn = pool.getConnection()) {
    // execute some query...
}
Both classes work perfectly.
So, the question is the following: which one is recommended to use, and why?
Thank you in advance,
Simon
The correct way (the API) is to always get the connection from the data source:
HikariDataSource hds = new HikariDataSource(config);
Connection conn = hds.getConnection();
Be protected by coding to the API instead of the implementation.
HikariPool is not a data source; it is used internally by HikariDataSource.
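For example, here is a minimal Scala sketch of coding to the API: callers depend only on the standard javax.sql.DataSource interface, so the pool implementation can be swapped freely (the query and table name are hypothetical):

import javax.sql.DataSource
import scala.util.Using

// Works with HikariDataSource or any other javax.sql.DataSource;
// Using.resource closes the connection (returning it to the pool).
def firstUserName(ds: DataSource): Option[String] =
  Using.resource(ds.getConnection()) { conn =>
    Using.resource(conn.createStatement()) { st =>
      val rs = st.executeQuery("SELECT name FROM users LIMIT 1")
      if (rs.next()) Some(rs.getString(1)) else None
    }
  }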

Should I be creating a singleton MongoDriver object in Scala using the Reactivemongo driver?

I have the following singleton object with a method called connect that returns a DB connection. In classical synchronous programming, I'm led to believe that you only ever want one instance of a connection; however, this seems at odds with the asynchronous model of the ReactiveMongo driver, which uses an underlying Actor (Akka) model.
import reactivemongo.api.{MongoConnection, MongoDriver}
import scala.util.Try

object MyMongoDriver {
  def connect(uri: String): Try[MongoConnection] = {
    val driver = new MongoDriver
    MongoConnection.parseURI(uri).map { parsedURI =>
      driver.connection(parsedURI)
    }
  }
}
What seems to be happening, though, is that one instance of MyMongoDriver is instantiated, and then multiple connections (as many as needed) are returned each time connect is called. I don't think I just introduced blocking, or have I? I suspect the rest of the asynchronous behavior continues to happen by design, given that ReactiveMongo is reactive. Is there a better way to handle connections?
As indicated in the documentation: "A MongoDriver instance manages an actor system; A connection manages a pool of connections. In general, a MongoDriver or a MongoConnection is never instantiated more than once."
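A minimal sketch of that advice, using the same driver API as the question (the URI is hypothetical): the driver and connection are created lazily, exactly once, and shared across the application:

import reactivemongo.api.{MongoConnection, MongoDriver}
import scala.util.Try

object Mongo {
  // One actor system (driver) and one connection pool per application.
  lazy val driver: MongoDriver = new MongoDriver
  lazy val connection: Try[MongoConnection] =
    MongoConnection.parseURI("mongodb://localhost:27017").map(driver.connection(_))
}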
If you're using Play's dependency injection, you can use @Singleton to make the MongoDB driver a singleton.
Please read UserDAOMongo as an example: https://github.com/luongbalinh/play-mongo/blob/master/app/dao/mongo/impl/UserDAOMongo.scala