The documentation says to use a DataSource and its getConnection function:
https://jdbc.postgresql.org/documentation/94/ds-ds.html
When the datasource has autocommit disabled, we have:
ds.isDefaultAutoCommit
res0: Boolean = false
Getting a connection:
val conn = ds.getConnection
Autocommit is nevertheless enabled:
conn.getAutoCommit
res1: Boolean = true
Looking at the code, we see that getConnection uses the parent class's method without changing the commit mode:
https://github.com/pgjdbc/pgjdbc/blob/master/pgjdbc/src/main/java/org/postgresql/ds/PGConnectionPoolDataSource.java
Is that just an implementation anomaly/limitation, or is there some other reasoning behind it?
It looks like a bug. The class PGPooledConnection (the handle to the connection in the pool) takes a boolean argument autoCommit in its constructor, but it doesn't do anything with it (such as resetting the auto-commit status before handing out the logical connection).
You should create an issue on their GitHub.
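Until it is fixed upstream, a practical workaround is to reset auto-commit yourself every time you take a connection from the pool. Here is a minimal sketch of the idea; it uses simplified stand-in types instead of the real javax.sql interfaces so it runs standalone, and the helper name borrowConnection is hypothetical:

```scala
// Simplified stand-ins for javax.sql.DataSource / java.sql.Connection,
// so the sketch runs without a database. The real types expose the same methods.
class FakeConnection {
  // A freshly pooled connection comes back with auto-commit enabled,
  // mirroring the behaviour described above.
  private var auto = true
  def getAutoCommit: Boolean = auto
  def setAutoCommit(b: Boolean): Unit = auto = b
}

class FakeDataSource(val isDefaultAutoCommit: Boolean) {
  def getConnection: FakeConnection = new FakeConnection
}

// Hypothetical helper: borrow a connection and force it to honour the
// data source's configured default, since the pool does not do it for us.
def borrowConnection(ds: FakeDataSource): FakeConnection = {
  val conn = ds.getConnection
  if (conn.getAutoCommit != ds.isDefaultAutoCommit)
    conn.setAutoCommit(ds.isDefaultAutoCommit)
  conn
}

val ds = new FakeDataSource(isDefaultAutoCommit = false)
val conn = borrowConnection(ds)
```

With the real pgjdbc types, the same reset in a small wrapper around getConnection keeps every borrowed connection consistent with the datasource's configuration.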
I'm using Play 2.5 with Slick. The docs on this topic simply state that everything is managed by Slick and Play's Slick module. This example however, prints Dispatcher[akka.actor.default-dispatcher]:
class MyDbioImpl @Inject()(protected val dbConfigProvider: DatabaseConfigProvider)(implicit ec: ExecutionContext)
    extends HasDatabaseConfigProvider[JdbcProfile] {
  import profile.api._

  def selectSomeStuff(): Future[MyResult] = db.run {
    println(ec)
    [...]
  }
}
Since the execution context is printed inside db.run, it seems like all of my database access will also be executed on the default execution context.
I found this answer to an older question which, at the time, solved the problem. But that solution has since been deprecated; it is now suggested to use dependency injection to acquire the execution context. When I try to do this, I get an error saying that play.akka.actor.slick-context does not exist:
class MyDbioProvider @Inject()(actorSystem: ActorSystem,
                               protected val dbConfigProvider: DatabaseConfigProvider)
    extends Provider[MyDbioImpl] {
  override def get(): MyDbioImpl = {
    val ec = actorSystem.dispatchers.lookup("play.akka.actor.slick-context")
    new MyDbioImpl(dbConfigProvider)(ec)
  }
}
Edit:
Is Slick's execution context a "normal" execution context which is defined in a config file somewhere? Where does the context switch take place? I assumed the entry point to the "database world" is at db.run.
According to Slick:
Every Database contains an AsyncExecutor that manages the thread pool for asynchronous execution of Database I/O Actions. Its size is the main parameter to tune for the best performance of the Database object. It should be set to the value that you would use for the size of the connection pool in a traditional, blocking application (see About Pool Sizing in the HikariCP documentation for further information). When using Database.forConfig, the thread pool is configured directly in the external configuration file together with the connection parameters. If you use any other factory method to get a Database, you can either use a default configuration or specify a custom AsyncExecutor.
Basically, it says you don't need to create an isolated ExecutionContext, since Slick already isolates a thread pool internally. Any call you make to Slick is non-blocking, so you should use the default ExecutionContext.
Slick's implementation of this can be seen in the BasicBackend.scala file: the runInContextSafe method. The code is as follows:
val promise = Promise[R]
val runnable = new Runnable {
  override def run() = {
    try {
      promise.completeWith(runInContextInline(a, ctx, streaming, topLevel, stackLevel = 1))
    } catch {
      case NonFatal(ex) => promise.failure(ex)
    }
  }
}
DBIO.sameThreadExecutionContext.execute(runnable)
promise.future
As shown above, a Promise is created, the work is scheduled onto Slick's internal thread pool, and the Promise's Future is returned. By the time Await.result/ready is called, the Promise has most likely already been completed by Slick's internal threads, so retrieving the result is cheap, and executing Await.result/ready in an environment such as Play remains effectively non-blocking.
For details, please refer to Scala's documentation on Future and Promise: https://docs.scala-lang.org/overviews/core/futures.html
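The hand-off pattern above can be reproduced with nothing but the standard library: a Promise is completed on a worker pool while the caller only ever sees the Future. This is a sketch of the shape of runInContextSafe, not Slick's actual code; the global ExecutionContext stands in for Slick's AsyncExecutor:

```scala
import scala.concurrent.{Await, Future, Promise}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global
import scala.util.control.NonFatal

// Sketch: complete a Promise from a Runnable scheduled on a worker pool,
// and hand the Future back to the caller immediately.
def runOnPool[R](work: => R): Future[R] = {
  val promise = Promise[R]()
  val runnable = new Runnable {
    override def run(): Unit =
      try promise.success(work)
      catch { case NonFatal(ex) => promise.failure(ex) }
  }
  global.execute(runnable) // stand-in for Slick's internal AsyncExecutor
  promise.future           // the caller never blocks here
}

val result = Await.result(runOnPool(21 * 2), 5.seconds)
```

The caller's thread does no work between submitting the Runnable and awaiting the Future, which is exactly why db.run can be called from the default ExecutionContext without starving it.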
ScalikeJDBC's ConnectionPool docs page says:
Borrowing Connections
Simply just call #borrow method.
import scalikejdbc._
val conn: java.sql.Connection = ConnectionPool.borrow()
val conn: java.sql.Connection = ConnectionPool('named).borrow()
Be careful. The connection object should be released by yourself.
However there's no mention of how to do it.
I can always call Connection.close(), but by 'releasing' the Connection I understand that I'm supposed to return it to the ConnectionPool, not close it (otherwise the purpose of having a ConnectionPool would be defeated).
My doubts are:
In general, what does 'releasing' a Connection (that has been borrowed from ConnectionPool) mean?
In ScalikeJDBC, how do I 'release' a Connection borrowed from ConnectionPool?
Calling close is fine. As per the Oracle docs: "Closing a connection instance that was obtained from a pooled connection does not close the physical database connection." The DBConnection in scalikejdbc just wraps the java.sql.Connection and delegates calls to close. The usual way of doing this with scalikejdbc is with the using function, which is essentially an implementation of Java's try-with-resources.
See Closing JDBC Connections in Pool for a similar discussion on JDBC.
Upon a second look into the docs, ScalikeJDBC does provide a using method implementing the loan pattern, which automatically returns the connection to the ConnectionPool.
So you can borrow a connection, use it, and return it to the pool as follows:
import scalikejdbc.{ConnectionPool, using}
import java.sql.Connection
using(ConnectionPool.get("poolName").borrow()) { (connection: Connection) =>
  // use connection (only once) here
}
// connection automatically returned to pool
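The loan pattern behind using is easy to sketch yourself. This standalone version (the names loan and FakeConn are mine, not ScalikeJDBC's) shows why the resource is released even when the block completes normally or throws:

```scala
// Minimal loan pattern: lend a closeable resource to a block and
// guarantee close() runs afterwards, like Java's try-with-resources.
def loan[A <: AutoCloseable, B](resource: A)(block: A => B): B =
  try block(resource)
  finally resource.close()

// A fake "connection" that records whether it was released.
class FakeConn extends AutoCloseable {
  var closed = false
  override def close(): Unit = closed = true
}

val conn = new FakeConn
val answer = loan(conn) { c => "open inside block: " + !c.closed }
// By here, close() has run regardless of how the block exited.
```

For a pooled connection, close() hands the physical connection back to the pool rather than tearing it down, so "release" and "close" end up meaning the same call.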
I wrote the following wrapper around Play's Action that takes a function accepting both a session and a request. Here is the first version:
def ActionWithSession[A](bp: BodyParser[A])(f: Session => Request[A] => Result): Action[A] =
  Action(bp) {
    db.withSession { session: DbSession =>
      request => f(session)(request)
    }
  }
This version works well (the correct Result is returned to the browser), however each call will leak a database connection. After several calls, I started getting the following exceptions:
java.sql.SQLException: Timed out waiting for a free available connection.
When I change it to the version below (by moving the request => right after the Action), the connection leakage goes away, and it works.
def ActionWithSession[A](bp: BodyParser[A])(f: Session => Request[A] => Result): Action[A] =
  Action(bp) { request =>
    db.withSession { session: DbSession =>
      f(session)(request)
    }
  }
Why does the first version cause a connection to leak, and how does the second version fix that?
The first version of the code is not supposed to work. You should not return anything that holds a reference to the Session object from a withSession scope. Here you return a closure which holds such a reference. When the closure is later called by Play, the withSession scope has already been closed and the Session object is invalid. Admittedly, leaking the Session object in a closure happens very easily (and will be caught by Slick in the future).
Here is why it seems to work at first but leaks the Connection: Session objects acquire a connection lazily. withSession blocks return (or close) the connection at the end of the block if one has been acquired. When you leak an unused Session object from the block, however, and use it for the first time after the block has ended, it still lazily opens a connection, but nothing automatically closes it. We recognized this as undesired behavior a while ago, but haven't fixed it yet. The fix we have in mind is disallowing Session objects from acquiring connections once their .close method has been called. In your case this would have led to an exception instead of a leaked connection.
See https://github.com/slick/slick/pull/107
The correct code is indeed the second version you posted, where the returned closure's body contains the whole withSession block, not just its result.
db.withSession receives a function that takes a Session as its first argument and executes it with a session it provides. The return value of db.withSession is whatever that function returns.
In the first version, the expression passed to withSession evaluates to the function request => f(session)(request), so db.withSession ends up instantiating a session, instantiating a function object bound to that session, closing the session (before the function it instantiated is ever called!), and returning this bound function. Now Action got exactly what it wanted: a function that takes a Request[A] and gives a Result. However, at the time Play executes this Action, the session is lazily opened, but nothing returns it back to the pool.
The second version does it right: inside db.withSession we actually call f, rather than returning a function that calls f. This ensures that the call to f is nested inside db.withSession and occurs while the session is held.
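The leak is easy to reproduce without Slick at all: return a closure from a scoped block that manages a resource, and the closure will touch the resource only after the scope has torn it down. In this self-contained sketch, withResource stands in for db.withSession and the Resource class is hypothetical:

```scala
// A resource whose open/closed state we can observe.
class Resource {
  var open = true
  def use(): String = if (open) "ok" else "used after close!"
}

// Stand-in for db.withSession: opens a resource, runs the block, closes it.
def withResource[T](block: Resource => T): T = {
  val r = new Resource
  try block(r)
  finally r.open = false
}

// Broken shape (first version above): the block returns a closure that
// captures the resource; by the time the closure runs, withResource has
// already closed it.
val leaky: () => String = withResource { r => () => r.use() }

// Correct shape (second version above): the whole call happens inside
// the scope, while the resource is still open.
val safe: () => String = () => withResource { r => r.use() }
```

The real Slick case is slightly worse than this sketch: the leaked Session lazily re-acquires a fresh connection instead of failing, which is how the pool drains silently.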
Hope this helps someone!
We are using Lift + Mapper (version 2.4) in our project. We are also using the transaction-per-request pattern via S.addAround(DB.buildLoanWrapper()).
In one of our requests we need a nested transaction, which we found to be problematic. One possible 'hack' we found is to start the transaction in a separate thread (as in the example below), because the DB object uses a ThreadLocal to manage the current connection and transaction state.
Is there an implementation that is better (safer and without multi-threading) than the one below?
import net.liftweb.db.{DefaultConnectionIdentifier, DB}
import akka.dispatch.Future
/**
* Will create a new transaction if none is in progress and commit it upon completion or rollback on exceptions.
* If a transaction already exists, it has no effect, the block will execute in the context
* of the existing transaction. The commit/rollback is handled in this case by the parent transaction block.
*/
def inTransaction[T](f: ⇒ T): T = DB.use(DefaultConnectionIdentifier)(conn ⇒ f)
/**
* Causes a new transaction to begin and commit after the block’s execution,
* or rollback if an exception occurs. Invoking transaction always causes a new one to be created,
* even if called in the context of an existing transaction.
*/
def transaction[T](f: ⇒ T): T = Future(DB.use(DefaultConnectionIdentifier)(conn ⇒ f)).get
Unfortunately there doesn't seem to be an existing API. You can ask about adding one on the google group. However there's nothing stopping you from doing something like:
DB.use(DefaultConnectionIdentifier) { sc =>
  val conn: java.sql.Connection = sc.connection
  // use regular JDBC mechanisms here
}
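The 'separate thread' trick works because, as noted above, Lift's DB object keeps its current-transaction bookkeeping in a ThreadLocal: a fresh thread sees an empty slot, so DB.use begins a brand-new transaction there instead of joining the outer one. A minimal illustration of that mechanism (the names here are mine, not Lift's):

```scala
// Stand-in for the per-thread transaction state Lift's DB object keeps.
val currentTxn = new ThreadLocal[Option[String]] {
  override def initialValue(): Option[String] = None
}

// Pretend we're inside an outer transaction on the request thread.
currentTxn.set(Some("outer-txn"))
val onRequestThread = currentTxn.get()

// On a fresh thread the ThreadLocal starts empty, so a DB.use running
// there would see no transaction in progress and open a new one.
var onNewThread: Option[String] = Some("sentinel")
val t = new Thread(() => { onNewThread = currentTxn.get() })
t.start()
t.join()
```

This also explains the cost of the hack: the nested transaction cannot see the outer transaction's uncommitted changes, and blocking on the inner thread (as the Future(...).get above does) ties up two threads per request.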
I have an EJB 3.0-based application deployed on JBoss 5.1. The global transaction timeout, configured via the transactionTimeout property in ${JBOSS_HOME}/server/default/deploy/transaction-jboss-beans.xml, is fine for most of our EJB methods. However, we have some methods whose duration is expected to be much longer than the value set there. We'd like to override the timeout specifically for those methods.
We've tried to do it as explained here, i.e. leave the global value at a sensible setting and then override it for specific methods, either via the deployment descriptor in jboss.xml or via JBoss-specific annotations on the method.
The methods are in container-managed stateless session beans. I've even forced those methods to create a new transaction, since some places say the annotation only works if the transaction is created at that moment.
../..
import org.jboss.ejb3.annotation.TransactionTimeout;
../..
@Override
@TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
@TransactionTimeout(900)
public FileInfoObject setFileVariable(Desk desk, String variable, int maxBytes,
    String mimeAccepted, FileWithStream file)
    throws ParticipationFinishedException, PersistenceException {
  ../..
}
The expected behavior is that for this method the timeout should be 900.
The actual behavior, however, is the following:
if global timeout > method timeout, then the method timeout is applied
if global timeout <= method timeout, then the global timeout is applied
It seems that the applied timeout is the minimum of the two, which is a real problem when what we want is to extend the timeout for a specific method beyond the global value.
Any ideas? Am I missing something?