How to enforce creating new transaction in Lift Mapper? - scala

We are using Lift + Mapper (version 2.4) in our project. We are also using the transaction-per-request pattern, S.addAround(DB.buildLoanWrapper()).
In one of our requests we need a nested transaction, which we found to be problematic. One possible 'hack' we found is to start the transaction in a separate thread (as in the example below), because the DB object uses a ThreadLocal to manage the current connection and transaction state.
Is there a better implementation (safer, and without multi-threading) than the one below?
import net.liftweb.db.{DefaultConnectionIdentifier, DB}
import akka.dispatch.Future
/**
* Will create a new transaction if none is in progress and commit it upon completion or rollback on exceptions.
* If a transaction already exists, it has no effect, the block will execute in the context
* of the existing transaction. The commit/rollback is handled in this case by the parent transaction block.
*/
def inTransaction[T](f: ⇒ T): T = DB.use(DefaultConnectionIdentifier)(conn ⇒ f)
/**
* Causes a new transaction to begin and commit after the block’s execution,
* or roll back if an exception occurs. Invoking a transaction always causes a new one to be created,
* even if called in the context of an existing transaction.
*/
def transaction[T](f: ⇒ T): T = Future(DB.use(DefaultConnectionIdentifier)(conn ⇒ f)).get
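For reference, this is how the two helpers above are meant to compose (save and writeAuditRecord are hypothetical names, used only to illustrate the semantics):

```scala
inTransaction {
  // joins the transaction already bound to this thread, if any
  save(user)
  transaction {
    // always runs in a fresh transaction (here: on a separate thread),
    // so it commits or rolls back independently of the outer block
    writeAuditRecord(user)
  }
}
```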

Unfortunately, there doesn't seem to be an existing API for this. You can ask about adding one on the Google group. However, there's nothing stopping you from doing something like:
DB.use(DefaultConnectionIdentifier) { sc =>
  val conn: java.sql.Connection = sc.connection
  // use regular JDBC mechanisms here
}
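If the goal is nested-transaction behaviour without spawning a thread, one option along these lines is to drop down to JDBC savepoints on that same connection. This is only a sketch, assuming your driver supports savepoints; nested is not a Lift API:

```scala
import java.sql.Savepoint
import net.liftweb.db.{DB, DefaultConnectionIdentifier}

// Sketch: emulate a nested transaction with a JDBC savepoint.
// On failure, rolls back to the savepoint instead of aborting the outer transaction.
def nested[T](f: => T): T = DB.use(DefaultConnectionIdentifier) { sc =>
  val conn = sc.connection
  val sp: Savepoint = conn.setSavepoint()
  try {
    val result = f
    conn.releaseSavepoint(sp)
    result
  } catch {
    case e: Exception =>
      conn.rollback(sp)
      throw e
  }
}
```

Since DB.use joins the transaction already bound to the thread, the savepoint scopes the inner block while the outer transaction still decides the final commit.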

Using a separate ExecutionContext for Slick

I'm using Play 2.5 with Slick. The docs on this topic simply state that everything is managed by Slick and Play's Slick module. This example, however, prints Dispatcher[akka.actor.default-dispatcher]:
class MyDbioImpl @Inject()(protected val dbConfigProvider: DatabaseConfigProvider)(implicit ec: ExecutionContext)
  extends HasDatabaseConfigProvider[JdbcProfile] {
  import profile.api._

  def selectSomeStuff(): Future[MyResult] = db.run {
    println(ec)
    [...]
  }
}
Since the execution context is printed inside db.run, it seems like all of my database access will also be executed on the default execution context.
I found this answer to an older question which, at the time, solved the problem. But that solution has since been deprecated; it is suggested to use dependency injection to acquire the application context instead. When I try to do this, I get an error saying that play.akka.actor.slick-context does not exist...
class MyDbioProvider @Inject()(actorSystem: ActorSystem,
                               protected val dbConfigProvider: DatabaseConfigProvider)
  extends Provider[MyDbioImpl] {
  override def get(): MyDbioImpl = {
    val ec = actorSystem.dispatchers.lookup("play.akka.actor.slick-context")
    new MyDbioImpl(dbConfigProvider)(ec)
  }
}
Edit:
Is Slick's execution context a "normal" execution context which is defined in a config file somewhere? Where does the context switch take place? I assumed the entry point to the "database world" is at db.run.
According to Slick:
Every Database contains an AsyncExecutor that manages the thread pool
for asynchronous execution of Database I/O Actions. Its size is the
main parameter to tune for the best performance of the Database
object. It should be set to the value that you would use for the size
of the connection pool in a traditional, blocking application (see
About Pool Sizing in the HikariCP documentation for further
information). When using Database.forConfig, the thread pool is
configured directly in the external configuration file together with
the connection parameters. If you use any other factory method to get
a Database, you can either use a default configuration or specify a
custom AsyncExecutor.
Basically, it says you don't need to create an isolated ExecutionContext, since Slick already maintains an isolated thread pool internally. Any call you make to Slick is non-blocking, so you can keep using the default ExecutionContext.
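For example, with Database.forConfig the executor size sits next to the connection parameters in the external config. A sketch (numThreads sizes the AsyncExecutor; the URL and pool values here are illustrative):

```hocon
mydb {
  url = "jdbc:postgresql://localhost/mydb"
  driver = org.postgresql.Driver
  connectionPool = HikariCP
  numThreads = 10   // size of Slick's AsyncExecutor thread pool
}
```

loaded with Database.forConfig("mydb").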
Slick's implementation of this can be seen in the BasicBackend.scala file: the runInContextSafe method. The code is as follows:
val promise = Promise[R]
val runnable = new Runnable {
  override def run() = {
    try {
      promise.completeWith(runInContextInline(a, ctx, streaming, topLevel, stackLevel = 1))
    } catch {
      case NonFatal(ex) => promise.failure(ex)
    }
  }
}
DBIO.sameThreadExecutionContext.execute(runnable)
promise.future
As shown above, a Promise is used here: the action is submitted for execution on Slick's internal thread pool, and the Promise's Future is returned to the caller. So by the time Await.result/ready is executed, the Promise has typically already been completed by Slick's internal thread, and you are merely retrieving the result; this is why executing Await.result/ready in an environment such as Play remains effectively non-blocking.
For details, please refer to Scala's documentation on Future and Promise: https://docs.scala-lang.org/overviews/core/futures.html

Facing issues with sqlalchemy+postgresql session management

I am using sqlalchemy with PostgreSQL and the Pyramid web framework. Here is my models.py:
engine = create_engine(db_url, pool_recycle=3600,
                       isolation_level="READ UNCOMMITTED")
Session = scoped_session(sessionmaker(bind=engine))
session = Session()
Base = declarative_base()
Base.metadata.bind = engine

class _BaseMixin(object):
    def save(self):
        session.add(self)
        session.commit()

    def delete(self):
        session.delete(self)
        session.commit()
I am inheriting both the Base and _BaseMixin for my models. For example:
class MyModel(Base, _BaseMixin):
    __tablename__ = 'MY_MODELS'
    id = Column(Integer, primary_key=True, autoincrement=True)
The reason is that I would like to do something like
m = MyModel()
m.save()
I am facing weird issues with session all the time. Sample error messages include
InvalidRequestError: This session is in 'prepared' state; no further SQL can be emitted within this transaction.
InvalidRequestError: A transaction is already begun. Use subtransactions=True to allow subtransactions.
All I want to do is to commit what I have in the memory into the DB. But intermittently SQLAlchemy throws errors like described above and fails to commit.
Is there something wrong with my approach?
tl;dr The problem is that you're sharing one Session object between threads. It fails because the Session object is not itself thread-safe.
What happens?
You create a Session object, which is bound to the current thread (let's call it Thread-1). Then you close over it inside your _BaseMixin. Incoming requests are handled in different threads (let's call them Thread-2 and Thread-3). While a request is being handled, you call model.save(), which uses the Session object created in Thread-1 from Thread-2 or Thread-3. Multiple requests can run concurrently, which, together with a thread-unsafe Session object, gives you totally nondeterministic behaviour.
How to handle?
When using scoped_session(), each time you create a new object with Session(), it is bound to the current thread. Furthermore, if there is already a session bound to the current thread, it returns the existing session instead of creating a new one.
So you can move session = Session() from the module level into your save() and delete() methods. This ensures that you are always using the session bound to the current thread.
class _BaseMixin(object):
    def save(self):
        session = Session()
        session.add(self)
        session.commit()

    def delete(self):
        session = Session()
        session.delete(self)
        session.commit()
But this looks like duplication, and it also doesn't make much sense to create the Session object explicitly (it will always return the same object within the current thread). So SQLAlchemy provides the ability to access the current thread's session implicitly, which produces clearer code.
class _BaseMixin(object):
    def save(self):
        Session.add(self)
        Session.commit()

    def delete(self):
        Session.delete(self)
        Session.commit()
Please also note that for normal applications you never want to create Session objects explicitly; instead, use implicit access to the thread-local session via Session.some_method().
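The per-thread registry behaviour that scoped_session relies on can be sketched with plain threading.local, without SQLAlchemy (the ScopedRegistry class and all names here are illustrative only):

```python
import threading

class ScopedRegistry:
    """Minimal per-thread registry, mimicking the idea behind scoped_session."""
    def __init__(self, factory):
        self.factory = factory
        self.local = threading.local()

    def __call__(self):
        # Return the object bound to the current thread, creating it on first use.
        if not hasattr(self.local, "obj"):
            self.local.obj = self.factory()
        return self.local.obj

registry = ScopedRegistry(object)

a = registry()
b = registry()
print(a is b)  # True: same thread -> same object

result = {}
def worker():
    result["obj"] = registry()

t = threading.Thread(target=worker)
t.start()
t.join()
print(result["obj"] is a)  # False: different thread -> different object
```

This is why calling Session() from each request thread is safe: every thread gets its own session.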
Further reading
Contextual/Thread-local Sessions.
When do I construct a Session, when do I commit it, and when do I close it?.

Slick and play controller testing

I have a Play controller method:
def insertDepartment = Action(parse.json) { request =>
  MyDataSourceProvider.db.withSession { implicit session =>
    val departmentRow = DepartmentRow(1, Option("Department1"))
    departmentService.insert(departmentRow)
  }
}
Note that MyDataSourceProvider.db provides a slick.driver.PostgresDriver.simple.Database, and withSession provides an implicit session to departmentService.insert.
When I test departmentService, the session is provided by a test fixture, as mentioned in this post. sessionWrapper is a simple function which creates a session, provides that session to a test block, and rolls back the data after the test finishes.
sessionWrapper { implicit session =>
  val departmentRow = DepartmentRow(1, Option("Department1"))
  departmentService.insert(departmentRow)
}
This works nicely and as expected, not polluting the database when the service tests run: tests should not persist anything in the db, but roll back after executing successfully.
Now, when testing the Play controller, I need a way to use sessionWrapper, to be able to roll back controller tests in a similar fashion to the service tests.
Note the MyDataSourceProvider.db.withSession in the controller's insertDepartment.
Wrapping the controller test with sessionWrapper has no effect, since the controller def doesn't accept an implicit session, but uses the one from MyDataSourceProvider.db.withSession.
What's the best way to handle this? I tried creating a controller trait, so that an implementation can be injected and the mixin can differ between test and real code, but I haven't found a way to "pass" the session for tests and not for production code. Any ideas?
Since Slick is blocking, you don't need Action.async. I wonder why that even compiles, because I don't see a future there, but I am not that familiar with Play.
There are several alternatives for what you can do:
My favorite: don't use transaction rollback for testing; instead, use a test database which you re-create for each test.
pull out
val departmentRow = DepartmentRow(1, Option("Department1"))
departmentService.insert(departmentRow)
into a method and only test that method, not the controller
use sessionWrapper in the controller and let it check a configuration flag that tells it whether it is in test mode and should do a rollback, or in production mode.

Lightweight eventing Plain Futures or Akka

I have a few use cases, as follows.
1) A createUser API call is made via the front end. Once this call succeeds, meaning the data is saved to the db successfully, success is returned to the front end. The API contract between front end and back end ends there.
2) Now the back end needs to generate and fire a CreateUser event, which creates the user in a third-party app (for the sake of example, say it creates the user in an external entitlement system). This is a fully asynchronous, background-type process, where the client is neither aware of it nor waiting for its success or failure. But all calls to this CreateUser event must be logged, along with their success or failure, for auditing and remediation (in case of failure) purposes.
The first approach is to design Future-based async APIs for these async events (the rest of the app uses Futures and async heavily) and log incoming events and the success/failure of the result to the db.
The second approach is to use Akka and have an individual actor for these events (CreateUser being one example), which may look something like:
class CreateUserActor extends Actor {
  def receive = {
    case CreateUserEvent(user, role) =>
      val originalSender = sender
      val res = Future {
        blocking {
          // persist CreateUserEvent to db
          SomeService.createUser(user, role)
        }
      }
      res onComplete {
        case Success(u) => // persist success to db
        case Failure(e) => // persist failure to db
      }
  }
}
The third approach: use Akka Persistence, so persisting events happens out of the box via event-sourcing journaling. However, persisting the event's success or failure is still manual (code has to be written for it). Though this third approach may look promising, it may not pay off: we'd be relying on Akka Persistence just to persist events, the second requirement of persisting success/failure is still manual, and we'd have to maintain one more store (the persisted journal, etc.), so I'm not sure we're buying much here.
The second approach requires writing persistence code for both cases (incoming events and the results of the events).
The first approach may not look very promising.
Although it may sound like it, I don't intend this to be an opinion-based question; I'm looking for genuine advice, with pros and cons, on the mentioned approaches, or anything else that may fit well here.
FYI: this particular application is a Play application running on a Play server, so using actors isn't an issue.
Since this is a Play application you could use the Akka event stream to publish events without needing a reference to the backend worker actor.
For example, with the following in actors/Subscriber.scala:
package actors

import akka.actor.Actor
import model._

class Subscriber extends Actor {
  context.system.eventStream.subscribe(self, classOf[DomainEvent])

  def receive = {
    case event: DomainEvent =>
      println("Received DomainEvent: " + event)
  }
}
... and something like this in model/events.scala:
package model
trait DomainEvent
case class TestEvent(message: String) extends DomainEvent
... your controller could publish a TestEvent like this:
object Application extends Controller {
  import akka.actor.Props
  import play.libs.Akka

  Akka.system.actorOf(Props(classOf[actors.Subscriber])) // Create the backend actor

  def index = Action {
    Akka.system.eventStream.publish(model.TestEvent("message")) // publish an event
    Ok(views.html.index("Hi!"))
  }
}

Avoiding Squeryl transaction in Play! controllers

I am learning Play! and I have followed the To do List tutorial. Now, I would like to use Squeryl in place of Anorm, so I tried to translate the tutorial, and actually it works.
Still, there is one little thing that irks me. Here is the relevant part of my model
def all: Iterable[Task] = from(tasks) {s => select(s)}
and the corresponding action in the controller to list all tasks
def tasks = Action {
  inTransaction {
    Ok(views.html.index(Task.all, taskForm))
  }
}
The view contains, for instance
<h1>@tasks.size task(s)</h1>
What I do not like is that, unlike in the methods to update or delete tasks, I had to manage the transaction inside the controller action.
If I move inTransaction to the all method, I get an exception,
[RuntimeException: No session is bound to current thread, a session must be created via Session.create and bound to the thread via 'work' or 'bindToCurrentThread' Usually this error occurs when a statement is executed outside of a transaction/inTrasaction block]
because the view tries to obtain the size of tasks, but the transaction is already closed at that point.
Is there a way to use Squeryl transaction only in the model and not expose these details up to the controller level?
Well, it's because of the lazy evaluation of the Iterable, which requires a bound session (the size() method triggers the query). It should work if you turn the Iterable into a List or Vector (IndexedSeq), so the query is fully evaluated while the session is still open:
from(tasks)(s => select(s)).toIndexedSeq // or .toList