Slick connection pool creating new connections on every request - Scala

How do I use the Slick connection pool correctly?
For example, with this config:
database {
  dataSourceClass = org.postgresql.ds.PGSimpleDataSource
  driver = org.postgresql.Driver
  properties = {
    url = "jdbc:postgresql://172.17.0.2/sampleDB"
    user = "user"
    password = "userpass"
  }
  minConnections = 10
  maxConnections = 20
  numThreads = 10
}
I have only one client, and that client requests all persons from the API through a web browser.
Slick then opens 10 connections to the database.
When the client refreshes the browser, Slick opens 10 new connections to the database without reusing the previous ones.
After another refresh, Slick opens yet another 10 connections. (Now I have about 30 connections on the DB with only one client!)
Why? Is this normal?
Why doesn't maxConnections work?
Must I close connections after each request?
Or am I missing some configuration?
Update
This is my sample API:
trait PersonsApi extends DatabaseConfig with JsonMapper {

  val getAllPersons = (path("persons") & get) {
    complete(db.run(PersonDao.findAll))
  }

  val getPersonById = (path("persons" / IntNumber) & get) { num =>
    complete(db.run(PersonDao.findById(num)))
  }

  val personsApi =
    getAllPersons ~
    getPersonById
}
This is my example entity class (DAO pattern):
class PersonTable(tag: Tag) extends Table[Person](tag, "persons") {
  def id = column[Long]("id", O.AutoInc, O.PrimaryKey)
  def name = column[String]("name")
  def family = column[String]("family")

  override def * : ProvenShape[Person] = (id.?, name, family) <> (Person.tupled, Person.unapply)
}

object PersonDao extends BaseDao {
  def findAll = personTable.result
  def findById(id: Long) = personTable.filter(_.id === id).result
}
This is the DatabaseConfig trait:
trait DatabaseConfig extends Config {
  val driver = slick.driver.PostgresDriver
  import driver.api._

  def db = Database.forConfig("database")
}
Note: I don't use the Play framework.

Your configuration seems fine. It's impossible to say without further code samples from your application, but my bet is that you are creating your db on each and every request.
Just make sure that this code:
Database.forConfig("database")
is executed only once, perhaps by:
putting it behind a singleton injected dependency, or
by using play-slick and its way of dealing with Slick configuration (if you are using Play, which, again, is impossible to tell from your question, though I assumed it since you mentioned web requests).
EDIT (after question update):
And we have an answer. Each time you call the db method, a new Database object (together with its connection pool) is created. Just move it as I suggested above, so it is created once per application lifecycle. The easiest way (not necessarily the best one) would be to change this line:
def db = Database.forConfig("database")
to this:
lazy val db = Database.forConfig("database")
The above would immediately solve your problem (assuming that there is only one instance of PersonsApi created in your application).
Another (perhaps better) solution would be to create something like this:
object DatabaseConfig extends Config {
  val driver = slick.driver.PostgresDriver
  import driver.api._

  lazy val db = Database.forConfig("database")
}
and then change your API to this:
trait PersonsApi extends JsonMapper {

  val getAllPersons = (path("persons") & get) {
    complete(DatabaseConfig.db.run(PersonDao.findAll))
  }

  val getPersonById = (path("persons" / IntNumber) & get) { num =>
    complete(DatabaseConfig.db.run(PersonDao.findById(num)))
  }

  val personsApi =
    getAllPersons ~
    getPersonById
}
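For completeness, here is a minimal sketch of how that singleton db fits the application lifecycle. It assumes akka-http (which your path/get/complete directives suggest), that your JsonMapper trait mixes in cleanly, and a hypothetical WebServer entry point:

import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.stream.ActorMaterializer

// Hypothetical entry point: the DatabaseConfig object (and its lazy val db)
// is initialized once here and lives for the whole application lifecycle.
object WebServer extends App with PersonsApi {
  implicit val system = ActorSystem("persons-api")
  implicit val materializer = ActorMaterializer()

  Http().bindAndHandle(personsApi, "0.0.0.0", 8080)

  // Release the pool's connections cleanly on shutdown.
  sys.addShutdownHook(DatabaseConfig.db.close())
}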

Related

Slick: Updates not available when fetched just after

I was trying out this Slick example, and when I try to create an entry and then fetch it right after, I don't get the record back. I modified the test case (which is here) as below.
val response = create(BankProduct("car loan", 1)).flatMap(getById)

whenReady(response) { p =>
  assert(p.get === BankProduct("car loan", 1))
}
The above fails because the created BankProduct cannot be fetched immediately afterwards.
It uses an H2 db, with the configuration below.
trait H2DBComponent extends DBComponent {
  val logger = LoggerFactory.getLogger(this.getClass)
  val driver = slick.driver.H2Driver
  import driver.api._

  val randomDB = "jdbc:h2:mem:test" + UUID.randomUUID().toString() + ";"
  val h2Url = randomDB + "MODE=MySql;DATABASE_TO_UPPER=false;INIT=runscript from 'src/test/resources/schema.sql'\\;runscript from 'src/test/resources/schemadata.sql'"

  val db: Database = {
    logger.info("Creating test connection")
    Database.forURL(url = h2Url, driver = "org.h2.Driver")
  }
}
private[repo] trait BankProductTable extends BankTable { this: DBComponent =>
  import driver.api._

  private[BankProductTable] class BankProductTable(tag: Tag) extends Table[BankProduct](tag, "bankproduct") {
    val id = column[Int]("id", O.PrimaryKey, O.AutoInc)
    val name = column[String]("name")
    val bankId = column[Int]("bank_id")

    def bank = foreignKey("bank_product_fk", bankId, bankTableQuery)(_.id)
    def * = (name, bankId, id.?) <> (BankProduct.tupled, BankProduct.unapply)
  }

  protected val bankProductTableQuery = TableQuery[BankProductTable]
  protected def bankProductTableAutoInc = bankProductTableQuery returning bankProductTableQuery.map(_.id)
}
I don't understand why this is happening or how to avoid it.
I tried adding the property autoCommit as well, but that didn't work either.
I'd appreciate any help clarifying this.
This might be due to the in-memory database content being lost after the create call closes its connection. According to the docs:
By default, closing the last connection to a database closes the database. For an in-memory database, this means the content is lost. To keep the database open, add ;DB_CLOSE_DELAY=-1 to the database URL. To keep the content of an in-memory database as long as the virtual machine is alive, use jdbc:h2:mem:test;DB_CLOSE_DELAY=-1.
However, after adding DB_CLOSE_DELAY=-1 there will be errors due to
runscript from 'src/test/resources/schemadata.sql'
which is executed on each connection, so refactoring is necessary such that the database is populated only once, on initialization.
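One possible shape for that refactoring is sketched below. It keeps your H2DBComponent but moves the scripts out of the URL and runs them once via Slick; it assumes the scripts can be executed as plain SQL actions and is untested against your project:

import java.util.UUID
import scala.concurrent.Await
import scala.concurrent.duration._

trait H2DBComponent extends DBComponent {
  val driver = slick.driver.H2Driver
  import driver.api._

  // DB_CLOSE_DELAY=-1 keeps the in-memory content alive after the last
  // connection closes; the INIT scripts are dropped from the URL so they
  // no longer re-run on every new connection.
  val h2Url = "jdbc:h2:mem:test" + UUID.randomUUID() +
    ";DB_CLOSE_DELAY=-1;MODE=MySql;DATABASE_TO_UPPER=false"

  val db: Database = {
    val database = Database.forURL(url = h2Url, driver = "org.h2.Driver")
    // Populate the schema exactly once, when the component is initialized.
    val init = sqlu"runscript from 'src/test/resources/schema.sql'" andThen
      sqlu"runscript from 'src/test/resources/schemadata.sql'"
    Await.result(database.run(init), 10.seconds)
    database
  }
}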

Insert into Postgres using Slick in a non-blocking way

class Employee(tag: Tag) extends Table[table_types.user](tag, "EMPLOYEE") {
  def employeeID = column[Int]("EMPLOYEE_ID")
  def empName = column[String]("NAME")
  def startDate = column[String]("START_DATE")
  def * = (employeeID, empName, startDate)
}

object employeeHandle {
  def insert(emp: Employee): Future[Any] = {
    val dao = new SlickPostgresDAO
    val db = dao.db
    val insertdb = DBIO.seq(employee += emp)
    db.run(insertdb)
  }
}
Insert a million employee records into the database:
object Hello extends App {
  val employees = List[*1 million employee list*]
  for (employee <- employees) {
    employeeHandle.insert(employee)
    *Code to connect to rest api to confirm entry*
  }
}
However, when I run the above code I soon run out of connections to Postgres. How can I do this in parallel (in a non-blocking way) while making sure I don't run out of connections to Postgres?
I don't think you need to do it in parallel; I don't see how that would solve the problem. Instead, you could simply create the connection before you start the loop and pass it in as employeeHandle.insert(db, employee).
Something like this (I don't know Scala):
object Hello extends App {
  val dao = new SlickPostgresDAO
  val db = dao.db

  val employees = List[*1 million employee list*]
  for (employee <- employees) {
    employeeHandle.insert(db, employee)
    *Code to connect to rest api to confirm entry*
  }
}
Almost all the Slick insert examples I have come across block to obtain the results. It would be nice to have one that doesn't.
My take on it:
object Hello extends App {
  val employees = List[*1 million employee list*]
  val groupedList = employees.grouped(10).toList

  insertTests()

  def insertTests(l: List[List[Employee]] = groupedList): Unit = {
    val ins = l.head
    val futures = ins.map { emp => employeeHandle.insert(emp) }
    val seq = Future.sequence(futures)
    Await.result(seq, Duration.Inf)
    // Recurse on the tail; checking l.tail here avoids calling head on an empty list.
    if (l.tail.nonEmpty) insertTests(l.tail)
  }
}
Also, the DAO and connection in employeeHandle should be created outside the insert method:
object employeeHandle {
  val dao = new SlickPostgresDAO
  val db = dao.db

  def insert(emp: Employee): Future[Any] = {
    val insertdb = DBIO.seq(employee += emp)
    db.run(insertdb)
  }
}
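A further refinement, sketched below: Slick can insert a whole batch with a single multi-row statement via ++=, and chaining the resulting Futures keeps only one batch in flight at a time without blocking on Await. The employees table query and the row type are stand-ins for the question's definitions:

import scala.concurrent.{ExecutionContext, Future}
import slick.driver.PostgresDriver.api._

object BatchInsert {
  val employees = TableQuery[Employee] // the table from the question

  // Each ++= issues one multi-row INSERT; flatMap starts the next batch only
  // after the previous one finishes, so the pool is never exhausted and no
  // thread is blocked on Await.
  def insertAll(db: Database, rows: List[table_types.user])
               (implicit ec: ExecutionContext): Future[Unit] =
    rows.grouped(1000).foldLeft(Future.successful(())) { (prev, batch) =>
      prev.flatMap(_ => db.run(employees ++= batch).map(_ => ()))
    }
}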

Testing Play + Slick app

I have a simple CRUD app built with Scala Play 2.4.3 and Play-slick 1.1.0 (Slick 3.1.0) that uses a MySQL database for persistent storage.
I was trying to create tests for my app and I saw 2 main options:
mocking database access, which, as far as I've seen, requires some code changes
making the tests use an alternative database (probably an in-memory H2)
What's the best approach (advantages and disadvantages)?
I prefer the second approach, but I'm finding some difficulties in setting up the tests.
What do I need to do? First, I think I need to make the tests run with a FakeApplication, right? Do I need any sbt dependency to be able to do that?
After that, how do I specify that the H2 database should be used?
I had the same struggle, and I came up with a solution like this (using the second approach).
Create a context for the DAOs to use:
trait BaseContext {
  def dbName: String

  val dbConfig = DatabaseConfigProvider.get[JdbcProfile](dbName)
  val db = dbConfig.db
  val profile = dbConfig.driver

  val tables = new Tables { // this is generated by the Schema Code Generator
    override val profile: JdbcProfile = dbConfig.driver
  }
}

@Singleton
class AppContext extends BaseContext {
  def dbName = "mysql" // name in your conf right after "slick.dbs"
}

@Singleton
class TestingContext extends BaseContext {
  def dbName = "h2"
}
Then create a module to bind the contexts, and don't forget to enable it in your conf using play.modules.enabled += "your.Module":
class ContextModule(environment: Environment, configuration: Configuration) extends AbstractModule {
  override def configure(): Unit = {
    if (configuration.getString("app.mode").contains("test")) {
      bind(classOf[BaseContext]).to(classOf[TestingContext])
    } else {
      bind(classOf[BaseContext]).to(classOf[AppContext])
    }
  }
}
And inject it into every DAO you've created:
class SomeDAO @Inject()(context: BaseContext) {
  val dbConfig = context.dbConfig
  val db = context.db
  val tables = context.tables
  import tables.profile.api._

  def otherStuff ...
  // you can call db.run(...), tables.WhateverYourTableIs, tables.TableRowCaseClass, ...
}
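For completeness, a hypothetical Play 2.4 controller using such a DAO; someQuery is just an illustrative method name:

import javax.inject.Inject
import play.api.mvc.{Action, Controller}
import scala.concurrent.ExecutionContext.Implicits.global

// SomeDAO arrives with whichever BaseContext the ContextModule bound:
// AppContext in production, TestingContext when app.mode = "test".
class SomeController @Inject()(dao: SomeDAO) extends Controller {
  def index = Action.async {
    dao.someQuery().map(rows => Ok(rows.toString))
  }
}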
And the final step: your configuration file. In my case I used app.mode to mark the environment, and I use a separate .conf file per environment. Of course, these conf files must contain the correct DB configuration. Here's a sample:
app.mode = "test"

# Database configuration
slick.dbs = {
  # for unit test
  h2 {
    driver = "slick.driver.H2Driver$"
    db = {
      url = "jdbc:h2:mem:test;MODE=MYSQL"
      driver = "org.h2.Driver"
      keepAliveConnection = true
    }
  }
}
I'm pretty sure my solution is not an elegant one, but it delivers the goods. :)
Any better solution is welcome!
My solution was to add step(Play.start(fakeApp)) at the beginning of each spec and step(Play.stop(fakeApp)) at the end of each spec.
Also:
def fakeApp: FakeApplication = {
  FakeApplication(additionalConfiguration =
    Map(
      "slick.dbs.default.driver" -> "slick.driver.H2Driver$",
      "slick.dbs.default.db.driver" -> "org.h2.Driver",
      "slick.dbs.default.db.url" -> "jdbc:h2:mem:play"
    ))
}
This was needed because I'm using play-slick, which requires configuration like:
slick.dbs.default.driver = "slick.driver.MySQLDriver$"
slick.dbs.default.db.driver = "com.mysql.jdbc.Driver"
slick.dbs.default.db.url = "jdbc:mysql://localhost/database"
slick.dbs.default.db.user = "user"
slick.dbs.default.db.password = "password"
More info in the docs.

Best Practice for Using a Connection Pool in Slick 3.0.0 Together with the Play Framework

I followed the documentation of Slick 3.0.0-RC1, using Typesafe Config for the database connection configuration. Here is my conf:
database = {
  driver = "org.postgresql.Driver"
  url = "jdbc:postgresql://localhost:5432/postgre"
  user = "postgre"
}
I created a file Locale.scala as:
package models

import slick.driver.PostgresDriver.api._
import scala.concurrent.Future

case class Locale(id: String, name: String)

class Locales(tag: Tag) extends Table[Locale](tag, "LOCALES") {
  def id = column[String]("ID", O.PrimaryKey)
  def name = column[String]("NAME")
  def * = (id, name) <> (Locale.tupled, Locale.unapply)
}

object Locales {
  private val locales = TableQuery[Locales]
  val db = Database.forConfig("database")

  def count: Future[Int] =
    try db.run(locales.length.result)
    finally db.close
}
Then I got confused about when and where the proper time is to create the Database object using
val db = Database.forConfig("database")
If I create db like this, there will be as many Database objects as I have models. So what is the best practice to get this working?
You can create an object DBLocator and load the Database with a lazy val, so that it is created only on demand.
You can always invoke the method defined on DBLocator to get an instance of a Session.
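A minimal sketch of that idea (DBLocator is not a library class; the name and shape are just an illustration):

import slick.driver.PostgresDriver.api._

// One lazily created Database (and therefore one connection pool) for the
// whole application; model objects ask the locator instead of each calling
// Database.forConfig themselves.
object DBLocator {
  lazy val db: Database = Database.forConfig("database")
}

object Locales {
  private val locales = TableQuery[Locales]

  // No db.close here; close the pool once, on application shutdown.
  def count: scala.concurrent.Future[Int] =
    DBLocator.db.run(locales.length.result)
}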

Why multiple MongoDB connections with Casbah?

I have to manage connections to multiple MongoDB databases using the Casbah Scala client. I have an approximation that works, but it opens hundreds of connections.
I want to keep a Map[String, MongoDB] that stores one connection per database (the database name is the key). I'm using this in Spark Streaming with a two-node cluster, so I think it is a serialization issue, but I don't know how to fix it.
Take a look at my class:
abstract class AbstractMongoDAO(@transient val config: Config) extends Closeable with Serializable {

  @transient private val mongoConfig = config.getConfig(CONFIG_KEY)
  private val host = mongoConfig.getString(CONFIG_KEY_HOST)
  @transient private var _mongoClient: MongoClient = MongoClient(host)
  private var _dbs: mutable.HashMap[String, MongoDB] = mutable.HashMap()

  protected def dbs(): mutable.HashMap[String, MongoDB] = {
    if (_dbs == null)
      _dbs = mutable.HashMap()
    _dbs
  }

  def mongoClient: MongoClient = {
    if (_mongoClient == null) {
      _mongoClient = MongoClient(host)
    }
    _mongoClient
  }

  def db(dbName: String): MongoDB = {
    if (dbs.get(dbName) == None) {
      _dbs += (dbName -> mongoClient.getDB(dbName))
    }
    _dbs.get(dbName).get
  }

  override def close() = {
    Option(_mongoClient).foreach(_.close())
  }
}

private object AbstractMongoDAO {
  val CONFIG_KEY = "mongo"
  val CONFIG_KEY_HOST = "host"
}
And then I have another class that extends AbstractMongoDAO:
class MongoDAO(override val config: Config)
  extends AbstractMongoDAO(config) with Serializable
And I get a db connection with this simple code, where appName is a variable database name:
val _db = db(appName)
What am I doing wrong?
Casbah is built on top of the official Java driver. A MongoClient represents an internal pool of connections to a MongoDB cluster. If you use the same cluster and only change the database name, not the host, you don't need to create multiple MongoClients; one is enough for the whole application.
To configure the MongoClient, check this documentation and the corresponding options. If you have multiple DB hosts or still want to use multiple MongoClients, you can build your options and create a MongoClient like this:
val options = MongoClientOptions.builder()
  .connectionsPerHost(1)
  // add other options if needed
  .build()

val _mongoClient = MongoClient(host, options)
In your case, since only the db name needs to change and not the db host, I would change the method that gets the db to this:
def db(dbName: String): MongoDB =
  mongoClient.getDB(dbName) // the db is created in Mongo on the fly if it does not exist
And you don't need the map anymore.
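As a side note on the Spark serialization concern raised in the question: a common pattern (a sketch; it uses only MongoClient and getDB from Casbah) is to hold the client in a @transient lazy val, so the pool is never serialized with the closure and each executor re-creates its single client on first use:

import com.mongodb.casbah.{MongoClient, MongoDB}

class MongoDAO(host: String) extends Serializable {
  // Not serialized with the closure; re-created lazily on each executor.
  @transient lazy val mongoClient: MongoClient = MongoClient(host)

  def db(dbName: String): MongoDB = mongoClient.getDB(dbName)
}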