Doobie. Set connection timeout - scala

How to set connection timeout using Doobie?
For now, I am creating a new Hikari transactor and then configuring it:
def buildTransactor(driver: String, uri: String,
                    user: String, pwd: String,
                    timeout: Long) = for {
  ce <- ExecutionContexts.fixedThreadPool[Task](10)
  te <- ExecutionContexts.cachedThreadPool[Task]
  xa <- HikariTransactor.newHikariTransactor[Task](
          driver, uri, user, pwd, ce, te)
  _  <- configure(xa, timeout) // configure the transactor
} yield xa

def configure(xa: HikariTransactor[Task], timeout: Long) = Resource.liftF(
  xa.configure(ds => Task(ds.setConnectionTimeout(timeout)))
)
I am not sure whether this is correct; the docs say nothing about it.
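For what it's worth, a minimal usage sketch (assuming Monix Task as in the question, plus doobie.implicits._ for the sql interpolator; the JDBC URL and credentials are placeholders). Since configure is part of the Resource, the timeout is applied when the pool is acquired, before use runs:
import doobie.implicits._

// Placeholder connection details; buildTransactor is the method from the question.
val one: Task[Int] =
  buildTransactor("org.postgresql.Driver", "jdbc:postgresql:mydb", "user", "pwd", timeout = 30000L)
    .use(xa => sql"SELECT 1".query[Int].unique.transact(xa))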

Related

Scala integration tests for Caliban GraphQL subscriptions

We have a Caliban GraphQL application, used with the Play framework. It is well covered with integration tests for queries and mutations; now we're about to add some integration tests for subscriptions and are wondering how to do it correctly.
For query/mutation testing we're using the usual FakeRequest, sending it to our router that extends Caliban's PlayRouter, and it works very well. Is there any similar way to test websockets/subscriptions?
There is very little information on the Internet about websocket testing in Play and none at all about GraphQL subscription testing.
I will be grateful for any ideas!
OK, I managed it. There are a couple of rules to follow:
Use the websocket header "WebSocket-Protocol" -> "graphql-ws".
After the connection is established, send a GraphQLWSRequest of type "connection_init".
After receiving the "connection_ack" response, send a GraphQLWSRequest of type "start" with the subscription query as payload.
After those steps the server is listening and you can send your mutations.
A draft example:
import java.net.URI
import java.util.concurrent.ArrayBlockingQueue

import caliban.client.GraphQLRequest
import caliban.client.ws.GraphQLWSRequest
import io.circe.syntax.EncoderOps
import org.awaitility.Awaitility
import org.java_websocket.client.WebSocketClient
import org.java_websocket.handshake.ServerHandshake
import play.api.libs.json.{JsString, JsValue, Json}
import play.api.test.Helpers._
import play.api.test.{Helpers, TestServer}

import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.Future

// logger, app and post come from the surrounding test class
def getWS(subscriptionQuery: String, postQuery: String): JsValue = {
  lazy val port = Helpers.testServerPort

  def prepareWSRequest(ttp: String, payload: Option[GraphQLRequest] = None) =
    GraphQLWSRequest(ttp, None, payload).asJson.noSpaces

  val initRequest  = prepareWSRequest("connection_init")
  val startRequest = prepareWSRequest("start", Some(GraphQLRequest(subscriptionQuery, Map())))

  Helpers.running(TestServer(port, app)) {
    val headers = new java.util.HashMap[String, String]()
    headers.put("WebSocket-Protocol", "graphql-ws")

    val queue = new ArrayBlockingQueue[String](1)

    lazy val ws = new WebSocketClient(new URI(s"ws://localhost:$port/ws/graphql"), headers) {
      override def onOpen(handshakedata: ServerHandshake): Unit =
        logger.info("Websocket connection established")

      override def onClose(code: Int, reason: String, remote: Boolean): Unit =
        logger.info(s"Websocket connection closed, reason: $reason")

      override def onError(ex: Exception): Unit =
        logger.error("Error handling websocket connection", ex)

      // Ignore the handshake acknowledgement and keep-alive frames; queue the data frames.
      override def onMessage(message: String): Unit = {
        val ttp = (Json.parse(message) \ "type").as[JsString].value
        if (ttp != "connection_ack" && ttp != "ka") queue.put(message)
      }
    }

    ws.connectBlocking()

    Future(ws.send(initRequest))
      .flatMap(_ => Future(ws.send(startRequest)))
      .flatMap(_ => post(query = postQuery)) // post is my local method, it sends a usual FakeRequest

    Awaitility.await().until(() => queue.peek() != null)
    Json.parse(queue.take())
  }
}
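A hypothetical usage sketch (the queries are illustrative and must match your schema; per the graphql-ws protocol, the server answers the subscription with a frame of type "data"):
// Illustrative queries; getWS returns the first frame that is not an ack or keep-alive.
val frame: JsValue = getWS(
  subscriptionQuery = "subscription { userCreated { id name } }",
  postQuery = """mutation { createUser(name: "test") { id } }"""
)

// A graphql-ws data frame looks like {"type": "data", "id": ..., "payload": {...}}.
assert((frame \ "type").as[String] == "data")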

add wait period before each retry in my scala code

I have a Spark connector notebook, "Export Tables To Database", that writes Spark table data to an Azure SQL database. I have a master notebook that calls that connector notebook to write many tables in parallel. If a copy fails, a retry portion in the master notebook retries the export. However, this causes duplicates in my database because the original failed run doesn't cancel its connection immediately. I want to add a wait period before each retry. How do I do that?
//// These classes and functions export data directly to the Azure SQL database via the Spark connectors.
// The next two functions are for retry purposes: if exporting a table fails, it will retry.
import java.util.concurrent.Executors

import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration._
import scala.util.control.NonFatal
import scala.util.{Failure, Success, Try}

def tryNotebookRun(path: String, timeout: Int, parameters: Map[String, String] = Map.empty[String, String]): Try[Any] = {
  Try(
    if (parameters.nonEmpty) {
      dbutils.notebook.run(path, timeout, parameters)
    }
    else {
      dbutils.notebook.run(path, timeout)
    }
  )
}

def runWithRetry(path: String, timeout: Int, parameters: Map[String, String] = Map.empty[String, String], maxRetries: Int = 3) = {
  var numRetries = 0
  // I want to add a wait period here
  while (numRetries < maxRetries) {
    tryNotebookRun(path, timeout, parameters) match {
      case Success(_) => numRetries = maxRetries
      case Failure(_) => numRetries = numRetries + 1
    }
  }
}

case class NotebookData(path: String, timeout: Int, parameters: Map[String, String] = Map.empty[String, String])

def parallelNotebooks(notebooks: Seq[NotebookData]): Future[Seq[Any]] = {
  val numNotebooksInParallel = 5
  // This limits the number of notebooks running in parallel.
  implicit val ec = ExecutionContext.fromExecutor(Executors.newFixedThreadPool(numNotebooksInParallel))
  val ctx = dbutils.notebook.getContext()

  Future.sequence(
    notebooks.map { notebook =>
      Future {
        dbutils.notebook.setContext(ctx)
        runWithRetry(notebook.path, notebook.timeout, notebook.parameters)
      }
        .recover {
          case NonFatal(e) => s"ERROR: ${e.getMessage}"
        }
    }
  )
}

//// Create a sequence of tables to be written out in parallel.
val notebooks = Seq(
  NotebookData("Export Tables To Database", 0, Map("client" -> client, "scope" -> scope, "secret" -> secret, "schema" -> "test", "dbTable" -> "table1")),
  NotebookData("Export Tables To Database", 0, Map("client" -> client, "scope" -> scope, "secret" -> secret, "schema" -> "test", "dbTable" -> "table2"))
)

val res = parallelNotebooks(notebooks)

Await.result(res, 3000000.seconds) // this is a blocking call
res.value
Adding Thread.sleep before each retry was the solution:
def runWithRetry(path: String, timeout: Int, parameters: Map[String, String] = Map.empty[String, String], maxRetries: Int = 2) = {
  var numRetries = 0
  while (numRetries < maxRetries) {
    tryNotebookRun(path, timeout, parameters) match {
      case Success(_) => numRetries = maxRetries
      case Failure(_) =>
        Thread.sleep(30000)
        numRetries = numRetries + 1
    }
  }
}
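If a fixed 30-second pause is not enough (for example when the stale connection takes longer to drop), one variant is to back off exponentially between attempts. A minimal sketch with arbitrary delay values:
// Same structure as runWithRetry, but the pause doubles after every failure.
def runWithBackoff(path: String, timeout: Int,
                   parameters: Map[String, String] = Map.empty[String, String],
                   maxRetries: Int = 3,
                   initialDelayMs: Long = 30000L): Unit = {
  var numRetries = 0
  var delayMs = initialDelayMs
  while (numRetries < maxRetries) {
    tryNotebookRun(path, timeout, parameters) match {
      case Success(_) => numRetries = maxRetries
      case Failure(_) =>
        Thread.sleep(delayMs) // wait before the next attempt
        delayMs *= 2          // 30s, 60s, 120s, ...
        numRetries += 1
    }
  }
}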

How to create shared JDBC connection to use on executors?

I have created a Spark JDBC singleton connection in the driver and am planning to use the connection in the executors. I get the exception below: org.apache.spark.SparkException: Task not serializable
Inside the Spark main class:
import java.sql.{Connection, DriverManager, PreparedStatement}

object ExecutorConnection {
  private var connection: Connection = null

  // prop is loaded elsewhere (driver/connection properties)
  val url = prop.getProperty("url")
  val user = prop.getProperty("user")
  val pwd = prop.getProperty("password")
  val driver = prop.getProperty("driver")

  Class.forName(driver)

  def getConnection(url: String, username: String, password: String): Connection = synchronized {
    if (connection == null) {
      connection = DriverManager.getConnection(url, username, password)
      Class.forName(driver)
      connection.setAutoCommit(false)
    }
    connection
  }

  lazy val createConnection = getConnection(url, user, pwd)
}
I have multiple dataframes (df1, df2, df3) with different schemas, and I'm planning to create the connection at the driver level, serialize it, and use it for all the dataframes.
df1.rdd.repartition(2).mapPartitions((d) => Iterator(d)).foreach { partition =>
  val conn = ExecutorConnection.createConnection
  var ps: PreparedStatement = null
  partition.grouped(1).foreach(batch => {
    batch.foreach { x =>
      ps = conn.prepareStatement(SqlString)
      ps.addBatch()
      conn.commit()
    }
  })
}
Use Dataset.foreachPartition:
foreachPartition(f: (Iterator[T]) ⇒ Unit): Unit
Applies a function f to each partition of this Dataset.
This trick with a Scala object is exactly how you get the connection once per task (and, I think, once per executor as well).
df1.foreachPartition { vs =>
  // use the connection here
}
Use Guava for a cache.
Re:
planning to create the connection at the driver level and serialize it
It does not work that way.
You have to create the connections on the executors, or else you will keep getting this exception.
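A minimal sketch of that pattern, assuming a plain JDBC insert; the table, SQL statement, column mapping and the url/user/pwd values are placeholders, and depending on your Spark version the explicit Iterator[Row] annotation may be needed to pick the Scala overload of foreachPartition:
import java.sql.DriverManager

// Everything inside foreachPartition runs on the executor, so the connection
// is created there and never has to be serialized from the driver.
df1.foreachPartition { (rows: Iterator[org.apache.spark.sql.Row]) =>
  val conn = DriverManager.getConnection(url, user, pwd)
  conn.setAutoCommit(false)
  val ps = conn.prepareStatement("INSERT INTO my_table (id, name) VALUES (?, ?)")
  try {
    rows.foreach { row =>
      ps.setLong(1, row.getAs[Long]("id"))
      ps.setString(2, row.getAs[String]("name"))
      ps.addBatch()
    }
    ps.executeBatch()
    conn.commit()
  } finally {
    ps.close()
    conn.close()
  }
}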

Play can't find database

So, for 3 days now I have had various problems with Play. I am new to the framework, but I don't understand what is happening. After I was unable to use Slick, due to some futures never returning a value, I decided to switch to Anorm. It worked until I decided to add a second repository, after which I am now unable to load my page because I keep getting this:
#769g71c3d - Internal server error, for (GET) [/] ->
play.api.UnexpectedException: Unexpected exception[ProvisionException: Unable to provision, see the following errors:
1) Error injecting constructor, java.lang.IllegalArgumentException: Could not find database for pmf
at dao.ActivityTypeRepository.<init>(ActivityTypeDAO.scala:13)
at dao.ActivityTypeRepository.class(ActivityTypeDAO.scala:13)
while locating dao.ActivityTypeRepository
for the 1st parameter of services.ActivityTypeServiceImpl.<init>(ActivityTypeService.scala:17)
while locating services.ActivityTypeServiceImpl
while locating services.ActivityTypeService
The database settings are entered correctly; I can connect to it via the terminal and via DataGrip... Has anyone ever had a similar issue?
As requested, here is my configuration:
slick.dbs.default.profile = "slick.jdbc.PostgresProfile$"
slick.dbs.default.db.driver = "org.postgresql.Driver"
slick.dbs.default.db.url = "jdbc:postgresql://localhost:5432/pmf"
These are my classes:
import javax.inject.{Inject, Singleton}

import anorm._
import anorm.SqlParser.get
import play.api.db.DBApi

import scala.concurrent.{ExecutionContext, Future}

@Singleton
class ActivityTypeRepository @Inject()(dbapi: DBApi)(implicit ec: ExecutionContext) {
  private val db = dbapi.database(RepositorySettings.dbName)

  private[dao] val activityTypeMapping = {
    get[Int]("activityType.id") ~
      get[String]("activityType.name") map {
        case id ~ name => ActivityType(id, name)
      }
  }

  def listAll: Future[Seq[ActivityType]] = Future {
    db.withConnection { implicit conn =>
      SQL("SELECT * FROM ActivityType").as(activityTypeMapping *)
    }
  }
}
@Singleton
class UserRepository @Inject()(dbApi: DBApi)(implicit ec: ExecutionContext) {
  private val db = dbApi.database(RepositorySettings.dbName)

  private[dao] val userMapping = {
    get[Option[Long]]("users.id") ~
      get[String]("users.email") ~
      get[Option[String]]("users.name") ~
      get[Option[String]]("users.surname") map {
        case id ~ email ~ name ~ surname => User(id, email, name, surname)
      }
  }

  def add(user: User): Future[Option[Long]] = Future {
    db.withConnection { implicit conn =>
      SQL"INSERT INTO users(id, email, name, surname) VALUES (${user.id}, ${user.email}, ${user.name}, ${user.surname})".executeInsert()
    }
  }

  def find(id: Long): Future[Option[User]] = Future {
    db.withConnection { implicit conn =>
      SQL"SELECT * FROM User WHERE id = $id".as(userMapping *).headOption
    }
  }

  def findByEmail(email: String): Future[Option[User]] = Future {
    db.withConnection { implicit conn =>
      SQL"SELECT * FROM User WHERE email = $email".as(userMapping *).headOption
    }
  }

  def listAll: Future[Seq[User]] = Future {
    db.withConnection { implicit conn =>
      SQL("SELECT * FROM User").as(userMapping *)
    }
  }
}
EDIT:
Added to application.conf
db {
  default.driver = org.postgresql.Driver
  default.url = "jdbc:postgresql://localhost:5432/pmf_visualizations"
}
but no change.
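For reference, Play's DBApi (which Anorm goes through) looks databases up under the db.* configuration keys, and the name passed to dbapi.database(...) must match that key; the slick.dbs.* entries are only read by play-slick. A sketch, assuming RepositorySettings.dbName is "pmf" and the jdbc dependency is enabled:
# application.conf -- the key under db must match RepositorySettings.dbName
db.pmf {
  driver = org.postgresql.Driver
  url = "jdbc:postgresql://localhost:5432/pmf"
  username = "..."
  password = "..."
}
Alternatively, keep the existing db.default block and have RepositorySettings.dbName return "default".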

Slick: create database

Is there a way to get slick to create the database if it doesn't already exist?
Database.forURL("jdbc:mysql://127.0.0.1/database", driver = "com.mysql.jdbc.Driver", user = "root") withSession {
// create tables, insert data
}
"database" doesn't exist, so I want slick to create it for me. Any ideas? Thanks.
The other answer here is relevant to Slick 2.x; withSession is deprecated in Slick 3.x, so this is how it is done with the Slick 3.0.0 API:
import scala.concurrent.Await
import scala.concurrent.duration._
import scala.language.reflectiveCalls
import scala.util.Try

import org.postgresql.util.PSQLException
import slick.driver.PostgresDriver.api._

object SlickPGUtils {
  private val actionTimeout = 10.seconds
  private val driver = "org.postgresql.Driver"

  def createDb(host: String, port: Int, dbName: String, user: String, pwd: String) = {
    val onlyHostNoDbUrl = s"jdbc:postgresql://$host:$port/"
    using(Database.forURL(onlyHostNoDbUrl, user = user, password = pwd, driver = driver)) { conn =>
      Await.result(conn.run(sqlu"CREATE DATABASE #$dbName"), actionTimeout)
    }
  }

  def dropDb(host: String, port: Int, dbName: String, user: String, pwd: String) = {
    val onlyHostNoDbUrl = s"jdbc:postgresql://$host:$port/"
    try {
      using(Database.forURL(onlyHostNoDbUrl, user = user, password = pwd, driver = driver)) { conn =>
        Await.result(conn.run(sqlu"DROP DATABASE #$dbName"), actionTimeout)
      }
    } catch {
      // ignore failure due to the database not existing
      case e: PSQLException => if (e.getMessage.equals(s"""database "$dbName" does not exist""")) { /* do nothing */ }
      case e: Throwable => throw e // escalate other exceptions
    }
  }

  private def using[A <: { def close(): Unit }, B](resource: A)(f: A => B): B =
    try {
      f(resource)
    } finally {
      Try {
        resource.close()
      }.failed.foreach(err => throw new Exception(s"failed to close $resource", err))
    }
}
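A hypothetical usage example (host, credentials and the database name are placeholders):
// Creates the database if it is absent, then drops it again.
SlickPGUtils.createDb("localhost", 5432, "my_new_db", "postgres", "secret")
SlickPGUtils.dropDb("localhost", 5432, "my_new_db", "postgres", "secret")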
You can connect to the database engine using only "jdbc:mysql://localhost/" as the JDBC URL and then issue a plain SQL CREATE DATABASE query:
import scala.slick.driver.MySQLDriver.simple._
import scala.slick.jdbc.{StaticQuery => Q}

object Main extends App {
  Database.forURL("jdbc:mysql://localhost/", driver = "com.mysql.jdbc.Driver") withSession {
    implicit session =>
      Q.updateNA("CREATE DATABASE `dataBaseName`").execute
      // ... create tables, insert data, etc.
  }
}
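For the Slick 3.x API the same idea would look roughly like this; a sketch only, with placeholder URL and credentials:
import scala.concurrent.Await
import scala.concurrent.duration._

import slick.driver.MySQLDriver.api._

object CreateDb extends App {
  // Connect to the server only (no database in the URL), then issue plain SQL.
  val db = Database.forURL("jdbc:mysql://localhost/", driver = "com.mysql.jdbc.Driver", user = "root")
  try {
    Await.result(db.run(sqlu"CREATE DATABASE `dataBaseName`"), 10.seconds)
  } finally {
    db.close()
  }
}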