I have written this code and I am trying to combine two futures obtained from separate SQL operations.
package com.example

import tables._
import scala.concurrent.{Future, Await}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration.Duration
import slick.backend.DatabasePublisher
import slick.driver.H2Driver.api._

object Hello {
  def main(args: Array[String]): Unit = {
    val db = Database.forConfig("h2mem1")
    try {
      val people = TableQuery[Persons]
      val setupAction: DBIO[Unit] = DBIO.seq(
        people.schema.create
      )
      val setupFuture: Future[Unit] = db.run(setupAction)
      val populateAction: DBIO[Option[Int]] = people ++= Seq(
        (1, "test1", "user1"),
        (2, "test2", "user2"),
        (3, "test3", "user3"),
        (4, "test4", "user4")
      )
      val populateFuture: Future[Option[Int]] = db.run(populateAction)
      val combinedFuture: Future[Option[Int]] = setupFuture >> populateFuture
      val r = combinedFuture.map { results =>
        results.foreach(x => println(s"Number of rows inserted $x"))
      }
      Await.result(r, Duration.Inf)
    }
    finally db.close
  }
}
But I get an error when I try to compile this code:
[error] /Users/abhi/ScalaProjects/SlickTest2/src/main/scala/Hello.scala:29:
value >> is not a member of scala.concurrent.Future[Unit]
[error] val combinedFuture : Future[Option[Int]] = setupFuture >>
populateFuture
[error] ^
[error] one error found
[error] (compile:compileIncremental) Compilation failed
The same code works if I nest populateFuture inside the map function of setupFuture, but I don't want to write nested code because it will become very messy once there are more steps to do.
So I need a way to combine all futures into a single future and then execute it.
Edit: I also tried combining the two actions:
val combinedAction = setupAction.andThen(populateAction)
val fut1 = combinedAction.map { result =>
  result.foreach(x => println(s"number of rows inserted $x"))
}
Await.result(fut1, Duration.Inf)
but got this error:
/Users/abhi/ScalaProjects/SlickTest/src/main/scala/com/example/Hello.scala:31: type mismatch;
[error] found : scala.concurrent.Future[Option[Int]]
[error] required: PartialFunction[scala.util.Try[Unit],?]
[error] val combinedAction = setupAction.andThen(populateAction)
[error] ^
[error] one error found
[error] (compile:compileIncremental) Compilation failed
[error] Total time: 3 s, completed Jun 26, 2015 3:50:51 PM
According to http://slick.typesafe.com/doc/3.0.0/api/index.html#slick.dbio.DBIOAction, andThen() is what you are looking for:
val combinedAction = setupAction.andThen(populateAction)
val results = db.run(combinedAction)
populateAction will only run after setupAction has completed successfully. This is crucial in your case, since Slick is fully non-blocking: with the code you have now, both actions run asynchronously at the same time, and there is no way to determine which one executes first. Because populateAction depends on setupAction, you must ensure setupAction executes first, so use andThen.
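For completeness, a minimal sketch of that approach, reusing setupAction and populateAction from the question. Note that the two separate db.run calls are dropped; the combined action is run once:

// Combine the actions first; Slick guarantees populateAction only runs
// after setupAction has completed successfully.
val combinedAction: DBIO[Option[Int]] = setupAction andThen populateAction

// Run the whole sequence as a single Future and report the insert count.
val done: Future[Unit] = db.run(combinedAction).map { count =>
  count.foreach(n => println(s"Number of rows inserted $n"))
}
Await.result(done, Duration.Inf)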
Related
I am trying to create a webapp with http4s that is based on Http4sServlet.
The following code does not compile:
import cats.effect._
import org.http4s.servlet.BlockingServletIo
import org.http4s.servlet.Http4sServlet
import scala.concurrent.ExecutionContext.global
import org.http4s.implicits._

class UserSvcServlet
  extends Http4sServlet[IO](
    service = UserSvcServer.start,
    servletIo = BlockingServletIo(4096, Blocker.liftExecutionContext(global)))(IOApp)
The error message:
[error] /home/developer/scala/user-svc/src/main/scala/io/databaker/UserSvcServlet.scala:12:54: Cannot find implicit value for ConcurrentEffect[[+A]cats.effect.IO[A]].
[error] Building this implicit value might depend on having an implicit
[error] s.c.ExecutionContext in scope, a Scheduler, a ContextShift[[+A]cats.effect.IO[A]]
[error] or some equivalent type.
[error] extends Http4sServlet[IO]( service = UserSvcServer.stream
[error] ^
[error] /home/developer/scala/user-svc/src/main/scala/io/databaker/UserSvcServlet.scala:13:36: Cannot find an implicit value for ContextShift[[+A]cats.effect.IO[A]]:
[error] * import ContextShift[[+A]cats.effect.IO[A]] from your effects library
[error] * if using IO, use cats.effect.IOApp or build one with cats.effect.IO.contextShift
[error] , servletIo = BlockingServletIo(4096, Blocker.liftExecutionContext(global)))
[error] ^
[error] two errors found
[error] (Compile / compileIncremental) Compilation failed
[error] Total time: 1 s, completed May 29, 2020, 8:45:00 PM
The UserSvcServer is implemented as follows:
import org.http4s.HttpApp
import cats.effect.{ConcurrentEffect, ContextShift, Timer}
import org.http4s.implicits._
import org.http4s.server.middleware.Logger

object UserSvcServer {
  def start[F[_]: ConcurrentEffect](implicit T: Timer[F], C: ContextShift[F]): HttpApp[F] = {
    val helloWorldAlg = HelloWorld.impl[F]
    val httpApp = UserSvcRoutes.helloWorldRoutes[F](helloWorldAlg).orNotFound
    Logger.httpApp(true, true)(httpApp)
  }
}
How can I import ContextShift implicitly?
ContextShift is just cats' wrapper over an ExecutionContext. You can create one explicitly, as stated in the docs:
implicit val cs: ContextShift[IO] = IO.contextShift(ExecutionContext.global)
You usually create one context shift at the entry point of your app (probably in the main method).
If your app uses IOApp from cats-effect, it already has an implicit ContextShift in scope. It uses an execution context with as many threads as your machine has available processors.
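For example (a minimal sketch, not from the original post), extending IOApp gives you both implicits for free:

import cats.effect.{ExitCode, IO, IOApp}

// IOApp already provides implicit ContextShift[IO] and Timer[IO] in its body.
object Main extends IOApp {
  def run(args: List[String]): IO[ExitCode] =
    IO(println("ContextShift and Timer are in scope")).map(_ => ExitCode.Success)
}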
If you want to use the created ContextShift "deeper" inside your application, you can pass it as an implicit parameter:
def doSomething(implicit cs: ContextShift[IO]): IO[Unit] = ???
So in order to make your code work, you need to make sure that the method or class calling the UserSvcServlet constructor has an implicit ContextShift in scope: (implicit cs: ContextShift[IO]).
You could also put it in separate object:
object AppContextShift {
  implicit val cs: ContextShift[IO] = IO.contextShift(ExecutionContext.global)
  implicit val t: Timer[IO] = IO.timer(ExecutionContext.global) // you will probably also need a Timer eventually
}
Then you can import it when contextShift is needed:
import AppContextShift._
By the way, using the global execution context for Blocker is not a good idea. Blocker is meant for blocking operations, and backing it with ExecutionContext.global might lead to thread starvation in your app.
The most common approach is to use blocker created from cached thread pool:
Blocker.liftExecutorService(Executors.newCachedThreadPool())
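Putting it all together, the servlet might be wired like this (a sketch mirroring the constructor shape from the question; the AppRuntime object and its names are mine, not from the original code):

import java.util.concurrent.Executors
import cats.effect.{Blocker, ContextShift, IO, Timer}
import org.http4s.servlet.{BlockingServletIo, Http4sServlet}
import scala.concurrent.ExecutionContext

object AppRuntime {
  implicit val cs: ContextShift[IO] = IO.contextShift(ExecutionContext.global)
  implicit val timer: Timer[IO] = IO.timer(ExecutionContext.global)
  // Dedicated pool for blocking servlet IO instead of the global pool.
  val blocker: Blocker = Blocker.liftExecutorService(Executors.newCachedThreadPool())
}

import AppRuntime._

// With cs in scope, ConcurrentEffect[IO] can be derived, so the servlet's
// implicit parameters resolve and (IOApp) is no longer passed explicitly.
class UserSvcServlet extends Http4sServlet[IO](
  service = UserSvcServer.start[IO],
  servletIo = BlockingServletIo(4096, blocker))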
I'm trying to create a function to check if a string is a date. However, the following function produces an error.
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf
import java.sql._
import scala.util.{Success, Try}

def validateDate(date: String): Boolean = {
  val df = new java.text.SimpleDateFormat("yyyyMMdd")
  val test = Try[Date](df.parse(date))
  test match {
    case Success(_) => true
    case _ => false
  }
}
Error:
[error] C:\Users\user1\IdeaProjects\sqlServer\src\main\scala\main.scala:14: type mismatch;
[error] found : java.util.Date
[error] required: java.sql.Date
[error] val test = Try[Date](df.parse(date))
[error] ^
[error] one error found
[error] (compile:compileIncremental) Compilation failed
[error] Total time: 2 s, completed May 17, 2017 1:19:33 PM
Is there a simpler way to validate if a string is a date without creating a function?
The function is used to validate the command line argument.
if (args.length != 2 || validateDate(args(0))) { .... }
In Try[Date](df.parse(date)) you are not interested in the type, because you ignore it, so simply omit the type parameter: Try(df.parse(date)).
Your function can also be shorter: use Try(df.parse(date)).isSuccess instead of pattern matching.
If your environment has Java 8, always prefer the java.time package:
import scala.util.Try
import java.time.LocalDate
import java.time.format.DateTimeFormatter
// Create the formatter outside the function to reduce short-lived object allocation.
val df = DateTimeFormatter.ofPattern("yyyy MM dd")
def datebleStr(s: String): Boolean = Try(LocalDate.parse(s,df)).isSuccess
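Applied to the command-line check from the question, that might look like this (note the negation: the guard should trigger when the date is not valid):

// Hypothetical guard at the program's entry point.
if (args.length != 2 || !datebleStr(args(0))) {
  println("usage: <date: yyyy MM dd> <other-arg>")
  sys.exit(1)
}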
Use this: import java.util.Date. The mismatch occurs because df.parse returns a java.util.Date, while the wildcard import java.sql._ brings java.sql.Date into scope.
In Scala Slick, a database schema can be created with the following:
val schema = coffees.schema ++ suppliers.schema
db.run(DBIO.seq(
  schema.create
))
From the bottom of this documentation page http://slick.typesafe.com/doc/3.0.0/schemas.html
However, if the database schema already exists then this throws an exception.
Is there a normal way or right way to create the schema IF AND ONLY IF it does not already exist?
In Slick 3.3.0 createIfNotExists and dropIfExists schema methods were added. So:
db.run(coffees.schema.createIfNotExists)
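For a multi-table schema like the one in the question, the combined DDL composes the same way (a short sketch):

// createIfNotExists works on a combined schema just like create.
val schema = coffees.schema ++ suppliers.schema
db.run(schema.createIfNotExists)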
I googled this question and tried several solutions from the answers until I figured it out. This is what I do for multiple tables, with Slick 3.1.1 and Postgres:
import slick.driver.PostgresDriver.api._
import slick.jdbc.meta.MTable
import scala.concurrent.Await
import scala.concurrent.duration.Duration
import scala.concurrent.ExecutionContext.Implicits.global

val t1 = TableQuery[Table1]
val t2 = TableQuery[Table2]
val t3 = TableQuery[Table3]
val tables = List(t1, t2, t3)

val existing = db.run(MTable.getTables)
val f = existing.flatMap { v =>
  val names = v.map(mt => mt.name.name)
  val createIfNotExist = tables
    .filter(table => !names.contains(table.baseTableRow.tableName))
    .map(_.schema.create)
  db.run(DBIO.sequence(createIfNotExist))
}
Await.result(f, Duration.Inf)
With Slick 3.0, MTable.getTables is a DBAction, so something like this would work:
val coffees = TableQuery[Coffees]
try {
  Await.result(db.run(DBIO.seq(
    MTable.getTables map (tables => {
      if (!tables.exists(_.name.name == coffees.baseTableRow.tableName))
        coffees.schema.create
    })
  )), Duration.Inf)
} finally db.close
As JoshSGoman's comment on Mike-s' answer points out, the table is not created. I managed to make it work by slightly modifying the first answer's code:
val coffees = TableQuery[Coffees]
try {
  def createTableIfNotInTables(tables: Vector[MTable]): Future[Unit] = {
    if (!tables.exists(_.name.name == coffees.baseTableRow.tableName)) {
      db.run(coffees.schema.create)
    } else {
      Future()
    }
  }

  val createTableIfNotExist: Future[Unit] =
    db.run(MTable.getTables).flatMap(createTableIfNotInTables)
  Await.result(createTableIfNotExist, Duration.Inf)
} finally db.close
With the following imports:
import slick.jdbc.meta.MTable
import slick.driver.SQLiteDriver.api._
import scala.concurrent.{Await, Future}
import scala.concurrent.duration.Duration
import scala.concurrent.ExecutionContext.Implicits.global
Why don't you simply check for existence before creating?
val schema = coffees.schema ++ suppliers.schema
db.run(DBIO.seq(
  if (!MTable.getTables.list.exists(_.name.name == MyTable.tableName)) {
    schema.create
  }
))
I cannot use createIfNotExists on a schema composed of three tables when one of the tables has a composite primary key. Here, the third table's primary key is composed of the primary keys of the first and second tables. I get an error on this schema when .createIfNotExists is encountered a second time. I am using Slick 3.3.1 on Scala 2.12.8.
class UserTable(tag: Tag) extends Table[User](tag, "user") {
  def id = column[Long]("id", O.AutoInc, O.PrimaryKey)
  def name = column[String]("name")
  def email = column[Option[String]]("email")
  def * = (id.?, name, email).mapTo[User]
}

val users = TableQuery[UserTable]
lazy val insertUser = users returning users.map(_.id)

case class Room(title: String, id: Long = 0L)

class RoomTable(tag: Tag) extends Table[Room](tag, "room") {
  def id = column[Long]("id", O.PrimaryKey, O.AutoInc)
  def title = column[String]("title")
  def * = (title, id).mapTo[Room]
}

val rooms = TableQuery[RoomTable]
lazy val insertRoom = rooms returning rooms.map(_.id)

case class Occupant(roomId: Long, userId: Long)

class OccupantTable(tag: Tag) extends Table[Occupant](tag, "occupant") {
  def roomId = column[Long]("room")
  def userId = column[Long]("user")
  def pk = primaryKey("room_user_pk", (roomId, userId))
  def * = (roomId, userId).mapTo[Occupant]
}

val occupants = TableQuery[OccupantTable]
I can successfully create the schema and add a user, room and occupant at first. On the second usage of .createIfNotExists, shown below, I get an error about a duplicate primary key:
println("\n2nd run on .createIfNotExists using different values for users, rooms and occupants")
val initdup = for {
_ <- users.schema.createIfNotExists
_ <- rooms.schema.createIfNotExists
_ <- occupants.schema.createIfNotExists
curlyId <- insertUser += User(None, "Curly", Some("curly#example.org"))
larryId <- insertUser += User(None, "Larry")
moeId <- insertUser += User(None, "Moe", Some("moe#example.org"))
shedId <- insertRoom += Room("Shed")
_ <- occupants += Occupant(shedId, curlyId)
_ <- occupants += Occupant(shedId, moeId)
} yield ()
The exception is as below:
2nd run on .createIfNotExists using different values for users, rooms and occupants
[error] (run-main-2) org.h2.jdbc.JdbcSQLException: Constraint "room_user_pk" already exists; SQL statement:
[error] alter table "occupant" add constraint "room_user_pk" primary key("room","user") [90045-197]
[error] org.h2.jdbc.JdbcSQLException: Constraint "room_user_pk" already exists; SQL statement:
[error] alter table "occupant" add constraint "room_user_pk" primary key("room","user") [90045-197]
[error] at org.h2.message.DbException.getJdbcSQLException(DbException.java:357)
[error] at org.h2.message.DbException.get(DbException.java:179)
[error] at org.h2.message.DbException.get(DbException.java:155)
[error] at org.h2.command.ddl.AlterTableAddConstraint.tryUpdate(AlterTableAddConstraint.java:110)
[error] at org.h2.command.ddl.AlterTableAddConstraint.update(AlterTableAddConstraint.java:78)
[error] at org.h2.command.CommandContainer.update(CommandContainer.java:102)
[error] at org.h2.command.Command.executeUpdate(Command.java:261)
[error] at org.h2.jdbc.JdbcPreparedStatement.execute(JdbcPreparedStatement.java:249)
[error] at slick.jdbc.JdbcActionComponent$SchemaActionExtensionMethodsImpl$$anon$6.$anonfun$run$7(JdbcActionComponent.scala:292)
[error] at slick.jdbc.JdbcActionComponent$SchemaActionExtensionMethodsImpl$$anon$6.$anonfun$run$7$adapted(JdbcActionComponent.scala:292)
[error] at slick.jdbc.JdbcBackend$SessionDef.withPreparedStatement(JdbcBackend.scala:425)
[error] at slick.jdbc.JdbcBackend$SessionDef.withPreparedStatement$(JdbcBackend.scala:420)
[error] at slick.jdbc.JdbcBackend$BaseSession.withPreparedStatement(JdbcBackend.scala:489)
[error] at slick.jdbc.JdbcActionComponent$SchemaActionExtensionMethodsImpl$$anon$6.$anonfun$run$6(JdbcActionComponent.scala:292)
[error] at slick.jdbc.JdbcActionComponent$SchemaActionExtensionMethodsImpl$$anon$6.$anonfun$run$6$adapted(JdbcActionComponent.scala:292)
[error] at scala.collection.Iterator.foreach(Iterator.scala:941)
[error] at scala.collection.Iterator.foreach$(Iterator.scala:941)
[error] at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
[error] at scala.collection.IterableLike.foreach(IterableLike.scala:74)
[error] at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
[error] at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
[error] at slick.jdbc.JdbcActionComponent$SchemaActionExtensionMethodsImpl$$anon$6.run(JdbcActionComponent.scala:292)
[error] at slick.jdbc.JdbcActionComponent$SchemaActionExtensionMethodsImpl$$anon$6.run(JdbcActionComponent.scala:290)
[error] at slick.jdbc.JdbcActionComponent$SimpleJdbcProfileAction.run(JdbcActionComponent.scala:28)
[error] at slick.jdbc.JdbcActionComponent$SimpleJdbcProfileAction.run(JdbcActionComponent.scala:25)
[error] at slick.basic.BasicBackend$DatabaseDef$$anon$3.liftedTree1$1(BasicBackend.scala:276)
[error] at slick.basic.BasicBackend$DatabaseDef$$anon$3.run(BasicBackend.scala:276)
[error] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
[error] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
[error] at java.lang.Thread.run(Thread.java:748)
[error] Nonzero exit code: 1
[error] (Compile / run) Nonzero exit code: 1
Additionally, I can use .createIfNotExists more than once on a schema where all tables use the O.PrimaryKey convention. Can I do something to massage the code? Is there a workaround so that .createIfNotExists is still usable in the composite-primary-key case?
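Not an official fix, but one workaround sketch along the lines of the MTable-based answers above: guard the occupant schema creation yourself, so the ALTER TABLE that adds the composite key is only issued when the table is genuinely missing (names reuse the question's code; untested):

import scala.concurrent.ExecutionContext.Implicits.global
import slick.jdbc.meta.MTable

// Only create "occupant" (and its composite PK constraint) when the table
// is absent, instead of relying on createIfNotExists for this table.
val createOccupantIfMissing: DBIO[Unit] =
  MTable.getTables("occupant").flatMap { found =>
    if (found.isEmpty) occupants.schema.create
    else DBIO.successful(())
  }

val init = for {
  _ <- users.schema.createIfNotExists
  _ <- rooms.schema.createIfNotExists
  _ <- createOccupantIfMissing
} yield ()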
testOnly play.api.weibo.StatusesShowBatchSpec
[error] Could not create an instance of play.api.weibo.StatusesShowBatchSpec
[error] caused by java.lang.Exception: Could not instantiate class play.api.weibo.StatusesShowBatchSpec: null
[error] org.specs2.reflect.Classes$class.tryToCreateObjectEither(Classes.scala:93)
[error] org.specs2.reflect.Classes$.tryToCreateObjectEither(Classes.scala:211)
[error] org.specs2.specification.SpecificationStructure$$anonfun$createSpecificationEither$2.apply(BaseSpecification.scala:119)
[error] org.specs2.specification.SpecificationStructure$$anonfun$createSpecificationEither$2.apply(BaseSpecification.scala:119)
...
The spec:
package play.api.weibo

import org.junit.runner.RunWith
import org.specs2.runner.JUnitRunner

class StatusesShowBatchSpec extends ApiSpec {
  "'statuses show batch' api" should {
    "read statuses" in {
      val api = StatusesShowBatch(
        accessToken = testAdvancedToken,
        ids = "3677163356078857")
      val res = awaitApi(api)
      res.statuses must have size (1)
    }
  }
}
See full code here https://github.com/jilen/play-weibo/tree/spec2_error
Full stacktrace:
https://gist.github.com/jilen/9050548
In the ApiSpec class you have a few variables which might be null at instantiation time:
val cfg = ConfigFactory.load("http.conf")
val testToken = cfg.getString("token.normal")
val testAdvancedToken = cfg.getString("token.advanced")

implicit val http = new SprayHttp {
  val config = new SprayHttpConfig {
    val system = ActorSystem("test")
    val gzipEnable = true
  }
  val context = config.system.dispatcher
}
You can turn those vals into lazy vals to avoid this situation:
lazy val cfg = ConfigFactory.load("http.conf")
lazy val testToken = cfg.getString("token.normal")
lazy val testAdvancedToken = cfg.getString("token.advanced")

implicit lazy val http = new SprayHttp {
  lazy val config = new SprayHttpConfig {
    val system = ActorSystem("test")
    val gzipEnable = true
  }
  val context = config.system.dispatcher
}
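The underlying issue is initialization order: a strict val is evaluated while the object is still being constructed, so anything it reads that a subclass has not yet assigned is still null, whereas a lazy val defers evaluation to first access. A minimal illustration (hypothetical, not from the spec):

object LazyInitDemo extends App {
  trait Eager { val name: String; val upper = name.toUpperCase }      // evaluated during construction
  trait Safe  { val name: String; lazy val upper = name.toUpperCase } // evaluated on first access

  // new Eager { val name = "spec" }  // would throw NullPointerException:
  //                                  // `upper` runs before `name` is assigned

  val ok = new Safe { val name = "spec" }
  println(ok.upper) // prints SPEC
}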
I was getting a very similar error using specs2 version 2.3.10 on Scala 2.10. Upgrading to 2.3.13 makes the error messages much more informative and provides an extra stacktrace pointing to the root cause. This newer version was released very recently (8 days before this post!), so hopefully you're able to accommodate an update.
Some of my issues ended up being related to the val vs. lazy val problem described in the accepted answer; however, I'm now able to pinpoint the exact line these errors occur on, in addition to debugging other initialization problems as well.
I am a newbie to Scala and Casbah. I am trying to create a Mongo replica-set connection using Casbah. This is my code. I am pretty sure my Mongo replica setup is correct; when I create a connection through Ruby, it works great. I'm missing something silly here.
When I googled, I found this documentation, which is what I am using for reference:
http://api.mongodb.org/scala/casbah/current/scaladoc/com/mongodb/casbah/MongoConnection$.html
import com.mongodb.casbah.Imports._

object MongoAnalysisDB {
  def main(args: Array[String]) = {
    // that connection
    val addresses = List("127.0.0.1:27018", "127.0.0.1:27019", "127.0.0.1:27020")
    val mongoConn = MongoConnection(replicaSetSeeds: addresses)
    val mongoDB = mongoConn("vimana-sandbox-dup")
    val mongoColl = mongoConn("vimana-sandbox-dup")("utilization.metrics.cycledowntime")
    // that query
    val loadEvent = MongoDBObject("period" -> "PT1H")
    val cursor = mongoColl.find(loadEvent)
    val mtcevent = mongoColl.findOne(loadEvent)
    // that document
    println(mtcevent)
  }
}
I get the following error:
[info] Compiling 1 Scala source to /home/deepak/scala-mongo-oplog-watcher/target/scala-2.9.1/classes...
[error] /home/deepak/scala-mongo-oplog-watcher/src/main/scala/reader.scala:6: ')' expected but '(' found.
[error] val mongoConn = MongoConnection(replicaSetSeeds: List("127.0.0.1:27018", "127.0.0.1:27019", "127.0.0.1:27020"))
[error] ^
[error] /home/deepak/scala-mongo-oplog-watcher/src/main/scala/reader.scala:6: ';' expected but ')' found.
[error] val mongoConn = MongoConnection(replicaSetSeeds: List("127.0.0.1:27018", "127.0.0.1:27019", "127.0.0.1:27020"))
[error] ^
[error] two errors found
[error] {file:/home/deepak/scala-mongo-oplog-watcher/}default-b16d47/compile:compile: Compilation failed
Wrapping the IP string and port into a ServerAddress worked.
import com.mongodb._
import com.mongodb.casbah.Imports._

object MongoAnalysisDB {
  def main(args: Array[String]) = {
    // that connection
    val addresses = List(
      new ServerAddress("127.0.0.1", 27018),
      new ServerAddress("127.0.0.1", 27019),
      new ServerAddress("127.0.0.1", 27020))
    val mongoConn = MongoConnection(addresses)
    val mongoDB = mongoConn("vimana-sandbox-dup")
    val mongoColl = mongoConn("vimana-sandbox-dup")("utilization.metrics.cycledowntime")
    // that query
    val loadEvent = MongoDBObject("period" -> "PT1H")
    val cursor = mongoColl.find(loadEvent)
    val mtcevent = mongoColl.findOne(loadEvent)
    // that document
    println(mtcevent)
  }
}
See also:
http://mongodb.github.io/casbah/3.1/reference/connecting/#connecting-to-replicasets-mongos
You can also create a replica-set connection using MongoClientURI:
val mongoClient = MongoClient(MongoClientURI("mongodb://localhost:27018,localhost:27019,localhost:27020"))
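From there, usage is the same as with the seed-list client (reusing the database and collection names from the question):

// The URI-based client exposes the same API as MongoConnection.
val mongoDB = mongoClient("vimana-sandbox-dup")
val mongoColl = mongoDB("utilization.metrics.cycledowntime")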