When I try to query by a MappedTo[Long] column:
for {
  game <- GamblrGame.table
  bet  <- GamblrBet.table if game.id === bet.game
} yield (game, bet)
you get:
[error] /Volumes/Home/dev/gamblr/test/BotTest.scala:27: Cannot perform option-mapped operation
[error] with type: (slicky.Slicky.ID, slicky.fields.FK[models.GamblrGame]) => R
[error] for base type: (slicky.Slicky.ID, slicky.Slicky.ID) => Boolean
[error] bet <- GamblrBet.table if game.id === bet.game
How should I use mapped columns in queries?
The FK:
case class FK[E <: IdEntity[E]](id: ID)(implicit tag: TypeTag[E])
  extends MappedTo[Long]
The game column in GamblrBet.table:
def game = column[FK[GamblrGame]]("GAME")
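One workaround to sketch (not a confirmed fix; it assumes ID is a plain alias for Long, as the error text suggests, and that the profile's api._ is in scope): expose the underlying column a second time, unmapped, so the join compares values of the same base type. The extra gameId accessor below is hypothetical:
// In the GamblrBet table: the same "GAME" column, viewed both mapped and raw.
def game   = column[FK[GamblrGame]]("GAME")
def gameId = column[ID]("GAME")  // hypothetical unmapped view, used only for joins

for {
  game <- GamblrGame.table
  bet  <- GamblrBet.table if game.id === bet.gameId
} yield (game, bet)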
I'm not sure what is wrong here.
The following code block throws an error:
(for {
  (e, r) <- tblDetail.joinLeft(tblMaster).on((e, r) => r.col1 === e.col3)
} yield e.id)
Error
No matching Shape found.
[error] Slick does not know how to map the given types.
[error] Possible causes: T in Table[T] does not match your * projection,
[error] you use an unsupported type in a Query (e.g. scala List),
[error] or you forgot to import a driver api into scope.
[error] Required level: slick.lifted.FlatShapeLevel
[error] Source type: (slick.lifted.Rep[Int], slick.lifted.Rep[String],...)
[error] Unpacked type: T
[error] Packed type: G
[error] (e,r) <- tblDetail.joinLeft(tblMaster).on((e,r) => r.col1 === e.col3)
I checked the Slick tables for tblDetail and tblMaster and they seemed fine.
tblMaster
class TblMaster(tag: Tag)
  extends Table[(Int, String, ...)](tag, "tbl_master") {
  def id   = column[Int]("id")
  def col3 = column[String]("col3")
  def *    = (id, col3)
}
tblDetail
class TblDetail(tag: Tag)
  extends Table[Entity](tag, "tbl_detail") {
  def id   = column[Int]("id")
  def col1 = column[String]("col1")
  def * : ProvenShape[Entity] =
    (id, col1) <> ((Entity.apply _).tupled, Entity.unapply)
}
Any help would be appreciated.
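A likely cause, judging from the error text alone (an educated guess, not a confirmed diagnosis): the message says "T in Table[T] does not match your * projection", and TblMaster's table type (Int, String, ...) has more elements than its * projection (id, col3). A sketch of the shape that should line up, assuming the table really has only these two columns:
class TblMaster(tag: Tag)
  extends Table[(Int, String)](tag, "tbl_master") {
  def id   = column[Int]("id")
  def col3 = column[String]("col3")
  def *    = (id, col3)  // must cover every element of the table's type parameter
}
It is also worth checking the accessors in .on: as posted, r is the TblMaster row, which defines col3 rather than col1.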
I am adding two additional fields to the Person table: a Date and a String. I built the Person table and mapped it with Play Slick by following olivebh's tutorial.
However, I get the following errors from the Slick data model trait Tables:
dao/Tables.scala:85: ambiguous implicit values:
[error] both value e3 of type slick.jdbc.GetResult[String]
[error] and value e1 of type slick.jdbc.GetResult[String]
[error] match expected type slick.jdbc.GetResult[String]
[error] ProjectRow.tupled((<<[Int], <<[String], <<[Date], <<[String]))
which refers to the following line:
implicit def GetResultPersonRow(implicit e0: GR[Int], e1: GR[String], e2: GR[Date], e3: GR[String]): GR[ProjectRow] = GR {
prs =>
import prs._
PersonRow.tupled((<<[Int], <<[String], <<[Date], <<[String]))
}
where the "int, string, date, string" represent the "id, name, birthdate, language" fields respectively. Everything worked fine by following the tutorial that covers "id, name" as an example. But as soon as I added birthdate and language, I got the error quoted above.
Also, when creating the prototypes for the table rows:
class Person(_tableTag: Tag) extends Table[PersonRow](_tableTag, "person") {
  def * = (personId, name, birthdate, language) <> (PersonRow.tupled, PersonRow.unapply)
  def ? = (Rep.Some(personId), Rep.Some(name), Rep.Some(birthdate), Rep.Some(language)).shaped.<>({ r => import r._; _1.map(_ => PersonRow.tupled((_1.get, _2.get, _3.get, _4.get))) }, (_: Any) => throw new Exception("Inserting into ? projection not supported."))
  val personId: Rep[Int] = column[Int]("person_id", O.AutoInc, O.PrimaryKey)
  val name: Rep[String] = column[String]("name", O.Length(50, varying = true))
  val birthdate: Rep[Date] = column[Date]("birthdate", O.Length(50, varying = true))
  val language: Rep[String] = column[String]("language", O.Length(50, varying = true))
}
I get the following errors:
No matching Shape found.
[error] Slick does not know how to map the given types.
[error] Possible causes: T in Table[T] does not match your * projection. Or you use an unsupported type in a Query (e.g. scala List).
[error] Required level: slick.lifted.FlatShapeLevel
[error] Source type: (slick.lifted.Rep[Int], slick.lifted.Rep[String], slick.lifted.Rep[java.util.Date], slick.lifted.Rep[String])
[error] Unpacked type: (Int, String, java.util.Date, String)
[error] Packed type: Any
[error] def * = (personId, name, birthdate, language) <>(PersonRow.tupled, PersonRow.unapply)
and also:
dao/Tables.scala:94: could not find implicit value for parameter od: slick.lifted.OptionLift[Tables.this.driver.api.Rep[java.util.Date],O]
[error] def ? = (Rep.Some(personId), Rep.Some(name), Rep.Some(birthdate), Rep.Some(language)).shaped.<>({ r => import r._; _1.map(_ => PersonRow.tupled((_1.get, _2.get, _3.get, _4.get))) }, (_: Any) => throw new Exception("Inserting into ? projection not supported."))
dao/Tables.scala:94: not found: value _1
[error] def ? = (Rep.Some(personId), Rep.Some(name), Rep.Some(birthdate), Rep.Some(language)).shaped.<>({ r => import r._; _1.map(_ => PersonRow.tupled((_1.get, _2.get, _3.get, _4.get))) }, (_: Any) => throw new Exception("Inserting into ? projection not supported."))
Any help in understanding these errors, and how I should change the Slick data model trait so that it properly handles the additional Date and String fields, would be greatly appreciated. Thank you so much!
Slick can't handle java.util.Date out of the box, because through JDBC drivers databases understand only java.sql.Date (and java.sql.Timestamp/java.sql.Time). You can write your own mapper so that Slick knows how to read/write java.util.Date/java.sql.Date. But Java 8 has a better API for handling dates/times/calendars, java.time, which is worth mapping instead:
import java.sql.Timestamp
import java.time.LocalDateTime

implicit val localDateTimeColumnType = MappedColumnType.base[LocalDateTime, Timestamp](
  ldt => Timestamp.valueOf(ldt),
  t => t.toLocalDateTime
)
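With that implicit in scope, a column can use LocalDateTime directly and Slick persists it as a SQL timestamp. A minimal sketch (the table and column names here are made up, not from the question):
class Person(tag: Tag) extends Table[(Int, LocalDateTime)](tag, "person") {
  def id        = column[Int]("id", O.PrimaryKey)
  def birthdate = column[LocalDateTime]("birthdate")  // uses the mapper above
  def *         = (id, birthdate)
}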
See this question also. Btw, thanks for reading my article, hope it was useful! :)
In Scala Slick, a database schema can be created with the following:
val schema = coffees.schema ++ suppliers.schema
db.run(DBIO.seq(
  schema.create
))
From the bottom of this documentation page http://slick.typesafe.com/doc/3.0.0/schemas.html
However, if the database schema already exists then this throws an exception.
Is there a normal way or right way to create the schema IF AND ONLY IF it does not already exist?
In Slick 3.3.0, the createIfNotExists and dropIfExists schema methods were added. So:
db.run(coffees.schema.createIfNotExists)
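This should also work on a combined schema, assuming the coffees and suppliers tables from the question (a sketch; I have only verified the single-table form):
// Each generated CREATE statement is emitted with IF NOT EXISTS.
db.run((coffees.schema ++ suppliers.schema).createIfNotExists)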
I googled this question and tried several solutions from the answers until I figured it out.
This is what I do for multiple tables, with Slick 3.1.1 and Postgres:
import slick.driver.PostgresDriver.api._
import slick.jdbc.meta.MTable
import scala.concurrent.Await
import scala.concurrent.duration.Duration
import scala.concurrent.ExecutionContext.Implicits.global
val t1 = TableQuery[Table1]
val t2 = TableQuery[Table2]
val t3 = TableQuery[Table3]
val tables = List(t1, t2, t3)

// Fetch the names of the tables that already exist, then create only the missing ones.
val existing = db.run(MTable.getTables)
val f = existing.flatMap { v =>
  val names = v.map(mt => mt.name.name)
  val createIfNotExist = tables
    .filter(table => !names.contains(table.baseTableRow.tableName))
    .map(_.schema.create)
  db.run(DBIO.sequence(createIfNotExist))
}
Await.result(f, Duration.Inf)
With Slick 3.0, MTable.getTables is a DBIO action, so something like this would work:
val coffees = TableQuery[Coffees]
try {
Await.result(db.run(DBIO.seq(
MTable.getTables map (tables => {
if (!tables.exists(_.name.name == coffees.baseTableRow.tableName))
coffees.schema.create
})
)), Duration.Inf)
} finally db.close
As JoshSGoman's comment on Mike-s's answer points out, the table is not created. I managed to make it work by slightly modifying the first answer's code:
val coffees = TableQuery[Coffees]
try {
  def createTableIfNotInTables(tables: Vector[MTable]): Future[Unit] = {
    if (!tables.exists(_.name.name == coffees.baseTableRow.tableName)) {
      db.run(coffees.schema.create)
    } else {
      Future.successful(())
    }
  }
  val createTableIfNotExist: Future[Unit] =
    db.run(MTable.getTables).flatMap(createTableIfNotInTables)
  Await.result(createTableIfNotExist, Duration.Inf)
} finally db.close
With the following imports:
import slick.jdbc.meta.MTable
import slick.driver.SQLiteDriver.api._
import scala.concurrent.{Await, Future}
import scala.concurrent.duration.Duration
import scala.concurrent.ExecutionContext.Implicits.global
Why don't you simply check for existence before creating?
val schema = coffees.schema ++ suppliers.schema
// In Slick 3, MTable.getTables is a DBIO action: run it and inspect the result.
val setup = db.run(MTable.getTables).flatMap { tables =>
  if (!tables.exists(_.name.name == coffees.baseTableRow.tableName))
    db.run(schema.create)
  else
    Future.successful(())
}
I cannot use createIfNotExists on a schema composed of three tables when one of them has a composite primary key. Here, the third table's primary key is composed of the primary key of each of the first and second tables. I get an error on this schema when .createIfNotExists is encountered a second time. I am using Slick 3.3.1 on Scala 2.12.8.
class UserTable(tag: Tag) extends Table[User](tag, "user") {
  def id = column[Long]("id", O.AutoInc, O.PrimaryKey)
  def name = column[String]("name")
  def email = column[Option[String]]("email")
  def * = (id.?, name, email).mapTo[User]
}

val users = TableQuery[UserTable]
lazy val insertUser = users returning users.map(_.id)

case class Room(title: String, id: Long = 0L)

class RoomTable(tag: Tag) extends Table[Room](tag, "room") {
  def id = column[Long]("id", O.PrimaryKey, O.AutoInc)
  def title = column[String]("title")
  def * = (title, id).mapTo[Room]
}

val rooms = TableQuery[RoomTable]
lazy val insertRoom = rooms returning rooms.map(_.id)

case class Occupant(roomId: Long, userId: Long)

class OccupantTable(tag: Tag) extends Table[Occupant](tag, "occupant") {
  def roomId = column[Long]("room")
  def userId = column[Long]("user")
  def pk = primaryKey("room_user_pk", (roomId, userId))
  def * = (roomId, userId).mapTo[Occupant]
}

val occupants = TableQuery[OccupantTable]
I can successfully create the schema and add a user, a room, and an occupant the first time. On the second use of .createIfNotExists, shown below, I get an error about a duplicate primary key:
println("\n2nd run on .createIfNotExists using different values for users, rooms and occupants")
val initdup = for {
  _ <- users.schema.createIfNotExists
  _ <- rooms.schema.createIfNotExists
  _ <- occupants.schema.createIfNotExists
  curlyId <- insertUser += User(None, "Curly", Some("curly@example.org"))
  larryId <- insertUser += User(None, "Larry")
  moeId <- insertUser += User(None, "Moe", Some("moe@example.org"))
  shedId <- insertRoom += Room("Shed")
  _ <- occupants += Occupant(shedId, curlyId)
  _ <- occupants += Occupant(shedId, moeId)
} yield ()
The exception is as below:
2nd run on .createIfNotExists using different values for users, rooms and occupants
[error] (run-main-2) org.h2.jdbc.JdbcSQLException: Constraint "room_user_pk" already exists; SQL statement:
[error] alter table "occupant" add constraint "room_user_pk" primary key("room","user") [90045-197]
[error] org.h2.jdbc.JdbcSQLException: Constraint "room_user_pk" already exists; SQL statement:
[error] alter table "occupant" add constraint "room_user_pk" primary key("room","user") [90045-197]
[error] at org.h2.message.DbException.getJdbcSQLException(DbException.java:357)
[error] at org.h2.message.DbException.get(DbException.java:179)
[error] at org.h2.message.DbException.get(DbException.java:155)
[error] at org.h2.command.ddl.AlterTableAddConstraint.tryUpdate(AlterTableAddConstraint.java:110)
[error] at org.h2.command.ddl.AlterTableAddConstraint.update(AlterTableAddConstraint.java:78)
[error] at org.h2.command.CommandContainer.update(CommandContainer.java:102)
[error] at org.h2.command.Command.executeUpdate(Command.java:261)
[error] at org.h2.jdbc.JdbcPreparedStatement.execute(JdbcPreparedStatement.java:249)
[error] at slick.jdbc.JdbcActionComponent$SchemaActionExtensionMethodsImpl$$anon$6.$anonfun$run$7(JdbcActionComponent.scala:292)
[error] at slick.jdbc.JdbcActionComponent$SchemaActionExtensionMethodsImpl$$anon$6.$anonfun$run$7$adapted(JdbcActionComponent.scala:292)
[error] at slick.jdbc.JdbcBackend$SessionDef.withPreparedStatement(JdbcBackend.scala:425)
[error] at slick.jdbc.JdbcBackend$SessionDef.withPreparedStatement$(JdbcBackend.scala:420)
[error] at slick.jdbc.JdbcBackend$BaseSession.withPreparedStatement(JdbcBackend.scala:489)
[error] at slick.jdbc.JdbcActionComponent$SchemaActionExtensionMethodsImpl$$anon$6.$anonfun$run$6(JdbcActionComponent.scala:292)
[error] at slick.jdbc.JdbcActionComponent$SchemaActionExtensionMethodsImpl$$anon$6.$anonfun$run$6$adapted(JdbcActionComponent.scala:292)
[error] at scala.collection.Iterator.foreach(Iterator.scala:941)
[error] at scala.collection.Iterator.foreach$(Iterator.scala:941)
[error] at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
[error] at scala.collection.IterableLike.foreach(IterableLike.scala:74)
[error] at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
[error] at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
[error] at slick.jdbc.JdbcActionComponent$SchemaActionExtensionMethodsImpl$$anon$6.run(JdbcActionComponent.scala:292)
[error] at slick.jdbc.JdbcActionComponent$SchemaActionExtensionMethodsImpl$$anon$6.run(JdbcActionComponent.scala:290)
[error] at slick.jdbc.JdbcActionComponent$SimpleJdbcProfileAction.run(JdbcActionComponent.scala:28)
[error] at slick.jdbc.JdbcActionComponent$SimpleJdbcProfileAction.run(JdbcActionComponent.scala:25)
[error] at slick.basic.BasicBackend$DatabaseDef$$anon$3.liftedTree1$1(BasicBackend.scala:276)
[error] at slick.basic.BasicBackend$DatabaseDef$$anon$3.run(BasicBackend.scala:276)
[error] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
[error] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
[error] at java.lang.Thread.run(Thread.java:748)
[error] Nonzero exit code: 1
[error] (Compile / run) Nonzero exit code: 1
Additionally, I can use .createIfNotExists more than once on a schema where every table declares its primary key with the O.PrimaryKey column option.
Can I do something to massage the code? Is there a workaround so that .createIfNotExists remains usable in the composite primary key case?
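One possible workaround (a sketch reusing the MTable pattern from the answers above, not a confirmed fix): judging from the SQL in the exception, createIfNotExists guards the CREATE TABLE statement but not the separate ALTER TABLE ... ADD CONSTRAINT that a standalone primaryKey definition generates, so guard that table yourself:
// Create the occupant table (and with it the composite-key constraint)
// only when the table is absent; requires slick.jdbc.meta.MTable.
val guardedInit = for {
  _ <- users.schema.createIfNotExists
  _ <- rooms.schema.createIfNotExists
  tables <- MTable.getTables
  _ <- if (tables.exists(_.name.name == occupants.baseTableRow.tableName))
         DBIO.successful(())
       else
         occupants.schema.create
} yield ()
Then run db.run(guardedInit) in place of the three createIfNotExists generators at the top of initdup.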
There are some examples of using SQL over Spark Streaming in foreachRDD(). But if I want to use SQL in transform():
case class AlertMsg(host: String, count: Int, sum: Double)

val lines = ssc.socketTextStream("localhost", 8888)
lines.transform( rdd => {
  if (rdd.count > 0) {
    val t = sqc.jsonRDD(rdd)
    t.registerTempTable("logstash")
    val sqlreport = sqc.sql("SELECT host, COUNT(host) AS host_c, AVG(lineno) AS line_a FROM logstash WHERE path = '/var/log/system.log' AND lineno > 70 GROUP BY host ORDER BY host_c DESC LIMIT 100")
    sqlreport.map(r => AlertMsg(r(0).toString, r(1).toString.toInt, r(2).toString.toDouble))
  } else {
    rdd
  }
}).print()
I got this error:
[error] /Users/raochenlin/Downloads/spark-1.2.0-bin-hadoop2.4/logstash/src/main/scala/LogStash.scala:52: no type parameters for method transform: (transformFunc: org.apache.spark.rdd.RDD[String] => org.apache.spark.rdd.RDD[U])(implicit evidence$5: scala.reflect.ClassTag[U])org.apache.spark.streaming.dstream.DStream[U] exist so that it can be applied to arguments (org.apache.spark.rdd.RDD[String] => org.apache.spark.rdd.RDD[_ >: LogStash.AlertMsg with String <: java.io.Serializable])
[error] --- because ---
[error] argument expression's type is not compatible with formal parameter type;
[error] found : org.apache.spark.rdd.RDD[String] => org.apache.spark.rdd.RDD[_ >: LogStash.AlertMsg with String <: java.io.Serializable]
[error] required: org.apache.spark.rdd.RDD[String] => org.apache.spark.rdd.RDD[?U]
[error] lines.transform( rdd => {
[error] ^
[error] one error found
[error] (compile:compile) Compilation failed
It seems it only compiles if I use sqlreport.map(r => r.toString). Is that the only correct usage?
dstream.transform takes a function transformFunc: (RDD[T]) ⇒ RDD[U].
Therefore both branches of the if must produce the same RDD type, which is not the case here:
if (count == 0) => RDD[String]
if (count > 0) => RDD[AlertMsg]
Remove the if rdd.count ... optimization so that you have a single transformation path.
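Alternatively, keep the guard but make both branches produce RDD[AlertMsg] (a sketch, assuming sqc is a SQLContext as in the question; the guard stays because jsonRDD may fail to infer a schema on an empty batch):
lines.transform( rdd => {
  if (rdd.count > 0) {
    val t = sqc.jsonRDD(rdd)
    t.registerTempTable("logstash")
    val sqlreport = sqc.sql("SELECT host, COUNT(host) AS host_c, AVG(lineno) AS line_a FROM logstash WHERE path = '/var/log/system.log' AND lineno > 70 GROUP BY host ORDER BY host_c DESC LIMIT 100")
    sqlreport.map(r => AlertMsg(r(0).toString, r(1).toString.toInt, r(2).toString.toDouble))
  } else {
    // Same element type as the other branch, so U is inferred as AlertMsg.
    rdd.context.parallelize(Seq.empty[AlertMsg])
  }
}).print()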
This might be even more Scala than Play related, but I have this issue within some Play code.
I have
def index = Action.async { implicit request =>
  val content1Future = Blogs.getAllBlogs(request)
  val sidebar1Future = Blogs.getAllBlogsOverview(request)
  for {
    sidebar1 <- sidebar1Future
    content1 <- content1Future
    sidebar1Body <- Pagelet.readBody(sidebar1)
    content1Body <- Pagelet.readBody(content1)
  } yield {
    val sidebarcontents = List(sidebar1Body)
    val contents = List(content1Body)
    Ok(views.html.main("Index", contents, sidebarcontents)).withHeaders(("Cache-Control", "no-cache"))
  }
}
and it works fine.
Now I try to extract the for-comprehension into a separate function that deals with the 'sidebar', passing in the 'content'. Like this:
def index = Action.async { implicit request =>
  standardMain(Blogs.getAllBlogs(request))
}

def standardMain(content1Future: => Future[Result])(implicit request: Request[_]): Future[Result] = {
  val sidebar1Future = Blogs.getAllBlogsOverview(request)
  for {
    sidebar1 <- sidebar1Future
    content1 <- content1Future
    sidebar1Body <- Pagelet.readBody(sidebar1)
    content1Body <- Pagelet.readBody(content1)
  } yield {
    val sidebarcontents = List(sidebar1Body)
    val contents = List(content1Body)
    Ok(views.html.main("Index", contents, sidebarcontents)).withHeaders(("Cache-Control", "no-cache"))
  }
}
But then I get
[info] Compiling 1 Scala source to D:\Dropbox\Playground\PlayWorld\play-with-forms\target\scala-2.11\classes...
[error] D:\Dropbox\Playground\PlayWorld\play-with-forms\app\controllers\Application.scala:91: type mismatch;
[error] found : scala.concurrent.Future[play.api.mvc.Result]
[error] required: play.api.libs.iteratee.Iteratee[Array[Byte],?]
[error] content1 <- content1Future
[error] ^
[error] D:\Dropbox\Playground\PlayWorld\play-with-forms\app\controllers\Application.scala:90: type mismatch;
[error] found : play.api.libs.iteratee.Iteratee[Array[Byte],Nothing]
[error] required: scala.concurrent.Future[play.api.mvc.Result]
[error] sidebar1 <- sidebar1Future
[error] ^
[error] two errors found
[error] (compile:compile) Compilation failed
[error] application -
I tried many different calls, but somehow I do not know how to produce a play.api.libs.iteratee.Iteratee[Array[Byte], ?].
How can I achieve this? Thanks.
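A guess at the cause, since Blogs is not shown (it assumes getAllBlogs and getAllBlogsOverview are Play Action[AnyContent] values): with the parameter typed as Request[_], applying an action to request no longer matches Action.apply(request: Request[A]): Future[Result], so overload resolution falls back to EssentialAction.apply(rh: RequestHeader), which returns an Iteratee[Array[Byte], Result], exactly the type in the error. Typing the implicit parameter with the concrete body type may restore the original overload; a sketch:
def standardMain(content1Future: => Future[Result])
                (implicit request: Request[AnyContent]): Future[Result] = {
  // Same body as before; only the request type changed, so that
  // Blogs.getAllBlogsOverview(request) again resolves to Action.apply(Request[AnyContent]).
  val sidebar1Future = Blogs.getAllBlogsOverview(request)
  for {
    sidebar1 <- sidebar1Future
    content1 <- content1Future
    sidebar1Body <- Pagelet.readBody(sidebar1)
    content1Body <- Pagelet.readBody(content1)
  } yield {
    Ok(views.html.main("Index", List(content1Body), List(sidebar1Body)))
      .withHeaders("Cache-Control" -> "no-cache")
  }
}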