I'm trying to run a transaction in Slick 3.1 that contains two updates. The second update is plain SQL using Slick's sqlu interpolator. This is my attempt:
val table = TableQuery[TableDB]
val update1 = table.filter(f => f.name === name).update(rec)
val update2 = sqlu"UPDATE table2 SET field = 1 WHERE field = 2"
val action = (for {
_ <- update1
_ <- update2 // <-- compilation error here
} yield ()).transactionally
val future = db.run(action.asTry)
// ... rest of the code
Slick complains on the update2 line with the following messages:
Implicit conversion found: ⇒ augmentString(): scala.collection.immutable.StringOps
type mismatch;
 found   : scala.collection.immutable.IndexedSeq[Unit]
 required: slick.dbio.DBIOAction[?,?,?]
Is it possible to make this work in a single database transaction?
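One way to see where that IndexedSeq[Unit] comes from: the augmentString hint suggests that sqlu is not resolving (commonly because the profile's api._ import is missing from that scope), so update2 is just a plain String. A for comprehension over a String iterates its characters. A minimal, Slick-free sketch of the mechanism:

```scala
// If `sqlu` is not in scope, the interpolated line is just a String.
val update2 = "UPDATE table2 SET field = 1 WHERE field = 2"

// `_ <- update2` maps over the String's characters (via augmentString),
// producing scala.collection.immutable.IndexedSeq[Unit] -- exactly the
// "found" type in the compiler error, where a DBIOAction was required.
val result = for { _ <- update2 } yield ()
```

If that is the cause, bringing the profile's api._ import into scope where update2 is defined should make sqlu produce a proper DBIO action that composes in the for comprehension.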
I am trying to implement a simple db query with optional pagination. My attempt:
def getEntities(limit: Option[Int], offset: Option[Int]) = {
// MyTable is a slick definition of the db table
val withLimit = limit.fold(MyTable)(l => MyTable.take(l)) // Error here:
// MyTable and MyTable.take(l)
// have different types
val withOffset = offset.fold(withLimit)(o => withLimit.drop(o))
val query = withOffset.result
db.run(query)
}
The problem is an error:
type mismatch:
found: slick.lifted.Query
required: slick.lifted.TableQuery
How to make this code runnable? And maybe a little bit prettier?
My current fix to get Query from TableQuery is to add .filter(_ => true), but IMHO this is not nice:
val withLimit = limit.fold(MyTable.filter(_ => true))(l => MyTable.take(l))
Try to replace
val MyTable = TableQuery[SomeTable]
with
val MyTable: Query[SomeTable, SomeTable#TableElementType, Seq] = TableQuery[SomeTable]
i.e., specify the type explicitly (statically upcast the TableQuery to a Query). Then take and drop both return a Query of the same type, so both folds compile.
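The fold pattern itself can be checked against plain collections. Here is a minimal, Slick-free sketch (names are illustrative) that mirrors the question's take-then-drop order; with Seq there is no type mismatch because take and drop both return Seq, which is exactly what the upcast to Query achieves on the Slick side:

```scala
// Plain-collections analogue of the optional-pagination fold pattern:
// apply limit/offset only when present, otherwise pass the rows through.
def getEntities[A](rows: Seq[A], limit: Option[Int], offset: Option[Int]): Seq[A] = {
  val withLimit  = limit.fold(rows)(l => rows.take(l))
  val withOffset = offset.fold(withLimit)(o => withLimit.drop(o))
  withOffset
}
```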
I am using Slick 3 and I am trying to run integration tests that perform some inserts, execute code that uses the db, and then roll back all the inserts and deletions at the end of the test itself, but I cannot find any documentation about it.
Is it possible?
How can I achieve it?
You need to use .transactionally around the DBIOAction, e.g.:
val a = (for {
ns <- coffees.filter(_.name.startsWith("ESPRESSO")).map(_.name).result
_ <- DBIO.seq(ns.map(n => coffees.filter(_.name === n).delete): _*)
} yield ()).transactionally
val f: Future[Unit] = db.run(a)
For more see
http://slick.typesafe.com/doc/3.1.1/dbio.html#transactions-and-pinned-sessions
I would advise dropping and creating the table schema before and after each test using the BeforeAndAfter ScalaTest trait, with code like the following:
def createTable(): Future[Unit] =
  db.run(MTable.getTables.flatMap { tables =>
    // Create the table only if it does not already exist; the create is
    // sequenced into the same action via flatMap rather than fired off
    // in a separate, unawaited db.run.
    if (!tables.exists(_.name.name == table.baseTableRow.tableName))
      table.schema.create
    else
      DBIO.successful(())
  })
def dropTable(): Future[Unit] = db.run(table.schema.drop)
Whenever I get an update request for a given id, I am trying to update the masterId and updatedDtTm columns in a DB table (I don't want to update createdDtTm). The following is my code:
case class Master(id:Option[Long] = None,masterId:String,createdDtTm:Option[java.util.Date],
updatedDtTm:Option[java.util.Date])
/**
* This is my Slick Mapping table
* with the default projection
*/
class MappingMaster(tag: Tag) extends Table[Master](tag, "master") {
implicit val DateTimeColumnType = MappedColumnType.base[java.util.Date, java.sql.Timestamp](
{
ud => new Timestamp(ud.getTime)
}, {
sd => new java.util.Date(sd.getTime)
})
def id = column[Long]("id",O.PrimaryKey,O.AutoInc)
def masterId = column[String]("master_id")
def createdDtTm = column[java.util.Date]("created_dttm")
def updatedDtTm = column[java.util.Date]("updated_dttm")
def * = (id.?, masterId, createdDtTm.?, updatedDtTm.?) <>
  ((Master.apply _).tupled, Master.unapply _)
}
/**
* Some where in the DAO update call
*/
db.run(masterRecords.filter(_.id === id).map(rw => (rw.masterId, rw.updatedDtTm)).
  update(("new_master_id", new Date())))
// I also tried the following
db.run(masterRecords.filter(_.id === id).map(rw => (rw.masterId, rw.updatedDtTm).shaped[(String, java.util.Date)]).update(("new_master_id", new Date())))
The Slick documentation states that in order to update multiple columns one needs to map to the corresponding columns and then update them.
The problem here is the following - the update method seems to be accepting a value of Nothing.
I also tried the following which was doing the same thing as above:
val t = for {
  ms <- masterRecords if (ms.id === 1234L)
} yield (ms.masterId, ms.updatedDtTm)
db.run(t.update(("new_master_id", new Date())))
When I compile the code, it gives me the following compilation error:
Slick does not know how to map the given types.
[error] Possible causes: T in Table[T] does not match your * projection. Or you use an unsupported type in a Query (e.g. scala List).
[error] Required level: slick.lifted.FlatShapeLevel
[error] Source type: (slick.lifted.Rep[String], slick.lifted.Rep[java.util.Date])
[error] Unpacked type: (String, java.util.Date)
[error] Packed type: Any
[error] db.run(masterRecords.filter(_.id === id).map(rw => (rw.masterId,rw.updatedDtTm).shaped[(String,java.util.Date)]).update(("new_master_id",new Date()))
I am using Scala 2.11 with Slick 3.0.1 and IntelliJ as the IDE. I would really appreciate it if you could shed some light on this.
Cheers,
Sathish
(Replaces original answer.) It seems the implicit has to be in scope where the queries are built; the following compiles:
case class Master(id:Option[Long] = None,masterId:String,createdDtTm:Option[java.util.Date],
updatedDtTm:Option[java.util.Date])
implicit val DateTimeColumnType = MappedColumnType.base[java.util.Date, java.sql.Timestamp](
{
ud => new Timestamp(ud.getTime)
}, {
sd => new java.util.Date(sd.getTime)
})
class MappingMaster(tag:Tag) extends Table[Master](tag,"master") {
def id = column[Long]("id",O.PrimaryKey,O.AutoInc)
def masterId = column[String]("master_id")
def createdDtTm = column[java.util.Date]("created_dttm")
def updatedDtTm = column[java.util.Date]("updated_dttm")
def * = (id.? , masterId , createdDtTm.? , updatedDtTm.?) <> ((Master.apply _).tupled , Master.unapply _)
}
private val masterRecords = TableQuery[MappingMaster]
val id: Long = 123
db.run(masterRecords.filter(_.id === id).map(rw => (rw.masterId, rw.updatedDtTm)).update(("new_master_id", new Date())))
val t = for {
ms <- masterRecords if (ms.id === id)
} yield (ms.masterId , ms.updatedDtTm)
db.run(t.update(("new_master_id",new Date())))
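As a cross-check of the filter-map-update pattern (select the row by id, then touch only the mapped columns), here is a plain-collections analogue with illustrative names; dates are simplified to Long timestamps. As in the answer above, createdDtTm is left untouched:

```scala
// Simplified in-memory stand-in for the "master" table row.
case class MasterRow(id: Long, masterId: String, createdDtTm: Long, updatedDtTm: Long)

// Analogue of: filter(_.id === id).map(rw => (rw.masterId, rw.updatedDtTm)).update(...)
// Only masterId and updatedDtTm change on the matching row; createdDtTm is preserved.
def updateMaster(rows: Seq[MasterRow], id: Long, newMasterId: String, now: Long): Seq[MasterRow] =
  rows.map {
    case r if r.id == id => r.copy(masterId = newMasterId, updatedDtTm = now)
    case r               => r
  }
```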
The following working code from Slick 2.1 returns a single integer (which in this example, happens to be the result of running a function called "foobar"):
def getFoobar(): Int = DB.withSession {
val query = Q.queryNA[Int]("select foobar()")
query.first
}
How would one port this to Slick 3.0? According to the Slick 3.0 docs, the query would have to be converted to a DBIOAction. So this is what I've tried:
import driver.api._
...
def getFoobar(): Future[Int] = {
val query = sql"select foobar()".as[Int]
db.run(query)
}
but this results in the following compilation error:
[error] found : slick.profile.SqlStreamingAction[Vector[Int],Int,slick.dbio.Effect]#ResultAction [Int,slick.dbio.NoStream,slick.dbio.Effect]
[error] required: MyDAO.this.driver.api.DBIO[Seq[Int]]
It appears that the sql interpolator is yielding a SqlStreamingAction rather than a DBIO, as db.run is expecting.
What would be the correct way to write this in the new Slick 3 API?
I used something similar and it worked for me:
import slick.driver.MySQLDriver.api._
def get(id: String): Future[Channel] = {
  implicit val getChannelResult = GetResult(r => Channel(r.<<, r.<<, r.<<, r.<<, r.<<))
  val query = sql"select * from Channel where id = $id".as[Channel]
  db.run(query.head) // .head yields Future[Channel]; .headOption would yield Future[Option[Channel]]
}
db.run accepts any DBIOAction[T, NoStream, Nothing], including SqlStreamingAction, StreamingDriverAction, DriverAction, etc.
I suspect the problem lies in the driver or db configuration, given the error:
[error] required: MyDAO.this.driver.api.DBIO[Seq[Int]]
Could you paste the driver and db configuration steps, so we can take a deeper look and identify the actual cause?
I am using Scala and Slick and I am trying to execute a simple query with two conditions:
import JiraData._
import org.scala_tools.time.Imports._
import scala.slick.driver.PostgresDriver.simple._
val today = new DateTime()
val yesterday = today.plusDays(-1)
implicit val session = Database.forURL("jdbc:postgresql://localhost/jira-performance-manager",
driver = "org.postgresql.Driver",
user = "jira-performance-manager",
password = "jira-performance-manager").withSession {
implicit session =>
val activeUsers = users.filter(_.active === true)
for (activeUser <- activeUsers) {
val activeUserWorklogs = worklogs.filter(x => x.username === activeUser.name && x.workDate === yesterday)
}
}
But I receive error:
Error:(20, 95) value === is not a member of scala.slick.lifted.Column[org.scala_tools.time.Imports.DateTime]
Note: implicit value session is not applicable here because it comes after the application point and it lacks an explicit result type
val activeUserWorklogs = worklogs.filter(x => x.username === activeUser.name && x.workDate === yesterday)
^
What's wrong here? How can I get a list of results filtered by two conditions?
scala-tools.time uses Joda's DateTime (see https://github.com/jorgeortiz85/scala-time/blob/master/src/main/scala/org/scala_tools/time/Imports.scala). Slick does not have built-in support for Joda. There is slick-joda-mapper: https://github.com/tototoshi/slick-joda-mapper. Or it is easy to add yourself: http://slick.typesafe.com/doc/2.1.0/userdefined.html#using-custom-scalar-types-in-queries
As a side note: something like
for (activeUser <- activeUsers) {
val activeUserWorkogs = worklogs.filter(...)
looks like a step in the wrong direction: it will run another query for each active user. It is better to use a join, or to run a single accumulated query for the worklogs of all active users.
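The "single accumulated query" idea corresponds to collecting the active names once and filtering the worklogs against that set (in Slick this could be expressed with inSet on the username column). A plain-collections sketch with illustrative record types:

```scala
// Hypothetical, simplified records standing in for the users/worklogs tables.
case class User(name: String, active: Boolean)
case class Worklog(username: String, workDate: String)

// One accumulated lookup instead of one query per active user:
// collect the active names once, then filter all worklogs against that set.
def worklogsForActiveUsers(users: Seq[User], worklogs: Seq[Worklog], date: String): Seq[Worklog] = {
  val activeNames = users.filter(_.active).map(_.name).toSet
  worklogs.filter(w => activeNames.contains(w.username) && w.workDate == date)
}
```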