I'm new to Slick (2.1) and am lost creating my first query using a union. Because the parameters will eventually be provided externally (via a web interface), I set them as optional. Please see the comment in the code below. How do I create an appropriate query?
My actual class is more complex which I simplified for the sake of this question.
case class MyStuff(id: Int, value: Int, text: String)
class MyTable(tag: Tag) extends Table[MyStuff](tag, "MYSTUFF") {
  def id = column[Int]("ID", O.NotNull)
  def value = column[Int]("VALUE", O.NotNull)
  def text = column[String]("TEXT", O.NotNull)

  def * = (id, value, text).shaped <> ((MyStuff.apply _).tupled, MyStuff.unapply)
}
object myTable extends TableQuery(new MyTable(_)) {
  def getStuff(ids: Option[List[Int]], values: Option[List[Int]])(implicit session: Session): Option[List[MyStuff]] = {
    /*
     1) If 'ids' is given, retrieve all matching entries, if any.
     2) If 'values' is given, retrieve all matching entries (if any), union them with the results of the previous step, and remove duplicate entries.
     3) If neither 'ids' nor 'values' is given, retrieve all entries.
    */
    ??? // to be implemented
  }
}
getStuff is called like this:
db.withSession { implicit session => val myStuff = myTable.getStuff(...) } // db: Database
You can use inSet when the values are Some, fall back to a literal false otherwise, and only apply the filter at all when at least one parameter is defined.
if (ids.isDefined || values.isDefined)
  myTable.filter(row =>
    ids.map(row.id inSet _).getOrElse(slick.lifted.LiteralColumn(false))
  ) union myTable.filter(row =>
    values.map(row.value inSet _).getOrElse(slick.lifted.LiteralColumn(false))
  )
else myTable
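For completeness, here is one way this could be wrapped into getStuff under Slick 2.1. This is only a sketch: the H2 driver import is an assumption (use your actual driver's simple._), and it returns a plain List rather than the Option[List[MyStuff]] of the original signature.

// Sketch only: driver import is an assumption, swap in your own driver's simple._
import scala.slick.driver.H2Driver.simple._

def getStuff(ids: Option[List[Int]], values: Option[List[Int]])(implicit session: Session): List[MyStuff] = {
  val query =
    if (ids.isDefined || values.isDefined)
      myTable.filter { row =>
        ids.map(row.id inSet _).getOrElse(slick.lifted.LiteralColumn(false))
      } union myTable.filter { row =>
        values.map(row.value inSet _).getOrElse(slick.lifted.LiteralColumn(false))
      }
    else myTable
  query.list // union (as opposed to unionAll) already removes duplicate rows
}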
If I understand you correctly, you want to build a filter at runtime from the given input. You can look at the extended docs for 3.0 (http://slick.typesafe.com/doc/3.0.0-RC1/queries.html#sorting-and-filtering), under "building criteria using a 'dynamic filter' e.g. from a webform". This part of the docs is also valid for version 2.1.
Related
I have a database table for objects called Campaigns, containing three fields:
Id (int, not nullable)
Version (int, not nullable)
Stuff (Text, nullable)
Let's call CampaignsRow the corresponding Slick entity class.
When I select rows from Campaigns, I don't always need to read Stuff, which contains big chunks of text.
However, I'd very much like to keep working with the class CampaignsRow in the codebase instead of a tuple, and so to be able to sometimes just drop the Stuff column while retaining the original row type.
Basically, I'm trying to write the following function :
// Force dropping the Stuff column from the current Query
def smallCampaign(campaigns: Query[Campaigns, CampaignsRow, Seq]): Query[Campaigns, CampaignsRow, Seq] = {
  val smallCampaignQuery = campaigns.map {
    row => CampaignsRow(row.id, row.version, None: Option[String])
  }
  smallCampaignQuery /* Fails because the type is now wrong: I have a Query[(Rep[Int], Rep[Int], Rep[Option[String]]), (Int, Int, Option[String]), Seq] */
}
Any idea how to do this? I suspect this has to do with Shape in Slick, but I can't find a resource to start understanding this class, and the Slick source code is proving too complex for me to follow.
You're actually already doing almost what you want in def *, the default mapping. You can use the same tools in the map method. Your two tools are mapTo and <>.
As you've found, there is the mapTo method which you can only use if your case class exactly matches the shape of the tuple, so if you wanted a special case class just for this purpose:
case class CampaignLite(id: Int, version: Int)
val smallCampaignQuery = campaigns.map {
  row => (row.id, row.version).mapTo[CampaignLite]
}
As you want to reuse your existing class, you can write your own convert functions instead of using the standard tupled and unapply and pass those to <>:
object CampaignRow {
  def tupleLite(t: (Int, Int)) = CampaignRow(t._1, t._2, None)
  def unapplyLite(c: CampaignRow) = Some((c.id, c.version))
}

val smallCampaignQuery = campaigns.map {
  row => (row.id, row.version) <> (CampaignRow.tupleLite, CampaignRow.unapplyLite)
}
This gives you the most flexibility, as you can do whatever you like in your convert functions, but it's a bit more wordy.
As row is an instance of the Campaigns table you could always define it there alongside *, if you need to use it regularly.
class Campaigns ... {
  ...
  def * = (id, version, stuff).mapTo[CampaignRow]
  def liteMapping = (id, version) <> (CampaignRow.tupleLite, CampaignRow.unapplyLite)
}

val liteCampaigns = campaigns.map(_.liteMapping)
Reference: Essential Slick 3, section 5.2.1
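As a quick illustration (not from the reference above), a hypothetical usage of the lite projection, assuming a Slick 3 Database value named db and the profile's api._ in scope:

// Hypothetical usage; `db` and the surrounding setup are assumptions
import scala.concurrent.Future

val liteAction: DBIO[Seq[CampaignRow]] = campaigns.map(_.liteMapping).result
val liteRows: Future[Seq[CampaignRow]] = db.run(liteAction) // stuff comes back as None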
If I understand your requirement correctly, you could consider making CampaignRow a case class that models your Campaigns table class by having Campaigns extend Table[CampaignRow] and providing the bidirectional mapping for the * projection:
case class CampaignRow(id: Int, version: Int, stuff: Option[String])

class Campaigns(tag: Tag) extends Table[CampaignRow](tag, "CAMPAIGNS") {
  // ...
  def * = (id, version, stuff) <> (CampaignRow.tupled, CampaignRow.unapply)
}
You should then be able to do something like below:
val campaigns = TableQuery[Campaigns]
val smallCampaignQuery = campaigns.map(_.copy(stuff = None))
For a relevant example, see the Slick documentation.
I'm trying to write a simple Active Record style DAO in Slick and I'm finding it very hard going.
I'd like to insert a row using the case class representation, set its "updated" column using database time and return the new row values as a case class.
i.e. the DAO should look like this:
class FooDao {
  def db: JdbcBackend.Database

  def insert(foo: FooRow): Future[FooRow] = {
    ???
  }
}
// This class is auto-generated by slick codegen
case class FooRow(
  id: Int,
  created: Option[java.sql.Timestamp],
  updated: Option[java.sql.Timestamp],
  x1: String,
  x2: String,
  x3: String)
I want the returned FooRow object to have the correct ID and created timestamp that was assigned by the database.
I'll also want an update method that updates an existing FooRow, using the database clock for the updated column and returning a new FooRow.
Things I have tried:
Using "returning"
I can insert the case class and get the generated id using the returning helper as described at Slick 3.0 Insert and then get Auto Increment Value, i.e.
class FooDao {
  def db: JdbcBackend.Database

  def insert(foo: FooRow): Future[FooRow] = {
    db.run((Foos returning Foos.map(_.id)) += foo)
      .map(id => foo.copy(id = id))
  }
}
This works but I can't see any way to use CURRENT_TIMESTAMP() in the insert statement to get the updated column filled in. The types just don't allow it.
Using sqlu
I can write the SQL that I'm aiming for using sqlu, but then I can't see any way to get the generated key out of Slick. The JDBC API allows it via getGeneratedKeys() but I can't see how to access it here:
class FooDao {
  def db: JdbcBackend.Database

  def insert(foo: FooRow): Future[FooRow] = {
    val sql: DBIO[Int] = sqlu"""
      INSERT INTO foos
                  (created,
                   updated,
                   x1,
                   x2,
                   x3)
      VALUES      (current_timestamp(),
                   current_timestamp(),
                   ${foo.x1},
                   ${foo.x2},
                   ${foo.x3})
      """
    db.run(sql)
      .map(rowCount => ???)
  }
}
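One way getGeneratedKeys() can be reached without leaving Slick is to drop to the raw JDBC connection inside a SimpleDBIO action. The sketch below is not part of the original question: it uses the hypothetical foos table and FooRow class from above, and the profile import is an assumption (older Slick 3.0/3.1 versions use slick.driver.* instead).

// Sketch: use Slick's SimpleDBIO to access the raw JDBC connection,
// so the database clock fills created/updated and the generated id is read back.
import java.sql.Statement
import slick.jdbc.H2Profile.api._ // assumption: any JdbcProfile should work here

def insertWithDbClock(foo: FooRow): DBIO[FooRow] = SimpleDBIO { ctx =>
  val stmt = ctx.connection.prepareStatement(
    """INSERT INTO foos (created, updated, x1, x2, x3)
      |VALUES (current_timestamp(), current_timestamp(), ?, ?, ?)""".stripMargin,
    Statement.RETURN_GENERATED_KEYS)
  try {
    stmt.setString(1, foo.x1)
    stmt.setString(2, foo.x2)
    stmt.setString(3, foo.x3)
    stmt.executeUpdate()
    val keys = stmt.getGeneratedKeys
    keys.next()
    foo.copy(id = keys.getInt(1)) // created/updated could be read back with a follow-up select
  } finally stmt.close()
}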
Using active-slick
The https://github.com/strongtyped/active-slick project looks like nearly what I need, but it doesn't appear to support updated/created columns.
Using database default values / triggers
I suppose I could implement the created column using a database default value, but how would I prevent Slick from passing in an explicit value in its insert statement and overruling that?
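For the database-default idea, one possible way (a sketch, not from the question) to keep Slick from sending explicit values for created/updated is to insert through a projection that omits those columns entirely, so the column defaults apply:

// Sketch: insert via a projection that leaves out created/updated so the
// database defaults (e.g. DEFAULT CURRENT_TIMESTAMP) fill them in.
// The returned FooRow still carries the caller's created/updated values;
// a follow-up select would be needed to read back what the database set.
import scala.concurrent.ExecutionContext.Implicits.global

def insertUsingDefaults(foo: FooRow): Future[FooRow] =
  db.run {
    (Foos.map(f => (f.x1, f.x2, f.x3)) returning Foos.map(_.id)) += ((foo.x1, foo.x2, foo.x3))
  }.map(id => foo.copy(id = id))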
I suppose I could implement the updated column using a database trigger, but I'd prefer to keep my logic in the Scala layer and keep the database simple.
Using the web-server clock instead of the DB clock
The closest I can get to this at the moment is to use the web-server clock instead of the DB clock:
class FooDao {
  def db: JdbcBackend.Database

  def insert(foo: FooRow): Future[FooRow] = {
    val now = Some(Timestamp.from(Instant.now()))
    val toInsert = foo.copy(
      created = now,
      updated = now)
    db.run((Foos returning Foos.map(_.id)) += toInsert)
      .map(id => toInsert.copy(id = id))
  }
}
However, I'd much prefer to use the database clock for these columns, in case different web servers have different times.
Any ideas would be gratefully received. I'm strongly considering abandoning Slick and using JOOQ, where this would be easy.
I'm experimenting with scalikejdbc (trying to move from Slick), and I'm stuck on creating my schema from the entities (read: case classes).
// example Slick equivalent
case class X(id: Int, ...)

class XTable(tag: Tag) extends Table[X](tag, "x") {
  def id = column[Int]("id")
  ... // more columns
  def * = (id, ...) <> (X.tupled, X.unapply)
}

val xTable = TableQuery[XTable]
db.run(xTable.schema.create) // creates in the DB a table named "x", with "id" and all the other columns
It seemed like using SQLSyntaxSupport could be a step in the right direction, with something like
// scalikejdbc
case class X(id: Int, ...)

object X extends SQLSyntaxSupport[X] {
  def apply(x: ResultName[X])(rs: WrappedResultSet): X = new X(id = rs.get(x.id), ...)
}

X.table.??? // what to do now?
but could not figure out the next step.
What I'm looking for is the opposite of the reverse-engineering tool described at http://scalikejdbc.org/documentation/reverse-engineering.html.
Any help/ideas, in particular pointers to a relevant part of the documentation, will be appreciated.
You can use the statements method to get the SQL code, as for most other SQL-based Actions. Schema Actions are currently the only Actions that can produce more than one statement.
schema.create.statements.foreach(println)
schema.drop.statements.foreach(println)
http://slick.typesafe.com/doc/3.0.0/schemas.html
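If the goal is still to create the schema from the scalikejdbc side, one option (a sketch; the connection details are placeholders) is to take the statements Slick produces and execute them through scalikejdbc:

// Sketch: run the DDL produced by Slick's schema action via scalikejdbc.
// The JDBC driver, URL and credentials below are placeholders.
import scalikejdbc._

Class.forName("org.h2.Driver")
ConnectionPool.singleton("jdbc:h2:mem:example", "user", "pass")

val ddl: Seq[String] = xTable.schema.create.statements.toSeq // from the Slick definition above

DB autoCommit { implicit session =>
  ddl.foreach(stmt => SQL(stmt).execute.apply())
}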
I have a Slick 3.0 table definition similar to the following:
case class Simple(a: String, b: Int, c: Option[String])

trait Tables { this: JdbcDriver =>
  import api._

  class Simples(tag: Tag) extends Table[Simple](tag, "simples") {
    def a = column[String]("a")
    def b = column[Int]("b")
    def c = column[Option[String]]("c")
    def * = (a, b, c) <> (Simple.tupled, Simple.unapply)
  }

  lazy val simples = TableQuery[Simples]
}

object DB extends Tables with MyJdbcDriver
I would like to be able to do 2 things:
Get a list of the column names as Seq[String]
For an instance of Simple, generate a Seq[String] that would correspond to how the data would be inserted into the database using a raw query (e.g. Simple("hello", 1, None) becomes Seq("'hello'", "1", "NULL"))
What would be the best way to do this using the Slick table definition?
First of all, it is not possible to trick Slick into changing the order on the left side of the <> operator in the * method without changing the order of values in Simple, the row type of Simples; i.e. what Ben assumed is not possible. The ProvenShape return type of the * projection method ensures that there is a Shape available for translating between the Column-based type in * and the client-side type. If you write def * = (c, b, a) <> (Simple.tupled, Simple.unapply) with Simple defined as case class Simple(a: String, b: Int, c: Option[String]), Slick will complain with the error "No matching Shape found. Slick does not know how to map the given types...". Ergo, the column order in * matches the field order of Simple, so you can iterate over all the elements of an instance of Simple with its productIterator and get the values in column order.
Secondly, you already have the definition of the Simples table in your code, and querying meta tables for the same information you already have is not sensible. You can get all your column names with a one-liner: simples.baseTableRow.create_*.map(_.name). Note that the * projection of the table also defines the columns generated when you create the table schema, so columns not mentioned in the projection are not created, and the statement above is guaranteed to return exactly what you need and not to drop anything.
To recap briefly:
To get a list of the column names of the Simples table as Seq[String] use
simples.baseTableRow.create_*.map(_.name).toSeq
To generate a Seq[String] that would correspond to how the data would be inserted into the database using a raw query for aSimple, an instance of Simple, use aSimple.productIterator.toSeq
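To get from productIterator to the SQL-literal strings in the question (e.g. Seq("'hello'", "1", "NULL")), a small formatting step is still needed. A rough sketch follows; the quoting rules are deliberately simplistic and assumed, not general-purpose SQL escaping:

// Sketch: naive SQL-literal rendering of a Simple instance; the column order
// is guaranteed to match the * projection, as explained above.
def toSqlLiterals(s: Simple): Seq[String] =
  s.productIterator.map {
    case None            => "NULL"
    case Some(v: String) => s"'$v'"
    case Some(v)         => v.toString
    case v: String       => s"'$v'"
    case v               => v.toString
  }.toSeq

toSqlLiterals(Simple("hello", 1, None)) // Seq("'hello'", "1", "NULL")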
To get the column names, try this:
db.run(for {
  metaTables <- slick.jdbc.meta.MTable.getTables("simples")
  columns <- metaTables.head.getColumns
} yield columns.map(_.name)) foreach println
This will print
Vector(a, b, c)
And for the case class values, you can use productIterator:
Simple("hello", 1, None).productIterator.toVector
is
Vector(hello, 1, None)
You still have to do the value mapping, and guarantee that the order of the columns in the table and the values in the case class are the same.
I'm following the Slick documentation example for autoincrementing fields and I'm having trouble creating a mapped projection that ... well, only has one column.
case class UserRole(id: Option[Int], role: String)
object UserRoles extends Table[UserRole]("userRole") {
  def id = column[Int]("ID", O.PrimaryKey, O.AutoInc)
  def role = column[String]("ROLE")
  // ...
  def * = id.? ~ role <> (UserRole, UserRole.unapply _)
  // NEXT LINE ERRORS OUT
  def forInsert = role <> ({t => UserRole(None, t._1)}, {(r: UserRole) => Some((r.role))}) returning id
}
The error is "value <> is not a member of scala.slick.lifted.Column[String]"
I also thought it'd be more efficient to design my schema like so:
case class UserRole(role: String)

object UserRoles extends Table[UserRole]("userRole") {
  def role = column[String]("ROLE", O.PrimaryKey)
  // ...
  def * = role <> (UserRole, UserRole.unapply _)
}
But then I start getting the same error as above, too. "value <> is not a member of scala.slick.lifted.Column[String]"
What am I really doing? Do I just not have a projection anymore because I only have one column? If so, what should I be doing?
This is a known issue with Slick; mapped projections do not work with a single column. See https://github.com/slick/slick/issues/40
Luckily, you don't need a mapped projection for your code to work. Just omit everything after and including the <>. See scala slick method I can not understand so far for a great explanation of projections. It includes the information you need to get going.