From what I've read, there is a way to work with nested classes to solve the problem of tables with more than 22 fields. It looks like this (with a simple table):
case class UserRow(id:Int, address1:Address, address2:Address)
case class Address(street:String,city:String)
class User(tag: Tag) extends Table[UserRow](tag, "User") {
  def id = column[Int]("id", O.PrimaryKey)
  def street1 = column[String]("STREET1")
  def city1 = column[String]("CITY1")
  def street2 = column[String]("STREET2")
  def city2 = column[String]("CITY2")
  def * = (id, address1, address2) <> (UserRow.tupled, UserRow.unapply)
  def address1 = (street1, city1) <> (Address.tupled, Address.unapply)
  def address2 = (street2, city2) <> (Address.tupled, Address.unapply)
}
What I've realized is that plain SQL - which requires implicit values - doesn't work with this solution, or at least I haven't been able to make it work.
I thought I could define the implicit values in the same way as the nested classes, like this:
implicit val getAddressResult = GetResult(r => Address(r.<<, r.<<))
implicit val getUserResult = GetResult(r => UserRow(r.<<, r.<<, r.<<))
But it doesn't work. It compiles, but at runtime it says that the user table is not found.
I'm very new to Scala and Slick, so I could have misunderstood some information or have some wrong concepts. What am I doing wrong?
UPDATE
This is what I'm doing in the test:
user.ddl.create
user += UserRow(0, Address("s11", "c11"), Address("s12", "c12"))
user += UserRow(1, Address("s21", "c21"), Address("s22", "c22"))
user += UserRow(2, Address("s31", "c31"), Address("s32", "c32"))
println(user.list)
val sqlPlain = sql"SELECT * FROM user".as[UserRow]
println(sqlPlain)
println(sqlPlain.list)
All of it works until the last statement, where I get the error "Table "USER" not found". The exact same test works perfectly for a non-nested case class.
UPDATE 2
As cvogt correctly pointed out to me, I was misreading the reported error; it wasn't related to the implicit GetResult values. His answer is correct, and so was my first approach.
Pass the PositionedResult r to the corresponding GetResult objects:
implicit val getAddressResult = GetResult(r => Address(r.<<, r.<<))
implicit val getUserResult =
GetResult(r => UserRow(r.<<, getAddressResult(r), getAddressResult(r)))
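For reference, here is a hedged sketch of how the plain SQL call from the update can line up with the mapped table, assuming an H2-style database that upper-cases unquoted identifiers (which is what would produce the "Table "USER" not found" error, since the table was created with the mixed-case name "User"):
// Slick 2.x sketch; the GetResult implicits above and an implicit session are assumed to be in scope.
// The table name has to be quoted in hand-written SQL, otherwise H2 looks for USER.
val sqlPlain = sql"""SELECT * FROM "User" """.as[UserRow]
println(sqlPlain.list)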
Related
I have the following case class:
case class Block(
  id: Option[Int] = None,
  blockId: Int,
  name: String,
  location: Option[Point] = None,
  geometry: Option[Geometry] = None
)
In Postgres I have a table SubBlock containing:
id : int,
block_id: Int,
name: String,
geom_location: geography,
sub_block_geom: geography
And I define a function to return the subBlock nearest to a specified point:
override def getNearestSubBlock(point: Point): Future[SubBlock] = {
  val query =
    sql"""SELECT sub_block_id, block_id, name, ST_AsText(geom_location), sub_block_geom
          from now.sub_block
          order by ST_Distance(geom_location, ST_MakePoint(${point.getX()}, ${point.getY()})::geography)
          limit 1""".as[SubBlock].head
  db.run(query)
}
implicit val getSubBlock = GetResult(r => SubBlock(r.nextIntOption(), r.nextInt(), r.nextString(), Option(Location.location2Point(Location.fromWKT(r.nextString()))), Option(new WKTReader().read(r.nextString()))))
And my request returns the right result, but then I get Exception in thread "main" java.lang.NullPointerException because sub_block_geom is null in my database. So I think the solution is to change the implicit val getSubBlock or to write the query with filter, sortBy, …, and I don't know how to do that.
Well... I am not too sure about your problem, as a lot of required details are missing. But from what I can see, you just need to properly handle the possibility of null in your getSubBlock.
implicit val getSubBlock = GetResult(r => {
  val id = r.nextIntOption()
  val blockId = r.nextInt()
  val name = r.nextString()
  val location: Option[Point] = r.nextStringOption().map(s => Location.location2Point(Location.fromWKT(s)))
  val geometry: Option[Geometry] = r.nextStringOption().map(s => new WKTReader().read(s))
  SubBlock(id, blockId, name, location, geometry)
})
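If the two geography columns are meant to be parsed with WKTReader, the query itself may also need adjusting. A hedged sketch, assuming ST_AsText is wanted on sub_block_geom as well (whether that matches your PostGIS setup is an assumption):
// Hypothetical variant of the original query: both geography columns are
// converted to WKT so the GetResult above can read them as (nullable) strings.
val query =
  sql"""SELECT sub_block_id, block_id, name,
               ST_AsText(geom_location), ST_AsText(sub_block_geom)
        FROM now.sub_block
        ORDER BY ST_Distance(geom_location, ST_MakePoint(${point.getX()}, ${point.getY()})::geography)
        LIMIT 1""".as[SubBlock].head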
object FingerprintsModel extends FingerprintDAO {
  // Fingerprint class definition
  class FingerprintsTable(tag: Tag) extends Table[Fingerprint](tag, "fingerprints") {
    def id = column[Long]("id", O.PrimaryKey, O.AutoInc)
    def customerId = column[String]("customer_id", O.NotNull)
    def template_one = column[Array[Byte]]("template_one", O.NotNull)
    def template_two = column[Array[Byte]]("template_two", O.NotNull)
    def created = column[DateTime]("created", O.NotNull)
    def updated = column[Option[DateTime]]("updated")
    def * = (id, customerId, template_one, template_two) <> (Fingerprint.tupled, Fingerprint.unapply _)
    def fingerprint = foreignKey("CUSTOMER", customerId, CustomersModel.customers)(_.id)
  }
and this is my insert statement:
FingerprintsModel.fingerprints.map(fi => (fi.customerId, fi.template_one, fi.template_two, fi.created))
  .insert((id, fingerprint.template_one, fingerprint.template_two, new DateTime()))
Summary
There are two main modifications you need:
You will want a TableQuery[FingerprintsTable] to call insert (or += or ++=) on; and
To get back the IDs inserted you need to use the returning method in Slick.
Worked example
It's hard to tell from the code you posted exactly what you have in mind. It would be helpful next time to simplify your example first.
I've assumed your model is something like this:
case class Fingerprint(
id: Long,
customerId: String,
template_one: Array[Byte]
)
I've left out one of the byte arrays and the created and updated fields as they don't seem relevant to the question. In other words, I've simplified.
The FingerprintsTable seems OK. I'm ignoring the foreign key as that doesn't seem relevant. Oh, and the O.NotNull options are now deprecated (in Slick 3 at least). You can leave them off because your columns are not Option values.
What we need is the table query, which I'd add inside FingerprintsModel:
lazy val fingerprints = TableQuery[FingerprintsTable]
lazy val fingerprintsWithID = fingerprints returning fingerprints.map(_.id)
You could use fingerprints to insert data. But you've asked for the IDs back, so you want to use fingerprintsWithID.
Putting it all together (again, using Slick 3 here):
import scala.concurrent.Await
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._
import slick.driver.H2Driver.api._ // assuming the H2 profile for this in-memory example; use your own profile's api._ otherwise

object FingerprintExample extends App {
  import FingerprintsModel._

  val testData = Seq(
    Fingerprint(0L, "Alice", Array(0x01, 0x02)),
    Fingerprint(0L, "Bob", Array(0x03, 0x04))
  )

  // A program that will create the schema, and insert the data, returning the IDs
  val program = for {
    _   <- fingerprints.schema.create
    ids <- fingerprintsWithID ++= testData
  } yield ids

  // Run the program using an in-memory database
  val db = Database.forConfig("h2mem1")
  val future = db.run(program)
  val result = Await.result(future, 10.seconds)
  println(s"The result is: $result")
}
Produces:
The result is: List(1, 2)
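If you ever need a single row instead of a batch, the same returning query hands back just that row's generated ID. A small sketch under the same assumptions as the example above (the data here is made up):
// Single-row insert: `+=` on the `returning` query yields a DBIO[Long]
// containing the generated primary key.
val insertOne: DBIO[Long] =
  fingerprintsWithID += Fingerprint(0L, "Carol", Array(0x05, 0x06))

// db.run(insertOne) then produces a Future[Long] with the new ID.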
Let's say I have a table:
object Suppliers extends Table[(Int, String, String, String)]("SUPPLIERS") {
  def id = column[Int]("SUP_ID", O.PrimaryKey)
  def name = column[String]("SUP_NAME")
  def state = column[String]("STATE")
  def zip = column[String]("ZIP")
  def * = id ~ name ~ state ~ zip
}
Table's database name
The table's database name can be accessed by going: Suppliers.tableName
This is supported by the Scaladoc on AbstractTable.
For example, the above table's database name is "SUPPLIERS".
Columns' database names
Looking through AbstractTable, getLinearizedNodes and indexes looked promising. No column names in their string representations though.
I assume that * means "all the columns I'm usually interested in." * is a MappedProjection, which has this signature:
final case class MappedProjection[T, P <: Product](
child: Node,
f: (P) ⇒ T,
g: (T) ⇒ Option[P])(proj: Projection[P])
extends ColumnBase[T] with UnaryNode with Product with Serializable
*.getLinearizedNodes contains a huge sequence of numbers, and I realized that at this point I'm just doing a brute force inspection of everything in the API for possibly finding the column names in the String.
Has anybody also encountered this problem before, or could anybody give me a better understanding of how MappedProjection works?
It requires you to rely on Slick internals, which may change between versions, but it is possible. Here is how it works for Slick 1.0.1: You have to go via the FieldSymbol. Then you can extract the information you want like how columnInfo(driver: JdbcDriver, column: FieldSymbol): ColumnInfo does it.
To get a FieldSymbol from a Column you can use fieldSym(node: Node): Option[FieldSymbol] and fieldSym(column: Column[_]): FieldSymbol.
To get the (qualified) column names you can simply do the following:
Suppliers.id.toString
Suppliers.name.toString
Suppliers.state.toString
Suppliers.zip.toString
It's not explicitly stated anywhere that the toString will yield the column name, so your question is a valid one.
Now, if you want to programmatically get all the column names, then that's a bit harder. You could try using reflection to get all the methods that return a Column[_] and call toString on them, but it wouldn't be elegant. Or you could hack a bit and get a select * SQL statement from a query like this:
val selectStatement = DB withSession {
  Query(Suppliers).selectStatement
}
And then parse out the column names.
This is the best I could do. If someone knows a better way then please share - I'm interested too ;)
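For completeness, here is a rough sketch of the reflection idea mentioned above (Slick 1.x, where lifted columns are scala.slick.lifted.Column; treat it as a hack under those assumptions, not an API):
import scala.slick.lifted.Column

// Collect every public no-argument method on the table object that returns a Column
// and use its toString, which (as noted above) yields the qualified column name.
val columnNames: Seq[String] =
  Suppliers.getClass.getMethods.toSeq
    .filter(m => classOf[Column[_]].isAssignableFrom(m.getReturnType) && m.getParameterTypes.isEmpty)
    .map(_.invoke(Suppliers).toString)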
Code is based on Lightbend Activator "slick-http-app".
slick version: 3.1.1
Added this method to the BaseDal:
def getColumns(): mutable.Map[String, Type] = {
  val columns = mutable.Map.empty[String, Type]

  def selectType(t: Any): Option[Any] = t match {
    case t: TableExpansion => Some(t.columns)
    case t: Select => Some(t.field)
    case _ => None
  }

  def selectArray(t: Any): Option[ConstArray[Node]] = t match {
    case t: TypeMapping => Some(t.child.children)
    case _ => None
  }

  def selectFieldSymbol(t: Any): Option[FieldSymbol] = t match {
    case t: FieldSymbol => Some(t)
    case _ => None
  }

  val t = selectType(tableQ.toNode)
  val c = selectArray(t.get)

  for (se <- c.get) {
    val col = selectType(se)
    val fs = selectFieldSymbol(col.get)
    columns += (fs.get.name -> fs.get.tpe)
  }
  columns
}
This method gets the column names (the real names in the DB) and their types from the TableQuery.
The imports used are:
import slick.ast._
import slick.util.ConstArray
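A hypothetical usage, assuming a DAL instance (called dal here) that mixes in this BaseDal and therefore has a tableQ in scope:
// Print every column name and its Slick type.
val cols = dal.getColumns()
cols.foreach { case (name, tpe) => println(s"$name -> $tpe") }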
Let's say that I have this method that runs a fairly basic query using Slick's plain SQL:
object Data {
  case class User(user: String, password: String)

  implicit val getUserResult = GetResult(r => User(r.<<, r.<<))

  def getUser(user: String, password: String): Option[User] = DB.withSession {
    sql"""
      SELECT "user",
             "password"
      FROM "user"
      WHERE "user" = $user AND
            "password" = $password
    """.as[User].firstOption
  }
}
What if I have a different query from the same table that has over 100 columns:
SELECT * FROM "user"
In this case there would be a whole lot of typing concerning these two lines:
case class User(user: String, password: String, something: Int, ...)
implicit val getUserResult = GetResult(r => User(r.<<, r.<<, r.<<, ...))
Is it possible to somehow automate these two lines without manually mapping 100 columns? Auto-inferring the types would be ideal, but even having every column returned as a string would be a good alternative.
If specifics are required, my stack is Play Framework 2.2.1, Scala 2.10.3, Java 8 64Bit, PostgreSQL 9.3
The function you give to GetResult receives a PositionedResult as its argument. Work with it as you like.
If you define
implicit val getListStringResult = scala.slick.jdbc.GetResult[List[String]](
  prs => (1 to prs.numColumns).map(_ => prs.nextString).toList
)
you can then say
sql"...".as[List[String]].firstOption
With a class and table definition looking like this:
case class Group(
  id: Long = -1,
  id_parent: Long = -1,
  label: String = "",
  description: String = "")

object Groups extends Table[Group]("GROUPS") {
  def id = column[Long]("ID", O.PrimaryKey, O.AutoInc)
  def id_parent = column[Long]("ID_PARENT")
  def label = column[String]("LABEL")
  def description = column[String]("DESC")

  def * = id ~ id_parent ~ label ~ description <> (Group, Group.unapply _)

  def autoInc = id_parent ~ label ~ description returning id into {
    case ((_, _, _), id) => id
  }
}
To update a record, I can do this:
def updateGroup(id: Long) = Groups.where(_.id === id)
def updateGroup(g: Group)(implicit session: Session) = updateGroup(g.id).update(g)
But I can't get updates to work using for expressions:
val findGById = for {
  id <- Parameters[Long]
  g <- Groups if g.id === id
} yield g
def updateGroupX(g: Group)(implicit session: Session) = findGById(g.id).update(g)
----------------------------------------------------------------------------^
Error: value update is not a member of scala.slick.jdbc.MutatingUnitInvoker[com.exp.Group]
I'm obviously missing something in the documentation.
The update method is supplied by the type UpdateInvoker. An instance of that type can be implicitly created from a Query by the methods productQueryToUpdateInvoker and/or tableQueryToUpdateInvoker (found in the BasicProfile), if they are in scope.
Now, the type of your findGById is not a Query but a BasicQueryTemplate[Long, Group]. Looking at the docs, I can find no way to get from a BasicQueryTemplate (which is a subtype of StatementInvoker) to an UpdateInvoker, neither implicit nor explicit. Thinking about it, that makes sense to me: I understand a query template (invoker) to be something that has already been "compiled" from an abstract syntax tree (Query) to a prepared statement rather early, before parameterization, whereas an update invoker can only be built from an abstract syntax tree, i.e. a Query object, because it needs to analyze the query and extract its parameters/columns. At least that's the way it appears to work at present.
With that in mind, a possible solution unfolds:
def findGById(id: Long) = for {
  g <- Groups if g.id === id
} yield g
def updateGroupX(g: Group)(implicit session: Session) = findGById(g.id).update(g)
Here findGById(id: Long) has the type Query[Groups, Group], which is converted by productQueryToUpdateInvoker to an UpdateInvoker[Group], on which the update method can finally be called.
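As a side note, the same pattern also covers partial updates: yield only the columns you want to change and call update with just those values. A small sketch in the same Slick 1.x style (updateLabel is a made-up name):
// Update a single column for one row; the yielded projection determines
// what update() expects.
def updateLabel(id: Long, newLabel: String)(implicit session: Session) =
  (for { g <- Groups if g.id === id } yield g.label).update(newLabel)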
Hope this helped.
Refer to http://madnessoftechnology.blogspot.ru/2013/01/database-record-updates-with-slick-in.html
I got stuck with updating today, and this blog post helped me a lot. Also refer to the first comment under the post.