I'm trying to map my classes with Slick so I can persist them.
My business objects are defined this way:
case class Book(id: Option[Long], author: Author, title: String, readDate: Date, review: String)
case class Author(id: Option[Long], name: String, surname: String)
Then I defined the "table" class for authors:
class Authors(tag: Tag) extends Table[Author](tag, "AUTHORS") {
  def id = column[Option[Long]]("AUTHOR_ID", O.PrimaryKey, O.AutoInc)
  def name = column[String]("NAME")
  def surname = column[String]("SURNAME")
  def * = (id, name, surname) <> ((Author.apply _).tupled, Author.unapply)
}
And for Books:
class Books(tag: Tag) extends Table[Book](tag, "BOOKS") {
  implicit val authorMapper = MappedColumnType.base[Author, Long](_.id.get, AuthorDAO.DAO.findById(_))

  def id = column[Option[Long]]("BOOK_ID", O.PrimaryKey, O.AutoInc)
  def author = column[Author]("FK_AUTHOR")
  def title = column[String]("TITLE")
  def readDate = column[Date]("DATE")
  def review = column[Option[String]]("REVIEW")
  def * = (id, author, title, readDate, review) <> ((Book.apply _).tupled, Book.unapply)
}
But when I compile I get this error:
Error:(24, 51) No matching Shape found.
Slick does not know how to map the given types.
Possible causes: T in Table[T] does not match your * projection. Or you use an unsupported type in a Query (e.g. scala List).
Required level: scala.slick.lifted.FlatShapeLevel
Source type: (scala.slick.lifted.Column[Option[Long]], scala.slick.lifted.Column[model.Author], scala.slick.lifted.Column[String], scala.slick.lifted.Column[java.sql.Date], scala.slick.lifted.Column[Option[String]])
Unpacked type: (Option[Long], model.Author, String, java.sql.Date, String)
Packed type: Any
def * = (id, author, title, readDate, review) <> ((Book.apply _).tupled , Book.unapply)
^
and also this one:
Error:(24, 51) not enough arguments for method <>: (implicit evidence$2: scala.reflect.ClassTag[model.Book], implicit shape: scala.slick.lifted.Shape[_ <: scala.slick.lifted.FlatShapeLevel, (scala.slick.lifted.Column[Option[Long]], scala.slick.lifted.Column[model.Author], scala.slick.lifted.Column[String], scala.slick.lifted.Column[java.sql.Date], scala.slick.lifted.Column[Option[String]]), (Option[Long], model.Author, String, java.sql.Date, String), _])scala.slick.lifted.MappedProjection[model.Book,(Option[Long], model.Author, String, java.sql.Date, String)].
Unspecified value parameter shape.
def * = (id, author, title, readDate, review) <> ((Book.apply _).tupled , Book.unapply)
^
What's the mistake here?
What am I not getting about Slick?
Thank you in advance!
Slick is not an ORM, so there is no automatic mapping from a foreign key to an entity; this question has been asked many times on SO (here, here just to name two).
Let's assume for a moment that what you are trying to do is possible:
implicit val authorMapper =
MappedColumnType.base[Author, Long](_.id.get, AuthorDAO.DAO.findById(_))
So you are telling the projection to use the row id and fetch the entity related to that id. There are three problems in your case: first, you don't handle failures (id.get); second, your primary key is optional (it shouldn't be).
The third problem is that Slick would fetch each entity separately: if you execute some query and get 100 books, Slick will issue 100 additional queries just to fetch the related entity. Performance-wise this is suicide; you are completely bypassing the SQL layer (joins), which has the best performance, only to have the possibility of shortening your DAOs.
Fortunately this doesn't seem to be possible. Mappers are used for types Slick doesn't support out of the box (for example different date formats, without having to call conversion functions explicitly) or to inject format conversions when fetching/inserting rows. Have a look at the documentation on how to use joins (depending on your version).
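For illustration, here is a minimal sketch of the join approach, assuming the Books table stores the author's id as a plain Long foreign key (the reduced set of columns is just for the example):

class Books(tag: Tag) extends Table[(Option[Long], Long, String)](tag, "BOOKS") {
  def id = column[Option[Long]]("BOOK_ID", O.PrimaryKey, O.AutoInc)
  def authorId = column[Long]("FK_AUTHOR")
  def title = column[String]("TITLE")
  def * = (id, authorId, title)
}

val books = TableQuery[Books]
val authors = TableQuery[Authors]

// One SQL join instead of one extra query per book.
val booksWithAuthors = for {
  b <- books
  a <- authors if b.authorId === a.id
} yield (b.title, a.name, a.surname)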
Ende Neu's answer is more knowledgeable and relevant to the use case described in the question, and is probably the more correct answer.
The following is merely an observation I made which may have helped tmnd91 by answering the question:
What's the mistake here?
I noticed that:
case class Book( ... review : String){}
does not match with:
def review = column[Option[String]]("REVIEW")
It should be:
def review = column[String]("REVIEW")
Related
I have a database table for objects called Campaigns, containing three fields:
Id (int, not nullable)
Version (int, not nullable)
Stuff (Text, nullable)
Let's call CampaignsRow the corresponding Slick entity class.
When I select rows from Campaigns, I don't always need to read Stuff, which contains big chunks of text.
However, I'd very much like to work in the codebase with the class CampaignsRow instead of a tuple, and so to be able to sometimes just drop the Stuff column while retaining the original type.
Basically, I'm trying to write the following function:
// Force dropping the Stuff column from the current Query
def smallCampaign(campaigns: Query[Campaigns, CampaignsRow, Seq]): Query[Campaigns, CampaignsRow, Seq] = {
  val smallCampaignQuery = campaigns.map {
    row => CampaignsRow(row.id, row.version, None: Option[String])
  }
  smallCampaignQuery /* Fails because the type is now wrong: I have a Query[(Rep[Int], Rep[Int], Rep[Option[String]]), (Int, Int, Option[String]), Seq] */
}
Any idea how to do this? I suspect this has to do with Shape in Slick, but I can't find a resource to start understanding this class, and the Slick source code is proving too complex for me to follow.
You're actually already doing almost what you want in def *, the default mapping. You can use the same tools in the map method. Your two tools are mapTo and <>.
As you've found, there is the mapTo method, which you can only use if your case class exactly matches the shape of the tuple. So if you wanted a special case class just for this purpose:
case class CampaignLite(id: Int, version: Int)
val smallCampaignQuery = campaigns.map {
  row => (row.id, row.version).mapTo[CampaignLite]
}
As you want to reuse your existing class, you can write your own convert functions instead of using the standard tupled and unapply and pass those to <>:
object CampaignRow {
  def tupleLite(t: (Int, Int)) = CampaignRow(t._1, t._2, None)
  def unapplyLite(c: CampaignRow) = Some((c.id, c.version))
}

val smallCampaignQuery = campaigns.map {
  row => (row.id, row.version) <> (CampaignRow.tupleLite, CampaignRow.unapplyLite)
}
This gives you the most flexibility, as you can do whatever you like in your convert functions, but it's a bit more wordy.
As row is an instance of the Campaigns table you could always define it there alongside *, if you need to use it regularly.
class Campaigns ... {
  ...
  def * = (id, version, stuff).mapTo[CampaignRow]
  def liteMapping = (id, version) <> (CampaignRow.tupleLite, CampaignRow.unapplyLite)
}
val liteCampaigns = campaigns.map(_.liteMapping)
Reference: Essential Slick 3, section 5.2.1
If I understand your requirement correctly, you could consider making CampaignRow a case class that models your Campaigns table class by having Campaigns extend Table[CampaignRow] and providing the bidirectional mapping for the * projection:
case class CampaignRow(id: Int, version: Int, stuff: Option[String])
class Campaigns(tag: Tag) extends Table[CampaignRow](tag, "CAMPAIGNS") {
  // ...
  def * = (id, version, stuff) <> (CampaignRow.tupled, CampaignRow.unapply)
}
You should then be able to do something like below:
val campaigns = TableQuery[Campaigns]
val smallCampaignQuery = campaigns.map( _.copy(stuff = None) )
For a relevant example, here's a Slick doc.
When using tableQuery.ddl.create, it creates only the columns in the projection. However in our use case there are columns which are used ONLY for filtering and/or ordering, so they are not part of the projection, but they need to be created:
case class CacheEntry(source: String, destination: String, hops: Int)

class CacheTable(tag: Tag) extends Table[CacheEntry](tag, "cache") {
  def source = column[String]("source")
  def destination = column[String]("dest")
  def hops = column[Int]("hops")
  def timestamp = column[LocalDateTime]("ts", O.DBType("timestamp default now"))
  def * = (source, destination, hops) <> ((CacheEntry.apply _).tupled, CacheEntry.unapply)
}
How can we convince Slick to create the timestamp column with TableQuery[CacheTable].ddl.create?
Are we approaching this in the wrong way? We definitely do NOT want the ts to show up in the CacheEntry (we could live with it in this case, but we have more complicated cases where this is not desirable).
You could define something like a case class Timestamped[E](entry: E, timestamp: LocalDateTime), and change your CacheTable to a Table[Timestamped[CacheEntry]]. The def * projection is probably going to look ugly (if you don't rely on some shapeless magic), but that's one way to do it.
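A rough sketch of what that might look like, assuming Slick 2.x syntax and an implicit column mapper for LocalDateTime in scope (the pack/unpack functions are purely illustrative):

case class Timestamped[E](entry: E, timestamp: LocalDateTime)

class CacheTable(tag: Tag) extends Table[Timestamped[CacheEntry]](tag, "cache") {
  def source = column[String]("source")
  def destination = column[String]("dest")
  def hops = column[Int]("hops")
  def timestamp = column[LocalDateTime]("ts", O.DBType("timestamp default now"))

  // The * projection has to pack and unpack both layers by hand.
  private val pack: ((String, String, Int, LocalDateTime)) => Timestamped[CacheEntry] =
    { case (s, d, h, ts) => Timestamped(CacheEntry(s, d, h), ts) }
  private val unpack: Timestamped[CacheEntry] => Option[(String, String, Int, LocalDateTime)] =
    t => Some((t.entry.source, t.entry.destination, t.entry.hops, t.timestamp))

  def * = (source, destination, hops, timestamp) <> (pack, unpack)
}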
I'm new to Slick (2.1) and am lost creating my first query using union. Because the parameters will eventually be provided externally (via a web interface), I made them optional. Please see the comment in the code below. How do I create an appropriate query?
My actual class is more complex which I simplified for the sake of this question.
case class MyStuff(id: Int, value: Int, text: String)

class MyTable(tag: Tag) extends Table[MyStuff](tag, "MYSTUFF") {
  def id = column[Int]("ID", O NotNull)
  def value = column[Int]("VALUE", O NotNull)
  def text = column[String]("TEXT", O NotNull)
  def * = (id, value, text).shaped <> ((MyStuff.apply _).tupled, MyStuff.unapply)
}
object myTable extends TableQuery(new MyTable(_)) {
  def getStuff(ids: Option[List[Int]], values: Option[List[Int]])(implicit session: Session): Option[List[MyStuff]] = {
    /*
     1) If 'ids' are given, retrieve all matching entries, if any.
     2) If 'values' are given, retrieve all matching entries (if any), union with the results of the previous step, and remove duplicate entries.
     3) If neither 'ids' nor 'values' are given, retrieve all entries.
    */
  }
}
getStuff is called like this:
db: Database withSession { implicit session => val myStuff = myTable.getStuff(...) }
You can use inSet if the values are Some, otherwise a literal false, and only filter when something is not None.
if (ids.isDefined || values.isDefined)
  myTable.filter(row =>
    ids.map(row.id inSet _).getOrElse(scala.slick.lifted.LiteralColumn(false))
  ) union myTable.filter(row =>
    values.map(row.value inSet _).getOrElse(scala.slick.lifted.LiteralColumn(false))
  )
else myTable
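Wired into getStuff, that could look roughly like this (a sketch only; it returns a plain List instead of the Option[List[...]] from the question, and assumes Slick 2.1's .list invoker to run the query):

def getStuff(ids: Option[List[Int]], values: Option[List[Int]])(implicit session: Session): List[MyStuff] = {
  val query =
    if (ids.isDefined || values.isDefined)
      myTable.filter(row =>
        ids.map(row.id inSet _).getOrElse(scala.slick.lifted.LiteralColumn(false))
      ) union myTable.filter(row =>
        values.map(row.value inSet _).getOrElse(scala.slick.lifted.LiteralColumn(false))
      )
    else myTable
  query.list // union already removes duplicate rows
}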
If I understand you correctly, you want to build a filter at runtime from the given input. You can look at the extended docs for 3.0 (http://slick.typesafe.com/doc/3.0.0-RC1/queries.html#sorting-and-filtering) at "building criteria using a "dynamic filter" e.g. from a webform". This part of the docs is also valid for version 2.1.
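For completeness, a sketch of that dynamic-filter style using the ids and values parameters from getStuff (assuming Slick 2.1, and that combining the optional criteria with || is acceptable; the literal true keeps all rows when neither parameter is given):

val query = myTable.filter { row =>
  List(
    ids.map(row.id inSet _),
    values.map(row.value inSet _)
  ).flatten
   .reduceLeftOption(_ || _)
   .getOrElse(scala.slick.lifted.LiteralColumn(true))
}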
In many slick examples in which the type parameter of the table is a case class and the table has an auto-incrementing primary key, I have seen an Option used in the case class for the id field:
case class Item(id: Option[Long], name: String)

object Items extends Table[Item]("item") {
  def id = column[Long]("id", O.PrimaryKey, O.AutoInc)
  def name = column[String]("name")
  def * = id.? ~ name <> (Item.apply _, Item.unapply _)
}
This kind of makes sense to me, because the id field will have no meaningful value until the object is inserted into the table. However, database queries will always give me back Items that have the id set to something, and it gets extremely tedious to always fold or pattern match on something that I know will not be None. I could just put a 0L in the id field when I create a new Item, but this doesn't seem like a good choice.
How is this typically dealt with? Are those the only two options?
This related question has some possible answers in it. The question itself is about a much more specific issue with postgres though.
Update: see Rikard's comment below.
You could just call .get in the places where you know the id is not None, which is what people usually do, I suppose.
An alternative would be having two different classes: one with an id field and one without. Or an ID trait which you only mix in for classes that have an ID.
trait WithID { def id: Int }

case class Person(name: String)

// create a person with id
(newid: Int, name: String) => new Person(name) with WithID { def id = newid }
The mapping you have to provide to Slick will be more verbose, but the usage code will be simpler and type-safe. I believe #nafg has an abstraction for this in https://github.com/nafg/slick-additions but I may be mistaken.
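For example, a small usage sketch of what the type-level id buys you (purely illustrative):

// Code that needs a persisted entity can demand the id in the type,
// so there is no Option to unwrap and no .get to fail at runtime.
def describe(p: Person with WithID): String = s"${p.name} has id ${p.id}"

val saved = new Person("Ada") with WithID { def id = 42 }
describe(saved)            // compiles
// describe(Person("Bob")) // does not compile: no id in the type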
I'm following the Slick documentation example for autoincrementing fields and I'm having trouble creating a mapped projection that ... well, only has one column.
case class UserRole(id: Option[Int], role: String)
object UserRoles extends Table[UserRole]("userRole") {
  def id = column[Int]("ID", O.PrimaryKey, O.AutoInc)
  def role = column[String]("ROLE")
  // ...
  def * = id.? ~ role <> (UserRole, UserRole.unapply _)
  // NEXT LINE ERRORS OUT
  def forInsert = role <> ({t => UserRole(None, t._1)}, {(r: UserRole) => Some((r.role))}) returning id
}
The error is "value <> is not a member of scala.slick.lifted.Column[String]"
I also thought it'd be more efficient to design my schema like so:
case class UserRole(role: String)
object UserRoles extends Table[UserRole]("userRole") {
  def role = column[String]("ROLE", O.PrimaryKey)
  // ...
  def * = role <> (UserRole, UserRole.unapply _)
}
But then I start getting the same error as above, too. "value <> is not a member of scala.slick.lifted.Column[String]"
What am I really doing wrong? Do I just not have a projection anymore because I only have one column? If so, what should I be doing?
This is a known issue with Slick; mapped projections do not work with a single column. See https://github.com/slick/slick/issues/40
Luckily, you don't need a mapped projection for your code to work. Just omit everything after and including the <>. See "scala slick method I can not understand so far" for a great explanation of projections. It includes the information you need to get going.
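For instance, a minimal sketch of that workaround for the single-column case, keeping the Slick 1.x-style syntax from the question: drop the mapped projection and let the table work directly with the column type instead of UserRole.

object UserRoles extends Table[String]("userRole") {
  def role = column[String]("ROLE", O.PrimaryKey)
  def * = role
}

Queries then yield plain Strings, which you can wrap in UserRole yourself where needed.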