Defining a projection to map to nested case classes - Scala

I have these case classes:
case class PolicyHolder(id: String, firstName: String, lastName: String)
case class Policy(address: Future[Address], policyHolder: Future[PolicyHolder], created: RichDateTime, duration: RichDuration)
I then have a Slick schema defined for Policy:
class PolicyDAO(tag: Tag) extends Table[Policy](tag, "POLICIES") with DbConfig {
  def address = column[String]("ADDRESS", O.PrimaryKey)
  def policyHolder = foreignKey("POLICY_HOLDER_FK", address, TableQuery[PolicyHolderDAO])(_.id)
  def created = column[RichDateTime]("CREATED")
  def duration = column[String]("DURATION")
  def * = (address, policyHolder, created, duration) <> (Policy.apply, Policy.unapply)
}
What is the best way to define this projection so that the policyHolder field inside my Policy case class is mapped from the foreign key value to an actual instance of the PolicyHolder case class?

Our solution to this problem is to place the foreign key id in the case class, and then to use a lazy val or a def (the latter possibly backed by a cache) to retrieve the record using the key. This assumes that your PolicyHolders are stored in a separate table; if they're denormalized but you want to treat them as separate case classes, then you can have the lazy val / def in Policy construct a new case class instead of retrieving the record using the foreign key.
class PolicyDAO(tag: Tag) extends Table[Policy](tag, "POLICIES") with DbConfig {
  def address = column[String]("ADDRESS", O.PrimaryKey)
  def policyHolderId = column[String]("POLICY_HOLDER_ID")
  def created = column[RichDateTime]("CREATED")
  def duration = column[String]("DURATION")
  def * = (address, policyHolderId, created, duration) <> (Policy.apply, Policy.unapply)
}
case class Policy(address: Future[Address], policyHolderId: Future[String], created: RichDateTime, duration: RichDuration) {
  lazy val policyHolder = policyHolderId.map(id => PolicyHolderDAO.get(id))
}
We also used a common set of create/update/delete methods to account for the nesting, so that when a Policy is committed its inner PolicyHolder will also be committed; we used a CommonDAO class that extended Table and had the prototypes for the create/update/delete methods, and then all DAOs extended CommonDAO instead of Table and overrode create/update/delete as necessary.
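A minimal sketch of that CommonDAO pattern, assuming Slick 2.x style with an implicit Session (the no-op default bodies and the commented-out PolicyHolderDAO.create helper are placeholders, not our actual code):
abstract class CommonDAO[E](tag: Tag, tableName: String) extends Table[E](tag, tableName) {
  // Prototypes for the CRUD operations; concrete DAOs override as necessary.
  def create(e: E)(implicit s: Session): Unit = ()
  def update(e: E)(implicit s: Session): Unit = ()
  def delete(e: E)(implicit s: Session): Unit = ()
}

class PolicyDAO(tag: Tag) extends CommonDAO[Policy](tag, "POLICIES") with DbConfig {
  // ... columns and * projection as above ...
  override def create(p: Policy)(implicit s: Session): Unit = {
    // Commit the nested PolicyHolder first, then insert the Policy row itself,
    // so a Policy is never persisted without its PolicyHolder.
    // PolicyHolderDAO.create(...); policies.insert(p)
  }
}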
Edit: To cut down on errors and reduce the amount of boilerplate we had to write, we used Slick's code generation tool; this way the CRUD operations could be generated automatically from the schema.
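For reference, the generator can be run as a standalone app. A hedged sketch (slick.codegen is the Slick 3.x package; 2.x used a different one, and the H2 names below are placeholders):
object GenerateTables extends App {
  slick.codegen.SourceCodeGenerator.main(Array(
    "slick.jdbc.H2Profile", // Slick profile (an assumption; substitute your DB's)
    "org.h2.Driver",        // JDBC driver class
    "jdbc:h2:mem:test",     // database URL
    "src/main/scala",       // output folder
    "com.example.dao"       // package for the generated code
  ))
}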

Related

Custom column (Object) type in Slick

It seems that I can't find anywhere how to properly use custom column types in Slick, and I've been struggling for a while. The Slick documentation
suggests MappedColumnType, but I found it usable only for simple use cases like primitive type wrappers (or it's probably just me not knowing how to use it properly).
Let's say that I have a Jobs table in my DB described by the JobsTableDef class. In that table, I have columns companyId and responsibleUserId, which are foreign keys to Company and User objects in their respective tables (CompaniesTableDef, UsersTableDef).
class JobsTableDef(tag: Tag) extends Table[Job](tag, "jobs") {
  def id = column[Long]("id", O.AutoInc, O.PrimaryKey)
  def title = column[String]("title")
  def companyId = column[Long]("companyId")
  def responsibleUserId = column[Long]("responsibleUserId")
  def companyFK = foreignKey("COMPANY_ID_FK", companyId, companies)(i => i.id)
  def responsibleUserFK = foreignKey("RESPONSIBLE_USER_FK", responsibleUserId, users)(i => i.id)
  val companies = TableQuery[CompaniesTableDef]
  val users = TableQuery[UsersTableDef]
  override def * = (id, title, companyId, responsibleUserId) <> (Job.tupled, Job.unapply)
}
class CompaniesTableDef(tag: Tag) extends Table[Company](tag, "companies") {
  def id = column[Long]("id", O.AutoInc, O.PrimaryKey)
  def name = column[String]("name")
  def about = column[String]("about")
  override def * = (id, name, about) <> (Company.tupled, Company.unapply)
}

class UsersTableDef(tag: Tag) extends Table[User](tag, "users") {
  def id = column[Long]("id", O.AutoInc, O.PrimaryKey)
  def username = column[String]("username", O.Unique)
  override def * = (id, username) <> (User.tupled, User.unapply)
}
What I would like to achieve is to automatically 'deserialize' the Company and User represented by their IDs in the Jobs table. For example:
class JobsTableDef(tag: Tag) extends Table[Job](tag, "jobs") {
  def id = column[Long]("id", O.AutoInc, O.PrimaryKey)
  def title = column[String]("title")
  def company = column[Company]("companyId")
  def responsibleUser = column[User]("responsibleUserId")
  def companyFK = foreignKey("COMPANY_ID_FK", companyId, companies)(i => i.id.?)
  def responsibleUserFK = foreignKey("RESPONSIBLE_USER_FK", responsibleUserId, users)(i => i.id.?)
  val companies = TableQuery[CompaniesTableDef]
  val users = TableQuery[UsersTableDef]
  override def * = (id, title, company, responsibleUser) <> (Job.tupled, Job.unapply)
}
given that my Job class is defined like this:
case class Job(
  id: Long,
  title: String,
  company: Company,
  responsibleUser: User
)
Currently, I'm doing it the old-fashioned way: getting the Job from the DB, reading companyId and responsibleUserId, then querying the DB again and manually constructing another Job object (of course, I could also join the tables, get the data as a tuple, and then construct the Job object). I seriously doubt that this is the way to go. Is there a smarter and more elegant way to instruct Slick to automagically fetch linked objects from other tables?
EDIT: I'm using Play 2.6.12 with Slick 3.2.2
After a couple of days of deeper investigation, I've concluded that it's currently impossible in Slick. What I was looking for could be described as auto-joining tables through custom column types. Slick indeed supports custom column types (embodied by MappedColumnType, as described in the docs), but it works only for relatively simple types that aren't composed of other objects deserialized from the DB (at least not automatically; you could always try to fetch another object from the DB and then Await.result() the resulting Future, but I guess that's not a good practice).
So, to answer myself: 'auto joining' isn't possible in Slick, so I'm falling back to manual joins with manual object construction.
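For what it's worth, here is a minimal sketch of that manual-join fallback, assuming the column definitions from the first JobsTableDef listing, the Job case class shown above, and a db: Database plus an implicit ExecutionContext in scope (the jobs table's * projection is not used, since the join yields its columns explicitly):
import slick.jdbc.MySQLProfile.api._ // an assumption; substitute your profile
import scala.concurrent.Future

val jobs      = TableQuery[JobsTableDef]
val companies = TableQuery[CompaniesTableDef]
val users     = TableQuery[UsersTableDef]

// Join jobs -> companies -> users, then build the Job by hand.
def findJob(jobId: Long): Future[Option[Job]] = {
  val query = jobs
    .filter(_.id === jobId)
    .join(companies).on(_.companyId === _.id)
    .join(users).on(_._1.responsibleUserId === _.id)
    .map { case ((j, c), u) => (j.id, j.title, c, u) } // c and u map via their * projections

  db.run(query.result.headOption).map(_.map {
    case (id, title, company, user) => Job(id, title, company, user)
  })
}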

Slick schema/design guidelines

Suppose I have a schema like this (simplified), using Slick 2.1:
// Banks
case class Bank(id: Int, name: String)

class Banks(tag: Tag) extends Table[Bank](tag, "banks") {
  def id = column[Int]("id", O.PrimaryKey)
  def name = column[String]("name")
  def * = (id, name) <> (Bank.tupled, Bank.unapply)
}

lazy val banks = TableQuery[Banks]
Each bank has, say, a 1:1 BankInfo, which I keep in a separate table:
// Bank Info
case class BankInfo(bank_id: Int, ...)

class BankInfos(tag: Tag) extends Table[BankInfo](tag, "bank_infos") {
  def bankId = column[Int]("bank_id", O.PrimaryKey)
  ...
}
lazy val bankInfos = TableQuery[BankInfos]
And each bank has associated 1:M BankItems:
// Bank Item
case class BankItem(id: Int, bank_id: Int, ...)
class BankItems(tag: Tag) extends Table[BankItem](tag, "bank_items") {
  def id = column[Int]("id", O.PrimaryKey)
  def bankId = column[Int]("bank_id")
  ...
}
lazy val bankItems = TableQuery[BankItems]
So, if I used an ORM, I would have convenient accessors for associated data, something like bank.info or bank.items. I've read "Migrating from ORMs", and I understand that Slick doesn't support full relation mapping, though I've seen examples where foreignKey was used.
Basically, where should I place the code that accesses related data (I want to access all BankItems for some Bank, and its BankInfo)? Should it be implemented in the case classes, in the Table classes, or elsewhere? Can someone give me practical advice on what the "standard practice" is in this case?
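For concreteness, a hedged sketch of one common convention (not an authoritative answer): keep the association queries in a repository object next to the TableQuery vals, rather than on the case classes. Slick 2.x API with an implicit Session assumed:
import scala.slick.driver.H2Driver.simple._ // an assumption; use your driver

object BankRepository {
  // Queries are defs that can be composed further before being run.
  def infoFor(bankId: Int) = bankInfos.filter(_.bankId === bankId)
  def itemsFor(bankId: Int) = bankItems.filter(_.bankId === bankId)
}

// usage, inside a withSession / withTransaction scope:
// val info  = BankRepository.infoFor(bank.id).firstOption
// val items = BankRepository.itemsFor(bank.id).list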

Using Couchbase as cache over Slick

I'm trying to use Couchbase as a cache layer for a relational database that is accessed using Slick. The skeleton of my code that's relevant to the question is as follows:
class RdbTable[T <: Table[_]](implicit val bucket: CouchbaseBucket) {
  type ElementType = T#TableElementType

  private val table = TableQuery[T].baseTableRow

  private def cacheAll(implicit session: Session) =
    TableQuery[T].list foreach (elem => cache(elem))

  private def cache(elem: ElementType) =
    table.primaryKeys foreach (pk => bucket.set[ElementType](key(pk, elem), elem))

  private def key(pk: PrimaryKey, elem: ElementType) = ???

  ...
}
As you can see, I want to cache each element by all of its primary keys. For this purpose, I need to obtain the value of that key for the given element. But I don't see an obvious way to compute the value of a primary key (the column value for a single-column key; the tuple value for a multi-column key).
Any suggestions on what to do? Note that the code MUST NOT know what the actual tables and their columns are. It must be completely general.
We're doing something similar, using Redis as the cache. Each of our records has only one primary key, but in some cases we need to include additional data with the cache key to avoid ambiguity. For example, we have a ManyToMany record that represents an association between two records; when we return a ManyToMany record we'll embed one (but not both) of the associated records, so the cache key needs to include the type of the associated record that we're returning.
trait Record {
  val cacheKey: CacheKey
}

trait ManyToManyRecord extends Record {
  override val cacheKey: ManyToManyCacheKey
}

class CacheKey(recordType: String, key: Int) {
  def getKey: String = recordType + ":" + key.toString
}

class ManyToManyCacheKey(recordType: String, key: Int, assocType: String)
    extends CacheKey(recordType, key) {
  override def getKey: String = recordType + ":" + key.toString + ":" + assocType
}
All of our tables use an integer primary key called "id", so it's easy for us to figure out what the value of "key" is. If you're working with a more complicated schema and don't want to manually write out the "def key: String" (or whatever) definitions for all of your record/table types, then you could try using Slick code generation to automatically generate record/table classes and objects with "def key" created directly from the schema. However, the learning curve for Slick code generation (or any other code generation tool) is steep, so if this is your only use for it, you'd probably be better off writing "def key" by hand. (We generate somewhere between 20-30% of our code using the code generation tool, so the initial investment in learning how to use it has paid off.)
Slick doesn't come with a built-in primary key extractor for entities. What you can do is use interfaces, type classes, or reflection, e.g. variants of the following.
Either make your entities implement a trait:
trait HasPrimaryKey {
  def primaryKey: Any
}

class RdbTable[T <: Table[_ <: HasPrimaryKey]](implicit val bucket: CouchbaseBucket) {
  ...
  private def key(elem: ElementType) = elem.primaryKey
}

// and for each entity:
case class Person( ... ) extends HasPrimaryKey {
  def primaryKey = ...
}
or a type class:
trait KeyTypeClass[E, T <: Table[E]] {
  def key(e: E): Any
}

class RdbTable[T <: Table[_]](implicit val bucket: CouchbaseBucket,
                              keyTC: KeyTypeClass[T#TableElementType, T]) {
  ...
  private def key(elem: ElementType) = keyTC.key(elem)
}

// and for each entity:
implicit val personKey = new KeyTypeClass[Person, PersonTable] {
  def key(p: Person) = ...
}
or use reflection to iterate over the primary keys and pull the values out of the corresponding fields of the entity.
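A hedged sketch of the reflection route, assuming the entities are case classes and that the caller supplies the primary-key field names (resolving Slick's PrimaryKey columns back to field names is the fiddly part and is left out here):
// Pull primary-key values out of an entity via its (nullary) case-class accessors.
def keyOf(elem: Any, pkFields: Seq[String]): String =
  pkFields
    .map(f => elem.getClass.getMethod(f).invoke(elem))
    .mkString(":")

// usage:
// case class Person(id: Int, name: String)
// keyOf(Person(42, "Ada"), Seq("id")) // => "42"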
Code generation, as mentioned by Zim-Zam, can help with the repetitive elements.

How to write the class and table-class mapping for Slick 2 instead of using a case class?

I used a case class to map my class to data for Slick 2 before, but now I'm using another Play plugin whose objects are case classes, and my class inherits from one of those case classes. So I can no longer use a case class, since Scala forbids case-class-to-case-class inheritance.
before:
case class User()

class UserTable(tag: Tag) extends Table[User](tag, "User") {
  ...
  def * = (...) <> (User.tupled, User.unapply)
}
it works.
But now I need to change above to below:
case class BasicProfile()

class User(...) extends BasicProfile(...) {
  ...
  def unapply(i: User): Tuple12[...] = Tuple12(...)
}

class UserTable(tag: Tag) extends Table[User](tag, "User") {
  ...
  def * = (...) <> (User.tupled, User.unapply)
}
I do not know how to write the tupled and unapply methods (I'm not sure whether my attempt above is correct) the way the case class template auto-generates them. Or perhaps you can show me another way to map the class to the table with Slick 2.
Can anyone give me an example?
First of all, this case class is a bad idea:
case class BasicProfile()
Case classes compare by their member values, and this one doesn't have any. Also, the name is not great, because Slick itself has a BasicProfile type, which may cause confusion.
Regarding your class
class User(...) extends BasicProfile(...) {
  ...
  def unapply(i: User): Tuple12[...] = Tuple12(...)
}
It is possible to emulate case classes yourself. Are you doing that because of the 22 field limit? FYI: Scala 2.11 supports larger case classes. We are doing what you are trying at Sport195, but there are several aspects to take care of.
apply and unapply need to be members of object User (the companion object of class User). .tupled is not a method you write yourself; the Scala compiler generates it automatically. It turns a method like .apply that takes a list of arguments into a function that takes a single tuple of those arguments. As tuples are limited to 22 columns, so is .tupled, but you could of course auto-generate one yourself; you may have to give it another name.
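A minimal sketch of hand-writing those members for a plain (non-case) class, ignoring the BasicProfile parent and using two hypothetical fields:
class User(val id: Long, val name: String)

object User {
  def apply(id: Long, name: String): User = new User(id, name)
  def unapply(u: User): Option[(Long, String)] = Some((u.id, u.name))
  // hand-written stand-in for the compiler-generated .tupled
  val tupled: ((Long, String)) => User = (apply _).tupled
}

// which lets the usual projection compile:
// def * = (id, name) <> (User.tupled, User.unapply)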
We are using the Slick code generator in combination with the Twirl template engine (which uses # to insert expressions; the $ parts are emitted as-is into the generated Scala code and evaluated when the generated code is compiled/run). Here are a few snippets that may help you.
Generate the apply method:
/** Factory for #{name} objects
#{indentN(2,entityColumns.map(c => "* #param "+c.name+" "+c.doc).mkString("\n"))}
*/
final def apply(
#{indentN(2,
entityColumns.map(c =>
colWithTypeAndDefault(c)
).mkString(",\n")
)}
) = new #{name}(#{columnsCSV})
Generate unapply method:
#{if(entityColumns.size <= 22)
s"""
/** Extractor for ${name} objects */
final def unapply(o: ${name}) = Some((${entityColumns.map(c => "o."+c.name).mkString(", ")}))
""".trim
else
""}
Trait that can be mixed into User to make it a Scala Product:
trait UserBase extends Product {
  // Product interface
  def canEqual(that: Any): Boolean = that.isInstanceOf[#name]
  def productArity: Int = #{entityColumns.size}
  def productElement(n: Int): Any = Seq(#{columnsCSV})(n)
  override def toString = #{name}+s"(${productIterator.toSeq.mkString(",")})"
  ...
A case-class-like .copy method:
final def copy(
  #{indentN(2,columnsCopy)}
): #{name} = #{name}(#{columnsCSV})
To use those classes with Slick you have several options; all are somewhat newer and not (yet) well documented. Slick's normal <> operator goes via tuples, but that's not an option for more than 22 columns. One option is the new fastpath converters; another is mapping via a Slick HList. No examples exist for either. A third option is going via a custom Shape, which is what we do. This requires you to define a custom shape for your User class, and another class defined in terms of Column types to mirror User within queries, like this: http://slick.typesafe.com/doc/2.1.0/api/#scala.slick.lifted.ProductClassShape This is too verbose to write by hand, so we use the following template code for it:
/** class for holding the columns corresponding to #{name}
* used to identify this entity in a Slick query and map
*/
class #{name}Columns(
#{indent(
entityColumns
.map(c => s"val ${c.name}: Column[${c.exposedType}]")
.mkString(", ")
)}
) extends Product{
def canEqual(that: Any): Boolean = that.isInstanceOf[#name]
def productArity: Int = #{entityColumns.size}
def productElement(n: Int): Any = Seq(#{columnsCSV})(n)
}
/** shape for mapping #{name}Columns to #{name} */
object #{name}Implicits{
implicit object #{name}Shape extends ClassShape(
Seq(#{
entityColumns
.map(_.exposedType)
.map(t => s"implicitly[Shape[ShapeLevel.Flat, Column[$t], $t, Column[$t]]]")
.mkString(", ")
}),
vs => #{name}(#{
entityColumns
.map(_.exposedType)
.zipWithIndex
.map{ case (t,i) => s"vs($i).asInstanceOf[$t]" }
.mkString(", ")
}),
vs => new #{name}Columns(#{
entityColumns
.map(_.exposedType)
.zipWithIndex
.map{ case (t,i) => s"vs($i).asInstanceOf[Column[$t]]" }
.mkString(", ")
})
)
}
import #{name}Implicits.#{name}Shape
A few helpers we put into the Slick code generator:
val columnsCSV = entityColumns.map(_.name).mkString(", ")
val columnsCopy = entityColumns.map(c => colWithType(c)+" = "+c.name).mkString(", ")
val columnNames = entityColumns.map(_.name.toString)
def colWithType(c: Column) = s"${c.name}: ${c.exposedType}"
def colWithTypeAndDefault(c: Column) =
colWithType(c) + colDefault(c).map(" = "+_).getOrElse("")
def indentN(n:Int,code: String): String = code.split("\n").mkString("\n"+List.fill(n)(" ").mkString(""))
I know this may be a bit troublesome to replicate, especially if you are new to Scala. I hope to find the time to get it into the official Slick code generator at some point.

How to model schema.org in Scala?

Schema.org is a markup vocabulary (for the web) that defines a number of types in terms of properties (no methods). I am currently trying to model parts of that schema in Scala as internal model classes to be used in conjunction with a document-oriented database (MongoDB) and a web framework.
As can be seen in the definition of LocalBusiness, schema.org uses multiple inheritance to also include properties from the "Place" type. So my question is: How would you model such a schema in Scala?
I have come up with two solutions so far. The first one uses regular classes to model a single inheritance tree and uses traits to mix in the additional properties.
trait ThingA {
  var name: String = ""
  var url: String = ""
}

trait OrganizationA {
  var email: String = ""
}

trait PlaceA {
  var x: String = ""
  var y: String = ""
}

trait LocalBusinessA {
  var priceRange: String = ""
}

class OrganizationClassA extends ThingA with OrganizationA {}

class LocalBusinessClassA extends OrganizationClassA with PlaceA with LocalBusinessA {}
The second version tries to use case classes. However, since case class inheritance is deprecated, I cannot model the main hierarchy so easily.
trait ThingB {
  val name: String
}

trait OrganizationB {
  val email: String
}

trait PlaceB {
  val x: String
  val y: String
}

trait LocalBusinessB {
  val priceRange: String
}

case class OrganizationClassB(name: String, email: String) extends ThingB with OrganizationB

case class LocalBusinessClassB(name: String, email: String, x: String, y: String, priceRange: String) extends ThingB with OrganizationB with PlaceB with LocalBusinessB
Is there a better way to model this? I could use composition similar to
case class LocalBusinessClassC(val thing:ThingClass, val place: PlaceClass, ...)
but then of course, LocalBusiness cannot be used when a "Place" is expected, for example when I try to render something on Google Maps.
What works best for you depends greatly on how you want to map your objects to the underlying datastore.
Given the need for multiple inheritance, an approach that might be worth considering is to just use traits. This gives you multiple inheritance with the least amount of code duplication and boilerplate.
trait Thing {
  val name: String               // required
  val url: Option[String] = None // reasonable default
}

trait Organization extends Thing {
  val email: Option[String] = None
}

trait Place extends Thing {
  val x: String
  val y: String
}

trait LocalBusiness extends Organization with Place {
  val priceRange: String
}
Note that Organization extends Thing, as does Place, just as in schema.org.
To instantiate them, you create anonymous inner classes that specify the values of all attributes.
object UseIt extends App {
  val home = new Place {
    val name = "Home"
    val x = "-86.586104"
    val y = "34.730369"
  }
  val oz = new Place {
    val name = "Oz"
    val x = "151.206890"
    val y = "-33.873651"
  }
  val paulis = new LocalBusiness {
    val name = "Pauli's"
    override val url = Some("http://www.paulisbarandgrill.com/")
    val x = "-86.713660"
    val y = "34.755092"
    val priceRange = "$$$"
  }
}
If any fields have a reasonable default value, you can specify the default value in the trait.
I originally left fields without a value as empty strings, but it makes more sense to make optional fields of type Option[String], to better indicate that their value is not set. You liked using Option, so I'm using Option here.
The downside of this approach is that the compiler generates an anonymous inner class every place you instantiate one of the traits. This could give you an explosion of .class files. More importantly, though, it means that different instances of the same trait will have different types.
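To make that concrete (my illustration, not part of the original answer): continuing the UseIt example, each new Place { ... } compiles to its own anonymous subclass.
println(home.getClass)                // e.g. class UseIt$$anon$1
println(oz.getClass)                  // e.g. class UseIt$$anon$2
println(home.getClass == oz.getClass) // false: same trait, two different types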
Edit:
In regards to how you would use this to load objects from the database, that depends greatly on how you access your database. If you use an object mapper, you'll want to structure your model objects in the way that the mapper expects them to be structured. If this sort of trick works with your object mapper, I'll be surprised.
If you're writing your own data access layer, then you can simply use a DAO or repository pattern for data access, putting the logic to build the anonymous inner classes in there.
This is just one way to structure these objects. It's not even the best way, but it demonstrates the point.
trait Database {
  // treats objects as simple key/value pairs
  def findObject(id: String): Option[Map[String, String]]
}

class ThingRepo(db: Database) {
  def findThing(id: String): Option[Thing] = {
    // Note that in this way, malformed objects (i.e. missing name) simply
    // return None. Logging or other responses for malformed objects is left
    // as an exercise :-)
    for {
      fields    <- db.findObject(id)  // load object from database
      thingName <- fields.get("name") // extract required field
    } yield {
      new Thing {
        val name = thingName
        override val url = fields.get("url")
      }
    }
  }
}
There's a bit more to it than that (how you identify objects, how you store them in the database, how you wire up the repository, how you'll handle polymorphic queries, etc.), but this should be a good start.
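As a usage sketch (the in-memory Database below is hypothetical, just to exercise ThingRepo):
object InMemoryDb extends Database {
  private val objects = Map(
    "thing-1" -> Map("name" -> "Pauli's", "url" -> "http://www.paulisbarandgrill.com/")
  )
  def findObject(id: String): Option[Map[String, String]] = objects.get(id)
}

object Demo extends App {
  val repo = new ThingRepo(InMemoryDb)
  println(repo.findThing("thing-1").map(_.name)) // Some(Pauli's)
  println(repo.findThing("missing"))             // None
}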