Persisting a recursive data model with SORM - scala

For my project, I would like to make a tree model; let's say it's about files and directories. But files can be in multiple directories at the same time, so it's more like the way you add tags to email in Gmail.
I want to build a model for competences (say Java, Scala, Angular, etc.) and put them in categories. In this case Java and Scala are languages, agile and Scrum are ways of working, Angular is a framework/toolkit, and so forth. But then we want to group things flexibly, i.e. Play, Java and Scala are in a 'backend' category and Angular, jQuery, etc. are in a 'frontend' category.
I figured I would have a competences table like so:
case class Competence(name: String, categories: Option[Category])
and the categories as follows:
case class Category(name: String, parent: Option[Category])
This compiles, but SORM throws an error (seen here in the activator console):
scala> import models.DB
import models.DB
scala> import models.Category
import models.Category
scala> import models.Competence
import models.Competence
scala> val cat1 = new Category ( "A", None )
cat1: models.Category = Category(A,None)
scala> val sav1 = DB.save ( cat1 )
sorm.Instance$ValidationException: Entity 'models.Category' recurses at 'models.Category'
at sorm.Instance$Initialization$$anonfun$2.apply(Instance.scala:216)
at sorm.Instance$Initialization$$anonfun$2.apply(Instance.scala:216)
at scala.Option.map(Option.scala:146)
at sorm.Instance$Initialization.<init>(Instance.scala:216)
at sorm.Instance.<init>(Instance.scala:38)
at models.DB$.<init>(DB.scala:5)
at models.DB$.<clinit>(DB.scala)
... 42 elided
Although I want the beautiful simplicity of SORM, will I need to switch to Slick to implement this? I was under the impression that link tables would be generated implicitly by SORM. Or could I simply work around the problem by making a:
case class Taxonomy(child: Category, parent: Category)
and then do the parsing/formatting work on the JS side? That seems to take away some of SORM's simplicity.
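For reference, a minimal sketch of how those non-recursive entities might be registered (the H2 URL and the entity set are illustrative guesses, not my actual config):

case class Category(name: String)
case class Competence(name: String, category: Option[Category])
case class Taxonomy(child: Category, parent: Category)

import sorm._
// Category no longer references itself; the hierarchy lives in Taxonomy rows,
// so SORM's recursion check is not triggered.
object DB extends Instance(
  entities = Set(Entity[Category](), Entity[Competence](), Entity[Taxonomy]()),
  url = "jdbc:h2:mem:competences")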
To give some idea: what I want is an Ajax-style page where a user can add new competences to a list on the left, and then link/unlink them to whatever category tag in the tree he likes.

I encountered the same problem. I needed to define an interaction between two operands, which can be chained (i.e. recursive), like:
case class InteractionModel(
  val leftOperantId: Int,
  val operation: String,
  val rightOperantId: Int,
  val next: InteractionModel)
My workaround: convert this case class to JSON and persist it as a String; when retrieving it, convert it back from JSON. And since it's a String, do not register it as a SORM entity.
import spray.json._

case class InteractionModel(
  val leftOperantId: Int,
  val operation: String,
  val rightOperantId: Int,
  val next: InteractionModel) extends Jsonable {

  def toJSON: String = {
    val js = this.toJson(InteractionModel.MyJsonProtocol.ImJsonFormat)
    js.compactPrint
  }
}

// define protocol
object InteractionModel {
  def fromJSON(in: String): InteractionModel =
    in.parseJson.convertTo[InteractionModel](InteractionModel.MyJsonProtocol.ImJsonFormat)

  val none = new InteractionModel(-1, "", -1, null) {
    override def toJSON = "{}"
  }

  object MyJsonProtocol extends DefaultJsonProtocol {
    implicit object ImJsonFormat extends RootJsonFormat[InteractionModel] {
      def write(im: InteractionModel) = {
        def recWrite(i: InteractionModel): JsObject = {
          val next = i.next match {
            case null  => JsNull
            case iNext => recWrite(iNext)
          }
          JsObject(
            "leftOperantId" -> JsNumber(i.leftOperantId),
            "operation" -> JsString(i.operation),
            "rightOperantId" -> JsNumber(i.rightOperantId),
            "next" -> next)
        }
        recWrite(im)
      }
      def read(value: JsValue) = {
        def recRead(v: JsValue): InteractionModel = {
          v.asJsObject.getFields("leftOperantId", "operation", "rightOperantId", "next") match {
            case Seq(JsNumber(left), JsString(operation), JsNumber(right), nextJs) =>
              val next = nextJs match {
                case JsNull => null
                case js     => recRead(js)
              }
              InteractionModel(left.toInt, operation, right.toInt, next)
            case s => InteractionModel.none
          }
        }
        recRead(value)
      }
    }
  }
}
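To make the workaround concrete, here is a minimal persistence sketch (the wrapper entity and the H2 URL are my own illustrative assumptions, not from the original code):

// Hypothetical wrapper entity: the chain is stored as a plain JSON string,
// so the recursive type is never registered with SORM.
case class InteractionRecord(label: String, interactionJson: String)

object InteractionDB extends sorm.Instance(
  entities = Set(sorm.Entity[InteractionRecord]()),
  url = "jdbc:h2:mem:interactions")

val chain = InteractionModel(1, "+", 2, InteractionModel(3, "*", 4, null))
val stored = InteractionDB.save(InteractionRecord("calc", chain.toJSON))
val restored = InteractionModel.fromJSON(stored.interactionJson)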


DSL in scala using case classes

My use case has case classes something like:
case class Address(name: String, pincode: String) {
  override def toString = name + "=" + pincode
}
case class Department(name: String) {
  override def toString = name
}
case class emp(address: Address, department: Department)
I want to create a DSL like the one below. Can anyone share links on how to create a DSL, and any suggestions for achieving the following?
emp.withAddress("abc","12222").withDepartment("HR")
Update:
The actual case class may have close to 20 fields. I want to avoid code redundancy.
I created a DSL using reflection so that we don't need to add every field to it.
Disclaimer: this DSL is extremely weakly typed and I did it just for fun. I don't really think this is a good approach in Scala.
scala> create an Employee where "homeAddress" is Address("a", "b") and "department" is Department("c") and that_s it
res0: Employee = Employee(a=b,null,c)
scala> create an Employee where "workAddress" is Address("w", "x") and "homeAddress" is Address("y", "z") and that_s it
res1: Employee = Employee(y=z,w=x,null)
scala> create a Customer where "address" is Address("a", "b") and "age" is 900 and that_s it
res0: Customer = Customer(a=b,900)
The last example is the equivalent of writing:
create.a(Customer).where("address").is(Address("a", "b")).and("age").is(900).and(that_s).it
A way of writing DSLs in Scala while avoiding parentheses and dots is to follow this pattern:
object.method(parameter).method(parameter)...
Here is the source:
// DSL
object create {
  def an(t: Employee.type) = new ModelDSL(Employee(null, null, null))
  def a(t: Customer.type) = new ModelDSL(Customer(null, 0))
}

object that_s

class ModelDSL[T](model: T) {
  def where(field: String): ValueDSL[ModelDSL2[T], Any] = new ValueDSL(value => {
    // reflectively set the named field on the underlying model
    val f = model.getClass.getDeclaredField(field)
    f.setAccessible(true)
    f.set(model, value)
    new ModelDSL2[T](model)
  })
  def and(t: that_s.type) = new { def it = model }
}

class ModelDSL2[T](model: T) {
  def and(field: String) = new ModelDSL(model).where(field)
  def and(t: that_s.type) = new { def it = model }
}

class ValueDSL[T, V](callback: V => T) {
  def is(value: V): T = callback(value)
}

// Models
case class Employee(homeAddress: Address, workAddress: Address, department: Department)
case class Customer(address: Address, age: Int)
case class Address(name: String, pincode: String) {
  override def toString = name + "=" + pincode
}
case class Department(name: String) {
  override def toString = name
}
I really don't think you need the builder pattern in Scala. Just give your case class reasonable defaults and use the copy method, i.e.:
employee.copy(address = Address("abc", "12222"),
              department = Department("HR"))
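For instance, with reasonable defaults on the case class itself (a minimal sketch; the default values are illustrative):

case class emp(
  address: Address = Address("", ""),
  department: Department = Department(""))

// start from the defaults and override only what you need
val employee = emp().copy(
  address = Address("abc", "12222"),
  department = Department("HR"))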
You could also use an immutable builder:
case class EmployeeBuilder(
  address: Address = Address("", ""),
  department: Department = Department("")) {
  def build = emp(address, department)
  def withAddress(address: Address) = copy(address = address)
  def withDepartment(department: Department) = copy(department = department)
}
object EmployeeBuilder {
  def withAddress(address: Address) = EmployeeBuilder().copy(address = address)
  def withDepartment(department: Department) = EmployeeBuilder().copy(department = department)
}
You could do
object emp {
  def builder = new Builder(None, None)

  case class Builder(address: Option[Address], department: Option[Department]) {
    def withDepartment(name: String) = {
      val dept = Department(name)
      this.copy(department = Some(dept))
    }
    def withAddress(name: String, pincode: String) = {
      val addr = Address(name, pincode)
      this.copy(address = Some(addr))
    }
    def build = (address, department) match {
      case (Some(a), Some(d)) => new emp(a, d)
      case (None, _) => throw new IllegalStateException("Address not provided")
      case _ => throw new IllegalStateException("Department not provided")
    }
  }
}
and use it as emp.builder.withAddress("abc", "12222").withDepartment("HR").build (build is declared without parentheses, so it is called without them).
You don't need optional fields, copy, or the builder pattern (exactly) if you are willing to have the builder always take the arguments in a particular order:
case class emp(address: Address, department: Department, id: Long)

object emp {
  def withAddress(name: String, pincode: String): WithDepartment =
    new WithDepartment(Address(name, pincode))

  final class WithDepartment(private val address: Address) extends AnyVal {
    def withDepartment(name: String): WithId =
      new WithId(address, Department(name))
  }

  final class WithId(address: Address, department: Department) {
    def withId(id: Long): emp = emp(address, department, id)
  }
}
emp.withAddress("abc","12222").withDepartment("HR").withId(1)
The idea here is that each emp parameter gets its own class which provides a method to get you to the next class, until the final one gives you an emp object. It's like currying but at the type level. As you can see I've added an extra parameter just as an example of how to extend the pattern past the first two parameters.
The nice thing about this approach is that, even if you're part-way through the build, the type you have so far will guide you to the next step. So if you have a WithDepartment so far, you know that the next argument you need to supply is a department name.
If you want to avoid modifying the original classes, you can use an implicit class, e.g.:
implicit class EmpExtensions(emp: emp) {
  def withAddress(name: String, pincode: String) {
    // code omitted
  }
  // code omitted
}
then import EmpExtensions wherever you need these methods.
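For illustration, one plausible completion that returns updated copies (this is an assumption on my part, not the author's omitted code):

implicit class EmpExtensions(e: emp) {
  // each method returns a modified copy instead of mutating the original
  def withAddress(name: String, pincode: String): emp =
    e.copy(address = Address(name, pincode))
  def withDepartment(name: String): emp =
    e.copy(department = Department(name))
}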

Multiple slick `column`s for the same DB column break projection

I'm new to Slick, thus I'm not sure whether the problem is caused by incorrect usage of implicits or whether Slick doesn't allow what I'm trying to do.
In short, I use the slick-pg extension for JSONB support in Postgres. I also use spray-json to deserialize JSONB fields into case classes.
In order to automagically convert columns into objects, I wrote the generic implicit JsonColumnType that you can see below. It allows any case class for which I have defined a JSON formatter to be converted to a jsonb field.
On the other hand, I want a JsValue-typed alias for the same column so that I can use JSONB operators.
import com.github.tminglei.slickpg._
import com.github.tminglei.slickpg.json.PgJsonExtensions
import org.bson.types.ObjectId
import slick.ast.BaseTypedType
import slick.jdbc.JdbcType
import spray.json.{JsValue, RootJsonWriter, RootJsonReader}
import scala.reflect.ClassTag

trait MyPostgresDriver extends ExPostgresDriver
  with PgArraySupport with PgDate2Support with PgRangeSupport with PgHStoreSupport
  with PgSprayJsonSupport with PgJsonExtensions with PgSearchSupport
  with PgNetSupport with PgLTreeSupport {

  override def pgjson = "jsonb" // jsonb support is in postgres 9.4.0 onward; for 9.3.x use "json"
  override val api = MyAPI

  private val plainAPI = new API with SprayJsonPlainImplicits

  object MyAPI extends API with DateTimeImplicits with JsonImplicits with NetImplicits
    with LTreeImplicits with RangeImplicits with HStoreImplicits
    with SearchImplicits with SearchAssistants { // with ArrayImplicits

    implicit val ObjectIdColumnType = MappedColumnType.base[ObjectId, Array[Byte]](
      { obj => obj.toByteArray }, { arr => new ObjectId(arr) }
    )

    implicit def JsonColumnType[T: ClassTag](implicit reader: RootJsonReader[T], writer: RootJsonWriter[T]) = {
      val columnType: JdbcType[T] with BaseTypedType[T] =
        MappedColumnType.base[T, JsValue]({ obj => writer.write(obj) }, { json => reader.read(json) })
      columnType
    }
  }
}

object MyPostgresDriver extends MyPostgresDriver
object MyPostgresDriver extends MyPostgresDriver
Here is how my table is defined (minimized version):
case class Article(id: ObjectId, ids: Ids)
case class Ids(doi: Option[String], pmid: Option[Long])

class ArticleRow(tag: Tag) extends Table[Article](tag, "articles") {
  def id = column[ObjectId]("id", O.PrimaryKey)
  def idsJson = column[JsValue]("ext_ids")
  def ids = column[Ids]("ext_ids")

  private val fromTuple: ((ObjectId, Ids)) => Article = {
    case (id, ids) => Article(id, ids)
  }
  private val toTuple = (v: Article) => Option((v.id, v.ids))

  def * = ProvenShape.proveShapeOf((id, ids) <> (fromTuple, toTuple))(MappedProjection.mappedProjectionShape)
}

private val articles = TableQuery[ArticleRow]
private val articles = TableQuery[ArticleRow]
Finally, I have a function that looks up articles by the value of a JSON field:
def getArticleByDoi(doi: String): Future[Article] = {
  val query = (for (a <- articles if (a.idsJson +>> "doi").asColumnOf[String] === doi) yield a).take(1).result
  slickDb.run(query).map { items =>
    items.headOption.getOrElse(throw new RuntimeException(s"Article with doi $doi is not found"))
  }
}
Sadly, I get the following exception at runtime:
java.lang.ClassCastException: spray.json.JsObject cannot be cast to server.models.db.Ids
The problem is in SpecializedJdbcResultConverter.base, where ti.getValue is called with the wrong ti. It should be slick.driver.JdbcTypesComponent$MappedJdbcType, but instead it's com.github.tminglei.slickpg.utils.PgCommonJdbcTypes$GenericJdbcType. As a result, the wrong type is passed into my tuple converter.
What makes Slick choose a different type for the column, even though the projection is explicitly defined in the table row class?
Sample project that demonstrates the issue is here.
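One diagnostic worth trying (an assumption on my part, not a confirmed fix): pass the mapped column type to column explicitly instead of relying on implicit search, which may be resolving to slick-pg's generic jsonb type:

// Hypothetical: force the MappedJdbcType for the Ids column; this requires a
// RootJsonReader[Ids] and RootJsonWriter[Ids] in scope.
def ids = column[Ids]("ext_ids")(MyPostgresDriver.MyAPI.JsonColumnType[Ids])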

How do you write a json4s CustomSerializer that handles collections

I have a class that I am trying to deserialize using the json4s CustomSerializer functionality. I need to do this due to the inability of json4s to deserialize mutable collections.
This is the basic structure of the class I want to deserialize (don't worry about why the class is structured like this):
import scala.collection.mutable.{HashMap, SortedSet}

case class FeatureValue(timestamp: Double)

object FeatureValue {
  implicit def ordering[F <: FeatureValue] = new Ordering[F] {
    override def compare(a: F, b: F): Int = {
      a.timestamp.compareTo(b.timestamp)
    }
  }
}

class Point {
  val features = new HashMap[String, SortedSet[FeatureValue]]

  def add(name: String, value: FeatureValue): Unit = {
    val oldValue: SortedSet[FeatureValue] = features.getOrElseUpdate(name, SortedSet[FeatureValue]())
    oldValue += value
  }
}
Json4s serializes this just fine. A serialized instance might look like the following:
{"features":
{
"CODE0":[{"timestamp":4.8828914447482E8}],
"CODE1":[{"timestamp":4.8828914541333E8}],
"CODE2":[{"timestamp":4.8828915127325E8},{"timestamp":4.8828910097466E8}]
}
}
I've tried writing a custom deserializer, but I don't know how to deal with the list tails. In a normal matcher you can just call your own function recursively, but in this case the function is anonymous and being called through the json4s API. I cannot find any examples that deal with this and I can't figure it out.
Currently I can match only a single hash key, and a single FeatureValue instance in its value. Here is the CustomSerializer as it stands:
import org.json4s.{FieldSerializer, DefaultFormats, Extraction, CustomSerializer}
import org.json4s.JsonAST._

class PointSerializer extends CustomSerializer[Point](format => (
  {
    case JObject(JField("features", JObject(Nil)) :: Nil) => new Point
    case JObject(List(("features", JObject(List(
      (feature: String, JArray(List(JObject(List(("timestamp", JDouble(ts)))))))))
    ))) => {
      val point = new Point
      point.add(feature, FeatureValue(ts))
      point
    }
  },
  {
    // don't need to customize this, it works fine
    case x: Point => Extraction.decompose(x)(DefaultFormats + FieldSerializer[Point]())
  }
))
When I try to change this to the ::-separated list format, I get compiler errors, and even if it compiled, I'm not sure what I would do with the tail.
You can capture the list of JSON features in your pattern match and then map over that list to get the FeatureValues and their codes.
class PointSerializer extends CustomSerializer[Point](format => (
  {
    case JObject(List(("features", JObject(featuresJson)))) =>
      val features = featuresJson.flatMap {
        case (code: String, JArray(timestamps)) =>
          timestamps.map { case JObject(List(("timestamp", JDouble(ts)))) =>
            code -> FeatureValue(ts)
          }
      }
      val point = new Point
      features.foreach((point.add _).tupled)
      point
  }, {
    case x: Point => Extraction.decompose(x)(DefaultFormats + FieldSerializer[Point]())
  }
))
This deserializes your JSON as follows:
import org.json4s.NoTypeHints
import org.json4s.native.Serialization
import org.json4s.native.Serialization.{read, write}

implicit val formats = Serialization.formats(NoTypeHints) + new PointSerializer

val json = """
{"features":
  {
    "CODE0":[{"timestamp":4.8828914447482E8}],
    "CODE1":[{"timestamp":4.8828914541333E8}],
    "CODE2":[{"timestamp":4.8828915127325E8},{"timestamp":4.8828910097466E8}]
  }
}
"""

val point0 = read[Point]("""{"features": {}}""")
val point1 = read[Point](json)

point0.features // Map()
point1.features
// Map(
//   CODE0 -> TreeSet(FeatureValue(4.8828914447482E8)),
//   CODE2 -> TreeSet(FeatureValue(4.8828910097466E8), FeatureValue(4.8828915127325E8)),
//   CODE1 -> TreeSet(FeatureValue(4.8828914541333E8))
// )
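A quick round-trip check (my own addition): write goes through the serializer's decompose branch shown above, and the result can be read straight back.

// serialize with the custom serializer, then parse it again
val roundTripped = read[Point](write(point1))
roundTripped.features == point1.features // true: same codes and timestamps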

required: spray.httpx.marshalling.ToResponseMarshallable Error

Hey, I'm pretty new to Spray and ReactiveMongo.
I'm trying to return a list of results as JSON, but I'm having some issues with converting the results to a list of JSON.
This is my model:
import reactivemongo.bson.BSONDocumentReader
import reactivemongo.bson.BSONObjectID
import reactivemongo.bson.Macros

case class Post(id: BSONObjectID, likes: Long, message: String, object_id: String, shares: Long)

object Post {
  implicit val reader: BSONDocumentReader[Post] = Macros.reader[Post]
}
This is the Mongo method:
def getAll(): Future[List[Post]] = {
  val query = BSONDocument(
    "likes" -> BSONDocument(
      "$gt" -> 27))
  collection.find(query).cursor[Post].collect[List]()
}
and this is the route:
val route1 =
  path("posts") {
    val res: Future[List[Post]] = mongoService.getAll()
    onComplete(res) {
      case Success(value) => complete(value)
      case Failure(ex) => complete(ex.getMessage)
    }
  }
And the error:
type mismatch; found : List[com.example.model.Post] required: spray.httpx.marshalling.ToResponseMarshallable
thanks,
miki
You'll need to define how a Post will be serialized, which you can do via a spray-json protocol (see the docs for more detailed information). It's quite easy to do, but before that, you'll also need to define a format for the BSONObjectID type, since there's no built-in support for that type in spray-json (alternatively, if object_id is a string representation of the BSONObjectID, consider removing the id property from your Post class or changing it to a String):
// necessary imports
import spray.json._
import spray.httpx.SprayJsonSupport._
import scala.util.Success // BSONObjectID.parse returns a Try

implicit object BSONObjectIdProtocol extends RootJsonFormat[BSONObjectID] {
  override def write(obj: BSONObjectID): JsValue = JsString(obj.stringify)
  override def read(json: JsValue): BSONObjectID = json match {
    case JsString(id) => BSONObjectID.parse(id) match {
      case Success(validId) => validId
      case _ => deserializationError("Invalid BSON Object Id")
    }
    case _ => deserializationError("BSON Object Id expected")
  }
}
Now, we're able to define the actual protocol for the Post class:
object PostJsonProtocol extends DefaultJsonProtocol {
  implicit val format = jsonFormat5(Post.apply)
}
Furthermore, we'll also need to make sure that we have the defined format in scope:
import PostJsonProtocol._
Now, everything will compile as expected.
One more thing: have a look at the docs about the DSL structure of spray. Your mongoService.getAll() isn't within a complete block, which might not reflect your intentions. It isn't an issue yet, but it probably will be once your routes get more complex. To fix it, simply put the future into the onComplete call or make it lazy:
val route1 =
  path("posts") {
    onComplete(mongoService.getAll()) {
      case Success(value) => complete(value)
      case Failure(ex) => complete(ex.getMessage)
    }
  }

Scala Reflection to update a case class val

I'm using Scala and Slick here, and I have a BaseRepository which is responsible for the basic CRUD of my classes.
As a design decision, we have updatedTime and createdTime columns that are all handled by the application, not by triggers in the database. Both of these fields are Joda DateTime instances.
Those fields are defined in two traits called HasUpdatedAt and HasCreatedAt, used by the tables:
trait HasCreatedAt {
  val createdAt: Option[DateTime]
}

case class User(name: String, createdAt: Option[DateTime] = None) extends HasCreatedAt
I would like to know how I can use reflection to call the User copy method, to update the createdAt value during the database insert.
Edit after #vptron and #kevin-wright comments
I have a repo like this:
trait BaseRepo[ID, R] {
  def insert(r: R)(implicit session: Session): ID
}
I want to implement insert just once, and have createdAt updated there; that's why I'm not calling copy directly, since otherwise I would need to implement it everywhere I use the createdAt column.
This question was answered here to help others with this kind of problem.
I ended up using this code to execute the copy method of my case classes via Scala reflection.
import reflect._
import scala.reflect.runtime.universe._
import scala.reflect.runtime._

class Empty

val mirror = universe.runtimeMirror(getClass.getClassLoader)

// paramName is the parameter whose value I want to replace
// paramValue is the new parameter value
def updateParam[R: ClassTag](r: R, paramName: String, paramValue: Any): R = {
  val instanceMirror = mirror.reflect(r)
  val decl = instanceMirror.symbol.asType.toType
  val members = decl.members.map(method => transformMethod(method, paramName, paramValue, instanceMirror)).filter {
    case _: Empty => false
    case _ => true
  }.toArray.reverse
  val copyMethod = decl.declaration(newTermName("copy")).asMethod
  val copyMethodInstance = instanceMirror.reflectMethod(copyMethod)
  copyMethodInstance(members: _*).asInstanceOf[R]
}

def transformMethod(method: Symbol, paramName: String, paramValue: Any, instanceMirror: InstanceMirror) = {
  val term = method.asTerm
  if (term.isAccessor) {
    if (term.name.toString == paramName) {
      paramValue
    } else instanceMirror.reflectField(term).get
  } else new Empty
}
With this I can execute the copy method of my case classes, replacing a given field's value.
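A quick usage sketch (my own illustration; it assumes the definitions above plus a Joda-Time import):

import org.joda.time.DateTime

case class User(name: String, createdAt: Option[DateTime] = None)

val u = User("alice")
// returns a new User with createdAt replaced and name preserved
val stamped = updateParam(u, "createdAt", Some(DateTime.now()))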
As the comments have said, don't change a val using reflection. Would you do that with a Java final variable? It makes your code do really unexpected things. If you need to change the value of a val, don't use a val, use a var.
trait HasCreatedAt {
  var createdAt: Option[DateTime] = None
}

case class User(name: String) extends HasCreatedAt
Although having a var in a case class may bring some unexpected behavior, e.g. copy would not work as expected. This may lead to preferring not to use a case class for this.
Another approach would be to make the insert method return an updated copy of the case class, e.g.:
// F-bounded type parameter: copy returns a new instance, so a this.type
// return type would not typecheck here.
trait HasCreatedAt[R <: HasCreatedAt[R]] {
  val createdAt: Option[DateTime]
  def withCreatedAt(dt: DateTime): R
}

case class User(name: String, createdAt: Option[DateTime] = None) extends HasCreatedAt[User] {
  def withCreatedAt(dt: DateTime) = copy(createdAt = Some(dt))
}

trait BaseRepo[ID, R <: HasCreatedAt[R]] {
  def insert(r: R)(implicit session: Session): (ID, R) = {
    val id = ??? // insert into db
    (id, r.withCreatedAt(??? /*now*/))
  }
}
EDIT:
Since I didn't answer your original question, and you may know what you are doing, here is a way to do it.
import scala.reflect.runtime.universe._

val user = User("aaa", None)
val m = runtimeMirror(getClass.getClassLoader)
val im = m.reflect(user)
val decl = im.symbol.asType.toType.declaration("createdAt": TermName).asTerm
val fm = im.reflectField(decl)
fm.set(??? /*now*/)
But again, please don't do this. Read this Stack Overflow answer to get some insight into what it can cause (vals map to final fields).