Cannot access field with SORM - Scala

In Scala, I have the following case class:
case class Page(url: String)
object Page {
implicit val personFormat = Json.format[Page]
}
It is registered with the SORM database instance like this:
object Db extends Instance(entities = Seq(Entity[Page]()), url="jdbc:h2:mem:test")
Afterwards, I retrieve one instance from the database like this:
val page = Db.query[Page].whereEqual("id", pageId).fetch
val content: String = new URL(page.url).getContent().toString
However, on the last line I get the following error:
value url is not a member of Stream[models.Page with sorm.Persisted]
Why is url not a member?
I created a database representation for Page. Shouldn't that include all its fields?

fetch returns a Stream[Page with Persisted], not a single Page, so url is not a member of the result: you are calling it on the stream rather than on an element. Either work with the fetched collection or fetch a single row. A working setup looks like this:
package models
import sorm._
import play.api.libs.json.{JsValue, Writes, Json}
case class Page(url: String)
object Page {
implicit val writes = Json.writes[Page]
implicit val reads = Json.reads[Page]
}
object DB extends Instance(Set(Entity[Page]()), "jdbc:h2:mem:test")
def pages = Action {
val pages = DB.query[Page].fetch()
Ok(Json.toJson(pages))
}
def addPage = Action(parse.json) { request =>
val page = DB.save(request.body.as[Page])
Ok(Json.toJson(page))
}
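For the original snippet, which needs exactly one row, SORM's fetchOne returns an Option instead of a Stream. A minimal sketch (pageContent is a made-up helper name; pageId is assumed to be a Long):

def pageContent(pageId: Long): Option[String] =
  // fetchOne() returns Option[Page with sorm.Persisted] rather than a Stream
  DB.query[Page].whereEqual("id", pageId).fetchOne()
    .map(page => new java.net.URL(page.url).getContent().toString)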

Related

Persisting a recursive data model with SORM

For my project, I would like to make a tree model; let's say it's about files and directories. But files can be in multiple directories at the same time, so it's more like the way you tag email in Gmail.
I want to build a model for competences (say Java, Scala, Angular, etc.) and put them in categories. In this case Java and Scala are languages, agile and Scrum are ways of working, Angular is a framework/toolkit, and so forth. But then we want to group things flexibly, i.e. Play, Java and Scala go in a 'backend' category while Angular, jQuery, etc. go in a 'frontend' category.
I figured I would have a competences table like so:
case class Competence (name: String, categories: Option[Category])
and the categories as follows:
case class Category ( name: String, parent: Option[Category] )
This will compile, but SORM will generate an error (from the activator console):
scala> import models.DB
import models.DB
scala> import models.Category
import models.Category
scala> import models.Competence
import models.Competence
scala> val cat1 = new Category ( "A", None )
cat1: models.Category = Category(A,None)
scala> val sav1 = DB.save ( cat1 )
sorm.Instance$ValidationException: Entity 'models.Category' recurses at 'models.Category'
at sorm.Instance$Initialization$$anonfun$2.apply(Instance.scala:216)
at sorm.Instance$Initialization$$anonfun$2.apply(Instance.scala:216)
at scala.Option.map(Option.scala:146)
at sorm.Instance$Initialization.<init>(Instance.scala:216)
at sorm.Instance.<init>(Instance.scala:38)
at models.DB$.<init>(DB.scala:5)
at models.DB$.<clinit>(DB.scala)
... 42 elided
Although I want the beautiful simplicity of SORM, will I need to switch to Slick to implement this? I had the idea that SORM would generate link tables implicitly. Or could I simply work around the problem by making a:
case class Taxonomy ( child: Category, parent: Category )
and then doing the parsing/formatting work on the JS side? That seems to take away some of the simplicity of using SORM.
To give some idea: I want to make an Ajax-style page where a user can add new competences to a list on the left, and then link/unlink them to whatever category tag in the tree he likes.
I ran into the same problem. I needed to define an interaction between two operands, which can be chained (recursive), like:
case class InteractionModel(
val leftOperantId: Int,
val operation: String ,
val rightOperantId: Int,
val next: InteractionModel)
My workaround: turn the case class into JSON and persist it as a String; when retrieving it, convert it back from JSON. And since it is a String, do not register it as a SORM entity.
import spray.json._
case class InteractionModel(
val leftOperantId: Int,
val operation: String ,
val rightOperantId: Int,
val next: InteractionModel) extends Jsonable {
def toJSON: String = {
val js = this.toJson(InteractionModel.MyJsonProtocol.ImJsonFormat)
js.compactPrint
}
}
//define protocol
object InteractionModel {
def fromJSON(in: String): InteractionModel = {
in.parseJson.convertTo[InteractionModel](InteractionModel.MyJsonProtocol.ImJsonFormat)
}
val none = new InteractionModel((-1), "", (-1), null) {
override def toJSON = "{}"
}
object MyJsonProtocol extends DefaultJsonProtocol {
implicit object ImJsonFormat extends RootJsonFormat[InteractionModel] {
def write(im: InteractionModel) = {
def recWrite(i: InteractionModel): JsObject = {
val next = i.next match {
case null => JsNull
case iNext => recWrite(iNext)
}
JsObject(
"leftOperantId" -> JsNumber(i.leftOperantId),
"operation" -> JsString(i.operation),
"rightOperantId" -> JsNumber(i.rightOperantId),
"next" -> next)
}
recWrite(im)
}
def read(value: JsValue) = {
def recRead(v: JsValue): InteractionModel = {
v.asJsObject.getFields("leftOperantId", "operation", "rightOperantId", "next") match {
case Seq(JsNumber(left), JsString(operation), JsNumber(right), nextJs) =>
val next = nextJs match {
case JsNull => null
case js => recRead(js)
}
InteractionModel(left.toInt, operation, right.toInt, next)
case s => InteractionModel.none
}
}
recRead(value)
}
}
}
}
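An alternative that avoids both the recursion error and the extra JSON layer is to break the cycle by storing the parent's SORM-assigned id instead of the parent value itself. This is only a sketch under that assumption; the field name parentId and the helper parentOf are my own:

import sorm._

// Reference the parent by id, so the entity graph is no longer recursive:
case class Category(name: String, parentId: Option[Long])
case class Competence(name: String, category: Option[Category])

object DB extends Instance(
  entities = Set(Entity[Category](), Entity[Competence]()),
  url = "jdbc:h2:mem:test"
)

// Resolving a parent becomes an explicit query:
def parentOf(c: Category): Option[Category with Persisted] =
  c.parentId.flatMap(id => DB.query[Category].whereEqual("id", id).fetchOne())

The price is an extra query per level when walking the tree, but Category stays a plain SORM entity.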

spray-json Cannot find JsonWriter or JsonFormat type class for Class

I still get the same error even though I have defined the marshaller (and imported it); it appears that the case class's format is not in scope when the function is polymorphic, and this throws Cannot find JsonWriter or JsonFormat type class for UserDbEntry. Is there a reason why spray-json cannot find the implicit marshaller for the case class, even when it is defined? Is the case class in scope here? Link to marshaller
import spray.json._
import queue.MedusaJsonProtocol._
object MysqlDb {
...
}
case class UserDbEntry(
id: Int,
username: String,
countryId: Int,
created: LocalDateTime
)
trait MysqlDb {
implicit lazy val pool = MysqlDb.pool
}
trait HydraMapperT extends MysqlDb {
val FetchAllSql: String
def fetchAll(currentDate: String): Future[List[HydraDbRow]]
def getJson[T](row: T): String
}
object UserHydraDbMapper extends HydraMapperT {
override val FetchAllSql = "SELECT * FROM user WHERE created >= ?"
override def fetchAll(currentDate: String): Future[List[UserDbEntry]] = {
pool.sendPreparedStatement(FetchAllSql, Array(currentDate)).map { queryResult =>
queryResult.rows match {
case Some(rows) =>
rows.toList map (x => rowToModel(x))
case None => List()
}
}
}
override def getJson[UserDbEntry](row: UserDbEntry): String = {
HydraQueueMessage(
tableType = HydraTableName.UserTable,
payload = row.toJson.toString()
).toJson.toString()
}
private def rowToModel(row: RowData): UserDbEntry = {
UserDbEntry (
id = row("id").asInstanceOf[Int],
username = row("username").asInstanceOf[String],
countryId = row("country_id").asInstanceOf[Int],
created = row("created").asInstanceOf[LocalDateTime]
)
}
}
The error is raised at payload = row.toJson.toString(): Can't find marshaller for UserDbEntry
The UserDbEntry in getJson[UserDbEntry] is a type parameter that merely shadows your case class, so there is no JSON marshaller in scope for it. For the real case class, define a spray-json format (you will also need a format for LocalDateTime):
implicit val userDbEntryFormat = jsonFormat4(UserDbEntry)
I'm not sure how you expected row.toJson to work here: whatever macro-generated format exists for the real UserDbEntry is clearly not in scope for the shadowing type parameter.
Edit
Now that I see your Gist, it looks like you have a package dependency problem. As designed, it'll be circular. You have defined the JSON marshaller in package com.at.medusa.core.queue, which imports UserDbEntry, which depends on package com.at.medusa.core.queue for marshalling.
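Separately from the packaging issue, it may help to spell out the shadowing point with a signature that says what was meant. A sketch, assuming spray-json formats exist for the concrete types passed in:

import spray.json._

// Require a JsonWriter for T instead of shadowing a concrete type name:
def getJson[T: JsonWriter](row: T): String =
  row.toJson.compactPrint

With that bound, getJson(someUserDbEntry) compiles only when an implicit JsonWriter[UserDbEntry] is in scope, which makes the original error much easier to trace.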

Multiple slick `column`s for the same DB column break projection

I'm new to Slick, thus I'm not sure whether the problem is caused by incorrect usage of implicits or whether Slick doesn't allow what I'm trying to do.
In short, I use the slick-pg extension for JSONB support in Postgres. I also use spray-json to deserialize JSONB fields into case classes.
In order to automagically convert columns into objects, I wrote the generic implicit JsonColumnType that you can see below. It allows any case class with a defined JSON formatter to be mapped to a jsonb field.
On the other hand, I want to have a JsValue alias for the same column so that I can use the JSONB operators.
import com.github.tminglei.slickpg._
import com.github.tminglei.slickpg.json.PgJsonExtensions
import org.bson.types.ObjectId
import slick.ast.BaseTypedType
import slick.jdbc.JdbcType
import spray.json.{JsValue, RootJsonWriter, RootJsonReader}
import scala.reflect.ClassTag
trait MyPostgresDriver extends ExPostgresDriver with PgArraySupport with PgDate2Support with PgRangeSupport with PgHStoreSupport with PgSprayJsonSupport with PgJsonExtensions with PgSearchSupport with PgNetSupport with PgLTreeSupport {
override def pgjson = "jsonb" // jsonb support is in postgres 9.4.0 onward; for 9.3.x use "json"
override val api = MyAPI
private val plainAPI = new API with SprayJsonPlainImplicits
object MyAPI extends API with DateTimeImplicits with JsonImplicits with NetImplicits with LTreeImplicits with RangeImplicits with HStoreImplicits with SearchImplicits with SearchAssistants { //with ArrayImplicits
implicit val ObjectIdColumnType = MappedColumnType.base[ObjectId, Array[Byte]](
{ obj => obj.toByteArray }, { arr => new ObjectId(arr) }
)
implicit def JsonColumnType[T: ClassTag](implicit reader: RootJsonReader[T], writer: RootJsonWriter[T]) = {
val columnType: JdbcType[T] with BaseTypedType[T] = MappedColumnType.base[T, JsValue]({ obj => writer.write(obj) }, { json => reader.read(json) })
columnType
}
}
}
object MyPostgresDriver extends MyPostgresDriver
Here is how my table is defined (minimized version):
case class Article(id : ObjectId, ids : Ids)
case class Ids(doi: Option[String], pmid: Option[Long])
class ArticleRow(tag: Tag) extends Table[Article](tag, "articles") {
def id = column[ObjectId]("id", O.PrimaryKey)
def idsJson = column[JsValue]("ext_ids")
def ids = column[Ids]("ext_ids")
private val fromTuple: ((ObjectId, Ids)) => Article = {
case (id, ids) => Article(id, ids)
}
private val toTuple = (v: Article) => Option((v.id, v.ids))
def * = ProvenShape.proveShapeOf((id, ids) <> (fromTuple, toTuple))(MappedProjection.mappedProjectionShape)
}
private val articles = TableQuery[ArticleRow]
Finally, I have a function that looks up articles by the value of a JSON field:
def getArticleByDoi(doi : String): Future[Article] = {
val query = (for (a <- articles if (a.idsJson +>> "doi").asColumnOf[String] === doi) yield a).take(1).result
slickDb.run(query).map { items =>
items.headOption.getOrElse(throw new RuntimeException(s"Article with doi $doi is not found"))
}
}
Sadly, I get the following exception at runtime:
java.lang.ClassCastException: spray.json.JsObject cannot be cast to server.models.db.Ids
The problem is in SpecializedJdbcResultConverter.base, where ti.getValue is called with the wrong ti. It should be slick.driver.JdbcTypesComponent$MappedJdbcType, but instead it's com.github.tminglei.slickpg.utils.PgCommonJdbcTypes$GenericJdbcType. As a result, the wrong type is passed into my tuple converter.
What makes Slick choose a different type for the column even though the projection is explicitly defined in the table row class?
Sample project that demonstrates the issue is here.
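For what it's worth, one way to sidestep the clash is to define the physical column only once, as JsValue (so the JSONB operators keep working), and to convert to Ids inside the projection. This is a sketch of that idea, not a verified fix for the sample project, and it assumes a spray-json RootJsonFormat[Ids] is in scope:

import org.bson.types.ObjectId
import spray.json._
import MyPostgresDriver.api._

// Sketch: one column definition per physical column; the JsValue <-> Ids
// conversion happens in the projection instead of in a second column.
class ArticleRow(tag: Tag) extends Table[Article](tag, "articles") {
  def id = column[ObjectId]("id", O.PrimaryKey)
  def idsJson = column[JsValue]("ext_ids") // the only definition of ext_ids

  private val fromTuple: ((ObjectId, JsValue)) => Article = {
    case (id, json) => Article(id, json.convertTo[Ids])
  }
  private val toTuple = (a: Article) => Option((a.id, a.ids.toJson))

  def * = (id, idsJson) <> (fromTuple, toTuple)
}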

Parsing a simple array with Spray-json

I'm trying (and failing) to get my head around how spray-json converts JSON feeds into objects. If I have a simple key -> value JSON feed then it seems to work OK, but the data I want to read comes in a list like this:
[{
"name": "John",
"age": "30"
},
{
"name": "Tom",
"age": "25"
}]
And my code looks like this:
package jsontest
import spray.json._
import DefaultJsonProtocol._
object JsonFun {
case class Person(name: String, age: String)
case class FriendList(items: List[Person])
object FriendsProtocol extends DefaultJsonProtocol {
implicit val personFormat = jsonFormat2(Person)
implicit val friendListFormat = jsonFormat1(FriendList)
}
def main(args: Array[String]): Unit = {
import FriendsProtocol._
val input = scala.io.Source.fromFile("test.json")("UTF-8").mkString.parseJson
val friendList = input.convertTo[FriendList]
println(friendList)
}
}
If I change my test file so it has just a single person, not in an array, and run val friendList = input.convertTo[Person], then it works and everything parses. But as soon as I try to parse an array, it fails with the error Object expected in field 'items'.
Can anyone point me in the direction of what I'm doing wrong?
Well, as is often the way, immediately after posting something to Stack Overflow having spent hours trying to get it working, I've managed to get this to work.
The problem was that jsonFormat1(FriendList) expects a JSON object with an items field, while the input is a bare array. The correct implementation of FriendsProtocol reads the array directly:
object FriendsProtocol extends DefaultJsonProtocol {
implicit val personFormat = jsonFormat2(Person)
implicit object friendListJsonFormat extends RootJsonFormat[FriendList] {
def read(value: JsValue) = FriendList(value.convertTo[List[Person]])
def write(f: FriendList) = ???
}
}
Telling Spray how to read / write (just read in my case) the list object is enough to get it working.
Hope that helps someone else!
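If the wrapper class isn't actually needed, note that spray-json's DefaultJsonProtocol already provides formats for standard collections, so a top-level JSON array converts straight to a List. A minimal self-contained sketch:

import spray.json._
import DefaultJsonProtocol._

object ArraySketch extends App {
  case class Person(name: String, age: String)
  implicit val personFormat: RootJsonFormat[Person] = jsonFormat2(Person)

  // A top-level JSON array converts directly to a List[Person]:
  val friends = """[{"name":"John","age":"30"},{"name":"Tom","age":"25"}]"""
    .parseJson.convertTo[List[Person]]

  friends.foreach(p => println(p.name)) // John, Tom
}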
To make the friend array easier to use, extend the IndexedSeq[Person] trait by implementing the appropriate apply and length methods. This allows standard Scala collections API methods like map, filter and sortBy to be used directly on the FriendArray instance itself, without having to access the underlying Array[Person] value that it wraps.
case class Person(name: String, age: String)
// this case class allows special sequence trait in FriendArray class
// this will allow you to use .map .filter etc on FriendArray
case class FriendArray(items: Array[Person]) extends IndexedSeq[Person] {
def apply(index: Int) = items(index)
def length = items.length
}
object FriendsProtocol extends DefaultJsonProtocol {
implicit val personFormat = jsonFormat2(Person)
implicit object friendListJsonFormat extends RootJsonFormat[FriendArray] {
def read(value: JsValue) = FriendArray(value.convertTo[Array[Person]])
def write(f: FriendArray) = ???
}
}
import FriendsProtocol._
val input = jsonString.parseJson
val friends = input.convertTo[FriendArray]
friends.map(x => println(x.name))
println(friends.length)
This will then print:
John
Tom
2

How to write a global object for table creation using Slick MultiDBCake principle?

I'm using the Play framework, btw. Slick does not help you "create" tables; users have to create them themselves, using tableQuery.ddl.create. Back when it was still Slick 1.0, I read a post about creating a Global.scala whose contents look like this:
import play.api._
import scala.slick.jdbc.meta.MTable
import play.api.Play.current
import play.api.db.slick._
import models.current.dao._
import models.current.dao.profile.simple._
object Global extends GlobalSettings {
override def onStart(app: Application) {
DB.withSession {implicit rs =>
databaseCreate(Articles, Labels, Article_Labels, Users, Messages)
def databaseCreate(tables: TableQuery[_]*) = {
for (table <- tables)
if (MTable.getTables(table.toString).list.isEmpty) table.ddl.create
}
}
}
}
However, since this is Slick 2.0 and I'm using the Cake pattern, that doesn't work anymore. I used to be able to pull the Table class out, and it had a name attribute I could call; but now, with the new and improved Cake pattern, all I'm left with is TableQuery, which exposes practically nothing. (A Cake pattern example is here: https://github.com/slick/slick-examples/blob/master/src/main/scala/com/typesafe/slick/examples/lifted/MultiDBCakeExample.scala)
My DAO class looks like this:
class DAO(override val profile: JdbcProfile) extends ArticleComponent with Article_LabelComponent with LabelComponent with MessageComponent with UserComponent with Profile {
val Articles = TableQuery[Articles]
val Article_Labels = TableQuery[Article_Labels]
val Labels = TableQuery[Labels]
val Messages = TableQuery[Messages]
val Users = TableQuery[Users]
}
object current {
val dao = new DAO(DB(play.api.Play.current).driver)
}
How should I modify my Global.scala to create a table if a table is not present in the Database?
Take a look at http://blog.knoldus.com/2014/01/20/scala-slick-2-0-for-multi-database/
You can also put the DDL create statement directly in Global.scala:
object Global extends GlobalSettings {
override def onStart(app: Application): Unit = {
Logger.info("Application has started")
try {
Connection.databaseObject().withSession { implicit session: Session =>
(Articles.ddl ++ Article_Labels.ddl ++ Labels.ddl ++ Messages.ddl ++ Users.ddl).create
Logger.info("All tables have been created")
}
} catch {
case ex: Exception => Logger.info(ex.getMessage(), ex)
}
}
}
Define the connection object:
object Connection {
def databaseObject(): Database = {
Database.forDataSource(DB.getDataSource())
}
}
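For the original requirement (create only the tables that don't already exist), the table name that seems to disappear behind TableQuery can be recovered in Slick 2.0 via baseTableRow. A sketch, assuming the DAO's profile.simple._ is imported and a session is in scope:

import scala.slick.jdbc.meta.MTable

// baseTableRow exposes the underlying Table row object, including tableName
def createIfNotExists(tables: TableQuery[_ <: Table[_]]*)(implicit session: Session): Unit =
  tables.foreach { table =>
    if (MTable.getTables(table.baseTableRow.tableName).list.isEmpty) table.ddl.create
  }

// e.g. inside Global.onStart:
// DB.withSession { implicit session => createIfNotExists(Articles, Labels, Article_Labels, Users, Messages) }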