Can Slick Codegen generate all the mapped case classes outside of the ${container} trait, so that they are not path-dependent on it? Maybe in another file altogether, e.g. Models.scala?
// SuppliersRowsDAA.scala
import persistence.Tables

object SuppliersRowsDAA {
  case class Save(sup: Tables.SuppliersRow)
}
I get this compilation error:
[error] /app/src/main/scala/persistence/dal/SuppliersDAA.scala:5: type mismatch;
[error] found : persistence.Tables.SuppliersRow
[error] required: SuppliersDAA.this.SuppliersRow
[error] case Save(sup) ⇒ sender ! db.run(Suppliers += sup)
Using the type projection Tables#SuppliersRow gives the same error.
If I manually cut and paste the SuppliersRow case class outside of the auto-generated trait Tables, it works!
....
trait Tables {
....
}
case class SuppliersRow(id: Int, userId: Int, name: String)
//EOF
The solution is to collect the original docWithCode of each EntityType and append it to super.packageCode(), so that the case classes are emitted outside the ${container} trait:
import slick.codegen.SourceCodeGenerator
import slick.model.Model
import scala.collection.mutable

class CustomizedCodeGenerator(model: Model) extends SourceCodeGenerator(model) {
  // Collects the generated case class code so it can be emitted outside the container trait.
  val models = new mutable.MutableList[String]

  override def packageCode(profile: String, pkg: String, container: String, parentType: Option[String]): String = {
    super.packageCode(profile, pkg, container, parentType) + "\n" + outsideCode
  }

  def outsideCode = indent(models.mkString("\n"))

  override def Table = new Table(_) {
    override def EntityType = new EntityTypeDef {
      // Capture the case class source and emit nothing in its original place inside the trait.
      override def docWithCode: String = {
        models += super.docWithCode + "\n"
        ""
      }
    }
  }
}
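For completeness, here is a rough sketch of how such a generator could be wired up and run; the connection settings, profile and output paths below are assumptions, not taken from the question:
import scala.concurrent.Await
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration.Duration
import slick.jdbc.PostgresProfile
import slick.jdbc.PostgresProfile.api._

object GenerateTables extends App {
  // Hypothetical connection settings; adjust to your own database and profile.
  val db = Database.forURL(
    "jdbc:postgresql://localhost/app",
    user = "app",
    password = "secret",
    driver = "org.postgresql.Driver"
  )
  try {
    // Introspect the schema, then run the customized generator over the resulting model.
    val model = Await.result(db.run(PostgresProfile.createModel()), Duration.Inf)
    new CustomizedCodeGenerator(model).writeToFile(
      "slick.jdbc.PostgresProfile", // profile
      "src/main/scala",             // output folder
      "persistence",                // package
      "Tables",                     // container name
      "Tables.scala"                // file name
    )
  } finally db.close()
}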
I am trying to define two case classes that are mapped to two Cassandra tables. These case classes have a field that is supposed to be an enum type (or something similar) as there's a limited set of values it can take.
import my.model.ExerciseStatus.ExerciseStatus
import com.datastax.spark.connector.SomeColumns
import com.datastax.spark.connector.types.TypeConverter
import scala.reflect.runtime.universe.typeTag

object ExerciseStatus extends Enumeration {
  type ExerciseStatus = Value
  val Active = Value("ACTIVE")
  val PendingValidation = Value("PENDING_VALIDATION")
}

object StringToExerciseStatusConverter extends TypeConverter[ExerciseStatus] {
  def targetTypeTag = typeTag[ExerciseStatus]
  def convertPF = { case str: String => ExerciseStatus.withName(str) }
}

object ExerciseStatusToStringConverter extends TypeConverter[String] {
  def targetTypeTag = typeTag[String]
  def convertPF = { case status: ExerciseStatus => status.toString }
}

case class Exercise(
  exerciseId: Long,
  name: String,
  description: String,
  status: ExerciseStatus
)
In another file, I have something similar.
object KnownFactsImpact extends Enumeration {
  type KnownFactsImpact = Value
  val Low = Value("LOW")
  val Medium = Value("MEDIUM")
  val High = Value("HIGH")
}

object StringToKnownFactsImpactConverter extends TypeConverter[KnownFactsImpact] {
  def targetTypeTag = typeTag[KnownFactsImpact]
  def convertPF = { case str: String => KnownFactsImpact.withName(str) }
}

object KnownFactsImpactToStringConverter extends TypeConverter[String] {
  def targetTypeTag = typeTag[String]
  def convertPF = { case impact: KnownFactsImpact => impact.toString }
}

case class KnownFacts(
  exerciseId: Long,
  ...,
  impact: KnownFactsImpact
)
From my "reader" class, I do this before actually reading the data from the two tables:
def loadCustomConverters(): Unit = {
  TypeConverter.registerConverter(StringToExerciseStatusConverter)
  TypeConverter.registerConverter(ExerciseStatusToStringConverter)
  TypeConverter.registerConverter(StringToKnownFactsImpactConverter)
  TypeConverter.registerConverter(KnownFactsImpactToStringConverter)
}
After the custom converters are registered, I read the data from those two tables like so:
def readExercises(sparkContext: SparkContext): CassandraTableScanRDD[Exercise] = {
  sparkContext.cassandraTable[Exercise](sparkContext.getConf.get(SparkConfConstants.Keyspace), Exercise.tableName)
}

def readKnownFactsForExercise(sparkContext: SparkContext, exerciseId: Long): CassandraTableScanRDD[KnownFacts] = {
  sparkContext
    .cassandraTable[KnownFacts](sparkContext.getConf.get(SparkConfConstants.Keyspace), KnownFacts.tableName)
    .where("exercise_id = ?", exerciseId)
}
The first table is read fine, but when it reaches the point of reading from the second one, I get the following error:
The Spark job class main method execution failed. Exception: "java.lang.reflect.InvocationTargetException" Caused By: "java.lang.UnsupportedOperationException: No Encoder found for my.model.KnownFactsImpact.KnownFactsImpact - field (class: "scala.Enumeration.Value", name: "impact") - root class: "my.model.KnownFacts""
Caused by: java.lang.UnsupportedOperationException: No Encoder found for my.model.KnownFactsImpact.KnownFactsImpact
- field (class: scala.Enumeration.Value, name: impact)
- root class: my.model.KnownFacts
at org.apache.spark.sql.catalyst.ScalaReflection$.$anonfun$serializerFor$1(ScalaReflection.scala:670)
at scala.reflect.internal.tpe.TypeConstraints$UndoLog.undo(TypeConstraints.scala:73)
at org.apache.spark.sql.catalyst.ScalaReflection.cleanUpReflectionObjects(ScalaReflection.scala:929)
at org.apache.spark.sql.catalyst.ScalaReflection.cleanUpReflectionObjects$(ScalaReflection.scala:928)
at org.apache.spark.sql.catalyst.ScalaReflection$.cleanUpReflectionObjects(ScalaReflection.scala:49)
at org.apache.spark.sql.catalyst.ScalaReflection$.serializerFor(ScalaReflection.scala:471)
at org.apache.spark.sql.catalyst.ScalaReflection$.$anonfun$serializerFor$6(ScalaReflection.scala:663)
at scala.collection.immutable.List.flatMap(List.scala:366)
at org.apache.spark.sql.catalyst.ScalaReflection$.$anonfun$serializerFor$1(ScalaReflection.scala:651)
at scala.reflect.internal.tpe.TypeConstraints$UndoLog.undo(TypeConstraints.scala:73)
at org.apache.spark.sql.catalyst.ScalaReflection.cleanUpReflectionObjects(ScalaReflection.scala:929)
at org.apache.spark.sql.catalyst.ScalaReflection.cleanUpReflectionObjects$(ScalaReflection.scala:928)
at org.apache.spark.sql.catalyst.ScalaReflection$.cleanUpReflectionObjects(ScalaReflection.scala:49)
at org.apache.spark.sql.catalyst.ScalaReflection$.serializerFor(ScalaReflection.scala:471)
at org.apache.spark.sql.catalyst.ScalaReflection$.serializerFor(ScalaReflection.scala:460)
at org.apache.spark.sql.catalyst.encoders.ExpressionEncoder$.apply(ExpressionEncoder.scala:71)
at org.apache.spark.sql.Encoders$.product(Encoders.scala:275)
at org.apache.spark.sql.LowPrioritySQLImplicits.newProductEncoder(SQLImplicits.scala:248)
at org.apache.spark.sql.LowPrioritySQLImplicits.newProductEncoder$(SQLImplicits.scala:248)
at org.apache.spark.sql.SQLImplicits.newProductEncoder(SQLImplicits.scala:34)
Is there some interference between the registered converters, so that the ones for KnownFactsImpact are no longer seen?
Any help would be much appreciated. Thank you.
P.S. I am using datastax-java-driver version 3.6.0.9. As for the Cassandra version, I don't know for sure; I think it's 2.2.
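Judging from the stack trace, the failure appears to come from Spark SQL's ExpressionEncoder (newProductEncoder in SQLImplicits), which cannot derive an encoder for scala.Enumeration.Value fields; the connector's registered TypeConverters are not consulted there. One common workaround, sketched below under the assumption that the column is stored as text (KnownFactsRow and the parameters are illustrative names, not from the original code), is to keep the Enumeration out of the case class that Spark has to encode and convert afterwards:
import org.apache.spark.SparkContext
import com.datastax.spark.connector._

// Hypothetical Spark-facing row type: the enum column is read as a plain String,
// so Spark never needs an encoder for scala.Enumeration.Value.
case class KnownFactsRow(exerciseId: Long, impact: String)

def readKnownFactsForExercise(sc: SparkContext, keyspace: String, table: String, exerciseId: Long) =
  sc.cassandraTable[KnownFactsRow](keyspace, table)
    .where("exercise_id = ?", exerciseId)
    .map(row => (row.exerciseId, KnownFactsImpact.withName(row.impact)))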
I want to print the contents of a collection and I've tried the mkString method, but it still doesn't give me the right content of the objects.
My code:
package org.template

import org.apache.predictionio.controller.LServing

class Serving extends LServing[Query, PredictedResult] {

  override def serve(query: Query,
                     predictedResults: Seq[PredictedResult]): PredictedResult = {
    println(predictedResults.mkString("\n"))
    predictedResults.head
  }
}
The response:
predictedResult([Lorg.template.ItemScore;@2fb3a837,[Lorg.template.Rule;@5cfc70a8)
Definition of the PredictedResult class:
package org.template
import org.apache.predictionio.controller.EngineFactory
import org.apache.predictionio.controller.Engine
// Query most similar (top num) items to the given
case class Query(items: Set[String], num: Int) extends Serializable
case class PredictedResult(itemScores: Array[ItemScore], rules: Array[Rule]) extends Serializable
If PredictedResult is a case class like so
case class PredictedResult(value: String)
val predictedResults = List(PredictedResult("aaa"), PredictedResult("bbb"))
println(predictedResults.mkString("\n"))
then we get nice output
PredictedResult(aaa)
PredictedResult(bbb)
However if it is a regular class like so
class PredictedResult(value: String)
val predictedResults = List(new PredictedResult("aaa"), new PredictedResult("bbb"))
println(predictedResults.mkString("\n"))
then we get
example.Hello$PredictedResult@566776ad
example.Hello$PredictedResult@6108b2d7
To get the nice output for regular class we need to override its toString method like so
class PredictedResult(value: String) {
  override def toString: String = s"""PredictedResult($value)"""
}
which now outputs
PredictedResult(aaa)
PredictedResult(bbb)
Addressing the comment, we have:
case class Rule(v: String)
case class ItemScore(v: Int)
case class PredictedResult(itemScores: Array[ItemScore], rules: Array[Rule]) {
  override def toString: String =
    s"""
       |PredictedResult(Array(${itemScores.mkString(",")}), Array(${rules.mkString(",")}))
    """.stripMargin
}

val predictedResults = List(PredictedResult(Array(ItemScore(42), ItemScore(11)), Array(Rule("rule1"), Rule("rule2"))))
println(predictedResults.mkString("\n"))
which outputs
PredictedResult(Array(ItemScore(42),ItemScore(11)), Array(Rule(rule1),Rule(rule2)))
If we change from Array to List like so
case class Rule(v: String)
case class ItemScore(v: Int)
case class PredictedResult(itemScores: List[ItemScore], rules: List[Rule])
val predictedResults = List(PredictedResult(List(ItemScore(42), ItemScore(11)), List(Rule("rule1"), Rule("rule2"))))
println(predictedResults.mkString("\n"))
then we get nice output out-of-the-box without the need to override toString
PredictedResult(List(ItemScore(42), ItemScore(11)),List(Rule(rule1), Rule(rule2)))
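If changing PredictedResult itself is not an option, one alternative (a sketch, not part of the original answer) is to format at the call site, converting the arrays to Lists just for printing, since Lists render their elements:
// Convert the Array fields to Lists only for display purposes.
println(
  predictedResults
    .map(r => s"PredictedResult(${r.itemScores.toList}, ${r.rules.toList})")
    .mkString("\n")
)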
I still get the same error even though I have defined the marshaller (and imported it); it appears that the case class's format is not in scope when the function is polymorphic, and this throws Cannot find JsonWriter or JsonFormat type class for the case class. Is there a reason why spray-json cannot find the implicit marshaller for the case class, even when it is defined? Is the case class in scope? Link to marshaller
import spray.json._
import queue.MedusaJsonProtocol._

object MysqlDb {
  ...
}

case class UserDbEntry(
  id: Int,
  username: String,
  countryId: Int,
  created: LocalDateTime
)

trait MysqlDb {
  implicit lazy val pool = MysqlDb.pool
}

trait HydraMapperT extends MysqlDb {
  val FetchAllSql: String
  def fetchAll(currentDate: String): Future[List[HydraDbRow]]
  def getJson[T](row: T): String
}

object UserHydraDbMapper extends HydraMapperT {
  override val FetchAllSql = "SELECT * FROM user WHERE created >= ?"

  override def fetchAll(currentDate: String): Future[List[UserDbEntry]] = {
    pool.sendPreparedStatement(FetchAllSql, Array(currentDate)).map { queryResult =>
      queryResult.rows match {
        case Some(rows) =>
          rows.toList map (x => rowToModel(x))
        case None => List()
      }
    }
  }

  override def getJson[UserDbEntry](row: UserDbEntry): String = {
    HydraQueueMessage(
      tableType = HydraTableName.UserTable,
      payload = row.toJson.toString()
    ).toJson.toString()
  }

  private def rowToModel(row: RowData): UserDbEntry = {
    UserDbEntry(
      id = row("id").asInstanceOf[Int],
      username = row("username").asInstanceOf[String],
      countryId = row("country_id").asInstanceOf[Int],
      created = row("created").asInstanceOf[LocalDateTime]
    )
  }
}
The failing line is payload = row.toJson.toString(): Can't find marshaller for UserDbEntry
You have defined UserDbEntry locally and there is no JSON marshaller for that type. Add the following:
implicit val userDbEntryFormat = Json.format[UserDbEntry]
I'm not sure how you can call row.toJson given UserDbEntry is a local case class. There must be a macro in there somewhere, but it's fairly clear that it's not in scope for the local UserDbEntry.
Edit
Now that I see your Gist, it looks like you have a package dependency problem. As designed, it'll be circular. You have defined the JSON marshaller in package com.at.medusa.core.queue, which imports UserDbEntry, which depends on package com.at.medusa.core.queue for marshalling.
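Since the question uses spray-json rather than Play JSON, the equivalent format would live in the protocol object. A minimal sketch, assuming java.time.LocalDateTime with an ISO-8601 string representation and that UserDbEntry is importable without a cycle (all assumptions, not code from the original Gist):
import java.time.LocalDateTime
import spray.json._

// Hypothetical protocol object; in the question it lives in package queue as MedusaJsonProtocol.
object MedusaJsonProtocol extends DefaultJsonProtocol {
  // spray-json ships no format for LocalDateTime, so provide one (ISO-8601 strings assumed).
  implicit object LocalDateTimeFormat extends JsonFormat[LocalDateTime] {
    def write(dt: LocalDateTime): JsValue = JsString(dt.toString)
    def read(value: JsValue): LocalDateTime = value match {
      case JsString(s) => LocalDateTime.parse(s)
      case other       => deserializationError(s"Expected ISO-8601 string, got $other")
    }
  }

  implicit val userDbEntryFormat: RootJsonFormat[UserDbEntry] = jsonFormat4(UserDbEntry)
}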
I'm using the Play framework with Scala and ReactiveMongo to save an object into my MongoDB database. Following http://reactivemongo.org/releases/0.10/documentation/bson/usage.html, I came up with the following code:
import java.util.Date

import com.google.inject.Inject
import model.User
import play.modules.reactivemongo.ReactiveMongoApi
import play.modules.reactivemongo.json.collection.JSONCollection
import reactivemongo.bson.{BSONDocument, BSONDocumentReader, BSONDocumentWriter, BSONObjectID}
import play.modules.reactivemongo.json._, ImplicitBSONHandlers._
import json.JsonFormatters._

class UserRepository @Inject() (val reactiveMongoApi: ReactiveMongoApi) {

  private def users = reactiveMongoApi.db.collection[JSONCollection]("users")

  def save(user: User) = {
    users.insert(user)
  }

  implicit object UserWriter extends BSONDocumentWriter[User] {
    def write(user: User) = {
      BSONDocument(
        "_id" -> Option(user.id).getOrElse(BSONObjectID.generate),
        "name" -> user.name,
        "email" -> user.email,
        "companyName" -> user.companyName,
        "created" -> Option(user.created).getOrElse(new Date)
      )
    }
  }

  implicit object UserReader extends BSONDocumentReader[User] {
    def read(doc: BSONDocument): User = {
      User(
        doc.getAs[BSONObjectID]("_id").get,
        doc.getAs[String]("name").get,
        doc.getAs[String]("email").get,
        doc.getAs[String]("companyName").get,
        doc.getAs[Date]("created").get
      )
    }
  }
}
I created my implicit writer to convert a User to a BSONDocument, so I was expecting it to be properly converted and saved into the database.
However, when I compile, I get:
UserRepository.scala:18: No Json serializer as JsObject found for type model.User. Try to implement an implicit OWrites or OFormat for this type.
[error] Error occurred in an application involving default arguments.
[error] users.insert(user)
I'm importing necessary packages as mentioned in No Json serializer as JsObject found for type play.api.libs.json.JsObject.
I'm also importing json.JsonFormatters._ which includes:
implicit val userWrites : Format[User] = Json.format[User]
Yet, it's still returning the same error, telling me it can't convert from JsObject to User. I fail to see where the JsObject is here, considering my User entity is just a case class with 5 fields.
case class User(var id: BSONObjectID, var name: String, var email: String, var companyName: String, var created: Date)
Any ideas? What am I missing?
You are using BSONObjectID, and you do not have an implicit OFormat for that type. Try this:
implicit val format = Json.format[User]

implicit val userFormats = new OFormat[User] {
  override def reads(json: JsValue): JsResult[User] = format.reads(json)
  override def writes(o: User): JsObject = format.writes(o).asInstanceOf[JsObject]
}
Typically, BSONObjectID (_id) does not make much sense in the application layer, so you should not expose it there. If you need an id for users, for example a running number, you can define a new field. This is an example: https://github.com/luongbalinh/play-mongo/blob/master/app/models/User.scala
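A minimal sketch of that suggestion (field names and types are illustrative, not taken from the linked example):
import java.util.Date
import play.api.libs.json.{Json, OFormat}

// Keep Mongo's _id out of the application model and carry an application-level id instead.
case class User(id: Long, name: String, email: String, companyName: String, created: Date)

object User {
  // Assumes Play's default JSON formats for Long, String and java.util.Date are in scope,
  // so the macro-derived format needs no extra implicits.
  implicit val userFormat: OFormat[User] = Json.format[User]
}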
belongsTo relationship obligatory
I don't want to define typeWine as an optional value, but if I don't make it optional, I have to set typeWine in the extract method, and I don't know how to do that.
The Skinny ORM documentation doesn't describe how to do this, and I'm getting stuck.
package app.models.wine

import scalikejdbc._
import skinny.orm.SkinnyCRUDMapper

case class Wine(id: Option[Long], typeWine: Option[Type] = None, name: String)

object Wine extends SkinnyCRUDMapper[Wine] {
  override def defaultAlias = createAlias("w")

  override def extract(rs: WrappedResultSet, n: ResultName[Wine]): Wine = new Wine(
    id = rs.get(n.id),
    name = rs.get(n.name)
  )

  belongsTo[Type](Type, (w, t) => w.copy(typeWine = t)).byDefault
}
package app.models.wine

import scalikejdbc._
import skinny.orm.SkinnyCRUDMapper

case class Type(id: Option[Long], typeName: String)

object Type extends SkinnyCRUDMapper[Type] {
  override def defaultAlias = createAlias("t")
  override def columnNames = Seq("id", "type_name")

  override def extract(rs: WrappedResultSet, n: ResultName[Type]): Type = new Type(
    id = rs.get(n.id),
    typeName = rs.get(n.typeName)
  )
}
Defining the belongsTo/byDefault relationship for typeWine and extracting the typeWine value with its resultName should work for you.
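A sketch of what that could look like; the non-optional field handling and the use of Type.defaultAlias.resultName inside extract are assumptions based on that suggestion, not code from the original answer:
case class Wine(id: Option[Long], typeWine: Type, name: String)

object Wine extends SkinnyCRUDMapper[Wine] {
  override def defaultAlias = createAlias("w")

  // Join the types table by default; the merge function keeps the already-extracted
  // value when no joined row is present.
  belongsTo[Type](Type, (w, t) => t.fold(w)(tt => w.copy(typeWine = tt))).byDefault

  override def extract(rs: WrappedResultSet, n: ResultName[Wine]): Wine = new Wine(
    id = rs.get(n.id),
    // Extract the associated Type directly from the joined columns via its result name.
    typeWine = Type.extract(rs, Type.defaultAlias.resultName),
    name = rs.get(n.name)
  )
}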