Background
I'm working on an API using both Play Framework and Slick. In an effort to avoid repetitive boilerplate, I want to define my public JSON models without their ID field and wrap them in a WithId container.
import play.api.libs.json._
import play.api.libs.functional.syntax._
case class WithId[T](id: Long, item: T)
case class Wiki(name: String, source: Option[String], text: String)
object WithId {
  implicit def withIdRead[T : Reads] : Reads[WithId[T]] = (
    (JsPath \ "id").read[Long] and
    JsPath.read[T]
  )((id, item) => WithId(id, item))

  implicit def withIdWrite[T : Writes] : Writes[WithId[T]] = (
    (JsPath \ "id").write[Long] and
    JsPath.write[T]
  ).apply(unlift(WithId.unapply[T]))
}
Thanks to the magic of the Reads and Writes definitions, I can easily handle JSON with or without the id.
scala> val rawIdJson = """{"id": 123, "name": "My First Wiki", "text": "This is my first wiki article"}"""
rawIdJson: String = {"id": 123, "name": "My First Wiki", "text": "This is my first wiki article"}
scala> val withId = Json.parse(rawIdJson).validate[WithId[Wiki]].get
withId: model.util.WithId[model.entity.Wiki] = WithId(123,Wiki(My First Wiki,None,This is my first wiki article))
scala> val withIdJson = Json.toJson(withId)
withIdJson: play.api.libs.json.JsValue = {"id":123,"name":"My First Wiki","text":"This is my first wiki article"}
scala> val rawJson = """{"name": "My First Wiki", "text": "This is my first wiki article"}"""
rawJson: String = {"name": "My First Wiki", "text": "This is my first wiki article"}
scala> val withoutId = Json.parse(rawJson).validate[Wiki].get
withoutId: model.entity.Wiki = Wiki(My First Wiki,None,This is my first wiki article)
scala> val withoutIdJson = Json.toJson(withoutId)
withoutIdJson: play.api.libs.json.JsValue = {"name":"My First Wiki","text":"This is my first wiki article"}
All well and good.
The problem I now have is that Slick returns rows from the database as tuples or case classes, depending on the query I'm using. Obviously I could write a lot of fairly straightforward helper methods to transform each tuple or case class into its public model:
object Wiki {
  implicit val wikiFmt = Json.format[Wiki]

  def fromRow(row: WikiRow) : WithId[Wiki] = WithId(row.id, Wiki(row.name, row.source, row.text))
  def fromRow(tup: (Long, String, Option[String], String)) : WithId[Wiki] = WithId(tup._1, Wiki(tup._2, tup._3, tup._4))
}
... but that is a lot of boilerplate to maintain as the number of public models grows.
Problem
Is there a clean way to take a Tuple4[Long, String, Option[String], String] or a case class WikiRow(id: Long, name: String, source: Option[String], text: String) and convert it into WithId[Wiki] (and vice-versa)?
Once I introduce another public model like case class Template(name: String, description: String), can we generalize the solution from #1 to now handle converting a Tuple3[Long, String, String] into WithId[Template] (and vice-versa)?
What happens if we throw a field into the private model that isn't used in the public model? E.g., case class WikiRow(id: Long, name: String, source: Option[String], text: String, hidden: Boolean). The hidden field needs to be dropped when going WikiRow => WithId[Wiki], and supplied from another source when going WithId[Wiki] => WikiRow.
As for questions 1 and 2: yes it is possible with shapeless, as has been suggested. Here is a solution to go from a tuple or row to WithId[T].
scala> :paste
// Entering paste mode (ctrl-D to finish)
case class WikiRow(id: Long, name: String, source: Option[String], text: String)
case class Wiki(name: String, source: Option[String], text: String)
case class WithId[T](id: Long, item: T)
def createWithId[T] = new WithIdCreator[T]

class WithIdCreator[Out] {
  import shapeless._
  import shapeless.ops.hlist.IsHCons

  def apply[In, InGen <: HList, Tail <: HList](in: In)(
    implicit
    genIn: Generic.Aux[In, InGen],         // view the input row/tuple as an HList
    hcons: IsHCons.Aux[InGen, Long, Tail], // require its head to be the Long id
    genOut: Generic.Aux[Out, Tail]         // build the public model from the remaining fields
  ): WithId[Out] = {
    val rep = genIn.to(in)
    val id = hcons.head(rep)
    val tail = hcons.tail(rep)
    WithId(id, genOut.from(tail))
  }
}
// Exiting paste mode, now interpreting.
defined class WikiRow
defined class Wiki
defined class WithId
createWithId: [T]=> WithIdCreator[T]
defined class WithIdCreator
scala> createWithId[Wiki](WikiRow(3L, "foo", None, "barbaz"))
res1: WithId[Wiki] = WithId(3,Wiki(foo,None,barbaz))
scala> createWithId[Wiki]((3L, "foo", None: Option[String], "barbaz"))
res2: WithId[Wiki] = WithId(3,Wiki(foo,None,barbaz))
scala> case class Template(name: String, description: String)
defined class Template
scala> createWithId[Template]((3L, "foo", "barbaz"))
res3: WithId[Template] = WithId(3,Template(foo,barbaz))
A conversion in the other direction will be more or less analogous.
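For illustration, a minimal sketch of the reverse direction could look like this (createRow and RowCreator are my own names, not from the code above; the same Generic machinery applies):
def createRow[Out] = new RowCreator[Out]

class RowCreator[Out] {
  import shapeless._

  def apply[In, InGen <: HList](withId: WithId[In])(
    implicit
    genIn: Generic.Aux[In, InGen],           // view the public model as an HList
    genOut: Generic.Aux[Out, Long :: InGen]  // the target row/tuple is the id followed by those fields
  ): Out = genOut.from(withId.id :: genIn.to(withId.item))
}

// e.g. createRow[WikiRow](WithId(3L, Wiki("foo", None, "barbaz")))
//      createRow[(Long, String, Option[String], String)](WithId(3L, Wiki("foo", None, "barbaz")))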
I see no reason why question 3 wouldn't be possible either, but you will have to rewrite the conversions to handle dropping or manually supplying the extra parameters.
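Question 3 can't be fully automatic, since the extra field has to be supplied from somewhere, but a rough non-generic sketch (with a hypothetical WikiRowH carrying the extra hidden field, and helper names of my own choosing) might look like this:
import shapeless._

case class WikiRowH(id: Long, name: String, source: Option[String], text: String, hidden: Boolean)

def toPublic(row: WikiRowH): WithId[Wiki] =
  Generic[WikiRowH].to(row) match {
    // drop the trailing `hidden` field
    case id :: name :: source :: text :: _ :: HNil => WithId(id, Wiki(name, source, text))
  }

def toRow(w: WithId[Wiki], hidden: Boolean): WikiRowH =
  // supply `hidden` explicitly and append it before rebuilding the row
  Generic[WikiRowH].from(w.id :: (Generic[Wiki].to(w.item) :+ hidden))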
You can learn more about this in the shapeless guide. All the necessary concepts are explained in there.
Related
I have the following Input Objects:
val BusinessInputType = InputObjectType[BusinessInput]("BusinessInput", List(
  InputField("userId", StringType),
  InputField("name", StringType),
  InputField("address", OptionInputType(StringType)),
  InputField("phonenumber", OptionInputType(StringType)),
  InputField("email", OptionInputType(StringType)),
  InputField("hours", ListInputType(BusinessHoursInputType))
))

val BusinessHoursInputType = InputObjectType[BusinessHoursInput]("hours", List(
  InputField("weekDay", IntType),
  InputField("startTime", StringType),
  InputField("endTime", StringType)
))
And here are my models with custom Marshalling defined:
case class BusinessInput(userId: String, name: String, address: Option[String], phonenumber: Option[String], email: Option[String], hours: Seq[BusinessHoursInput])
object BusinessInput {
  implicit val manual = new FromInput[BusinessInput] {
    val marshaller = CoercedScalaResultMarshaller.default
    def fromResult(node: marshaller.Node) = {
      val ad = node.asInstanceOf[Map[String, Any]]
      System.out.println(ad)
      BusinessInput(
        userId = ad("userId").asInstanceOf[String],
        name = ad("name").asInstanceOf[String],
        address = ad.get("address").flatMap(_.asInstanceOf[Option[String]]),
        phonenumber = ad.get("phonenumber").flatMap(_.asInstanceOf[Option[String]]),
        email = ad.get("email").flatMap(_.asInstanceOf[Option[String]]),
        hours = ad("hours").asInstanceOf[Seq[BusinessHoursInput]]
      )
    }
  }
}
case class BusinessHoursInput(weekDay: Int, startTime: Time, endTime: Time)
object BusinessHoursInput {
  implicit val manual = new FromInput[BusinessHoursInput] {
    val marshaller = CoercedScalaResultMarshaller.default
    def fromResult(node: marshaller.Node) = {
      val ad = node.asInstanceOf[Map[String, Any]]
      System.out.println("HEY")
      BusinessHoursInput(
        weekDay = ad("weekDay").asInstanceOf[Int],
        startTime = Time.valueOf(ad("startTime").asInstanceOf[String]),
        endTime = Time.valueOf(ad("endTime").asInstanceOf[String])
      )
    }
  }
}
My question is: when I have a nested InputObject that has custom marshalling, I don't see the marshalling of BusinessHoursInput getting invoked before BusinessInput is marshalled. I noticed this because the "HEY" print statement never executes before the print statement of "ad" in BusinessInput. This causes problems for me later on when I try to insert the hours field of BusinessInput into the DB, because it cannot be cast to a BusinessHoursInput object. In Sangria, is it not possible to custom-marshal nested objects before the parent object is marshalled?
You are probably using BusinessInput as an argument type. The actual implicit lookup takes place at Argument definition time, and only for the BusinessInput type.
Since FromInput is type-class-based deserialization, you need to explicitly define the dependency between the deserializers of nested objects. For example, you can rewrite the deserializer like this:
case class BusinessInput(userId: String, name: String, address: Option[String], phonenumber: Option[String], email: Option[String], hours: Seq[BusinessHoursInput])
object BusinessInput {
  implicit def manual(implicit hoursFromInput: FromInput[BusinessHoursInput]) = new FromInput[BusinessInput] {
    val marshaller = CoercedScalaResultMarshaller.default
    def fromResult(node: marshaller.Node) = {
      val ad = node.asInstanceOf[Map[String, Any]]
      BusinessInput(
        userId = ad("userId").asInstanceOf[String],
        name = ad("name").asInstanceOf[String],
        address = ad.get("address").flatMap(_.asInstanceOf[Option[String]]),
        phonenumber = ad.get("phonenumber").flatMap(_.asInstanceOf[Option[String]]),
        email = ad.get("email").flatMap(_.asInstanceOf[Option[String]]),
        // each element of the raw "hours" sequence is deserialized with the nested FromInput
        hours = ad("hours").asInstanceOf[Seq[Any]].map(h =>
          hoursFromInput.fromResult(h.asInstanceOf[hoursFromInput.marshaller.Node]))
      )
    }
  }
}
In this version, I'm taking advantage of the existing FromInput[BusinessHoursInput] to deserialize each BusinessHoursInput from the raw input.
Also as an alternative, you can avoid defining manual FromInput deserializers altogether by taking advantage of existing JSON-based deserializers. For example, in most cases, circe's automatic derivation works just fine. You just need these 2 imports (in the file where you are defining the arguments):
import sangria.marshalling.circe._
import io.circe.generic.auto._
Those imports put the appropriate FromInput instances into scope. These instances take advantage of circe's own deserialization mechanism.
import io.circe.Decoder
import io.circe.generic.semiauto.deriveDecoder
import sangria.macros.derive.deriveInputObjectType
import sangria.marshalling.circe._
import sangria.schema.{Argument, InputObjectType}
object XXX {
  // when you have a FromInput for all the field types in the case class (Int, String) you can derive it
  case class BusinessHoursInput(weekDay: Int, startTime: String, endTime: String)

  object BusinessHoursInput {
    implicit val decoder: Decoder[BusinessHoursInput] = deriveDecoder
    implicit val inputType: InputObjectType[BusinessHoursInput] = deriveInputObjectType[BusinessHoursInput]()
  }

  // the same here; you also need the InputObjectType for BusinessHoursInput
  case class BusinessInput(userId: String, name: String, address: Option[String], phonenumber: Option[String], email: Option[String], hours: Seq[BusinessHoursInput])

  object BusinessInput {
    implicit val decoder: Decoder[BusinessInput] = deriveDecoder
    implicit val inputType: InputObjectType[BusinessInput] = deriveInputObjectType[BusinessInput]()
  }

  // for this to work you need the InputObjectType for BusinessInput and a FromInput for BusinessInput in scope;
  // the FromInput comes from having the Decoder in scope plus import sangria.marshalling.circe._
  private val businessInputArg = Argument("businessInput", BusinessInput.inputType)
}
If you do not use circe but a different JSON library, you will of course need different type classes and the appropriate import in scope.
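For example, with Play JSON it would look roughly like the following; I'm writing this from memory, so treat the sangria-play-json module name and the exact import as something to verify:
import play.api.libs.json.{Json, Reads}
import sangria.marshalling.playJson._  // from the sangria-play-json module

// a Reads in scope plays the role the circe Decoder played above,
// giving you FromInput instances via the import
implicit val hoursReads: Reads[BusinessHoursInput] = Json.reads[BusinessHoursInput]
implicit val businessReads: Reads[BusinessInput] = Json.reads[BusinessInput]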
I'd like to escape characters in a dynamic List used to create a case class.
case class Profile(biography: String,
                   userid: String,
                   creationdate: String) extends Serializable

object Profile {
  val cs = this.getClass.getConstructors

  def createFromList(params: List[Any]) = params match {
    case List(biography: Any,
              userid: Any,
              creationdate: Any) =>
      Profile(StringEscapeUtils.escapeJava(biography.asInstanceOf[String]),
              StringEscapeUtils.escapeJava(creationdate.asInstanceOf[String]),
              StringEscapeUtils.escapeJava(userid.asInstanceOf[String]))
  }
}

JSON.parseFull("""{"biography":"An avid learner","userid":"165774c2-a0e7-4a24-8f79-0f52bf3e2cda", "creationdate":"2015-07-13T07:48:47.924Z"}""")
  .map(_.get.asInstanceOf[scala.collection.immutable.Map[String, Any]])
  .map {
    m => Profile.createFromList(m.values.to[collection.immutable.List])
  } saveToCassandra("testks", "profiles", SomeColumns("biography", "userid", "creationdate"))
I get this error:
scala.MatchError: List(An avid learner, 165774c2-a0e7-4a24-8f79-0f52bf3e2cda, 2015-07-13T07:48:47.925Z) (of class scala.collection.immutable.$colon$colon)
Any ideas please?
It might be simpler to use a different (external) JSON library than scala.util.parsing.json (which has been deprecated since Scala 2.11).
There are a lot of good Scala JSON libraries; below is an example using json4s.
import org.json4s._
import org.json4s.native.JsonMethods._

// extract needs implicit Formats in scope
implicit val formats: Formats = DefaultFormats

case class Profile(biography: String, userid: String, creationdate: String)

val json = """{
             |  "biography":"An avid learner",
             |  "userid":"165774c2-a0e7-4a24-8f79-0f52bf3e2cda",
             |  "creationdate":"2015-07-13T07:48:47.924Z"
             |}""".stripMargin

parse(json).extract[Profile]
// Profile(An avid learner,165774c2-a0e7-4a24-8f79-0f52bf3e2cda,2015-07-13T07:48:47.924Z)
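If the escaping step from the original snippet is still wanted, it can be applied after extraction. A small sketch, assuming the question's StringEscapeUtils comes from Apache Commons:
import org.apache.commons.lang3.StringEscapeUtils  // or org.apache.commons.text.StringEscapeUtils

val profile = parse(json).extract[Profile]
// escape each field of the already-typed case class
val escaped = profile.copy(
  biography = StringEscapeUtils.escapeJava(profile.biography),
  userid = StringEscapeUtils.escapeJava(profile.userid),
  creationdate = StringEscapeUtils.escapeJava(profile.creationdate))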
I'm trying (and failing) to get my head around how spray-json converts JSON feeds into objects. If I have a simple key -> value JSON feed, it seems to work OK, but the data I want to read comes in a list like this:
[{
"name": "John",
"age": "30"
},
{
"name": "Tom",
"age": "25"
}]
And my code looks like this:
package jsontest
import spray.json._
import DefaultJsonProtocol._
object JsonFun {
  case class Person(name: String, age: String)
  case class FriendList(items: List[Person])

  object FriendsProtocol extends DefaultJsonProtocol {
    implicit val personFormat = jsonFormat2(Person)
    implicit val friendListFormat = jsonFormat1(FriendList)
  }

  def main(args: Array[String]): Unit = {
    import FriendsProtocol._
    val input = scala.io.Source.fromFile("test.json")("UTF-8").mkString.parseJson
    val friendList = input.convertTo[FriendList]
    println(friendList)
  }
}
If I change my test file so it just has a single person, not in an array, and run val friendList = input.convertTo[Person], then everything parses; but as soon as I try to parse an array it fails with the error Object expected in field 'items'.
Can anyone point me the direction of what I'm doing wrong?
Well, as is often the way: after spending hours trying to get something working and then posting to Stack Overflow, I've managed to get this working myself.
The correct implementation of FriendsProtocol was:
object FriendsProtocol extends DefaultJsonProtocol {
  implicit val personFormat = jsonFormat2(Person)

  implicit object friendListJsonFormat extends RootJsonFormat[FriendList] {
    def read(value: JsValue) = FriendList(value.convertTo[List[Person]])
    def write(f: FriendList) = ???
  }
}
Telling Spray how to read / write (just read in my case) the list object is enough to get it working.
Hope that helps someone else!
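For completeness, a quick check against the array from the question, assuming the corrected FriendsProtocol above is in scope:
import FriendsProtocol._

// the two-person array from test.json now converts cleanly
val friends = """[{"name": "John", "age": "30"}, {"name": "Tom", "age": "25"}]""".parseJson.convertTo[FriendList]
// FriendList(List(Person(John,30), Person(Tom,25)))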
To make the friend array easier to use, extend the IndexedSeq[Person] trait by implementing the appropriate apply and length methods. This allows the standard Scala collections API methods like map, filter and sortBy to be used directly on the FriendArray instance itself, without having to access the underlying Array[Person] value that it wraps.
case class Person(name: String, age: String)

// extending IndexedSeq[Person] lets you use .map, .filter etc. on FriendArray directly
case class FriendArray(items: Array[Person]) extends IndexedSeq[Person] {
  def apply(index: Int) = items(index)
  def length = items.length
}

object FriendsProtocol extends DefaultJsonProtocol {
  implicit val personFormat = jsonFormat2(Person)

  implicit object friendListJsonFormat extends RootJsonFormat[FriendArray] {
    def read(value: JsValue) = FriendArray(value.convertTo[Array[Person]])
    def write(f: FriendArray) = ???
  }
}
import FriendsProtocol._
val input = jsonString.parseJson
val friends = input.convertTo[FriendArray]
friends.map(x => println(x.name))
println(friends.length)
This will then print:
John
Tom
2
I am starting a new project using Scala and Akka and am having trouble writing tests. In my tests I am checking the equality of two List objects using should equal:
actualBook should equal (expectedBook)
Everything in my test suite compiles and runs, but the tests fail with the following message:
org.scalatest.exceptions.TestFailedException: List(BookRow(A,100.0,10.6)) did not equal List(BookRow(A,100.0,10.6))
Clearly the two lists contain the same contents, so I would expect the test to pass. Not sure if this is relevant or not, but actualBook and expectedBook have the same hash code (and actualBook(0) and expectedBook(0) also have the same hash code).
I am wondering if the problem is due to...
my using the incorrect comparison operator
the fact that I have not explicitly defined a way to compare BookRow objects.
For reference here is the code for my tests:
package lob
import cucumber.api.DataTable
import org.scalatest.Matchers._
import scala.collection.JavaConversions._
import cucumber.api.java.en.{When, Then}
class OrderBookSteps {
  val orderTypes = OrderType.all()
  val buyBook: OrderBook = new OrderBook(Bid, orderTypes)
  val sellBook: OrderBook = new OrderBook(Ask, orderTypes)

  @When("""^the following orders are added to the "(.*?)" book:$""")
  def ordersAddedToBook(sideString: String, orderTable: DataTable) {
    val (side, book) = getBook(sideString)
    val orders = orderTable.asList[OrderRow](classOf[OrderRow]).toList.map(
      r => LimitOrder(r.broker, side, r.volume, r.price.toDouble))
    orders.foreach(book.add)
  }

  @Then("""^the "(.*?)" order book looks like:$""")
  def orderBookLooksLike(sideString: String, bookTable: DataTable) {
    val (_, book) = getBook(sideString)
    val expectedBook = bookTable.asList[BookRow](classOf[BookRow]).toList
    val actualBook = book.orders().map(o => BookRow(o.broker, o.volume, orderTypes(o).bookDisplay))
    actualBook should equal (expectedBook)
  }

  def getBook(side: String) = side match {
    case "Bid" => (Bid, buyBook)
    case "Ask" => (Ask, sellBook)
  }

  case class OrderRow(broker: String, volume: Double, price: String)
  case class BookRow(broker: String, volume: Double, price: String)
}
You can try:
List(BookRow("A", 100.0, "10.6")).toSeq should equal (List(BookRow("A", 100.0, "10.6")).toSeq)
Or:
List(BookRow("A", 100.0, "10.6")) should contain theSameElementsAs List(BookRow("A", 100.0, "10.6"))
This assumes BookRow has a proper equals (a case class gets one automatically; a plain class would need equals overridden).
I have found a solution, although I don't understand why it works! I just needed to replace:
case class OrderRow(broker: String, volume: Double, price: String)
case class BookRow(broker: String, volume: Double, price: String)
with
private case class OrderRow(broker: String, volume: Double, price: String)
private case class BookRow(broker: String, volume: Double, price: String)
I would like to know why this works.
I'm trying to use XStream in Scala, but it looks like the annotations from XStream are not working in Scala. In the following code, only @XStreamAlias works. I've also attached sample output; could anyone help with this?
@XStreamAlias("posts")
case class Posts(
  @XStreamAsAttribute tag: String,
  @XStreamAsAttribute total: String,
  @XStreamAsAttribute user: String,
  @XStreamImplicit(itemFieldName = "post") posts: JList[Post])

@XStreamAlias("post")
case class Post(
  @XStreamAsAttribute description: String,
  @XStreamAsAttribute extended: String,
  @XStreamAsAttribute hash: String,
  @XStreamAsAttribute href: String,
  @XStreamAsAttribute shared: String,
  @XStreamAsAttribute tag: String,
  @XStreamAsAttribute time: String)
<posts>
  <tag>a</tag>
  <total>b</total>
  <user>c</user>
  <posts>
    <post>
      <description></description>
      <extended></extended>
      <hash></hash>
      <href></href>
      <shared></shared>
      <tag></tag>
      <time></time>
    </post>
  </posts>
</posts>
This is an old question, but I recently had a similar problem and I'd like to add my solution to it for future reference to others.
I wanted to use an @XStreamAlias for a case class and its fields, and to deserialize an XML from a string according to the case class annotations:
import com.thoughtworks.xstream.XStream
import com.thoughtworks.xstream.annotations.XStreamAlias
import com.thoughtworks.xstream.io.xml.DomDriver
@XStreamAlias("TOP")
case class example(
  @XStreamAlias("SUB")
  val param: String) {
}

val xs = new XStream(new DomDriver())
xs.setClassLoader(getClass.getClassLoader)
xs.processAnnotations(classOf[example])

// OK
xs.fromXML(<TOP><param>x</param></TOP>.toString())

// Error - No such field sub
xs.fromXML(<TOP><SUB>x</SUB></TOP>.toString())
The problem was that the annotation didn't work for the class field.
After searching a bit I found out that Scala generates several accessors for a class field (cf. scala.annotation.meta), but by default an annotation is only applied to the constructor parameter and not to all the accessors.
To get the annotation to apply to the field as well (the spec/library is ambiguous about these concepts, so I will assume it only applies to the field and not to the accessors) you can use @(XStreamAlias @field)("SUB"). This solved my problem:
@XStreamAlias("TOP")
case class example2(
  @(XStreamAlias @scala.annotation.meta.field)("SUB")
  val param: String) {
}

val xs2 = new XStream(new DomDriver())
xs2.setClassLoader(getClass.getClassLoader)
xs2.processAnnotations(classOf[example2])

// OK
val obj = xs2.fromXML(<TOP><SUB>x</SUB></TOP>.toString()).asInstanceOf[example2]
print(obj)
//> example2(x)
It looks like Java annotations do not play well with Scala. :-) You'd have to rely on setting this up with plain old method calls. See below for a code snippet that I knocked up looking at their API documentation.
import com.thoughtworks.xstream.annotations._
import com.thoughtworks.xstream.XStream
import java.util.{ ArrayList => JList }
class Posts(
  val tag: String,
  val total: String,
  val user: String,
  val post: JList[Post])

class Post(
  val description: String,
  val extended: String,
  val hash: String,
  val href: String,
  val shared: String,
  val tag: String,
  val time: String)

object Main extends App {
  val xst = new XStream()
  val pp = new JList[Post]
  val rstv = new Post("a", "b", "c", "d", "e", "f", "g")
  pp.add(rstv)

  val postClazz = classOf[Post]
  val postsClazz = classOf[Posts]
  val pstv = new Posts("p", "q", "r", pp)

  xst.useAttributeFor(postsClazz, "tag")
  xst.useAttributeFor(postsClazz, "total")
  xst.useAttributeFor(postsClazz, "user")
  xst.useAttributeFor(postClazz, "description")
  xst.useAttributeFor(postClazz, "extended")
  xst.useAttributeFor(postClazz, "hash")
  xst.useAttributeFor(postClazz, "href")
  xst.useAttributeFor(postClazz, "shared")
  xst.useAttributeFor(postClazz, "tag")
  xst.useAttributeFor(postClazz, "time")

  val foo = xst.toXML(pstv)
  println(foo)
}
Note that all classes must have their fields set up so they can be looked up by XStream. With this code, I got the output below:
<Posts tag="p" total="q" user="r">
  <post>
    <Post description="a" extended="b" hash="c" href="d" shared="e" tag="f" time="g"/>
  </post>
</Posts>
Hope this helps.
EDIT: To expand on what I said above, annotations such as @XStreamAlias et al. were completely stripped from the compiled bytecode. This can be seen by running javap or scalap on those compiled classes. This led me to conclude that Java annotations were not treated on par with Scala annotations (though ideally they should be; feel free to chime in if I made any mistake). Would be glad to learn something. :-)