Play! could not find implicit value for parameter reducer - scala

I'm following along the Play! 2.1 coast-to-coast tutorial at http://mandubian.com/2013/01/13/JSON-Coast-to-Coast/ but cannot get even the most trivial example working.
When I compile my project I get an error:
could not find implicit value for parameter reducer: play.api.libs.functional.Reducer[play.api.libs.json.JsString,B]
My controller code is as follows:
package controllers
import play.api._
import play.api.mvc._
import play.api.libs.json._
import play.api.libs.json.Reads._
import play.api.libs.functional.syntax._
object MyController extends Controller {

  val validate = (
    (__ \ 'title).json.pick[JsString] and
    (__ \ 'desc).json.pick[JsString]
  ).reduce

  def test() = Action { implicit request =>
    Ok("test")
  }
}
What am I missing to get this working?

The syntax here is not quite right. 'pick' returns a JsValue (Play!'s representation of any valid JSON value: string, array, etc.), and as the error says, there is no implicit Reducer for JsString.
To validate multiple JSON fields you need to use 'pickBranch', which returns a JsObject (basically the equivalent of a Map[String, JsValue]); 'reduce' then merges the resulting JsObjects into one.
I actually still haven't found a good use case for 'pick'. The '\' syntax seems to do the equivalent job with less code and clutter.
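For reference, a minimal sketch of the corrected validator (following the tutorial's own pickBranch pattern; untested against your exact setup):

import play.api.libs.json._
import play.api.libs.json.Reads._
import play.api.libs.functional.syntax._

// Each pickBranch reads one branch as a JsObject; reduce merges them,
// which is why the implicit Reducer can now be found.
val validate = (
  (__ \ 'title).json.pickBranch and
  (__ \ 'desc).json.pickBranch
).reduce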


Providing implicit evidence for context bounds on Object

I'm trying to write some abstractions in Spark Scala code, but I'm running into issues when using objects. As an example I'm using Spark's Encoder, which converts case classes to database schemas, but I think this question applies to any context bound.
Here is a minimal code example of what I'm trying to do:
package com.sample.myexample

import org.apache.spark.sql.Encoder
import scala.reflect.runtime.universe.TypeTag

case class MySparkSchema(id: String, value: Double)

abstract class MyTrait[T: TypeTag: Encoder]

object MyObject extends MyTrait[MySparkSchema]
Which fails with the following compilation error:
Unable to find encoder for type com.sample.myexample.MySparkSchema. An implicit Encoder[com.sample.myexample.MySparkSchema] is needed to store com.sample.myexample.MySparkSchema instances in a Dataset. Primitive types (Int, String, etc) and Product types (case classes) are supported by importing spark.implicits._ Support for serializing other types will be added in future releases.
I tried defining the implicit evidence inside the object, like so (the import statement was suggested by IntelliJ, but it looks a bit weird):
import com.sample.myexample.MyObject.encoder

object MyObject extends MyTrait[MySparkSchema] {
  implicit val encoder: Encoder[MySparkSchema] = Encoders.product[MySparkSchema]
}
Which fails with the error message
MyTrait.scala:13:25: super constructor cannot be passed a self reference unless parameter is declared by-name
One other thing I tried is to convert the object to a class and provide implicit evidence to the constructor:
class MyObject(implicit evidence: Encoder[MySparkSchema]) extends MyTrait[MySparkSchema]
This compiles and works fine, but at the expense of MyObject now being a class instead.
Question: Is it possible to provide implicit evidence for the context bounds when extending a trait? Or does the implicit evidence force me to make a constructor and use class instead?
Your first error almost gives you the solution: you have to import spark.implicits._ for Product types.
You could do this:
val spark: SparkSession = SparkSession.builder().getOrCreate()
import spark.implicits._
Full example (note that a val and an import cannot sit at the top level of a Scala 2 source file, so the SparkSession and its implicits live in an enclosing object):

package com.sample.myexample

import org.apache.spark.sql.{Encoder, SparkSession}
import scala.reflect.runtime.universe.TypeTag

case class MySparkSchema(id: String, value: Double)

abstract class MyTrait[T: TypeTag: Encoder]

object Example {
  val spark: SparkSession = SparkSession.builder().getOrCreate()
  import spark.implicits._

  object MyObject extends MyTrait[MySparkSchema]
}
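As an aside (a sketch, not from the answer above): Encoders.product only needs a TypeTag, not a live session, so the implicit can also live in the companion object of the schema case class, where implicit search finds it without any SparkSession at definition time:

import org.apache.spark.sql.{Encoder, Encoders}

case class MySparkSchema(id: String, value: Double)

object MySparkSchema {
  // Resolved via the implicit scope of MySparkSchema, so
  // `object MyObject extends MyTrait[MySparkSchema]` compiles as-is.
  implicit val encoder: Encoder[MySparkSchema] = Encoders.product[MySparkSchema]
}

object MyObject extends MyTrait[MySparkSchema]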

What is the right way to send JSON response in http4s?

Not so long ago I switched from akka-http to http4s. One of the basic things I wanted to get right was JSON handling, in particular sending a JSON response.
I decided to use http4s with ZIO instead of cats, so here is what an HTTP route looks like:
import fs2.Stream
import org.http4s._
import org.http4s.dsl.io._
import org.http4s.implicits._
import scalaz.zio.Task
import scalaz.zio.interop.catz._
import io.circe.generic.auto._
import io.circe.syntax._
class TweetsRoutes {

  case class Tweet(author: String, tweet: String)

  val helloWorldService = HttpRoutes.of[Task] {
    case GET -> Root / "hello" / name => Task {
      Response[Task](Ok)
        .withBodyStream(Stream.emits(
          Tweet(name, "dummy tweet text").asJson.toString.getBytes
        ))
    }
  }.orNotFound
}
As you can see, the JSON serialization part is pretty verbose:

.withBodyStream(Stream.emits(
  Tweet(name, "dummy tweet text").asJson.toString.getBytes
))
Is there any other way to send JSON in a response?
Yes, there is: define an EntityDecoder and EntityEncoder for Task:

import io.circe.{Decoder, Encoder}
import org.http4s.circe.{jsonEncoderOf, jsonOf}

implicit def circeJsonDecoder[A](
  implicit decoder: Decoder[A]
): EntityDecoder[Task, A] = jsonOf[Task, A]

implicit def circeJsonEncoder[A](
  implicit encoder: Encoder[A]
): EntityEncoder[Task, A] = jsonEncoderOf[Task, A]
This way there is no need to transform the body to bytes by hand.
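With those instances in scope, the route can hand the case class straight to a status builder. A minimal sketch (assuming the dsl is instantiated for Task, e.g. via an Http4sDsl[Task], rather than the IO-specific org.http4s.dsl.io used above):

case GET -> Root / "hello" / name =>
  // EntityEncoder[Task, Tweet] is derived by circeJsonEncoder
  Ok(Tweet(name, "dummy tweet text"))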
EDIT: there is a full example here: https://github.com/mschuwalow/zio-todo-backend/blob/develop/src/main/scala/com/schuwalow/zio/todo/http/TodoService.scala
HT: #mschuwalow
There is an even simpler solution for this. If you want to handle case class JSON encoding for HTTP responses, you can just add these imports:
import io.circe.generic.auto._
import org.http4s.circe.CirceEntityCodec._
BTW, the same imports handle decoding of incoming JSON requests into case classes as well.
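For instance, a hypothetical POST route that reads a Tweet from the request body (same Task-based dsl assumption as above):

case req @ POST -> Root / "tweets" =>
  // req.as[Tweet] uses the EntityDecoder provided by CirceEntityCodec
  req.as[Tweet].flatMap(tweet => Ok(tweet))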

How is the <> method resolved on a tuple by Slick

Linked from this question
I came across Slick's documentation and found it mandates a def * method in the definition of a table to get a mapped projection.
So the line looks like this
def * = (name, id.?).<>(User.tupled,User.unapply)
Slick example here
I see the <> method is invoked on a tuple - in this case a Tuple2. The method is defined on the case class ShapedValue in Slick's code. How do I find out the implicit method that is doing the lookup?
Here are my imports:
import scala.concurrent.Await
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration.Duration
import slick.driver.H2Driver.api._
import slick.lifted.ShapedValue
import slick.lifted.ProvenShape
So I figured that one out for myself.
The Shape companion object extends three traits, namely ConstColumnShapeImplicits, AbstractTableShapeImplicits and TupleShapeImplicits. These three traits handle the implicit conversions concerning Shapes in Slick.
TupleShapeImplicits houses all the implicit conversion methods required to convert a tuple to a TupleShape.
Now in the line (name, id.?, salary.?).<>(User.tupled, User.unapply), what happens is that the method <> has an implicit parameter of type Shape.
The Shape companion object is therefore searched during implicit resolution, and that brings TupleShapeImplicits into scope as well.
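For reference, a minimal table definition showing where the conversion kicks in (a sketch against the imports above; the conversion name below is taken from Slick's sources, so verify it against your version):

import slick.driver.H2Driver.api._

case class User(name: String, id: Option[Int])

class Users(tag: Tag) extends Table[User](tag, "users") {
  def id   = column[Int]("id", O.PrimaryKey, O.AutoInc)
  def name = column[String]("name")

  // The tuple (Rep[String], Rep[Option[Int]]) is first lifted to a
  // ShapedValue by an implicit conversion (anyToShapedValue in the
  // profile's api), whose Shape argument comes from TupleShapeImplicits;
  // <> is then available on the resulting ShapedValue.
  def * = (name, id.?) <> (User.tupled, User.unapply)
}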

Force Play to JSON serialize timestamps as strings, not integer seconds

I'm trying to figure out how to get Play's toJson method to serialize java.sql.Timestamp/java.sql.Date objects as date/time strings rather than seconds since the epoch, which seems to be the default. I've tried two methods to accomplish this:
1) I changed the Jackson JSON configuration default as seen here in the Global onStart handler:
import play.api._
import play.libs.Json
import com.fasterxml.jackson.databind.ObjectMapper
import com.fasterxml.jackson.databind.SerializationFeature
object Global extends GlobalSettings {
  override def onStart(app: play.api.Application) {
    println("really started")
    var om = new ObjectMapper()
    om.configure(SerializationFeature.WRITE_DATES_AS_TIMESTAMPS, false)
    Json.setObjectMapper(om)
  }
}
But this doesn't seem to have any effect. I can tell the code is executing based on the println statement but the serialization is unaffected.
2) Write a custom Writer for the java.sql.Date object:
implicit val sqlDateWrites: Writes[java.sql.Date] = new Writes[java.sql.Date] {
  def writes(d: java.sql.Date): JsValue = JsString("WTF")
}
However this doesn't work either. I'm not sure if it's an error in how I'm writing it, or if I am just including it in the wrong place (I'm declaring it in the same file that I'm calling "toJson" in).
Any help would be appreciated.
Give this code a go; it stores dates in ISO-8601 format: https://gist.github.com/fancellu/f4b72e853766acf26bf16a7fb37cb8ac
You're mixing up Play's Java JSON library and its Scala library.
If you're using Scala, only use play.api.libs.json. If you're in Java, play.libs.Json.
To create a Writes[java.sql.Date], call Writes.sqlDateWrites(pattern) with whatever pattern you're using.
val sqlDateWrite = Writes.sqlDateWrites(myPattern)
Then, when you create your Writes for whatever object you're converting:
case class Foo(id: Long, createdAt: java.sql.Date)
implicit val fooWrites: Writes[Foo] = (
  (__ \ "id").write[Long] and
  (__ \ "createdAt").write[java.sql.Date](sqlDateWrite)
)(unlift(Foo.unapply))
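A hypothetical usage, assuming myPattern above is "yyyy-MM-dd" (the exact output depends on the pattern and your timezone):

Json.toJson(Foo(1L, new java.sql.Date(0L)))
// {"id":1,"createdAt":"1970-01-01"}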

JSON Rendering Scala/Play 2.1

I am working on a small project, trying to get a Scala/Play backend working. I want it to return and also accept JSON on the web-service side, but I cannot figure out how to get the JSON marshalling and unmarshalling to work. Could someone help me with this issue? I am using Play 2.1 and Scala 2.10. The error that I get is
"overriding method reads in trait Reads of type (json: play.api.libs.json.JsValue)play.api.libs.json.JsResult[models.Address]; method reads has incompatible type"
Edited: someone else gave me the solution. reads must return a JsResult, so you wrap the parsed value in JsSuccess.
case class Address(id: Long, name: String)

object Address {
  implicit object AddressFormat extends Format[Address] {
    def reads(json: JsValue): JsResult[Address] = JsSuccess(Address(
      (json \ "id").as[Long],
      (json \ "name").as[String]
    ))
    def writes(address: Address): JsValue = JsObject(Seq(
      "id" -> JsNumber(address.id),
      "name" -> JsString(address.name)
    ))
  }
}
With Play 2.1 you can simplify your code (note the field types must match the case class: id is a Long and name is a String):

import play.api.libs.json._
import play.api.libs.functional.syntax._

implicit val addressFormat = (
  (__ \ "id").format[Long] and
  (__ \ "name").format[String]
)(Address.apply, unlift(Address.unapply))
More detailed information can be found here: ScalaJsonCombinators
You can simplify your code even further by using macros, although they are marked as experimental:
import play.api.libs.json._
import play.api.libs.functional.syntax._
case class Address(id: Long, name: String)
implicit val addressFormat = Json.format[Address]
More details on this technique in the official Play documentation.
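A quick check of the generated format (hypothetical values):

val json = Json.toJson(Address(1L, "Main St"))
// {"id":1,"name":"Main St"}
json.as[Address]
// Address(1,Main St)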
Hey, my solution would be:
import play.api.libs.json.JsonNaming.SnakeCase
import play.api.libs.json._

object Test {
  implicit val config = JsonConfiguration(SnakeCase)
  implicit val userFormat: OFormat[Test] = Json.format[Test]
}

case class Test(
  testName: String,
  testWert: String,
  testHaus: String
)
In conclusion, you get a companion object. The config converts all keys of the case class into snake_case, and the implicit values ensure that valid JSON can be parsed into the model, so you get your Test model back.
The JSON should look like this:

{
  "test_name": "Hello",
  "test_wert": "Hello",
  "test_haus": "Hello"
}
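A quick round trip with that configuration (hypothetical values):

val json = Json.parse("""{"test_name":"Hello","test_wert":"Hello","test_haus":"Hello"}""")
json.as[Test]
// Test(Hello,Hello,Hello)
Json.toJson(Test("Hello", "Hello", "Hello"))
// back to the snake_case keys shown above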
https://www.playframework.com/documentation/2.6.x/ScalaJsonAutomated