Redoc documentation for tapir endpoint with sealed hierarchy not rendering as expected - scala

I'm trying to define a tapir endpoint which will accept two different possible payloads (in the snippet below, two different ways of defining a Thing). I'm broadly following the instructions here: https://circe.github.io/circe/codecs/adt.html, and defining my endpoint:
endpoint
.post
.in(jsonBody[ThingSpec].description("Specification of the thing"))
.out(jsonBody[Thing].description("Thing!"))
ThingSpec is a sealed trait, which both the classes representing possible payloads extend:
import io.circe.{Decoder, Encoder, derivation}
import io.circe.derivation.{deriveDecoder, deriveEncoder}
import sttp.tapir.Schema
import sttp.tapir.Schema.annotations.description
import sttp.tapir.generic.Configuration
import cats.syntax.functor._
import io.circe.syntax.EncoderOps
sealed trait ThingSpec {
def kind: String
}
object ThingSpec {
implicit val config: Configuration = Configuration.default.withSnakeCaseMemberNames
implicit val thingConfigDecoder: Decoder[ThingSpec] =
Decoder[ThingOneSpec].widen or Decoder[ThingTwoSpec].widen
implicit val thingConfigEncoder: Encoder[ThingSpec] = {
case one @ ThingOneSpec(_, _) => one.asJson
case two @ ThingTwoSpec(_, _) => two.asJson
}
implicit val thingConfigSchema: Schema[ThingSpec] =
Schema.oneOfUsingField[ThingSpec, String](_.kind, _.toString)(
"one" -> ThingOneSpec.thingConfigSchema,
"two" -> ThingTwoSpec.thingConfigSchema
)
}
case class ThingOneSpec(
name: String,
age: Long
) extends ThingSpec {
def kind: String = "one"
}
object ThingOneSpec {
implicit val config: Configuration = Configuration.default.withSnakeCaseMemberNames
implicit val thingConfigEncoder: Encoder[ThingOneSpec] = deriveEncoder(
derivation.renaming.snakeCase
)
implicit val thingConfigDecoder: Decoder[ThingOneSpec] = deriveDecoder(
derivation.renaming.snakeCase
)
implicit val thingConfigSchema: Schema[ThingOneSpec] = Schema.derived
}
case class ThingTwoSpec(
height: Long,
weight: Long,
) extends ThingSpec {
def kind: String = "two"
}
object ThingTwoSpec {
implicit val config: Configuration = Configuration.default.withSnakeCaseMemberNames
implicit val thingConfigEncoder: Encoder[ThingTwoSpec] = deriveEncoder(
derivation.renaming.snakeCase
)
implicit val thingConfigDecoder: Decoder[ThingTwoSpec] = deriveDecoder(
derivation.renaming.snakeCase
)
implicit val thingConfigSchema: Schema[ThingTwoSpec] = Schema.derived
}
This seems to be working OK, except for the Redoc docs that are generated. The "Request body" section of the Redoc page, which I believe is generated from
.in(jsonBody[ThingSpec].description("Specification of the thing"))
only includes details of the ThingOneSpec object; there is no mention of ThingTwoSpec. The "Payload" example section includes both.
My main question is how to get the request body section of the docs to show both possible payloads.
However - I'm aware that I might not have done this in the best way (from a circe/tapir point of view). Ideally, I'd like not to include an explicit discriminator (kind) in the trait/classes, because I'd rather it not be exposed to the end user in the 'Payload' sections of the docs. Despite reading
https://tapir.softwaremill.com/en/v0.17.7/endpoint/customtypes.html
https://github.com/softwaremill/tapir/blob/master/examples/src/main/scala/sttp/tapir/examples/custom_types/SealedTraitWithDiscriminator.scala
https://github.com/softwaremill/tapir/issues/315
I cannot get this working without the explicit discriminator.

You can get rid of the discriminator by defining a one-of schema by hand:
// additional imports needed for the hand-written coproduct schema
import sttp.tapir.SchemaType
import sttp.tapir.SchemaType.SchemaWithValue

implicit val thingConfigSchema: Schema[ThingSpec] =
Schema(
SchemaType.SCoproduct(List(ThingOneSpec.thingConfigSchema, ThingTwoSpec.thingConfigSchema), None) {
case one: ThingOneSpec => Some(SchemaWithValue(ThingOneSpec.thingConfigSchema, one))
case two: ThingTwoSpec => Some(SchemaWithValue(ThingTwoSpec.thingConfigSchema, two))
},
Some(Schema.SName(ThingSpec.getClass.getName))
)
(Yes, it is unnecessarily hard to write; I'll look into whether this can be generated by a macro or some other mechanism.)
When rendered by Redoc, I get a "one of" switch, so I think this is the desired outcome.
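As a follow-up, one way to check what tapir actually generates is to render the OpenAPI spec and look for the oneOf. A minimal sketch, assuming tapir 1.x module names (openapi-docs plus openapi-circe-yaml) and that the endpoint from the question is bound to a value called thingEndpoint (both are assumptions, not from the question):

import sttp.tapir.docs.openapi.OpenAPIDocsInterpreter
import sttp.apispec.openapi.circe.yaml._

// `thingEndpoint` is assumed to be the endpoint defined in the question
val openApiYaml: String =
  OpenAPIDocsInterpreter()
    .toOpenAPI(thingEndpoint, "Things API", "1.0")
    .toYaml
// the ThingSpec request body schema should now contain a `oneOf` listing both subtypes,
// which is what Redoc renders as the "one of" switch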

Related

Using different JsonNaming strategies for different case classes in scala play

I've got two different JSON messages that I want to turn into instances of case classes.
case class ThisThing(attributeOne: String)
case class ThatThing(attributeTwo: String)
implicit val config: JsonConfiguration.Aux[Json.MacroOptions] = JsonConfiguration(JsonNaming.SnakeCase)
implicit val thisThingFormat: OFormat[ThisThing] = Json.format[ThisThing]
implicit val thatThingFormat: OFormat[ThatThing]= Json.format[ThatThing]
I can now parse messages like:
val thisThing = Json.fromJson[ThisThing](Json.parse("{\"attribute_one\": \"hurray\"}"))
However, my ThatThing JSON messages are not snake cased, their attributes match the case class:
val thatThing = Json.fromJson[ThatThing](Json.parse("{\"attributeTwo\": \"hurray\"}"))
This gives an error, as it's looking for an attribute called attribute_two to map to attributeTwo.
How do I specify a naming strategy of SnakeCase for only certain case classes?
As with any implicit, the configuration can be scoped:
import play.api.libs.json._
case class ThisThing(attributeOne: String)
case class ThatThing(attributeTwo: String)
implicit val thisThingFormat: OFormat[ThisThing] = {
implicit val config = JsonConfiguration(JsonNaming.SnakeCase)
Json.format[ThisThing]
}
implicit val thatThingFormat: OFormat[ThatThing] = Json.format[ThatThing]
Then:
Json.fromJson[ThisThing](Json.parse("{\"attribute_one\": \"hurray\"}"))
// res0: play.api.libs.json.JsResult[ThisThing] = JsSuccess(ThisThing(hurray),)
Json.fromJson[ThatThing](Json.parse("{\"attributeTwo\": \"hurray\"}"))
// res1: play.api.libs.json.JsResult[ThatThing] = JsSuccess(ThatThing(hurray),)

Handling nulls with json Play

I am trying to parse json with null values for some fields using Play library. There is a case class which represents the data:
case class Id(value: Int) extends AnyVal
case class Name(value: String) extends AnyVal
case class Number(value: Int) extends AnyVal
case class Data(id: Option[Id], name: Option[Name], number: Option[Number])
Here is how parsing currently works:
def parse(jsValue: JsValue): Try[Seq[Data]] = Try {
jsValue.as[JsArray].value
.flatMap { record =>
val id = Id((record \ "id").as[Int])
val name = Name((record \ "name").as[String])
val number = Number((record \ "number").as[Int])
Some(Data(Some(id), Some(name), Some(number)))
}
}
Parsing with specific data types doesn't handle null cases, so this implementation returns:
Failure(play.api.libs.json.JsResultException: JsResultException(errors:List((,List(JsonValidationError(List(error.expected.jsstring),WrappedArray()))))))
For the input data like this:
{
"id": 1248,
"default": false,
"name": null,
"number": 2
}
I would like to have something like this: Seq(Data(Some(Id(1248)), None, Some(Number(2))))
I am going to write the data into the database so I do not mind writing some null values for these fields.
How can I handle null values for fields in parsed json?
You can simply let the play-json library generate the Reads for your case classes instead of writing them manually:
import play.api.libs.json._
object Data {
implicit val reads: Reads[Data] = {
// move these into the corresponding companion objects if used elsewhere...
implicit val idReads = Json.reads[Id]
implicit val numberReads = Json.reads[Number]
implicit val nameReads = Json.reads[Name]
Json.reads[Data]
}
}
def parse(jsValue: JsValue): Try[Seq[Data]] = Json.fromJson[Seq[Data]](jsValue).toTry
That way, your code will keep working even if you change the arguments of your case classes.
If you still want to code it manually, you can use the readNullable parser:
val name: Option[Name] = record.as((__ \ "name").readNullable[String]).map(Name(_))
Note, however, that using Try is somewhat frowned upon in FP and directly using JsResult would be more idiomatic.
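For reference, a fully manual Reads built from readNullable could look like the sketch below; it uses play-json's functional builder syntax, and dataReads is just an illustrative name:

import play.api.libs.json._
import play.api.libs.functional.syntax._

// readNullable yields None both when the field is missing and when it is explicitly null
implicit val dataReads: Reads[Data] = (
  (__ \ "id").readNullable[Int].map(_.map(Id(_))) and
  (__ \ "name").readNullable[String].map(_.map(Name(_))) and
  (__ \ "number").readNullable[Int].map(_.map(Number(_)))
)(Data.apply _)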
If you are not locked into play-json, then let me show how it can be done easily with jsoniter-scala:
import com.github.plokhotnyuk.jsoniter_scala.core._
import com.github.plokhotnyuk.jsoniter_scala.macros._
implicit val codec: JsonValueCodec[Seq[Data]] = JsonCodecMaker.make(CodecMakerConfig)
val json: Array[Byte] = """[
{
"id": 1248,
"default": false,
"name": null,
"number": 2
}
]""".getBytes("UTF-8")
val data: Seq[Data] = readFromArray(json)
println(data)
That will produce the following output:
List(Data(Some(Id(1248)),None,Some(Number(2))))
Here you can see an example of how to integrate it with the Play framework.

How to validate optional query parameters in Play Framework?

So I have a routes file that looks something like this:
GET /myRes controllers.MyController.get(ids: Option[String], elems: Option[String])
All well and good. Users can get stuff by doing:
/myRes
/myRes?ids=X
/myRes?elems=Y
/myRes?ids=X&elems=Y
However, they can also query the interface by doing:
/myRes?id=X
Which is problematic, because in this case the user gets the same result as if they had queried /myRes, which is almost certainly not the result they expected. This has been causing a lot of confusion/bugs for developers of the API. Is there an elegant way to catch incorrect/unspecified query parameters being passed to the controller and return a hard error for such queries?
Edit: Changed title to something more descriptive. My problem is basically validating the query parameters to catch any query parameters passed to the API which are not valid/correct.
It can be done with the help of a macro annotation like the following one:
import scala.reflect.macros.whitebox.Context
import scala.language.experimental.macros
import scala.annotation.StaticAnnotation
import scala.annotation.compileTimeOnly
import play.api.mvc._
@compileTimeOnly("respond 400 bad request in case of unexpected params")
class StrictParams extends StaticAnnotation {
def macroTransform(annottees: Any*): Any = macro StrictParamsMacro.impl
}
object StrictParamsMacro {
def impl(c: Context)(annottees: c.Expr[Any]*): c.Expr[Any] = {
import c.universe._
annottees.map(_.tree).toList match {
case q"def $name(..$params) = $action { ..$body }" :: Nil =>
val supportedParamNames = params.map(ab => ab.name.toString).toSet
c.Expr[Any](
q"""def $name(..$params) = { req: Request[_] =>
val unsupportedParams = req.queryString.keySet -- $supportedParamNames
if (unsupportedParams.nonEmpty) {
BadRequest(unsupportedParams.mkString("Unsupported Params: ", ", ", ""))
} else {
$body
}
}"""
)
}
}
}
Then you can annotate your action method like this:
@StrictParams
def get(ids: Option[String], elems: Option[String]) = Action {
...
}
I usually pass it like this for GET methods:
GET /getSomething Controllers.Application.getData()
GET /getSomething/:id Controllers.Application.getData(id:Integer)
GET /getSomething/:id/:name Controllers.Application.getData(id:Integer, name :String)
You can define a QueryStringBindable[A] to bind a Map of query string parameters to an instance of type A.
See the corresponding documentation.
I haven't fully explored it yet, but it doesn't seem too difficult to implement a QueryStringBindable[Option[A]].
If you want to forbid confusing parameters (e.g. allowing ids but not id), you can check for unexpected keys in the parameters Map and return an error message if needed (although I would recommend accepting the id key to match the behaviour users expect).
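Along the same lines, a plainer (if more repetitive) sketch of checking for unexpected keys inside a single action could look like this; it assumes Play 2.6+ style with an injected ControllerComponents, and MyController and allowedParams are made-up names:

import javax.inject.Inject
import play.api.mvc._

class MyController @Inject() (cc: ControllerComponents) extends AbstractController(cc) {
  // the only query string keys this action understands
  private val allowedParams = Set("ids", "elems")

  def get(ids: Option[String], elems: Option[String]): Action[AnyContent] = Action { request =>
    val unexpected = request.queryString.keySet -- allowedParams
    if (unexpected.nonEmpty)
      BadRequest(unexpected.mkString("Unsupported query parameters: ", ", ", ""))
    else
      Ok(s"ids=$ids, elems=$elems") // normal handling would go here
  }
}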

Persisting a recursive data model with SORM

For my project, I would like to make a tree model; let's say it's about files and directories. But files can be in multiple directories at the same time, so it's more like the way you add tags to email in Gmail.
I want to build a model for competences (say java, scala, angular, etc.) and put them in categories. In this case java and scala are languages, agile and scrum are ways of working, angular is a framework/toolkit, and so forth. But then we want to group stuff flexibly, i.e. play, java and scala are in a 'backend' category and angular, jquery, etc. are in a 'frontend' category.
I figured I would have a table competences like so:
case class Competence (name: String, categories: Option[Category])
and the categories as follows:
case class Category ( name: String, parent: Option[Category] )
This will compile, but SORM will generate an error (from activator console):
scala> import models.DB
import models.DB
scala> import models.Category
import models.Category
scala> import models.Competence
import models.Competence
scala> val cat1 = new Category ( "A", None )
cat1: models.Category = Category(A,None)
scala> val sav1 = DB.save ( cat1 )
sorm.Instance$ValidationException: Entity 'models.Category' recurses at 'models.Category'
at sorm.Instance$Initialization$$anonfun$2.apply(Instance.scala:216)
at sorm.Instance$Initialization$$anonfun$2.apply(Instance.scala:216)
at scala.Option.map(Option.scala:146)
at sorm.Instance$Initialization.<init>(Instance.scala:216)
at sorm.Instance.<init>(Instance.scala:38)
at models.DB$.<init>(DB.scala:5)
at models.DB$.<clinit>(DB.scala)
... 42 elided
Although I want the beautiful simplicity of SORM, will I need to switch to Slick for my project to implement this? I had the idea that link tables would be implicitly generated by SORM. Or could I simply work around the problem by making a:
case class Taxonomy ( child: Category, parent: Category )
and then do parsing/formatting work on the JS side? That seems to make the simplicity of using SORM disappear somewhat.
To give some idea, what I want is to make a ajaxy page where a user can add new competences in a list on the left, and then link/unlink them to whatever category tag in the tree he likes.
I encountered the same problem. I needed to define an interaction between two operands, which can be chained (recursive), like:
case class InteractionModel(
val leftOperantId: Int,
val operation: String ,
val rightOperantId: Int,
val next: InteractionModel)
My workaround: change this case class into JSON (a String) and persist it as a String; when retrieving it, convert it back from JSON. And since it's a String, do not register it as a SORM entity.
import spray.json._
case class InteractionModel(
val leftOperantId: Int,
val operation: String ,
val rightOperantId: Int,
val next: InteractionModel) extends Jsonable {
def toJSON: String = {
val js = this.toJson(InteractionModel.MyJsonProtocol.ImJsonFormat)
js.compactPrint
}
}
//define protocol
object InteractionModel {
def fromJSON(in: String): InteractionModel = {
in.parseJson.convertTo[InteractionModel](InteractionModel.MyJsonProtocol.ImJsonFormat)
}
val none = new InteractionModel((-1), "", (-1), null) {
override def toJSON = "{}"
}
object MyJsonProtocol extends DefaultJsonProtocol {
implicit object ImJsonFormat extends RootJsonFormat[InteractionModel] {
def write(im: InteractionModel) = {
def recWrite(i: InteractionModel): JsObject = {
val next = i.next match {
case null => JsNull
case iNext => recWrite(i.next)
}
JsObject(
"leftOperantId" -> JsNumber(i.leftOperantId),
"operation" -> JsString(i.operation.toString),
"rightOperantId" -> JsNumber(i.rightOperantId),
"next" -> next)
}
recWrite(im)
}
def read(value: JsValue) = {
def recRead(v: JsValue): InteractionModel = {
v.asJsObject.getFields("leftOperantId", "operation", "rightOperantId", "next") match {
case Seq(JsNumber(left), JsString(operation), JsNumber(right), nextJs) =>
val next = nextJs match {
case JsNull => null
case js => recRead(js)
}
InteractionModel(left.toInt, operation, right.toInt, next)
case s => InteractionModel.none
}
}
recRead(value)
}
}
}
}
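A hypothetical round trip using that workaround (the ids and operation strings below are made up): persist the String produced by toJSON in a plain String field of a registered entity, and rebuild the chain with fromJSON when reading it back.

// build a small chain, serialise it to the String that gets stored, then restore it
val chain = InteractionModel(1, "AND", 2, InteractionModel(3, "OR", 4, null)) // null ends the chain, as the format above expects
val stored: String = chain.toJSON                               // this String is what gets persisted
val restored: InteractionModel = InteractionModel.fromJSON(stored)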

How to represent optional fields in spray-json?

I have an optional field on my requests:
case class SearchRequest(url: String, nextAt: Option[Date])
My protocol is:
object SearchRequestJsonProtocol extends DefaultJsonProtocol {
implicit val searchRequestFormat = jsonFormat(SearchRequest, "url", "nextAt")
}
How do I mark the nextAt field optional, such that the following JSON objects will be correctly read and accepted:
{"url":"..."}
{"url":"...", "nextAt":null}
{"url":"...", "nextAt":"2012-05-30T15:23Z"}
I actually don't really care about the null case, but if you have details, it would be nice. I'm using spray-json, and was under the impression that using an Option would skip the field if it was absent from the original JSON object.
Works for me (spray-json 1.1.1 scala 2.9.1 build)
import cc.spray.json._
import cc.spray.json.DefaultJsonProtocol._
// string instead of date for simplicity
case class SearchRequest(url: String, nextAt: Option[String])
// btw, you could use jsonFormat2 method here
implicit val searchRequestFormat = jsonFormat(SearchRequest, "url", "nextAt")
assert {
List(
"""{"url":"..."}""",
"""{"url":"...", "nextAt":null}""",
"""{"url":"...", "nextAt":"2012-05-30T15:23Z"}""")
.map(_.asJson.convertTo[SearchRequest]) == List(
SearchRequest("...", None),
SearchRequest("...", None),
SearchRequest("...", Some("2012-05-30T15:23Z")))
}
You might have to create an explicit format (warning: pseudocode-ish):
object SearchRequestJsonProtocol extends DefaultJsonProtocol {
implicit object SearchRequestJsonFormat extends JsonFormat[SearchRequest] {
def read(value: JsValue) = value match {
case JsObject(List(
JsField("url", JsString(url)),
JsField("nextAt", JsString(nextAt)))) =>
SearchRequest(url, Some(new Instant(nextAt)))
case JsObject(List(JsField("url", JsString(url)))) =>
SearchRequest(url, None)
case _ =>
throw new DeserializationException("SearchRequest expected")
}
def write(obj: SearchRequest) = obj.nextAt match {
case Some(nextAt) =>
JsObject(JsField("url", JsString(obj.url)),
JsField("nextAt", JsString(nextAt.toString)))
case None => JsObject(JsField("url", JsString(obj.url)))
}
}
}
Use NullOptions trait to disable skipping nulls:
https://github.com/spray/spray-json#nulloptions
Example:
https://github.com/spray/spray-json/blob/master/src/test/scala/spray/json/ProductFormatsSpec.scala
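A small sketch of the NullOptions variant, reusing the String-based SearchRequest from the first answer so that formats for the fields exist out of the box (this assumes a spray-json version where the package is spray.json rather than cc.spray.json):

import spray.json._

case class SearchRequest(url: String, nextAt: Option[String])

// with NullOptions mixed in, None is written as an explicit null instead of being omitted;
// reading still turns both null and a missing field into None
object SearchRequestJsonProtocol extends DefaultJsonProtocol with NullOptions {
  implicit val searchRequestFormat: RootJsonFormat[SearchRequest] = jsonFormat2(SearchRequest)
}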
Don't know if this will help you, but you can give that field a default value in the case class definition, so if the field is not in the JSON, the default value will be assigned to it.
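A minimal sketch of that idea (note that whether a derived format actually falls back to the default depends on the JSON library and how the format is created):

// the default applies wherever the case class is constructed without the field;
// spray-json's plain jsonFormatN does not consult defaults by itself
case class SearchRequest(url: String, nextAt: Option[String] = None)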
Easy.
import cc.spray.json._
trait MyJsonProtocol extends DefaultJsonProtocol {
implicit val searchFormat = new JsonWriter[SearchRequest] {
def write(r: SearchRequest): JsValue = {
JsObject(
"url" -> JsString(r.url),
"next_at" -> r.nextAt.toJson,
)
}
}
}
class JsonTest extends FunSuite with MyJsonProtocol {
test("JSON") {
val search = new SearchRequest("www.site.ru", None)
val marshalled = search.toJson
println(marshalled)
}
}
For anyone who is chancing upon this post and wants an update to François Beausoleil's answer for newer versions of Spray (circa 2015+?), JsField is deprecated as a public member of JsValue; you should simply supply a list of tuples instead of JsFields. Their answer is spot-on, though.
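In other words, on a newer spray-json the write side of that answer would become something like this sketch, building the JsObject from plain (String, JsValue) tuples:

def write(obj: SearchRequest): JsValue = obj.nextAt match {
  case Some(nextAt) =>
    // tuples instead of JsField
    JsObject("url" -> JsString(obj.url), "nextAt" -> JsString(nextAt.toString))
  case None =>
    JsObject("url" -> JsString(obj.url))
}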