Swagger 3 documenting Map using annotations - scala

I have a model for a response in Scala:
case class ModelDto(
  ...
  map: Map[UUID, AnotherDto]
  ...
)
How can I document this using annotations such as @Schema or @ArraySchema?
I have no YAML file; all fields are described using only @Schema / @ArraySchema, like this:
case class ModelDto(
  @Schema(description = "field description", required = true, `type` = "string", example = "field example")
  field: String
)

The easiest way would be to do something like this:
case class ModelDto(
  ...
  @Schema(implementation = classOf[YourTypeMap])
  map: Map[UUID, AnotherDto]
  ...
)
class YourTypeMap extends java.util.HashMap[UUID, AnotherDto] {}
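For completeness, here is a sketch of how the pieces can fit together, assuming the @Schema placement on constructor parameters already works for your other fields as shown above (the UuidToAnotherDtoMap name is made up for illustration):
import io.swagger.v3.oas.annotations.media.Schema
import java.util.UUID

// Documentation-only marker type: swagger-core resolves java.util.Map subclasses
// into an object schema whose additionalProperties use the map's value type (AnotherDto).
class UuidToAnotherDtoMap extends java.util.HashMap[UUID, AnotherDto]

case class ModelDto(
  @Schema(
    description = "mapping from UUID to AnotherDto",
    required = true,
    implementation = classOf[UuidToAnotherDtoMap]
  )
  map: Map[UUID, AnotherDto]
)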

Related

Redoc documentation for tapir endpoint with sealed hierarchy not rendering as expected

I'm trying to define a tapir endpoint which will accept two different possible payloads (in the snippet below, two different ways of defining a Thing). I'm broadly following the instructions here: https://circe.github.io/circe/codecs/adt.html, and defining my endpoint:
endpoint
  .post
  .in(jsonBody[ThingSpec].description("Specification of the thing"))
  .out(jsonBody[Thing].description("Thing!"))
ThingSpec is a sealed trait, which both of the classes representing the possible payloads extend:
import io.circe.{Decoder, Encoder, derivation}
import io.circe.derivation.{deriveDecoder, deriveEncoder}
import sttp.tapir.Schema
import sttp.tapir.Schema.annotations.description
import sttp.tapir.generic.Configuration
import cats.syntax.functor._
import io.circe.syntax.EncoderOps
sealed trait ThingSpec {
  def kind: String
}

object ThingSpec {
  implicit val config: Configuration = Configuration.default.withSnakeCaseMemberNames
  implicit val thingConfigDecoder: Decoder[ThingSpec] =
    Decoder[ThingOneSpec].widen or Decoder[ThingTwoSpec].widen
  implicit val thingConfigEncoder: Encoder[ThingSpec] = {
    case one @ ThingOneSpec(_, _) => one.asJson
    case two @ ThingTwoSpec(_, _) => two.asJson
  }
  implicit val thingConfigSchema: Schema[ThingSpec] =
    Schema.oneOfUsingField[ThingSpec, String](_.kind, _.toString)(
      "one" -> ThingOneSpec.thingConfigSchema,
      "two" -> ThingTwoSpec.thingConfigSchema
    )
}
case class ThingOneSpec(
  name: String,
  age: Long
) extends ThingSpec {
  def kind: String = "one"
}

object ThingOneSpec {
  implicit val config: Configuration = Configuration.default.withSnakeCaseMemberNames
  implicit val thingConfigEncoder: Encoder[ThingOneSpec] = deriveEncoder(
    derivation.renaming.snakeCase
  )
  implicit val thingConfigDecoder: Decoder[ThingOneSpec] = deriveDecoder(
    derivation.renaming.snakeCase
  )
  implicit val thingConfigSchema: Schema[ThingOneSpec] = Schema.derived
}
case class ThingTwoSpec(
  height: Long,
  weight: Long
) extends ThingSpec {
  def kind: String = "two"
}

object ThingTwoSpec {
  implicit val config: Configuration = Configuration.default.withSnakeCaseMemberNames
  implicit val thingConfigEncoder: Encoder[ThingTwoSpec] = deriveEncoder(
    derivation.renaming.snakeCase
  )
  implicit val thingConfigDecoder: Decoder[ThingTwoSpec] = deriveDecoder(
    derivation.renaming.snakeCase
  )
  implicit val thingConfigSchema: Schema[ThingTwoSpec] = Schema.derived
}
This seems to be working OK, except for the Redoc docs that are generated. The "request body" section of the Redoc output, which I believe is generated from
.in(jsonBody[ThingSpec].description("Specification of the thing"))
only includes details of the ThingOneSpec object; there is no mention of ThingTwoSpec. The "payload" example section includes both.
My main question is how to get the request body section of the docs to show both possible payloads.
However - I'm aware that I might not have done this in the best way (from a circe/tapir point of view). Ideally, I'd like not to include an explicit discriminator (kind) in the trait/classes, because I'd rather it not be exposed to the end user in the 'Payload' sections of the docs. Despite reading
https://tapir.softwaremill.com/en/v0.17.7/endpoint/customtypes.html
https://github.com/softwaremill/tapir/blob/master/examples/src/main/scala/sttp/tapir/examples/custom_types/SealedTraitWithDiscriminator.scala
https://github.com/softwaremill/tapir/issues/315
I cannot get this working without the explicit discriminator.
You can get rid of the discriminator by defining a one-of schema by hand:
implicit val thingConfigSchema: Schema[ThingSpec] =
  Schema(
    SchemaType.SCoproduct(List(ThingOneSpec.thingConfigSchema, ThingTwoSpec.thingConfigSchema), None) {
      case one: ThingOneSpec => Some(SchemaWithValue(ThingOneSpec.thingConfigSchema, one))
      case two: ThingTwoSpec => Some(SchemaWithValue(ThingTwoSpec.thingConfigSchema, two))
    },
    Some(Schema.SName(ThingSpec.getClass.getName))
  )
(Yes, it is unnecessarily hard to write; I'll look into whether this could be generated by a macro or some other mechanism.)
When rendered by Redoc, I get a "one of" switch, so I think this is the desired outcome.
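A note on names: the hand-written schema above uses the schema API of more recent tapir versions; under that assumption, roughly these imports are needed (verify the exact locations against the tapir version you are using):
import sttp.tapir.Schema
import sttp.tapir.SchemaType
import sttp.tapir.SchemaType.SchemaWithValue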

How to use SQLSyntaxSupport with a case class containing other case classes

I'm trying to use ScalikeJDBC's ORM features with SQLSyntaxSupport.
The table layout is as follows:
create table samples (
  id text,
  event_name text,
  event_data text,
  created_at timestamp,
  scope text
)
The classes in Scala are as follows
case class SampleData(id: String, eventName: String, eventData: String)
case class SampleMetadata(createdAt: OffsetDateTime, scope: String)
case class Sample(data: SampleData, metadata: SampleMetadata)
I would like to use SQLSyntaxSupport like so
object SampleSchema extends SQLSyntaxSupport[Sample] {
  override val tableName = "samples"
  override lazy val columns = Seq(
    "id",
    "event_name",
    "event_data",
    "created_at",
    "scope"
  )

  val insertColumns = Seq(
    column.id,
    column.eventName,
    column.eventData,
    column.createdAt,
    column.scope
  )
}
and later be able to do:
def insert(sample: Sample): Unit = DB localTx { implicit session =>
  applyUpdate {
    insertInto(SampleSchema)
      .columns(SampleSchema.insertColumns: _*)
      .values(SampleSchema.toInsertValues(session, sample): _*)
  }
}
The code doesn't compile, with errors like:
Sample#eventName not found. Expected fields are #data, #metadata
column.eventName
Is there a way to make it work without flattening the case class Sample?
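The error occurs because column.eventName is resolved against the fields of the type parameter of SQLSyntaxSupport[Sample], and Sample only exposes the top-level fields data and metadata. One possible workaround, sketched here under the assumption that a separate persistence-layer row type is acceptable (the SampleRow name is made up for illustration), keeps the nested Sample in the domain model and flattens only at the database boundary:
import java.time.OffsetDateTime
import scalikejdbc._

// Flat row type used only by the persistence layer; its field names match the columns.
case class SampleRow(
  id: String,
  eventName: String,
  eventData: String,
  createdAt: OffsetDateTime,
  scope: String
)

object SampleRow {
  // Convert the nested domain object into the flat row representation before inserting.
  def fromSample(s: Sample): SampleRow =
    SampleRow(s.data.id, s.data.eventName, s.data.eventData, s.metadata.createdAt, s.metadata.scope)
}

object SampleRowSchema extends SQLSyntaxSupport[SampleRow] {
  override val tableName = "samples"
  // column.eventName, column.createdAt, etc. now resolve, because SampleRow has those fields.
}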

JSON to TSV in scala

Is there an elegant way to convert JSON data (based on a case class) to TSV form?
I have a case class that has a nested case class, and the nested case class can have a list and a map.
case class Product(
  pname: Option[String],
  pid: Int,
  pDetail: Option[PDetail]
)

case class PDetail(
  pbatchNo: List[Int]
)
example json:
{
  "pname": "pnameValue",
  "pid": "pidValue",
  "pDetail": {
    "pbatchNo": [1, 2]
  }
}
I want an output like:
pnameValue pidValue 1 2
You can override the toString method and separate the fields with a tab (\t).
Something like:
// inside case class Product
override def toString: String =
  s"${pname.getOrElse("")}\t$pid\t${pDetail.map(_.pbatchNo.mkString("\t")).getOrElse("")}"
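A quick usage sketch of the toString above (the values are made up for illustration):
val p = Product(Some("pnameValue"), 42, Some(PDetail(List(1, 2))))
println(p) // prints the fields separated by tabs: pnameValue 42 1 2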

Handling nulls with Play JSON

I am trying to parse JSON with null values for some fields using the Play JSON library. There is a case class which represents the data:
case class Id(value: Int) extends AnyVal
case class Name(value: String) extends AnyVal
case class Number(value: Int) extends AnyVal
case class Data(id: Option[Id], name: Option[Name], number: Option[Number])
Here is how parsing currently works:
def parse(jsValue: JsValue): Try[Seq[Data]] = Try {
  jsValue.as[JsArray].value
    .flatMap { record =>
      val id = Id((record \ "id").as[Int])
      val name = Name((record \ "name").as[String])
      val number = Number((record \ "number").as[Int])
      Some(Data(Some(id), Some(name), Some(number)))
    }
}
Parsing with specific data types doesn't handle null cases, so this implementation returns:
Failure(play.api.libs.json.JsResultException: JsResultException(errors:List((,List(JsonValidationError(List(error.expected.jsstring),WrappedArray()))))))
For the input data like this:
{
  "id": 1248,
  "default": false,
  "name": null,
  "number": 2
}
I would like to have something like this: Seq(Data(Some(Id(1248)), None, Some(Number(2))))
I am going to write the data into the database so I do not mind writing some null values for these fields.
How can I handle null values for fields in parsed json?
You can simply let the play-json library generate the Reads for your case classes instead of writing them manually:
import play.api.libs.json._

object Data {
  implicit val reads: Reads[Data] = {
    // move these into the corresponding companion objects if used elsewhere...
    implicit val idReads = Json.reads[Id]
    implicit val numberReads = Json.reads[Number]
    implicit val nameReads = Json.reads[Name]
    Json.reads[Data]
  }
}

def parse(jsValue: JsValue): Try[Seq[Data]] = Json.fromJson[Seq[Data]](jsValue).toTry
That way, your code will keep working even if you change the arguments of your case classes.
If you still want to code it manually, you can use the readNullable parser:
val name: Option[Name] = record.as((__ \ "name").readNullable[String]).map(Name(_))
Note, however, that using Try is somewhat frowned upon in FP and directly using JsResult would be more idiomatic.
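For illustration, a minimal sketch of that variant (assuming the Reads[Data] defined above is in scope):
// validate returns a JsResult, preserving the validation errors instead of wrapping them in a Throwable
def parse(jsValue: JsValue): JsResult[Seq[Data]] = jsValue.validate[Seq[Data]]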
If you are not locked into play-json, then let me show how it can be done easily with jsoniter-scala:
import com.github.plokhotnyuk.jsoniter_scala.core._
import com.github.plokhotnyuk.jsoniter_scala.macros._

implicit val codec: JsonValueCodec[Seq[Data]] = JsonCodecMaker.make(CodecMakerConfig)

val json: Array[Byte] = """[
  {
    "id": 1248,
    "default": false,
    "name": null,
    "number": 2
  }
]""".getBytes("UTF-8")

val data: Seq[Data] = readFromArray(json)
println(data)
That will produce the following output:
List(Data(Some(Id(1248)),None,Some(Number(2))))
Here you can see an example of how to integrate it with the Play framework.

Parsing JSON in Scala using ScalaObjectMapper

I am trying to parse JSON into a case class but am running into an issue.
I have the following JSON:
{
  "general": {
    "table": "123"
  },
  "employee": {
    "table": "employee_data"
  },
  "fulltime": {
    "table": "fulltime_employee_data"
  },
  "consultant": {
    "table": "consultant_employee_data"
  }
}
Here's my case class:
case class EmployeeInfo(employees: List[Map[String, String]])
I'm trying to parse the above JSON into the case class with the following code, but the object comes back as null.
val mapper = new ObjectMapper() with ScalaObjectMapper
mapper.registerModule(new JavaTimeModule())
mapper.registerModule(DefaultScalaModule)
mapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false)
val str = Source.fromFile("employeeInfo.json").mkString
val temp = mapper.readValue[EmployeeInfo](str)
temp here is being returned as null. My JSON seems to be a list of maps, which is what I provided in my case class. Any thoughts on what I'm missing?
I figured it out. My case class needed the variable names as defined in the JSON.
case class EmployeeInfo(
  general: TableDetails,
  employee: TableDetails,
  fulltime: TableDetails,
  consultant: TableDetails
)

// With DefaultScalaModule registered, a case class with a field matching the
// JSON key ("table") deserializes without hand-written getters and setters.
case class TableDetails(table: String)
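A quick usage sketch with the corrected model, reusing the mapper configuration from the question (the printed values assume the JSON shown above):
val info = mapper.readValue[EmployeeInfo](str)
println(info.general.table)  // 123
println(info.employee.table) // employee_data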