How to document a JSON request body schema in tapir endpoint definition - scala

I'm using tapir to define a series of endpoints, as follows:
def thingModifyState[M: Encoder: Decoder: Schema] =
  endpoint.put
    .name(s"Modify state of a $name Thing")
    .description("Apply a modification object to a Thing")
    .in("state")
    .in(this.name.description("Type of Thing"))
    .in(
      path[String Refined And[MatchesRegex["^[a-z0-9_]+$"], MaxSize[64]]]
        .description("Name of Thing")
        .name("thing")
    )
    .in(jsonBody[M].description("Object describing modification"))
    .errorOut(
      statusCode
        .description(StatusCode(404), "Thing does not exist")
    )
    .tag(name)
thingModifyState is then used to define multiple endpoints:
blueRoutes.thingModifyState[things.models.blue.StateChange]
redRoutes.thingModifyState[things.models.red.StateChange]
The blue.StateChange object is defined like this:
object StateChange {
  implicit val config: Configuration = Configuration.default.withSnakeCaseMemberNames
  implicit val thingStateEncoder: Encoder[StateChange] = deriveEncoder(derivation.renaming.snakeCase)
  implicit val thingStateDecoder: Decoder[StateChange] = deriveDecoder(derivation.renaming.snakeCase)
  implicit val thingStateSchema: Schema[StateChange] = Schema.derived
}

/**
 * Specifies a change to the Thing's state
 *
 * @param counterChange negative or positive increment of the counter
 * @param resetTimestamp new timestamp value
 */
case class StateChange(counterChange: Long, resetTimestamp: Long)
When docs are generated (using redoc), the 'request body schema' section shows the overall description ("Object describing modification") of the jsonBody, but I'd like it to also include descriptions of the jsonBody fields (counter_change / reset_timestamp) alongside their types.
I wouldn't expect the scaladoc definitions from StateChange to get picked up, but right now I cannot figure out what to do to get descriptions of the jsonBody fields into the output documentation. Do I need to derive the Schema manually, and include the descriptions somehow?
EDIT: I suspect this: https://github.com/softwaremill/tapir/issues/247 of being relevant, but the documentation link at the end of the issue (https://tapir-scala.readthedocs.io/en/latest/endpoint/customtypes.html#customising-derived-schemas) links to an anchor that is no longer there. I haven't yet found its new location!
EDIT2: Ah, maybe the link is now here: https://tapir.softwaremill.com/en/latest/endpoint/schemas.html#customising-derived-schemas. It mentions using @description annotations, but is missing explanation/examples of where those annotations go for derived schemas.
EDIT3: I was hoping for something like this:
import sttp.tapir.EndpointIO.annotations.description

case class StateChange(
  @description("negative or positive increment of the counter") counterChange: Long,
  @description("new timestamp value") resetTimestamp: Long
)
... but it doesn't help.

Following the documentation here: https://tapir.softwaremill.com/en/latest/endpoint/schemas.html#customising-derived-schemas, define the case class like this:
import sttp.tapir.Schema.annotations._

case class StateChange(
  @description("negative or positive increment of the counter") counterChange: Long,
  @description("new value for reset timestamp") resetTimestamp: Long
)
Note that you need to import the annotation from sttp.tapir.Schema.annotations, not from the location referred to in my question.
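If you'd prefer to keep the case class annotation-free, the same docs page also describes customising the derived schema with .modify. A minimal sketch of that alternative, using only documented tapir API:

// Alternative: attach field descriptions to the derived schema directly,
// instead of annotating the case class.
implicit val thingStateSchema: Schema[StateChange] =
  Schema.derived[StateChange]
    .modify(_.counterChange)(_.description("negative or positive increment of the counter"))
    .modify(_.resetTimestamp)(_.description("new timestamp value"))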

Related

Cannot construct a Read instance for type User. Type misunderstanding with Doobie in Scala

I am trying to return a User record from a database using doobie, http4s, and cats. I have been stymied by the type system, which is providing the following error based on the code below:
router:
val httpRoutes = HttpRoutes.of[IO] {
  case GET -> Root / "second" / id =>
    val intId: Integer = Integer.parseInt(id)
    // if I make this ConnectionIO[Option[Unit]] it compiles, but returns a cats Free object
    val userOption: ConnectionIO[Option[User]] = UserModel.findById(intId, transactor.transactor)
    Ok(s"userOption is instance of: ${userOption.getClass} object: ${userOption.toString}")
}.orNotFound
model:
case class User(
  id: Read[Integer],
  username: Read[String],
  email: Read[String],
  passwordHash: Read[String], // PasswordHash[SCrypt],
  isActive: Read[Boolean],
  dob: Read[Date]
) {
  // def verifyPassword(password: String): VerificationStatus = SCrypt.checkpw[cats.Id](password, passwordHash)
}

object UserModel {
  def findById[User: Read](id: Integer, transactor: Transactor[ConnectionIO]): ConnectionIO[Option[User]] =
    findBy(fr"id = ${id.toString}", transactor)

  private def findBy[User: Read](by: Fragment, transactor: Transactor[ConnectionIO]): ConnectionIO[Option[User]] = {
    (sql"SELECT id, username, email, password_hash, is_active, dob FROM public.user WHERE " ++ by)
      .query[User]
      .option
      .transact(transactor)
  }
}
Error:
Error:(35, 70) Cannot find or construct a Read instance for type:
core.model.User
This can happen for a few reasons, but the most common case is that a data
member somewhere within this type doesn't have a Get instance in scope. Here are
some debugging hints:
- For Option types, ensure that a Read instance is in scope for the non-Option
version.
- For types you expect to map to a single column ensure that a Get instance is
in scope.
- For case classes, HLists, and shapeless records ensure that each element
has a Read instance in scope.
- Lather, rinse, repeat, recursively until you find the problematic bit.
You can check that an instance exists for Read in the REPL or in your code:
scala> Read[Foo]
and similarly with Get:
scala> Get[Foo]
And find the missing instance and construct it as needed. Refer to Chapter 12
of the book of doobie for more information.
val userOption: ConnectionIO[Option[User]] = UserModel.findById(intId, transactor.transactor)
If I change ConnectionIO[Option[User]] to ConnectionIO[Option[Unit]] it compiles and runs, but returns a Free(...) object from the cats library which I have not been able to figure out how to parse, and I don't see why I shouldn't be able to return my case class!
Also, see the type declarations on the findBy and findById methods. Before I added those, there was a compile error that said it found a User but required a Read[User]. I attempted applying the same type declaration to the invocation of findById in the router, but it gave the same error provided above.
Thank you for your help in advance, and please be patient with my ignorance. I've never encountered a type system so much smarter than me!
There's a lot to unpack here...
- You don't need to wrap the fields in User in Read.
- Parameterizing the functions with User is not necessary, since you know what type you are getting back.
- Most of the time, if you are manually handling Read instances, you're doing something wrong. Building a Read instance is only useful when the data you're reading doesn't directly map to your type (see the sketch after this list).
- Transactor is meant to be a conversion from ConnectionIO (some action over a JDBC connection) to some other monad (e.g. IO) by summoning a connection, performing the action in a transaction, and disposing of said connection. Transactor[ConnectionIO] doesn't really make sense with this, and can probably lead to deadlocks (since you will eventually try to summon a connection while you are holding onto one). Just write your DB logic in ConnectionIO, and transact the whole thing afterwards.
- Integer is not used in Scala code other than for interop with Java, and Doobie doesn't have Get/Put instances for it.
- In your routes you take a ConnectionIO[Option[User]] and call .toString on it. This doesn't do what you want it to: it just turns the action you've built into a useless string, without actually evaluating it. To actually get an Option[User] you need to evaluate the action.
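For the rare case where a hand-written Read does make sense, here is a minimal hypothetical sketch (the Contact type and column layout are made up for illustration):

import doobie.Read

// The row shape (first_name, last_name, email) doesn't map directly to the
// target type, so read a tuple and convert it ourselves.
case class Contact(fullName: String, email: String)

implicit val contactRead: Read[Contact] =
  Read[(String, String, String)].map { case (first, last, email) =>
    Contact(s"$first $last", email)
  }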
Putting all of that together, we end up with a piece of code like this:
import java.util.Date

import cats.effect.IO
import doobie.{ConnectionIO, Fragment, Transactor}
import doobie.implicits._
import org.http4s.HttpRoutes
import org.http4s.dsl.io._
import org.http4s.syntax.kleisli._

def httpRoutes(transactor: Transactor[IO]) = HttpRoutes.of[IO] {
  case GET -> Root / "second" / IntVar(intId) =>
    UserModel.findById(intId)
      .transact(transactor)
      .flatMap { userOption =>
        Ok(s"userOption is instance of: ${userOption.getClass} object: ${userOption.toString}")
      }
}.orNotFound

final case class User(
  id: Int,
  username: String,
  email: String,
  passwordHash: String,
  isActive: Boolean,
  dob: Date
)

object UserModel {
  def findById(id: Int): ConnectionIO[Option[User]] =
    findBy(fr"id = ${id.toString}")

  private def findBy(by: Fragment): ConnectionIO[Option[User]] =
    (sql"SELECT id, username, email, password_hash, is_active, dob FROM public.user WHERE " ++ by)
      .query[User]
      .option
}
userOption here is Option[User].

Upickle: read an attribute that may be a String or Int as a String

I have a field that may come from a REST API as a String or Int, but when I read it I always want to read it as a String, i.e. if it comes as an Int I want to do a toString on it.
case class ZoneList(
  someField: Int,
  targetField: String
)

object ZoneList {
  implicit val rw: ReadWriter[ZoneList] = macroRW
}
targetField is the field in question
Looking at http://www.lihaoyi.com/upickle/#CustomPicklers, but I still don't think I have enough of a handle on it to start a custom pickler.
Edit: I ended up doing this:
implicit val anyToStringReader: Reader[Option[String]] =
  reader[ujson.Value].map[Option[String]] { j =>
    Try(j.toString()).toOption
  }
I would have preferred to single out the targetField attribute only, but my actual case class has a lot of fields, and I don't think I can do that and still use the default macro. If anyone knows how, let me know.
Solved by lihaoyi in the upickle gitter:
"if you want to single out that attribute, give it a new type that’s a wrapper around Option String and write your pickler for that type"

How to use Argonaut to decode poorly structured JSON where key name is meaningful

Hi, the Decode Person example in the documentation is great if the JSON has a key and value and you can use the key name to extract its value, but what about when the string that makes up the key is arbitrary but meaningful?
For example, one open cryptocurrency API can give historic prices of coins, and the structure of the JSON returned differs depending on the base currency of the coin I'm asking for and the various quote currencies I want it priced in. Let's say I want the price at a particular date of 'DOGE' in 'AUD' and 'XRP'; the returned JSON looks like
{"DOGE":{"AUD":0.008835,"XRP":0.004988}}
I can't navigate to a base key and get its value and then a prices key and get them, as the JSON is not structured that way. I need to look for 'DOGE' as a key, then know that in the object returned there will be an 'AUD' key and an 'XRP' key. And of course that will be different for every result, depending on my query.
Of course I know these keys, as I create the search based on them, but how can I use Argonaut to parse this JSON? Can I somehow create a Decode that closes over my key names?
Any help or guidance is appreciated, thanks.
Since you don't know what the property names are going to be ahead of time, you can't create a codec and decode the raw JSON directly to a Scala class.
You want to parse the raw JSON as a generic argonaut.Json object, then you can pattern match or use fold to examine the contents. For example:
val rawJson: String = ...
val parsed: Either[String, argonaut.Json] = argonaut.Parse.parse(rawJson)
You can see the methods available on argonaut's Json object by inspecting the source code.
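A minimal sketch of the lookup itself, assuming the DOGE/AUD payload shape from the question (Json.field returns Option[Json], Json.number returns Option[JsonNumber]):

val dogeAud: Option[Double] =
  parsed.toOption.flatMap { json =>
    for {
      doge <- json.field("DOGE") // keys are known at query time
      aud  <- doge.field("AUD")
      num  <- aud.number
    } yield num.toDouble // exact JsonNumber conversion API varies by Argonaut version
  }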
As per Fried Brice's answer, I did go down the parse route and then mapped the resulting Either to produce my data type; see the code snippet below. Suggestions and improvements welcome.
def parseHistoricPriceJSON(rawJson: String, fromcurrency: Currency, toCurrencies: List[Currency]): Either[String, PricedAsset] = {
  import argonaut._, Argonaut._
  import monocle.macros.syntax.lens._

  val parsed: Either[String, Json] = Parse.parse(rawJson)
  val myTocurrs = Currency("XRP") :: toCurrencies

  parsed.right.map { outer =>
    val cursor = outer.cursor
    val ps = for {
      toC    <- myTocurrs
      prices <- cursor.downField(fromcurrency.sym)
      price  <- prices.downField(toC.sym)
      thep   <- price.focus.number
    } yield (toC, thep.toDouble.get)
    PricedAsset(fromcurrency, ps)
  }
}

case class Currency(sym: String) extends AnyVal {
  def show = sym
}

case class PricedAsset(base: Currency, quotePrices: List[(Currency, Double)])

How to write efficient type bounded code if the types are unrelated in Scala

I want to improve the following Cassandra-related Scala code. I have two unrelated user-defined types which are actually in Java source files (leaving out the details).
public class Blob { .. }
public class Meta { .. }
So here is how I use them currently from Scala:
private val blobMapper: Mapper[Blob] = mappingManager.mapper(classOf[Blob])
private val metaMapper: Mapper[Meta] = mappingManager.mapper(classOf[Meta])

def save(entity: Object) = {
  entity match {
    case blob: Blob => blobMapper.saveAsync(blob)
    case meta: Meta => metaMapper.saveAsync(meta)
    case _ => // exception
  }
}
While this works, how can you avoid the following problems:
- repetition when adding new user-defined type classes like Blob or Meta
- pattern-matching repetition when adding new methods like save
- having Object as the parameter type
You can definitely use Mapper as a typeclass, doing:
def save[A](entity: A)(implicit mapper: Mapper[A]) = mapper.saveAsync(entity)
Now you have a generic method able to perform a save operation on every type A for which a Mapper[A] is in scope.
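For instance, here is a sketch of wiring up the mappers from the question as implicit instances (names reused from the question's code):

object Mappers {
  implicit val blobMapper: Mapper[Blob] = mappingManager.mapper(classOf[Blob])
  implicit val metaMapper: Mapper[Meta] = mappingManager.mapper(classOf[Meta])
}

import Mappers._

save(blob) // compiler selects blobMapper
save(meta) // compiler selects metaMapper
// save("nope") would fail at compile time: no Mapper[String] in scope,
// replacing the runtime exception from the original match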
Also, the mappingManager.mapper implementation could probably be improved to avoid classOf, but it's hard to tell from the question in its current state.
A few questions:
- Is mappingManager.mapper(cls) expensive?
- How much do you care about handling subclasses of Blob or Meta?
Can something like this work for you?
def save[T: Manifest](entity: T) = {
  mappingManager.mapper(manifest[T].runtimeClass).saveAsync(entity)
}
If you do care about making sure that subclasses of Meta grab the proper mapper, then you may find isAssignableFrom helpful in your .mapper (and store found subclasses in a HashMap so you only have to look once).
EDIT: Then maybe you want something like this (ignoring threading concerns):
private[this] val mapperMap = mutable.HashMap[Class[_], Mapper[_]]()

def save[T: Manifest](entity: T) = {
  val cls = manifest[T].runtimeClass
  mapperMap.getOrElseUpdate(cls, mappingManager.mapper(cls))
    .asInstanceOf[Mapper[T]]
    .saveAsync(entity)
}

Add element to JsValue?

I'm trying to add in a new element to a JsValue, but I'm not sure how to go about it.
val rJson = Json.parse(response)
val imgId = //stuff to get the id :Long
rJson.apply("imgId", imgId)
Json.stringify(rJson)
Should I be converting to a JSONObject or is there some method that can be applied directly to the JsValue to insert a new element to the JSON?
Edit:
response is coming from another server, but I do have control over it. So, if I need to add an empty "imgId" element to the JSON Object, that's fine.
You can do this as a JsObject, which extends JsValue and has a + method:
val rJson: JsValue = Json.parse(response)
val imgId = ...
val returnJson: JsObject = rJson.as[JsObject] + ("imgId" -> Json.toJson(imgId))
Json.stringify(returnJson)
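An equivalent spelling, if you prefer building the new fields as their own object (Json.obj and the shallow-merging ++ are standard Play JSON API):

val returnJson: JsObject = rJson.as[JsObject] ++ Json.obj("imgId" -> imgId)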
I use the following helper in a project I'm working on:
/** Insert a new value at the given path */
def insert(path: JsPath, value: JsValue) =
  __.json.update(path.json.put(value))
This can be used as a JSON transformer like so:
val rJson = Json.parse(response)
val imgId = // stuff to get the id: Long
// transform returns a JsResult, so unwrap it (using .get here for brevity)
Json.stringify(rJson.transform(insert(__ \ "imgId", Json.toJson(imgId))).get)
You could definitely just use the body of that insert method, but I personally find the transformer API to be really counterintuitive.
The nice thing about this is that any number of transforms can be composed using andThen. We typically use this to convert API responses to the format of our models so that we can use Reads deserializers to instantiate model instances. We use this insert helper to mock parts of the API response that don't yet exist, so that we can build models we need ahead of the API.
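For example, a sketch of composing two inserts (createdAt is a made-up second field, purely to show the composition):

// insert returns a Reads[JsObject] transformer, so two of them chain with andThen
val addFields = insert(__ \ "imgId", Json.toJson(imgId)) andThen
  insert(__ \ "createdAt", Json.toJson(System.currentTimeMillis))

rJson.transform(addFields).map(Json.stringify)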
Even though the API is convoluted, I highly, highly recommend investing some time in reading the Play framework docs on JSON handling, all 5 pages. They're not the world's greatest docs, but they actually are pretty thorough.