We need to rename the fields in a GraphQL document, so we first get the AST of the GraphQL and then modify it as needed.
However, I can't find any way to convert the AST back to GraphQL text.
The parser I use is "GraphQL Dotnet Parser".
There is an AstPrinter class that can print out the Document or any AST node.
https://github.com/graphql-dotnet/graphql-dotnet/blob/master/src/GraphQL/Utilities/AstPrinter.cs
AstPrinter.Print(…)
I'm still pretty new to Scala and have arrived at some kind of typing roadblock.
Non-SQL databases such as Mongo and RethinkDB do not enforce any schema for their tables and manage data in JSON format. I've been struggling to get the Java API for RethinkDB to work in Scala, and there seems to be surprisingly little information on how to actually use the results returned from the database.
Assuming a simple document schema such as this:
{
  "name": "melvin",
  "age": 42,
  "tags": ["solution"]
}
I fail to see how to actually use this data in Scala. After running a query, for example by running something like r.table("test").run(connection), I receive an object from which I can iterate over AnyRef objects. In the Python world, this would most likely be a simple dict. How do I convey the structure of this data to Scala, so I can use it in code (e.g., query fields of the returned documents)?
From a quick scan of the docs and code, the Java Rethink client uses Jackson to handle deserialization of the JSON received from the DB into JVM objects. Since by definition every JSON object received is going to be deserializable into a JSON AST (Abstract Syntax Tree: a representation in plain Scala objects of the structure of a JSON document), you could implement a custom Jackson ObjectMapper which, instead of doing the usual Jackson magic with reflection, always deserializes into the JSON AST.
For example, Play JSON defers the actual serialization/deserialization to/from JSON to Jackson: it installs a module into a vanilla ObjectMapper which specially takes care of instances of JsValue, which is the root type of Play JSON's AST. Then something like this should work:
import com.rethinkdb.RethinkDB
import com.fasterxml.jackson.databind.ObjectMapper
import play.api.libs.json.JsonParserSettings
import play.api.libs.json.jackson.PlayJsonModule
// Use Play JSON's ObjectMapper... best to do this before connecting
RethinkDB.setResultMapper(new ObjectMapper().registerModule(new PlayJsonModule(JsonParserSettings())))
run(connection) returns a Result[AnyRef] in Scala notation. There's an alternative version, run(connection, typeRef), where the second argument specifies a result type; this is passed to the ObjectMapper to ensure that every document will either fail to deserialize or be an instance of that result type:
import com.rethinkdb.net.Result
import play.api.libs.json.JsValue
val result: Result[JsValue] = r.table("table").run(connection, classOf[JsValue])
You can then get the next element from the result as a JsValue and use the usual Play JSON machinery to convert the JsValue into your domain type:
import play.api.libs.json.{ Json, OFormat }
case class MyDocument(name: String, age: Int, tags: Seq[String])
object MyDocument {
  implicit val jsonFormat: OFormat[MyDocument] = Json.format[MyDocument]
}
// result is a Result[JsValue]; the implicit format is found via MyDocument's companion object
val myDoc: Option[MyDocument] = Json.fromJson[MyDocument](result.next()).asOpt
With some enrichments (implicit classes) you could improve the Scala API and make a lot of this machinery more transparent.
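For instance, a minimal sketch of such an enrichment (the names here are mine, not part of any library):
import com.rethinkdb.net.Result
import play.api.libs.json.{ JsValue, Json, Reads }
object ResultSyntax {
  // Hypothetical extension: decode the next document as a domain type,
  // yielding None when it doesn't match the expected shape
  implicit class RichJsonResult(private val result: Result[JsValue]) extends AnyVal {
    def nextAs[A: Reads]: Option[A] = Json.fromJson[A](result.next()).asOpt
  }
}
// Usage: import ResultSyntax._ and then result.nextAs[MyDocument]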
You could do similar things with the other Scala JSON ASTs (e.g. Circe, json4s), but you might have to implement functionality similar to what Play does with the ObjectMapper yourself.
I'm using ReactiveMongo. I want to run a set of pipelines (i.e., use the Mongo aggregation framework) on a collection, and stream the results. I want to retrieve them as BSON documents.
I've seen examples that suggest something like:
coll.aggregatorContext[BSONDocument](pipelines.head, pipelines.tail).prepared[AkkaStreamCursor].cursor.documentSource()
This gets me a compilation error because I'm missing an implicit CursorProducer.Aux[BSONDocument, AkkaStreamCursor], but I have no idea how to import or construct one of these -- can't find examples. (Please don't refer me to the ReactiveMongo tests, as I can't see an example of this in them, or don't understand what I'm looking at if there is one.)
I should add that I'm using version 0.20.3 of ReactiveMongo.
If my issue is due to my choice of retrieving results as BSON documents, and there's another type for which an existing implicit CursorProducer.Aux would be found, I'd happily switch if someone can tell me how to get this to compile.
So, IntelliJ is telling me that I'm missing an implicit for .prepared.
But, an sbt compile is telling me that my problem is that AkkaStreamCursor doesn't fulfill the type bounds of .prepared:
type arguments [reactivemongo.akkastream.AkkaStreamCursor] do not conform to method prepared's type parameter bounds [AC[_] <: reactivemongo.api.Cursor.WithOps[_]]
What ReactiveMongo type is available to use for this?
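For reference, here's a sketch of the plain (non-aggregation) streaming path, using the implicit cursorProducer from the reactivemongo.akkastream package object (the collection and empty query are placeholders); it's the aggregation equivalent of this that I can't get to compile:
import scala.concurrent.Future
import akka.stream.Materializer
import akka.stream.scaladsl.Source
import reactivemongo.akkastream.{ State, cursorProducer }
import reactivemongo.api.bson.BSONDocument
import reactivemongo.api.bson.collection.BSONCollection
// With cursorProducer in scope, .cursor[BSONDocument]() yields an
// AkkaStreamCursor, which provides documentSource()
def stream(coll: BSONCollection)(implicit m: Materializer): Source[BSONDocument, Future[State]] =
  coll.find(BSONDocument(), Option.empty[BSONDocument]).cursor[BSONDocument]().documentSource()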
I need to parse several JSON fields, which I'm doing with Play JSON. As parsing may fail, I need to throw a custom exception for each field.
To read a field, I use this:
val fieldData = parseField[String](json \ fieldName, "fieldName")
My parseField function:
def parseField[T](result: JsLookupResult, fieldName: String): T = {
  result.asOpt[T].getOrElse(throw new IllegalArgumentException(s"""Can't access $fieldName."""))
}
However, I get an error that reads:
Error:(17, 17) No Json deserializer found for type T. Try to implement
an implicit Reads or Format for this type.
result.asOpt[T].getOrElse(throw new IllegalArgumentException(s"""Can't access $fieldName."""))
Is there a way to tell asOpt[] to use the type T?
I strongly suggest that you do not throw exceptions. The Play JSON API has JsSuccess and JsError types that will help you encode parsing errors.
As per the documentation:
To convert a Scala object to and from JSON, we use Json.toJson[T: Writes] and Json.fromJson[T: Reads] respectively. Play JSON provides the Reads and Writes typeclasses to define how to read or write specific types. You can get these either by using Play's automatic JSON macros, or by manually defining them. You can also read JSON from a JsValue using validate, as and asOpt methods. Generally it's preferable to use validate since it returns a JsResult which may contain an error if the JSON is malformed.
See https://github.com/playframework/play-json#reading-and-writing-objects
There is also a good example on the Play Discourse forum on how the API manifests in practice.
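To directly address the compile error: asOpt[T] needs an implicit Reads[T] at the call site, which a context bound can request. Here's a sketch of the helper rewritten to return a JsResult instead of throwing, per the advice above (the error wording is mine):
import play.api.libs.json.{ JsError, JsLookupResult, JsResult, Reads }
// The [T: Reads] context bound makes the caller supply the deserializer
// the compiler couldn't find for a bare T
def parseField[T: Reads](result: JsLookupResult, fieldName: String): JsResult[T] =
  result.validate[T].orElse(JsError(s"Can't access $fieldName."))
// Usage: val name: JsResult[String] = parseField[String](json \ "name", "name")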
Scala's Play Framework claims that Anorm, and writing your own SQL, is better than ORMs. One of the reasons given is that most often you only want to transfer data between the database and the frontend as JSON anyway. However, most tutorials, and even the Play documentation, give examples of parsing SQL's returned values into case classes, only to convert them again into JSON. So we still have an object-relational mapping anyway, or am I missing a point?
In my database there is a table with 33 columns. Declaring a case class takes me 33 lines, and declaring a parser with the ~ operator takes another 33. Using a case statement to create an object takes another 66! Seriously, what am I doing wrong? Is there any shortcut? In Django the same thing takes only 33 lines.
If you're using Anorm within a Play application, then mapping your case class to a JSON object (assuming it has fairly normal apply and unapply functions defined for it, which most do) should be pretty much as simple as defining an implicit which uses the Scala 2.10+ macro-based JSON inception methods...so all you actually need is a definition like this:
implicit val myCaseFormats = Json.format[MyCaseClass]
where MyCaseClass is the name of your case type. You could even bake this into the parser combinator you use for de-serialising row-sets back from the database (there's a combined sketch further down)...that would dramatically clean up your code and cut down the amount of code you have to write.
See here for details on the Json macros:
https://www.playframework.com/documentation/2.1.1/ScalaJsonInception
I use this quite extensively in a pretty large code-base and it does make things quite clean.
In terms of your parsers for Anorm, remember that you don't have to produce a case-class instance as a result of a parse...you can actually return anything you like, which could just be an indexed sequence of your column values (if you're using something like Shapeless to allow for mixed-type lists etc...) or some other structure.
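For instance, a sketch of a parser that yields a plain tuple rather than a case-class instance (the column names are placeholders):
import anorm._
// Combine column parsers with ~ and flatten the result into an ordinary (String, Int) tuple
val asTuple: RowParser[(String, Int)] =
  SqlParser.str("name") ~ SqlParser.int("age") map SqlParser.flatten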
You do have macro support in Anorm as well, so the parsers for your case classes can be one-liners like this:
import anorm.{ Macro, RowParser }
val parser: RowParser[MyCaseClass] = Macro.namedParser[MyCaseClass]
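Combining the two macros, the whole query-to-JSON path can collapse to a few lines. A sketch, assuming the implicit Json.format[MyCaseClass] above is in scope (the table name is a placeholder):
import java.sql.Connection
import anorm._
import play.api.libs.json.{ JsValue, Json }
// Fetch rows with the macro-generated parser, then serialize straight to JSON;
// no hand-written column-by-column mapping in either direction
def listAsJson(implicit conn: Connection): JsValue =
  Json.toJson(SQL"SELECT * FROM my_table".as(Macro.namedParser[MyCaseClass].*))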
If you want to do something custom (such as parsing directly to JsValue), then you have the flexibility to just hand-craft a more specialised parser.
HTH
The traditional way to use spray-json seems to be to bind all your models to appropriate JsonFormats (built-in or custom) at compile time, with the formats all as implicits. Is there a way instead to look the formatters up at runtime? I'm trying to marshal a heterogeneous list of values, and the only ways I'm seeing to do it are:
Write an explicit lookup (e.g. using pattern matching) that hard-codes which formatter to use for which value type, or
Something insane using reflection to find all the implicits
I'm pretty new to Scala and spray-json both, so I worry I'm missing some simpler approach.
More context: I'm trying to write a custom serializer that writes out only a specified subset of (lazy) object fields. I'm looping over the list of specified field names at runtime and getting the values by reflection (actually it's more complicated than that, but close enough), and now for each one I need to find a JsonFormat that can serialize it.
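For what it's worth, the explicit-lookup route from option 1 might look like the sketch below; the set of supported types is whatever you choose to enumerate, and the names are mine:
import spray.json._
import DefaultJsonProtocol._
// Enumerate the supported runtime types and delegate each to its
// JsonFormat; anything unlisted fails loudly
def toJsonDynamic(value: Any): JsValue = value match {
  case s: String  => s.toJson
  case i: Int     => i.toJson
  case d: Double  => d.toJson
  case b: Boolean => b.toJson
  case xs: Seq[_] => JsArray(xs.map(toJsonDynamic).toVector)
  case other      => serializationError(s"No JsonFormat known for ${other.getClass}")
}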