I am trying to parse this JSON object. I ran a unit test and got this error:
Error: Error when parsing result of 'listdescriptors': {"obj":[{"msg":["error.expected.jsarray"],"args":[]}]}! [info] at org.bitcoins.rpc.client.common.Client.parseResult(Client.scala:455)
I think the problem may be that I'm parsing the JSON into a case class incorrectly.
Result:
{                                  (json object)
  "wallet_name" : "str",           (string) Name of wallet this operation was performed on
  "descriptors" : [                (json array) Array of descriptor objects
    {                              (json object)
      "desc" : "str",              (string) Descriptor string representation
      "timestamp" : n,             (numeric) The creation time of the descriptor
      "active" : true|false,       (boolean) Activeness flag
      "internal" : true|false,     (boolean, optional) Whether this is an internal or external descriptor; defined only for active descriptors
      "range" : [                  (json array, optional) Defined only for ranged descriptors
        n,                         (numeric) Range start inclusive
        n                          (numeric) Range end inclusive
      ],
      "next" : n                   (numeric, optional) The next index to generate addresses from; defined only for ranged descriptors
    },
    ...
  ]
}
import java.time.ZonedDateTime
import play.api.libs.json.{Json, Reads}

object JsonSerializers {
  implicit val descriptorsClassReads: Reads[descriptorsClass] =
    Json.reads[descriptorsClass]
  implicit val listDescriptorsReads: Reads[listDescriptorsResult] =
    Json.reads[listDescriptorsResult]
}

sealed abstract class WalletResult

case class listDescriptorsResult(
    wallet_name: String,
    descriptors: Vector[descriptorsClass]
) extends WalletResult

case class descriptorsClass(
    desc: String,
    timestamp: ZonedDateTime,
    active: Boolean,
    internal: Boolean,
    range: Array[(Int, Int)],
    next: Int
) extends WalletResult
Here is a working example without the ZonedDateTime. It is probably better to represent the timestamp as a Long in the case class and convert it to a ZonedDateTime later, only when you actually need it as a ZonedDateTime.
import play.api.libs.json.{Json, Reads, Writes}

sealed abstract class WalletResult

case class ListDescriptorsResult(
    wallet_name: String,
    descriptors: Seq[DescriptorsClass]
) extends WalletResult

case class DescriptorsClass(
    desc: String,
    timestamp: Long,
    active: Boolean,
    internal: Boolean,
    range: (Int, Int),
    next: Int
) extends WalletResult

implicit val descriptorsClassReads: Reads[DescriptorsClass] =
  Json.reads[DescriptorsClass]
implicit val listDescriptorsResultReads: Reads[ListDescriptorsResult] =
  Json.reads[ListDescriptorsResult]
val jsonString =
"""
|{
| "wallet_name" : "str",
| "descriptors" : [
| {
| "desc" : "desc1",
| "timestamp" : 1657920976,
| "active" : true,
| "internal" : true,
| "range" : [
| 10,
| 20
| ],
| "next" : 10
| },
| {
| "desc" : "desc2",
| "timestamp" : 1657920976,
| "active" : true,
| "internal" : false,
| "range" : [
| 30,
| 40
| ],
| "next" : 20
| }
| ]
|}
|
""".stripMargin.replaceAll("\n","")
val jsValue = Json.parse(jsonString)
println(Json.fromJson[ListDescriptorsResult](jsValue).get)
Here is the code in a scastie editor
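Note that the listdescriptors help above marks internal, range and next as optional. If you need to handle descriptors that omit them, a variant with Option fields would look like the sketch below (my own addition, not part of the scastie example); Json.reads treats Option fields as optional keys.
// Sketch only: field names mirror the RPC output above; "range" is kept as a plain
// Seq[Int] ([start, end]) here to avoid needing a Reads for a tuple.
case class DescriptorsClassOpt(
    desc: String,
    timestamp: Long,
    active: Boolean,
    internal: Option[Boolean],
    range: Option[Seq[Int]],
    next: Option[Int]
)

implicit val descriptorsClassOptReads: Reads[DescriptorsClassOpt] =
  Json.reads[DescriptorsClassOpt]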
I'm learning Circe and need help navigating JSON hierarchies.
Given JSON defined as follows:
import io.circe._
import io.circe.parser._
val jStr = """
{
"a1": ["c1", "c2"],
"a2": [{"d1": "abc", "d2": "efg"}, {"d1": "hij", "d2": "klm"}]
}
"""
val j = parse(jStr).getOrElse(Json.Null)
val ja = j.hcursor.downField("a2").as[Json].getOrElse("").toString
ja
ja is now: [ { "d1" : "abc", "d2" : "efg" }, { "d1" : "hij", "d2" : "klm" } ]
I can now do the following to this list:
case class Song(id: String, title: String)
implicit val songDecoder: Decoder[Song] = (c: HCursor ) => for {
id <- c.downField("d1").as[String]
title <- c.downField("d2").as[String]
} yield Song(id,title)
io.circe.parser.decode[List[Song]](ja).getOrElse("")
Which returns what I want: List(Song(abc,efg), Song(hij,klm))
My questions are as follows:
How do I add item a1.c1 from the original json (j) to each item retrieved from the array? I want to add it to Song modified as follows: case class Song(id: String, title: String, artist: String)
It seems wrong to turn the JSON object back into a String for the iterative step of retrieving id and title. Is there a way to do this without converting the JSON to a String?
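One possible approach (a sketch, not a definitive answer): decode a2 straight from the cursor instead of rendering it back to a String, and parameterise the Song decoder with the artist pulled from the first element of a1.
import io.circe._
import io.circe.parser._

case class Song(id: String, title: String, artist: String)

// Decoder for one element of "a2", carrying the artist along.
def songDecoder(artist: String): Decoder[Song] = (c: HCursor) =>
  for {
    id    <- c.downField("d1").as[String]
    title <- c.downField("d2").as[String]
  } yield Song(id, title, artist)

val songs: Either[io.circe.Error, List[Song]] = for {
  json  <- parse(jStr)
  cursor = json.hcursor
  artist <- cursor.downField("a1").downArray.as[String]              // "c1"
  result <- cursor.downField("a2").as(Decoder.decodeList(songDecoder(artist)))
} yield result
// songs == Right(List(Song(abc,efg,c1), Song(hij,klm,c1)))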
I have a use case where I need to read a JSON file or JSON string using Spark as a Dataset[T] in Scala. The JSON file has nested elements, and some of the elements are optional. I am able to read the JSON file and map it to case classes if I ignore the optional fields, as the schema then matches the case classes.
According to this link and its answer, it works for top-level JSON when the case class has an Option field, but it does not work when the optional field is inside a nested element.
The JSON string that I am using is as follows:
val jsonString = """{
"Input" :
{
"field1" : "Test1",
"field2" : "Test2",
"field3Array" : [
{
"key1" : "Key123",
"key2" : ["keyxyz","keyAbc"]
}
]
},
"Output":
{
"field1" : "Test2",
"field2" : "Test3",
"requiredKey" : "RequiredKeyValue",
"field3Array" : [
{
"key1" : "Key123",
"key2" : ["keyxyz","keyAbc"]
}
]
}
}"""
The case classes that I have created are as follows:
case class InternalFields(key1: String, key2: Array[String])
case class Input(field1: String, field2: String, field3Array: Array[InternalFields])
case class Output(field1: String, field2: String, requiredKey: String, field3Array: Array[InternalFields])
case class ExternalObject(input: Input, output: Output)
The code through which I am reading the jsonString is as follows:
val df = spark.read.option("multiline","true").json(Seq(jsonString).toDS).as[ExternalObject]
The above code works fine. However, when I add an optional field to the Output case class (the JSON string could contain it in some use cases), it throws an error saying that the optional field specified in the case class is missing.
To get around this, I went ahead and tried providing the schema via encoders to see if that works.
After adding the optional field, my case classes changed to the following:
case class InternalFields(key1: String, key2: Array[String])
case class Input(field1: String, field2: String, field3Array: Array[InternalFields])
case class Output(field1: String, field2: String, requiredKey: String, optionalKey: Option[String], field3Array: Array[InternalFields]) // changed
case class ExternalObject(input: Input, output: Output)
There is one additional optional field in the Output case class.
Now I am trying to read the jsonString as follows:
import org.apache.spark.sql.Encoders
val schema = Encoders.product[ExternalObject].schema
val df = spark.read
.schema(schema)
.json(Seq(jsonString).toDS)
.as[ExternalObject]
When I do df.show or display(df), it gives me a table where both the Input and the Output columns are null.
If I remove the optional field from the case class, this code also works fine and shows the expected output.
Is there any way to make this optional field in the inner JSON / inner case class work and cast the result directly to the respective case classes inside a Dataset[T]?
Any ideas, guidance, or suggestions that make it work would be a great help.
The problem is that Spark uses struct types to map a class to a Row. Take this as an example:
case class MyRow(a: String, b: String, c: Option[String])
Can Spark create a DataFrame which sometimes has column c and sometimes not? Like this:
+-----+-----+-----+
| a | b | c |
+-----+-----+-----+
| a1 | b1 | c1 |
+-----+-----+-----+
| a2 | b2 | <-- note the non-existence here :)
+-----+-----+-----+
| a3 | b3 | c3 |
+-----+-----+-----+
Well, it cannot. Being nullable means the key has to exist, but its value can be null:
... other key values
"optionalKey": null,
...
This is considered valid, and is convertible to your structs. I suggest you use a dedicated JSON library (as you know, there are many of them out there) and use UDFs or something similar to extract what you need from the JSON.
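A rough sketch of that suggestion (the column name, the circe calls and extractOptionalKey are my own illustration, not part of the original answer): keep each document as a plain string column and pull the optional key out with a UDF.
import org.apache.spark.sql.functions.{col, udf}
import io.circe.parser.parse

// Returns the value of Output.optionalKey, or null when the key is absent or null.
val extractOptionalKey = udf { raw: String =>
  parse(raw).toOption
    .flatMap(_.hcursor.downField("Output").get[String]("optionalKey").toOption)
    .orNull
}

// spark.implicits._ is assumed to be in scope, as elsewhere in this question.
val raw = Seq(jsonString).toDF("raw")
val withOptional = raw.withColumn("optionalKey", extractOptionalKey(col("raw")))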
I tested the above code with the following case class structures:
case class Field3Array(
key1: String,
key2: List[String]
)
case class Input(
field1: String,
field2: String,
field3Array: List[Field3Array]
)
case class Output(
field1: String,
field2: String,
requiredKey: String,
field3Array: List[Field3Array]
)
case class Root(
Input: Input,
Output: Output
)
The JSON string cannot be passed directly to the DataFrameReader as you have tried, since the json method expects a path.
I put the JSON string in a file, passed the file path to the DataFrameReader, and the results were as follows:
import org.apache.spark.sql.{Encoder,Encoders}
import org.apache.spark.sql.Dataset
case class Field3Array(
key1: String,
key2: List[String]
)
case class Input(
field1: String,
field2: String,
field3Array: List[Field3Array]
)
case class Output(
field1: String,
field2: String,
requiredKey: String,
field3Array: List[Field3Array]
)
case class Root(
Input: Input,
Output: Output
)
val pathToJson: String = "file:////path/to/json/file/on/local/filesystem"
val jsEncoder: Encoder[Root] = Encoders.product[Root]
val df: Dataset[Root] = spark.read.option("multiline","true").json(pathToJson).as[Root]
The results for show are as follows:
df.show(false)
+--------------------------------------------+--------------------------------------------------------------+
|Input |Output |
+--------------------------------------------+--------------------------------------------------------------+
|[Test1, Test2, [[Key123, [keyxyz, keyAbc]]]]|[Test2, Test3, [[Key123, [keyxyz, keyAbc]]], RequiredKeyValue]|
+--------------------------------------------+--------------------------------------------------------------+
df.select("Input.field1").show()
+------+
|field1|
+------+
| Test1|
+------+
I have this Vector:
val imageIds = Vector(
"XXXX1",
"XXXX2",
"XXXX3"
)
And I currently create a Vector of JsObjects using the following method:
def makeTheImageDataArray: Vector[JsObject] = {
imageIds.map(SingleImageData(_, "theURL", "theStatus").asJsObject)
}
with this case class:
case class SingleImageData(ImageId: String, URL: String, Status: String) {
def imageId: String = ImageId
def getURL: String = URL
def status: String = Status
def asJsObject: JsObject = JsObject(
"ImageId" -> JsString(imageId),
"URL" -> JsString(getURL),
"Status" -> JsString(status)
)
}
Which produces:
Vector(
{"ImageId":"XXXX1","URL":"theURL","Status":"theStatus"},
{"ImageId":"XXXX2","URL":"theURL","Status":"theStatus"},
{"ImageId":"XXXX3","URL":"theURL","Status":"theStatus"}
)
Instead of producing a Vector, I want to create a HashMap with the ImageId as the key, i.e.:
Map(
XXX1 -> {"URL":"theURL","Status":"theStatus"},
XXX2 -> {"URL":"theURL","Status":"theStatus"},
XXX3 -> {"URL":"theURL","Status":"theStatus"}
)
Can anyone show me how to do this?
Remove "ImageId" -> JsString(imageId) from asJsObject, then
imageIds.map(id => id -> SingleImageData(id, "theURL", "theStatus").asJsObject).toMap
or, if SingleImageData doesn't need to know the id, remove ImageId entirely from SingleImageData, and
imageIds.map(_ -> SingleImageData("theURL", "theStatus").asJsObject).toMap
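Putting the second variant together (a sketch that assumes the same spray-json JsObject/JsString types used in the question):
import spray.json.{JsObject, JsString}

case class SingleImageData(URL: String, Status: String) {
  def asJsObject: JsObject = JsObject(
    "URL"    -> JsString(URL),
    "Status" -> JsString(Status)
  )
}

val imageIds = Vector("XXXX1", "XXXX2", "XXXX3")

val imageDataMap: Map[String, JsObject] =
  imageIds.map(_ -> SingleImageData("theURL", "theStatus").asJsObject).toMap
// Map(XXXX1 -> {"URL":"theURL","Status":"theStatus"}, XXXX2 -> ..., XXXX3 -> ...)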
I am working on a REST API using Scala, Spray and Sorm. I am trying to use the Sorm framework with an existing database. Saving, updating and deleting already work fine, but whenever I try to query something from the database, I get this error:
[ERROR] [07/22/2016 09:57:14.377] [on-spray-can-akka.actor.default-dispatcher-5] [akka://on-spray-can/user/growficient-api] argument type mismatch:
Incorrect values of parameter types:
- long:
| class java.lang.Long:
| 91
- class java.lang.String:
| class org.joda.time.LocalDate:
| 2016-04-08
- class java.lang.String:
| class org.joda.time.LocalTime:
| 12:55:27.000
- int:
| class java.lang.Integer:
| 0
- int:
| class java.lang.Double:
| 2155.0
- int:
| class java.lang.Integer:
| 22
- int:
| class java.lang.Integer:
| 35
- int:
| class java.lang.Integer:
| -65
My model is:
object Samples extends DefaultJsonProtocol with SprayJsonSupport {
implicit object samplesFormat extends RootJsonFormat[Samples] {
override def read(value: JsValue) = {
println(value)
value.asJsObject.getFields("gatewayid", "sensorid", "date", "time", "wc", "ec", "temp", "battery", "rssi") match {
case Seq(
JsString(gatewayid),
JsString(sensorid),
JsString(date),
JsString(time),
JsNumber(wc),
JsNumber(ec),
JsNumber(temp),
JsNumber(battery),
JsNumber(rssi)
) =>
new Samples(gatewayid, sensorid, date, time, wc.toInt, ec.toInt, temp.toInt, battery.toInt, rssi.toInt)
case _ => throw new DeserializationException(s"$value is not properly formatted")
}
}
override def write(s: Samples) = JsObject(
"gatewayid" -> JsString(s.gatewayid),
"sensorid" -> JsString(s.sensorid),
"date" -> JsString(s.date),
"time" -> JsString(s.time),
"wc" -> JsNumber(s.wc),
"ec" -> JsNumber(s.ec),
"temp" -> JsNumber(s.temp),
"battery" -> JsNumber(s.battery),
"rssi" -> JsNumber(s.rssi)
)
}
}
case class Samples(gatewayid: String, sensorid: String, date: String, time: String, wc: Int, ec: Int, temp: Int, battery: Int, rssi: Int)
Right now for testing purposes I am just doing a simple query to get everything:
object DB extends Instance(
entities = Set(
Entity[Samples]()
),
url = s"jdbc:mysql://$addr:$port/$database",
user = username,
password = password,
initMode = InitMode.Create
)
DB.query[Samples].fetch().toList
Unfortunately it crashes at the query with the error output given above. I understand that something is getting the wrong parameter types, but I can't figure out what.
I would really appreciate it if anyone could point me in the right direction.
The culprit is here:
- class java.lang.String:
| class org.joda.time.LocalDate:
| 2016-04-08
- class java.lang.String:
| class org.joda.time.LocalTime:
| 12:55:27.000
You should use the suggested Joda types instead of String for the corresponding fields.
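For example, the Samples case class could become something like this (a sketch; the spray-json format above would then also need to read and write the Joda values, which is not shown here):
import org.joda.time.{LocalDate, LocalTime}

case class Samples(
  gatewayid: String,
  sensorid: String,
  date: LocalDate,   // was String
  time: LocalTime,   // was String
  // note: the error output above also shows a Double (2155.0) being passed for one
  // of these Int fields, so that field may need to be a Double as well
  wc: Int,
  ec: Int,
  temp: Int,
  battery: Int,
  rssi: Int
)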
Let's say I have a config file with the following:
someConfig: [
{"t1" :
[ {"t11" : "v11",
"t12" : "v12",
"t13" : "v13",
"t14" : "v14",
"t15" : "v15"},
{"t21" : "v21",
"t22" : "v22",
"t23" : "v13",
"t24" : "v14",
"t25" : "v15"}]
},
"p1" :
[ {"p11" : "k11",
"p12" : "k12",
"p13" : "k13",
"p14" : "k14",
"p15" : "k15"},
{"p21" : "k21",
"p22" : "k22",
"p23" : "k13",
"p24" : "k14",
"p25" : "k15"}]
}
]
I would like to retrieve it as a Scala immutable collection Map[String, List[Map[String, String]]].
Using the following code I am only able to retrieve it as a List of HashMaps (more precisely a $colon$colon of HashMap), which fails when I try to iterate through it. Ideally, to complete my code, I need a way to convert the HashMaps to Scala maps.
import java.util
import java.util.Map.Entry

import com.typesafe.config.{ConfigFactory, ConfigObject, ConfigValue}

import scala.collection.JavaConverters._

def example: Map[String, List[Map[String, String]]] = {
  val tmp = ConfigFactory.load("filename.conf")
  val mylist: Iterable[ConfigObject] = tmp.getObjectList("someConfig").asScala
  (for {
    item: ConfigObject <- mylist
    myEntry: Entry[String, ConfigValue] <- item.entrySet().asScala
    name = myEntry.getKey
    value = myEntry.getValue.unwrapped()
      .asInstanceOf[util.ArrayList[Map[String, String]]]
      .asScala.toList
  } yield (name, value)).toMap
}
This code should be able to give you what you are looking for. It builds up lists and maps for your bespoke structure.
The final reduceLeft is there because your JSON starts with a list, someConfig: [ ], so I've flattened that out. If you wanted, you could probably remove the [ ]'s, as they are probably not required to represent the data you have.
import scala.collection.JavaConverters._

// These methods convert from Java lists/maps to Scala ones, so it's easier to use them
private def toMap(hashMap: AnyRef): Map[String, AnyRef] =
  hashMap.asInstanceOf[java.util.Map[String, AnyRef]].asScala.toMap

private def toList(list: AnyRef): List[AnyRef] =
  list.asInstanceOf[java.util.List[AnyRef]].asScala.toList

// config is the loaded Config, e.g. ConfigFactory.load("filename.conf")
val someConfig: Map[String, List[Map[String, String]]] =
  config.getList("someConfig").unwrapped().asScala.map { someConfigItem =>
    toMap(someConfigItem) map {
      case (key, value) =>
        key -> toList(value).map {
          x => toMap(x).map { case (k, v) => k -> v.toString }
        }
    }
  }.reduceLeft(_ ++ _)
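For example, with the sample config from the question, the flattened map can then be used like any other Scala collection:
someConfig("t1").head("t11")  // "v11"
someConfig("p1").head("p11")  // "k11"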
If you store your configs in application.conf like this:
someConfig{
list1{
value1 = "myvalue1"
value2 = "myvalue2"
.....
valueN = "myvalueN"
}
list2{
......
}
.....
listN{
......
}
}
you can do the following:
val myconfig = ConfigFactory.load().getObject("someConfig.list1").toConfig
and afterwards you can access the values like this:
myconfig.getString("value1")
myconfig.getString("value2")
etc.
which will return the strings "myvalue1" and "myvalue2".
Not the most elegant way, but plain easy.