SCALA: Reading JSON file with the path provided

I have a scenario where I will be getting different JSON results from multiple APIs, and I need to read a specific value from each response.
For instance, my JSON response is as below, and I need some format the user can provide by which I can read the value of lat. I don't want a hard-coded approach for this; the user should be able to specify the node to read in some other JSON or text file:
{
  "name" : "Watership Down",
  "location" : {
    "lat" : 51.235685,
    "long" : -1.309197
  },
  "residents" : [ {
    "name" : "Fiver",
    "age" : 4,
    "role" : null
  }, {
    "name" : "Bigwig",
    "age" : 6,
    "role" : "Owsla"
  } ]
}

You can get a key out of the JSON using the Scala JSON parser as below. I'm defining a function to get the lat, which you can make generic as per your need, so that you only need to change the function.
import scala.util.parsing.json.JSON

val json =
  """
    |{
    |  "name" : "Watership Down",
    |  "location" : {
    |    "lat" : 51.235685,
    |    "long" : -1.309197
    |  },
    |  "residents" : [ {
    |    "name" : "Fiver",
    |    "age" : 4,
    |    "role" : null
    |  }, {
    |    "name" : "Bigwig",
    |    "age" : 6,
    |    "role" : "Owsla"
    |  } ]
    |}
  """.stripMargin

val jsonObject = JSON.parseFull(json).get.asInstanceOf[Map[String, Any]]

val latLambda: Map[String, Any] => Option[Double] = _.get("location")
  .map(_.asInstanceOf[Map[String, Any]]("lat").toString.toDouble)

assert(latLambda(jsonObject) == Some(51.235685))
The expanded version of the function:
val latitudeLambda = new Function[Map[String, Any], Double] {
  override def apply(input: Map[String, Any]): Double = {
    input("location").asInstanceOf[Map[String, Any]]("lat").toString.toDouble
  }
}
Make the function generic so that once you know which key you want from the JSON, you only need to change the function and apply it to the parsed JSON.
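For instance, here is a minimal sketch of such a generic lookup (the names are illustrative, not part of the original answer) that walks a user-supplied key path through the nested maps returned by JSON.parseFull:
// sketch only: walk a user-supplied key path (e.g. Seq("location", "lat"))
// through the nested Map[String, Any] produced by JSON.parseFull
def extract(parsed: Map[String, Any], path: Seq[String]): Option[Any] =
  path.foldLeft(Option[Any](parsed)) {
    case (Some(m: Map[_, _]), key) => m.asInstanceOf[Map[String, Any]].get(key)
    case _                         => None
  }

// the path itself could be read from the user's text or JSON file
val lat = extract(jsonObject, Seq("location", "lat")).map(_.toString.toDouble)
// lat: Option[Double] = Some(51.235685)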
Hope it helps. But there are nicer APIs out there, like the Play JSON library. You can simply use:
import play.api.libs.json._
val jsonVal = Json.parse(json)
val lat = (jsonVal \ "location" \ "lat").get
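Building on that, here is a hedged sketch of how a user-provided node could be handled with Play JSON; readPath and the dot-separated path format are my own assumptions, not part of the original answer:
import play.api.libs.json._

// walk a dot-separated path such as "location.lat", supplied by the user
// (e.g. read from a config or text file), through the parsed JSON
def readPath(json: JsValue, path: String): Option[JsValue] =
  path.split('.').foldLeft(Option(json)) {
    case (Some(current), key) => (current \ key).toOption
    case (None, _)            => None
  }

val latOpt = readPath(Json.parse(json), "location.lat").flatMap(_.asOpt[Double])
// latOpt: Option[Double] = Some(51.235685)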

Related

Join data-frame based on value in list of WrappedArray

I have to join two spark data-frames in Scala based on a custom function. Both data-frames have the same schema.
Sample Row of data in DF1:
{
  "F1" : "A",
  "F2" : "B",
  "F3" : "C",
  "F4" : [
    {
      "name" : "N1",
      "unit" : "none",
      "count" : 50.0,
      "sf1" : "val_1",
      "sf2" : "val_2"
    },
    {
      "name" : "N2",
      "unit" : "none",
      "count" : 100.0,
      "sf1" : "val_3",
      "sf2" : "val_4"
    }
  ]
}
Sample Row of data in DF2:
{
  "F1" : "A",
  "F2" : "B",
  "F3" : "C",
  "F4" : [
    {
      "name" : "N1",
      "unit" : "none",
      "count" : 80.0,
      "sf1" : "val_5",
      "sf2" : "val_6"
    },
    {
      "name" : "N2",
      "unit" : "none",
      "count" : 90.0,
      "sf1" : "val_7",
      "sf2" : "val_8"
    },
    {
      "name" : "N3",
      "unit" : "none",
      "count" : 99.0,
      "sf1" : "val_9",
      "sf2" : "val_10"
    }
  ]
}
RESULT of Joining these sample rows:
{
  "F1" : "A",
  "F2" : "B",
  "F3" : "C",
  "F4" : [
    {
      "name" : "N1",
      "unit" : "none",
      "count" : 80.0,
      "sf1" : "val_5",
      "sf2" : "val_6"
    },
    {
      "name" : "N2",
      "unit" : "none",
      "count" : 100.0,
      "sf1" : "val_3",
      "sf2" : "val_4"
    },
    {
      "name" : "N3",
      "unit" : "none",
      "count" : 99.0,
      "sf1" : "val_9",
      "sf2" : "val_10"
    }
  ]
}
The result is:
a full outer join based on the values of "F1", "F2" and "F3", plus
a merge of "F4" keeping unique nodes (using name as the id) with the max value of "count".
I am not very familiar with Scala and have been struggling with this for more than a day now. Here is what I have gotten to so far:
val df1 = sqlContext.read.parquet("stack_a.parquet")
val df2 = sqlContext.read.parquet("stack_b.parquet")

val df4 = df1.toDF(df1.columns.map(_ + "_A"): _*)
val df5 = df2.toDF(df1.columns.map(_ + "_B"): _*)

val df6 = df4.join(df5,
  df4("F1_A") === df5("F1_B") &&
  df4("F2_A") === df5("F2_B") &&
  df4("F3_A") === df5("F3_B"), "outer")

def joinFunction(r: Row) = {
  //Need the real-deal here!
  //print(r(3)) //--> Any = WrappedArray([..])
  //also considering parsing as json to do the processing but not sure about the performance impact
  //val parsed = JSON.parseFull(r.json) //then play with parsed
  r.toSeq
}

val finalResult = df6.rdd.map(joinFunction)
finalResult.collect
I was planning to add the custom merge logic in joinFunction but I am struggling to convert the WrappedArray/Any class to something I can work with.
Any inputs on how to do the conversion or the join in a better way will be very helpful.
Thanks!
Edit (7 Mar, 2021)
The full-outer join actually has to be performed only on "F1".
Hence, using @werner's answer, I am doing:
val df1_a = df1.toDF(df1.columns.map(_ + "_A"): _*)
val df2_b = df2.toDF(df2.columns.map(_ + "_B"): _*)

val finalResult = df1_a.join(df2_b, df1_a("F1_A") === df2_b("F1_B"), "full_outer")
  .drop("F1_B")
  .withColumn("F4", joinFunction(col("F4_A"), col("F4_B")))
  .drop("F4_A", "F4_B")
  .withColumn("F2", when(col("F2_A").isNull, col("F2_B")).otherwise(col("F2_A")))
  .drop("F2_A", "F2_B")
  .withColumn("F3", when(col("F3_A").isNull, col("F3_B")).otherwise(col("F3_A")))
  .drop("F3_A", "F3_B")
But I am getting this error. What am I missing?
You can implement the merge logic with the help of a udf:
//case class to define the schema of the udf's return value
case class F4(name: String, unit: String, count: Double, sf1: String, sf2: String)

val joinFunction = udf((a: Seq[Row], b: Seq[Row]) =>
  (a ++ b).map(r => F4(r.getAs[String]("name"),
      r.getAs[String]("unit"),
      r.getAs[Double]("count"),
      r.getAs[String]("sf1"),
      r.getAs[String]("sf2")))
    //group the elements from both arrays by name
    .groupBy(_.name)
    //take the element with the max count from each group
    .map { case (_, d) => d.maxBy(_.count) }
    .toSeq)

//join the two dataframes
val finalResult = df1.withColumnRenamed("F4", "F4_A").join(
    df2.withColumnRenamed("F4", "F4_B"), Seq("F1", "F2", "F3"), "full_outer")
  //call the merge function
  .withColumn("F4", joinFunction('F4_A, 'F4_B))
  //drop the intermediate columns
  .drop("F4_A", "F4_B")

Scala play json - update all values with the same key

Let's say I have a JsValue in the form:
{
  "businessDetails" : {
    "name" : "Business",
    "phoneNumber" : "+44 0808 157 0192"
  },
  "employees" : [
    {
      "name" : "Employee 1",
      "phoneNumber" : "07700 900 982"
    },
    {
      "name" : "Employee 2",
      "phoneNumber" : "+44(0)151 999 2458"
    }
  ]
}
I was wondering if there is a way to do an update on every value belonging to a key with a certain name inside a JsValue regardless of its complexity?
Ideally I'd like to map on every phone number to ensure that a (0) is removed if there is one.
I have come across play-json-zipper updateAll but I'm getting unresolved dependency issues when adding the library to my sbt project.
Any help either adding the play-json-zipper library or implementing this in ordinary play-json would be much appreciated.
Thanks!
From what I can see on the play-json-zipper project page, you might have forgotten to add the resolver: resolvers += "mandubian maven bintray" at "http://dl.bintray.com/mandubian/maven"
If that doesn't help and you would like to proceed with a custom implementation: play-json does not provide a folding or traversing API over JsValue out of the box, but it can be implemented recursively as follows:
import play.api.libs.json._

/**
 * JSON path from the root. Int - index in array, String - field
 */
type JsPath = Seq[Either[Int, String]]
type JsEntry = (JsPath, JsValue)
type JsTraverse = PartialFunction[JsEntry, JsValue]

implicit class JsPathOps(underlying: JsPath) {
  def isEndsWith(field: String): Boolean = underlying.lastOption.contains(Right(field))
  def isEndsWith(index: Int): Boolean = underlying.lastOption.contains(Left(index))
  def /(field: String): JsPath = underlying :+ Right(field)
  def /(index: Int): JsPath = underlying :+ Left(index)
}

implicit class JsValueOps(underlying: JsValue) {

  /**
   * Traverse underlying json based on given partially defined function `f` only on scalar values, like:
   * null, string or number.
   *
   * @param f function
   * @return updated json
   */
  def traverse(f: JsTraverse): JsValue = {
    def traverseRec(prefix: JsPath, value: JsValue): JsValue = {
      val lifted: JsValue => JsValue = value => f.lift(prefix -> value).getOrElse(value)

      value match {
        case JsNull => lifted(JsNull)
        case boolean: JsBoolean => lifted(boolean)
        case number: JsNumber => lifted(number)
        case string: JsString => lifted(string)
        case array: JsArray =>
          val updatedArray = array.value.zipWithIndex.map {
            case (arrayValue, index) => traverseRec(prefix / index, arrayValue)
          }
          JsArray(updatedArray)
        case `object`: JsObject =>
          val updatedFields = `object`.fieldSet.toSeq.map {
            case (field, fieldValue) => field -> traverseRec(prefix / field, fieldValue)
          }
          JsObject(updatedFields)
      }
    }

    traverseRec(Nil, underlying)
  }
}
which can be used in the following way:
val json =
  s"""
     |{
     |  "businessDetails" : {
     |    "name" : "Business",
     |    "phoneNumber" : "+44(0) 0808 157 0192"
     |  },
     |  "employees" : [
     |    {
     |      "name" : "Employee 1",
     |      "phoneNumber" : "07700 900 982"
     |    },
     |    {
     |      "name" : "Employee 2",
     |      "phoneNumber" : "+44(0)151 999 2458"
     |    }
     |  ]
     |}
     |""".stripMargin
val updated = Json.parse(json).traverse {
  case (path, JsString(phone)) if path.isEndsWith("phoneNumber") => JsString(phone.replace("(0)", ""))
}
println(Json.prettyPrint(updated))
which will produce the desired result:
{
  "businessDetails" : {
    "name" : "Business",
    "phoneNumber" : "+44 0808 157 0192"
  },
  "employees" : [ {
    "name" : "Employee 1",
    "phoneNumber" : "07700 900 982"
  }, {
    "name" : "Employee 2",
    "phoneNumber" : "+44151 999 2458"
  } ]
}
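If you later need to normalise more than one field, the same traverse call simply takes additional cases in the partial function; for example (this extension is illustrative, not part of the original answer):
val cleaned = Json.parse(json).traverse {
  case (path, JsString(phone)) if path.isEndsWith("phoneNumber") => JsString(phone.replace("(0)", ""))
  case (path, JsString(name)) if path.isEndsWith("name")         => JsString(name.trim)
}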
Hope this helps!

Convert list to a dataframe column in pyspark

I have a dataframe in which one of the string-type columns contains a list of items that I want to explode and make part of the parent dataframe. How can I do it?
Here is the code to create a sample dataframe:
from pyspark.sql import Row
from collections import OrderedDict

def convert_to_row(d: dict) -> Row:
    return Row(**OrderedDict(sorted(d.items())))

df = sc.parallelize([
    {"arg1": "first", "arg2": "John", "arg3": '[{"name" : "click", "datetime" : "1570103345039", "event" : "entry" }, {"name" : "drag", "datetime" : "1580133345039", "event" : "exit" }]'},
    {"arg1": "second", "arg2": "Joe", "arg3": '[{"name" : "click", "datetime" : "1670105345039", "event" : "entry" }, {"name" : "drop", "datetime" : "1750134345039", "event" : "exit" }]'},
    {"arg1": "third", "arg2": "Jane", "arg3": '[{"name" : "click", "datetime" : "1580105245039", "event" : "entry" }, {"name" : "drop", "datetime" : "1650134345039", "event" : "exit" }]'}
]).map(convert_to_row).toDF()
Running this code will create a dataframe as shown below:
+------+----+--------------------+
| arg1|arg2| arg3|
+------+----+--------------------+
| first|John|[{"name" : "click...|
|second| Joe|[{"name" : "click...|
| third|Jane|[{"name" : "click...|
+------+----+--------------------+
The arg3 column contains a list which I want to explode into detailed columns. I want the dataframe as follows:
arg1 | arg2 | arg3 | name | datetime | event
How can I achieve that?
You need to specify an array type for the schema in the from_json function:
from pyspark.sql.functions import explode, from_json

schema = 'array<struct<name:string,datetime:string,event:string>>'

df.withColumn('data', explode(from_json('arg3', schema))) \
  .select(*df.columns, 'data.*') \
  .show()
+------+----+--------------------+-----+-------------+-----+
| arg1|arg2| arg3| name| datetime|event|
+------+----+--------------------+-----+-------------+-----+
| first|John|[{"name" : "click...|click|1570103345039|entry|
| first|John|[{"name" : "click...| drag|1580133345039| exit|
|second| Joe|[{"name" : "click...|click|1670105345039|entry|
|second| Joe|[{"name" : "click...| drop|1750134345039| exit|
| third|Jane|[{"name" : "click...|click|1580105245039|entry|
| third|Jane|[{"name" : "click...| drop|1650134345039| exit|
+------+----+--------------------+-----+-------------+-----+
Note: if your Spark version does not support simpleString format for schema, try the following:
from pyspark.sql.types import ArrayType, StringType, StructType, StructField

schema = ArrayType(
    StructType([
        StructField('name', StringType()),
        StructField('datetime', StringType()),
        StructField('event', StringType())
    ])
)
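For anyone doing the same thing from Scala Spark, here is a hedged sketch of the equivalent (it assumes Spark 2.3+ for DataType.fromDDL; the pattern mirrors the PySpark answer above):
import org.apache.spark.sql.functions.{col, explode, from_json}
import org.apache.spark.sql.types.DataType

// parse arg3 with an array<struct<...>> schema, explode it,
// and flatten the struct fields alongside the original columns
val schema = DataType.fromDDL("array<struct<name:string,datetime:string,event:string>>")
val exploded = df
  .withColumn("data", explode(from_json(col("arg3"), schema)))
  .select("arg1", "arg2", "arg3", "data.*")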

Assign SQL schema to Spark DataFrame

I'm converting my team's legacy Redshift SQL code to Spark SQL code. All the Spark examples I've seen define the schema in a non-SQL way using StructType and StructField and I'd prefer to define the schema in SQL, since most of my users know SQL but not Spark.
This is the ugly workaround I'm doing now. Is there a more elegant way that doesn't require defining an empty table just so that I can pull the SQL schema?
create_table_sql = '''
CREATE TABLE public.example (
    id LONG,
    example VARCHAR(80)
)'''
spark.sql(create_table_sql)
schema = spark.sql("DESCRIBE public.example").collect()

s3_data = spark.read \
    .option("delimiter", "|") \
    .csv(
        path="s3a://" + s3_bucket_path,
        schema=schema
    ) \
    .saveAsTable('public.example')
Yes, there is a way to create a schema from a string, although I am not sure if it really looks like SQL! So you can use:
from pyspark.sql.types import _parse_datatype_string
_parse_datatype_string("id: long, example: string")
This will create the following schema:
StructType(List(StructField(id,LongType,true),StructField(example,StringType,true)))
Or you may have a more complex schema as well:
schema = _parse_datatype_string("customers array<struct<id: long, name: string, address: string>>")
StructType(
    List(
        StructField(customers, ArrayType(
            StructType(
                List(
                    StructField(id,LongType,true),
                    StructField(name,StringType,true),
                    StructField(address,StringType,true)
                )
            ), true), true)
    )
)
You can check for more examples here
Adding to what has already been said: making a schema (e.g. StructType-based or JSON) is more straightforward in Scala Spark than in PySpark:
> import org.apache.spark.sql.types.StructType
> val s = StructType.fromDDL("customers array<struct<id: long, name: string, address: string>>")
> s
res3: org.apache.spark.sql.types.StructType = StructType(StructField(customers,ArrayType(StructType(StructField(id,LongType,true),StructField(name,StringType,true),StructField(address,StringType,true)),true),true))
> s.prettyJson
res9: String =
{
  "type" : "struct",
  "fields" : [ {
    "name" : "customers",
    "type" : {
      "type" : "array",
      "elementType" : {
        "type" : "struct",
        "fields" : [ {
          "name" : "id",
          "type" : "long",
          "nullable" : true,
          "metadata" : { }
        }, {
          "name" : "name",
          "type" : "string",
          "nullable" : true,
          "metadata" : { }
        }, {
          "name" : "address",
          "type" : "string",
          "nullable" : true,
          "metadata" : { }
        } ]
      },
      "containsNull" : true
    },
    "nullable" : true,
    "metadata" : { }
  } ]
}
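A related shortcut, sketched here under the assumption of Spark 2.3+ and with an illustrative bucket path: the DDL string can be handed straight to the reader, so you never build the StructType yourself:
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.getOrCreate()
val s3BucketPath = "your-bucket/path" // placeholder, not from the original post

// DataFrameReader.schema accepts a DDL-formatted string since Spark 2.3
val s3Data = spark.read
  .option("delimiter", "|")
  .schema("id LONG, example STRING")
  .csv("s3a://" + s3BucketPath)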

Not a data file error while reading Avro file

I have a file with data in Avro format. I would like to read this data into a GenericRecord data structure, or any other data structure, so that I would be able to send it from Kafka to Spark.
I tried to use DataFileReader; however, the result was this error:
Exception in thread "main" java.io.IOException: Not a data file.
at org.apache.avro.file.DataFileStream.initialize(DataFileStream.java:105)
Here is the code that produced it:
val schema = Source.fromFile(schemaPath).mkString
val parser = new Schema.Parser
val avroSchema = parser.parse(schema)
val avroDataFile = new File(dataPath)
val avroReader = new GenericDatumReader[GenericRecord](avroSchema)
val dataFileReader = new DataFileReader[GenericRecord](avroDataFile, avroReader)
//THIS LINE PRODUCED ERROR
How can I fix this error?
This is what my Avro data schema looks like:
{
  "type" : "record",
  "namespace" : "input_data",
  "name" : "testUser",
  "fields" : [
    {"name" : "name", "type" : "string", "default": "NONE"},
    {"name" : "age", "type" : "int", "default": -1},
    {"name" : "phone", "type" : "string", "default" : "NONE"},
    {"name" : "city", "type" : "string", "default" : "NONE"},
    {"name" : "country", "type" : "string", "default" : "NONE"}
  ]
}
And this is the data I tried to read (it was generated by this tool):
{
  "name" : "O= ~usP3\u0001\bY\u0011k\u0001",
  "age" : 585392215,
  "phone" : "\u0012\u001F#\u001FH]e\u0015UW\u0000\fo",
  "city" : "aWi\u001B'\u000Bh\u00163\u001A_I\u0001\u0001L",
  "country" : "]H\u001Dl(n!Sr}oVCH"
}
{
  "name" : "\u0011Y~\fV\u001Dv%4\u0006;\u0012",
  "age" : -2045540864,
  "phone" : "UyOdgny-hA",
  "city" : "\u0015f?\u0000\u0015oN{\u0019\u0010\u001D%",
  "country" : "eY>c\u0010j\u0002[\u001CdDQ"
}
...
Well, that data is not Avro, it is JSON.
If it were binary Avro data, you would not be able to read the file without first using avro-tools.jar tojson action.
If you look at the usage doc, JSON is the default
-j, --json: Encode outputted data in JSON format (default)
To actually get Avro, use arg -s schema.avsc -b -o out.avro
There are also other ways to generate test data in Kafka
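If you would rather produce a small binary Avro file directly from code instead of the generator tool, a minimal sketch with the Avro Java API (file names here are illustrative) looks like this; DataFileReader should then open the result without the "Not a data file" error:
import java.io.File
import org.apache.avro.Schema
import org.apache.avro.file.DataFileWriter
import org.apache.avro.generic.{GenericData, GenericDatumWriter, GenericRecord}

// parse the schema shown above (path is a placeholder)
val avroSchema = new Schema.Parser().parse(new File("testUser.avsc"))

// build one record matching the schema
val record: GenericRecord = new GenericData.Record(avroSchema)
record.put("name", "Alice")
record.put("age", 30)
record.put("phone", "NONE")
record.put("city", "NONE")
record.put("country", "NONE")

// write it as *binary* Avro: create() embeds the schema and the file header
val writer = new DataFileWriter[GenericRecord](new GenericDatumWriter[GenericRecord](avroSchema))
writer.create(avroSchema, new File("testUser.avro"))
writer.append(record)
writer.close()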