Flattening a nested dict-type JSON file in PySpark

We are trying to flatten a nested JSON with the format below:
{
"Type" : "Notification",
"MessageId" : "5b37cfab-8a88-5609-b179-93f87126030a",
"SequenceNumber" : "10000000000000010000",
"TopicArn" : "arn:aws:sns:us-east-1:956978673417:de-eec-org-group-association-info.fifo",
"Message" : "{\n\"messageType\": \"NEW_USER_ASSOCIATION\",\n\"messageStatus\": \"SUCCESS\",\n\"messageDetails\": \"Successfully Associated\",\n\"organizationId\": 1000830784,\n\"organizationName\": \"Pfizer\",\n\"emailDomains\": [\n\"#pfizer.com\",\n\"#gmail.com\"\n],\n\"parentDunsNumber\": \"879262386\",\n\"cdcId\": \"49f5b88e036d46f38e05ce260aeaeb2a\",\n\"isUserActive\": true,\n\"associatedDateTime\": 1674467158154,\n\"associatedBy\": \"01bdd9a4a21345929b7fa06acd3c53d5\"\n}",
"Timestamp" : "2023-02-01T05:32:30.825Z",
"UnsubscribeURL" : "https://sns.us-east-1.amazonaws.com/?Action=Unsubscribe&SubscriptionArn=arn:aws:sns:us-east-1:956978673417:de-eec-org-group-association-info.fifo:e986e954-6f3d-4309-b7c4-a4df2182227e"
}
We tried different approaches, but they aren't working. The main issue is that the Message column is StringType; it needs to be converted to a MapType (or a struct) so the entire JSON file can be flattened. Can anyone help?

Please try the following solution using the explode() function:
from pyspark.sql.types import StructType, StructField, StringType, ArrayType, IntegerType, LongType, BooleanType
from pyspark.sql.functions import from_json, explode

# Define the schema for the nested JSON carried in the Message column
schema = StructType([
    StructField("messageType", StringType()),
    StructField("messageStatus", StringType()),
    StructField("messageDetails", StringType()),
    StructField("organizationId", IntegerType()),
    StructField("organizationName", StringType()),
    StructField("emailDomains", ArrayType(StringType())),
    StructField("parentDunsNumber", StringType()),
    StructField("cdcId", StringType()),
    StructField("isUserActive", BooleanType()),
    StructField("associatedDateTime", LongType()),  # epoch millis do not fit IntegerType
    StructField("associatedBy", StringType())
])

# input_json is the raw JSON string shown above
df = spark.read.json(spark.sparkContext.parallelize([input_json]))

# Parse the Message string into a struct, keeping MessageId so we can join back
df_message = df.select("MessageId", from_json("Message", schema).alias("data"))

# Flatten the struct and explode the emailDomains array into one row per domain
df_flat = df_message.select("MessageId", "data.*") \
    .withColumn("emailDomains", explode("emailDomains"))

df_result = df.join(df_flat, "MessageId", "left").drop("Message")

# Show the result
df_result.show(truncate=False)

Related

How to define a schema for json to be used in from_json to parse out values

I am trying to come up with a schema definition to parse information out of a DataFrame string column; I am using from_json for that. I need help defining the schema, which I am somehow not getting right.
Here is the JSON I have:
[
{
"sectionid":"838096e332d4419191877a3fd40ed1f4",
"sequence":0,
"questions":[
{
"xid":"urn:com.mheducation.openlearning:lms.assessment.author:qastg.global:assessment_item:2a0f52fb93954f4590ac88d90888be7b",
"questionid":"d36e1d7eeeae459c8db75c7d2dfd6ac6",
"quizquestionid":"d36e1d7eeeae459c8db75c7d2dfd6ac6",
"qtype":"3",
"sequence":0,
"subsectionsequence":-1,
"type":"80",
"question":"<p>This is a simple, 1 question assessment for automation testing</p>",
"totalpoints":"5.0",
"scoring":"1",
"scoringrules":"{\"type\":\"perfect\",\"points\":5.0,\"pointsEach\":null,\"rules\":[]}",
"inputoption":"0",
"casesensitive":"0",
"suggestedscoring":"1",
"suggestedscoringrules":"{\"type\":\"perfect\",\"points\":5.0,\"pointsEach\":null,\"rules\":[]}",
"answers":[
"1"
],
"options":[
]
}
]
}
]
I want to parse this information out, which will result in the columns:
sectionid, sequence, xid, question.sequence, question.question (question text), answers
Here is what I have. I have defined a schema for testing like this:
import org.apache.spark.sql.types.{StringType, ArrayType, StructType,
StructField}
val schema = new StructType()
.add("sectionid", StringType, true)
.add("sequence", StringType, true)
.add("questions", StringType, true)
.add("answers", StringType, true)
finalDF = finalDF
.withColumn( "parsed", from_json(col("enriched_payload.transformed"),schema) )
But I am getting NULL in the result columns; the reason, I think, is that my schema is not right.
I am struggling to come up with the right definition. How do I come up with the correct JSON schema definition?
I am using Spark 3.0.
Try the code below.
import org.apache.spark.sql.types._
val schema = ArrayType(
  new StructType()
    .add("sectionid", StringType, true)
    .add("sequence", LongType, true)
    .add("questions", ArrayType(
      new StructType()
        .add("answers", ArrayType(StringType, true), true)
        .add("casesensitive", StringType, true)
        .add("inputoption", StringType, true)
        .add("options", ArrayType(StringType, true), true)
        .add("qtype", StringType, true)
        .add("question", StringType, true)
        .add("questionid", StringType, true)
        .add("quizquestionid", StringType, true)
        .add("scoring", StringType, true)
        .add("scoringrules", StringType, true)
        .add("sequence", LongType, true)
        .add("subsectionsequence", LongType, true)
        .add("suggestedscoring", StringType, true)
        .add("suggestedscoringrules", StringType, true)
        .add("totalpoints", StringType, true)
        .add("type", StringType, true)
        .add("xid", StringType, true)
    ))
)
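With that schema in place, here is a minimal sketch of the parsing step the question asks about (finalDF, enriched_payload.transformed and schema are the names from the question and the answer above; getItem(0) assumes the payload is the single-element array shown in the sample):
import org.apache.spark.sql.functions.{col, explode, from_json}
// Parse the JSON string, take the single section, then explode the questions
// array so each question becomes its own row with the requested columns.
val parsed = finalDF
  .withColumn("parsed", from_json(col("enriched_payload.transformed"), schema))
  .withColumn("section", col("parsed").getItem(0))
  .withColumn("q", explode(col("section.questions")))
  .select(
    col("section.sectionid").as("sectionid"),
    col("section.sequence").as("sequence"),
    col("q.xid").as("xid"),
    col("q.sequence").as("question_sequence"),
    col("q.question").as("question"),
    col("q.answers").as("answers")
  )
parsed.show(false)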

Spark Structured Streaming output not displaying in IntelliJ console

I'm trying to emulate an example from Jacek Laskowski's book that reads a CSV file and aggregates the data in the console, but for some reason the output is not displaying in the IntelliJ console.
scala> spark.version
res4: String = 2.2.0
I found some references in several places (1, 2, 3, 4, 5) here on SO and tried everything, but I didn't solve the problem.
This is the code:
package org.sample
import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.streaming.{OutputMode, Trigger}
object App {
def main(args : Array[String]): Unit = {
val DIR = new java.io.File(".").getCanonicalPath + "dataset/stream_in"
val conf = new SparkConf()
.setMaster("local[*]")
.setAppName("Spark Structured Streaming Job")
val spark = SparkSession.builder()
.appName("Spark Structured Streaming Job")
.master("local[*]")
.getOrCreate()
val reader = spark.readStream
.format("csv")
.option("header", true)
.option("delimiter", ";")
.option("latestFirst", "true")
.schema(SchemaDefinition.csvSchema)
.load(DIR + "/*")
reader.createOrReplaceTempView("user_records")
val tranformation = spark.sql(
"""
SELECT carrier, marital_status, COUNT(1) as num_users
FROM user_records
GROUP BY carrier, marital_status
"""
)
val consoleStream = tranformation
.writeStream
.format("console")
.option("truncate", false)
.outputMode("complete")
.start()
consoleStream.awaitTermination()
}
}
My output is only:
18/11/30 15:40:31 INFO StreamExecution: Streaming query made progress: {
"id" : "9420f826-0daf-40c9-a427-e89ed42ee738",
"runId" : "991c9085-3425-4ea6-82af-4cef20007a66",
"name" : null,
"timestamp" : "2018-11-30T14:40:31.117Z",
"numInputRows" : 0,
"inputRowsPerSecond" : 0.0,
"processedRowsPerSecond" : 0.0,
"durationMs" : {
"getOffset" : 2,
"triggerExecution" : 2
},
"eventTime" : {
"watermark" : "1970-01-01T00:00:00.000Z"
},
"stateOperators" : [ ],
"sources" : [ {
"description" : "FileStreamSource[file:/structured-streamming-taskdataset/stream_in/*]",
"startOffset" : null,
"endOffset" : null,
"numInputRows" : 0,
"inputRowsPerSecond" : 0.0,
"processedRowsPerSecond" : 0.0
} ],
"sink" : {
"description" : "org.apache.spark.sql.execution.streaming.ConsoleSink#6a62e7ef"
}
}
I redefined the file and now it works for me.
Differences:
- Removed the unnecessary SparkConf; with SparkSession we do not need a separate conf.
- The .load(DIR + "/*") didn't work. What worked was keeping only the path dataset/stream_in (see the note after this list).
- The data in the transformation was wrong (the fields didn't match the file).
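A likely root cause of the original path problem, as a hedged guess from the FileStreamSource path in the progress log above (which shows no separator between the project directory and dataset/stream_in): getCanonicalPath does not end with a file separator, so DIR pointed at a non-existent location. A corrected version would be:
// note the added "/" before dataset
val DIR = new java.io.File(".").getCanonicalPath + "/dataset/stream_in"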
Final code:
package org.sample

import org.apache.spark.sql.SparkSession
import org.apache.log4j.{Level, Logger}

object StreamCities {

  def main(args: Array[String]): Unit = {

    // Turn off logs in the console
    Logger.getLogger("org").setLevel(Level.OFF)
    Logger.getLogger("akka").setLevel(Level.OFF)

    val spark = SparkSession.builder()
      .appName("Spark Structured Streaming get CSV and aggregate")
      .master("local[*]")
      .getOrCreate()

    // 01. Schema definition: the structure of our CSV file.
    // This can be done in a separate class, but for simplicity
    // I'll keep it here.
    import org.apache.spark.sql.types._
    def csvSchema = StructType(Array(
      StructField("id", StringType, true),
      StructField("name", StringType, true),
      StructField("city", StringType, true)
    ))

    // 02. Read the stream: create a DataFrame representing the
    // stream of CSV files according to our schema. The source is
    // the folder given to .load()
    val users = spark.readStream
      .format("csv")
      .option("sep", ",")
      .option("header", true)
      .schema(csvSchema)
      .load("dataset/stream_in")

    // 03. Aggregate the stream: to use .writeStream() in complete
    // mode we must pass an aggregated DF. We can do this using the
    // untyped API or Spark SQL.

    // 03.1: Aggregation using the untyped API
    //val aggUsers = users.groupBy("city").count()

    // 03.2: Aggregation using Spark SQL
    users.createOrReplaceTempView("user_records")
    val aggUsers = spark.sql(
      """
      SELECT city, COUNT(1) as num_users
      FROM user_records
      GROUP BY city"""
    )

    // Print the schema of our aggregation
    aggUsers.printSchema()

    // 04. Output the stream: write the stream to the console; as
    // new files land in the folder Spark is listening to, the
    // results will be updated
    val consoleStream = aggUsers.writeStream
      .outputMode("complete")
      .format("console")
      .start()
      .awaitTermination()
  }
}
I had the same issue; I solved it by adding .option("startingOffsets", "earliest") in the following read function:
def read_from_kafka(spark):
    df_sales = spark.readStream \
        .format("kafka") \
        .option("kafka.bootstrap.servers", "localhost:9092") \
        .option("subscribe", "spark_topic_sales") \
        .option("startingOffsets", "earliest") \
        .load()
    return df_sales
This will make sure the data is read from the topic from the earliest offset. I hope it helps someone so they don't spend two hours on it like I did!

How to create an empty DataFrame in Spark

I have a set of Avro-based Hive tables and I need to read data from them. Since Spark SQL uses Hive SerDes to read the data from HDFS, it is much slower than reading HDFS directly, so I have used the Databricks spark-avro jar to read the Avro files from the underlying HDFS directory.
Everything works fine except when the table is empty. I have managed to get the schema from the .avsc file of the Hive table using the following command, but I am getting the error "No Avro files found":
val schemaFile = FileSystem.get(sc.hadoopConfiguration).open(new Path("hdfs://myfile.avsc"));
val schema = new Schema.Parser().parse(schemaFile);
spark.read.format("com.databricks.spark.avro").option("avroSchema", schema.toString).load("/tmp/myoutput.avro").show()
Workaround:
I have placed an empty file in that directory and then the same thing works fine.
Are there any other ways to achieve the same, like a conf setting or something?
You don't need to use emptyRDD. Here is what worked for me with PySpark 2.4:
empty_df = spark.createDataFrame([], schema) # spark is the Spark Session
If you already have a schema from another dataframe, you can just do this:
schema = some_other_df.schema
If you don't, then manually create the schema of the empty dataframe, for example:
schema = StructType([
    StructField("col_1", StringType(), True),
    StructField("col_2", DateType(), True),
    StructField("col_3", StringType(), True),
    StructField("col_4", IntegerType(), False)
])
I hope this helps.
Similar to EmiCareOfCell44's answer, just a little bit more elegant and more "empty":
val emptySchema = StructType(Seq())
val emptyDF = spark.createDataFrame(spark.sparkContext.emptyRDD[Row], emptySchema)
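If a completely schema-less empty DataFrame is enough, the session also exposes one directly; a one-line sketch, assuming spark is the active SparkSession:
// Built-in empty DataFrame with zero rows and zero columns
val noRowsNoCols = spark.emptyDataFrame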
To create an empty DataFrame:
val my_schema = StructType(Seq(
StructField("field1", StringType, nullable = false),
StructField("field2", StringType, nullable = false)
))
val empty: DataFrame = spark.createDataFrame(spark.sparkContext.emptyRDD[Row], my_schema)
Maybe this may help.
Depending on your Spark version, you can use the reflection way: there is a private method in SchemaConverters which does the job of converting an Avro Schema to a StructType (not sure why it is private, to be honest; it would be really useful in other situations). Using Scala reflection you should be able to do it in the following way:
import scala.reflect.runtime.{universe => ru}
import org.apache.avro.Schema
import org.apache.spark.sql.Row
import org.apache.spark.sql.types.{StructType, StructField, StringType, IntegerType}
var schemaStr = "{\n \"type\": \"record\",\n \"namespace\": \"com.example\",\n \"name\": \"FullName\",\n \"fields\": [\n { \"name\": \"first\", \"type\": \"string\" },\n { \"name\": \"last\", \"type\": \"string\" }\n ]\n }"
val schema = new Schema.Parser().parse(schemaStr);
val m = ru.runtimeMirror(getClass.getClassLoader)
val module = m.staticModule("com.databricks.spark.avro.SchemaConverters")
val im = m.reflectModule(module)
val method = im.symbol.info.decl(ru.TermName("toSqlType")).asMethod
val objMirror = m.reflect(im.instance)
val structure = objMirror.reflectMethod(method)(schema).asInstanceOf[com.databricks.spark.avro.SchemaConverters.SchemaType]
val sqlSchema = structure.dataType.asInstanceOf[StructType]
val empty = spark.createDataFrame(spark.sparkContext.emptyRDD[Row], sqlSchema)
empty.printSchema
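On recent Spark versions the reflection trick may not be needed: since Spark 2.4 the spark-avro module is maintained inside the Spark project and exposes SchemaConverters.toSqlType publicly. A sketch, assuming the org.apache.spark:spark-avro artifact matching your Spark version is on the classpath and reusing schemaStr from above:
import org.apache.avro.Schema
import org.apache.spark.sql.Row
import org.apache.spark.sql.avro.SchemaConverters
import org.apache.spark.sql.types.StructType

// Convert the Avro schema to a Spark StructType without reflection
val avroSchema = new Schema.Parser().parse(schemaStr)
val sqlSchema = SchemaConverters.toSqlType(avroSchema).dataType.asInstanceOf[StructType]
val emptyAvroDF = spark.createDataFrame(spark.sparkContext.emptyRDD[Row], sqlSchema)
emptyAvroDF.printSchema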

Spark: import data frame to mongodb (scala)

Given the following data frame in spark:
Name,LicenseID_1,TypeCode_1,State_1,LicenseID_2,TypeCode_2,State_2,LicenseID_3,TypeCode_3,State_3
"John","123ABC",1,"WA","456DEF",2,"FL","789GHI",3,"CA"
"Jane","ABC123",5,"AZ","DEF456",7,"CO","GHI789",8,"GA"
How could I use Scala in Spark to write this into MongoDB as a collection of documents as follows:
{ "Name" : "John",
"Licenses" :
{
[
{"LicenseID":"123ABC","TypeCode":"1","State":"WA" },
{"LicenseID":"456DEF","TypeCode":"2","State":"FL" },
{"LicenseID":"789GHI","TypeCode":"3","State":"CA" }
]
}
},
{ "Name" : "Jane",
"Licenses" :
{
[
{"LicenseID":"ABC123","TypeCode":"5","State":"AZ" },
{"LicenseID":"DEF456","TypeCode":"7","State":"CO" },
{"LicenseID":"GHI789","TypeCode":"8","State":"GA" }
]
}
}
I tried to do this but got stuck at the following code:
val customSchema = StructType(Array(
  StructField("Name", StringType, true),
  StructField("LicenseID_1", StringType, true), StructField("TypeCode_1", StringType, true), StructField("State_1", StringType, true),
  StructField("LicenseID_2", StringType, true), StructField("TypeCode_2", StringType, true), StructField("State_2", StringType, true),
  StructField("LicenseID_3", StringType, true), StructField("TypeCode_3", StringType, true), StructField("State_3", StringType, true)
))
val license = sqlContext.read.format("com.databricks.spark.csv").option("header", "true").schema(customSchema).load("D:\\test\\test.csv")
case class License(LicenseID:String, TypeCode:String, State:String)
case class Data(Name:String, Licenses: Array[License])
val transformedData = license.map(data => Data(data(0),Array(License(data(1),data(2),data(3)),License(data(4),data(5),data(6)),License(data(7),data(8),data(9)))))
<console>:46: error: type mismatch;
found : Any
required: String
val transformedData = license.map(data => Data(data(0),Array(License(data(1),data(2),data(3)),License(data(4),data(5),data(6)),License(data(7),data(8),data(9)))))
...
Not sure what your question is; adding an example of how to save data with Spark and MongoDB:
https://docs.mongodb.com/spark-connector/current/
https://docs.mongodb.com/spark-connector/current/scala-api/
import org.apache.spark.sql.SparkSession
import com.mongodb.spark.sql._
val sc: SparkContext // An existing SparkContext.
val sparkSession = SparkSession.builder().getOrCreate()
//mongo spark helper
val df = MongoSpark.load(sparkSession) // Uses the SparkConf
Read
sparkSession.loadFromMongoDB() // Uses the SparkConf for configuration
sparkSession.loadFromMongoDB(ReadConfig(Map("uri" -> "mongodb://example.com/database.collection"))) // Uses the ReadConfig
sparkSession.read.mongo()
sparkSession.read.format("com.mongodb.spark.sql").load()
// Set custom options:
sparkSession.read.mongo(customReadConfig)
sparkSession.read.format("com.mongodb.spark.sql").options.
(customReadConfig.asOptions).load()
The connector provides the ability to persist data into MongoDB.
MongoSpark.save(centenarians.write.option("collection", "hundredClub"))
MongoSpark.load[Character](sparkSession, ReadConfig(Map("collection" ->
"data"), Some(ReadConfig(sparkSession)))).show()
Alternative ways to save data:
dataFrame.write.mongo()
dataFrame.write.format("com.mongodb.spark.sql").save()
Adding .toString fixed the issue, and I was able to save to MongoDB in the format I wanted.
val transformedData = license.map(data => Data(
  data(0).toString,
  Array(
    License(data(1).toString, data(2).toString, data(3).toString),
    License(data(4).toString, data(5).toString, data(6).toString),
    License(data(7).toString, data(8).toString, data(9).toString)
  )
))
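To finish the flow, here is a hedged sketch of writing that result to MongoDB with the connector shown above. It assumes spark.mongodb.output.uri is configured on the SparkConf/SparkSession, that the implicits needed for toDF are in scope (import sqlContext.implicits._, or spark.implicits._ on Spark 2.x), and that "licenses" is a hypothetical target collection name:
import com.mongodb.spark.MongoSpark

// Convert the mapped Data(Name, Licenses) records to a DataFrame and save it;
// each row becomes one document with an embedded Licenses array.
val docs = transformedData.toDF()
MongoSpark.save(docs.write.option("collection", "licenses").mode("overwrite"))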

Flattening JSON into Tabular Structure using Spark-Scala RDD-only functions

I have nested JSON and would like to have the output in a tabular structure. I am able to parse the JSON values individually, but I am having some problems tabularizing it. I am able to do it via a DataFrame easily, but I want to do it using "RDD ONLY" functions. Any help is much appreciated.
Input JSON:
{ "level":{"productReference":{
"prodID":"1234",
"unitOfMeasure":"EA"
},
"states":[
{
"state":"SELL",
"effectiveDateTime":"2015-10-09T00:55:23.6345Z",
"stockQuantity":{
"quantity":1400.0,
"stockKeepingLevel":"A"
}
},
{
"state":"HELD",
"effectiveDateTime":"2015-10-09T00:55:23.6345Z",
"stockQuantity":{
"quantity":800.0,
"stockKeepingLevel":"B"
}
}
] }}
Expected Output:
I tried the Spark code below, but I am getting output like this and the Row() object is not able to parse it:
079562193,EA,List(SELLABLE, HELD),List(2015-10-09T00:55:23.6345Z, 2015-10-09T00:55:23.6345Z),List(1400.0, 800.0),List(SINGLE, SINGLE)
def main(Args : Array[String]): Unit = {
val conf = new SparkConf().setAppName("JSON Read and Write using Spark RDD").setMaster("local[1]")
val sc = new SparkContext(conf)
val sqlContext = new SQLContext(sc)
val salesSchema = StructType(Array(
StructField("prodID", StringType, true),
StructField("unitOfMeasure", StringType, true),
StructField("state", StringType, true),
StructField("effectiveDateTime", StringType, true),
StructField("quantity", StringType, true),
StructField("stockKeepingLevel", StringType, true)
))
val ReadAlljsonMessageInFile_RDD = sc.textFile("product_rdd.json")
val x = ReadAlljsonMessageInFile_RDD.map(eachJsonMessages => {
parse(eachJsonMessages)
}).map(insideEachJson=>{
implicit val formats = org.json4s.DefaultFormats
val prodID = (insideEachJson\ "level" \"productReference" \"TPNB").extract[String].toString
val unitOfMeasure = (insideEachJson\ "level" \ "productReference" \"unitOfMeasure").extract[String].toString
val state= (insideEachJson \ "level" \"states").extract[List[JValue]].
map(x=>(x\"state").extract[String]).toString()
val effectiveDateTime= (insideEachJson \ "level" \"states").extract[List[JValue]].
map(x=>(x\"effectiveDateTime").extract[String]).toString
val quantity= (insideEachJson \ "level" \"states").extract[List[JValue]].
map(x=>(x\"stockQuantity").extract[JValue]).map(x=>(x\"quantity").extract[Double]).
toString
val stockKeepingLevel= (insideEachJson \ "level" \"states").extract[List[JValue]].
map(x=>(x\"stockQuantity").extract[JValue]).map(x=>(x\"stockKeepingLevel").extract[String]).
toString
//Row(prodID,unitOfMeasure,state,effectiveDateTime,quantity,stockKeepingLevel)
println(prodID,unitOfMeasure,state,effectiveDateTime,quantity,stockKeepingLevel)
}).collect()
// sqlContext.createDataFrame(x,salesSchema).show(truncate = false)
}
Hi, below is the "DATAFRAME"-only solution which I developed. I am looking for a complete "RDD ONLY" solution.
def main (Args : Array[String]):Unit = {
val conf = new SparkConf().setAppName("JSON Read and Write using Spark DataFrame few more options").setMaster("local[1]")
val sc = new SparkContext(conf)
val sqlContext = new SQLContext(sc)
val sourceJsonDF = sqlContext.read.json("product.json")
val jsonFlatDF_level = sourceJsonDF.withColumn("explode_states",explode($"level.states"))
.withColumn("explode_link",explode($"level._link"))
.select($"level.productReference.TPNB".as("TPNB"),
$"level.productReference.unitOfMeasure".as("level_unitOfMeasure"),
$"level.locationReference.location".as("level_location"),
$"level.locationReference.type".as("level_type"),
$"explode_states.state".as("level_state"),
$"explode_states.effectiveDateTime".as("level_effectiveDateTime"),
$"explode_states.stockQuantity.quantity".as("level_quantity"),
$"explode_states.stockQuantity.stockKeepingLevel".as("level_stockKeepingLevel"),
$"explode_link.rel".as("level_rel"),
$"explode_link.href".as("level_href"),
$"explode_link.method".as("level_method"))
jsonFlatDF_level.show()
}
DataFrames and Datasets are much more optimized than RDDs, and there are a lot of options to try to reach the solution we desire.
In my opinion, the DataFrame API was developed to let developers comfortably view data in tabular form so that logic can be implemented with ease, so I always suggest users work with DataFrames or Datasets.
Talking much less, I am posting the solution below using DataFrames. Once you have a DataFrame, switching to an RDD is very easy.
Your desired solution is below (you will have to find a way to read the JSON file, as it is done with a JSON string below; that's an assignment for you :) good luck).
import org.apache.spark.sql.functions._
val json = """ { "level":{"productReference":{
"prodID":"1234",
"unitOfMeasure":"EA"
},
"states":[
{
"state":"SELL",
"effectiveDateTime":"2015-10-09T00:55:23.6345Z",
"stockQuantity":{
"quantity":1400.0,
"stockKeepingLevel":"A"
}
},
{
"state":"HELD",
"effectiveDateTime":"2015-10-09T00:55:23.6345Z",
"stockQuantity":{
"quantity":800.0,
"stockKeepingLevel":"B"
}
}
] }}"""
val rddJson = sparkContext.parallelize(Seq(json))
var df = sqlContext.read.json(rddJson)
df = df.withColumn("prodID", df("level.productReference.prodID"))
.withColumn("unitOfMeasure", df("level.productReference.unitOfMeasure"))
.withColumn("states", explode(df("level.states")))
.drop("level")
df = df.withColumn("state", df("states.state"))
.withColumn("effectiveDateTime", df("states.effectiveDateTime"))
.withColumn("quantity", df("states.stockQuantity.quantity"))
.withColumn("stockKeepingLevel", df("states.stockQuantity.stockKeepingLevel"))
.drop("states")
df.show(false)
This will give output as:
+------+-------------+-----+-------------------------+--------+-----------------+
|prodID|unitOfMeasure|state|effectiveDateTime |quantity|stockKeepingLevel|
+------+-------------+-----+-------------------------+--------+-----------------+
|1234 |EA |SELL |2015-10-09T00:55:23.6345Z|1400.0 |A |
|1234 |EA |HELD |2015-10-09T00:55:23.6345Z|800.0 |B |
+------+-------------+-----+-------------------------+--------+-----------------+
Now that you have the desired output as a DataFrame, converting to an RDD is just a matter of calling .rdd:
df.rdd.foreach(println)
which will give output as below:
[1234,EA,SELL,2015-10-09T00:55:23.6345Z,1400.0,A]
[1234,EA,HELD,2015-10-09T00:55:23.6345Z,800.0,B]
I hope this is helpful
There are two versions of solutions to your question.
Version 1:
def main(Args: Array[String]): Unit = {

  val conf = new SparkConf().setAppName("JSON Read and Write using Spark RDD").setMaster("local[1]")
  val sc = new SparkContext(conf)
  val sqlContext = new SQLContext(sc)

  val salesSchema = StructType(Array(
    StructField("prodID", StringType, true),
    StructField("unitOfMeasure", StringType, true),
    StructField("state", StringType, true),
    StructField("effectiveDateTime", StringType, true),
    StructField("quantity", StringType, true),
    StructField("stockKeepingLevel", StringType, true)
  ))

  val ReadAlljsonMessageInFile_RDD = sc.textFile("product_rdd.json")

  val x = ReadAlljsonMessageInFile_RDD.map(eachJsonMessages => {
    parse(eachJsonMessages)
  }).map(insideEachJson => {
    implicit val formats = org.json4s.DefaultFormats

    val prodID = (insideEachJson \ "level" \ "productReference" \ "prodID").extract[String].toString
    val unitOfMeasure = (insideEachJson \ "level" \ "productReference" \ "unitOfMeasure").extract[String].toString

    val state = (insideEachJson \ "level" \ "states").extract[List[JValue]]
      .map(x => (x \ "state").extract[String]).toString()
    val effectiveDateTime = (insideEachJson \ "level" \ "states").extract[List[JValue]]
      .map(x => (x \ "effectiveDateTime").extract[String]).toString
    val quantity = (insideEachJson \ "level" \ "states").extract[List[JValue]]
      .map(x => (x \ "stockQuantity").extract[JValue]).map(x => (x \ "quantity").extract[Double]).toString
    val stockKeepingLevel = (insideEachJson \ "level" \ "states").extract[List[JValue]]
      .map(x => (x \ "stockQuantity").extract[JValue]).map(x => (x \ "stockKeepingLevel").extract[String]).toString

    Row(prodID, unitOfMeasure, state, effectiveDateTime, quantity, stockKeepingLevel)
  })

  sqlContext.createDataFrame(x, salesSchema).show(truncate = false)
}
This would give you the following output:
+------+-------------+----------------+----------------------------------------------------------+-------------------+-----------------+
|prodID|unitOfMeasure|state |effectiveDateTime |quantity |stockKeepingLevel|
+------+-------------+----------------+----------------------------------------------------------+-------------------+-----------------+
|1234 |EA |List(SELL, HELD)|List(2015-10-09T00:55:23.6345Z, 2015-10-09T00:55:23.6345Z)|List(1400.0, 800.0)|List(A, B) |
+------+-------------+----------------+----------------------------------------------------------+-------------------+-----------------+
Version 2:
def main(Args: Array[String]): Unit = {

  val conf = new SparkConf().setAppName("JSON Read and Write using Spark RDD").setMaster("local[1]")
  val sc = new SparkContext(conf)
  val sqlContext = new SQLContext(sc)

  val salesSchema = StructType(Array(
    StructField("prodID", StringType, true),
    StructField("unitOfMeasure", StringType, true),
    StructField("state", ArrayType(StringType, true), true),
    StructField("effectiveDateTime", ArrayType(StringType, true), true),
    StructField("quantity", ArrayType(DoubleType, true), true),
    StructField("stockKeepingLevel", ArrayType(StringType, true), true)
  ))

  val ReadAlljsonMessageInFile_RDD = sc.textFile("product_rdd.json")

  val x = ReadAlljsonMessageInFile_RDD.map(eachJsonMessages => {
    parse(eachJsonMessages)
  }).map(insideEachJson => {
    implicit val formats = org.json4s.DefaultFormats

    val prodID = (insideEachJson \ "level" \ "productReference" \ "prodID").extract[String].toString
    val unitOfMeasure = (insideEachJson \ "level" \ "productReference" \ "unitOfMeasure").extract[String].toString

    val state = (insideEachJson \ "level" \ "states").extract[List[JValue]]
      .map(x => (x \ "state").extract[String])
    val effectiveDateTime = (insideEachJson \ "level" \ "states").extract[List[JValue]]
      .map(x => (x \ "effectiveDateTime").extract[String])
    val quantity = (insideEachJson \ "level" \ "states").extract[List[JValue]]
      .map(x => (x \ "stockQuantity").extract[JValue]).map(x => (x \ "quantity").extract[Double])
    val stockKeepingLevel = (insideEachJson \ "level" \ "states").extract[List[JValue]]
      .map(x => (x \ "stockQuantity").extract[JValue]).map(x => (x \ "stockKeepingLevel").extract[String])

    Row(prodID, unitOfMeasure, state, effectiveDateTime, quantity, stockKeepingLevel)
  })

  sqlContext.createDataFrame(x, salesSchema).show(truncate = false)
}
This would give you the following output:
+------+-------------+------------+------------------------------------------------------+---------------+-----------------+
|prodID|unitOfMeasure|state |effectiveDateTime |quantity |stockKeepingLevel|
+------+-------------+------------+------------------------------------------------------+---------------+-----------------+
|1234 |EA |[SELL, HELD]|[2015-10-09T00:55:23.6345Z, 2015-10-09T00:55:23.6345Z]|[1400.0, 800.0]|[A, B] |
+------+-------------+------------+------------------------------------------------------+---------------+-----------------+
The difference between Version 1 & 2 is of schema. In Version 1 you are casting every column into String whereas in Version 2 they are being casted into Array.
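If the goal is true tabular output (one row per state) while still parsing with RDD-only functions, a flatMap variant of Version 2 can emit one Row per element of the states array. A sketch, assuming the same SparkContext/SQLContext and json4s setup as the answers above, and that product_rdd.json holds one JSON document per line (as the original code assumes):
import org.json4s._
import org.json4s.jackson.JsonMethods.parse
import org.apache.spark.sql.Row
import org.apache.spark.sql.types.{StructType, StructField, StringType, DoubleType}

val tabularSchema = StructType(Array(
  StructField("prodID", StringType, true),
  StructField("unitOfMeasure", StringType, true),
  StructField("state", StringType, true),
  StructField("effectiveDateTime", StringType, true),
  StructField("quantity", DoubleType, true),
  StructField("stockKeepingLevel", StringType, true)
))

val rows = sc.textFile("product_rdd.json")
  .map(parse(_))
  .flatMap { json =>
    implicit val formats = org.json4s.DefaultFormats
    val prodID = (json \ "level" \ "productReference" \ "prodID").extract[String]
    val unitOfMeasure = (json \ "level" \ "productReference" \ "unitOfMeasure").extract[String]
    // one output Row per entry in the states array
    (json \ "level" \ "states").extract[List[JValue]].map { s =>
      Row(
        prodID,
        unitOfMeasure,
        (s \ "state").extract[String],
        (s \ "effectiveDateTime").extract[String],
        (s \ "stockQuantity" \ "quantity").extract[Double],
        (s \ "stockQuantity" \ "stockKeepingLevel").extract[String]
      )
    }
  }

sqlContext.createDataFrame(rows, tabularSchema).show(truncate = false)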