I have a file with a bunch of columns, and one column called jsonstring is of string type and contains JSON strings. Let's say the format is the following:
{
  "key1": "value1",
  "key2": {
    "level2key1": "level2value1",
    "level2key2": "level2value2"
  }
}
I want to parse this column so that something like jsonstring.key1, jsonstring.key2.level2key1 returns value1, level2value1.
How can I do that in Scala or Spark SQL?
With Spark 2.2 you can use the function from_json, which does the JSON parsing for you.
from_json(e: Column, schema: StructType, options: Map[String, String]): Column parses a column containing a JSON string into a StructType with the specified schema.
Combined with the support for flattening nested columns using * (star), that seems like the best solution.
// the input dataset (just a single JSON blob)
val jsonstrings = Seq("""{
  "key1": "value1",
  "key2": {
    "level2key1": "level2value1",
    "level2key2": "level2value2"
  }
}""").toDF("jsonstring")
// define the schema of JSON messages
import org.apache.spark.sql.types._
val key2schema = new StructType()
  .add($"level2key1".string)
  .add($"level2key2".string)
val schema = new StructType()
  .add($"key1".string)
  .add("key2", key2schema)
scala> schema.printTreeString
root
|-- key1: string (nullable = true)
|-- key2: struct (nullable = true)
| |-- level2key1: string (nullable = true)
| |-- level2key2: string (nullable = true)
import org.apache.spark.sql.functions.from_json
val messages = jsonstrings
  .select(from_json($"jsonstring", schema) as "json")
  .select("json.*") // <-- flattening nested fields
scala> messages.show(truncate = false)
+------+---------------------------+
|key1 |key2 |
+------+---------------------------+
|value1|[level2value1,level2value2]|
+------+---------------------------+
scala> messages.select("key1", "key2.*").show(truncate = false)
+------+------------+------------+
|key1 |level2key1 |level2key2 |
+------+------------+------------+
|value1|level2value1|level2value2|
+------+------------+------------+
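Since the question also mentions Spark SQL: if you only need a couple of paths and don't want to declare a schema, get_json_object is another option. A minimal sketch against the jsonstrings DataFrame above (every extracted value comes back as a string):
import org.apache.spark.sql.functions.get_json_object
// extract individual JSON paths without declaring a schema
jsonstrings.select(
  get_json_object($"jsonstring", "$.key1").as("key1"),
  get_json_object($"jsonstring", "$.key2.level2key1").as("level2key1")
).show(false)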
You can use withColumn + udf + json4s:
import org.json4s.{DefaultFormats, MappingException}
import org.json4s.jackson.JsonMethods._
import org.apache.spark.sql.functions._
def getJsonContent(jsonstring: String): (String, String) = {
  implicit val formats = DefaultFormats
  val parsedJson = parse(jsonstring)
  val value1 = (parsedJson \ "key1").extract[String]
  val level2value1 = (parsedJson \ "key2" \ "level2key1").extract[String]
  (value1, level2value1)
}
val getJsonContentUDF = udf((jsonstring: String) => getJsonContent(jsonstring))
df.withColumn("parsedJson", getJsonContentUDF(df("jsonstring")))
I have a Scala DataFrame with the following schema:
root
|-- time: string (nullable = true)
|-- itemId: string (nullable = true)
|-- itemFeatures: map (nullable = true)
| |-- key: string
| |-- value: string (valueContainsNull = true)
I want to explode the itemFeatures column and then send my dataframe to a UDF. But as soon as I include the explode, calling the UDF results in this error:
org.apache.spark.SparkException: Task not serializable
I can't figure out why.
Environment: Scala 2.11.12, Spark 2.4.4
Full example:
val dataList = List(
("time1", "id1", "map1"),
("time2", "id2", "map2"))
val df = dataList.toDF("time", "itemId", "itemFeatures")
val dfExploded = df.select(col("time"), col("itemId"), explode("itemFeatures"))
val doNextThingUDF: UserDefinedFunction = udf(doNextThing _)
val dfNextThing = dfExploded.withColumn("nextThing", doNextThingUDF(col("time"))
where my UDF looks like this:
val doNextThing(time: String): String = {
time+"blah"
}
If I remove the explode, everything works fine, and if I don't call the UDF after the explode, everything also works fine. I could imagine Spark being unable to send each row to the UDF if it is dynamically executing the explode and doesn't know how many rows are going to exist, but even when I add, e.g., dfExploded.cache() and dfExploded.count(), I still get the error. Is this a known issue? What am I missing?
I think the issue comes from how you define your doNextThing function. There are also a couple of typos in your "full example"; in particular, the itemFeatures column is a String in your example, whereas I understand it should be a Map.
Here is a working example:
import org.apache.spark.sql.functions.{col, explode, udf}
import spark.implicits._

val dataList = List(
  ("time1", "id1", Map("map1" -> 1)),
  ("time2", "id2", Map("map2" -> 2)))
val df = dataList.toDF("time", "itemId", "itemFeatures")
val dfExploded = df.select(col("time"), col("itemId"), explode($"itemFeatures"))
// define the UDF as a function value (val), not a method (def)
val doNextThing = (time: String) => { time + "blah" }
val doNextThingUDF = udf(doNextThing)
val dfNextThing = dfExploded.withColumn("nextThing", doNextThingUDF(col("time")))
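With that in place, dfNextThing.show(false) should produce something like:
+-----+------+----+-----+---------+
|time |itemId|key |value|nextThing|
+-----+------+----+-----+---------+
|time1|id1   |map1|1    |time1blah|
|time2|id2   |map2|2    |time2blah|
+-----+------+----+-----+---------+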
I have a list of defined columns as:
case class ExcelColumn(colName: String, colType: String, colCode: String)
val cols = List(
ExcelColumn("Products Selled", "text", "products_selled"),
ExcelColumn("Total Value", "int", "total_value"),
)
And a file (a CSV with the header columns Products Selled and Total Value) which is read as a DataFrame:
val df = spark.read
.option("header", "true")
.option("inferSchema", "true")
.csv(filePath)
// the csv file header matches the colNames
var finalDf = df
  .withColumn("row_id", monotonically_increasing_id)
  .select(cols
    .map(_.colName.trim)
    .map(col): _*)

// rename df columns to colCodes (for Kudu table columns)
cols.foreach(col => finalDf = finalDf.withColumnRenamed(col.colName.trim, col.colCode.trim))
In the last line, I change the DataFrame column name from Products Selled to products_selled. Because of this, finalDf has to be a var.
I want to know whether there is a way to declare finalDf as a val instead of a var.
I tried something like the code below, but withColumnRenamed returns a new DataFrame and I cannot capture that result outside the cols.foreach:
cols.foreach(col => finalDf.withColumnRenamed(col.name.trim, col.colCode.trim))
You can rename columns using select; renaming inside select is also faster than foldLeft with withColumnRenamed.
Try the code below.
case class ExcelColumn(colName: String, colType: String, colCode: String)
val cols = List(
ExcelColumn("Products Selled", "string", "products_selled"),
ExcelColumn("Total Value", "int", "total_value"),
)
val colExpr = cols.map(c => trim(col(c.colName)).as(c.colCode.trim))
If you are storing a valid Spark column data type in the ExcelColumn case class, you can also apply that data type, as below.
val colExpr = cols.map(c => trim(col(c.colName).cast(c.colType)).as(c.colCode.trim))
val finalDf = df.select(colExpr: _*)
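One caveat: trim always returns a string column, so in the cast variant above the declared type ends up converted back to string. If the target type (e.g. int for total_value) should survive, a minimal sketch is to cast after trimming:
// cast after trimming so that the target type (e.g. int) is preserved
val colExpr = cols.map(c => trim(col(c.colName)).cast(c.colType).as(c.colCode.trim))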
The better way is to use foldLeft with withColumnRenamed
case class ExcelColumn(colName: String, colType: String, colCode: String)
val cols = List(
ExcelColumn("Products Selled", "text", "products_selled"),
ExcelColumn("Total Value", "int", "total_value"),
)
val resultDF = cols.foldLeft(df) { (acc, name) =>
  acc.withColumnRenamed(name.colName.trim, name.colCode.trim)
}
Original Schema:
root
|-- Products Selled: integer (nullable = false)
|-- Total Value: string (nullable = true)
|-- value: integer (nullable = false)
New Schema:
root
|-- products_selled: integer (nullable = false)
|-- total_value: string (nullable = true)
|-- value: integer (nullable = false)
I need help converting a flat dataset into a nested format using Apache Spark / Scala.
Is it possible to automatically create a nested structure derived from input column namespaces
[level 1].[level 2]? In my example, the nesting level is determined by the period symbol '.' within the column headers.
I assume this is possible to achieve using a map function. I am open to alternative solutions, particularly if there is a more elegant way of achieving the same outcome.
package org.acme.au
import org.apache.spark.sql.types.StructType
import org.apache.spark.sql.types.StructField
import org.apache.spark.sql.types.StringType
import org.apache.spark.sql.Row
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.SQLContext
import scala.collection.Seq
object testNestedObject extends App {
// Configure spark
val spark = SparkSession.builder()
.appName("Spark batch demo")
.master("local[*]")
.config("spark.driver.host", "localhost")
.getOrCreate()
// Start spark
val sc = spark.sparkContext
sc.setLogLevel("ERROR")
val sqlContext = new SQLContext(sc)
// Define schema for input data
val flatSchema = new StructType()
.add(StructField("id", StringType, false))
.add(StructField("name", StringType, false))
.add(StructField("custom_fields.fav_colour", StringType, true))
.add(StructField("custom_fields.star_sign", StringType, true))
// Create a row with dummy data
val row1 = Row("123456", "John Citizen", "Blue", "Scorpio")
val row2 = Row("990087", "Jane Simth", "Green", "Taurus")
val flatData = Seq(row1, row2)
// Convert into dataframe
val dfIn = spark.createDataFrame(spark.sparkContext.parallelize(flatData), flatSchema)
// Print to console
dfIn.printSchema()
dfIn.show()
// Convert flat data into nested structure as either Parquet or JSON format
val dfOut = dfIn.rdd
.map(
row => ( /* TODO: Need help with mapping flat data to nested structure derived from input column namespaces
*
* For example:
*
* <id>12345<id>
* <name>John Citizen</name>
* <custom_fields>
* <fav_colour>Blue</fav_colour>
* <star_sign>Scorpio</star_sign>
* </custom_fields>
*
*/ ))
// Stop spark
sc.stop()
}
This solution is for the revised requirement that the JSON output should consist of an array of {key, value} objects rather than a single {valueK1: valueV1, valueK2: valueV2, ...} object. For example:
// FROM:
"custom_fields":{"fav_colour":"Blue", "star_sign":"Scorpio"}
// TO:
"custom_fields":[{"key":"fav_colour", "value":"Blue"}, {"key":"star_sign", "value":"Scorpio"}]
Sample code below:
import org.apache.spark.sql.functions._
val dfIn = Seq(
(123456, "John Citizen", "Blue", "Scorpio"),
(990087, "Jane Simth", "Green", "Taurus")
).toDF("id", "name", "custom_fields.fav_colour", "custom_fields.star_sign")
val structCols = dfIn.columns.filter(_.contains("."))
// structCols: Array[String] =
// Array(custom_fields.fav_colour, custom_fields.star_sign)
val structColsMap = structCols.map(_.split("\\.")).
groupBy(_(0)).mapValues(_.map(_(1)))
// structColsMap: scala.collection.immutable.Map[String,Array[String]] =
// Map(custom_fields -> Array(fav_colour, star_sign))
val dfExpanded = structColsMap.foldLeft(dfIn){ (accDF, kv) =>
  val cols = kv._2.map( v =>
    struct(lit(v).as("key"), col("`" + kv._1 + "." + v + "`").as("value"))
  )
  accDF.withColumn(kv._1, array(cols: _*))
}
val dfResult = structCols.foldLeft(dfExpanded)(_ drop _)
dfResult.show(false)
// +------+------------+----------------------------------------+
// |id |name |custom_fields |
// +------+------------+----------------------------------------+
// |123456|John Citizen|[[fav_colour,Blue], [star_sign,Scorpio]]|
// |990087|Jane Simth |[[fav_colour,Green], [star_sign,Taurus]]|
// +------+------------+----------------------------------------+
dfResult.printSchema
// root
// |-- id: integer (nullable = false)
// |-- name: string (nullable = true)
// |-- custom_fields: array (nullable = false)
// | |-- element: struct (containsNull = false)
// | | |-- key: string (nullable = false)
// | | |-- value: string (nullable = true)
dfResult.toJSON.show(false)
// +-------------------------------------------------------------------------------------------------------------------------------+
// |value |
// +-------------------------------------------------------------------------------------------------------------------------------+
// |{"id":123456,"name":"John Citizen","custom_fields":[{"key":"fav_colour","value":"Blue"},{"key":"star_sign","value":"Scorpio"}]}|
// |{"id":990087,"name":"Jane Simth","custom_fields":[{"key":"fav_colour","value":"Green"},{"key":"star_sign","value":"Taurus"}]} |
// +-------------------------------------------------------------------------------------------------------------------------------+
Note that we cannot make the value type Any to accommodate a mix of different types, as the Spark DataFrame API doesn't support type Any. As a consequence, the value in the array must be of a single given type (e.g. String). Like the struct-based solution below, this also handles only up to one nested level.
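If the source columns do have mixed types, one workaround (not part of the original answer, just a sketch) is to cast every value to string before wrapping it in the struct:
// same fold as above, but casting each value to string so columns of
// different types can share one array<struct<key,value>>
val dfExpandedStr = structColsMap.foldLeft(dfIn){ (accDF, kv) =>
  val cols = kv._2.map(v =>
    struct(lit(v).as("key"), col("`" + kv._1 + "." + v + "`").cast("string").as("value")))
  accDF.withColumn(kv._1, array(cols: _*))
}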
This can be solved with a dedicated case class and a UDF that converts the input data into case class instances. For example:
Define the case class
case class NestedFields(fav_colour: String, star_sign: String)
Define the UDF that takes the original column values as input and returns an instance of NestedFields:
private val asNestedFields = udf((fc: String, ss: String) => NestedFields(fc, ss))
Transform the original DataFrame and drop the flat columns:
val res = dfIn.withColumn("custom_fields", asNestedFields($"`custom_fields.fav_colour`", $"`custom_fields.star_sign`"))
.drop($"`custom_fields.fav_colour`")
.drop($"`custom_fields.star_sign`")
It produces
root
|-- id: string (nullable = false)
|-- name: string (nullable = false)
|-- custom_fields: struct (nullable = true)
| |-- fav_colour: string (nullable = true)
| |-- star_sign: string (nullable = true)
+------+------------+---------------+
| id| name| custom_fields|
+------+------------+---------------+
|123456|John Citizen|[Blue, Scorpio]|
|990087| Jane Simth|[Green, Taurus]|
+------+------------+---------------+
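If JSON output is the end goal here too, calling toJSON on res should yield something like:
res.toJSON.show(false)
// {"id":"123456","name":"John Citizen","custom_fields":{"fav_colour":"Blue","star_sign":"Scorpio"}}
// {"id":"990087","name":"Jane Simth","custom_fields":{"fav_colour":"Green","star_sign":"Taurus"}}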
Here's a generalized solution that first assembles a Map of the column names that contain a ., traverses the Map to add struct columns to the DataFrame, and finally drops the original dotted columns. A slightly more generalized dfIn is used as the sample data.
import org.apache.spark.sql.functions._
val dfIn = Seq(
(123456, "John Citizen", "Blue", "Scorpio", "a", 1),
(990087, "Jane Simth", "Green", "Taurus", "b", 2)
).
toDF("id", "name", "custom_fields.fav_colour", "custom_fields.star_sign", "s.c1", "s.c2")
val structCols = dfIn.columns.filter(_.contains("."))
// structCols: Array[String] =
// Array(custom_fields.fav_colour, custom_fields.star_sign, s.c1, s.c2)
val structColsMap = structCols.map(_.split("\\.")).
groupBy(_(0)).mapValues(_.map(_(1)))
// structColsMap: scala.collection.immutable.Map[String,Array[String]] =
// Map(s -> Array(c1, c2), custom_fields -> Array(fav_colour, star_sign))
val dfExpanded = structColsMap.foldLeft(dfIn){ (accDF, kv) =>
  val cols = kv._2.map(v => col("`" + kv._1 + "." + v + "`").as(v))
  accDF.withColumn(kv._1, struct(cols: _*))
}
val dfResult = structCols.foldLeft(dfExpanded)(_ drop _)
dfResult.show
// +------+------------+-----+--------------+
// |id |name |s |custom_fields |
// +------+------------+-----+--------------+
// |123456|John Citizen|[a,1]|[Blue,Scorpio]|
// |990087|Jane Simth |[b,2]|[Green,Taurus]|
// +------+------------+-----+--------------+
dfResult.printSchema
// root
// |-- id: integer (nullable = false)
// |-- name: string (nullable = true)
// |-- s: struct (nullable = false)
// | |-- c1: string (nullable = true)
// | |-- c2: integer (nullable = false)
// |-- custom_fields: struct (nullable = false)
// | |-- fav_colour: string (nullable = true)
// | |-- star_sign: string (nullable = true)
Note that this solution handles only up to one nested level.
To convert each row to JSON format, consider using toJSON as follows:
dfResult.toJSON.show(false)
// +---------------------------------------------------------------------------------------------------------------------+
// |value |
// +---------------------------------------------------------------------------------------------------------------------+
// |{"id":123456,"name":"John Citizen","s":{"c1":"a","c2":1},"custom_fields":{"fav_colour":"Blue","star_sign":"Scorpio"}}|
// |{"id":990087,"name":"Jane Simth","s":{"c1":"b","c2":2},"custom_fields":{"fav_colour":"Green","star_sign":"Taurus"}} |
// +---------------------------------------------------------------------------------------------------------------------+
I'm reading a JSON file into a Spark DataFrame in Scala. I have a JSON field like
"areaGlobalIdList":[2389,3,2,1,2147,2142,2518]
Spark automatically infers the data type of this field as array<long>. I tried concat_ws, but it seems to work only with array<string>. When I tried converting the column, the output showed up as
scala> val cmrdd = sc.textFile("/user/nkthn/cm.json")
scala> val cmdf = sqlContext.read.json(cmrdd)
scala> val dfResults = cmdf.select($"areaGlobalIdList".cast(StringType)).withColumn("AREAGLOBALIDLIST", regexp_replace($"areaGlobalIdList" , ",", "." ))
scala> dfResults.show(20,false)
+------------------------------------------------------------------+
|AREAGLOBALIDLIST |
+------------------------------------------------------------------+
|org.apache.spark.sql.catalyst.expressions.UnsafeArrayData#6364b584|
+------------------------------------------------------------------+
I'm expecting the output to be
[2389.3.2.1.2147.2142.2518]
Any assistance would be greatly appreciated.
Given the schema of the areaGlobalIdList column as
|-- areaGlobalIdList: array (nullable = true)
| |-- element: long (containsNull = false)
You can achieve this with a simple udf function:
import org.apache.spark.sql.functions._
val concatWithDot = udf((array: collection.mutable.WrappedArray[Long]) => array.mkString("."))
df.withColumn("areaGlobalIdList", concatWithDot($"areaGlobalIdList")).show(false)
I have a text file with the string REC as the record delimiter and a line break as the column delimiter, and every value has its column name attached to it with a comma as the delimiter. Below is the sample data format:
REC
Id,19048
Term,milk
Rank,1
REC
Id,19049
Term,corn
Rank,5
I used REC as the record delimiter. Now I want to create a Spark DataFrame with the column names Id, Term, and Rank. Please assist me with this.
Here is working code:
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.io.{LongWritable, Text}
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat
import org.apache.spark.sql.SQLContext
import org.apache.spark.{SparkConf, SparkContext}

object RecordSeparator extends App {
  val conf = new SparkConf().setAppName("test").setMaster("local[1]")
    .setExecutorEnv("executor-cores", "2")
  val sc = new SparkContext(conf)
  val sqlContext = new SQLContext(sc)
  import sqlContext.implicits._

  // use REC as the record delimiter when reading the file
  val hconf = new Configuration
  hconf.set("textinputformat.record.delimiter", "REC")

  val data = sc.newAPIHadoopFile("data.txt",
      classOf[TextInputFormat], classOf[LongWritable],
      classOf[Text], hconf)
    .map(x => x._2.toString.trim).filter(x => x != "")
    .map(x => getRecord(x)).map(x => x.split(","))
    .map(x => record(x(0), x(1), x(2)))

  val df = data.toDF()
  df.printSchema()
  df.show(false)

  // keep every second token (the values) from "Id,19048,Term,milk,Rank,1"
  def getRecord(in: String): String = {
    val ar = in.split("\n").mkString(",").split(",")
    val data = Array(ar(1), ar(3), ar(5))
    data.mkString(",")
  }
}
case class record(Id: String, Term: String, Rank: String)
Output:
root
|-- Id: string (nullable = true)
|-- Term: string (nullable = true)
|-- Rank: string (nullable = true)
+-----+----+----+
|Id   |Term|Rank|
+-----+----+----+
|19048|milk|1   |
|19049|corn|5   |
+-----+----+----+
Supposing you have your file on the "normal" filesystem (not HDFS), you have to write a file parser and then use sc.parallelize to create an RDD and then a DataFrame:
import org.apache.spark.sql.SQLContext
import org.apache.spark.{SparkConf, SparkContext}
import scala.collection.mutable
object Demo extends App {
val conf = new SparkConf().setMaster("local[1]").setAppName("Demo")
val sc = new SparkContext(conf)
val sqlContext = new SQLContext(sc)
import sqlContext.implicits._
case class Record(
  var id: Option[Int] = None,
  var term: Option[String] = None,
  var rank: Option[Int] = None)
val filename = "data.dat"
val records = readFile(filename)
val df = sc.parallelize(records).toDF
df.printSchema()
df.show()
def readFile(filename: String): Seq[Record] = {
  import scala.io.Source
  val records = mutable.ArrayBuffer.empty[Record]
  var currentRecord: Record = null
  for (line <- Source.fromFile(filename).getLines) {
    val tokens = line.split(',')
    currentRecord = tokens match {
      case Array("REC") => Record()
      case Array("Id", id) => {
        currentRecord.id = Some(id.toInt); currentRecord
      }
      case Array("Term", term) => {
        currentRecord.term = Some(term); currentRecord
      }
      case Array("Rank", rank) => {
        currentRecord.rank = Some(rank.toInt); records += currentRecord
        null
      }
    }
  }
  records
}
}
This gives:
root
|-- id: integer (nullable = true)
|-- term: string (nullable = true)
|-- rank: integer (nullable = true)
+-----+----+----+
| id|term|rank|
+-----+----+----+
|19048|milk| 1|
|19049|corn| 5|
+-----+----+----+