Dataframe foreach loop - better way to extract result? - scala

I have a DataFrame in Scala from which I need to create a new DataFrame for the distinct values of the SourceHash field.
var myProductsList = List[ProductInfo]()
val distinctFiles = dfDateFiltered.select(col("SourceHash")).distinct()
distinctFiles.foreach(rowFilter => {
val productInfo = createProductInfo(validFrom, validTo, dfDateFiltered, rowFilter.getString(0))
myProductsList = myProductsList :+ productInfo
})
myProductsList.toDF()
The problem is that this code throws a java.lang.NullPointerException inside createProductInfo for any invocation on the DataFrame dfDateFiltered.
The only way I can overcome this is by using collect() before foreach, like:
distinctFiles.collect().foreach(rowFilter => {...
}
But collect() is an expensive call, so it must be avoided.
How can I efficiently extract a new Dataset without losing performance?
Below is the createProductInfo code:
private def createProductInfo(validFrom: String, validTo: String, dfDateFiltered: Dataset[Row], rowFilter: String) : ProductInfo = {
val dfPerFile = dfDateFiltered.filter(col("SourceHash") === rowFilter)
val dfRow = dfPerFile.head
val clientCount = dfPerFile.filter(col("ServerOrClient") === "Client").count
val buildVersion = dfPerFile.filter(col("ServerOrClient") === "Server").select(col("BuildVersion")).head.getString(0)
val productInfo = ProductInfo(dfRow.getInt(0),
dfRow.getInt(1),
dfRow.getString(12),
dfRow.getString(13),
dfRow.getString(14),
validFrom,
validTo,
dfRow.getString(8),
dfRow.getTimestamp(9),
clientCount,
buildVersion
)
productInfo
}

Function "createProductInfo" can be avoided, values can be collected by grouping. Original dataset don't exists in question, approach can be shown on such data:
import org.apache.spark.sql.functions.{col, first, lit, sum, when}
import spark.implicits._ // assumes a SparkSession named "spark"

val dfDateFiltered = Seq(
(1, "Server", 1),
(1, "Client", 2),
(2, "Client", 3)
).toDF("SourceHash", "ServerOrClient", "BuildVersion")
val validFrom = "Today"
dfDateFiltered
.groupBy("SourceHash")
.agg(sum(when($"ServerOrClient" === lit("Client"), 1).otherwise(0)).alias("clientCount"),
first(when($"ServerOrClient" === lit("Server"), col("BuildVersion")).otherwise(null), true).alias("buildVersion")
)
.withColumn("validFrom", lit(validFrom))
.as[Product]
Output:
+----------+-----------+------------+---------+
|SourceHash|clientCount|buildVersion|validFrom|
+----------+-----------+------------+---------+
|1         |1          |1           |Today    |
|2         |1          |null        |Today    |
+----------+-----------+------------+---------+
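The .as[Product] conversion assumes a matching case class is in scope; a minimal sketch (the field types are assumptions inferred from the aggregated columns) could be:
// Hypothetical case class backing the .as[Product] conversion above.
// sum(...) yields a Long and first(BuildVersion) is nullable, hence Option[Int].
case class Product(SourceHash: Int, clientCount: Long, buildVersion: Option[Int], validFrom: String)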

Related

create view for two different dataframe in scala spark

I have a code snippet that reads a JSON array of file paths, unions the output, and gives me two different tables. I want to create a different createOrReplaceTempView(name) for each of those two tables, where the name is available in the JSON array like below:
{
"source": [
{
"name": "testPersons",
"data": [
"E:\\dataset\\2020-05-01\\",
"E:\\dataset\\2020-05-02\\"
],
"type": "json"
},
{
"name": "testPets",
"data": [
"E:\\dataset\\2020-05-01\\078\\",
"E:\\dataset\\2020-05-02\\078\\"
],
"type": "json"
}
]
}
My output:
testPersons
+------+---+
|name  |age|
+------+---+
|John  |24 |
|Cammy |20 |
|Britto|30 |
|George|23 |
|Mikle |15 |
+------+---+
testPets
+------+---+
|name  |age|
+------+---+
|piku  |2  |
|jimmy |3  |
|rapido|1  |
+------+---+
Above are my output and the JSON array; my code iterates through each array entry and reads its data section.
But how do I change my code below to create a temp view for each output table?
For example, I want to call .createOrReplaceTempView(testPersons) and .createOrReplaceTempView(testPets),
with the view name as given in the JSON array.
if (dataArr(counter)("type").value.toString() == "json") {
val name = dataArr(counter)("name").value.toString()
val dataPath = dataArr(counter)("data").arr
val input = dataPath.map(item => {
val rdd = spark.sparkContext.wholeTextFiles(item.str).map(i => "[" + i._2.replaceAll("\\}.*\n{0,}.*\\{", "},{") + "]")
spark
.read
.schema(Schema.getSchema(name))
.option("multiLine", true)
.json(rdd)
})
val emptyDF = spark.createDataFrame(spark.sparkContext.emptyRDD[Row], Schema.getSchema(name))
val finalDF = input.foldLeft(emptyDF)((x, y) => x.union(y))
finalDF.show()
}
Expected output:
spark.sql("SELECT * FROM testPersons").show()
spark.sql("SELECT * FROM testPets").show()
It should give me the table for each one.
Since you already have your data wrangled into shape and have your rows in DataFrames and simply want to access them as temporary views, I suppose you are looking for the function(s):
createOrReplaceGlobalTempView
createOrReplaceTempView
They can be invoked from a DataFrame/Dataset.
df.createOrReplaceGlobalTempView("testPersons")
spark.sql("SELECT * FROM global_temp.testPersons").show()
df.createOrReplaceTempView("testPersons")
spark.sql("SELECT * FROM testPersons").show()
For an explanation about the difference between the two, you can take a look at this question.
If you are trying to read the JSON dynamically, load the files listed under data into DataFrames, and then register each one under its own view name, you can do something like this:
import net.liftweb.json._
import net.liftweb.json.DefaultFormats
case class Source(name: String, data: List[String], `type`: String)
val file = scala.io.Source.fromFile("path/to/your/file").mkString
implicit val formats: DefaultFormats.type = DefaultFormats
val json = parse(file)
val sourceList = (json \ "source").children
for (source <- sourceList) {
val s = source.extract[Source]
val df = s.data.map(d => spark.read.json(d)).reduce(_ union _)
df.createOrReplaceTempView(s.name)
}
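Once the loop has run, each view can be queried by the name taken from the JSON, matching the expected output above:
spark.sql("SELECT * FROM testPersons").show()
spark.sql("SELECT * FROM testPets").show()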

Spark Best way groupByKey, orderBy and filter

I have 50GB of data with this schema [ID, timestamp, countryId] and I would like to get each "change" of each person in all of their events, ordered by timestamp, using Spark 2.2.1. I mean, if I have these events:
1,20180101,2
1,20180102,3
1,20180105,3
2,20180105,3
1,20180108,4
1,20180109,3
2,20180108,3
2,20180109,6
I would like to obtain this:
1,20180101,2
1,20180102,3
1,20180108,4
1,20180109,3
2,20180105,3
2,20180109,6
For this I have developed this code:
val eventsOrdened = eventsDataFrame.orderBy("ID", "timestamp")
val grouped = eventsOrdened
.rdd.map(x => (x.getString(0), x))
.groupByKey(300)
.mapValues(y => cleanEvents(y))
.flatMap(_._2)
where "cleanEvents" is:
import org.apache.spark.sql.Row
import scala.collection.mutable.ListBuffer

def cleanEvents(ordenedEvents: Iterable[Row]): Iterable[Row] = {
val ordered = ordenedEvents.toList
val cleanedList: ListBuffer[Row] = ListBuffer.empty[Row]
ordered.map {
x => {
val next = if (ordered.indexOf(x) != ordered.length - 1) ordered(ordered.indexOf(x) + 1) else x
val country = x.get(2)
val nextCountry = next.get(2)
val isFirst = if (cleanedList.isEmpty) true else false
val isLast = if (ordered.indexOf(x) == ordered.length - 1) true else false
if (isFirst) {
cleanedList.append(x)
} else {
if (cleanedList.size >= 1 && cleanedList.last.get(2) != country && country != nextCountry) {
cleanedList.append(x)
} else {
if (isLast && cleanedList.last.get(2) != zipCode) cleanedList.append(x)
}
}
}
}
cleanedList
}
It works, but it's too slow; any optimizations are welcome!
Thanks!
Window function "lag" can be used:
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{isnull, lag}
import spark.implicits._ // assumes a SparkSession named "spark"

case class Details(id: Int, date: Int, cc: Int)
val list = List[Details](
Details(1, 20180101, 2),
Details(1, 20180102, 3),
Details(1, 20180105, 3),
Details(2, 20180105, 3),
Details(1, 20180108, 4),
Details(1, 20180109, 3),
Details(2, 20180108, 3),
Details(2, 20180109, 6))
val ds = list.toDS()
// action
val window = Window.partitionBy("id").orderBy("date")
val result = ds.withColumn("lag", lag($"cc", 1).over(window)).where(isnull($"lag") || $"lag" =!= $"cc").orderBy("id", "date")
result.show(false)
Result is (lag column can be removed):
+---+--------+---+----+
|id |date    |cc |lag |
+---+--------+---+----+
|1  |20180101|2  |null|
|1  |20180102|3  |2   |
|1  |20180108|4  |3   |
|1  |20180109|3  |4   |
|2  |20180105|3  |null|
|2  |20180109|6  |3   |
+---+--------+---+----+
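If the lag helper column is not needed downstream, it can simply be dropped, for example:
// Drop the helper column once the change rows have been selected.
val cleaned = result.drop("lag")
cleaned.show(false)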
You might want to try the following:
Secondary sorting. It does low-level partitioning and sorting, and you create a custom partitioner; a minimal sketch follows after the combineByKey example below. More info here: http://codingjunkie.net/spark-secondary-sort/
Use combineByKey
import org.apache.spark.SparkContext

case class Details(id: Int, date: Int, cc: Int)
val sc = new SparkContext("local[*]", "App")
val list = List[Details](
Details(1,20180101,2),
Details(1,20180102,3),
Details(1,20180105,3),
Details(2,20180105,3),
Details(1,20180108,4),
Details(1,20180109,3),
Details(2,20180108,3),
Details(2,20180109,6))
val rdd = sc.parallelize(list)
val createCombiner = (v: (Int, Int)) => List[(Int, Int)](v)
val combiner = (c: List[(Int, Int)], v: (Int, Int)) => (c :+ v).sortBy(_._1)
val mergeCombiner = (c1: List[(Int, Int)], c2: List[(Int, Int)]) => (c1 ++ c2).sortBy(_._1)
rdd
.map(det => (det.id, (det.date, det.cc)))
.combineByKey(createCombiner, combiner, mergeCombiner)
.collect()
.foreach(println)
the output would be something like this:
(1,List((20180101,2), (20180102,3), (20180105,3), (20180108,4), (20180109,3)))
(2,List((20180105,3), (20180108,3), (20180109,6)))
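For the secondary-sorting option mentioned above, a minimal sketch under the same Details/rdd setup (the EventKey and IdPartitioner names are assumptions for illustration, not code from the linked article):
// Secondary sort sketch: partition by id only, but order keys by (id, date) within each partition,
// so every person's events arrive in date order without a groupByKey.
import org.apache.spark.Partitioner

case class EventKey(id: Int, date: Int)

object EventKey {
  // Ordering used by repartitionAndSortWithinPartitions: id first, then date.
  implicit val ordering: Ordering[EventKey] = Ordering.by((k: EventKey) => (k.id, k.date))
}

class IdPartitioner(override val numPartitions: Int) extends Partitioner {
  // All events of the same id land in the same partition.
  override def getPartition(key: Any): Int = key match {
    case EventKey(id, _) => ((id % numPartitions) + numPartitions) % numPartitions
  }
}

val sorted = rdd
  .map(det => (EventKey(det.id, det.date), det.cc))
  .repartitionAndSortWithinPartitions(new IdPartitioner(300))
// sorted.mapPartitions(...) can now walk each id's events in date order and keep only the country changes.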

NullPointerException when using UDF in Spark

I have a DataFrame in Spark such as this one:
var df = List(
(1,"{NUM.0002}*{NUM.0003}"),
(2,"{NUM.0004}+{NUM.0003}"),
(3,"END(6)"),
(4,"END(4)")
).toDF("CODE", "VALUE")
+----+---------------------+
|CODE| VALUE|
+----+---------------------+
| 1|{NUM.0002}*{NUM.0003}|
| 2|{NUM.0004}+{NUM.0003}|
| 3| END(6)|
| 4| END(4)|
+----+---------------------+
My task is to iterate through the VALUE column and do the following: check if there is a substring such as {NUM.XXXX}, get the XXXX number, get the row where $"CODE" === XXXX, and replace the {NUM.XXXX} substring with the VALUE string of that row.
I would like the dataframe to look like this in the end:
+----+--------------------+
|CODE| VALUE|
+----+--------------------+
| 1|END(4)+END(6)*END(6)|
| 2| END(4)+END(6)|
| 3| END(6)|
| 4| END(4)|
+----+--------------------+
This is the best I've come up with:
val process = udf((ln: String) => {
var newln = ln
while(newln contains "{NUM."){
var num = newln.slice(newln.indexOf("{")+5, newln.indexOf("}")).toInt
var new_value = df.where($"CODE" === num).head.getAs[String](1)
newln = newln.replace(newln.slice(newln.indexOf("{"),newln.indexOf("}")+1), new_value)
}
newln
})
var df2 = df.withColumn("VALUE", when('VALUE contains "{NUM.",process('VALUE)).otherwise('VALUE))
Unfortunately, I get a NullPointerException when I try to filter/select/save df2, and no error when I just show df2. I believe the error appears when I access the DataFrame df within the UDF, but I need to access it every iteration, so I can't pass it as an input. Also, I've tried saving a copy of df inside the UDF but I don't know how to do that. What can I do here?
Any suggestions to improve the algorithm are very welcome! Thanks!
I wrote something that works, but I don't think it is very optimized. I do recursive joins on the initial DataFrame to replace the NUMs by ENDs. Here is the code:
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions.{col, udf}

case class Data(code: Long, value: String)
def main(args: Array[String]): Unit = {
val sparkSession: SparkSession = SparkSession.builder().master("local").getOrCreate()
val data = Seq(
Data(1,"{NUM.0002}*{NUM.0003}"),
Data(2,"{NUM.0004}+{NUM.0003}"),
Data(3,"END(6)"),
Data(4,"END(4)"),
Data(5,"{NUM.0002}")
)
val initialDF = sparkSession.createDataFrame(data)
val endDF = initialDF.filter(!(col("value") contains "{NUM"))
val numDF = initialDF.filter(col("value") contains "{NUM")
val resultDF = endDF.union(replaceNumByEnd(initialDF, numDF))
resultDF.show(false)
}
val parseNumUdf = udf((value: String) => {
if (value.contains("{NUM")) {
val regex = """.*?\{NUM\.(\d+)\}.*""".r
value match {
case regex(code) => code.toLong
}
} else {
-1L
}
})
val replaceUdf = udf((value: String, replacement: String) => {
val regex = """\{NUM\.(\d+)\}""".r
regex.replaceFirstIn(value, replacement)
})
def replaceNumByEnd(initialDF: DataFrame, currentDF: DataFrame): DataFrame = {
if (currentDF.count() == 0) {
currentDF
} else {
val numDFWithCode = currentDF
.withColumn("num_code", parseNumUdf(col("value")))
.withColumnRenamed("code", "code_original")
.withColumnRenamed("value", "value_original")
val joinedDF = numDFWithCode.join(initialDF, numDFWithCode("num_code") === initialDF("code"))
val replacedDF = joinedDF.withColumn("value_replaced", replaceUdf(col("value_original"), col("value")))
val nextDF = replacedDF.select(col("code_original").as("code"), col("value_replaced").as("value"))
val endDF = nextDF.filter(!(col("value") contains "{NUM"))
val numDF = nextDF.filter(col("value") contains "{NUM")
endDF.union(replaceNumByEnd(initialDF, numDF))
}
}
If you need more explanation, don't hesitate.

Splitting columns in a Spark dataframe into new rows [Scala]

I have output from a spark data frame like below:
Amt |id |num |Start_date |Identifier
43.45|19840|A345|[2014-12-26, 2013-12-12]|[232323,45466]|
43.45|19840|A345|[2010-03-16, 2013-16-12]|[34343,45454]|
My requirement is to generate output in the below format from the above output:
Amt |id |num |Start_date |Identifier
43.45|19840|A345|2014-12-26|232323
43.45|19840|A345|2013-12-12|45466
43.45|19840|A345|2010-03-16|34343
43.45|19840|A345|2013-16-12|45454
Can somebody help me achieve this?
Is this the thing you're looking for?
import org.apache.spark.sql._
import org.apache.spark.sql.functions._
val sparkSession = ...
import sparkSession.implicits._
val input = sc.parallelize(Seq(
(43.45, 19840, "A345", Seq("2014-12-26", "2013-12-12"), Seq(232323,45466)),
(43.45, 19840, "A345", Seq("2010-03-16", "2013-16-12"), Seq(34343,45454))
)).toDF("amt", "id", "num", "start_date", "identifier")
val zipArrays = udf { (dates: Seq[String], identifiers: Seq[Int]) =>
dates.zip(identifiers)
}
val output = input.select($"amt", $"id", $"num", explode(zipArrays($"start_date", $"identifier")))
.select($"amt", $"id", $"num", $"col._1".as("start_date"), $"col._2".as("identifier"))
output.show()
Which returns:
+-----+-----+----+----------+----------+
| amt| id| num|start_date|identifier|
+-----+-----+----+----------+----------+
|43.45|19840|A345|2014-12-26| 232323|
|43.45|19840|A345|2013-12-12| 45466|
|43.45|19840|A345|2010-03-16| 34343|
|43.45|19840|A345|2013-16-12| 45454|
+-----+-----+----+----------+----------+
EDIT:
Since you would like to have multiple columns that should be zipped, you should try something like this:
val input = sc.parallelize(Seq(
(43.45, 19840, "A345", Seq("2014-12-26", "2013-12-12"), Seq("232323","45466"), Seq("123", "234")),
(43.45, 19840, "A345", Seq("2010-03-16", "2013-16-12"), Seq("34343","45454"), Seq("345", "456"))
)).toDF("amt", "id", "num", "start_date", "identifier", "another_column")
val zipArrays = udf { seqs: Seq[Seq[String]] =>
for(i <- seqs.head.indices) yield seqs.fold(Seq.empty)((accu, seq) => accu :+ seq(i))
}
val columnsToSelect = Seq($"amt", $"id", $"num")
val columnsToZip = Seq($"start_date", $"identifier", $"another_column")
val outputColumns = columnsToSelect ++ columnsToZip.zipWithIndex.map { case (column, index) =>
$"col".getItem(index).as(column.toString())
}
val output = input.select($"amt", $"id", $"num", explode(zipArrays(array(columnsToZip: _*)))).select(outputColumns: _*)
output.show()
/*
+-----+-----+----+----------+----------+--------------+
| amt| id| num|start_date|identifier|another_column|
+-----+-----+----+----------+----------+--------------+
|43.45|19840|A345|2014-12-26| 232323| 123|
|43.45|19840|A345|2013-12-12| 45466| 234|
|43.45|19840|A345|2010-03-16| 34343| 345|
|43.45|19840|A345|2013-16-12| 45454| 456|
+-----+-----+----+----------+----------+--------------+
*/
If I understand correctly, you want the first elements of col 3 and 4.
Does this make sense?
import org.apache.spark.sql.Row

// Map over the underlying RDD, keeping only the first element of each array column.
val newRdd = oldDataFrame.rdd.map { row =>
  val zro = row(0)                    // 43.45
  val one = row(1)                    // 19840
  val two = row(2)                    // A345
  val dates = row.getSeq[String](3)   // [2014-12-26, 2013-12-12]
  val numbers = row.getSeq[Int](4)    // [232323, 45466]
  Row(zro, one, two, dates(0), numbers(0))
}
// newRdd can be turned back into a DataFrame with createDataFrame(newRdd, schema).
You could use SparkSQL.
First you create a view with the information we need to process:
df.createOrReplaceTempView("tableTest")
Then you can select the data with the expansions:
sparkSession.sqlContext.sql(
"SELECT Amt, id, num, expanded_start_date, expanded_id " +
"FROM tableTest " +
"LATERAL VIEW explode(Start_date) Start_date AS expanded_start_date " +
"LATERAL VIEW explode(Identifier) AS expanded_id")
.show()

Better way to convert a string field into timestamp in Spark

I have a CSV in which a field is a datetime in a specific format. I cannot import it directly into my Dataframe because it needs to be a timestamp. So I import it as a string and convert it into a Timestamp like this:
import java.sql.Timestamp
import java.text.SimpleDateFormat
import java.util.Date
import org.apache.spark.sql.Row
def getTimestamp(x:Any) : Timestamp = {
val format = new SimpleDateFormat("MM/dd/yyyy' 'HH:mm:ss")
if (x.toString() == "")
return null
else {
val d = format.parse(x.toString());
val t = new Timestamp(d.getTime());
return t
}
}
def convert(row : Row) : Row = {
val d1 = getTimestamp(row(3))
return Row(row(0),row(1),row(2),d1)
}
Is there a better, more concise way to do this with the Dataframe API or Spark SQL? The above method requires creating an RDD and giving the schema for the Dataframe again.
Spark >= 2.2
Since 2.2 you can provide the format string directly:
import org.apache.spark.sql.functions.to_timestamp
val ts = to_timestamp($"dts", "MM/dd/yyyy HH:mm:ss")
df.withColumn("ts", ts).show(2, false)
// +---+-------------------+-------------------+
// |id |dts |ts |
// +---+-------------------+-------------------+
// |1 |05/26/2016 01:01:01|2016-05-26 01:01:01|
// |2 |#$#### |null |
// +---+-------------------+-------------------+
Spark >= 1.6, < 2.2
You can use the date processing functions introduced in Spark 1.5. Assuming you have the following data:
val df = Seq((1L, "05/26/2016 01:01:01"), (2L, "#$####")).toDF("id", "dts")
You can use unix_timestamp to parse strings and cast the result to timestamp:
import org.apache.spark.sql.functions.unix_timestamp
val ts = unix_timestamp($"dts", "MM/dd/yyyy HH:mm:ss").cast("timestamp")
df.withColumn("ts", ts).show(2, false)
// +---+-------------------+---------------------+
// |id |dts |ts |
// +---+-------------------+---------------------+
// |1 |05/26/2016 01:01:01|2016-05-26 01:01:01.0|
// |2 |#$#### |null |
// +---+-------------------+---------------------+
As you can see it covers both parsing and error handling. The format string should be compatible with Java SimpleDateFormat.
Spark >= 1.5, < 1.6
You'll have to use something like this:
unix_timestamp($"dts", "MM/dd/yyyy HH:mm:ss").cast("double").cast("timestamp")
or
(unix_timestamp($"dts", "MM/dd/yyyy HH:mm:ss") * 1000).cast("timestamp")
due to SPARK-11724.
Spark < 1.5
You should be able to use these with expr and HiveContext.
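For example, a minimal sketch, assuming a HiveContext-backed DataFrame named df with the same dts column as above (so the Hive unix_timestamp UDF is available):
// Spark < 1.5 sketch: call the Hive unix_timestamp UDF through a SQL expression.
val parsed = df.selectExpr(
  "id",
  "dts",
  "cast(unix_timestamp(dts, 'MM/dd/yyyy HH:mm:ss') as timestamp) as ts"
)
parsed.show()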
I haven't played with Spark SQL yet, but I think this would be more idiomatic Scala (using null is not considered good practice):
import java.sql.Timestamp
import java.text.SimpleDateFormat
import scala.util.{Failure, Success, Try}

def getTimestamp(s: String): Option[Timestamp] = s match {
case "" => None
case _ => {
val format = new SimpleDateFormat("MM/dd/yyyy' 'HH:mm:ss")
Try(new Timestamp(format.parse(s).getTime)) match {
case Success(t) => Some(t)
case Failure(_) => None
}
}
}
Please notice I assume you know the Row element types beforehand (if you read them from a CSV file, they are all String); that's why I use a proper type like String and not Any (everything is a subtype of Any).
It also depends on how you want to handle parsing exceptions. In this case, if a parsing exception occurs, a None is simply returned.
You could use it further on with:
rows.map(row => Row(row(0), row(1), row(2), getTimestamp(row.getString(3))))
I have ISO8601 timestamps in my dataset and I needed to convert them to the "yyyy-MM-dd" format. This is what I did:
import org.joda.time.{DateTime, DateTimeZone}
object DateUtils extends Serializable {
def dtFromUtcSeconds(seconds: Int): DateTime = new DateTime(seconds * 1000L, DateTimeZone.UTC)
def dtFromIso8601(isoString: String): DateTime = new DateTime(isoString, DateTimeZone.UTC)
}
sqlContext.udf.register("formatTimeStamp", (isoTimestamp : String) => DateUtils.dtFromIso8601(isoTimestamp).toString("yyyy-MM-dd"))
And you can just use the UDF in your spark SQL query.
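For example (the table and column names here are placeholders for illustration):
// Hypothetical usage of the registered UDF; "events" and "created_at" are assumed names.
sqlContext.sql("SELECT formatTimeStamp(created_at) AS created_date FROM events").show()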
Spark Version: 2.4.4
scala> import org.apache.spark.sql.types.TimestampType
import org.apache.spark.sql.types.TimestampType
scala> val df = Seq("2019-04-01 08:28:00").toDF("ts")
df: org.apache.spark.sql.DataFrame = [ts: string]
scala> val df_mod = df.select($"ts".cast(TimestampType))
df_mod: org.apache.spark.sql.DataFrame = [ts: timestamp]
scala> df_mod.printSchema()
root
|-- ts: timestamp (nullable = true)
I would like to move the getTimestamp method you wrote into the RDD's mapPartitions and reuse a GenericMutableRow among the rows in an iterator:
val strRdd = sc.textFile("hdfs://path/to/cvs-file")
val rowRdd: RDD[Row] = strRdd.map(_.split('\t')).mapPartitions { iter =>
new Iterator[Row] {
val row = new GenericMutableRow(4)
var current: Array[String] = _
def hasNext = iter.hasNext
def next() = {
current = iter.next()
row(0) = current(0)
row(1) = current(1)
row(2) = current(2)
val ts = getTimestamp(current(3))
if(ts != null) {
row.update(3, ts)
} else {
row.setNullAt(3)
}
row
}
}
}
And you should still use the schema to generate a DataFrame:
val df = sqlContext.createDataFrame(rowRdd, tableSchema)
The usage of GenericMutableRow inside an iterator implementation can be found in the Aggregate operator, InMemoryColumnarTableScan, ParquetTableOperations, etc.
I would use https://github.com/databricks/spark-csv
This will infer timestamps for you.
import com.databricks.spark.csv._
val rdd: RDD[String] = sc.textFile("csvfile.csv")
val df : DataFrame = new CsvParser().withDelimiter('|')
.withInferSchema(true)
.withParseMode("DROPMALFORMED")
.csvRdd(sqlContext, rdd)
I had some issues with to_timestamp where it was returning an empty string. After a lot of trial and error, I was able to get around it by casting as a timestamp, and then casting back as a string. I hope this helps anyone else with the same issue:
df.columns.intersect(cols).foldLeft(df)((newDf, col) => {
val conversionFunc = to_timestamp(newDf(col).cast("timestamp"), "MM/dd/yyyy HH:mm:ss").cast("string")
newDf.withColumn(col, conversionFunc)
})