Convert my code below into a pure Scala function

def buildDf(df: DataFrame, platformKey: PlatformKey): DataFrame = {
  var dataF = df
  val selectedColumns = getSelectedColumns(platformKey)
  for ((newStructCol, colAttributes) <- selectedColumns._2) {
    dataF = dataF.withColumn(newStructCol, struct(colAttributes: _*))
  }
  dataF
}
How can I make this method use only val? I am trying to add columns to my Spark DataFrame. Is there a better way to write this, and what issues could I run into with my current code?
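A minimal sketch of a val-only version, using foldLeft over the selected columns (this keeps the asker's PlatformKey and getSelectedColumns as-is and assumes selectedColumns._2 is a collection of (name, columns) pairs, as in the loop above):

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.struct

// Sketch, not the original code: fold the (name -> columns) pairs over the
// DataFrame, adding one struct column per step, so no var is needed.
def buildDf(df: DataFrame, platformKey: PlatformKey): DataFrame = {
  val selectedColumns = getSelectedColumns(platformKey)
  selectedColumns._2.foldLeft(df) { case (acc, (newStructCol, colAttributes)) =>
    acc.withColumn(newStructCol, struct(colAttributes: _*))
  }
}

Each withColumn call only adds a projection to the logical plan, so this behaves like the original loop; if the number of added columns is very large, a single select can be cheaper than many chained withColumn calls.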

Related

Unable to flatten array of DataFrames

I have an array of DataFrames that I obtain by using randomSplit() in this manner:
val folds = df.randomSplit(Array.fill(5)(1.0/5)) //Array[Dataset[Row]]
I'll be iterating over folds using a for loop, where I will be dropping the ith entry inside folds and storing it separately. Then I will union all the others into another DataFrame, as in my code below:
val df = spark.read.format("csv").load("xyz")
val folds = df.randomSplit(Array.fill(5)(1.0/5))

for (i <- folds.indices) {
  var ts = folds
  val testSet = ts(i)
  ts = ts.drop(i)
  var trainSet = spark.createDataFrame(spark.sparkContext.emptyRDD[Row], testSet.schema)
  for (j <- ts.indices) {
    trainSet = trainSet.union(ts(j))
  }
}
While this does serve my purpose, I was also trying another approach where I would still separate folds into ts and testSet, and then use the flatten function on the remaining DataFrames inside ts to create another DataFrame, something like this:
val df = spark.read.format("csv").load("xyz")
val folds = df.randomSplit(Array.fill(5)(1.0/5))

for (i <- folds.indices) {
  var ts = folds
  val testSet = ts(i)
  ts = ts.drop(i)
  var trainSet = ts.flatten
}
But on the line that initializes trainSet I get an error: No Implicits Found for parameter asTrav: Dataset[Row] => Traversable[U_]. I have also added import spark.implicits._ after initializing the SparkSession.
My end goal with the creation of trainSet after flatten is to retrieve a DataFrame created after joining (union) the other Dataset[Row]s inside ts. I'm not sure where I'm going wrong.
I'm using Spark 2.4.5 with Scala 2.11.12
EDIT 1: Added how I read the DataFrame.
I'm not sure what your intention is here, but instead of using mutable variables and flatten you can do a recursive iteration like this:
val folds = df.randomSplit(Array.fill(5)(1.0/5)) // Array[Dataset[Row]]
val testSet = spark.createDataFrame(Seq.empty)
val trainSet = spark.createDataFrame(spark.sparkContext.emptyRDD[Row], testSet.schema)

go(folds, Array.empty)

def go(items: Array[Dataset[Row]], result: Array[Dataset[Row]]): Array[Dataset[Row]] = items match {
  case arr @ Array(_, _*) =>
    val res = arr.map { t =>
      trainSet.union(t)
    }
    go(arr.tail, result ++ res)
  case Array() => result
}
As far as I can see, testSet is not actually used anywhere in the method body.
I have replaced that for loop with a simple reduce:
val trainSet = ts.reduce((a,b) => a.union(b))
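For completeness, a hedged sketch of the whole split with no vars at all, using the fold indices to pick the test set and reduce to union the rest (same folds as above):

val folds = df.randomSplit(Array.fill(5)(1.0 / 5))

// Sketch: for each i, fold i is the test set and the union of the others is the training set.
val trainTestPairs = folds.indices.map { i =>
  val testSet  = folds(i)
  val trainSet = folds.indices.filter(_ != i).map(j => folds(j)).reduce(_ union _)
  (trainSet, testSet)
}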

Writing a UDF in Scala and using it in a PySpark job

We are trying to write a Scala UDF and call it from a map function in PySpark. The DataFrame schema is quite complex, and the columns we want to pass to this function are arrays of StructType.
trip_force_speeds = trip_details.groupby("vehicle_id", "driver_id", "StartDtLocal", "EndDtLocal")\
    .agg(collect_list(struct(col("event_start_dt_local"),
                             col("force"),
                             col("speed"),
                             col("sec_from_start"),
                             col("sec_from_end"),
                             col("StartDtLocal"),
                             col("EndDtLocal"),
                             col("verisk_vehicle_id"),
                             col("trip_duration_sec")))
    .alias("trip_details"))
In our map function we need to do some computation.
def calculateVariables(rec: Row): HashMap[String, Float] = {
  val trips = rec.getAs[List]("trips")
  val base_variables = new HashMap[String, Float]()
  val entropy_variables = new HashMap[String, Float]()
  val week_day_list = List("monday", "tuesday", "wednesday", "thursday", "friday")

  for (trip <- trips) {
    if (trip("start_dt_local") >= trip("StartDtLocal") && trip("start_dt_local") <= trip("EndDtLocal")) {
      base_variables("trip_summary_count") += 1

      if (trip("duration_sec").toFloat >= 300 && trip("duration_sec").toFloat <= 1800) {
        base_variables("bounded_trip") += 1
        base_variables("bounded_trip_duration") = trip("duration_sec") + base_variables("bounded_trip_duration")
        base_variables("total_bin_1") += 30
        base_variables("total_bin_2") += 30
        base_variables("total_bin_3") += 60
        base_variables("total_bin_5") += 60
        base_variables("total_bin_6") += 30
        base_variables("total_bin_7") += 30
      }

      if (trip("duration_sec") > 120 && trip("duration_sec") < 21600) {
        base_variables("trip_count") += 1
      }

      base_variables("trip_distance") += trip("distance_km")
      base_variables("trip_duration") = trip("duration_sec") + base_variables("trip_duration")
      base_variables("speed_event_distance") = trip("speed_event_distance_km") + base_variables("speed_event_distance")
      base_variables("speed_event_duration") = trip("speed_event_duration_sec") + base_variables("speed_event_duration")
      base_variables("speed_event_distance_ratio") = trip("speed_distance_ratio") + base_variables("speed_event_distance_ratio")
      base_variables("speed_event_duration_ratio") = trip("speed_duration_ratio") + base_variables("speed_event_duration_ratio")
    }
  }

  return base_variables
}
When we tried to compile the Scala code we got this error.
I also tried using Row but got this error:
"error: kinds of the type arguments (List) do not conform to the expected kinds of the type parameters (type T). List's type parameters do not match type T's expected parameters: type List has one type parameter, but type T has none – "
In my case the trip is a list of rows. This is the schema:
StructType(List(StructField(verisk_vehicle_id,StringType,true),StructField(verisk_driver_id,StringType,false),StructField(StartDtLocal,TimestampType,true),StructField(EndDtLocal,TimestampType,true),StructField(trips,ArrayType(StructType(List(StructField(week_start_dt_local,TimestampType,true),StructField(week_end_dt_local,TimestampType,true),StructField(start_dt_local,TimestampType,true),StructField(end_dt_local,TimestampType,true),StructField(StartDtLocal,TimestampType,true),StructField(EndDtLocal,TimestampType,true),StructField(verisk_vehicle_id,StringType,true),StructField(duration_sec,FloatType,true),StructField(distance_km,FloatType,true),StructField(speed_distance_ratio,FloatType,true),StructField(speed_duration_ratio,FloatType,true),StructField(speed_event_distance_km,FloatType,true),StructField(speed_event_duration_sec,FloatType,true))),true),true),StructField(trip_details,ArrayType(StructType(List(StructField(event_start_dt_local,TimestampType,true),StructField(force,FloatType,true),StructField(speed,FloatType,true),StructField(sec_from_start,FloatType,true),StructField(sec_from_end,FloatType,true),StructField(StartDtLocal,TimestampType,true),StructField(EndDtLocal,TimestampType,true),StructField(verisk_vehicle_id,StringType,true),StructField(trip_duration_sec,FloatType,true))),true),true)))
Is there something wrong with the way we defined the function signature? We tried overriding the Spark StructType, but that didn't work for me.
I am from a Python background and am facing some performance issues in the Python job, which is why I decided to write this map function in Scala.
You must work with the Row type instead of StructType in your udf; StructType represents the schema itself, not the data. Here is a little example in Scala that you can use:
import java.time.Instant

import org.apache.log4j.{Level, Logger}
import org.apache.spark.sql.{Row, SparkSession}
import org.apache.spark.sql.functions.{col, collect_list, struct, udf}
import org.apache.spark.sql.types.{LongType, StructField, StructType}

import scala.collection.immutable.HashMap

object test {

  // This simple type stores your results
  val hash = HashMap[String, Float]("start_dt_local" -> 0)

  val sampleDataset = Seq(Row(Instant.now().toEpochMilli, Instant.now().toEpochMilli))

  implicit val spark: SparkSession =
    SparkSession
      .builder()
      .appName("Test")
      .master("local[*]")
      .getOrCreate()

  def calculateVariablesUdf = udf { trip: Row =>
    if (trip.getAs[Long]("start_dt_local") >= trip.getAs[Long]("StartDtLocal")) {
      // create a new instance with your results
      hash("start_dt_local") + 1
    } else {
      hash("start_dt_local") + 0
    }
  }

  def main(args: Array[String]): Unit = {
    Logger.getLogger("org").setLevel(Level.OFF)
    Logger.getLogger("akka").setLevel(Level.OFF)

    val rdd = spark.sparkContext.parallelize(sampleDataset)
    val df = spark.createDataFrame(rdd, StructType(List(
      StructField("start_dt_local", LongType, false),
      StructField("StartDtLocal", LongType, false))))

    df.agg(collect_list(calculateVariablesUdf(struct(col("start_dt_local"), col("StartDtLocal")))).as("result")).show(false)
  }
}
Edit. For a better understanding:
You are wrong when you treat the schema description StructType(List(StructField)) as the type of your field. There is no List type in your DataFrame.
If you treat your calculateVariables as a udf, you don't need the for loop. I mean:
def calculateVariables = udf { trip: Row =>
  trip.getAs[Long]("start_dt_local")
  // your logic ....
}
As shown in the example, you can return your updated Hash directly from the udf.
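One more hedged sketch that may help with the original schema: inside a udf, an ArrayType(StructType(...)) column such as trips arrives as a Seq[Row] rather than a Scala List, so it can be read like this (field names taken from the schema above; the sum is only an illustration):

import org.apache.spark.sql.Row
import org.apache.spark.sql.functions.udf

// Sketch: read the nested array-of-struct column as Seq[Row] and fold it
// into a single number (here, the total trip duration).
def summarizeTrips = udf { rec: Row =>
  val trips = rec.getAs[Seq[Row]]("trips")
  trips.map(_.getAs[Float]("duration_sec")).sum
}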

Looping through Map Spark Scala

In this code we have two files: athletes.csv, which contains names, and twitter.test, which contains the tweet messages. We want to find, for every line in twitter.test, the names that match a name in athletes.csv. We applied a map function to store the names from athletes.csv and want to check every name against every line in the test file.
object twitterAthlete {

  def loadAthleteNames(): Map[String, String] = {
    // Handle character encoding issues:
    implicit val codec = Codec("UTF-8")
    codec.onMalformedInput(CodingErrorAction.REPLACE)
    codec.onUnmappableCharacter(CodingErrorAction.REPLACE)

    // Create a Map of Ints to Strings, and populate it from u.item.
    var athleteInfo: Map[String, String] = Map()
    //var movieNames:Map[Int, String] = Map()

    val lines = Source.fromFile("../athletes.csv").getLines()
    for (line <- lines) {
      var fields = line.split(',')
      if (fields.length > 1) {
        athleteInfo += (fields(1) -> fields(7))
      }
    }
    return athleteInfo
  }

  def parseLine(line: String): (String) = {
    var athleteInfo = loadAthleteNames()
    var hello = new String
    for ((k, v) <- athleteInfo) {
      if (line.toString().contains(k)) {
        hello = k
      }
    }
    return (hello)
  }

  def main(args: Array[String]) {
    Logger.getLogger("org").setLevel(Level.ERROR)

    val sc = new SparkContext("local[*]", "twitterAthlete")
    val lines = sc.textFile("../twitter.test")
    var athleteInfo = loadAthleteNames()

    val splitting = lines.map(x => x.split(";")).map(x => if (x.length == 4 && x(2).length <= 140) x(2))

    var hello = new String()
    val container = splitting.map(x => for ((key, value) <- athleteInfo) if (x.toString().contains(key)) { key }).cache

    container.collect().foreach(println)

    // val mapping = container.map(x => (x,1)).reduceByKey(_+_)
    // mapping.collect().foreach(println)
  }
}
The first file looks like:
id,name,nationality,sex,height........
001,Michael,USA,male,1.96 ...
002,Json,GBR,male,1.76 ....
003,Martin,female,1.73 . ...
The second file looks like:
time, id , tweet .....
12:00, 03043, some message that contain some athletes names , .....
02:00, 03023, some message that contain some athletes names , .....
something like this ...
But I got an empty result after running this code; any suggestions are much appreciated.
The result I got is empty:
()....
()...
()...
But the result I expected is something like:
(name,1)
(other name,1)
You need to use yield to return a value from the for comprehension inside your map:
val container = splitting.map(x => for((key,value) <- athleteInfo ; if(x.toString().contains(key)) ) yield (key, 1)).cache
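As a follow-up sketch (not part of the original answer): since the for/yield produces one small collection of hits per tweet, the commented-out counting step can be written with flatMap plus reduceByKey, assuming splitting and athleteInfo as defined above:

// Sketch: flatten the per-tweet name hits, then count occurrences per name.
val counts = splitting
  .flatMap(x => athleteInfo.keys.filter(name => x.toString.contains(name)).map(name => (name, 1)))
  .reduceByKey(_ + _)

counts.collect().foreach(println)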
I think you should just start with the simplest option first...
I would use DataFrames so you can use the built-in CSV parsing and leverage Catalyst, Tungsten, etc.
Then you can use the built-in Tokenizer to split the tweets into words, explode them, and do a simple join. Depending on how small the athlete-name data is, you'll end up with a more optimized broadcast join and avoid a shuffle.
import org.apache.spark.sql.functions._
import org.apache.spark.ml.feature.Tokenizer
import spark.implicits._  // needed for the 'column symbol syntax below

val tweets = spark.read.format("csv").load(...)
val athletes = spark.read.format("csv").load(...)

val tokenizer = new Tokenizer()
tokenizer.setInputCol("tweet")
tokenizer.setOutputCol("words")

val tokenized = tokenizer.transform(tweets)
val exploded = tokenized.withColumn("word", explode('words))
val withAthlete = exploded.join(athletes, 'word === 'name)

withAthlete.select(exploded("id"), 'name).show()
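If the athletes table is known to be small, the broadcast join mentioned above can also be hinted explicitly; a small sketch using the same DataFrames:

import org.apache.spark.sql.functions.broadcast

// Sketch: hint a broadcast hash join instead of relying on the size estimate.
val withAthlete = exploded.join(broadcast(athletes), 'word === 'name)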

String filter using Spark UDF

input.csv:
200,300,889,767,9908,7768,9090
300,400,223,4456,3214,6675,333
234,567,890
123,445,667,887
What I want:
Read the input file and compare each line with the set "123,200,300"; if a match is found, output the matching data:
200,300 (from input line 1)
300 (from input line 2)
123 (from input line 4)
What I wrote:
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.rdd.RDD

object sparkApp {

  val conf = new SparkConf()
    .setMaster("local")
    .setAppName("CountingSheep")
  val sc = new SparkContext(conf)

  def parseLine(invCol: String): RDD[String] = {
    println(s"INPUT, $invCol")
    val inv_rdd = sc.parallelize(Seq(invCol.toString))
    val bs_meta_rdd = sc.parallelize(Seq("123,200,300"))
    return inv_rdd.intersection(bs_meta_rdd)
  }

  def main(args: Array[String]) {
    val filePathName = "hdfs://xxx/tmp/input.csv"
    val rawData = sc.textFile(filePathName)
    val datad = rawData.map{ r => parseLine(r) }
  }
}
I get the following exception:
java.lang.NullPointerException
Please suggest where I went wrong
The problem is solved. It is very simple.
val pfile = sc.textFile("/FileStore/tables/6mjxi2uz1492576337920/input.csv")

case class pSchema(id: Int, pName: String)

val pDF = pfile.map(_.split("\t")).map(p => pSchema(p(0).toInt, p(1).trim())).toDF()
pDF.select("id", "pName").show()
Define the UDF:
val findP = udf((id: Int, pName: String) => {
  val ids = Array("123", "200", "300")
  var idsFound: String = ""
  for (id <- ids) {
    if (pName.contains(id)) {
      idsFound = idsFound + id + ","
    }
  }
  if (idsFound.length() > 0) {
    idsFound = idsFound.substring(0, idsFound.length - 1)
  }
  idsFound
})
Use the UDF in withColumn():
pDF.select("id","pName").withColumn("Found",findP($"id",$"pName")).show()
For a simpler answer: why are we making this so complex? In this case we don't need a UDF.
This is your input data:
200,300,889,767,9908,7768,9090|AAA
300,400,223,4456,3214,6675,333|BBB
234,567,890|CCC
123,445,667,887|DDD
and you have to match it with 123,200,300
val matchSet = "123,200,300".split(",").toSet
val rawrdd = sc.textFile("D:\\input.txt")

rawrdd.map(_.split("\\|"))  // "|" must be escaped: split takes a regex
  .map(arr => arr(0).split(",").toSet.intersect(matchSet).mkString(",") + "|" + arr(1))
  .foreach(println)
Your output:
300,200|AAA
300|BBB
|CCC
123|DDD
What you are trying to do can't be done the way you are doing it.
Spark does not support nested RDDs or performing Spark actions inside of transformations (see SPARK-5063); this usually leads to NullPointerExceptions (see SPARK-718 as one example). The confusing NPE is one of the most common sources of Spark questions on Stack Overflow:
call of distinct and map together throws NPE in spark library
NullPointerException in Scala Spark, appears to be caused be collection type?
Graphx: I've got NullPointerException inside mapVertices
(those are just a sample of the ones that I've answered personally; there are many others).
I think we can detect these errors by adding logic to RDD to check whether sc is null (e.g. turn sc into a getter function); we can use this to add a better error message.
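A hedged sketch of how the same comparison could be written without nesting RDDs, keeping the small match set on the driver and broadcasting it (same input path and set as in the question):

// Sketch: no RDD is created inside a transformation; the small set is broadcast
// and the intersection runs per line on the executors.
val matchSet = sc.broadcast("123,200,300".split(",").toSet)

val matched = sc.textFile("hdfs://xxx/tmp/input.csv")
  .map(line => line.split(",").toSet.intersect(matchSet.value).mkString(","))

matched.collect().foreach(println)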

Rewriting this while-loop as a for-loop

I'm working with StanfordNLP to extract data from a parsed Tree.
I'm using Scala for coding.
val tp = TregexPattern.compile("SOME_PATTERN")
val res = tp.matcher("SOME_TREE")
To read the results of this I use:
while (res.find()) {
  println(res.getMatch.getLeaves.mkString(" "))
}
I want to rewrite this while-loop in for-loop.
How about this:
val tp = TregexPattern.compile("SOME_PATTERN")
val res = tp.matcher("SOME_TREE")

// find() has to run before each getMatch, so drive the iterator with find()
// and read the current match inside the body.
for (_ <- Iterator.continually(res.find()).takeWhile(identity)) {
  println(res.getMatch.getLeaves.mkString(" "))
}