Does the withColumn() function of Spark check every row of a Dataset or DataFrame? - scala

We generally add columns to existing Spark DataFrames by using the withColumn function.
I just wanted to know: if we have millions of rows in a Dataset, will the
withColumn("columnName", when(condition1, valueA).when(condition2, valueB))
method check the conditions for each row of the Dataset?
If yes, isn't that poor performance? And is there a better way?

Yes, withColumn("columnName", columnExpression) will be evaluated for each and every row, millions of them. This is a map operation, so it scales linearly. I wouldn't worry about the performance, which will depend on the complexity of the expression.
If your operation needs data from each row, then you must execute it for each row.
If your operation is the same for the whole DataFrame or for a partition of the DataFrame, then you can execute it once per DataFrame or partition and write the result to each row; this can reduce some overhead.
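For illustration, here is a minimal sketch of that per-partition idea, assuming a hypothetical lookupExpensiveValue() helper whose result does not depend on the individual row; it is computed once per partition via mapPartitions rather than once per row:
import org.apache.spark.sql.{DataFrame, Row, SparkSession}
import org.apache.spark.sql.types.{StringType, StructField, StructType}

// Hypothetical expensive computation whose result does not depend on the row.
def lookupExpensiveValue(): String = "someConstant"

// Appends a column whose value is computed once per partition instead of once per row.
def addConstantColumn(spark: SparkSession, df: DataFrame): DataFrame = {
  val newSchema = StructType(df.schema.fields :+ StructField("columnName", StringType))
  val newRows = df.rdd.mapPartitions { rows =>
    val value = lookupExpensiveValue()            // evaluated once per partition
    rows.map(r => Row.fromSeq(r.toSeq :+ value))  // written onto every row
  }
  spark.createDataFrame(newRows, newSchema)
}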

Related

Spark - adding multiple columns under the same when condition

I need to add a couple of columns to a Spark DataFrame.
The value for both columns is conditional, using a when clause, but the condition is the same for both of them.
val df: DataFrame = ???
df
.withColumn("colA", when(col("condition").isNull, f1).otherwise(f2))
.withColumn("colB", when(col("condition").isNull, f3).otherwise(f4))
Since the condition in both when clauses is the same, is there a way I can rewrite this without repeating myself? I don't mean just extracting the condition to a variable, but actually reducing it to a single when clause, to avoid having to run the test multiple times on the DataFrame.
Also, in case I leave it like that, will Spark calculate the condition twice, or will it be able to optimize the work plan and run it only once?
The corresponding columns f1/f3 and f2/f4 can be packed into an array and then separated into two different columns after evaluating the condition.
df.withColumn("colAB", when(col("condition").isNull, array('f1, 'f3)).otherwise(array('f2, 'f4)))
.withColumn("colA", 'colAB(0))
.withColumn("colB", 'colAB(1))
The physical plans for my code and the code in the question are (ignoring the intermediate column colAB) the same:
== Physical Plan ==
LocalTableScan [f1#16, f2#17, f3#18, f4#19, condition#20, colA#71, colB#78]
== Physical Plan ==
LocalTableScan [f1#16, f2#17, f3#18, f4#19, condition#20, colAB#47, colA#54, colB#62]
So in both cases the condition is evaluated only once. This is at least true if condition is a regular column.
A reason to combine the two when statements could be that the resulting code is more readable, although that judgement depends on the reader.
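As a sanity check, here is a minimal sketch (with a small made-up DataFrame matching the column names above) that compares the two physical plans via explain():
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{array, col, when}

val spark = SparkSession.builder().master("local[*]").appName("plans").getOrCreate()
import spark.implicits._

// Hypothetical sample data matching the column names used above.
val df = Seq(
  (1, 2, 3, 4, Option.empty[Int]),
  (5, 6, 7, 8, Some(9))
).toDF("f1", "f2", "f3", "f4", "condition")

// Two separate when clauses.
df.withColumn("colA", when(col("condition").isNull, col("f1")).otherwise(col("f2")))
  .withColumn("colB", when(col("condition").isNull, col("f3")).otherwise(col("f4")))
  .explain()

// Single when clause with an intermediate array column.
df.withColumn("colAB", when(col("condition").isNull, array(col("f1"), col("f3")))
    .otherwise(array(col("f2"), col("f4"))))
  .withColumn("colA", col("colAB")(0))
  .withColumn("colB", col("colAB")(1))
  .explain()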

Loop through df rows or apply an UDF?

I have a metadata file to apply data quality checks on another df. They are small files.
Each row specifies the type of check and column to have the check performed.
What is the best approach in terms of performance in Spark? Loop through every row to apply the check, or use a UDF on the metadata file converted to a DataFrame?

SCALA: How to use collect function to get the latest modified entry from a dataframe?

I have a scala dataframe with two columns:
id: String
updated: Timestamp
From this dataframe I just want to get out the latest date, for which I use the following code at the moment:
df.agg(max("updated")).head()
// returns a row
I've just read about the collect() function, which I'm told is safer to use for such a problem (when my code runs as a job, it appears not to aggregate the max over the whole dataset, although it looks perfectly fine when it runs in a notebook), but I don't understand how it should be used.
I found an implementation like the following, but I could not figure out how it should be used...
df1.agg({"x": "max"}).collect()[0]
I tried it like the following:
df.agg(max("updated")).collect()(0)
Without (0) it returns an Array, which actually looks good. So the idea is that we should apply the aggregation on the whole dataset loaded in the driver, not just the partitioned version, otherwise it seems not to retrieve all the timestamps. My question now is, how is collect() actually supposed to work in such a situation?
Thanks a lot in advance!
I'm assuming that you are talking about a Spark DataFrame (not a Scala one).
If you just want the latest date (only that column) you can do:
df.select(max("updated"))
You can see what's inside the DataFrame with df.show(). Since DataFrames are immutable, you need to assign the result of the select to another variable or chain the show() after the select().
This will return a dataframe with just one row with the max value in "updated" column.
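A minimal sketch of that, reusing the df from the question:
import org.apache.spark.sql.functions.max

// DataFrames are immutable, so keep the result of the select in a new value.
val maxUpdatedDf = df.select(max("updated"))
maxUpdatedDf.show()   // prints a single row with the max of the "updated" column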
To answer your question:
So the idea is that we should apply the aggregation on the whole dataset loaded in the driver, not just the partitioned version, otherwise it seems not to retrieve all the timestamps
When you select on a DataFrame, Spark will select data from the whole dataset; there is not a partitioned version and a driver version. Spark shards your data across your cluster, and all the operations that you define are applied to the entire dataset.
My question now is, how is collect() actually supposed to work in such a situation?
The collect operation converts a Spark DataFrame into an array (which is not distributed), and that array lives on the driver node. Bear in mind that if your DataFrame's size exceeds the memory available on the driver, you will get an OutOfMemoryError.
In this case if you do:
df.select(max("updated")).collect().head
Your DF (which contains only one row with one column, your latest date) will be converted to a Scala array. In this case it is safe, because select(max()) returns just one row.
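If what you want is the timestamp value itself rather than a Row, a minimal sketch (assuming the column is named updated) would be:
import java.sql.Timestamp
import org.apache.spark.sql.functions.max

// collect() brings the single aggregated row to the driver; getTimestamp(0)
// extracts the value of the first (and only) column.
val latest: Timestamp = df.agg(max("updated")).collect().head.getTimestamp(0)

// Equivalent and slightly shorter: first() returns the first row directly.
val latestAlt: Timestamp = df.agg(max("updated")).first().getTimestamp(0)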
Take some time to read more about Spark DataFrames/RDDs and the difference between transformations and actions.
It sounds weird. First of all, you don't need to collect the DataFrame to get the last element of a sorted DataFrame. There are many answers to this topic:
How to get the last row from DataFrame?
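For example, a minimal sketch that sorts descending and brings only the first row back to the driver:
import org.apache.spark.sql.functions.col

// Sort by "updated" in descending order and take the first row;
// only a single row is returned to the driver.
val latestRow = df.orderBy(col("updated").desc).head()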

Why is dataset.count causing a shuffle! (spark 2.2)

Here is my DataFrame; the underlying RDD has 2 partitions.
When I do a df.count, the produced DAG (screenshot omitted) contains a shuffle stage.
When I do a df.rdd.count, the produced DAG (screenshot omitted) contains no shuffle.
Question: count is an action in Spark; the official definition is 'Returns the number of rows in the DataFrame.' Now, when I perform the count on the DataFrame, why does a shuffle occur? Besides, when I do the same on the underlying RDD, no shuffle occurs.
It makes no sense to me why a shuffle would occur at all. I tried to go through the source code of count on the Spark GitHub, but it doesn't fully make sense to me. Is the groupBy being supplied to the action the culprit?
PS. df.coalesce(1).count does not cause any shuffle
It seems that the DataFrame's count operation uses groupBy, resulting in a shuffle. Below is the code from https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala
/**
 * Returns the number of rows in the Dataset.
 * @group action
 * @since 1.6.0
 */
def count(): Long = withAction("count", groupBy().count().queryExecution) { plan =>
  plan.executeCollect().head.getLong(0)
}
While if you look at the RDD's count function, it passes an aggregate function (Utils.getIteratorSize) on to each of the partitions, which returns the count of each partition as an Array, and then uses .sum to sum the elements of that array.
Code snippet from this link:
https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/rdd/RDD.scala
/**
* Return the number of elements in the RDD.
*/
def count(): Long = sc.runJob(this, Utils.getIteratorSize _).sum
When Spark performs the DataFrame count, it first computes partial counts for every partition and then has another stage to sum those up. This is particularly good for large DataFrames, where distributing the counting across multiple executors actually improves performance.
The place to verify this is the SQL tab of the Spark UI, which will show a physical plan description of the following sort:
*HashAggregate(keys=[], functions=[count(1)], output=[count#202L])
+- Exchange SinglePartition
+- *HashAggregate(keys=[], functions=[partial_count(1)], output=[count#206L])
In the shuffle stage, the key is empty and the value is the partial count of the partition, and all these (key, value) pairs are shuffled to one single partition.
That is, very little data moves in the shuffle stage.
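A minimal sketch of reproducing that plan locally (any small DataFrame will do; repartition(2) mirrors the two partitions in the question):
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").appName("countPlan").getOrCreate()
import spark.implicits._

val df = spark.range(0, 100).repartition(2).toDF("id")

// df.count() is equivalent to this aggregation; explain() prints the
// HashAggregate -> Exchange SinglePartition -> partial HashAggregate plan.
df.groupBy().count().explain()

// The RDD count, by contrast, just sums per-partition sizes on the driver.
println(df.rdd.count())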

Filter columns in large dataset with Spark

I have a dataset which is 1,000,000 rows by about 390,000 columns. The fields are all binary, either 0 or 1. The data is very sparse.
I've been using Spark to process this data. My current task is to filter the data--I only want data in 1000 columns that have been preselected. This is the current code that I'm using to achieve this task:
val result = bigdata.map(_.zipWithIndex.filter{case (value, index) => selectedColumns.contains(index)})
bigdata is just an RDD[Array[Int]]
However, this code takes quite a while to run. I'm sure there's a more efficient way to filter my dataset that doesn't involve going in and filtering every single row separately. Would loading my data into a DataFrame and manipulating it through the DataFrame API make things faster/easier? Should I be looking into column-store based databases?
You can start by making your filter a little more efficient. Please note that:
your RDD contains Array[Int], which means you can access the nth element of each row in O(1) time
#selectedColumns << #columns
Considering these two facts, it should be obvious that it doesn't make sense to iterate over all elements of each row, not to mention the contains calls. Instead, you can simply map over selectedColumns:
// Optional if selectedColumns is not already ordered
val orderedSelectedColumns = selectedColumns.toList.sorted.toArray
bigdata.map(row => orderedSelectedColumns.map(row))
Comparing time complexity:
zipWithIndex + filter (assuming the best-case scenario, where contains is O(1)) - O(#rows * #columns)
map - O(#rows * #selectedColumns)
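A self-contained sketch of that approach, with made-up data for illustration:
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").appName("columnFilter").getOrCreate()
val sc = spark.sparkContext

// Hypothetical sparse binary data: 3 rows with 6 columns each.
val bigdata = sc.parallelize(Seq(
  Array(0, 1, 0, 0, 1, 0),
  Array(1, 0, 0, 1, 0, 0),
  Array(0, 0, 1, 0, 0, 1)
))

// Indices of the preselected columns.
val selectedColumns = Set(1, 3, 5)
val orderedSelectedColumns = selectedColumns.toList.sorted.toArray

// Each output row contains only the selected columns, in index order;
// this is O(#rows * #selectedColumns) instead of O(#rows * #columns).
val result = bigdata.map(row => orderedSelectedColumns.map(row))
result.collect().foreach(r => println(r.mkString(",")))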
The easiest way to speed up execution is to parallelize it with partitionBy:
bigdata.partitionBy(new HashPartitioner(numPartitions)).foreachPartition(...)
foreachPartition receives an Iterator over which you can map and filter.
numPartitions is a val that you can set to the desired number of parallel partitions.