How to apply Spark SQL queries with mapPartitions in Scala

I am using mapPartitions() in Spark. I want to create a view for each partition and then apply spark.sql() to it, but I don't know how to create a DataFrame from the iterator inside mapPartitions.
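As background for why this is hard: spark.sql() cannot be called inside mapPartitions, because the SparkSession exists only on the driver, while mapPartitions hands each executor a plain Iterator of rows. A minimal sketch of the two usual alternatives (the table name, column names, and sample data are illustrative):

```scala
import org.apache.spark.sql.SparkSession

object PartitionSqlSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().master("local[*]").appName("sketch").getOrCreate()
    import spark.implicits._

    val df = Seq((1, "a"), (2, "b"), (3, "c")).toDF("id", "value")

    // Alternative 1: run the SQL once, on the driver, over the whole DataFrame.
    df.createOrReplaceTempView("events")
    val selected = spark.sql("SELECT id, value FROM events WHERE id > 1")

    // Alternative 2: express the per-partition logic with plain Scala iterator
    // operations inside mapPartitions (no SparkSession needed on executors).
    val doubled = df.as[(Int, String)].mapPartitions { rows =>
      rows.map { case (id, v) => (id * 2, v) }
    }

    selected.show()
    doubled.show()
    spark.stop()
  }
}
```

If the queries really must differ per partition, the logic has to be rewritten as iterator operations as in the second alternative; there is no supported way to get a DataFrame or view from inside a partition.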

Related

Windowing Aggregations in Spark

I am learning Spark Structured Streaming and I am a bit confused about windowing aggregations. I have the following questions:
Does a windowing aggregation always consider the time that is passed as a column to the window function?
If so, do we always have to provide a timestamp column in the dataset?
What happens when the dataset doesn't contain a timestamp column?
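To make the role of the timestamp column concrete, here is a minimal sketch of a windowed count (the socket source, window sizes, and column names are illustrative). window() always needs a time column; for event-time windows it comes from the data itself, and if the dataset has no timestamp column you can fall back on processing time, e.g. by attaching one with current_timestamp():

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, current_timestamp, window}

val spark = SparkSession.builder().master("local[*]").appName("windows").getOrCreate()

// Socket source used purely for illustration.
val lines = spark.readStream.format("socket")
  .option("host", "localhost").option("port", 9999).load()

// No timestamp in the data: attach processing time so window() has a time column.
val stamped = lines.withColumn("ts", current_timestamp())

// Count records per 10-minute window, sliding every 5 minutes.
val counts = stamped
  .groupBy(window(col("ts"), "10 minutes", "5 minutes"))
  .count()
```

Per record, window() simply computes which time bucket(s) the row's timestamp falls into, which is why some time column is mandatory.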

Improve groupby operation in Spark 1.5.2

We are facing poor performance with Spark.
I have 2 specific questions:
While debugging, we noticed that a few of the groupBy operations done on RDDs are taking more time than the rest.
Also, a few of the stages appear twice, some finishing very quickly and some taking more time.
We are currently running locally, with shuffle partitions set to 2 and the number of partitions set to 5; the data is around 100,000 records.
Speaking of the groupBy operation, we are grouping a DataFrame (which is the result of several joins) on two columns and then applying a function to get some result.
val groupedRows = rows.rdd.groupBy(row => (
  row.getAs[Long]("Column1"),   // the two grouping columns
  row.getAs[Int]("Column2")
))
val rdd = groupedRows.values.map(Criteria)
where Criteria is some function applied to the grouped rows. Can we optimize this groupBy in any way?
I would suggest you not convert the existing DataFrame to an RDD for the complex processing you are performing.
If you want to apply the Criteria function per group of (Column1, Column2), you can do this directly on the DataFrame. Moreover, if your Criteria can be expressed as a combination of built-in functions, that would be ideal; you can always fall back to udf functions for custom rules.
What I would suggest is to groupBy on the DataFrame and apply aggregation functions:
rows.groupBy("Column1", "Column2").agg(Criteria function)
You can use Window functions if you want multiple rows back from each group; more info here.
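A minimal sketch of the suggestion above, assuming Criteria can be phrased as an aggregation (the amount column, the sample data, and the hot/cold rule are illustrative):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, max, sum, udf}

val spark = SparkSession.builder().master("local[*]").appName("agg").getOrCreate()
import spark.implicits._

val rows = Seq(
  (1L, 1, 10.0),
  (1L, 1, 20.0),
  (2L, 3, 5.0)
).toDF("Column1", "Column2", "amount")

// Built-in aggregates cover many "Criteria" functions and let Catalyst optimize:
val aggregated = rows
  .groupBy("Column1", "Column2")
  .agg(sum("amount").as("total"), max("amount").as("peak"))

// If the rule really is custom, a UDF on the aggregated result still avoids
// the DataFrame -> RDD round trip:
val criteria = udf((total: Double, peak: Double) => if (peak > total / 2) "hot" else "cold")
val result = aggregated.withColumn("label", criteria(col("total"), col("peak")))
```

Staying on the DataFrame API keeps the query inside Catalyst, which can plan partial aggregation instead of the full per-key shuffle that rdd.groupBy forces.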
.groupBy is known not to be the most efficient approach:
Note: This operation may be very expensive. If you are grouping in order to perform an aggregation (such as a sum or average) over each key, using PairRDDFunctions.aggregateByKey or PairRDDFunctions.reduceByKey will provide much better performance.
Sometimes it is better to use .reduceByKey or .aggregateByKey, as explained here:
While both of these functions will produce the correct answer, the reduceByKey example works much better on a large dataset. That's because Spark knows it can combine output with a common key on each partition before shuffling the data.
Why do .reduceByKey and .aggregateByKey work faster than .groupBy? Because part of the aggregation happens during the map phase, so less data is shuffled between worker nodes during the reduce phase. Here is a good explanation of how aggregateByKey works.
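The difference can be sketched with a word-count-style example (the data is illustrative). groupByKey ships every individual pair across the network, while reduceByKey first combines values within each partition:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[2]").appName("combine").getOrCreate()
val sc = spark.sparkContext

val pairs = sc.parallelize(Seq(("a", 1), ("b", 1), ("a", 1), ("a", 1), ("b", 1)))

// groupByKey: all five (key, value) pairs are shuffled, then summed afterwards.
val slow = pairs.groupByKey().mapValues(_.sum)

// reduceByKey: each partition pre-sums its own keys (map-side combine),
// so at most one record per key per partition crosses the network.
val fast = pairs.reduceByKey(_ + _)
```

Both produce ("a", 3) and ("b", 2); only the shuffle volume differs, which is why the gap grows with the size of the dataset.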

What is the difference between transformations and RDD functions in Spark?

I am reading Spark textbooks and I see transformations and actions, and then I also read about RDD functions, so I am confused. Can anyone explain the basic difference between transformations and Spark RDD functions?
Both are used to change the RDD's data contents and return a new RDD, but I want to know the precise explanation.
Spark RDD functions comprise both transformations and actions. A transformation is a function that derives a new RDD from existing data, and an action is a function that doesn't produce a new RDD but returns an output.
For example:
map, filter, union, etc. are all transformations, as they help in deriving new data from the existing data.
reduce, collect, count, etc. are all actions, as they produce an output rather than new data.
For more info, visit Spark and Jacek.
RDDs support only two types of operations: transformations, which create a new dataset from an existing one, and actions, which return a value to the driver program after running a computation on the dataset.
"RDD functions" is a generic term used in textbooks that covers both of these mechanisms.
For example, map is a transformation that passes each dataset element through a function and returns a new RDD representing the results, while reduce is an action that aggregates all the elements of the RDD using some function and returns the final result to the driver program.
Since Spark's collections are immutable, we can't change the data once an RDD is created.
Transformations are functions that apply to an RDD and produce another RDD as output (e.g. map, flatMap, filter, join, groupBy, ...).
Actions are functions that apply to an RDD and produce non-RDD data (an Array, a List, etc.) as output (e.g. count, saveAsTextFile, foreach, collect, ...).

Confusion about Spark Streaming's transform function

I am a bit confused about the transform function of a DStream. For example, suppose I have the following:
val statusesSorted = statuses.transform(rdd => rdd.sortByKey())
Would the whole DStream be sorted by key, or would the individual RDDs inside the DStream be sorted separately? If it is the latter, how can I sort the keys of the whole DStream?
The transform function in Spark Streaming allows you to perform any Spark transformation on the RDDs within your DStream.
The map transformation does a similar thing, but on an element-by-element basis, whereas transform on a DStream lets you operate on a complete RDD.
To answer your questions:
Would the whole DStream be sorted by key or the individual RDDs inside the DStream would be sorted separately?
It will sort the individual RDDs in your DStream.
If that is indeed the case, how can I sort keys of the whole DStream?
To answer this, understand that Spark processes one batch at a time, and the records in a batch make up an RDD. So sorting the records within a batch (i.e., an RDD) makes sense, because they form the data for one computation; sorting the DStream as a whole is not meaningful, since it is an unbounded sequence of batches.
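A minimal sketch of per-batch sorting with transform (the socket source and batch interval are illustrative). Each micro-batch RDD is sorted independently; there is no global order across batches:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

val conf = new SparkConf().setMaster("local[2]").setAppName("sortedBatches")
val ssc = new StreamingContext(conf, Seconds(10))

val lines  = ssc.socketTextStream("localhost", 9999)
val counts = lines.flatMap(_.split(" ")).map(w => (w, 1)).reduceByKey(_ + _)

// transform applies an arbitrary RDD-to-RDD function to every micro-batch:
// each 10-second RDD is sorted on its own, not the DStream as a whole.
val sortedPerBatch = counts.transform(rdd => rdd.sortByKey())

sortedPerBatch.print()
ssc.start()
ssc.awaitTermination()
```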

Is it possible to iteratively collect each partition of rdd?

I have an RDD which I need to store in MongoDB.
I tried using rdd.map to write each row of the RDD to MongoDB with pymongo, but I encountered a pickle error, as it seems that pickling a pymongo object to ship to the workers is not supported.
Hence, I do an rdd.collect() to bring the RDD to the driver and write it to MongoDB from there.
Is it possible to iteratively collect each partition of the RDD instead? This would minimize the chances of running out of memory at the driver.
Yes, it is possible: you can use RDD.toLocalIterator(). You should remember, though, that it doesn't come for free: each partition requires a separate job, so you should consider persisting your data before using it.
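A minimal sketch of the toLocalIterator pattern (in Scala for illustration; the PySpark RDD.toLocalIterator call is analogous, and the MongoDB write is left as a placeholder):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").appName("localIter").getOrCreate()
val sc = spark.sparkContext

val rdd = sc.parallelize(1 to 1000, numSlices = 10)

// Persist first: toLocalIterator runs one job per partition, and without
// caching each of those jobs would recompute the lineage from scratch.
rdd.cache()

// Only one partition's worth of data is held on the driver at a time,
// so peak driver memory is bounded by the largest partition.
rdd.toLocalIterator.foreach { record =>
  // write `record` to MongoDB here (client code omitted)
}
```

Keeping partitions reasonably small (via repartition) is what actually bounds driver memory with this approach, since the largest single partition still has to fit on the driver.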