Apply groupBy with a UDF based on an increasing function in PySpark - pyspark

I have the following function:
import copy

rn = 0

def check_vals(x, y):
    global rn
    if (y != None) & (int(x) + 1) == int(y):
        return rn + 1
    else:
        # Using copy to deepcopy and not form a shallow one.
        res = copy.copy(rn)
        # Increment so that the next value will start from +1
        rn += 1
        # Return the same value as we want to group using this
        return res + 1
    return 0

@pandas_udf(IntegerType(), functionType=PandasUDFType.GROUPED_AGG)
def check_final(x, y):
    return lambda x, y: check_vals(x, y)
I need to apply this function to the following df:
index  initial_range  final_range
1      1              299
1      300            499
1      500            699
1      800            1000
2      10             99
2      100            199
So I need the following output:
index  min_val  max_val
1      1        699
1      800      1000
2      10       199
Note that, within each grouping value, new ranges are formed: min(initial_range) and max(final_range) are taken over consecutive rows until the sequence is broken, and then the groupBy is applied.
I tried:
w = Window.partitionBy('index').orderBy(sf.col('initial_range'))
df = (df.withColumn('nextRange', sf.lead('initial_range').over(w))
        .fillna(0, subset=['nextRange'])
        .groupBy('index')
        .agg(check_final("final_range", "nextRange").alias('check_1'))
        .withColumn('min_val', sf.min("initial_range").over(Window.partitionBy("check_1")))
        .withColumn('max_val', sf.max("final_range").over(Window.partitionBy("check_1")))
     )
But it didn't work.
Can anyone help me?

I think the pure Spark SQL API can solve your question without any UDF, which might otherwise hurt your Spark performance. Also, I think two window functions are enough to solve this:
df.withColumn(
    'next_row_initial_diff',
    func.col('initial_range') - func.lag('final_range', 1).over(Window.partitionBy('index').orderBy('initial_range'))
).withColumn(
    'group',
    func.sum(
        func.when(func.col('next_row_initial_diff').isNull() | (func.col('next_row_initial_diff') == 1), func.lit(0))
            .otherwise(func.lit(1))
    ).over(
        Window.partitionBy('index').orderBy('initial_range')
    )
).groupBy(
    'group', 'index'
).agg(
    func.min('initial_range').alias('min_val'),
    func.max('final_range').alias('max_val')
).drop(
    'group'
).show(100, False)
Column next_row_initial_diff: like the lead you used, it shifts (lags) the previous row's final_range and checks whether the current row continues the sequence.
Column group: a running sum of the sequence breaks, used to group consecutive ranges within each index partition.
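For reference, here is a minimal, self-contained sketch of the same approach that can be pasted into a PySpark session; it assumes func is an alias for pyspark.sql.functions and rebuilds the sample df from the question:
from pyspark.sql import SparkSession, functions as func, Window

spark = SparkSession.builder.getOrCreate()

# Sample data from the question: (index, initial_range, final_range)
df = spark.createDataFrame(
    [(1, 1, 299), (1, 300, 499), (1, 500, 699), (1, 800, 1000),
     (2, 10, 99), (2, 100, 199)],
    ['index', 'initial_range', 'final_range'],
)

w = Window.partitionBy('index').orderBy('initial_range')

result = (
    df
    # Gap to the previous row's final_range; null for the first row of each index
    .withColumn('next_row_initial_diff',
                func.col('initial_range') - func.lag('final_range', 1).over(w))
    # Running count of breaks in the sequence -> one group id per consecutive run
    .withColumn('group',
                func.sum(func.when(func.col('next_row_initial_diff').isNull()
                                   | (func.col('next_row_initial_diff') == 1), func.lit(0))
                         .otherwise(func.lit(1))).over(w))
    .groupBy('group', 'index')
    .agg(func.min('initial_range').alias('min_val'),
         func.max('final_range').alias('max_val'))
    .drop('group')
)

result.orderBy('index', 'min_val').show()
# Expected: index=1 -> (1, 699) and (800, 1000); index=2 -> (10, 199)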

Related

How to efficiently perform nested-loop in Spark/Scala?

So I have this main dataframe, called main_DF, which contains all measurement values:
main_DF
group  index   width  height
--------------------------------
1      1       21.3   15.2
1      2       11.3   45.1
2      3       23.2   25.2
2      4       26.1   85.3
...
23     986453  26.1   85.3
And another table called selected_DF, derived from main_DF, which contains the start & end index of important rows in main_DF, along with the length (end_index - start_index). The fields start_index and end_index correspond to the field index in main_DF.
selected_DF
group  start_index  end_index  length
--------------------------------------
1      1            154        153
2      236          312        76
3      487          624        137
...
238    17487        18624      1137
Now, for each row in selected_DF, I need to perform filtering for all measurement values between the start_index and end_index. For example, let's say row 1 is for index = 1 until 154. After some filtering, dataframe derived from this row is:
peak_DF
peak_start  peak_end
---------------------
1           12
15          21
27          54
86          91
...
143         150
peak_start and peak_end indicate the areas where width exceeds the threshold. They were obtained by selecting all rows with width > threshold and then checking the position of their index (sorry, it's kind of hard to explain, even with the code).
Then I need to take the measurement value (width) based on peak_DF and calculate the average, making it something like:
peak_DF_summary
peak_start  peak_end  avg_width
--------------------------------
1           12        25.6
15          21        35.7
27          54        24.2
86          91        76.6
...
143         150       13.1
And, lastly, calculate the average of avg_width, and save the result.
After that, the curtain moves to the next row in selected_DF, and so on.
So far I somehow managed to obtain what I want with this code:
val main_DF = spark.read.parquet("hdfs_path_here")
main_DF.createOrReplaceTempView("main_DF")

val selected_DF = spark.read.parquet("hdfs_path_here").collect.par // parallelized array

val final_result_array = scala.collection.mutable.ArrayBuffer.empty[Double] // for final result

selected_DF.foreach { x =>
  // read start/end index directly from the collected Row
  val start_index = x(1)
  val end_index = x(2)

  // obtain peak_start and peak_end (START)
  val temp_df_1 = spark.sql("(SELECT index, width, height FROM main_DF WHERE width > 25 AND index BETWEEN " + start_index + " AND " + end_index + ")")
  // window_spec is assumed to be defined earlier (a Window ordered by index)
  val temp_df_2 = temp_df_1.withColumn("next_index", lead(temp_df_1("index"), 1).over(window_spec)).withColumn("previous_index", lag(temp_df_1("index"), 1).over(window_spec))
  val temp_df_3 = temp_df_2.withColumn("rear_gap", temp_df_2.col("index") - temp_df_2.col("previous_index")).withColumn("front_gap", temp_df_2.col("next_index") - temp_df_2.col("index"))
  val temp_df_4 = temp_df_3.filter("front_gap > 9 or rear_gap > 9")
  val temp_df_5 = temp_df_4.withColumn("next_front_gap", lead(temp_df_4("front_gap"), 1).over(window_spec)).withColumn("next_front_gap_index", lead(temp_df_4("index"), 1).over(window_spec))
  val temp_df_6 = temp_df_5.filter("rear_gap > 9 and next_front_gap > 9").sort("index")
  // obtain peak_start and peak_end (END)

  val peak_DF = temp_df_6.select("index", "next_front_gap_index").toDF("peak_start", "peak_end").collect

  val peak_DF_temp = peak_DF.map { y =>
    spark.sql("(SELECT avg(width) as avg_width FROM main_DF WHERE index BETWEEN " + y(0) + " AND " + y(1) + ")")
  }

  val peak_DF_summary = peak_DF_temp.reduceLeft((dfa, dfb) => dfa.unionAll(dfb))
  val avg_width = peak_DF_summary.agg(mean("avg_width")).as[Double].first

  final_result_array += avg_width
}

spark.catalog.dropTempView("main_DF")
The problem is that the code can only run about halfway (20-30 iterations) before it crashes with java.lang.OutOfMemoryError: Java heap space. It runs fine when I run the iterations one by one, though.
So my questions are:
1. How can there be insufficient memory? I thought the cause would be accumulated memory usage, so I added .unpersist() for every dataframe inside the foreach loop (even though I never call .persist()), to no avail. But shouldn't the memory consumption be reset along with the re-initialization of the variables on each new iteration of the foreach loop?
2. Is there any efficient way to do this kind of calculation? I am doing a nested loop in Spark and I feel this is a very inefficient approach, but so far it's the only way I can get a result.
I'm using CDH 5.7 with Spark 2.1.0. My cluster has 6 nodes with 32GB memory (each) & 40 cores (total). main_DF is based on a 30GB parquet file.
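As an illustration only (not a full replacement, since it leaves the peak-detection step aside), the per-row filtering between start_index and end_index can in principle be expressed as a single range join plus a groupBy instead of a driver-side loop; a rough PySpark sketch, with hypothetical paths:
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical paths; both tables follow the schemas shown above.
main_df = spark.read.parquet("hdfs_path_to_main_DF")
selected_df = spark.read.parquet("hdfs_path_to_selected_DF")

intervals = selected_df.select("start_index", "end_index")

# Non-equi (range) join: attach every measurement row to the interval it
# falls into, so the whole loop becomes one distributed job.
joined = main_df.join(
    intervals,
    (main_df["index"] >= intervals["start_index"])
    & (main_df["index"] <= intervals["end_index"]),
)

# One aggregate per interval, computed in a single pass.
avg_per_interval = (joined.groupBy("start_index", "end_index")
                          .agg(F.avg("width").alias("avg_width")))
avg_per_interval.show()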

Cryptic TypeError: 'decimal.Decimal' object cannot be interpreted as an integer

I am struggling to understand why this function apparently fails in the Jupyter Notebook, but not in the IPython shell:
from decimal import Decimal

def present_value( r, n, fv = None, pmt = None ):
    '''
    Function to compute the Present Value based on interest rate and
    a given future value.

    Arguments accepted
    ------------------
    * r = interest rate,
      which should be given in its original percentage, eg.
      5% instead of 0.05

    * n = number of periods for which the cash flow,
      either as annuity or single flow from one present value

    * fv = future value in dollars,
      if problem is annuity based, leave this empty

    * pmt = each annuity payment in dollars,
      if problem is single cash flow based, leave this empty
    '''
    original_args = [r, n, fv, pmt]
    dec_args = [Decimal( arg ) if arg != None
                else arg
                for arg in original_args
                ]

    if dec_args[3] == None:
        return dec_args[2] / ( ( 1 + ( dec_args[0] / 100 ) )**dec_args[1] )

    elif dec_args[2] == None:
        # annuity_length = range( 1, dec_args[1] + 1 )
        # Not allowed to add a Decimal object
        # with an integer and to use it
        # in the range() function,
        # so we dereference the integer from original_args
        annuity_length = range( 1, original_args[1] + 1 )

        # Apply discounting to each annuity payment made
        # according to number of years left till end
        all_compounded_pmt = [dec_args[3] * ( 1 / ( ( 1 + dec_args[0] / 100 ) ** time_left ) )
                              for time_left in annuity_length
                              ]
        return sum( all_compounded_pmt )
When I imported the module that this function resides in, named functions.py, using from functions import *, and then executed present_value(r=7, n=35, pmt = 11000), I got the error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-93-c1cc587f7e27> in <module>()
----> 1 present_value(r=7, n=35, pmt = 11000)
/path_to_file/functions.py in present_value(r, n, fv, pmt)
73 if dec_args[3] == None:
74 return dec_args[2]/((1 + (dec_args[0]/100))**dec_args[1])
---> 75
76 elif dec_args[2] == None:
77 # annuity_length = range(1, dec_args[1]+1)
TypeError: 'decimal.Decimal' object cannot be interpreted as an integer
but in the IPython shell, evaluating this function works perfectly fine:
In [42]: functions.present_value(r=7, n=35, pmt = 11000)
Out[42]: Decimal('142424.39530474029537')
Can anyone please help me with this really confusing and obscure issue?
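(As an aside, the error text itself is what range() raises when it is handed a Decimal rather than an int, which is what the commented-out line in the function alludes to; a minimal illustration:)
from decimal import Decimal

# range() only accepts integers; passing a Decimal raises the same error
# quoted in the traceback above.
try:
    range(1, Decimal(35) + 1)
except TypeError as exc:
    print(exc)  # 'decimal.Decimal' object cannot be interpreted as an integer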

How to use GroupByKey in Spark to calculate nonlinear-groupBy task

I have a table that looks like
Time  ID  Value1  Value2
1     a   1       4
2     a   2       3
3     a   5       9
1     b   6       2
2     b   4       2
3     b   9       1
4     b   2       5
1     c   4       7
2     c   2       0
Here are the tasks and requirements:
I want to set the column ID as the key, not the column Time, but I don't want to delete the column Time. Is there a way in Spark to set a primary key?
The aggregation function is non-linear, which means you cannot use reduceByKey. All the data with the same key must be shuffled to a single node before calculation. For example, the aggregation function may look like the Nth root of the sum of the values, where N is the number of records (count) for each ID:
output = root(sum(value1), count(*)) + root(sum(value2), count(*))
To make it clear, for ID="a", the aggregated output value should be
output = root(1 + 2 + 5, 3) + root(4 + 3 + 9, 3)
The trailing 3 is because we have 3 records for a. For ID='b', it is:
output = root(6 + 4 + 9 + 2, 4) + root(2 + 2 + 1 + 5, 4)
The combination is non-linear. Therefore, in order to get correct results, all the data with the same "ID" must be in one executor.
I checked UDF or Aggregator in Spark 2.0. Based on my understanding, they all assume a "linear combination".
Is there a way to handle such a nonlinear combination calculation? Especially one that takes advantage of parallel computing with Spark?
The function you use doesn't require any special treatment. You can use plain SQL with a join
import org.apache.spark.sql.Column
import org.apache.spark.sql.functions.{count, lit, sum, pow}
def root(l: Column, r: Column) = pow(l, lit(1) / r)
val out = root(sum($"value1"), count("*")) + root(sum($"value2"), count("*"))
df.groupBy("id").agg(out.alias("outcome")).join(df, Seq("id"))
or window functions:
import org.apache.spark.sql.expressions.Window
val w = Window.partitionBy("id")
val outw = root(sum($"value1").over(w), count("*").over(w)) +
root(sum($"value2").over(w), count("*").over(w))
df.withColumn("outcome", outw)
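The same idea should carry over to PySpark; a rough equivalent sketch of both variants, rebuilding the sample table from the question:
from pyspark.sql import SparkSession, functions as F, Window

spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame(
    [(1, 'a', 1, 4), (2, 'a', 2, 3), (3, 'a', 5, 9),
     (1, 'b', 6, 2), (2, 'b', 4, 2), (3, 'b', 9, 1), (4, 'b', 2, 5),
     (1, 'c', 4, 7), (2, 'c', 2, 0)],
    ['Time', 'ID', 'Value1', 'Value2'],
)

def root(l, r):
    # Nth root expressed as pow(x, 1/N), mirroring the Scala helper above
    return F.pow(l, F.lit(1) / r)

# groupBy + join variant
out = root(F.sum('Value1'), F.count('*')) + root(F.sum('Value2'), F.count('*'))
df.groupBy('ID').agg(out.alias('outcome')).join(df, ['ID']).show()

# window-function variant
w = Window.partitionBy('ID')
outw = (root(F.sum('Value1').over(w), F.count('*').over(w))
        + root(F.sum('Value2').over(w), F.count('*').over(w)))
df.withColumn('outcome', outw).show()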

RankingMetrics in Spark (Scala)

I am trying to use Spark's RankingMetrics.meanAveragePrecision.
However, it seems like it's not working as expected.
val t2 = (Array(0,0,0,0,1), Array(1,1,1,1,1))
val r = sc.parallelize(Seq(t2))
val rm = new RankingMetrics[Int](r)
rm.meanAveragePrecision // Double = 0.2
rm.precisionAt(5) // Double = 0.2
t2 is a tuple where the left array indicates the real values and the right array the predicted values (1 = relevant document, 0 = non-relevant document).
If we calculate the average precision for t2 we get:
(0/1 + 0/2 + 0/3 + 0/4 + 1/5) / 5 = 1/25
But RankingMetrics returns 0.2 for meanAveragePrecision, when it should be 1/25.
Thanks.
I think that the problem is your input data. Since your predicted/actual data contains relevance scores, I think you should be looking at binary classification metrics rather than ranking metrics if you want to evaluate using the 0/1 scores.
RankingMetrics is expecting two lists/arrays of ranked items instead, so if you replace the scores with the document ids it should work as expected. Here is an example in PySpark, with two lists that only match on the 5th item:
from pyspark.mllib.evaluation import RankingMetrics
rdd = sc.parallelize([(['a','b','c','d','z'], ['e','f','g','h','z'])])
metrics = RankingMetrics(rdd)
for i in range(1, 6):
    print i, metrics.precisionAt(i)
print 'meanAveragePrecision', metrics.meanAveragePrecision
print 'Mean precisionAt', sum([0, 0, 0, 0, 0.2]) / 5
Which produced:
1 0.0
2 0.0
3 0.0
4 0.0
5 0.2
meanAveragePrecision 0.04
Mean precisionAt 0.04
Basically, RankingMetrics works with two lists on each row:
First list: the items being recommended; order matters here
Second list: the relevant items; unordered
For example, in PySpark (but it should be equivalent for Scala or Java):
recs_rdd = sc.parallelize([
    (
        ['item1', 'item2', 'item3'],  # Recommendations in order
        ['item3', 'item2'],           # Relevant items - unordered
    ),
    (
        ['item3', 'item1', 'item2'],  # Recommendations in order
        ['item3', 'item2'],           # Relevant items - unordered
    ),
])

from pyspark.mllib.evaluation import RankingMetrics
rankingMetrics = RankingMetrics(recs_rdd)
print("MAP: ", rankingMetrics.meanAveragePrecision)
This prints a MAP value of 0.7083333333333333, which is calculated as
(
    (1/2 + 2/3) / 2
    + (1/1 + 2/3) / 2
) / 2
which equals 0.708333, with
row 1 as (1/2 + 2/3) / 2
    1/2 : 1 item in positions 2 or less is relevant
    2/3 : 2 items in positions 3 or less are relevant
    2   : row 1 has 2 relevant items
row 2 as (1/1 + 2/3) / 2
    1/1 : 1 item in position 1 or less is relevant
    2/3 : 2 items in positions 3 or less are relevant
    2   : row 2 has 2 relevant items
and / 2 because there are 2 rows.
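As a sanity check of that arithmetic, here is a small plain-Python helper (hypothetical, not part of the Spark API) that recomputes each row's average precision and then the mean across rows:
def average_precision(recommended, relevant):
    # Precision at each rank where a relevant item appears,
    # averaged over the number of relevant items.
    relevant = set(relevant)
    hits, precisions = 0, []
    for rank, item in enumerate(recommended, start=1):
        if item in relevant:
            hits += 1
            precisions.append(hits / float(rank))
    return sum(precisions) / len(relevant)

rows = [
    (['item1', 'item2', 'item3'], ['item3', 'item2']),
    (['item3', 'item1', 'item2'], ['item3', 'item2']),
]
aps = [average_precision(recs, rel) for recs, rel in rows]
print(sum(aps) / len(aps))  # 0.7083333333333333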

PySpark : how to split data without randomnize

There is a function that can randomly split data:
trainingRDD, validationRDD, testRDD = RDD.randomSplit([6, 2, 2], seed=0L)
I'm curious whether there is a way to generate the same partitions (train 60 / valid 20 / test 20) but without randomizing (let's just say using the current order of the data: the first 60% = train, the next 20% = valid, and the last 20% for test data).
Is there a way to split the data in a similar fashion, but without randomizing?
The basic issue here is that unless you have an index column in your data, there is no concept of "first rows" and "next rows" in your RDD, it's just an unordered set. If you have an integer index column you could do something like this:
train = RDD.filter(lambda r: r['index'] % 5 < 3)
validation = RDD.filter(lambda r: r['index'] % 5 == 3)
test = RDD.filter(lambda r: r['index'] % 5 == 4)
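If there is no index column at all, one possible workaround (assuming the RDD's current ordering is the one you want to preserve) is to attach a positional index with zipWithIndex first, for example:
# Attach a positional index to each record, then split 60/20/20 by position.
indexed = RDD.zipWithIndex()          # (record, position)
total = indexed.count()

train      = indexed.filter(lambda x: x[1] < 0.6 * total).map(lambda x: x[0])
validation = indexed.filter(lambda x: 0.6 * total <= x[1] < 0.8 * total).map(lambda x: x[0])
test       = indexed.filter(lambda x: x[1] >= 0.8 * total).map(lambda x: x[0])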