How can I achieve the below with multiple when conditions?
from pyspark.sql import functions as F
df = spark.createDataFrame([(5000, 'US'),(2500, 'IN'),(4500, 'AU'),(4500, 'NZ')],["Sales", "Region"])
df.withColumn('Commision',
    F.when(F.col('Region')=='US', F.col('Sales')*0.05).\
    F.when(F.col('Region')=='IN', F.col('Sales')*0.04).\
    F.when(F.col('Region') in ('AU','NZ'), F.col('Sales')*0.04).\
    otherwise(F.col('Sales'))).show()
Use otherwise after when:
df.withColumn('Commision',
F.when(F.col('Region') == 'US', F.col('Sales') * 0.05).otherwise(
F.when(F.col('Region') == 'IN', F.col('Sales') * 0.04).otherwise(
F.when(F.col('Region').isin('AU', 'NZ'), F.col('Sales') * 0.04).otherwise(
F.col('Sales'))))).show()
+-----+------+---------+
|Sales|Region|Commision|
+-----+------+---------+
| 5000| US| 250.0|
| 2500| IN| 100.0|
| 4500| AU| 180.0|
| 4500| NZ| 180.0|
+-----+------+---------+
I think you are missing .isin in the when condition. Also, use F.when only for the first condition and chain the remaining conditions with .when:
from pyspark.sql import functions as F
df = spark.createDataFrame([(5000, 'US'),(2500, 'IN'),(4500, 'AU'),(4500, 'NZ')],["Sales", "Region"])
df.withColumn('Commision',
    F.when(F.col('Region')=='US', F.col('Sales')*0.05).\
    when(F.col('Region')=='IN', F.col('Sales')*0.04).\
    when(F.col('Region').isin('AU','NZ'), F.col('Sales')*0.04).\
    otherwise(F.col('Sales'))).show()
#+-----+------+---------+
#|Sales|Region|Commision|
#+-----+------+---------+
#| 5000| US| 250.0|
#| 2500| IN| 100.0|
#| 4500| AU| 180.0|
#| 4500| NZ| 180.0|
#+-----+------+---------+
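If there are many region/rate pairs, the same chained when/otherwise can also be built programmatically. A minimal sketch, reusing the df created above and assuming a hypothetical rates mapping (not part of the original answers):

from pyspark.sql import functions as F

# Hypothetical mapping of region -> commission rate (illustrative only)
rates = {'US': 0.05, 'IN': 0.04, 'AU': 0.04, 'NZ': 0.04}

# Start the chain with F.when, keep chaining .when, and finish with .otherwise
items = list(rates.items())
region, rate = items[0]
commission = F.when(F.col('Region') == region, F.col('Sales') * rate)
for region, rate in items[1:]:
    commission = commission.when(F.col('Region') == region, F.col('Sales') * rate)
commission = commission.otherwise(F.col('Sales'))

df.withColumn('Commision', commission).show()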
I am trying to fill missing data in a PySpark DataFrame. The DataFrame looks like this:
+---------+---------+-------------------+----+
| latitude|longitude| timestamplast|name|
+---------+---------+-------------------+----+
| | 4.905615|2019-08-01 00:00:00| 1|
|51.819645| |2019-08-01 00:00:00| 1|
| 51.81964| 4.961713|2019-08-01 00:00:00| 2|
| | |2019-08-01 00:00:00| 3|
| 51.82918| 4.911187| | 3|
| 51.82385| 4.901488|2019-08-01 00:00:03| 5|
+---------+---------+-------------------+----+
Within each group of the column "name", I want to either forward fill or backward fill (whichever is necessary) only "latitude" and "longitude" ("timestamplast" should not be filled). How do I do this?
Output will be:
+---------+---------+-------------------+----+
| latitude|longitude| timestamplast|name|
+---------+---------+-------------------+----+
|51.819645| 4.905615|2019-08-01 00:00:00| 1|
|51.819645| 4.905615|2019-08-01 00:00:00| 1|
| 51.81964| 4.961713|2019-08-01 00:00:00| 2|
| 51.82918| 4.911187|2019-08-01 00:00:00| 3|
| 51.82918| 4.911187| | 3|
| 51.82385| 4.901488|2019-08-01 00:00:03| 5|
+---------+---------+-------------------+----+
In Pandas this would be done as such:
df = df.groupby("name")['longitude','latitude'].apply(lambda x : x.ffill().bfill())
How would this be done in Pyspark?
I suggest you use the following two Window Specs:
from pyspark.sql import Window
w1 = Window.partitionBy('name').orderBy('timestamplast')
w2 = w1.rowsBetween(Window.unboundedPreceding, Window.unboundedFollowing)
Where:
w1 is the regular WindowSpec we use to calculate the forward fill, which is the same as the following:
w1 = Window.partitionBy('name').orderBy('timestamplast').rowsBetween(Window.unboundedPreceding,0)
see the following note from the documentation for default window frames:
Note: When ordering is not defined, an unbounded window frame (rowFrame, unboundedPreceding, unboundedFollowing) is used by default. When ordering is defined, a growing window frame (rangeFrame, unboundedPreceding, currentRow) is used by default.
After the forward fill (ffill), we only need to fix the null values at the very front of each partition, if any exist, so for w2 we can use a fixed window frame (between Window.unboundedPreceding and Window.unboundedFollowing). This is more efficient than using a running window frame, since it requires only one aggregate; see SPARK-8638.
Then the x.ffill().bfill() can be handled by using coalesce + last + first based on the above two WindowSpecs:
from pyspark.sql.functions import coalesce, last, first
df.withColumn('latitude_new', coalesce(last('latitude',True).over(w1), first('latitude',True).over(w2))) \
.select('name','timestamplast', 'latitude','latitude_new') \
.show()
+----+-------------------+---------+------------+
|name| timestamplast| latitude|latitude_new|
+----+-------------------+---------+------------+
| 1|2019-08-01 00:00:00| null| 51.819645|
| 1|2019-08-01 00:00:01| null| 51.819645|
| 1|2019-08-01 00:00:02|51.819645| 51.819645|
| 1|2019-08-01 00:00:03| 51.81964| 51.81964|
| 1|2019-08-01 00:00:04| null| 51.81964|
| 1|2019-08-01 00:00:05| null| 51.81964|
| 1|2019-08-01 00:00:06| null| 51.81964|
| 1|2019-08-01 00:00:07| 51.82385| 51.82385|
+----+-------------------+---------+------------+
Edit: to process (ffill+bfill) on multiple columns, use a list comprehension:
cols = ['latitude', 'longitude']
df_new = df.select(
    [c for c in df.columns if c not in cols] +
    [coalesce(last(c, True).over(w1), first(c, True).over(w2)).alias(c) for c in cols]
)
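For reference, a minimal end-to-end sketch of the above on data shaped like the question's sample (the rows below are reconstructed from the question, with the blank cells as nulls):

from pyspark.sql import SparkSession, Window
from pyspark.sql.functions import coalesce, last, first

spark = SparkSession.builder.getOrCreate()

# Sample rows reconstructed from the question; None stands for the blank cells
df = spark.createDataFrame([
    (None,      4.905615, '2019-08-01 00:00:00', 1),
    (51.819645, None,     '2019-08-01 00:00:00', 1),
    (51.81964,  4.961713, '2019-08-01 00:00:00', 2),
    (None,      None,     '2019-08-01 00:00:00', 3),
    (51.82918,  4.911187, None,                  3),
    (51.82385,  4.901488, '2019-08-01 00:00:03', 5),
], ['latitude', 'longitude', 'timestamplast', 'name'])

w1 = Window.partitionBy('name').orderBy('timestamplast')
w2 = w1.rowsBetween(Window.unboundedPreceding, Window.unboundedFollowing)

cols = ['latitude', 'longitude']
df_new = df.select(
    [c for c in df.columns if c not in cols] +
    [coalesce(last(c, True).over(w1), first(c, True).over(w2)).alias(c) for c in cols]
)
df_new.show()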
I got a working solution for either forward fill or backward fill of one target column, "longitude". I guess I could repeat the procedure for "latitude" as well, and then again for the backward fill. Is there a more efficient way?
import sys
from pyspark.sql import Window
from pyspark.sql.functions import last, first

window = Window.partitionBy('name')\
               .orderBy('timestamplast')\
               .rowsBetween(-sys.maxsize, 0)  # this is for forward fill
               # .rowsBetween(0, sys.maxsize) # this is for backward fill

# define the forward-filled column
filled_column = last(df['longitude'], ignorenulls=True).over(window)    # this is for forward fill
# filled_column = first(df['longitude'], ignorenulls=True).over(window) # this is for backward fill
df = df.withColumn('mmsi_filled', filled_column)  # do the fill
Hello,
I want to convert some of the cm values (ref_low = 20 and ref_low < 40, and ref_high > 70 and ref_high < 90) to meters using the formula (cm/100). I tried to use a PySpark UDF:
c_udf = udf(lambda val: val/100 if ref_low = 20 and ref_low < 40 else val)
df = df.withColumn("new", c_udf("ref_low")).withColumn("new", c_udf("ref_high"))
Question 1: How do I add unit = cm to the UDF? I want to keep all other values as they are.
Thanks
I think this is what you want. Spark's built-in when/otherwise is adequate; you just have to express the boolean conditions appropriately.
from pyspark.sql import functions as F
df.withColumn("ref_low", F.when((F.col("unit")=='cm')&((F.col("ref_low")<40)|\
(F.col("ref_low")==20)), F.col("ref_low")/100)\
.otherwise(F.col("ref_low")))\
.withColumn("ref_high", F.when((F.col("unit")=='cm')&((F.col("ref_high")<90)&\
(F.col("ref_high")>70)),F.col("ref_high")/100)\
.otherwise(F.col("ref_high"))).show()
#+-----+-------+--------+
#| unit|ref_low|ref_high|
#+-----+-------+--------+
#| cm| 0.3| 50.0|
#| cm| 40.0| 70.0|
#| cm| 0.2| 0.85|
#| cm| 0.2| 0.85|
#| cm| 0.3| 0.76|
#| cm| 43.0| 65.0|
#|Meter| 0.2| 0.65|
#|Meter| 0.4| 0.68|
#|Meter| 0.5| 0.8|
#+-----+-------+--------+
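If you still want a UDF (as in the original attempt), the unit can be passed in as an extra column argument. This is only a sketch mirroring the boolean logic of the answer above and assuming the same unit/ref_low/ref_high columns; the built-in when/otherwise version is generally preferable since it avoids Python UDF overhead:

from pyspark.sql.functions import udf, col, lit
from pyspark.sql.types import DoubleType

# Hypothetical UDF mirroring the conditions of the answer above:
# ref_low is converted when it equals 20 or is below 40, ref_high when it is between 70 and 90.
def convert(val, unit, which):
    if val is None or unit != 'cm':
        return float(val) if val is not None else None
    if which == 'low' and (val == 20 or val < 40):
        return float(val) / 100
    if which == 'high' and 70 < val < 90:
        return float(val) / 100
    return float(val)

convert_udf = udf(convert, DoubleType())

# 'which' is passed as a literal so a single UDF serves both columns
df = (df
      .withColumn("ref_low",  convert_udf(col("ref_low"),  col("unit"), lit("low")))
      .withColumn("ref_high", convert_udf(col("ref_high"), col("unit"), lit("high"))))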
I have a DataFrame in which I want to groupBy column A and then find different stats like mean, min, max, std dev and quantiles.
I am able to find min, max and mean using the following code:
df.groupBy("A").agg(min("B"), max("B"), mean("B")).show(50, false)
But I am unable to find the quantiles (0.25, 0.5, 0.75). I tried approxQuantile and percentile, but it gives the following error:
error: not found: value approxQuantile
If you have Hive in the classpath, you can use many Hive UDAFs like percentile_approx and stddev_samp; see https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-Built-inAggregateFunctions(UDAF)
You can call these functions using callUDF:
import ss.implicits._
import org.apache.spark.sql.functions.callUDF
val df = Seq(1.0,2.0,3.0).toDF("x")
df.groupBy()
.agg(
callUDF("percentile_approx",$"x",lit(0.5)).as("median"),
callUDF("stddev_samp",$"x").as("stdev")
)
.show()
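For completeness, in PySpark the same aggregate functions can be reached with expr (in recent Spark versions percentile_approx and stddev_samp are available as built-in SQL functions, so Hive is not strictly required). A minimal sketch, assuming a SparkSession named spark:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([(1.0,), (2.0,), (3.0,)], ["x"])

# Call the SQL aggregate functions through expr
df.groupBy().agg(
    F.expr("percentile_approx(x, 0.5)").alias("median"),
    F.expr("stddev_samp(x)").alias("stdev"),
).show()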
Here is code that I have tested on Spark 3.1:
import spark.implicits._  // assuming a SparkSession named spark, as in spark-shell
import org.apache.spark.sql.functions.{lit, percentile_approx}

val simpleData = Seq(("James","Sales","NY",90000,34,10000),
("Michael","Sales","NY",86000,56,20000),
("Robert","Sales","CA",81000,30,23000),
("Maria","Finance","CA",90000,24,23000),
("Raman","Finance","CA",99000,40,24000),
("Scott","Finance","NY",83000,36,19000),
("Jen","Finance","NY",79000,53,15000),
("Jeff","Marketing","CA",80000,25,18000),
("Kumar","Marketing","NY",91000,50,21000)
)
val df = simpleData.toDF("employee_name","department","state","salary","age","bonus")
df.show()
df.groupBy($"department")
.agg(
percentile_approx($"salary",lit(0.5), lit(10000))
)
.show(false)
Output
+-------------+----------+-----+------+---+-----+
|employee_name|department|state|salary|age|bonus|
+-------------+----------+-----+------+---+-----+
| James| Sales| NY| 90000| 34|10000|
| Michael| Sales| NY| 86000| 56|20000|
| Robert| Sales| CA| 81000| 30|23000|
| Maria| Finance| CA| 90000| 24|23000|
| Raman| Finance| CA| 99000| 40|24000|
| Scott| Finance| NY| 83000| 36|19000|
| Jen| Finance| NY| 79000| 53|15000|
| Jeff| Marketing| CA| 80000| 25|18000|
| Kumar| Marketing| NY| 91000| 50|21000|
+-------------+----------+-----+------+---+-----+
+----------+-------------------------------------+
|department|percentile_approx(salary, 0.5, 10000)|
+----------+-------------------------------------+
|Sales |86000 |
|Finance |83000 |
|Marketing |80000 |
+----------+-------------------------------------+
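The original question asked for the 0.25, 0.5 and 0.75 quantiles; on Spark 3.1+ percentile_approx is also exposed in pyspark.sql.functions and accepts a list of percentages, so all three can be computed per group in one aggregation. A sketch in PySpark, assuming the question's DataFrame with grouping column A and numeric column B:

from pyspark.sql import functions as F

# df is assumed to have a grouping column "A" and a numeric column "B", as in the question
df.groupBy("A").agg(
    F.min("B"), F.max("B"), F.mean("B"),
    F.stddev_samp("B").alias("stddev"),
    F.percentile_approx("B", [0.25, 0.5, 0.75], 10000).alias("quantiles"),
).show(50, False)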
I have two DataFrames with one column each (300 rows each):
df_realite.take(1)
[Row(realite=1.0)]
df_proba_classe_1.take(1)
[Row(probabilite=0.6196931600570679)]
I would like to make one DataFrame with the two columns.
I tried:
_ = spark.createDataFrame([df_realite.rdd, df_proba_classe_1.rdd] ,
schema=StructType([ StructField('realite' , FloatType() ) ,
StructField('probabilite' , FloatType() ) ]))
But
_.take(10)
gives me empty values:
[Row(realite=None, probabilite=None), Row(realite=None, probabilite=None)]
There may be a more concise way (or a way without a join), but you could always just give them both an id and join them like:
from pyspark.sql import functions
df1 = df_realite.withColumn('id', functions.monotonically_increasing_id())
df2 = df_proba_classe_1.withColumn('id', functions.monotonically_increasing_id())
df1.join(df2, on='id').select('realite', 'probabilite')
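One caveat not covered above: monotonically_increasing_id only guarantees increasing ids, not consecutive ones, so on DataFrames with more than one partition the ids of the two frames may not line up. A common workaround (a sketch, not from the original answer) is to turn the ids into consecutive row numbers before joining:

from pyspark.sql import Window, functions

# Turn the non-consecutive ids into consecutive row numbers so the join keys align.
# The global (unpartitioned) window is acceptable here because the data is only ~300 rows.
w = Window.orderBy(functions.monotonically_increasing_id())
df1 = df_realite.withColumn('id', functions.row_number().over(w))
df2 = df_proba_classe_1.withColumn('id', functions.row_number().over(w))

df1.join(df2, on='id').select('realite', 'probabilite')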
I think this is what you are looking for. I would only recommend this method if your data is very small, as it is in your case (300 rows), because collect() is not a good practice on large amounts of data. Otherwise, go the join route with dummy columns and do a broadcast join so no shuffle occurs.
from pyspark.sql.functions import *
from pyspark.sql.types import *
df1 = spark.range(10).select(col("id").cast("float"))
df2 = spark.range(10).select(col("id").cast("float"))
l1 = df1.rdd.flatMap(lambda x: x).collect()
l2 = df2.rdd.flatMap(lambda x: x).collect()
list_df = zip(l1, l2)
schema=StructType([ StructField('realite', FloatType() ) ,
StructField('probabilite' , FloatType() ) ])
df = spark.createDataFrame(list_df, schema=schema)
df.show()
+-------+-----------+
|realite|probabilite|
+-------+-----------+
| 0.0| 0.0|
| 1.0| 1.0|
| 2.0| 2.0|
| 3.0| 3.0|
| 4.0| 4.0|
| 5.0| 5.0|
| 6.0| 6.0|
| 7.0| 7.0|
| 8.0| 8.0|
| 9.0| 9.0|
+-------+-----------+
I have a df whose 'products' column contains lists, like below:
+----------+---------+--------------------+
|member_srl|click_day| products|
+----------+---------+--------------------+
| 12| 20161223| [2407, 5400021771]|
| 12| 20161226| [7320, 2407]|
| 12| 20170104| [2407]|
| 12| 20170106| [2407]|
| 27| 20170104| [2405, 2407]|
| 28| 20161212| [2407]|
| 28| 20161213| [2407, 100093]|
| 28| 20161215| [1956119]|
| 28| 20161219| [2407, 100093]|
| 28| 20161229| [7905970]|
| 124| 20161011| [5400021771]|
| 6963| 20160101| [103825645]|
| 6963| 20160104|[3000014912, 6626...|
| 6963| 20160111|[99643224, 106032...|
How do I add a new column product_cnt with the length of the products list? And how do I filter df to get rows with a given products length?
Thanks.
PySpark has a built-in function, size, that achieves exactly what you want: http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.functions.size
To add it as a column, you can simply call it in your select statement.
from pyspark.sql.functions import size
countdf = df.select('*',size('products').alias('product_cnt'))
Filtering works exactly as #titiro89 described. Furthermore, you can use the size function in the filter. This will allow you to bypass adding the extra column (if you wish to do so) in the following way.
filterdf = df.filter(size('products')==given_products_length)
First question:
How do I add a new column product_cnt with the length of the products list?
>>> a = [(12,20161223, [2407,5400021771]),(12,20161226,[7320,2407])]
>>> df = spark.createDataFrame(a,
["member_srl","click_day","products"])
>>> df.show()
+----------+---------+------------------+
|member_srl|click_day| products|
+----------+---------+------------------+
| 12| 20161223|[2407, 5400021771]|
| 12| 20161226|[7320, 2407, 4344]|
+----------+---------+------------------+
You can find a similar example here
>>> from pyspark.sql.types import IntegerType
>>> from pyspark.sql.functions import udf
>>> slen = udf(lambda s: len(s), IntegerType())
>>> df2 = df.withColumn("product_cnt", slen(df.products))
>>> df2.show()
+----------+---------+------------------+-----------+
|member_srl|click_day| products|product_cnt|
+----------+---------+------------------+-----------+
| 12| 20161223|[2407, 5400021771]| 2|
| 12| 20161226|[7320, 2407, 4344]| 3|
+----------+---------+------------------+-----------+
Second question:
And how do I filter df to get rows with a given products length?
You can use the filter function (docs here):
>>> givenLength = 2
>>> df3 = df2.filter(df2.product_cnt==givenLength)
>>> df3.show()
+----------+---------+------------------+-----------+
|member_srl|click_day| products|product_cnt|
+----------+---------+------------------+-----------+
| 12| 20161223|[2407, 5400021771]| 2|
+----------+---------+------------------+-----------+