Let's assume I have the following table:
time | id | value
   1 |  1 |     1
   3 |  1 |     1
   1 |  2 |     2
The result of selecting a regular series would be:
time | id | value
   1 |  1 |     1
   2 |  1 |  NULL
   3 |  1 |     1
   1 |  2 |     2
   2 |  2 |  NULL
   3 |  2 |  NULL
Normally, I would either just store the NULL values or create two additional tables: one holding all times, one holding all ids, and then join.
The problem with the first approach is that the table becomes quite big, because each new id forces me to insert NULL values for all previous times, and each new time forces me to insert NULL values for all ids.
The problem with the second approach is that the join takes too long.
My idea is to implement a custom set-returning function like the crosstab example in contrib/tablefunc.
My question is whether I can expect this to be faster.
I got the following DataFrame:
>>> df.show(50)
+--------------------+-------------+----------------+----+
| User Hash ID| Word|sum(Total Count)|rank|
+--------------------+-------------+----------------+----+
|00095808cdc611fb5...| errors| 5| 1|
|00095808cdc611fb5...| text| 3| 2|
|00095808cdc611fb5...| information| 3| 3|
|00095808cdc611fb5...| department| 2| 4|
|00095808cdc611fb5...| error| 2| 5|
|00095808cdc611fb5...| data| 2| 6|
|00095808cdc611fb5...| web| 2| 7|
|00095808cdc611fb5...| list| 2| 8|
|00095808cdc611fb5...| recognition| 2| 9|
|00095808cdc611fb5...| pipeline| 2| 10|
|000ac87bf9c1623ee...|consciousness| 14| 1|
|000ac87bf9c1623ee...| book| 3| 2|
|000ac87bf9c1623ee...| place| 2| 3|
|000ac87bf9c1623ee...| mystery| 2| 4|
|000ac87bf9c1623ee...| mental| 2| 5|
|000ac87bf9c1623ee...| flanagan| 2| 6|
|000ac87bf9c1623ee...| account| 2| 7|
|000ac87bf9c1623ee...| world| 2| 8|
|000ac87bf9c1623ee...| problem| 2| 9|
|000ac87bf9c1623ee...|       theory|               2|  10|
+--------------------+-------------+----------------+----+
This shows, for each user, the 10 most frequent words they read.
I would like to create a dictionary, which can then be saved to a file, with the following format:
User : <top 1 word>, <top 2 word> .... <top 10 word>
To achieve this, I thought it might be more efficient to cut down the df as much as possible, before converting it. Thus, I tried:
>>> df.groupBy("User Hash ID").agg(collect_list("Word")).show(20)
+--------------------+--------------------+
| User Hash ID| collect_list(Word)|
+--------------------+--------------------+
|00095808cdc611fb5...|[errors, text, in...|
|000ac87bf9c1623ee...|[consciousness, b...|
|0038ccf6e16121e7c...|[potentials, orga...|
|0042bfbafc6646f47...|[fuel, car, consu...|
|00a19396b7bb52e40...|[face, recognitio...|
|00cec95a2c007b650...|[force, energy, m...|
|00df9406cbab4575e...|[food, history, w...|
|00e6e2c361f477e1c...|[image, based, al...|
|01636d715de360576...|[functional, lang...|
|01a778c390e44a8c3...|[trna, genes, pro...|
|01ab9ade07743d66b...|[packaging, car, ...|
|01bdceea066ec01c6...|[anthropology, de...|
|020c643162f2d581b...|[laser, electron,...|
|0211604d339d0b3db...|[food, school, ve...|
|0211e8f09720c7f47...|[privacy, securit...|
|021435b2c4523dd31...|[life, rna, origi...|
|0239620aa740f1514...|[method, image, d...|
|023ad5d85a948edfc...|[web, user, servi...|
|02416836b01461574...|[parts, based, ad...|
|0290152add79ae1d8...|[data, score, de,...|
+--------------------+--------------------+
From here, it should be more straightforward to generate that dictionary. However, I cannot be sure that by using this agg function the words are guaranteed to be in the correct order! That is why I am hesitant and wanted to get some feedback on possibly better options.
Based on the answers provided here - collect_list by preserving order based on another variable -
you can write the query below to make sure you have the top 5 in the correct order:
import pyspark.sql.functions as F

grouped_df = dft.groupby("userid") \
    .agg(F.sort_array(F.collect_list(F.struct("rank", "word"))).alias("collected_list")) \
    .withColumn("sorted_list", F.slice(F.col("collected_list.word"), start=1, length=5)) \
    .drop("collected_list")
grouped_df.show(truncate=False)
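If you want to apply the same idea directly to the DataFrame from the question, a sketch could look like this (it assumes the column names "User Hash ID", "Word" and "rank" exactly as shown above; the variable names are illustrative, and F.slice needs Spark 2.4+):

import pyspark.sql.functions as F

# Collect (rank, Word) structs per user, sort them by rank, then keep only the words.
top_words = df.groupBy("User Hash ID") \
    .agg(F.sort_array(F.collect_list(F.struct("rank", "Word"))).alias("ranked")) \
    .withColumn("top_words", F.slice(F.col("ranked.Word"), start=1, length=10)) \
    .drop("ranked")
top_words.show(truncate=False)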
First of all, if you go from a dataframe to a dictionary, you may run into memory issues, as you will bring all the content of the dataframe to your driver (a dictionary is a Python object, not a Spark object).
You are not that far away from a working solution. I'd do it this way:
from pyspark.sql import functions as F
df.groupBy("User Hash ID").agg(
    F.collect_list(F.struct("Word", "sum(Total Count)", "rank")).alias("data")
)
This will create a data column where you have your 3 fields, aggregated by user id.
Then, to go from a dataframe to a dict object, you can use for example toJSON or the Row method asDict().
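A minimal sketch of that last step, assuming the aggregated dataframe above is stored in a variable called grouped (the variable name and the exact dictionary layout are illustrative):

# Bring the aggregated rows to the driver and build a plain Python dict
# mapping each user hash to its words ordered by rank.
result = {}
for row in grouped.collect():
    d = row.asDict(recursive=True)
    words = [item["Word"] for item in sorted(d["data"], key=lambda x: x["rank"])]
    result[d["User Hash ID"]] = words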
Scala 2.12 and Spark 2.2.1 here. I have the following code:
myDf.show(5)
myDf.withColumn("rank", myDf("rank") * 10)
myDf.withColumn("lastRanOn", current_date())
println("And now:")
myDf.show(5)
When I run this, in the logs I see:
+----+----+-----------+----+
|fizz|buzz|rizzrankrid|rank|
+----+----+-----------+----+
|   2|   5| 1440370637| 128|
|   2|   5| 2114144780|1352|
|   2|   8|  199559784|3233|
|   2|   5| 1522258372| 895|
|   2|   9|  918480276| 882|
+----+----+-----------+----+
And now:
+----+----+-----------+-----+
|fizz|buzz|rizzrankrid| rank|
+----+----+-----------+-----+
|   2|   5| 1440370637| 1280|
|   2|   5| 2114144780|13520|
|   2|   8|  199559784|32330|
|   2|   5| 1522258372| 8950|
|   2|   9|  918480276| 8820|
+----+----+-----------+-----+
So, interesting:
The first withColumn works, transforming each row's rank value by multiplying it by 10.
However, the second withColumn fails; it just adds the current date/time to all rows as a new lastRanOn column.
What do I need to do to get the lastRanOn column addition working?
Your example is probably too simple, because modifying rank should also not work.
withColumn does not update the DataFrame; it creates a new DataFrame.
So you must do:
// if myDf is a var
myDf.show(5)
myDf = myDf.withColumn("rank", myDf("rank") * 10)
myDf = myDf.withColumn("lastRanOn", current_date())
println("And now:")
myDf.show(5)
or for example:
myDf.withColumn("rank", myDf("rank") * 10).withColumn("lastRanOn", current_date()).show(5)
Only then will you have the new column added - after reassigning the new DataFrame reference.
I am a bit new to pyspark. I have a Spark dataframe with about 5 columns and 5 records, and a list of 5 records.
Now I want to add these 5 static records from the list to the existing dataframe using withColumn. I did that, but it's not working.
Any suggestions are greatly appreciated.
Below is my sample:
dq_results=[]
for a in range(0,len(dq_results)):
    dataFile_df=dataFile_df.withColumn("dq_results",lit(dq_results[a]))
    print lit(dq_results[a])
thanks,
Sreeram
Create one dataframe from the list dq_results:
df_list = spark.createDataFrame(dq_results_list, schema=dq_results_col)
Add an id column to df_list (it will be the row id):
df_list_id = df_list.withColumn("id", monotonically_increasing_id())
Add an id column to dataFile_df as well (it will be the row id):
dataFile_df = dataFile_df.withColumn("id", monotonically_increasing_id())
Now we can join the two dataframes, dataFile_df and df_list_id, on id:
dataFile_df.join(df_list_id, "id").show()
The result of this join is the final data frame.
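A consolidated sketch of this approach (it assumes dq_results_list, dq_results_col and dataFile_df exist as above and adds the needed import; note that monotonically_increasing_id produces increasing but not necessarily consecutive ids, so the row ids only line up when both dataframes are partitioned the same way):

from pyspark.sql.functions import monotonically_increasing_id

# Turn the list into a dataframe, give both dataframes a row id, and join on it.
df_list = spark.createDataFrame(dq_results_list, schema=dq_results_col)
df_list_id = df_list.withColumn("id", monotonically_increasing_id())
dataFile_df_id = dataFile_df.withColumn("id", monotonically_increasing_id())

final_df = dataFile_df_id.join(df_list_id, "id").drop("id")
final_df.show()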
withColumn will add a new Column, but I guess you might want to append Rows instead. Try this:
df1 = spark.createDataFrame([(a, a*2, a+3, a+4, a+5) for a in range(5)], "A B C D E".split(' '))
new_data = [[100 + i*j for i in range(5)] for j in range(5)]
df1.unionAll(spark.createDataFrame(new_data)).show()
+---+---+---+---+---+
| A| B| C| D| E|
+---+---+---+---+---+
| 0| 0| 3| 4| 5|
| 1| 2| 4| 5| 6|
| 2| 4| 5| 6| 7|
| 3| 6| 6| 7| 8|
| 4| 8| 7| 8| 9|
|100|100|100|100|100|
|100|101|102|103|104|
|100|102|104|106|108|
|100|103|106|109|112|
|100|104|108|112|116|
+---+---+---+---+---+
I have the following dataframe:
df.show
+----------+-----+
| createdon|count|
+----------+-----+
|2017-06-28|    1|
|2017-06-17|    2|
|2017-05-20|    1|
|2017-06-23|    2|
|2017-06-16|    3|
|2017-06-30|    1|
+----------+-----+
I want to replace the count value with 0 where it is greater than 1, i.e., the resultant dataframe should be:
+----------+-----+
| createdon|count|
+----------+-----+
|2017-06-28|    1|
|2017-06-17|    0|
|2017-05-20|    1|
|2017-06-23|    0|
|2017-06-16|    0|
|2017-06-30|    1|
+----------+-----+
I tried the following expression:
df.withColumn("count", when(($"count" > 1), 0)).show
but the output was
+----------+--------+
| createdon|   count|
+----------+--------+
|2017-06-28|    null|
|2017-06-17|       0|
|2017-05-20|    null|
|2017-06-23|       0|
|2017-06-16|       0|
|2017-06-30|    null|
+----------+--------+
I am not able to understand why null is displayed for the value 1, and how to overcome that. Can anyone help me?
You need to chain otherwise after when to specify the value for the rows where the condition doesn't hold; in your case, it would be the count column itself:
df.withColumn("count", when(($"count" > 1), 0).otherwise($"count"))
This can be done using a udf function too:
def replaceWithZero = udf((col: Int) => if(col > 1) 0 else col) //udf function
df.withColumn("count", replaceWithZero($"count")).show(false) //calling udf function
Note: udf functions should be chosen only when there is no inbuilt function, as they require serialization and deserialization of the column data.
I have an event log in CSV consisting of three columns: timestamp, eventId and userId.
What I would like to do is append a new column nextEventId to the dataframe.
An example eventlog:
eventlog = sqlContext.createDataFrame(Array((20160101, 1, 0),(20160102,3,1),(20160201,4,1),(20160202, 2,0))).toDF("timestamp", "eventId", "userId")
eventlog.show(4)
+---------+-------+------+
|timestamp|eventId|userId|
+---------+-------+------+
| 20160101|      1|     0|
| 20160102|      3|     1|
| 20160201|      4|     1|
| 20160202|      2|     0|
+---------+-------+------+
The desired end result would be:
+---------+-------+------+-----------+
|timestamp|eventId|userId|nextEventId|
+---------+-------+------+-----------+
| 20160101|      1|     0|          2|
| 20160102|      3|     1|          4|
| 20160201|      4|     1|        Nil|
| 20160202|      2|     0|        Nil|
+---------+-------+------+-----------+
So far I've been messing around with sliding windows but can't figure out how to compare 2 rows...
val w = Window.partitionBy("userId").orderBy(asc("timestamp")) //should be a sliding window over 2 rows...
val nextNodes = second($"eventId").over(w) //should work if there are only 2 rows
What you're looking for is lead (or lag). Using the window you already defined:
import org.apache.spark.sql.functions.lead
eventlog.withColumn("nextEventId", lead("eventId", 1).over(w))
For a true sliding window (like a sliding average) you can use the rowsBetween or rangeBetween clauses of the window definition, but that is not really required here. Nevertheless, example usage could look something like this:
val w2 = Window.partitionBy("userId")
  .orderBy(asc("timestamp"))
  .rowsBetween(-1, 0)

avg($"foo").over(w2)