I have a function that returns the rows for a specific date; it looks like this:
def specific_date(date_input):
    specificdate = """select *
                      from vw
                      where date = {date_1}
                   """.format(date_1=date_input)
    day_result = sqlContext.sql(specificdate)
    return day_result
and I have a dataframe that looks like this:
df1_schema = StructType([StructField("Date", StringType(), True),
                         StructField("col1", IntegerType(), True),
                         StructField("id", StringType(), True),
                         StructField("col2", IntegerType(), True),
                         StructField("col3", IntegerType(), True),
                         StructField("col4", IntegerType(), True),
                         StructField("coln", IntegerType(), True)])
df_data = [('2020-08-01',0,'M1',3,3,2,2),('2020-08-02',0,'M1',2,3,0,1),
           ('2020-08-03',0,'M1',3,3,2,3),('2020-08-04',0,'M1',3,3,2,1),
           ('2020-08-01',0,'M2',1,3,3,1),('2020-08-02',0,'M2',-1,3,1,2)]
rdd = sc.parallelize(df_data)
df1 = sqlContext.createDataFrame(df_data, df1_schema)
df1 = df1.withColumn("Date",to_date("Date", 'yyyy-MM-dd'))
df1.show()
+----------+----+---+----+----+----+----+
|      Date|col1| id|col2|col3|col4|coln|
+----------+----+---+----+----+----+----+
|2020-08-01|   0| M1|   3|   3|   2|   2|
|2020-08-02|   0| M1|   2|   3|   0|   1|
|2020-08-03|   0| M1|   3|   3|   2|   3|
|2020-08-04|   0| M1|   3|   3|   2|   1|
|2020-08-01|   0| M2|   1|   3|   3|   1|
|2020-08-02|   0| M2|  -1|   3|   1|   2|
+----------+----+---+----+----+----+----+
df1.createOrReplaceTempView("vw")
Then, if I call the function as specific_date(F.date_add('2020-08-01', 1)),
I want it to give me the dataframe where the dates are '2020-08-02':
+----------+----+---+----+----+----+----+
|      Date|col1| id|col2|col3|col4|coln|
+----------+----+---+----+----+----+----+
|2020-08-02|   0| M1|   2|   3|   0|   1|
|2020-08-02|   0| M2|  -1|   3|   1|   2|
+----------+----+---+----+----+----+----+
I tried many approaches, but none of them seemed to work; any help would be appreciated.
If you really want a function that adds days to a given date and still runs the SQL query:
import datetime

def specific_date(date_input, days_to_add):
    # shift the date in Python, then format it back to 'yyyy-MM-dd' for the SQL
    start_date = datetime.datetime.strptime(date_input, "%Y-%m-%d")
    end_date = start_date + datetime.timedelta(days=days_to_add)
    specificdate = "SELECT * FROM vw WHERE Date = '{date_1}'".format(
        date_1=end_date.strftime("%Y-%m-%d"))
    day_result = sqlContext.sql(specificdate)
    return day_result
Then just call it, providing the date_input and days_to_add:
specific_date('2020-08-01', 1)
which will give you the dataframe
+----------+----+---+----+----+----+----+
|      Date|col1| id|col2|col3|col4|coln|
+----------+----+---+----+----+----+----+
|2020-08-02|   0| M1|   2|   3|   0|   1|
|2020-08-02|   0| M2|  -1|   3|   1|   2|
+----------+----+---+----+----+----+----+
But it would be far better to just use
day_result = df1.filter(df1.Date == '2020-08-02')
If you do not need a function that uses a temp view, you can easily do the same thing like this:
import datetime
from pyspark.sql.functions import col

d = datetime.datetime.strptime("2020-08-01", "%Y-%m-%d")
d += datetime.timedelta(days=1)
df1.where(col('Date') == d).show()
+----------+----+---+----+----+----+----+
|      Date|col1| id|col2|col3|col4|coln|
+----------+----+---+----+----+----+----+
|2020-08-02|   0| M1|   2|   3|   0|   1|
|2020-08-02|   0| M2|  -1|   3|   1|   2|
+----------+----+---+----+----+----+----+
One issue with the code you provided is that the Spark function F.date_add returns a Column object, which cannot be spliced into a SQL string or compared directly the way your function expects.
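If you do want Spark (rather than Python) to do the day arithmetic, a minimal sketch, assuming the df1 defined above, is to keep date_add inside a DataFrame expression:

from pyspark.sql import functions as F

# date_add works on Column expressions, so wrap the string literal with lit()
df1.where(F.col('Date') == F.date_add(F.lit('2020-08-01'), 1)).show()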
This question was closed as a duplicate of Round double values and cast as integers.
I have a df that looks like this:
TEST_schema = StructType([StructField("date", StringType(), True),
                          StructField("col1", FloatType(), True)])
TEST_data = [('2020-08-01',1.22),('2020-08-02',1.15),('2020-08-03',5.4),
             ('2020-08-04',2.6),('2020-08-05',3.5),('2020-08-06',2.2),
             ('2020-08-07',2.7),('2020-08-08',-1.6),('2020-08-09',1.3)]
rdd3 = sc.parallelize(TEST_data)
TEST_df = sqlContext.createDataFrame(TEST_data, TEST_schema)
TEST_df = TEST_df.withColumn("date", to_date("date", 'yyyy-MM-dd'))
TEST_df.show()
+----------+-----+
|      date| col1|
+----------+-----+
|2020-08-01| 1.22|
|2020-08-02| 1.15|
|2020-08-03|  5.4|
|2020-08-04|  2.6|
|2020-08-05|  3.5|
|2020-08-06|  2.2|
|2020-08-07|  2.7|
|2020-08-08| -1.6|
|2020-08-09|  1.3|
+----------+-----+
Logic: round col1 to the nearest integer, then take max(rounded value, 0) and return it as an integer.
The resulting df looks like this:
+----------+----+----+
|      date|col1|want|
+----------+----+----+
|2020-08-01| 1.2|   1|
|2020-08-02| 1.1|   1|
|2020-08-03| 5.4|   5|
|2020-08-04| 2.6|   3|
|2020-08-05| 3.5|   4|
|2020-08-06| 2.2|   2|
|2020-08-07| 2.7|   3|
|2020-08-08|-1.6|   0|
|2020-08-09| 1.3|   1|
+----------+----+----+
The duplicate question covers the rounding-and-casting part; note that rounding alone does not clamp negatives (see -1.6 below), so the max(..., 0) step still has to be added.
from pyspark.sql.functions import expr

data = [('2020-08-01',1.22),('2020-08-02',1.15),('2020-08-03',5.4),
        ('2020-08-04',2.6),('2020-08-05',3.5),('2020-08-06',2.2),
        ('2020-08-07',2.7),('2020-08-08',-1.6),('2020-08-09',1.3)]
df = spark.createDataFrame(data, ['date', 'col1'])
df.withColumn('want', expr('ROUND(col1, 0)').cast('int')).show()
+----------+----+----+
|      date|col1|want|
+----------+----+----+
|2020-08-01|1.22|   1|
|2020-08-02|1.15|   1|
|2020-08-03| 5.4|   5|
|2020-08-04| 2.6|   3|
|2020-08-05| 3.5|   4|
|2020-08-06| 2.2|   2|
|2020-08-07| 2.7|   3|
|2020-08-08|-1.6|  -2|
|2020-08-09| 1.3|   1|
+----------+----+----+
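To also apply the max(rounded value, 0) clamp in one expression, a sketch using greatest, assuming the same df as above:

from pyspark.sql import functions as F

# round to the nearest integer, then clamp negatives to zero
df.withColumn('want', F.greatest(F.round('col1', 0), F.lit(0)).cast('int')).show()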
First, check whether the value is less than zero, using the when method from pyspark.sql.functions: if the value in the column is less than zero, replace it with zero; otherwise keep the actual value. Then round and cast to int:
from pyspark.sql import functions as F
TEST_df.withColumn("want", F.bround(F.when(TEST_df["col1"] < 0, 0).otherwise(TEST_df["col1"])).cast("int")).show()
+----------+----+----+
|      date|col1|want|
+----------+----+----+
|2020-08-01|1.22|   1|
|2020-08-02|1.15|   1|
|2020-08-03| 5.4|   5|
|2020-08-04| 2.6|   3|
|2020-08-05| 3.5|   4|
|2020-08-06| 2.2|   2|
|2020-08-07| 2.7|   3|
|2020-08-08|-1.6|   0|
|2020-08-09| 1.3|   1|
+----------+----+----+
What I want is to create a new row based on the dataframe I have, which looks like the following:
TEST_schema = StructType([StructField("date", StringType(), True),
                          StructField("col1", IntegerType(), True),
                          StructField("col2", IntegerType(), True)])
TEST_data = [('2020-08-17',0,0),('2020-08-18',2,1),('2020-08-19',0,2),
             ('2020-08-20',3,0),('2020-08-21',4,2),('2020-08-22',1,3),
             ('2020-08-23',2,2),('2020-08-24',1,2),('2020-08-25',3,1)]
rdd3 = sc.parallelize(TEST_data)
TEST_df = sqlContext.createDataFrame(TEST_data, TEST_schema)
TEST_df = TEST_df.withColumn("date", to_date("date", 'yyyy-MM-dd'))
TEST_df.show()
+----------+----+----+
|      date|col1|col2|
+----------+----+----+
|2020-08-17|   0|   0|
|2020-08-18|   2|   1|
|2020-08-19|   0|   2|
|2020-08-20|   3|   0|
|2020-08-21|   4|   2|
|2020-08-22|   1|   3|
|2020-08-23|   2|   2|
|2020-08-24|   1|   2|
|2020-08-25|   3|   1|
+----------+----+----+
Let's say I want to calculate a row for today's date, which is current_date(). I want to calculate col1 as follows: if col1 > 0, return col1 + col2, otherwise 0, evaluated where date == yesterday's date, which is current_date() - 1. And calculate col2 as follows: coalesce(lag(col2), 0).
So my resulting dataframe would be something like this:
+----------+----+----+
|      date|col1|want|
+----------+----+----+
|2020-08-17|   0|   0|
|2020-08-18|   2|   0|
|2020-08-19|   0|   1|
|2020-08-20|   3|   2|
|2020-08-21|   4|   0|
|2020-08-22|   1|   2|
|2020-08-23|   2|   3|
|2020-08-24|   1|   2|
|2020-08-25|   3|   2|
|2020-08-26|   4|   1|
+----------+----+----+
This would be easy with the withColumn (column-based) method, but I want to know how to do it with rows. My initial idea was to calculate by column first, then transpose to make it row-based.
IIUC, you can try the following:
Step-1: create a new dataframe with a single row that has current_date() as date and nulls for col1 and col2, then union it back to TEST_df (Note: change all 2020-08-26 to current_date() in your final code):
df_new = TEST_df.union(spark.sql("select '2020-08-26', null, null"))
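If the union complains about mismatched column types (after to_date, TEST_df.date is a date, while the SQL literal above is a string), a sketch that builds the one-row dataframe with explicit casts keeps the schema intact:

from pyspark.sql import functions as F

# one row whose column types match TEST_df's schema
row_today = spark.range(1).select(
    F.lit('2020-08-26').cast('date').alias('date'),  # use current_date() in real code
    F.lit(None).cast('int').alias('col1'),
    F.lit(None).cast('int').alias('col2'))
df_new = TEST_df.union(row_today)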
Edit: in practice the data are partitioned and each partition should get one new row, so you can do something like the following:
from pyspark.sql.functions import current_date, col, lit

# columns used for Window partitionBy (placeholders; not present in the toy TEST_df)
cols_part = ['pcol1', 'pcol2']

df_today = TEST_df.select([
    (current_date() if c == 'date' else col(c) if c in cols_part else lit(None)).alias(c)
    for c in TEST_df.columns
]).distinct()

df_new = TEST_df.union(df_today)
Step-2: do calculations to fill the above null values:
df_new.selectExpr(
    "date",
    "IF(date < '2020-08-26', col1, lag(IF(col1>0, col1+col2, 0)) over (order by date)) as col1",
    "lag(col2, 1, 0) over (order by date) as col2"
).show()
+----------+----+----+
|      date|col1|col2|
+----------+----+----+
|2020-08-17|   0|   0|
|2020-08-18|   2|   0|
|2020-08-19|   0|   1|
|2020-08-20|   3|   2|
|2020-08-21|   4|   0|
|2020-08-22|   1|   2|
|2020-08-23|   2|   3|
|2020-08-24|   1|   2|
|2020-08-25|   3|   2|
|2020-08-26|   4|   1|
+----------+----+----+
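For reference, a sketch of the same Step-2 logic with the DataFrame API instead of selectExpr, assuming the df_new built above:

from pyspark.sql import Window
from pyspark.sql import functions as F

w = Window.orderBy('date')
df_new.select(
    'date',
    F.when(F.col('date') < '2020-08-26', F.col('col1'))
     .otherwise(F.lag(F.when(F.col('col1') > 0, F.col('col1') + F.col('col2'))
                       .otherwise(0)).over(w))
     .alias('col1'),
    F.lag('col2', 1, 0).over(w).alias('col2')
).show()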
I need to check a condition over a window:
- If the column IND_DEF is 20, I want to change the value of the column premium for the whole window this record belongs to, setting it to 1.
My initial Dataframe looks like this:
+--------+----+-------+-----+-------+
|policyId|name|premium|state|IND_DEF|
+--------+----+-------+-----+-------+
|       1|  BK|   null|   KT|     40|
|       1|  AK|    -31| null|     30|
|       1|  VZ|   null|   IL|     20|
|       2|  VK|     32|   LI|      7|
|       2|  CK|     25|  YNZ|     10|
|       2|  CK|      0| null|      5|
|       2|  VK|     30|   IL|     25|
+--------+----+-------+-----+-------+
And I want to achieve this:
+--------+----+-------+-----+-------+
|policyId|name|premium|state|IND_DEF|
+--------+----+-------+-----+-------+
|       1|  BK|      1|   KT|     40|
|       1|  AK|      1| null|     30|
|       1|  VZ|      1|   IL|     20|
|       2|  VK|     32|   LI|      7|
|       2|  CK|     25|  YNZ|     10|
|       2|  CK|      0| null|      5|
|       2|  VK|     30|   IL|     25|
+--------+----+-------+-----+-------+
I am trying the following code, but it does not work:
val df_946 = Seq[(Int, String, Integer, String, Int)](
    (1,"VZ",null,"IL",20), (1,"AK",-31,null,30), (1,"BK",null,"KT",40),
    (2,"CK",0,null,5), (2,"CK",25,"YNZ",10), (2,"VK",30,"IL",25), (2,"VK",32,"LI",7)
  ).toDF("policyId", "name", "premium", "state", "IND_DEF").orderBy("policyId")

val winSpec = Window.partitionBy("policyId").orderBy("policyId")

val df_947 = df_946.withColumn("premium",
  when(col("IND_DEF") === 20, lit(1).over(winSpec)).otherwise(col("premium")))
You can generate an array of IND_DEF values via collect_list for each window partition and recreate column premium based on the array_contains condition:
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._
import spark.implicits._
val df = Seq(
  (1, None, 40),
  (1, Some(-31), 30),
  (1, None, 20),
  (2, Some(32), 7),
  (2, Some(30), 10)
).toDF("policyId", "premium", "IND_DEF")

val win = Window.partitionBy($"policyId")

df.
  withColumn("indList", collect_list($"IND_DEF").over(win)).
  withColumn("premium", when(array_contains($"indList", 20), 1).otherwise($"premium")).
  drop($"indList").
  show
// +--------+-------+-------+
// |policyId|premium|IND_DEF|
// +--------+-------+-------+
// |       1|      1|     40|
// |       1|      1|     30|
// |       1|      1|     20|
// |       2|     32|      7|
// |       2|     30|     10|
// +--------+-------+-------+
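For anyone doing this in PySpark, a sketch of the same collect_list / array_contains idea, assuming a df with the same columns:

from pyspark.sql import Window
from pyspark.sql import functions as F

win = Window.partitionBy('policyId')
result = (df
    .withColumn('indList', F.collect_list('IND_DEF').over(win))
    .withColumn('premium', F.when(F.array_contains('indList', 20), F.lit(1))
                            .otherwise(F.col('premium')))
    .drop('indList'))
result.show()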
I'm trying to extend the results of my previous question, but haven't been able to figure out how to achieve my new goal.
Before, I wanted to key on either a flag match or a string match. Now, I want to create a unique grouping key from a run starting with either a flag being true or the first string match preceding a run of true flag values.
Here's some example data:
val msgList = List("b", "f")
val df = spark.createDataFrame(Seq(
    ("a", false), ("b", false), ("c", false), ("b", false), ("c", true),
    ("d", false), ("e", true), ("f", true), ("g", false)))
  .toDF("message", "flag")
  .withColumn("index", monotonically_increasing_id)
df.show
+-------+-----+-----+
|message| flag|index|
+-------+-----+-----+
|      a|false|    0|
|      b|false|    1|
|      c|false|    2|
|      b|false|    3|
|      c| true|    4|
|      d|false|    5|
|      e| true|    6|
|      f| true|    7|
|      g|false|    8|
+-------+-----+-----+
The desired output is something equivalent to either key1 or key2:
+-------+-----+-----+-----+-----+
|message| flag|index| key1| key2|
+-------+-----+-----+-----+-----+
|      a|false|    0|    0| null|
|      b|false|    1|    1|    1|
|      c|false|    2|    1|    1|
|      b|false|    3|    1|    1|
|      c| true|    4|    1|    1|
|      d|false|    5|    2| null|
|      e| true|    6|    3|    2|
|      f| true|    7|    3|    2|
|      g|false|    8|    4| null|
+-------+-----+-----+-----+-----+
From the answer to my previous question, I already have a precursor:
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._
import spark.implicits._

val checkMsg = udf { (s: String) => s != null && msgList.exists(s.contains(_)) }
val df2 = df.withColumn("message_match", checkMsg($"message"))
  .withColumn("match_or_flag", when($"message_match" || $"flag", 1).otherwise(0))
  // lead with offset -1 reads the previous row (like lag), defaulting to 1
  .withColumn("lead", lead("match_or_flag", -1, 1).over(Window.orderBy("index")))
  .withColumn("switched", when($"match_or_flag" =!= $"lead", $"index"))
  // carry the last non-null switch index forward as the key
  .withColumn("base_key", last("switched", ignoreNulls = true)
    .over(Window.orderBy("index").rowsBetween(Window.unboundedPreceding, 0)))
df2.show
+-------+-----+-----+-------------+-------------+----+--------+--------+
|message| flag|index|message_match|match_or_flag|lead|switched|base_key|
+-------+-----+-----+-------------+-------------+----+--------+--------+
|      a|false|    0|        false|            0|   1|       0|       0|
|      b|false|    1|         true|            1|   0|       1|       1|
|      c|false|    2|        false|            0|   1|       2|       2|
|      b|false|    3|         true|            1|   0|       3|       3|
|      c| true|    4|        false|            1|   1|    null|       3|
|      d|false|    5|        false|            0|   1|       5|       5|
|      e| true|    6|        false|            1|   0|       6|       6|
|      f| true|    7|         true|            1|   1|    null|       6|
|      g|false|    8|        false|            0|   1|       8|       8|
+-------+-----+-----+-------------+-------------+----+--------+--------+
base_key here is somewhat close to key1 above, but it assigns separate keys to row 1 and rows 3-4. I want rows 1-4 to get a single key, based on the fact that row 1 contains the first msgList match within or preceding a run of flag = true.
Looking at the Spark window function API, it looks like there might be some way to use rangeBetween to accomplish this as of Spark 2.3.0, but the docs are bare enough that I haven't been able to figure out how to make it work.
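For reference, rangeBetween defines the window frame by value offsets on the ordering column rather than by row counts; a minimal PySpark syntax sketch, an illustration of the API only and not a solution to the grouping problem above:

from pyspark.sql import Window
from pyspark.sql import functions as F

# frame = all rows whose index value lies within 3 below the current row's index
w = Window.orderBy('index').rangeBetween(-3, 0)
df.withColumn('flags_in_range', F.sum(F.col('flag').cast('int')).over(w))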
I have 2 dataframes and I wanted to do .filter($"item" === "a") while keeping the full running sequence of "S/N" values.
I tried the following, but it ended up with additional rows when I used union. Is there a way to union 2 dataframes without creating additional rows?
var DF1 = Seq(
    ("1","a",2),
    ("2","a",3),
    ("3","b",3),
    ("4","b",4),
    ("5","a",2)).toDF("S/N", "item", "value")

var DF2 = Seq(
    ("1","a",2),
    ("2","a",3),
    ("3","b",3),
    ("4","b",4),
    ("5","a",2)).toDF("S/N", "item", "value")

DF2 = DF2.filter($"item" === "a")
var DF3 = DF1.withColumn("item", lit(0)).withColumn("value", lit(0))
DF1.show()
+---+----+-----+
|S/N|item|value|
+---+----+-----+
|  1|   a|    2|
|  2|   a|    3|
|  3|   b|    3|
|  4|   b|    4|
|  5|   a|    2|
+---+----+-----+
DF2.show()
+---+----+-----+
|S/N|item|value|
+---+----+-----+
|  1|   a|    2|
|  2|   a|    3|
|  5|   a|    2|
+---+----+-----+
DF3.show()
+---+----+-----+
|S/N|item|value|
+---+----+-----+
|  1|   0|    0|
|  2|   0|    0|
|  3|   0|    0|
|  4|   0|    0|
|  5|   0|    0|
+---+----+-----+
DF2.union(DF3).show()
+---+----+-----+
|S/N|item|value|
+---+----+-----+
|  1|   a|    2|
|  2|   a|    3|
|  5|   a|    2|
|  1|   0|    0|
|  2|   0|    0|
|  3|   0|    0|
|  4|   0|    0|
|  5|   0|    0|
+---+----+-----+
Left outer join your S/Ns with filtered dataframe, then use coalesce to get rid of nulls:
val DF3 = DF1.select("S/N")
val DF4 = DF3.join(DF2, Seq("S/N"), joinType = "leftouter")
  .withColumn("item", coalesce($"item", lit(0)))
  .withColumn("value", coalesce($"value", lit(0)))
DF4.show
+---+----+-----+
|S/N|item|value|
+---+----+-----+
|  1|   a|    2|
|  2|   a|    3|
|  3|   0|    0|
|  4|   0|    0|
|  5|   a|    2|
+---+----+-----+
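If you are working in PySpark instead, a sketch of the same left-outer-join plus coalesce idea, assuming df1 and df2 are the PySpark counterparts of DF1 and the filtered DF2:

from pyspark.sql import functions as F

sn = df1.select('S/N')
df4 = (sn.join(df2, ['S/N'], 'left_outer')
         .withColumn('item', F.coalesce(F.col('item'), F.lit('0')))  # item is a string column
         .withColumn('value', F.coalesce(F.col('value'), F.lit(0))))
df4.show()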