I have two dataframes structured the following way:
|Source|#Users|#Clicks|Hour|Type
and
Type|Total # Users|Hour
I'd like to join these dataframes based on hour; however, the first dataframe is at a deeper granularity than the second and therefore has more rows. Basically I want a dataframe where I have
|Source|#Users|#Clicks|Hour|Type|Total # Users
where the Total # Users comes from the second dataframe. Any suggestions? I think maybe I want to use map?
Edit:
Here's an example
DF1
|Source|#Users|#Clicks|Hour|Type
|Prod1 |50 |3 |01 |Internet
|Prod2 |10 |2 |07 |iOS
|Prod3 |1 |50 |07 |Internet
|Prod2 |3 |2 |07 |Internet
|Prod3 |8 |2 |05 |Internet
DF2
|Type |Total #Users|Hour
|Internet|100 |01
|iOS |500 |01
|Internet|300 |07
|Internet|15 |05
|iOS |20 |07
Result
|Source|#Users|#Clicks|Hour|Type |Total #Users
|Prod1 |50 |3 |01 |Internet|100
|Prod2 |10 |2 |07 |iOS |20
|Prod3 |1 |50 |07 |Internet|300
|Prod2 |3 |2 |07 |Internet|300
|Prod3 |8 |2 |05 |Internet|15
That's a left join you're trying to do:
df1.join(df2, (df1.Hour == df2.Hour) & (df1.Type == df2.Type), "left_outer")
Short version: a left join keeps all the rows from df1 and joins each one, on the condition, with the matching rows of df2 if there is a match (nulls if not, duplicated rows if there are multiple matches).
More info on PySpark joins
More info on SQL join types
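As a side note, when the join keys share the same names in both frames, joining on the column names keeps a single Hour and Type column in the result instead of duplicating them. Here is a minimal Scala sketch with hypothetical frames mirroring DF1 and DF2 above (the question doesn't name a language; in PySpark the same call takes a Python list of column names):
import spark.implicits._

// hypothetical frames shaped like the question's DF1 and DF2
val df1 = Seq(
  ("Prod1", 50, 3, "01", "Internet"),
  ("Prod2", 10, 2, "07", "iOS")
).toDF("Source", "#Users", "#Clicks", "Hour", "Type")

val df2 = Seq(
  ("Internet", 100, "01"),
  ("iOS", 20, "07")
).toDF("Type", "Total #Users", "Hour")

// joining on the shared column names avoids duplicate Hour/Type columns
val joined = df1.join(df2, Seq("Hour", "Type"), "left_outer")
joined.show(false)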
Hi everyone, I'm new to Spark with Scala. I want to find the nearest values by partition using Spark Scala. My input is something like this:
In the first group (id = 1), for example, value1 (3) falls between 2 and 7 in the value2 column.
+--------+----------+----------+
|id |value1 |value2 |
+--------+----------+----------+
|1 |3 |1 |
|1 |3 |2 |
|1 |3 |7 |
|2 |4 |2 |
|2 |4 |3 |
|2 |4 |8 |
|3 |5 |3 |
|3 |5 |6 |
|3 |5 |7 |
|3 |5 |8 |
My output should look like this:
+--------+----------+----------+
|id |value1 |value2 |
+--------+----------+----------+
|1 |3 |2 |
|1 |3 |7 |
|2 |4 |3 |
|2 |4 |8 |
|3 |5 |3 |
|3 |5 |6 |
Can someone guide me on how to resolve this, please?
Since you appear to want to learn, instead of providing a code answer I've given you pseudocode and references so you can find the answer for yourself (a rough sketch of these steps is also shown below):
1. Group the elements (select id, value1 and aggregate value2 with collect_list) so you can collect all the value2 values into an array.
2. Add (concat) value1 to the collect_list array and sort the array.
3. Find (array_position) value1 in the sorted array.
4. Splice the array, retrieving the value before and the value after the result of array_position.
5. If the array has fewer than 3 elements, do error handling.
6. The last value and the first value of that spliced slice are your 'closest numbers'.
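For reference, here is a rough, untested Scala sketch of those steps; it assumes Spark 2.4+ (for array_sort, array_position and element_at) and an input frame named df with columns id, value1 and value2:
import org.apache.spark.sql.functions._
import spark.implicits._

val bracketed = df
  .groupBy("id", "value1")
  // 1. collect all value2 values of the group into an array
  .agg(collect_list("value2").as("values"))
  // 2. append value1 to the array and sort it
  .withColumn("values", array_sort(concat($"values", array($"value1"))))
  // 3. 1-based position of value1 in the sorted array
  .withColumn("pos", array_position($"values", $"value1"))
  // 4. the neighbours just before and just after that position
  .withColumn("below", when($"pos" > 1, element_at($"values", ($"pos" - 1).cast("int"))))
  .withColumn("above", element_at($"values", ($"pos" + 1).cast("int"))) // null past the end
  .select("id", "value1", "below", "above")

bracketed.show(false)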
You will need window functions for this.
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

val window = Window
  .partitionBy("id", "value1")
  .orderBy(asc("value2"))
// a second window without ordering, so min() below is computed over the
// whole group rather than as a running minimum up to the current row
val groupWindow = Window.partitionBy("id", "value1")
val result = df
  .withColumn("prev", lag("value2", 1).over(window))
  .withColumn("next", lead("value2", 1).over(window))
  .withColumn("dist_prev", col("value2").minus(col("prev")))
  .withColumn("dist_next", col("next").minus(col("value2")))
  .withColumn("min", min(col("dist_prev")).over(groupWindow))
  .filter(col("dist_prev") === col("min") || col("dist_next") === col("min"))
  .drop("prev", "next", "dist_prev", "dist_next", "min")
I haven't tested it, so think about it more as an illustration of the concept than a working ready-to-use example.
Here is what's going on here:
First, create a window that describes your grouping rule: we want the rows grouped by the first two columns, and sorted by the third one within each group.
Next, add prev and next columns to the dataframe that contain the value of value2 column from previous and next row within the group respectively. (prev will be null for the first row in the group, and next will be null for the last row – that is ok).
Add dist_prev and dist_next to contain the distance between value2 and prev and next value respectively. (Note that dist_prev for each row will have the same value as dist_next for the previous row).
Find the minimum value of dist_prev within each group and add it as the min column; this uses a second window without ordering (groupWindow) so the minimum is taken over the whole group rather than as a running minimum. (Note that the minimum value of dist_next is the same by construction, so we only need one column here.)
Filter the rows, selecting those that have the minimum value in either dist_next or dist_prev. This finds the tightest pair unless there are multiple rows with the same distance from each other – this case was not accounted for in your question, so we don't know what kind of behavior you want in this case. This implementation will simply return all of these rows.
Finally, drop all extra columns that were added to the dataframe to return it to its original shape.
I have these 4 spark dataframes:
order,device,count_1
101,201,2
102,202,4
order,device,count_2
101,201,10
103,203,100
order,device,count_3
104,204,111
103,203,10
order,device,count_4
101,201,4
104,204,11
I want to create a resultant dataframe as:
order,device,count_1,count_2,count_3,count_4
101,201,2,10,,4
102,202,4,,,
103,203,,100,10,
104,204,,,111,11
Is this a case of UNION, JOIN, or APPEND? How do I get the final resultant df?
You can think of UNION as combining tables by rows, so the number of rows will likely increase. JOIN combines tables by columns. I'm not sure what you mean by APPEND, but in this case, you would want JOIN.
Try:
val df1 = Seq((101,201,2), (102,202,4)).toDF("order" ,"device", "count_1")
val df2 = Seq((101,201,10), (103,203,100)).toDF("order" ,"device", "count_2")
val df3 = Seq((104,204,111), (103,203,10)).toDF("order" ,"device", "count_3")
val df4 = Seq((101,201,4), (104,204,11)).toDF("order" ,"device", "count_4")
val df12 = df1.join(df2, Seq("order", "device"),"fullouter")
df12.show(false)
val df123 = df12.join(df3, Seq("order", "device"),"fullouter")
df123.show(false)
val df1234 = df123.join(df4, Seq("order", "device"),"fullouter")
df1234.show(false)
returns:
+-----+------+-------+-------+-------+-------+
|order|device|count_1|count_2|count_3|count_4|
+-----+------+-------+-------+-------+-------+
|101 |201 |2 |10 |null |4 |
|102 |202 |4 |null |null |null |
|103 |203 |null |100 |10 |null |
|104 |204 |null |null |111 |11 |
+-----+------+-------+-------+-------+-------+
This was done in Scala; it should be easy to adapt to PySpark.
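If more count tables turn up later, the same full outer join can be folded over a list of frames instead of chaining it by hand; a small sketch reusing df1 to df4 from above:
// fold the full outer join on (order, device) over the remaining frames
val combined = Seq(df2, df3, df4).foldLeft(df1) { (acc, next) =>
  acc.join(next, Seq("order", "device"), "fullouter")
}
combined.show(false)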
I have a Spark dataframe which has two columns: one is id and the second is col_datetime, as you can see in the dataframe given below. How can I filter the dataframe based on col_datetime to get the oldest month's data? I want to achieve the result dynamically because I have 20-odd dataframes.
INPUT DF:-
import spark.implicits._
val data = Seq((1 , "2020-07-02 00:00:00.0"),(2 , "2020-08-02 00:00:00.0"),(3 , "2020-09-02 00:00:00.0"),(4 , "2020-10-02 00:00:00.0"),(5 , "2020-11-02 00:00:00.0"),(6 , "2020-12-02 00:00:00.0"),(7 , "2021-01-02 00:00:00.0"),(8 , "2021-02-02 00:00:00.0"),(9 , "2021-03-02 00:00:00.0"),(10, "2021-04-02 00:00:00.0"),(11, "2021-05-02 00:00:00.0"),(12, "2021-06-02 00:00:00.0"),(13, "2021-07-22 00:00:00.0"))
val dfFromData1 = data.toDF("ID","COL_DATETIME").withColumn("COL_DATETIME",to_timestamp(col("COL_DATETIME")))
+------+---------------------+
|ID |COL_DATETIME |
+------+---------------------+
|1 |2020-07-02 00:00:00.0|
|2 |2020-08-02 00:00:00.0|
|3 |2020-09-02 00:00:00.0|
|4 |2020-10-02 00:00:00.0|
|5 |2020-11-02 00:00:00.0|
|6 |2020-12-02 00:00:00.0|
|7 |2021-01-02 00:00:00.0|
|8 |2021-02-02 00:00:00.0|
|9 |2021-03-02 00:00:00.0|
|10 |2021-04-02 00:00:00.0|
|11 |2021-05-02 00:00:00.0|
|12 |2021-06-02 00:00:00.0|
|13 |2021-07-22 00:00:00.0|
+------+---------------------+
OUTPUT:-
DF1:- Oldest month's data
+------+---------------------+
|ID |COL_DATETIME |
+------+---------------------+
|1 |2020-07-02 00:00:00.0|
+------+---------------------+
DF2:- Latest months' data after removing the oldest month's data from the original DF.
+------+---------------------+
|ID |COL_DATETIME |
+------+---------------------+
|2 |2020-08-02 00:00:00.0|
|3 |2020-09-02 00:00:00.0|
|4 |2020-10-02 00:00:00.0|
|5 |2020-11-02 00:00:00.0|
|6 |2020-12-02 00:00:00.0|
|7 |2021-01-02 00:00:00.0|
|8 |2021-02-02 00:00:00.0|
|9 |2021-03-02 00:00:00.0|
|10 |2021-04-02 00:00:00.0|
|11 |2021-05-02 00:00:00.0|
|12 |2021-06-02 00:00:00.0|
|13 |2021-07-22 00:00:00.0|
+------+---------------------+
logic/approach:-
Step1:- Calculate the minimum datetime of the COL_DATETIME column for the given dataframe and assign it to a mindate variable.
Let's assume I will get
var mindate = "2020-07-02 00:00:00.0"
val mindate = dfFromData1.select(min("COL_DATETIME")).first()
print(mindate)
result:-
mindate : org.apache.spark.sql.Row = [2020-07-02 00:00:00.0]
[2020-07-02 00:00:00.0]
Step2:- Get the end date of the month using mindate. I haven't written the code for this part (deriving enddatemonth from mindate) yet.
val enddatemonth = "2020-07-31 00:00:00.0"
Step3:- Now I can use the enddatemonth variable to filter the Spark dataframe into DF1 and DF2 based on the conditions.
Even when I tried to filter the dataframe based on mindate, I got an error:
val DF1 = dfFromData1.where(col("COL_DATETIME") <= enddatemonth)
val DF2 = dfFromData1.where(col("COL_DATETIME") > enddatemonth)
Error: <console>:166: error: type mismatch;
 found   : org.apache.spark.sql.Row
 required: org.apache.spark.sql.Column
        val DF1 = dfFromData1.where(col("COL_DATETIME") <= mindate)
Thanks...!!
A similar approach, but I find it cleaner to just deal with months.
Idea: just like we have an epoch for seconds, compute one for months.
val dfWithEpochMonth = dfFromData1.
withColumn("year",year($"COL_DATETIME")).
withColumn("month",month($"COL_DATETIME")).
withColumn("epochMonth", (($"year" - 1970 - 1) * 12) + $"month")
Now your df will look like:
+---+-------------------+----+-----+----------+
| ID| COL_DATETIME|year|month|epochMonth|
+---+-------------------+----+-----+----------+
| 1|2020-07-02 00:00:00|2020| 7| 595|
| 2|2020-08-02 00:00:00|2020| 8| 596|
| 3|2020-09-02 00:00:00|2020| 9| 597|
| 4|2020-10-02 00:00:00|2020| 10| 598|
Now, you can calculate min epochMonth and filter directly.
val minEpochMonth = dfWithEpochMonth.select(min("epochMonth")).first().apply(0).toString().toInt
val df1 = dfWithEpochMonth.where($"epochMonth" <= minEpochMonth)
val df2 = dfWithEpochMonth.where($"epochMonth" > minEpochMonth)
You can drop unnecessary columns.
To address your error message:
val mindate = dfFromData1.select(min("COL_DATETIME")).first()
val mindateString = mindate.apply(0).toString()
Now you can use mindateString to filter.
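If you would rather keep step 2 from the question (an explicit month boundary) instead of an epoch-month column, one option is to truncate to the month and compare on that. A hedged sketch, assuming dfFromData1 from the question and Spark 2.3+ for date_trunc:
import org.apache.spark.sql.functions._

// first day of the oldest month as a timestamp, e.g. 2020-07-01 00:00:00.0
val minMonth = dfFromData1
  .select(date_trunc("month", min(col("COL_DATETIME"))).as("min_month"))
  .first()
  .getTimestamp(0)

// rows belonging to the oldest month, and everything after it
val DF1 = dfFromData1.where(date_trunc("month", col("COL_DATETIME")) === lit(minMonth))
val DF2 = dfFromData1.where(date_trunc("month", col("COL_DATETIME")) > lit(minMonth))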
I have a scenario where the XML column response_output has ordercount and orders with the corresponding order details.
For example, in the XML below, the OrderCount is 4, and under orders we have 4 order details:
<USR_ORD><OrderResponse><OrderResult>
<OrderCount>4</OrderCount>
<ORDTime>2021-02-02 21:13:12</ORDTime><ORDStatus>COMPLETE</ORDStatus>
<ORDValue>
<USR1OrderTotalTime>221</USR1OrderTotalTime><USR1OrderKYC>{ND}</USR1OrderKYC><USR1OrderLoc>{ND}</USR1OrderLoc>
<orders>
<order><name>MR RITA SOMA</name><address>606 JAL TXS</address><tracknumber>7825225</tracknumber><status>UNK</status></order>
<order><name>MR RITA SOMA</name><address>1 BAL, HAL</address><tracknumber>7825226</tracknumber><status>FAIL</status></order>
<order><name>MR RODREX SOMA</name><address>18, GHC,BAN</address><tracknumber>7825224</tracknumber><status>SUC</status></order>
<order><name>MR RITA SOMA</name><address>1 BAL, HAL</address><tracknumber>7825223</tracknumber><status>SUC</status></order>
</orders>
<USR1Orderqnt>10</USR1Orderqnt><USR1Orderxyz>0</USR1Orderxyz><USR1OrderD>{ND}</USR1OrderD>
</ORDValue>
</OrderResult></OrderResponse></USR_ORD>
I need to retrieve the records based on the OrderCount: if the OrderCount is 4 then I need to iterate 4 times over orders and fetch 4 records with all the order details, and if the OrderCount is 1 then I need to fetch 1 record with its order details.
Could anyone help me with a Spark 2 / Scala solution for this?
SourceData:
+-----------+-----------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|customer_id|response_id|response_output |
+-----------+-----------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|100 |1 |<USR_ORD><OrderResponse><OrderResult><OrderCount>1</OrderCount><ORDTime>2021-02-02 10:34:19</ORDTime><ORDStatus>COMPLETE</ORDStatus><ORDValue><USR1OrderTotalTime>321</USR1OrderTotalTime><USR1OrderKYC>{ND}</USR1OrderKYC><USR1OrderLoc>{ND}</USR1OrderLoc><orders><order><name>MRS MITA PERS</name><address>17 MAXI RD CHN</address><tracknumber>7825222</tracknumber><status>FAIL</status><amount>4500</amount><orderdate>2019-10-18</orderdate></order></orders><USR1Orderqnt>10</USR1Orderqnt><USR1Orderxyz>0</USR1Orderxyz><USR1OrderD>{ND}</USR1OrderD></ORDValue></OrderResult></OrderResponse></USR_ORD> |
|200 |1 |<USR_ORD><OrderResponse><OrderResult><OrderCount>4</OrderCount><ORDTime>2021-02-02 21:13:12</ORDTime><ORDStatus>COMPLETE</ORDStatus><ORDValue><USR1OrderTotalTime>221</USR1OrderTotalTime><USR1OrderKYC>{ND}</USR1OrderKYC><USR1OrderLoc>{ND}</USR1OrderLoc><orders><order><name>MR RITA SOMA</name><address>606 JAL TXS</address><tracknumber>7825225</tracknumber><status>UNK</status><amount>1030</amount><orderdate>2020-11-16</orderdate></order><order><name>MR RITA SOMA</name><address>1 BAL, HAL</address><tracknumber>7825226</tracknumber><status>FAIL</status><amount>8000</amount><orderdate>2018-07-17</orderdate></order><order><name>MR RODREX SOMA</name><address>18, GHC, BAN</address><tracknumber>7825224</tracknumber><status>SUC</status><amount>2500</amount><orderdate>2017-09-16</orderdate></order><order><name>MR RITA SOMA</name><address>1 BAL, HAL</address><tracknumber>7825223</tracknumber><status>SUC</status><amount>2700</amount><orderdate>2017-04-22</orderdate></order></orders><USR1Orderqnt>10</USR1Orderqnt><USR1Orderxyz>0</USR1Orderxyz><USR1OrderD>{ND}</USR1OrderD></ORDValue></OrderResult></OrderResponse></USR_ORD>|
+-----------+-----------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
When I run the SQL below I get the result shown after it, but I need to get 4 records for customer_id 200, since the count is 4, with the corresponding order details.
spark.sql("""select
| customer_id,
| xpath_string(response_output,'USR_ORD/OrderResponse/OrderResult/OrderCount') as OrderCount,
| xpath_string(response_output,'USR_ORD/OrderResponse/OrderResult/ORDValue/orders/order/name') as name,
| xpath_string(response_output,'USR_ORD/OrderResponse/OrderResult/ORDValue/orders/order/address') as address,
| xpath_string(response_output,'USR_ORD/OrderResponse/OrderResult/ORDValue/orders/order/tracknumber') as tracknumber,
| xpath_string(response_output,'USR_ORD/OrderResponse/OrderResult/ORDValue/orders/order/status') as status
| from cust_tbl""").show()
Result I am getting:
+-----------+----------+-------------+--------------+-----------+------+
|customer_id|OrderCount| name| address|tracknumber|status|
+-----------+----------+-------------+--------------+-----------+------+
| 100| 1|MRS MITA PERS|17 MAXI RD CHN| 7825222| FAIL|
| 200| 4| MR RITA SOMA| 606 JAL TXS| 7825225| UNK|
+-----------+----------+-------------+--------------+-----------+------+
Expected output:
+-----------+----------+------------+-----------+-----------+------+
|customer_id|OrderCount|name |address |tracknumber|status|
+-----------+----------+------------+-----------+-----------+------+
|200 |4 |MRRITASOMA |606JALTXS |7825225 |UNK |
|200 |4 |MRRITASOMA |1BAL HAL |7825226 |FAIL |
|200 |4 |MRRODREXSOMA|18 GHC BAN |7825224 |SUC |
|200 |4 |MRRITASOMA |1 BAL HAL |7825223 |SUC |
|100 |1 |MRSMITAPERS |17MAXIRDCHN|7825222 |FAIL |
+-----------+----------+------------+-----------+-----------+------+
The function xpath_string extracts a single string value for the given XPath expression. For your case, you need to use xpath to get an array of the node values for each order detail (name, status, ...) and zip them all together using arrays_zip:
val df1 = df.withColumn(
"OrderCount",
expr("xpath_string(response_output, 'USR_ORD/OrderResponse/OrderResult/OrderCount')")
).withColumn(
"orders",
explode(
arrays_zip(
expr("xpath(response_output, '/USR_ORD/OrderResponse/OrderResult/ORDValue/orders/order/name/text()')"),
expr("xpath(response_output, '/USR_ORD/OrderResponse/OrderResult/ORDValue/orders/order/address/text()')"),
expr("xpath(response_output, '/USR_ORD/OrderResponse/OrderResult/ORDValue/orders/order/tracknumber/text()')"),
expr("xpath(response_output, '/USR_ORD/OrderResponse/OrderResult/ORDValue/orders/order/status/text()')")
).cast("array<struct<name:string,address:string,tracknumber:string,status:string>>")
)
).select("customer_id", "OrderCount", "orders.*")
df1.show(false)
//+-----------+----------+--------------+--------------+-----------+------+
//|customer_id|OrderCount|name |address |tracknumber|status|
//+-----------+----------+--------------+--------------+-----------+------+
//|100 |1 |MRS MITA PERS |17 MAXI RD CHN|7825222 |FAIL |
//|200 |4 |MR RITA SOMA |606 JAL TXS |7825225 |UNK |
//|200 |4 |MR RITA SOMA |1 BAL, HAL |7825226 |FAIL |
//|200 |4 |MR RODREX SOMA|18, GHC, BAN |7825224 |SUC |
//|200 |4 |MR RITA SOMA |1 BAL, HAL |7825223 |SUC |
//+-----------+----------+--------------+--------------+-----------+------+
Update
For Spark < 2.4, you can posexplode each array columns and join on index :
val df1 = df.withColumn(
"OrderCount",
expr("xpath_string(response_output, 'USR_ORD/OrderResponse/OrderResult/OrderCount')")
).select(
col("customer_id"),
col("OrderCount"),
expr("xpath(response_output, '/USR_ORD/OrderResponse/OrderResult/ORDValue/orders/order/name/text()')").as("name"),
expr("xpath(response_output, '/USR_ORD/OrderResponse/OrderResult/ORDValue/orders/order/address/text()')").as("address"),
expr("xpath(response_output, '/USR_ORD/OrderResponse/OrderResult/ORDValue/orders/order/tracknumber/text()')").as("tracknumber"),
expr("xpath(response_output, '/USR_ORD/OrderResponse/OrderResult/ORDValue/orders/order/status/text()')").as("status")
)
val result = df1.selectExpr("customer_id", "OrderCount", "posexplode(name) as (idx, name)")
.join(
df1.selectExpr("customer_id", "posexplode(address) as (idx, address)"),
Seq("idx", "customer_id")
).join(
df1.selectExpr("customer_id","posexplode(tracknumber) as (idx, tracknumber)"),
Seq("idx", "customer_id")
).join(
df1.selectExpr("customer_id", "posexplode(status) as (idx, status)"),
Seq("idx", "customer_id")
).drop("idx")
result.show(false)
//+-----------+----------+--------------+--------------+-----------+------+
//|customer_id|OrderCount|name |address |tracknumber|status|
//+-----------+----------+--------------+--------------+-----------+------+
//|100 |1 |MRS MITA PERS |17 MAXI RD CHN|7825222 |FAIL |
//|200 |4 |MR RITA SOMA |606 JAL TXS |7825225 |UNK |
//|200 |4 |MR RITA SOMA |1 BAL, HAL |7825226 |FAIL |
//|200 |4 |MR RODREX SOMA|18, GHC, BAN |7825224 |SUC |
//|200 |4 |MR RITA SOMA |1 BAL, HAL |7825223 |SUC |
//+-----------+----------+--------------+--------------+-----------+------+
In short, I'm leveraging spark-xml to parse some XML files. However, doing so removes the leading zeros from all the values I'm interested in, and I need the final output, which is a DataFrame, to include those leading zeros. I can't figure out a way to add the leading zeros back to the columns I'm interested in.
val df = spark.read
.format("com.databricks.spark.xml")
.option("rowTag", "output")
.option("excludeAttribute", true)
.option("allowNumericLeadingZeros", true) //including this does not solve the problem
.load("pathToXmlFile")
Example output that I'm getting
+------+---+--------------------+
|iD |val|Code |
+------+---+--------------------+
|1 |44 |9022070536692784476 |
|2 |66 |-5138930048185086175|
|3 |25 |805582856291361761 |
|4 |17 |-9107885086776983000|
|5 |18 |1993794295881733178 |
|6 |31 |-2867434050463300064|
|7 |88 |-4692317993930338046|
|8 |44 |-4039776869915039812|
|9 |20 |-5786627276152563542|
|10 |12 |7614363703260494022 |
+------+---+--------------------+
Desired output
+--------+----+--------------------+
|iD |val |Code |
+--------+----+--------------------+
|001 |044 |9022070536692784476 |
|002 |066 |-5138930048185086175|
|003 |025 |805582856291361761 |
|004 |017 |-9107885086776983000|
|005 |018 |1993794295881733178 |
|006 |031 |-2867434050463300064|
|007 |088 |-4692317993930338046|
|008 |044 |-4039776869915039812|
|009 |020 |-5786627276152563542|
|0010 |012 |7614363703260494022 |
+--------+----+--------------------+
This solved it for me, thank you all for the help
val df2 = df
.withColumn("idLong", format_string("%03d", $"iD"))
You can simply do that by using the concat built-in function:
df.withColumn("iD", concat(lit("00"), col("iD")))
.withColumn("val", concat(lit("0"), col("val")))