I have a table where every value in a column looks like this:
03104|00000000000000105000|00000000000000002000|001|00095|000000000000000000000000000000000000000000001835162021-07-15
I want to split this column into:
+--------------------+
| value|
+--------------------+
| 03104|
|00000000000000105000|
|00000000000000002000|
| 001|
| 00095|
|00000000000000000...|
+--------------------+
How can I do this?
You can split the column on "|" to get an array, then call explode/explode_outer to get the desired result.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

val spark = SparkSession.builder().master("local[*]").getOrCreate()
spark.sparkContext.setLogLevel("ERROR")
import spark.implicits._

List("03104|00000000000000105000|00000000000000002000|001|00095|" +
  "000000000000000000000000000000000000000000001835162021-07-15")
  .toDF("value")
  .select(explode_outer(split('value, "\\|")).as("value"))
  .show(false)
/*
+------------------------------------------------------------+
|value |
+------------------------------------------------------------+
|03104 |
|00000000000000105000 |
|00000000000000002000 |
|001 |
|00095 |
|000000000000000000000000000000000000000000001835162021-07-15|
+------------------------------------------------------------+ */
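If you also need to know which field each value came from, you can swap in posexplode, which returns the element's position alongside its value (a minimal sketch, assuming the single-column DataFrame built above is assigned to a val df):
// posexplode emits one row per array element together with its index in the array
df.select(posexplode(split('value, "\\|")).as(Seq("pos", "value")))
  .show(false)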
I have this Spark DataFrame:
+------+------+
|father|child |
+------+------+
|Aaron |Adam  |
|Aaron |Berel |
|Aaron |Kasper|
|Levi  |Saul  |
|Levi  |Tiger |
+------+------+
How can I group by father and concatenate all the names into a single delimited field?
My desired output would be:
+------------------------+
|union_all_name_by_father|
+------------------------+
|Aaron;Adam;Berel;Kasper |
|Levi;Saul;Tiger |
+------------------------+
You can use groupBy and then concat_ws:
val df2 = df.groupBy("father").agg(
concat_ws(";", collect_list(col("child"))).as("col2")
).select(concat_ws(";", col("father"), col("col2")).as("union_all_name_by_father"))
df2.show(false)
+------------------------+
|union_all_name_by_father|
+------------------------+
|Aaron;Adam;Berel;Kasper |
|Levi;Saul;Tiger |
+------------------------+
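If the children should come out in a deterministic order, you can sort the collected array before concatenating (a small sketch using the built-in sort_array):
// sort the collected names so the concatenated output is stable across runs
val df3 = df.groupBy("father")
  .agg(concat_ws(";", sort_array(collect_list(col("child")))).as("children"))
  .select(concat_ws(";", col("father"), col("children")).as("union_all_name_by_father"))
df3.show(false)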
From the input below, I need to find the date on which the status has been 'complete' for the past 7 consecutive days, counting back from a given date.
Requirement:
1. Go back 8 days (this is easy).
2. That puts us at 20190111 in the data frame below. I need to check day by day from 20190111 back to 20190104 (a 7-day period) and find the date on which the status has been 'complete' for 7 consecutive days, which here should be 20190108.
I need this in Spark with Scala.
input
+---+--------+--------+
| id| date| status|
+---+--------+--------+
| 1|20190101|complete|
| 2|20190102|complete|
| 3|20190103|complete|
| 4|20190104|complete|
| 5|20190105|complete|
| 6|20190106|complete|
| 7|20190107|complete|
| 8|20190108|complete|
| 9|20190109| pending|
| 10|20190110|complete|
| 11|20190111|complete|
| 12|20190112| pending|
| 13|20190113|complete|
| 14|20190114|complete|
| 15|20190115| pending|
| 16|20190116| pending|
| 17|20190117| pending|
| 18|20190118| pending|
| 19|20190119| pending|
+---+--------+--------+
output
+---+--------+--------+
| id| date| status|
+---+--------+--------+
| 1|20190101|complete|
| 2|20190102|complete|
| 3|20190103|complete|
| 4|20190104|complete|
| 5|20190105|complete|
| 6|20190106|complete|
| 7|20190107|complete|
| 8|20190108|complete|
+---+--------+--------+
For Spark >= 2.4:
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._
import spark.implicits._

val df = Seq(
  (1, "20190101", "complete"), (2, "20190102", "complete"), (3, "20190103", "complete"),
  (4, "20190104", "complete"), (5, "20190105", "complete"), (6, "20190106", "complete"),
  (7, "20190107", "complete"), (8, "20190108", "complete"), (9, "20190109", "pending"),
  (10, "20190110", "complete"), (11, "20190111", "complete"), (12, "20190112", "pending"),
  (13, "20190113", "complete"), (14, "20190114", "complete"), (15, "20190115", "pending"),
  (16, "20190116", "pending"), (17, "20190117", "pending"), (18, "20190118", "pending"),
  (19, "20190119", "pending")).toDF("id", "date", "status")

val df1 = df.select($"id", to_date($"date", "yyyyMMdd").as("date"), $"status")
val win = Window.orderBy("id")
Coalesce lag_status and status to remove nulls:
val df2= df1.select($"*", lag($"status",1).over(win).as("lag_status")).withColumn("lag_stat", coalesce($"lag_status", $"status")).drop("lag_status")
Create an integer column to flag whether the status for the current day equals the status for the previous day:
val df3 = df2.select($"*", ($"status" === $"lag_stat").cast("integer").as("status_flag"))
val win1 = Window.orderBy($"id".desc).rangeBetween(0, 7)
val df4 = df3.select($"*", sum($"status_flag").over(win1).as("previous_7_sum"))
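// Because id is ordered descending, rangeBetween(0, 7) spans the current row plus the
// 7 rows with the next-lower ids (an 8-row frame), so previous_7_sum == 8 means the
// status matched the previous day's status on all 8 of those days.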
val df_new = df4.where($"previous_7_sum" === 8)
  .select($"date")
  .select(explode(sequence(date_sub($"date", 7), $"date")).as("date"))
val df5 = df4.join(df_new, Seq("date"), "inner")
  .select($"id", concat_ws("", split($"date".cast("string"), "-")).as("date"), $"status")
+---+--------+--------+
| id| date| status|
+---+--------+--------+
| 1|20190101|complete|
| 2|20190102|complete|
| 3|20190103|complete|
| 4|20190104|complete|
| 5|20190105|complete|
| 6|20190106|complete|
| 7|20190107|complete|
| 8|20190108|complete|
+---+--------+--------+
For Spark < 2.4:
Use a UDF instead of the built-in array function "sequence":
val df1= df.select($"id", $"date".cast("integer").as("date"), $"status")
val win = Window.orderBy("id")
Coalesce lag_status and status to remove nulls:
val df2 = df1.select($"*", lag($"status", 1).over(win).as("lag_status"))
  .withColumn("lag_stat", coalesce($"lag_status", $"status"))
  .drop("lag_status")
Create an integer column to flag whether the status for the current day equals the status for the previous day:
val df3 = df2.select($"*", ($"status" === $"lag_stat").cast("integer").as("status_flag"))
val win1 = Window.orderBy($"id".desc).rangeBetween(0, 7)
val df4 = df3.select($"*", sum($"status_flag").over(win1).as("previous_7_sum"))
val ud1 = udf((col1: Int) => ((col1 - 7) to col1).toArray)
val df_new = df4.where($"previous_7_sum" === 8)
  .withColumn("dt_arr", ud1($"date"))
  .select(explode($"dt_arr").as("date"))
val df5 = df4.join(df_new, Seq("date"), "inner")
  .select($"id", concat_ws("", split($"date".cast("string"), "-")).as("date"), $"status")
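As a quick sanity check (assuming the same input df as in the Spark 2.4 version above), showing df5 should again return ids 1 through 8, i.e. dates 20190101 to 20190108:
df5.orderBy($"id").show()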
I have a Spark DataFrame, and I want to check whether each string in a particular column exists in a pre-defined column of another DataFrame.
I found a similar problem in Spark (scala) dataframes - Check whether strings in column contain any items from a set, but I want to check whether the strings in a column exist in a column of another DataFrame, not in a List or Set as in that question. Who can help me? I don't know how to convert a column to a set or a list, and I don't know of an "exists" method on a DataFrame.
My data is similar to this:
df1:
+---+-----------------+
| id| url |
+---+-----------------+
| 1|google.com |
| 2|facebook.com |
| 3|github.com |
| 4|stackoverflow.com|
+---+-----------------+
df2:
+-----+------------+
| id | urldetail |
+-----+------------+
| 11 |google.com |
| 12 |yahoo.com |
| 13 |facebook.com|
| 14 |twitter.com |
| 15 |youtube.com |
+-----+------------+
Now I am trying to create a third column with the result of a comparison: whether each string in the $"urldetail" column exists in $"url".
+---+------------+-------------+
| id| urldetail | check |
+---+------------+-------------+
| 11|google.com | 1 |
| 12|yahoo.com | 0 |
| 13|facebook.com| 1 |
| 14|twitter.com | 0 |
| 15|youtube.com | 0 |
+---+------------+-------------+
I want to use a UDF, but I don't know how to check whether a string exists in a column of another DataFrame. Please help me!
I have a spark dataframe, and I wish to check whether each string in a
particular column contains any number of words from a pre-defined
column of another dataframe.
Here is one way, using = or like:
package examples

import org.apache.log4j.Level
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, _}

object CompareColumns extends App {

  val logger = org.apache.log4j.Logger.getLogger("org")
  logger.setLevel(Level.WARN)

  val spark = SparkSession.builder()
    .appName(this.getClass.getName)
    .config("spark.master", "local").getOrCreate()

  import spark.implicits._

  val df1 = Seq(
    (1, "google.com"),
    (2, "facebook.com"),
    (3, "github.com"),
    (4, "stackoverflow.com")).toDF("id", "url").as("first")
  df1.show

  val df2 = Seq(
    (11, "google.com"),
    (12, "yahoo.com"),
    (13, "facebook.com"),
    (14, "twitter.com")).toDF("id", "url").as("second")
  df2.show

  val df3 = df2.join(df1, expr("first.url like second.url"), "full_outer")
    .select(
      col("first.url"),
      col("first.url").contains(col("second.url")).as("check"))
    .filter("url is not null")

  df3.na.fill(Map("check" -> false))
    .show
}
Result:
+---+-----------------+
| id| url|
+---+-----------------+
| 1| google.com|
| 2| facebook.com|
| 3| github.com|
| 4|stackoverflow.com|
+---+-----------------+
+---+------------+
| id| url|
+---+------------+
| 11| google.com|
| 12| yahoo.com|
| 13|facebook.com|
| 14| twitter.com|
+---+------------+
+-----------------+-----+
| url|check|
+-----------------+-----+
| google.com| true|
| facebook.com| true|
| github.com|false|
|stackoverflow.com|false|
+-----------------+-----+
With a full outer join we can achieve this.
For more details on all the join types, see my LinkedIn post.
Note: instead of 0 for false and 1 for true, I have used boolean values here; you can translate them into whatever you want.
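If you prefer the 1/0 flags from the question instead of booleans, you can cast the check column (a small sketch on top of the df3 built above):
// cast the boolean check column to an integer so matches show as 1 and misses as 0
df3.na.fill(Map("check" -> false))
  .withColumn("check", col("check").cast("int"))
  .show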
UPDATE: if rows keep getting added to the second dataframe, you can use this; it won't miss any rows from the second dataframe.
val df3 = df2.join(df1, expr("first.url like second.url"), "full")
  .select(
    col("second.*"),
    col("first.url").contains(col("second.url")).as("check"))
  .filter("url is not null")
df3.na.fill(Map("check" -> false))
  .show
One more thing: you can also try regexp_extract, as shown in this post:
https://stackoverflow.com/a/53880542/647053
Read in your data and apply trim, just to be conservative, so that stray whitespace doesn't break the join on the strings:
val df= Seq((1,"google.com"), (2,"facebook.com"), ( 3,"github.com "), (4,"stackoverflow.com")).toDF("id", "url").select($"id", trim($"url").as("url"))
val df2 =Seq(( 11 ,"google.com"), (12 ,"yahoo.com"), (13 ,"facebook.com"),(14 ,"twitter.com"),(15,"youtube.com")).toDF( "id" ,"urldetail").select($"id", trim($"urldetail").as("urldetail"))
df.join(df2.withColumn("flag", lit(1)).drop("id"), (df("url")===df2("urldetail")), "left_outer").withColumn("contains_bool",
when($"flag"===1, true) otherwise(false)).drop("flag","urldetail").show
+---+-----------------+-------------+
| id| url|contains_bool|
+---+-----------------+-------------+
| 1| google.com| true|
| 2| facebook.com| true|
| 3| github.com| false|
| 4|stackoverflow.com| false|
+---+-----------------+-------------+
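This gives the check per url of the first dataframe. To get a result shaped like the desired output in the question (one row per urldetail of df2 with a 1/0 check), you could flip the join direction; a minimal sketch, assuming the same df and df2 as above:
// left join from df2 so every urldetail row is kept; flag is 1 only when a match exists in df
df2.join(df.withColumn("flag", lit(1)).drop("id"), df2("urldetail") === df("url"), "left_outer")
  .withColumn("check", when($"flag" === 1, 1).otherwise(0))
  .drop("flag", "url")
  .show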
I have a Spark DataFrame with the following schema:
________________________
|id | no  | date       |
|1  | 123 | 2018/10/01 |
|2  | 124 | 2018/10/01 |
|3  | 123 | 2018/09/28 |
|4  | 123 | 2018/09/27 |
...
What I want is to have a new DataFrame with the following data:
___________________
| no  | date       |
| 123 | 2018/09/27 |
| 124 | 2018/10/01 |
Can someone help me on this?:) Thank you!!
You can solve it by using a rank window function (https://databricks.com/blog/2015/07/15/introducing-window-functions-in-spark-sql.html) on the dataframe with Spark SQL:
Register the DataFrame as a temp table, e.g. df_temp_table (registerTempTable, or createOrReplaceTempView on newer versions).
Then run this query:
select dftt.*,
dense_rank() OVER ( PARTITION BY dftt.no ORDER BY dftt.date ASC) AS Rank from
df_temp_table as dftt
you will get this dataframe:
|id | no  | date       | rank
|1  | 123 | 2018/10/01 | 3
|2  | 124 | 2018/10/01 | 1
|3  | 123 | 2018/09/28 | 2
|4  | 123 | 2018/09/27 | 1
On this DataFrame you can now filter the rank column for the value 1 to keep the earliest date per no.
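Putting it together in Scala (a minimal sketch; createOrReplaceTempView is the current replacement for registerTempTable):
// register the DataFrame so it can be queried with Spark SQL
df.createOrReplaceTempView("df_temp_table")

val result = spark.sql("""
  select t.no, t.date
  from (
    select dftt.*,
           dense_rank() over (partition by dftt.no order by dftt.date asc) as rank
    from df_temp_table as dftt
  ) t
  where t.rank = 1
""")
result.show()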
Welcome! You can try the code below:
import org.apache.spark.sql.functions.row_number
import org.apache.spark.sql.expressions.Window
val w = Window.partitionBy($"no").orderBy($"date".asc)
val Resultdf = df.withColumn("rownum", row_number.over(w))
.where($"rownum" === 1).drop("rownum","id")
Resultdf.show()
Output:
+---+----------+
| no| date|
+---+----------+
|124|2018/10/01|
|123|2018/09/27|
+---+----------+
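Since the dates here are zero-padded yyyy/MM/dd strings, they also sort lexicographically in date order, so a plain aggregation would work as well (a minimal sketch, assuming org.apache.spark.sql.functions._ is imported):
// min on the fixed-width date string picks the earliest date per no
df.groupBy("no").agg(min($"date").as("date")).show()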
I have a Spark app that needs to convert a string to a timestamp; below is my code.
val df = sc.parallelize(Seq("09/18/2017","")).toDF("sDate")
+----------+
| sDate|
+----------+
|09/18/2017|
| |
+----------+
val ts = unix_timestamp($"sDate","MM/dd/yyyy").cast("timestamp")
df.withColumn("ts", ts).show()
+----------+--------------------+
| sDate| ts|
+----------+--------------------+
|09/18/2017|2017-09-18 00:00:...|
| | null|
+----------+--------------------+
The conversion works fine, but if the value is empty I get null after casting.
Is there any way to return empty when the source value is empty?
You can use the when function as below:
import org.apache.spark.sql.functions._
val ts = unix_timestamp($"sDate","MM/dd/yyyy").cast("timestamp")
df.withColumn("ts", when(ts.isNotNull, ts).otherwise(lit("empty"))).show()
which would give you this output:
+----------+-------------------+
| sDate| ts|
+----------+-------------------+
|09/18/2017|2017-09-18 00:00:00|
| | empty|
+----------+-------------------+
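Note that mixing a timestamp with the literal "empty" turns the ts column into a string column. If you would rather return the original empty string instead of "empty", the same when/otherwise idea works (a small sketch):
// keep a valid timestamp as its string form, otherwise fall back to the original sDate (here "")
df.withColumn("ts", when(ts.isNotNull, ts.cast("string")).otherwise($"sDate")).show()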