Use PySpark Dataframe column in another spark sql query - pyspark

I have a situation where I'm trying to query a table and use the result (a dataframe) of that query as the IN clause of another query.
From the first query I have the dataframe below:
+-----------------+
|              key|
+-----------------+
|   10000000000004|
|   10000000000003|
|   10000000000008|
|   10000000000009|
|   10000000000007|
|   10000000000006|
|   10000000000010|
|   10000000000002|
+-----------------+
And now I want to run a query like the one below using the values of that dataframe dynamically instead of hard coding the values:
spark.sql("""select country from table1 where key in (10000000000004, 10000000000003, 10000000000008, 10000000000009, 10000000000007, 10000000000006, 10000000000010, 10000000000002)""").show()
I tried the following; however, it didn't work (collect() returns a list of Row objects, so formatting it into the query doesn't produce a valid IN list):
df = spark.sql("""select key from table0 """)
a = df.select("key").collect()
spark.sql("""select country from table1 where key in ({0})""".format(a)).show()
Can somebody help me?

You should use an (inner) join between two data frames to get the countries you would like. See my example:
# Create a list of countries with Id's
countries = [('Netherlands', 1), ('France', 2), ('Germany', 3), ('Belgium', 4)]
# Create a list of Ids
numbers = [(1,), (2,)]
# Create two data frames
df_countries = spark.createDataFrame(countries, ['CountryName', 'Id'])
df_numbers = spark.createDataFrame(numbers, ['Id'])
The data frames look like the following:
df_countries:
+-----------+---+
|CountryName| Id|
+-----------+---+
|Netherlands|  1|
|     France|  2|
|    Germany|  3|
|    Belgium|  4|
+-----------+---+
df_numbers:
+---+
| Id|
+---+
|  1|
|  2|
+---+
You can join them as follows:
df_countries.join(df_numbers, on='Id', how='inner').show()
Resulting in:
+---+-----------+
| Id|CountryName|
+---+-----------+
|  1|Netherlands|
|  2|     France|
+---+-----------+
Hope that clears things up!

Related

Spark Dataframe Combine 2 Columns into Single Column, with Additional Identifying Column

I'm trying to split and then combine 2 DataFrame columns into 1, with another column identifying which column each value originated from. Here is the code to generate the sample DF:
val data = Seq(("1", "in1,in2,in3", null), ("2","in4,in5","ex1,ex2,ex3"), ("3", null, "ex4,ex5"), ("4", null, null))
val df = spark.sparkContext.parallelize(data).toDF("id", "include", "exclude")
This is the sample DF
+---+-----------+-----------+
| id|    include|    exclude|
+---+-----------+-----------+
|  1|in1,in2,in3|       null|
|  2|    in4,in5|ex1,ex2,ex3|
|  3|       null|    ex4,ex5|
|  4|       null|       null|
+---+-----------+-----------+
which I'm trying to transform into
+---+----+---+
| id|type|col|
+---+----+---+
|  1|incl|in1|
|  1|incl|in2|
|  1|incl|in3|
|  2|incl|in4|
|  2|incl|in5|
|  2|excl|ex1|
|  2|excl|ex2|
|  2|excl|ex3|
|  3|excl|ex4|
|  3|excl|ex5|
+---+----+---+
EDIT: Should mention that the data inside each of the cells in the example DF is just for visualization, and doesn't need to have the form in1,ex1, etc.
I can get it to work with union, like so:
df.select($"id", lit("incl").as("type"), explode(split(col("include"), ",")))
.union(
df.select($"id", lit("excl").as("type"), explode(split(col("exclude"), ",")))
)
but I was wondering if this was possible to do without using union.
The approach that I am thinking of is to first combine ("club") the include and exclude columns, then apply the explode function, then fetch only the rows which don't have nulls, and finally apply a case statement. This might be a long process.
WITH cte AS (
  SELECT id, concat_ws(',', include, exclude) AS outputcol FROM SQL
),
ctes AS (
  SELECT id, explode(split(outputcol, ',')) AS finalcol FROM cte
)
SELECT id,
       CASE WHEN finalcol LIKE 'in%' THEN 'incl' ELSE 'excl' END AS type,
       finalcol
FROM ctes
WHERE finalcol IS NOT NULL AND finalcol != ''
(Note: concat_ws skips nulls, unlike +, which is not string concatenation in Spark SQL; the empty-string filter handles rows where both columns are null.)
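If you want to avoid union entirely, here is a rough DataFrame-API sketch of the same idea (untested, assuming Spark 2.x and the sample df from the question): pack both labelled columns into an array of structs and explode that array once.
import org.apache.spark.sql.functions._

val result = df
  .select($"id", explode(array(
    struct(lit("incl").as("type"), $"include".as("vals")),
    struct(lit("excl").as("type"), $"exclude".as("vals"))
  )).as("e"))
  .where($"e.vals".isNotNull)  // drop the null include/exclude cells
  .select($"id", $"e.type".as("type"), explode(split($"e.vals", ",")).as("col"))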

How to select the dynamic columns during a spark join?

I am trying to join 2 data frames; in the 1st DF I need to pass a dynamic number of columns and join it with the other DF. The complexity I am facing is that I have a case statement on the output of the 1st DF. I am able to get the desired output by creating a temp view, but not able to achieve the same output through the Spark DataFrame API.
Below is the snippet I have tried, which works as expected.
// Sample DF1
val studentDF = Seq(
  (1, "Peter","M",15,"Tution Received"),
  (2, "Merry","F",14,null),
  (3, "Sam","M",16,"Tution Received"),
  (4, "Kat","O",16,null),
  (5, "Keivn","M",18,null)
).toDF("Enrollment", "Name","Gender","Age","Notes")
//Sample DF2
val studentFees = Seq((1,"$500","Deposit"),(2, "$800","Deposit"),(3,"$200","Deposit"),(4,"$100","Deposit")).toDF("Enrollment","Fees","Notes")
studentDF.createOrReplaceTempView("STUDENT")
studentFees.createOrReplaceTempView("FEES")
val displayColumns = List("Enrollment","Name","Gender").map("a."+_).reduce(_+","+_)
val queryStr = spark.sql(s"select $displayColumns, case when a.Notes is null then b.Notes else a.Notes end as Notes, b.Fees from STUDENT a join FEES b on a.Enrollment=b.Enrollment")
queryStr.show()
+----------+-----+------+---------------+----+
|Enrollment| Name|Gender|          Notes|Fees|
+----------+-----+------+---------------+----+
|         1|Peter|     M|Tution Received|$500|
|         2|Merry|     F|        Deposit|$800|
|         3|  Sam|     M|Tution Received|$200|
|         4|  Kat|     O|        Deposit|$100|
+----------+-----+------+---------------+----+
// Below is not giving the desired output
val displayColumns = List("Enrollment","Name","Gender","Notes")
val queryStr = studentDF.select(displayColumns.head, displayColumns.tail: _*).alias("a")
  .join(studentFees.as("b"), Seq("Enrollment"), "inner")
  .withColumn("Notes", when($"a.Notes".isNull, $"b.Notes").otherwise($"a.Notes"))
queryStr.show()
+----------+-----+------+---------------+----+---------------+
|Enrollment| Name|Gender|          Notes|Fees|          Notes|
+----------+-----+------+---------------+----+---------------+
|         1|Peter|     M|Tution Received|$500|Tution Received|
|         2|Merry|     F|        Deposit|$800|        Deposit|
|         3|  Sam|     M|Tution Received|$200|Tution Received|
|         4|  Kat|     O|        Deposit|$100|        Deposit|
+----------+-----+------+---------------+----+---------------+
// Expecting the output like below.
+----------+-----+------+---------------+----+
|Enrollment| Name|Gender|          Notes|Fees|
+----------+-----+------+---------------+----+
|         1|Peter|     M|Tution Received|$500|
|         2|Merry|     F|        Deposit|$800|
|         3|  Sam|     M|Tution Received|$200|
|         4|  Kat|     O|        Deposit|$100|
+----------+-----+------+---------------+----+
Is there a better way to handle such scenarios instead of creating temp tables/views?
Thank you to all who read my post!
I was able to find the solution to my problem.
val displayColumns = List("Enrollment","Name","Gender","Notes")
val queryStr = studentDF.select(displayColumns.head, displayColumns.tail: _*).alias("a")
  .join(studentFees.as("b"), Seq("Enrollment"), "inner")
  .select($"a.*", when($"a.Notes".isNull, $"b.Notes").otherwise($"a.Notes").as("Notes"), $"b.Fees")
  .drop($"a.Notes")
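As a side note, coalesce can express the same null-fallback a bit more compactly. A rough, untested sketch of an equivalent join (assuming the same displayColumns list, plus import org.apache.spark.sql.functions.{col, coalesce}):
val queryStr = studentDF.as("a")
  .join(studentFees.as("b"), Seq("Enrollment"), "inner")
  .select(
    displayColumns.filterNot(_ == "Notes").map(c => col("a." + c)) :+
      coalesce($"a.Notes", $"b.Notes").as("Notes") :+
      $"b.Fees": _*
  )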

Get column names, distinct values and its occurrences into a text file with Spark Scala

I am new to Spark Scala and would like to execute the following tasks:
Get all column names, their values and the number of occurrences of each value from a table
Write the result into a text file, i.e. in the following format:
Column Name |Value | Occurrences
Col1 |Test | 12
Col2 |123 | 15
I am using Spark 1.6, not Spark 2.0.
Thanks a lot in advance for any help.
Cheers,
Matthias
Hope this will help you.
Let me explain with an example.
I have a file, users.txt, with the following content:
1 abc#test.com EN US
2 xyz#test2.com EN GB
3 srt#test3.com FR FR
Code:
var fileRDD=sc.textFile("users.txt")
case class User(ID:Int,email:String,lang:String,country:String)
var rawRDD=fileRDD.map(_.split(" "))
var userRDD=rawRDD.map(u=>User(u(0).toInt,u(1),u(2),u(3)))
var userDF=userRDD.toDF()
userDF.registerTempTable("user_table")
sqlContext.sql("select * from user_table").show()
+---+-------------+----+-------+
| ID|        email|lang|country|
+---+-------------+----+-------+
|  1| abc#test.com|  EN|     US|
|  2|xyz#test2.com|  EN|     GB|
|  3|srt#test3.com|  FR|     FR|
+---+-------------+----+-------+
var emailCount=sqlContext.sql("select 'email' as col,email as value, count(email) as occur from user_table group by email")
var langCount=sqlContext.sql("select 'lang' as col,lang as value, count(lang) as occur from user_table group by lang")
emailCount.unionAll(langCount).show()
+-----+-------------+-----+
|  col|        value|occur|
+-----+-------------+-----+
|email|srt#test3.com|    1|
|email|xyz#test2.com|    1|
|email| abc#test.com|    1|
| lang|           EN|    2|
| lang|           FR|    1|
+-----+-------------+-----+
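Since you want this for every column and written to a text file, a rough generalization of the same idea (untested, Spark 1.6 style; the output path is just a placeholder) is to loop over the columns and union the per-column counts:
val df = sqlContext.table("user_table")
val counts = df.columns.map { c =>
  df.groupBy(c).count()
    // cast every value to string so the per-column schemas line up for unionAll
    .selectExpr(s"'$c' as col", s"cast(`$c` as string) as value", "`count` as occur")
}.reduce(_ unionAll _)

// one line per (column name, value, occurrences) triple
counts.rdd
  .map(r => s"${r.getString(0)} |${r.getString(1)} | ${r.getLong(2)}")
  .saveAsTextFile("user_table_value_counts")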

Randomly join two dataframes

I have two tables, one called Reasons that has 9 records and another containing IDs with 40k records.
IDs:
+------+------+
|pc_pid|pc_aid|
+------+------+
|  4569|  1101|
| 63961|  1101|
|140677|  4364|
|127113|     7|
| 96097|   480|
|  8309|  3129|
| 45218|    89|
|147036|  3289|
| 88493|  3669|
| 29973|  3129|
|127444|  3129|
| 36095|    89|
|131001|  1634|
|104731|   781|
| 79219|   244|
+------+------+
Reasons:
+-----------------+
|          reasons|
+-----------------+
|        follow up|
|         skin chk|
|      annual meet|
|review lab result|
|        REF BY DR|
|       sick visit|
|        body pain|
|             test|
|            other|
+-----------------+
I want output like this
+------+------+-----------------+
|pc_pid|pc_aid|           reason|
+------+------+-----------------+
|  4569|  1101|        body pain|
| 63961|  1101|review lab result|
|140677|  4364|        body pain|
|127113|     7|       sick visit|
| 96097|   480|             test|
|  8309|  3129|            other|
| 45218|    89|        follow up|
|147036|  3289|      annual meet|
| 88493|  3669|review lab result|
| 29973|  3129|        REF BY DR|
|127444|  3129|         skin chk|
| 36095|    89|            other|
+------+------+-----------------+
In Reasons I have only 9 records, while in the ID dataframe I have 40k records; I want to assign a reason randomly to each and every id.
The following solution tries to be more robust to the number of reasons (i.e. you can have as many reasons as you can reasonably fit in your cluster). If you just have a few reasons (like the OP does), you can probably broadcast them or embed them in a udf and easily solve this problem.
The general idea is to create a sequential index for the reasons, then add a random value from 0 to N (where N is the number of reasons) to the IDs dataset, and then join the two tables using these two new columns. Here is how you can do this:
case class Reasons(s: String)
defined class Reasons
case class Data(id: Long)
defined class Data
Data will hold the IDs (a simplified version of the OP's) and Reasons will hold some simplified reasons.
val d1 = spark.createDataFrame( Data(1) :: Data(2) :: Data(10) :: Nil)
d1: org.apache.spark.sql.DataFrame = [id: bigint]
d1.show()
+---+
| id|
+---+
|  1|
|  2|
| 10|
+---+
val d2 = spark.createDataFrame( Reasons("a") :: Reasons("b") :: Reasons("c") :: Nil)
d2.show()
+---+
|  s|
+---+
|  a|
|  b|
|  c|
+---+
We will later need the number of reasons so we calculate that first.
val numberOfReasons = d2.count()
val d2Indexed = spark.createDataFrame(d2.rdd.map(_.getString(0)).zipWithIndex)
d2Indexed.show()
+---+---+
| _1| _2|
+---+---+
|  a|  0|
|  b|  1|
|  c|  2|
+---+---+
import org.apache.spark.sql.functions.rand
val d1WithRand = d1.select($"id", (rand() * numberOfReasons).cast("int").as("rnd"))
The last step is to join on the new columns and then remove them.
val res = d1WithRand.join(d2Indexed, d1WithRand("rnd") === d2Indexed("_2")).drop("_2").drop("rnd")
res.show()
+---+---+
| id| _1|
+---+---+
|  2|  a|
| 10|  b|
|  1|  c|
+---+---+
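If you want a friendlier column name than _1 in the result, you could finish with something like:
res.withColumnRenamed("_1", "reason").show()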

pyspark random join itself

from random import uniform

data_neg = data_pos.sortBy(lambda x: uniform(1, 10000))
data_neg = data_neg.coalesce(1, False).zip(data_pos.coalesce(1, True))
The fastest way to randomly join dataA (a huge dataframe) and dataB (a smaller dataframe, sorted by any column):
from pyspark.sql import functions as F
from pyspark.sql.window import Window

dfB = dataB.withColumn(
    "index", F.row_number().over(Window.orderBy("col")) - 1
)
dfA = dataA.withColumn("index", (F.rand() * dfB.count()).cast("bigint"))
df = dfA.join(dfB, on="index", how="left").drop("index")
Since dataB is already sorted, the row numbers can be assigned over the sorted window with a high degree of parallelism. F.rand() is another highly parallel function, so adding the index to dataA will be very fast as well.
If dataB is small enough, you may benefit from broadcasting it.
This method is better than using:
zipWithIndex: It can be very expensive to convert a dataframe to an RDD, call zipWithIndex, and then convert it back to a dataframe.
monotonically_increasing_id: It needs to be used with row_number, which will collect all the partitions into a single executor.
Reference: https://towardsdatascience.com/adding-sequential-ids-to-a-spark-dataframe-fa0df5566ff6

How to merge duplicate rows using expressions in Spark Dataframes

How can I merge 2 data frames, removing duplicates by comparing columns?
I have two dataframes with the same column names
a.show()
+-----+----------+--------+
| name|      date|duration|
+-----+----------+--------+
|  bob|2015-01-13|       4|
|alice|2015-04-23|      10|
+-----+----------+--------+
b.show()
+------+----------+--------+
|  name|      date|duration|
+------+----------+--------+
|   bob|2015-01-12|       3|
|alice2|2015-04-13|      10|
+------+----------+--------+
What I am trying to do is merge the 2 dataframes so that only unique rows are displayed, by applying two conditions:
1. For the same name, the duration will be the sum of the durations.
2. For the same name, the final date will be the latest date.
Final output will be
final.show()
+------+----------+--------+
|  name|      date|duration|
+------+----------+--------+
|   bob|2015-01-13|       7|
| alice|2015-04-23|      10|
|alice2|2015-04-13|      10|
+------+----------+--------+
I used the following method.
//Take union of 2 dataframe
val df =a.unionAll(b)
//group and take sum
val grouped =df.groupBy("name").agg($"name",sum("duration"))
//join
val j=df.join(grouped,"name").drop("duration").withColumnRenamed("sum(duration)", "duration")
and I got
+------+----------+--------+
|  name|      date|duration|
+------+----------+--------+
|   bob|2015-01-13|       7|
| alice|2015-04-23|      10|
|   bob|2015-01-12|       7|
|alice2|2015-04-23|      10|
+------+----------+--------+
How can I now remove the duplicates by comparing dates?
Will it be possible by running SQL queries after registering it as a table?
I am a beginner in SparkSQL and I feel like my way of approaching this problem is weird. Is there a better way to do this kind of data processing?
You can do max(date) in the groupBy(); there is no need to join grouped with df.
// In 1.3.x, in order for the grouping column "name" to show up,
val grouped = df.groupBy("name").agg($"name",sum("duration"), max("date"))
// In 1.4+, grouping column "name" is included automatically.
val grouped = df.groupBy("name").agg(sum("duration"), max("date"))
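Putting it together with the union from the question, a minimal sketch (with aliases so the aggregated columns keep friendly names) would be:
val df = a.unionAll(b)
val grouped = df.groupBy("name")
  .agg(sum("duration").as("duration"), max("date").as("date"))
grouped.show()
This should yield one row per name, e.g. bob -> (7, 2015-01-13), matching the expected output above.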