how do I join a pyspark dataframe on two different columns?
Cols df1: ID, DATE
Cols df2: user, DATE
I want to join on df1.ID == df2.user and df1.DATE == df2.DATE.
Joindf = df1.join(df2.withColumnRenamed("user", "ID"), ["ID", "DATE"])
should do it for you.
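If you would rather not rename anything, a minimal sketch of the alternative: join on an explicit condition and drop the duplicated key columns afterwards.
# join on explicit conditions instead of renaming
joined = df1.join(
    df2,
    (df1["ID"] == df2["user"]) & (df1["DATE"] == df2["DATE"]),
)
# drop df2's duplicate key columns, keeping df1's ID and DATE
joined = joined.drop(df2["user"]).drop(df2["DATE"])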
I need help concatenating two dataframes, where one is empty and the other has data. Could you please explain how to do this in pyspark?
Suppose df2 is empty and df1 has some records. In pandas I am using:
df2 = pd.concat([df2, df1])
But how to perform this operation in pyspark?
df1:
+--------------------+----------+---------+
| Programname|Projectnum| Drug|
+--------------------+----------+---------+
|Non-Oncology Phar...|SR0480-000|Invokamet|
+--------------------+----------+---------+
df2:
++
||
++
++
I tried many options; this one worked for me.
To concat df2 with df1, I first need to create df2 with the same structure as df1, then use union for the concatenation.
df2 = sqlContext.createDataFrame(sc.emptyRDD(), df1.schema)
df2 = df2.union(df1)
result:
df2:
+--------------------+----------+---------+
| Programname|Projectnum| Drug|
+--------------------+----------+---------+
|Non-Oncology Phar...|SR0480-000|Invokamet|
+--------------------+----------+---------+
You can use the union method:
df = df1.union(df2)
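On newer Spark versions you can also skip the SQLContext/RDD step entirely; a minimal sketch, assuming a SparkSession named spark and Spark 2.3+ for unionByName:
# build the empty frame from the SparkSession and align columns by name
df2 = spark.createDataFrame([], df1.schema)
df2 = df2.unionByName(df1)
Note that union matches columns by position, while unionByName matches them by name, which is safer when the column order might differ.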
How do I locate the difference between the columns of 2 dataframes?
This is causing issues when I join 2 dataframes.
df1_cols = df1.columns
df2_cols = df2.columns
This returns the columns of the two dataframes in two list variables.
Thanks
df.columns returns a plain Python list here, so you can use any Python tool to compare it with another list such as df2_cols. For example, you can use set to check the common columns in the two DataFrames:
df1_cols = df1.columns
df2_cols = df2.columns
set(df1_cols).intersection(set(df2_cols)) # check common columns
set(df1_cols) - set(df2_cols) # check columns in df1 but not in df2
set(df2_cols) - set(df1_cols) # check columns in df2 but not in df1
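If you just want a quick check that the column sets line up before a join, a small sketch along the same lines:
# symmetric_difference lists every column present in exactly one DataFrame
mismatched = set(df1_cols).symmetric_difference(df2_cols)
if mismatched:
    print(f"Columns not shared by both DataFrames: {sorted(mismatched)}")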
I'm new to PySpark, hence the question.
I have two dataframes df1 and df2 with columns A, B and C.
Only Col C can have different values in these two dataframes.
How do I compare df1 and df2 and create df3 with columns A, B and C, which only has the rows where the value of C differs between df1 and df2?
Any help appreciated.
Inner join and filter:
from pyspark.sql.functions import col
df1.alias("df1").join(df2.alias("df2"), ["a", "b"]).where(col("df1.c") != col("df2.c"))
If you want to handle missing values as well, use a null-safe comparison:
df1.alias("df1").join(df2.alias("df2"), ["a", "b"], "fullouter").where(
~col("df1.c").eqNullSafe(col("df2.c"))
)
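If you also want to see both values of C side by side in the result, a sketch in the same vein (column names a/b/c assumed as above):
# keep the join keys plus both versions of c so the difference is visible
result = (
    df1.alias("df1")
    .join(df2.alias("df2"), ["a", "b"])
    .where(col("df1.c") != col("df2.c"))
    .select("a", "b", col("df1.c").alias("c_df1"), col("df2.c").alias("c_df2"))
)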
I have to join two dataframes, which is very similar to the task given here Joining two DataFrames in Spark SQL and selecting columns of only one
However, I want to select only the second column from df2. In my task, I am going to use the join function for two dataframes within a reduce function for a list of dataframes. In this list of dataframes, the column names will be different. However, in each case I would want to keep the second column of df2.
I could not find anywhere how to select a dataframe's columns by numeric index. Any help is appreciated!
EDIT: ANSWER
I figured out the solution. Here is one way to do it:
// Requires import spark.implicits._ for the $"..." column syntax
def joinDFs(df1: DataFrame, df2: DataFrame): DataFrame = {
  val df2cols = df2.columns
  val desiredDf2Col = df2cols(1) // the second column
  val df3 = df1.as("df1").join(df2.as("df2"), $"df1.time" === $"df2.time")
    .select($"df1.*", $"df2.$desiredDf2Col")
  df3
}
And then I can apply this function in a reduce operation on a list of dataframes.
var listOfDFs: List[DataFrame] = List()
// Populate listOfDFs as you want here
val joinedDF = listOfDFs.reduceLeft((x, y) => {joinDFs(x, y)})
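For anyone needing the same pattern in PySpark, a rough sketch of the equivalent (assuming, as in the Scala version, that every dataframe in list_of_dfs shares a "time" join key):
from functools import reduce
from pyspark.sql.functions import col

def join_dfs(df1, df2):
    second_col = df2.columns[1]  # the second column of df2
    return (
        df1.alias("df1")
        .join(df2.alias("df2"), col("df1.time") == col("df2.time"))
        .select("df1.*", f"df2.{second_col}")
    )

joined = reduce(join_dfs, list_of_dfs)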
To select the second column in your dataframe you can simply do:
val df3 = df2.select(df2.columns(1))
This will first find the second column name and then select it.
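For reference, the PySpark equivalent is essentially the same, since columns is a plain Python list:
df3 = df2.select(df2.columns[1])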
If the join and select logic you want to define in the reduce function is similar to Joining two DataFrames in Spark SQL and selecting columns of only one, then you should do the following:
import org.apache.spark.sql.functions._
d1.as("d1").join(d2.as("d2"), $"d1.id" === $"d2.id").select(Seq(1) map d2.columns map col: _*)
Keep in mind that the column selected by index (Seq(1), i.e. the second column of d2) must not have the same name as any column of d1, or the select will be ambiguous.
You can select multiple columns as well, but remember the note above:
import org.apache.spark.sql.functions._
d1.as("d1").join(d2.as("d2"), $"d1.id" === $"d2.id").select(Seq(1, 2) map d2.columns map col: _*)
Suppose there is a column in one dataframe and a column with a similar schema in another dataframe. How do I check whether the values in those columns are the same, without joining them, as there is no common attribute?
DF1
serial_nm
abc
mnc
pqr
DF2
ser_nm
hgf
mnc
uio
pqr
lok
And I want a third dataframe DF3 as output:
DF3
mnc
pqr
I tried this:
val DF3 = DF1.filter(DF1("serial_nm") === DF2("ser_nm"))
But it's not working.
Please help. Thanks!
I believe you can use a join; a filter cannot reference a column from a different DataFrame, which is why your attempt fails. Consider using it like this:
val DF3 = DF1.join(DF2, DF1("serial_nm") === DF2("ser_nm"))
or
val DF3 = DF1.join(DF2).where(DF1("serial_nm") === DF2("ser_nm"))
Both approaches are equivalent.
Note: To avoid problems with ambiguous columns when the two dataframes share a column name, one option is to rename the shared column before the join, e.g.:
val df2_renamed = DF2.withColumnRenamed("ser_nm", "df2_ser_nm")