How can I add two different dataframes element by element in pyspark

I am trying to add two dataframes that have the same schema and the same number of rows in pyspark. I would like to add Dataframe1 to Dataframe2 element by element. In R I can do:
DataframeSum = Dataframe1 + Dataframe2
Is there a way to do this in pyspark?
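There is no built-in element-wise "+" for DataFrames. A minimal sketch of one way to do it, using hypothetical example dataframes and assuming the current row order of each dataframe is the alignment you want (Spark does not guarantee row order on its own), is to pair the rows up by position at the RDD level and add the corresponding fields:
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
# hypothetical inputs with the same schema and row count
df1 = spark.createDataFrame([(1.0, 2.0), (3.0, 4.0)], ["a", "b"])
df2 = spark.createDataFrame([(10.0, 20.0), (30.0, 40.0)], ["a", "b"])
# index each row by its position, join on that index, then add field by field
rdd1 = df1.rdd.zipWithIndex().map(lambda x: (x[1], x[0]))
rdd2 = df2.rdd.zipWithIndex().map(lambda x: (x[1], x[0]))
summed_rdd = rdd1.join(rdd2).map(lambda kv: tuple(a + b for a, b in zip(kv[1][0], kv[1][1])))
DataframeSum = summed_rdd.toDF(df1.columns)
DataframeSum.show()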

Related

Convert a Pyspark Dataframe into a List with actual values

I am trying to convert a Pyspark dataframe column to a list of values, NOT Row objects.
My ultimate goal is to use it as a filter for filtering another dataframe.
I have tried the following:
X = df.select("columnname").collect()
But when I use it to filter, I am unable to:
Y = dataframe.filter(~dataframe.columnname.isin(X))
I also tried converting it into a numpy array and aggregating with collect_list():
df.groupby('columnname').agg(collect_list(df["columnname"]))
Please advise.
collect() returns an array of Row objects by gathering the data from the executors. If you need an array of values in native datatypes, you have to fetch the column from each Row object explicitly.
This code creates a DataFrame with a column named number of LongType:
df = spark.range(0, 10, 2).toDF("number")
Convert this into a python list:
num_list = [row.number for row in df.collect()]
Now this list can be used in any dataframe to filter values with the isin function:
from pyspark.sql.functions import col
df1 = spark.range(10).toDF("number")
df1.filter(~col("number").isin(num_list)).show()

How to find if one column is contained in another array column in pyspark

I have 2 columns with the following schema in a pyspark dataframe:
('pattern', 'array<struct<pattern:string,positions:array<int>>>')
('distinct_patterns', 'array<array<struct<pattern:string,positions:array<int>>>>')
I want to find those rows where pattern is present in distinct_patterns.
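One way to approach this, as a sketch that assumes Spark's equality semantics on nested array/struct types is what you want, is to check whether distinct_patterns contains the whole pattern array as one of its elements:
from pyspark.sql.functions import expr
# df is assumed to be the dataframe holding the two columns described above
matches = df.filter(expr("array_contains(distinct_patterns, pattern)"))
matches.show(truncate=False)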

Convert two separate columns of type org.apache.spark.sql.Column into a dataframe of two columns in scala

I have two columns of type org.apache.spark.sql.Column.
I need to create a dataframe from these two columns such that the dataframe looks like [col1, col2]. Both columns contain data of double type.
Any suggestions on how to create the dataframe?

Converting Pandas into Pyspark

So I'm trying to convert my python algorithm to Spark-friendly code and I'm having trouble with this one:
indexer = recordlinkage.SortedNeighbourhoodIndex \
(left_on=column1, right_on=column2, window=41)
pairs = indexer.index(df_1,df_2)
It basically compares one column against the other and generates index pairs for those likely to be the same (Record Matching).
My code:
df1 = spark.read.load("*.csv")
df2 = spark.read.load("*.csv")
func_udf = udf(index.indexer) ????
df = df.withColumn('column1',func_udf(df1.column1,df2.column2)) ???
I've been using udf for transformations involving just one dataframe and one column, but how do I run a function that requires two arguments, one column from one dataframe and the other from another dataframe? I can't join both dataframes as they have different lengths.
That's not how udfs work. A UserDefinedFunction can operate only on data that comes from a single DataFrame:
a standard udf operates on data from a single row;
a pandas_udf operates on data from a single partition or a single group.
"I can't join both dataframes as they have different lengths."
A join is exactly what you should do here (standard or manual broadcast). There is no need for the objects to be the same length - a Spark join is a relational join, not a row-wise merge.
For similarity joins you can use built-in approx join tools:
Efficient string matching in Apache Spark
Optimize Spark job that has to calculate each to each entry similarity and output top N similar items for each
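Before reaching for those, a rough sketch of the brute-force alternative - only workable when a cartesian product of the two dataframes is affordable, and using a hypothetical edit-distance threshold of 3 - is a cross join filtered on levenshtein distance:
from pyspark.sql.functions import levenshtein
# df1 and df2 are the two loaded dataframes from the question
candidate_pairs = df1.crossJoin(df2).where(levenshtein(df1["column1"], df2["column2"]) <= 3)
candidate_pairs.show()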

Add list as column to Dataframe in pyspark

I have a list of integers and a sqlcontext dataframe with the number of rows equal to the length of the list. I want to add the list as a column to this dataframe maintaining the order. I feel like this should be really simple but I can't find an elegant solution.
You cannot simply add a list as a dataframe column, since the list is a local object and the dataframe is distributed. You can try one of the following approaches:
convert the dataframe to a local collection with collect() or toLocalIterator() and, for each row, append the corresponding value from the list, OR
convert the list to a dataframe with an extra index column (matching a key in the original dataframe) and then join the two, as sketched below.
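A minimal sketch of the second approach, using a generated row number as the join key and assuming the dataframe's current ordering is the one the list should line up with (df and values are hypothetical placeholders):
from pyspark.sql import SparkSession
from pyspark.sql.functions import monotonically_increasing_id, row_number
from pyspark.sql.window import Window
spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([("a",), ("b",), ("c",)], ["key"])  # existing dataframe
values = [10, 20, 30]  # the list to attach, same length as df
# give the dataframe a consecutive row number to join on
w = Window.orderBy(monotonically_increasing_id())
left = df.withColumn("_row", row_number().over(w))
# turn the list into a dataframe carrying the same row numbers
right = spark.createDataFrame([(i + 1, v) for i, v in enumerate(values)], ["_row", "value"])
result = left.join(right, "_row").drop("_row")
result.show()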