I am exploring Spark's persist function. For some dataframes it seems to persist the data, whereas for others it does not, even though I call persist on all of them.
Here is my code with an explanation:
// loading csv as dataframe and creating a view
val src_data=spark.read.option("header",true).csv("sources/data.csv")
src_data.createTempView("src_data")
**There is already a table called test in Hive.**
Here I create 3 dataframes using src_data and test, and call persist on all 3 for later use:
//dataframe 1
val changed_data= spark.sql("select sc.* from src_data sc inner join default.test t on sc.id=t.id where t.value!=sc.value or t.description!=sc.description ")
changed_data.persist().show()
changed_data.createOrReplaceTempView("changed_data")
// dataframe 2
val new_data= spark.sql("select * from src_data where id not in (select distinct id from default.test)")
println("new_data")
new_data.persist().show()
new_data.createOrReplaceTempView("new_data")
// dataframe 3
val unchanged_data= spark.sql("select * from test where id not in (select id from changed_data)")
unchanged_data.persist().show()
unchanged_data.createTempView("unchanged_data")
**Then I truncate the table test:**
spark.sql("truncate table test")
***Then I print the 3 dataframes I persisted:***
new_data.show()
unchanged_data.show()
changed_data.show()
Before truncating test I can see data for all 3 dataframes with show, but afterwards I see data for only one of them.
I get data only for new_data (dataframe 2), even though I persisted all 3 dataframes and all three depend on table test.
Why this odd behaviour?
The dataframes will only be fully persisted if you invoke an action that goes through every record of the dataframe.
Remember that show() only displays the top 20 rows, as documented in the ScalaDocs.
Instead, you could apply something like count(), although that obviously has some performance cost.
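For example, a minimal sketch for the code above, assuming the three dataframes fit in the available storage memory: force a full scan with count() right after persist(), so the caches are completely populated before test is truncated.
// materialize the caches completely before touching the source table
changed_data.persist()
changed_data.count()
new_data.persist()
new_data.count()
unchanged_data.persist()
unchanged_data.count()
spark.sql("truncate table test")
// these now read from the persisted copies instead of re-running the SQL
changed_data.show()
new_data.show()
unchanged_data.show()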
Related
I am observing a weird issue. I am not sure whether it is a lack of knowledge about Spark on my part or something else.
I have a dataframe as shown in the code below. I create a temp view from it, and I am observing that after a merge operation the temp view becomes empty. I am not sure why.
val myDf = getEmployeeData()
myDf.createOrReplaceTempView("myView")
// Result1: Below lines display all the records
myDf.show()
spark.Table("myView").show()
// performing merge operation
val sql = s"""MERGE INTO employee AS a
USING myView AS b
ON a.Id = b.Id
WHEN MATCHED THEN UPDATE SET *
WHEN NOT MATCHED THEN INSERT *"""
spark.sql(sql)
// Result2: ISSUE is here. myDf & myView are both empty
myDf.show()
spark.Table("myView").show()
Edit
The getEmployeeData method performs a join between two dataframes and returns the result:
df1.as(df1Alias).join(df2.as(df2Alias), expr(joinString), "inner").filter(finalFilterString).select(s"$df1Alias.*")
Dataframes in Spark are lazily evaluated, i.e. not executed until an action like .show or .collect is invoked, or until they are used in an executed SQL statement. This also means that if you refer to the dataframe again, it gets re-evaluated.
Assuming there is no other background activity interfering, your function getEmployeeData apparently depends on the employee table. It gets executed both before and after the merge and can therefore yield different results.
To prevent this you can checkpoint the dataframe:
myDf.checkpoint()
or explicitly materialize it:
myDf.write.saveAsTable("myViewMaterialized")
and later refer to the materialized version.
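A minimal sketch of the checkpoint route, assuming a writable checkpoint directory (the path below is just an example): checkpoint() requires a checkpoint directory, materializes the data, and returns a new DataFrame, so use the returned value and re-register the view on it.
spark.sparkContext.setCheckpointDir("/tmp/spark-checkpoints")
// checkpoint() cuts the lineage back to the employee table and returns a new DataFrame
val myDfStable = myDf.checkpoint()
myDfStable.createOrReplaceTempView("myView")
spark.sql(sql)       // run the MERGE against the stable snapshot
myDfStable.show()    // still shows the pre-merge rows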
I agree with the points #Kombajn zbozowy made: dataframes in Spark are lazily evaluated and will be re-evaluated if you call an action on them again.
I would like to point out that this is normal, expected behavior and may not have anything to do with the merge operation itself.
For example, suppose the dataframe you get as the output of the join contains only inserts, and you perform those inserts on the target table using the dataframe write API. If you then run df.show() again, the output will be empty, because the dataframe is re-evaluated, the join no longer finds any differences, and therefore no records are returned.
The same holds true for the merge operation: it also applies inserts/updates to the target table, so when you rerun the dataframe it no longer shows any output rows.
I have a dataframe with some 10 columns. I have selected 4 of these 10 columns and cleaned their values (by calling an external API and using its response). I would now like to create a new dataframe (as the old one cannot be updated) in which these 4 columns carry their cleaned values (as returned by the API) while the other 6 are kept as is.
I tried exploring .na.replace and .withColumn, but they both work on some condition for the columns:
val newdf = df.withColumn("col1", when(col("col1") === "XYZ", cleanedcol1)
  .otherwise(col("col1")))
And
val newdf = df.na.replace("col1", Map("col1" -> cleanedcol1))
The first snippet matches the value of col1 against "XYZ" and only then replaces it; I want an unconditional change.
The second one actually looks for the string "col1" in the col1 column and hence does not replace anything.
What is the optimal approach to achieve this? The source of the df is Kafka, so the traffic will be fast.
You can make an unconditional change with withColumn; just write
val newdf = df.withColumn("col1", newColumnValue)
Where newColumnValue is the new value you want to set for the column.
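For example, a sketch of cleaning all four columns unconditionally; cleanValue here is a hypothetical stand-in for the call to your external API.
import org.apache.spark.sql.functions.{col, udf}

// hypothetical cleaning logic standing in for the external API response
val cleanValue = udf((raw: String) => if (raw == null) null else raw.trim)

val newdf = df
  .withColumn("col1", cleanValue(col("col1")))
  .withColumn("col2", cleanValue(col("col2")))
  .withColumn("col3", cleanValue(col("col3")))
  .withColumn("col4", cleanValue(col("col4")))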
I work with Spark 1.6.1 in Scala.
I have one dataframe, and I want to create different dataframes from it while reading the source only once.
For example, one dataframe has two columns, ID and TYPE, and I want to create two dataframes: one with the rows where TYPE = A and another with the rows where TYPE = B.
I've checked other posts on Stack Overflow, but only found the option of reading the dataframe twice.
However, I would like a solution with the best possible performance.
Kind regards.
Spark will read from the data source multiple times if you perform multiple actions on the data. The way to avoid this is to use cache(). That way, the data is kept in memory after the first action, which makes subsequent actions on it faster.
Your two dataframes can then be created with only one read of the data source:
val df = spark.read.csv(path).cache()
val dfA = df.filter($"TYPE" === "A").drop("TYPE")
val dfB = df.filter($"TYPE" === "B").drop("TYPE")
The "TYPE" column is dropped as it should be unnecessary after the separation.
When I run my Spark job (version 2.1.1) on EMR, each run counts a different number of rows for a dataframe. I first read data from S3 into 4 different dataframes; these counts are always consistent. But after joining the dataframes, the result of the join has a different count on each run. Afterwards I also filter the result, and that too has a different count on each run. The variations are small, 1-5 rows, but it is still something I would like to understand.
This is the code for the join:
val impJoinKey = Seq("iid", "globalVisitorKey", "date")
val impressionsJoined: DataFrame = impressionDsNoDuplicates
.join(realUrlDSwithDatenoDuplicates, impJoinKey, "outer")
.join(impressionParamterDSwithDateNoDuplicates, impJoinKey, "left")
.join(chartSiteInstance, impJoinKey, "left")
.withColumn("timestamp", coalesce($"timestampImp", $"timestampReal", $"timestampParam"))
.withColumn("url", coalesce($"realUrl", $"url"))
and this is for the filter:
val impressionsJoined: Dataset[ImpressionJoined] = impressionsJoinedFullDay.where($"timestamp".geq(new Timestamp(start.getMillis))).cache()
I have also tried using the filter method instead of where, but with the same results.
Any thoughts?
Thanks
Nir
Is it possible that one of the data sources changes over time?
Since impressionsJoined is not cached, Spark will re-evaluate it from scratch on every action, and that includes reading the data again from the source.
Try caching impressionsJoined after the join.
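A minimal sketch, reusing the names from the question (the new variable names are only for illustration): cache the joined dataframe and force it to materialize once, so every later count and filter sees the same snapshot of the sources.
val impressionsJoinedCached = impressionsJoined.cache()
impressionsJoinedCached.count()   // runs the joins once and pins the result in memory
// downstream filtering then reuses the cached rows instead of re-reading from S3
val filteredDay = impressionsJoinedCached.where($"timestamp".geq(new Timestamp(start.getMillis)))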
We have customer data in one Hive table and sales data in another Hive table, which holds terabytes of data. We are trying to pull the sales data for multiple customers and save it to a file.
What we tried so far:
We tried a left outer join between the customer and sales tables, but because of the huge sales data it is not working:
val data = customer.join(sales, customer("id") === sales("customerID"), "left_outer")
So the alternative is to pull the data from the sales table for a specific list of customer regions, check whether each region's data contains the customer data, save it in another dataframe if it does, and load the data into the same dataframe for all the regions.
My question here is whether such multiple inserts of data into a dataframe are supported in Spark.
If the sales dataframe is larger than the customer dataframe then you could simply switch the order of the dataframes in the join operation.
val data = sales.join(customer, sales("customerID") === customer("id"), "left_outer")
You could also add a hint for Spark to broadcast the smaller dataframe, though I believe it needs to be smaller than 2GB:
import org.apache.spark.sql.functions.broadcast
val data = sales.join(broadcast(customer), sales("customerID") === customer("id"), "left_outer")
Iteratively merging dataframes, as in your other approach, is also possible. For this purpose you can use the union method (Spark 2.0+) or unionAll (older versions), which appends one dataframe to another. If you have a list of dataframes that you want to merge with each other, you can use union together with reduce:
val dataframes = Seq(df1, df2, df3)
dataframes.reduce(_ union _)
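For completeness, a sketch of the region-by-region idea from the question, built on the same union-with-reduce pattern; the regions list and the region column name are assumptions for illustration.
import org.apache.spark.sql.functions.broadcast

val regions = Seq("NORTH", "SOUTH", "EAST", "WEST")   // assumed region values

val perRegion = regions.map { region =>
  sales.filter(sales("region") === region)            // assumed region column
    .join(broadcast(customer), sales("customerID") === customer("id"), "left_outer")
}

val data = perRegion.reduce(_ union _)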