Spark join strategy in Cloud Data Fusion - google-cloud-data-fusion

In Cloud Data Fusion I am using a Joiner transform to join two tables.
One of them is a large table with about 87M records, while the other is a smaller table with only ~250 records. I am using 200 partitions in the joiner.
This causes the following failure:
org.apache.spark.SparkException: Job aborted due to stage failure:
Task 50 in stage 7.0 failed 4 times, most recent failure: Lost task
50.3 in stage 7.0 (TID xxx, cluster_workerx.c.project.internal, executor 6): ExecutorLostFailure (executor 6 exited caused by one of
the running tasks) Reason: Executor heartbeat timed out after 133355
ms java.util.concurrent.ExecutionException:
java.lang.RuntimeException: org.apache.spark.SparkException:
Application application_xxxxx finished with failed status
On a closer look at the Spark UI, across the 200 tasks for the join, nearly 80% of the 87M records go into the output of a single task, which fails with the heartbeat error, while the tasks that succeed have very small outputs of fewer than ~10k records each.
It seems Spark performs a shuffle hash join. Is there a way in Data Fusion/CDAP to force a broadcast join, since one of my tables is very small? Or can I make some configuration changes to the cluster config to make this join work?
What performance tuning can I do in the Data Fusion pipeline? I didn't find any reference to configuration or tuning in the Data Fusion documentation.

You can use org.apache.spark.sql.functions.broadcast(Dataset[T]) to mark a DataFrame/Dataset to be broadcast while being joined. The broadcast is not always guaranteed, but for 250 records it will work. If the DataFrame with 87M rows is evenly partitioned, then it should improve performance.
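For reference, here is a minimal sketch of what that hint looks like in plain Spark Scala (outside Data Fusion); the paths and the join key are hypothetical placeholders:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.broadcast

val spark = SparkSession.builder().appName("BroadcastJoinSketch").getOrCreate()

// Hypothetical inputs: largeDF has ~87M rows, smallDF has ~250 rows.
val largeDF = spark.read.parquet("/path/to/large_table")
val smallDF = spark.read.parquet("/path/to/small_table")

// Marking the small side with broadcast() hints Spark to ship it to every
// executor, so the 87M-row side does not need to be shuffled for the join.
val joined = largeDF.join(broadcast(smallDF), Seq("join_key"))

Whether the Data Fusion Joiner plugin exposes this hint directly depends on the plugin version, so treat the snippet as an illustration of the Spark-level mechanism rather than a Data Fusion setting.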

Related

Apache Zeppelin Can't Write Deltatable to Spark

I'm attempting to run the following commands using the "%spark" interpreter in Apache Zeppelin:
val data = spark.range(0, 5)
data.write.format("delta").save("/tmp/delta-table")
Which yields this output (truncated to omit repeated output):
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 4 times, most recent failure: Lost task 0.3 in stage 1.0 (TID 7, 192.168.64.3, executor 2): java.io.FileNotFoundException: File file:/tmp/delta-table/_delta_log/00000000000000000000.json does not exist
It is possible the underlying files have been updated. You can explicitly invalidate the cache in Spark by running 'REFRESH TABLE tableName' command in SQL or by recreating the Dataset/DataFrame involved.
at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:127)
at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:177)
...
I'm unable to figure out why this is happening at all, as I'm too unfamiliar with Spark. Any tips? Thanks for your help.

Why does Apache Spark do some checks and raise those exceptions during the job runtime, but never throws them during unit tests?

There was a bug in my Scala code that formatted the date of the timestamp, which was then concatenated as a String to some non-timestamp column of the Spark Streaming DataFrame:
concat(date_format(col("timestamp"), "yyyy-MM-DD'T'HH:mm:ss.SSS'Z'"))
So during the tests everything was OK, the tests sending the messages to Kafka passed, and I was able to see those messages in the Kafka Tool.
Note the "292nd of October" there, because of DD (day of year) instead of dd (day of month) in the formatter.
But then in the executor there was some extra check that wasn't passed, and the job crashed:
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 8.0 failed 1 times, most recent failure: Lost task 1.0 in stage 8.0 (TID 12, kafkadatageneratorjob-driver, executor driver): org.apache.spark.SparkUpgradeException: You may get a different result due to the upgrading of Spark 3.0: Fail to format it to '2021-10-292T14:27:12.577Z' in the new formatter. You can set spark.sql.legacy.timeParserPolicy to LEGACY to restore the behavior before Spark 3.0, or set to CORRECTED and treat it as an invalid datetime string.
How can I enable the same strict check in the unit tests so that they also fail on it, without explicitly asserting on the value, but just by forcing the timeParserPolicy check to be executed in the tests as well?
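Not stated in the question, but one possible sketch: if the unit tests build their own local SparkSession, the policy can be pinned there explicitly so the same strict formatter check referenced in the exception also runs in tests (EXCEPTION is the strict Spark 3 mode; LEGACY and CORRECTED are the relaxed alternatives mentioned in the error message):

import org.apache.spark.sql.SparkSession

// Hypothetical test setup: force the strict Spark 3 datetime formatter
// behaviour instead of relying on whatever default the test environment uses.
val spark = SparkSession.builder()
  .master("local[*]")
  .appName("TimeParserPolicyTest")
  .config("spark.sql.legacy.timeParserPolicy", "EXCEPTION")
  .getOrCreate()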

Spark read failed due to duplicate column

I am trying to read a Parquet file from S3 in Databricks, using Scala.
Below is the simple read code:
val df = spark.read.parquet(s"/mnt/$MountName/tstamp=2020_03_25")
display(df)
MountName is the DBFS location where the data is mounted from S3.
But I am getting an error which is due to a duplicate key in the file.
SparkException: Job aborted due to stage failure: Task 0 in stage 813.0 failed 4 times, most recent failure: Lost task 0.3 in stage 813.0 (TID 79285, 10.179.245.218, executor 0): com.databricks.sql.io.FileReadException: Error while reading file dbfs:/mnt/Alibaba_data/tstamp=2020_03_25/ts-1585154320710.parquet.gz.
Caused by: java.lang.RuntimeException: Found duplicate field(s) "subtype": [subtype, subType] in case-insensitive mode
Now I need to overcome it, maybe by making the read case sensitive, by dropping the column during the read, or by any other means if suggested.
Suggestions please.
Try with case sensitivity enabled.
spark.sql.caseSensitive should be set to true.
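A minimal sketch of that suggestion against the same mounted path from the question:

// Enable case-sensitive field resolution so "subtype" and "subType" are
// treated as distinct columns instead of conflicting duplicates.
spark.conf.set("spark.sql.caseSensitive", "true")

val df = spark.read.parquet(s"/mnt/$MountName/tstamp=2020_03_25")
display(df)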

MongoSpark E11000 error when writing to a MongoDB Replica Set

I am using a Spark 2 application which uses the following command from com.mongodb.spark.MongoSpark to write a DataFrame to a three-node MongoDB Replica Set:
//The real command is similar to this one, depending on options
//set to the DataFrame and the DataFrameWriter object about MongoDB configurations,
//such as the writeConcern
var df: DataFrameWriter[Row] = spark.sql(sql).write
.option("uri", theUri)
.option("database", theDatabase)
.option("collection", theCollection)
.option("replaceDocument", "false")
.mode("append")
[...]
MongoSpark.save(df)
The fact is that although I am sure the source data, which comes from a Hive table, has a unique primary key, I get a duplicate key error when the Spark application is running:
2019-01-14 13:01:08 ERROR: Job aborted due to stage failure: Task 51 in stage 19.0 failed 8 times,
most recent failure: Lost task 51.7 in stage 19.0 (TID 762, mymachine, executor 21):
com.mongodb.MongoBulkWriteException: Bulk write operation error on server myserver.
Write errors: [BulkWriteError{index=0, code=11000,
message='E11000 duplicate key error collection:
ddbb.tmp_TABLE_190114125615 index: idx_unique dup key: { : "00120345678" }', details={ }}].
at com.mongodb.connection.BulkWriteBatchCombiner.getError(BulkWriteBatchCombiner.java:176)
at com.mongodb.connection.BulkWriteBatchCombiner.throwOnError(BulkWriteBatchCombiner.java:205)
[...]
I have tried setting the write concern to "3" or even "majority". Furthermore, the timeout has been set to 4/5 seconds, but sometimes this duplicate key error still appears.
I would like to know how to configure the load so as not to get duplicate entries when writing to the Replica Set.
Any suggestions? Thanks in advance!

Spark streaming Redis Read Time Out with Scala

While I'm reading a table from Redis, I'm getting the error below.
The code below normally works well:
val readDF = spark.sparkContext.fromRedisKeyPattern(tableName, 5).getHash().toDS()
Normally it works for fewer than 2 million rows, but if I'm reading a big table I get this error:
18/10/11 17:08:25 ERROR Executor: Exception in task 37.0 in stage 3.0
(TID 338) redis.clients.jedis.exceptions.JedisConnectionException:
java.net.SocketTimeoutException: Read timed out at
redis.clients.util.RedisInputStream.ensureFill(RedisInputStream.java:202)
at
redis.clients.util.RedisInputStream.readByte(RedisInputStream.java:40)
val redis = spark.sparkContext.fromRedisKeyPattern(tableName, 100).getHash().toDS()
I also changed some settings on Redis, but I don't think the problem is there.
Do you know how I can solve this problem?