I am trying to extract data from Redshift and insert it into S3 using a newly created Glue table on that S3 location.
Versions:
PySpark 2.4.0
EMR emr-5.21.0
My write looks something like this:
date_filtered_df.coalesce(int(args.numpartitions)) \
    .write \
    .mode("overwrite") \
    .format("parquet") \
    .insertInto("{}.{}_stg".format(args.database, args.table))
The table is created just before this insert, and it points to a completely empty location.
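For context, here is a minimal sketch of the kind of DDL that would create such a staging table. This is an assumption on my part: the data columns and the S3 location are placeholders, only the partition columns are taken from the partition path in the error below, and spark and args are the session and arguments already used in the write above.

# Hypothetical staging-table DDL; only sales_dt, biz_unit_code and geo_code
# come from the error path below, everything else is a placeholder.
spark.sql("""
    CREATE EXTERNAL TABLE IF NOT EXISTS {db}.{tbl}_stg (
        order_id STRING,
        amount   DOUBLE
    )
    PARTITIONED BY (sales_dt STRING, biz_unit_code STRING, geo_code STRING)
    STORED AS PARQUET
    LOCATION 's3://<bucket>/<path>/'
""".format(db=args.database, tbl=args.table))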
But the code fails with:
19/12/09 14:37:48 WARN TaskSetManager: Lost task 576.1 in stage 1.0 (TID 3125, ip-172-31-31-203.ec2.internal, executor 401): org.apache.spark.SparkException: Task failed while writing rows.
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:254)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:169)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:168)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.hadoop.fs.FileAlreadyExistsException: File already exists:s3://bucketname/trusted/databasename/dw/2019-12-08/tbalename/.hive-staging_hive_2019-12-09_12-49-15_136_2979557082816709535-1/-ext-10000/sales_dt=20051130/biz_unit_code=CS/geo_code=AMER/part-00576-84bcde9a-4fc0-402b-8aab-8d71e06f8c43.c000
at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getHiveRecordWriter(HiveFileFormatUtils.java:249)
at org.apache.spark.sql.hive.execution.HiveOutputWriter.<init>(HiveFileFormat.scala:123)
at org.apache.spark.sql.hive.execution.HiveFileFormat$$anon$1.newInstance(HiveFileFormat.scala:103)
at org.apache.spark.sql.execution.datasources.DynamicPartitionDataWriter.newOutputWriter(FileFormatDataWriter.scala:236)
at org.apache.spark.sql.execution.datasources.DynamicPartitionDataWriter.write(FileFormatDataWriter.scala:260)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:242)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:239)
at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1394)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:245)
... 10 more
Any help is appreciated.
Related
I'm trying to write a simple DataFrame in parquet format to Azure Blob Storage.
Note that the following code snippets work locally, so my guess is that the problem is related to the Azure libraries. I also tried the delta format and it works (even though it uses parquet under the hood).
Using Spark 3.1.1, Scala 2.12.10, OpenJDK 1.8.0_292.
I set up my Spark session as usual, something like:
$SPARK_HOME/bin/spark-shell \
(...cluster settings...) \
--conf spark.hadoop.fs.azure.account.key.<account>.blob.core.windows.net="${AZURE_BLOB_STORAGE_KEY}" \
--conf spark.hadoop.fs.AbstractFileSystem.wasb.impl=org.apache.hadoop.fs.azure.Wasb \
--conf spark.hadoop.fs.wasb.impl=org.apache.hadoop.fs.azure.NativeAzureFileSystem \
--conf spark.delta.logStore.class=org.apache.spark.sql.delta.storage.AzureLogStore \
--packages org.apache.hadoop:hadoop-azure:2.7.0,com.azure:azure-storage-blob:12.8.0,com.azure:azure-storage-common:12.8.0,com.microsoft.azure:azure-storage:2.0.0,io.delta:delta-core_2.12:0.8.0
(...other irrelevant settings...)
I tried other versions of the azure-storage-blob, azure-storage-common and azure-storage packages, all resulting in the same problem.
To reproduce the problem I create a simple DataFrame and write it to the storage:
val columns = Seq("language", "users_count")
val data = Seq(("Java", "20000"), ("Python", "100000"), ("Scala", "3000"))
val rdd = spark.sparkContext.parallelize(data)
val df = spark.createDataFrame(rdd).toDF(columns: _*)
df.show
// +--------+-----------+
// |language|users_count|
// +--------+-----------+
// | Java| 20000|
// | Python| 100000|
// | Scala| 3000|
// +--------+-----------+
df.write.parquet("wasb://<container>@<account>.blob.core.windows.net/<path>")
When writing in parquet format I get a com.microsoft.azure.storage.StorageException: One of the request inputs is not valid exception:
21/09/21 13:38:14 WARN TaskSetManager: Lost task 0.0 in stage 5.0 (TID 83) (10.244.6.3 executor 6): org.apache.spark.SparkException: Task failed while writing rows.
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:296)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$write$15(FileFormatWriter.scala:210)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:131)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:497)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1439)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:500)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.fs.azure.AzureException: com.microsoft.azure.storage.StorageException: One of the request inputs is not valid.
at org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.rename(AzureNativeFileSystemStore.java:2482)
at org.apache.hadoop.fs.azure.NativeAzureFileSystem$FolderRenamePending.execute(NativeAzureFileSystem.java:424)
at org.apache.hadoop.fs.azure.NativeAzureFileSystem.rename(NativeAzureFileSystem.java:1997)
at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.commitTask(FileOutputCommitter.java:531)
at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.commitTask(FileOutputCommitter.java:502)
at org.apache.spark.mapred.SparkHadoopMapRedUtil$.performCommit$1(SparkHadoopMapRedUtil.scala:50)
at org.apache.spark.mapred.SparkHadoopMapRedUtil$.commitTask(SparkHadoopMapRedUtil.scala:77)
at org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.commitTask(HadoopMapReduceCommitProtocol.scala:260)
at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.commit(FileFormatDataWriter.scala:79)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$1(FileFormatWriter.scala:280)
at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1473)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:286)
... 9 more
Caused by: com.microsoft.azure.storage.StorageException: One of the request inputs is not valid.
at com.microsoft.azure.storage.StorageException.translateException(StorageException.java:162)
at com.microsoft.azure.storage.core.StorageRequest.materializeException(StorageRequest.java:307)
at com.microsoft.azure.storage.core.ExecutionEngine.executeWithRetry(ExecutionEngine.java:177)
at com.microsoft.azure.storage.blob.CloudBlob.startCopyFromBlob(CloudBlob.java:764)
at org.apache.hadoop.fs.azure.StorageInterfaceImpl$CloudBlobWrapperImpl.startCopyFromBlob(StorageInterfaceImpl.java:399)
at org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.rename(AzureNativeFileSystemStore.java:2449)
... 20 more
Any hints or ideas on what is causing it or what to do to make it work?
Thank you!
Two things helped me write to Azure storage using the WASB protocol:
the storage container should be Data Lake Gen1 (I tried Gen2 and it failed)
you need to add the following dependency/jar: org.codehaus.jackson:jackson-mapper-lgpl:1.9.13
I had to use WASB because my version of Hadoop (2.9.2) does not support ABFS; if you have hadoop-2.10.1+, please use ABFS.
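For reference, here is a minimal PySpark sketch of an ABFS write with account-key authentication. The account, container, path and key are placeholders, and hadoop-azure 2.10.1+ (or 3.x) must be on the classpath.

from pyspark.sql import SparkSession

# Placeholders: <account>, <container>, <path>, <storage-key>.
spark = (SparkSession.builder
         .appName("abfs-parquet-write")
         .config("spark.hadoop.fs.azure.account.key.<account>.dfs.core.windows.net",
                 "<storage-key>")
         .getOrCreate())

df = spark.createDataFrame(
    [("Java", "20000"), ("Python", "100000"), ("Scala", "3000")],
    ["language", "users_count"])

# ABFS(S) URIs target the Data Lake Gen2 endpoint (dfs.core.windows.net).
df.write.mode("overwrite").parquet(
    "abfss://<container>@<account>.dfs.core.windows.net/<path>")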
I am trying to write a DataFrame to Cassandra using PySpark, but it's throwing an error:
py4j.protocol.Py4JJavaError: An error occurred while calling o74.save.
: org.apache.spark.SparkException: Job aborted due to stage failure:
Task 6 in stage 3.0 failed 4 times, most recent failure: Lost task 6.3
in stage 3.0 (TID 24, ip-172-31-11-193.us-west-2.compute.internal,
executor 1): java.lang.NoClassDefFoundError:
com/twitter/jsr166e/LongAdder
at org.apache.spark.metrics.OutputMetricsUpdater$TaskMetricsSupport$class.$init$(OutputMetricsUpdater.scala:107)
at org.apache.spark.metrics.OutputMetricsUpdater$TaskMetricsUpdater.<init>(OutputMetricsUpdater.scala:153)
at org.apache.spark.metrics.OutputMetricsUpdater$.apply(OutputMetricsUpdater.scala:75)
at com.datastax.spark.connector.writer.TableWriter.writeInternal(TableWriter.scala:209)
at com.datastax.spark.connector.writer.TableWriter.insert(TableWriter.scala:197)
at com.datastax.spark.connector.writer.TableWriter.write(TableWriter.scala:183)
at com.datastax.spark.connector.RDDFunctions$$anonfun$saveToCassandra$1.apply(RDDFunctions.scala:36)
at com.datastax.spark.connector.RDDFunctions$$anonfun$saveToCassandra$1.apply(RDDFunctions.scala:36)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Below is my write code:
DataFrame.write.format("org.apache.spark.sql.cassandra") \
    .mode("append") \
    .options(table="student1", keyspace="university") \
    .save()
I have added the below-mentioned Spark Cassandra connector in spark-defaults.conf:
spark.jars.packages datastax:spark-cassandra-connector:2.4.0-s_2.11
I am able to read data from Cassandra; the issue is only with writes.
I am not an expert in Spark, but this might help:
These errors are commonly thrown when the Spark Cassandra Connector or its dependencies are not on the runtime classpath of the Spark Application. This is usually caused by not using the prescribed --packages method of adding the Spark Cassandra Connector and its dependencies to the runtime classpath.
Source:
https://github.com/datastax/spark-cassandra-connector/blob/master/doc/FAQ.md#why-cant-the-spark-job-find-spark-cassandra-connector-classes-classnotfound-exceptions-for-scc-classes
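As an illustration, here is a minimal PySpark sketch of the write with the connector resolved at launch time. It assumes the same spark-packages coordinate as the question; spark.jars.packages set on the builder only takes effect when the session itself launches the JVM (for example a plain python run), so when using spark-submit pass --packages datastax:spark-cassandra-connector:2.4.0-s_2.11 on the command line instead. The host, rows and schema are placeholders.

from pyspark.sql import SparkSession

# Resolving the connector this way also pulls its transitive dependencies,
# including the jsr166e jar that provides com.twitter.jsr166e.LongAdder.
spark = (SparkSession.builder
         .appName("cassandra-write")
         .config("spark.jars.packages", "datastax:spark-cassandra-connector:2.4.0-s_2.11")
         .config("spark.cassandra.connection.host", "<cassandra-host>")  # placeholder
         .getOrCreate())

df = spark.createDataFrame([(1, "Alice")], ["id", "name"])  # illustrative rows/schema

(df.write
   .format("org.apache.spark.sql.cassandra")
   .mode("append")
   .options(table="student1", keyspace="university")
   .save())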
I have the following versions installed:
Spark 2.1.0
Scala 2.11.6
MongoDB 3.2.17
I started the PySpark shell with the following command:
./bin/pyspark --packages org.mongodb.spark:mongo-spark-connector_2.11:2.2.0
After this I started the Spark session as follows:
from pyspark.sql import SparkSession

my_spark = SparkSession.builder \
    .appName("myApp") \
    .config("spark.mongodb.input.uri", "mongodb://127.0.0.1/mycollection.dummy") \
    .config("spark.mongodb.output.uri", "mongodb://127.0.0.1/mycollection.dummy") \
    .getOrCreate()
Writing to a collection in MongoDB executes successfully,
but when I try to read the collection using the command
df = my_spark.read.format("com.mongodb.spark.sql.DefaultSource").option("uri","mongodb://127.0.0.1/mycollection.dummy").load()
it shows the following error:
17/10/13 10:43:33 ERROR executor.Executor: Exception in task 0.0 in stage 2.0 (TID 2)
java.lang.NoSuchMethodError: org.apache.spark.sql.catalyst.analysis.TypeCoercion$.findTightestCommonType()Lscala/Function2;
at com.mongodb.spark.sql.MongoInferSchema$.com$mongodb$spark$sql$MongoInferSchema$$compatibleType(MongoInferSchema.scala:135)
at com.mongodb.spark.sql.MongoInferSchema$$anonfun$3.apply(MongoInferSchema.scala:78)
at com.mongodb.spark.sql.MongoInferSchema$$anonfun$3.apply(MongoInferSchema.scala:78)
at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1336)
at scala.collection.TraversableOnce$class.aggregate(TraversableOnce.scala:214)
at scala.collection.AbstractIterator.aggregate(Iterator.scala:1336)
at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1$$anonfun$24.apply(RDD.scala:1135)
at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1$$anonfun$24.apply(RDD.scala:1135)
at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1$$anonfun$25.apply(RDD.scala:1136)
at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1$$anonfun$25.apply(RDD.scala:1136)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:796)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:796)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:99)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
You seem to have an inconsistency in the DataFrame reading line. First of all, you initialize my_spark but then use spark. You also use "uri" instead of "spark.mongodb.input.uri". Try this:
df = my_spark.read.format("com.mongodb.spark.sql.DefaultSource").option("spark.mongodb.input.uri","mongodb://127.0.0.1/mycollection.dummy").load()
Otherwise, provide more code so it can be checked as a whole.
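As a related sketch (my own addition, not part of the original answer): since both URIs are already configured on the SparkSession, the read can also rely on that configuration alone.

# Relies on spark.mongodb.input.uri already set on my_spark,
# so no per-read "uri" option is needed.
df = my_spark.read.format("com.mongodb.spark.sql.DefaultSource").load()
df.printSchema()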
PROBLEM
Spark application fails with "Mkdirs failed to create".
I'm using Spark 1.6.3 and am unable to save output to my local directory:
java.io.IOException: Mkdirs failed to create file:/home/zooms/output/sample1/sample1.txt/_temporary/0/_temporary/attempt_201709251225_0005_m_000000_10
(exists=false, cwd=file:/grid/1/hadoop/yarn/local/usercache/zooms/appcache/application_1504506749061_0086/container_e01_1504506749061_0086_01_000003)
Updated logs
17/09/25 13:39:02 WARN TaskSetManager: Lost task 0.0 in stage 5.0 (TID 10, worker3.hdp.example.com): java.io.IOException: Mkdirs failed to create file:/home/zooms/output/sample1/sample1.txt/_temporary/0/_temporary/attempt_201709251339_0005_m_000000_10 (exists=false, cwd=file:/grid/1/hadoop/yarn/local/usercache/zooms/appcache/application_1504506749061_0099/container_e01_1504506749061_0099_01_000003)
at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:456)
at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:442)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:930)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:823)
at org.apache.hadoop.mapred.TextOutputFormat.getRecordWriter(TextOutputFormat.java:123)
at org.apache.spark.SparkHadoopWriter.open(SparkHadoopWriter.scala:91)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1191)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1183)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Code:
val output = "file:///home/zooms/output/sample1/sample1.txt"
result.coalesce(1).saveAsTextFile(output)
SOLUTION
Make sure that the whole cluster has access to the local or specified directory.
In my case, the cluster (the Spark executors) did not have access to that directory.
Here's the answer to my question.
Since I'm running in cluster mode or client mode, the workers won't be able to create the directory on each node unless it is defined there. Use
spark-submit -v --master local ...
References:
Writing files to local system with Spark in Cluster mode
Why does Spark job fails to write output?
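For completeness, here is a minimal sketch of the two usual workarounds, shown in PySpark for brevity; the RDD contents and the paths are illustrative, not from the original post.

import os
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("mkdirs-workaround").getOrCreate()
result = spark.sparkContext.parallelize(["line 1", "line 2"])  # stand-in for the real RDD

# Option 1: write to a filesystem every executor can reach (e.g. HDFS),
# so no executor needs to create a directory under /home/zooms locally.
result.coalesce(1).saveAsTextFile("hdfs:///user/zooms/output/sample1")

# Option 2: if the result is small, collect it to the driver and write it with
# plain local IO, so only the driver's filesystem is involved.
os.makedirs("/home/zooms/output/sample1", exist_ok=True)
with open("/home/zooms/output/sample1/sample1.txt", "w") as f:
    for line in result.collect():
        f.write(line + "\n")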
I can run a Spark job successfully in the spark-shell, but when it is packaged and run through spark-submit I get a NoSuchMethodError.
This indicates to me some sort of classpath mismatch. Is there a way I can compare the two classpaths? Some sort of logging statement?
Thanks!
15/05/28 12:46:46 ERROR Executor: Exception in task 1.0 in stage 0.0 (TID 1)
java.lang.NoSuchMethodError: scala.Predef$.ArrowAssoc(Ljava/lang/Object;)Ljava/lang/Object;
at com.ldamodel.LdaModel$$anonfun$5$$anonfun$apply$5.apply(LdaModel.scala:22)
at com.ldamodel.LdaModel$$anonfun$5$$anonfun$apply$5.apply(LdaModel.scala:22)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:34)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
at scala.collection.AbstractTraversable.map(Traversable.scala:105)
at com.ldamodel.LdaModel$$anonfun$5.apply(LdaModel.scala:22)
at com.ldamodel.LdaModel$$anonfun$5.apply(LdaModel.scala:22)
at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:202)
at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:56)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
at org.apache.spark.scheduler.Task.run(Task.scala:64)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:203)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
I think this should work:
import java.lang.ClassLoader
val cl = ClassLoader.getSystemClassLoader
cl.asInstanceOf[java.net.URLClassLoader].getURLs.foreach(println)
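If the job is driven from PySpark, a rough equivalent is to read the driver JVM's java.class.path through the Py4J gateway (a sketch; this shows the driver side only, not the executors):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Driver JVM classpath as seen through the Py4J gateway; compare this output
# between the spark-shell/pyspark run and the spark-submit run.
jvm = spark.sparkContext._jvm
print(jvm.java.lang.System.getProperty("java.class.path"))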
Without modifying the code:
SPARK_PRINT_LAUNCH_COMMAND=true /usr/lib/spark/bin/spark-shell
Also works with spark-submit.
This should do the trick without requiring any code changes:
--conf 'spark.driver.extraJavaOptions=-verbose:class'
--conf 'spark.executor.extraJavaOptions=-verbose:class'
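With these options set, each JVM logs every class it loads and the jar it was loaded from, so comparing the driver and executor logs between the spark-shell run and the spark-submit run shows which jar supplies the conflicting class.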
/opt/spark/bin/compute-classpath.sh