Unusually long time writing parquet files to Google Cloud Storage - pyspark

I am using a PySpark dataframe on a Dataproc cluster to generate features and write the output as parquet files to Google Cloud Storage. There are two problems I am facing:
I have configured 22 executors, 3 cores per executor, and ~13 GB RAM per executor. However, only 10 executors are launched when I submit the job. The Dataproc cluster has 10 worker nodes, each with 8 cores and 30 GB of RAM.
When I write the feature files individually and record the total time, it is significantly lower than the time taken to write all the features together in a single file. I have tried changing the number of partitions, but that doesn't help either.
This is how I write the parquet file:
df.select(feature_lst).write.parquet(gcs_path + outfile, mode='overwrite')
data size - 20M+ records, 30+ numerical features
Spark UI screenshot: the current stage is the one that writes all features together, and it takes significantly longer than all previous stages combined.
I would be grateful for any insight into the above two issues.

Related

Does EMR cluster size matter when reading data from S3 using Spark?

Setup: latest (5.29) AWS EMR, Spark, 1 master and 1 core node.
Step 1: I used S3 Select to parse a file and collect all file keys for pulling from S3.
Step 2: Using PySpark, iterate over the keys in a loop and do the following:
(spark
    .read
    .format("s3selectCSV")
    .load(key)
    .limit(superhighvalue)
    .show(superhighvalue))
It took me x minutes.
When I increase the cluster to 1 master and 6 nodes, I don't see any difference in time. It appears to me that I am not using the additional core nodes.
Everything else, config-wise, is the out-of-the-box default; I am not setting anything.
So, my question is: does cluster size matter for reading and inspecting (say, logging or printing) data from S3 using EMR and Spark?
A few things to keep in mind:
Are you sure that the number of executors has actually increased with the additional nodes? You can also specify them during spark-submit with --num-executors 6. More nodes doesn't necessarily mean more executors are spun up.
Next, what is the size of the CSV file? Around 1 MB? Then you won't see much difference. Make sure it is at least 3-4 GB.
Yes, size does matter. For my use case, sc.parallelize(s3fileKeysList) turned out to be the key; parallelizing the keys is what made the difference.
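For context, here is a minimal PySpark sketch of that pattern. The bucket name, key list, and the use of boto3 on the executors are my assumptions, not from the original answer:

import boto3
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
sc = spark.sparkContext

bucket = "my-bucket"                                          # hypothetical bucket name
s3_file_keys = ["data/part-0001.csv", "data/part-0002.csv"]   # hypothetical keys from step 1

def fetch_lines(keys):
    # Runs on the executors: one boto3 client per partition, one GET per key.
    s3 = boto3.client("s3")
    for key in keys:
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
        for line in body.splitlines():
            yield line

# Distribute the keys so each executor pulls and inspects its own share in parallel.
lines = sc.parallelize(s3_file_keys, len(s3_file_keys)).mapPartitions(fetch_lines)
print(lines.count())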

How to optimize Spark for writing large amounts of data to S3

I do a fair amount of ETL using Apache Spark on EMR.
I'm fairly comfortable with most of the tuning necessary to get good performance, but I have one job that I can't seem to figure out.
Basically, I'm taking about 1 TB of parquet data - spread across tens of thousands of files in S3 - and adding a few columns and writing it out partitioned by one of the date attributes of the data - again, parquet formatted in S3.
I run it like this:
spark-submit --conf spark.dynamicAllocation.enabled=true --num-executors 1149 --conf spark.driver.memoryOverhead=5120 --conf spark.executor.memoryOverhead=5120 --conf spark.driver.maxResultSize=2g --conf spark.sql.shuffle.partitions=1600 --conf spark.default.parallelism=1600 --executor-memory 19G --driver-memory 19G --executor-cores 3 --driver-cores 3 --class com.my.class path.to.jar <program args>
The size of the cluster is dynamically determined based on the size of the input data set, and the num-executors, spark.sql.shuffle.partitions, and spark.default.parallelism arguments are calculated based on the size of the cluster.
The code roughly does this:
val df = (read from S3 and add a few columns like timestamp and source file name)
val dfPartitioned = df.coalesce(numPartitions)
val sqlDFProdDedup = spark.sql(s"""(query to dedup against prod data)""")
sqlDFProdDedup.repartition($"partition_column")
.write.partitionBy("partition_column")
.mode(SaveMode.Append).parquet(outputPath)
When I look at the ganglia chart, I get a huge resource spike while the de-dup logic runs and some data shuffles, but then the actual writing of the data only uses a tiny fraction of the resources and runs for several hours.
I don't think the primary issue is partition skew, because the data should be fairly distributed across all the partitions.
The partition column is essentially a day of the month, so each job typically only has 5-20 partitions, depending on the span of the input data set. Each partition typically has about 100 GB of data across 10-20 parquet files.
I'm setting spark.sql.files.maxRecordsPerFile to manage the size of those output files.
So, my big question is: how can I improve the performance here?
Simply adding resources doesn't seem to help much.
I've tried making the executors larger (to reduce shuffling) and also increasing the number of CPUs per executor, but that doesn't seem to matter.
Thanks in advance!
Zack, I have a similar use case with 'n' times more files to process on a daily basis. I am going to assume that you are using the code above as-is and trying to improve the performance of the overall job. Here are a couple of my observations:
Not sure what the coalesce(numPartitions) number actually is and why it's being used before the de-duplication process. Your spark-submit shows you are creating 1600 partitions, and that's good enough to start with.
If you are going to repartition before the write, then the coalesce above may not be beneficial at all, as repartition will shuffle the data.
Since you say you are writing 10-20 parquet files, it means you are only using 10-20 cores in the final write stage of your job, which is the main reason it's slow. Based on the 100 GB estimate, each parquet file is roughly 5-10 GB, which is really huge; I doubt anyone could open files that size on a local laptop or an EC2 machine unless they use EMR or similar (with huge executor memory, whether reading the whole file or spilling to disk), because the memory requirement is too high. I recommend creating parquet files of around 1 GB to avoid those issues.
Also, if you create 1 GB parquet files, you will likely speed up the process 5 to 10 times, as you will be using more executors/cores to write them in parallel. You can run an experiment by simply writing the dataframe with its default partitions.
Which brings me to the point that you really don't need the repartition before the write.partitionBy("partition_date") call. Your repartition() call is forcing the dataframe to have at most 30-31 partitions, depending on the number of days in that month, and that is what drives the number of files being written. write.partitionBy("partition_date") writes the data into S3 partitions regardless, and if your dataframe has, say, 90 partitions it will write roughly 3 times faster (3 * 30). df.repartition() is forcing it to slow down. Do you really need 5 GB or larger files?
Another major point is that Spark's lazy evaluation is sometimes too smart. In your case, it will most likely size the number of executors used for the whole program based on repartition(number). Instead, try df.cache() -> df.count() and then df.write(). This forces Spark to use all available executor cores. I am assuming you are reading files in parallel; in your current implementation you are likely using only 20-30 cores. One point of caution: since you are using r4/r5 machines, feel free to increase your executor memory to 48 GB with 8 cores. I have found 8 cores to be faster for my tasks than the standard 5-core recommendation.
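A rough PySpark sketch of that cache-then-count idea (the dedup step, key column, and output path below are placeholders; the original job is Scala):

# Hypothetical stand-in for the dedup step (the real job uses a SQL query against prod data).
deduped = df.dropDuplicates(["id"])          # "id" is an assumed key column
deduped.cache()
deduped.count()                              # action: materializes the deduplicated data in parallel

(deduped
    .write
    .partitionBy("partition_column")
    .mode("append")
    .parquet("s3a://my-bucket/output/"))     # hypothetical output path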
Another pointer is to try ParallelGC instead of G1GC. For a use case like this, where you are reading thousands of files, I have noticed it performs better than, or at least no worse than, G1GC. Please give it a try.
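If you want to try that, one way (a sketch, not tested here) is to append GC flags to the spark-submit command from the question:

--conf spark.executor.extraJavaOptions=-XX:+UseParallelGC \
--conf spark.driver.extraJavaOptions=-XX:+UseParallelGC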
In my workload, I use a coalesce(n)-based approach, where 'n' gives me a 1 GB parquet file. I read files in parallel using ALL the cores available on the cluster. Only during the write part are my cores idle, but there's not much you can do to avoid that.
I am not sure how spark.sql.files.maxRecordsPerFile works in conjunction with coalesce() or repartition(), but I have found that 1 GB works acceptably with pandas, Redshift Spectrum, Athena, etc.
Hope it helps.
Charu
Here are some optimizations for faster running.
(1) File committer - this controls how Spark writes the part files out to the S3 bucket. Each operation is distinct and is governed by:
spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version 2
Description: this writes the files directly to their final part files instead of initially writing them to temporary files and then copying them over to their end-state part files.
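For reference, this is typically passed at submit time, since it is a Hadoop-level setting that must be in place before the job starts (shown here only as a sketch appended to the spark-submit command):

--conf spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version=2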
(2) For file size, you can derive it from the average number of bytes per record. Below I work out the number of bytes per record in order to compute how many records fit in 1024 MB. I would try 1024 MB per partition first, then move upwards.
import org.apache.spark.util.SizeEstimator
val numberBytes: Long = SizeEstimator.estimate(inputDF.rdd)
val reduceBytesTo1024MB = numberBytes / 1073741824L  // 1024 MB expressed in bytes
val numberRecords = inputDF.count
val recordsFor1024MB = (numberRecords / reduceBytesTo1024MB).toInt + 1
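A rough PySpark analogue of the same idea, tied to the spark.sql.files.maxRecordsPerFile setting mentioned in the question (the bytes-per-record figure here is a made-up placeholder, not a measurement):

# Once an approximate bytes-per-record figure is known (e.g. from an estimate like the one above),
# cap each output file at roughly 1024 MB worth of records.
approx_bytes_per_record = 512                      # placeholder value, measure this for real data
target_file_bytes = 1024 * 1024 * 1024             # ~1024 MB per output file
records_per_file = target_file_bytes // approx_bytes_per_record
spark.conf.set("spark.sql.files.maxRecordsPerFile", records_per_file)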
(3) [I haven't tried this] EMR committer - if you are using EMR 5.19 or higher and you are outputting Parquet, you can set the Parquet-optimized committer to true:
spark.sql.parquet.fs.optimized.committer.optimization-enabled true

Spark repartition logic for a Databricks scalable cluster

A Databricks Spark cluster can auto-scale according to load.
I am reading gzip files in Spark and repartitioning the RDD to get parallelism, since a gzip file is read on a single core and produces an RDD with one partition.
As per this post, the ideal number of partitions is the number of cores in the cluster, which I can set when repartitioning, but in the case of an auto-scaling cluster this number varies with the state of the cluster and how many executors it currently has.
So, what should the partitioning logic be for an auto-scaling Spark cluster?
EDIT 1:
The folder is ever growing: gzip files keep arriving in it periodically; each gzip file is ~10 GB and ~150 GB uncompressed. I know that multiple files can be read in parallel. But for a single very large file, Databricks may try to auto-scale the cluster, yet even though the number of cores in the cluster has increased after scaling, my dataframe will still have fewer partitions (based on the previous state of the cluster, when it had fewer cores).
Even though my cluster will auto-scale (scale out), the processing will be limited to the number of partitions, which I set with:
num_partitions = <cluster cores before scaling>
df.repartition(num_partitions)
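One possible approach (my sketch, not from the original post) is to read the current parallelism from the scheduler at run time instead of hard-coding the pre-scaling core count:

# defaultParallelism reflects the total cores currently registered with the scheduler,
# so querying it just before the repartition tracks the scaled-out cluster.
num_partitions = spark.sparkContext.defaultParallelism * 2   # small multiple as a cushion
df = df.repartition(num_partitions)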
A standard gzip file is not splittable, so Spark will handle the gzip file with just a single core, a single task, no matter what your settings are [As of Spark 2.4.5/3.0]. Hopefully the world will move to bzip2 or other splittable compression techniques when creating large files.
If you directly write the data out to Parquet, you will end up with a single, splittable parquet file. This will be written out by a single core.
If you are stuck with the default gzip codec, it is better to repartition after the read and write out multiple parquet files.
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, IntegerType
schema = StructType([
StructField("a",IntegerType(),True),
StructField("b",DoubleType(),True),
StructField("c",DoubleType(),True)])
input_path = "s3a://mybucket/2G_large_csv_gzipped/onebillionrows.csv.gz"
spark.conf.set('spark.sql.files.maxPartitionBytes', 1000 * (1024 ** 2))
df_two = spark.read.format("csv").schema(schema).load(input_path)
df_two.repartition(32).write.format("parquet").mode("overwrite").save("dbfs:/tmp/spark_gunzip_default_remove_me")
I very recently found a splittable gzip codec, and initial tests are very promising. This codec actually reads the file multiple times: each task scans ahead by some number of bytes (without decompressing) and then starts the decompression.
The benefits of this pay off when it comes time to write the dataframe out as a parquet file. You will end up with multiple files, all written in parallel, for greater throughput and shorter wall clock time (your CPU hours will be higher).
Reference: https://github.com/nielsbasjes/splittablegzip/blob/master/README-Spark.md
My test case:
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, IntegerType
schema = StructType([
StructField("a",IntegerType(),True),
StructField("b",DoubleType(),True),
StructField("c",DoubleType(),True)])
input_path = "s3a://mybucket/2G_large_csv_gzipped/onebillionrows.csv.gz"
spark.conf.set('spark.sql.files.maxPartitionBytes', 1000 * (1024 ** 2))
df_gz_codec = (spark.read
.option('io.compression.codecs', 'nl.basjes.hadoop.io.compress.SplittableGzipCodec')
.schema(schema)
.csv(input_path)
)
df_gz_codec.write.format("parquet").save("dbfs:/tmp/gunzip_to_parquet_remove_me")
For a splittable file/dataset, the partitions are mostly created automatically, depending on the number of cores, whether the operation is narrow or wide, file size, etc. Partitions can also be controlled programmatically using coalesce and repartition. But for a gzip/unsplittable file there will be just one task per file, and parallelism is limited to one task per file across however many cores are available (like you said).
For a dynamic cluster, one option is to point your job at a folder/bucket containing a large number of gzip files. Say you have 1000 files to process and 10 cores: then 10 will run in parallel. When your cluster dynamically grows to 20 cores, 20 will run in parallel. This happens automatically and you needn't code for it. The only catch is that you cannot make use of more cores than there are files; with fewer files than cores, the extra cores sit idle. This is a known limitation of unsplittable files.
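A minimal PySpark sketch of that folder-based approach (the path is a placeholder; the schema is assumed to be defined as in the examples above):

# Each .csv.gz object in the folder becomes its own task, so throughput grows with
# the number of cores the auto-scaler provides, up to one core per file.
df = (spark.read
    .schema(schema)                               # schema defined as in the earlier examples
    .csv("s3a://mybucket/gz_folder/"))            # hypothetical folder of many gzip files
df.write.mode("overwrite").parquet("dbfs:/tmp/gz_folder_as_parquet")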
The other option is to define the cluster size for the job based on the number and size of the files available. You can find an empirical formula based on historical run times. Say you have 5 large files and 10 small files (half the size of a large one); then you might assign 20 cores (10 + 2*5) to use the cluster resources efficiently.
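The same heuristic written out, with the purely illustrative numbers from the example:

n_large, n_small = 5, 10                # 5 large files, 10 small files (half the size of a large one)
cores_needed = n_small + 2 * n_large    # weight a large file as two small ones -> 20 cores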

How Spark loads data into memory

I am totally confused about the Spark execution process. I have referred to many articles and tutorials, but nobody discusses it in detail. I might be misunderstanding Spark, so please correct me.
I have a 40 GB file distributed across 4 nodes (10 GB per node) of a 10-node cluster.
When I call spark.read.textFile("test.txt") in my code, will it load the data (40 GB) from all 4 nodes into the driver program (master node)?
Or will this RDD be loaded on all 4 nodes separately? In that case, will each node's part of the RDD hold 10 GB of physical data?
And does the whole RDD hold the 10 GB of data per node and perform tasks for each partition, i.e. 128 MB in Spark 2.0, and finally shuffle the output to the driver program (master node)?
And I read somewhere that "number of cores in the cluster = number of partitions"; does that mean Spark will move the partitions of one node to all 10 nodes for processing?
Spark doesn't have to read the whole file into memory at once. That 40 GB file is split into many 128 MB (or whatever your partition size is) partitions. Each of those partitions is a processing task. Each core works on only one task at a time, with a preference for tasks whose data partition is stored on the same node. Only the 128 MB partition being worked on needs to be read; the rest of the file is not read. Once the task completes (and produces some output), the 128 MB for the next task can be read in, and the data read for the first task can be freed from memory. Because of this, only the small amount of data being processed at any given time needs to be loaded into memory, not the entire file at once.
Also, strictly speaking, spark.read.textFile("test.txt") does nothing by itself. It reads no data and does no processing. It creates an RDD, but an RDD doesn't contain any data; an RDD is just an execution plan. spark.read.textFile("test.txt") declares that the file test.txt will be read and used as a source of data if and when the RDD is evaluated, but on its own it does nothing.
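A small PySpark illustration of this laziness (using the DataFrame text reader; the path is hypothetical):

# No Spark job runs here: this only records "read test.txt" in the plan.
lines = spark.read.text("test.txt")

# Still nothing is read; this is just another lazy transformation.
upper = lines.selectExpr("upper(value) AS value")

# Only an action triggers evaluation, and partitions are then processed task by task.
print(upper.count())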

Spark: sc.WholeTextFiles takes a long time to execute

I have a cluster on which I execute wholeTextFiles, which should pull about a million text files summing up to approximately 10 GB in total.
I have one NameNode and two DataNodes with 30 GB of RAM and 4 cores each. The data is stored in HDFS.
I don't set any special parameters, and the job takes 5 hours just to read the data. Is that expected? Are there any parameters that should speed up the read (Spark configuration, partitioning, number of executors)?
I'm just starting out and have never had to optimize a job before.
EDIT: Additionally, can someone explain exactly how the wholeTextFiles function works? (Not how to use it, but how it was programmed.) I'm very interested in understanding the partition parameter, etc.
EDIT 2: benchmark assessment
So I tried repartitioning after the wholeTextFiles call, but the problem is the same, because the initial read still uses the pre-defined number of partitions, so there is no performance improvement. Once the data is loaded the cluster performs really well. I get the following warning when dealing with the data (for 200k files) on the wholeTextFiles call:
15/01/19 03:52:48 WARN scheduler.TaskSetManager: Stage 0 contains a task of very large size (15795 KB). The maximum recommended task size is 100 KB.
Would that be a reason for the bad performance? How do I work around it?
Additionally, when doing a saveAsTextFile, my speed according to the Ambari console is 19 MB/s. When doing a read with wholeTextFiles, I am at 300 KB/s.
It seems that by increasing the number of partitions in wholeTextFiles(path, partitions), I get better performance. But still only 8 tasks run at the same time (my number of CPUs). I'm benchmarking to find the limit...
To summarize my recommendations from the comments:
HDFS is not a good fit for storing many small files. First of all, the NameNode stores metadata in memory, so the number of files and blocks you can have is limited (~100M blocks is the maximum for a typical server). Next, each time you read a file you first query the NameNode for block locations and then connect to the DataNode storing the file. The overhead of these connections and responses is really significant.
Default settings should always be reviewed. By default, Spark starts on YARN with 2 executors (--num-executors) with 1 thread each (--executor-cores) and 512 MB of RAM (--executor-memory), giving you only 2 threads with 512 MB RAM each, which is really small for real-world tasks.
So my recommendations are:
Start Spark with --num-executors 4 --executor-memory 12g --executor-cores 4, which gives you more parallelism: 16 threads in this particular case, which means 16 tasks running in parallel.
Use sc.wholeTextFiles to read the files and then dump them into a compressed sequence file (for instance, with Snappy block-level compression); here's an example of how this can be done: http://0x0fff.com/spark-hdfs-integration/. This will greatly reduce the time needed to read them on the next iteration.
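A minimal PySpark sketch of that second recommendation (paths are hypothetical; enabling true block-level compression also needs the corresponding Hadoop output settings, which are omitted here):

# Read the small files as (path, content) pairs, then pack them into one
# compressed sequence file so later iterations avoid a million NameNode lookups.
pairs = sc.wholeTextFiles("hdfs:///data/small_text_files/", minPartitions=16)
pairs.saveAsSequenceFile(
    "hdfs:///data/packed_sequence_file",
    compressionCodecClass="org.apache.hadoop.io.compress.SnappyCodec")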