Spark repartition() function increases the number of tasks per executor; how to increase the number of executors - Scala

I'm working on an IBM server with 30 GB RAM (12 cores). I have given all the cores to Spark, but it still uses only 1 core. I tried this while loading the file and succeeded with the command
val name_db_rdd = sc.textFile("input_file.csv",12)
and was able to use all 12 cores for the initial jobs, but I also want to split the intermediate operations across executors so that they can use all 12 cores.
(Image: Spark UI showing a single executor running all the repartitioned tasks)
val new_rdd = rdd.repartition(12)
As you can see in the image, only 1 executor is running, and repartition() just splits the data into many tasks on that single executor.

It depends on how you're launching the job, but you probably want to add --num-executors to your command line when you submit your Spark job.
Something like
spark-submit \
--num-executors 10 \
--driver-memory 2g \
--executor-memory 2g \
--executor-cores 1 \
might work well for you.
Have a look at Running Spark on YARN for more details, though some of the switches mentioned there are YARN-specific.
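If you want to check at runtime what you actually got, here is a small sketch, assuming an existing SparkContext sc and the RDD from the question (nothing below is from the original post):
val executorCount = sc.getExecutorMemoryStatus.size   // one entry per registered executor (the driver also appears in this map)
println(s"registered executors: $executorCount")
val new_rdd = rdd.repartition(12)
println(s"partitions after repartition: ${new_rdd.getNumPartitions}")
If this still reports a single executor, the problem is resource allocation at submit time rather than partitioning.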

Related

Spark (v2) does not generate output if the size is more than 2 GB

My Spark application writes outputs ranging from a few KB to several GB. I have been facing a problem generating output in certain cases where the file size exceeds 2 GB: nothing seems to happen and I see hardly any CPU usage. However, when the output size is less than 2 GB, such as 1.3 GB, the same application works flawlessly. Also, please note that writing the output is the last stage, and all the computations on the data to be written complete correctly (as can be seen from the debug output), so the driver holding the data is not the issue. The executor memory size is not an issue either; I increased it to as much as 90 GB, while 30 GB also seems adequate. The following is the code I am using to write the output. Please suggest any way to fix it.
var output = scala.collection.mutable.ListBuffer[String]()
...
output.toDF().coalesce(1).toDF().write.mode("overwrite")
.option("parserLib","univocity").option("ignoreLeadingWhiteSpace","false")
.option("ignoreTrailingWhiteSpace","false").format("csv").save(outputPath)
Other related parameters passed by spark-submit are as follows:
--driver-memory 150g \
--executor-cores 4 \
--executor-memory 30g \
--conf spark.cores.max=252 \
--conf spark.local.dir=/tmp \
--conf spark.rpc.message.maxSize=2047 \
--conf spark.driver.maxResultSize=50g \
The issue was observed on two different systems, one running standalone and the other a Spark cluster.
Based on Gabio's idea of repartitioning, I solved the problem as follows:
val tDF = output.toDF()
println("|#tDF partitions = " + tDF.rdd.partitions.size.toString)
tDF.write.mode("overwrite")
.option("parserLib","univocity").option("ignoreLeadingWhiteSpace","false")
.option("ignoreTrailingWhiteSpace","false").format("csv").save(outputPath)
The output ranged between 2.3 GB and 14 GB, so the source of the problem lies elsewhere and is perhaps not spark.driver.maxResultSize.
A big thank you to @Gabio!
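For completeness, the difference from the original snippet is that coalesce(1) was dropped; a minimal variant that repartitions explicitly instead (the partition count is only an illustrative value, not from the question):
output.toDF()
  .repartition(16)                              // example value; pick it based on output size
  .write.mode("overwrite")
  .option("ignoreLeadingWhiteSpace", "false")
  .option("ignoreTrailingWhiteSpace", "false")
  .format("csv")
  .save(outputPath)
Writing with more than one partition lets several executor tasks write in parallel instead of funnelling everything through a single task.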

How can I submit multiple jobs in a Spark Standalone cluster?

I have a machine with Apache Spark. The machine has 64 GB RAM and 16 cores.
My objective in each Spark job:
1. Download a gz file from a remote server
2. Extract the gz to get a csv file (1 GB max)
3. Process the csv file in Spark and save some stats
Currently I am submitting one job for each file received, as follows:
./spark-submit --class ClassName --executor-cores 14 --num-executors 3 --driver-memory 4g --executor-memory 4g jar_path
Then I wait for this job to complete before starting a new job for the next file.
Now I want to utilise the 64 GB of RAM by running multiple jobs in parallel.
I can assign 4 GB of RAM to each job, and I want to queue jobs when enough jobs are already running.
How Can I achieve this?
You should submit multiple jobs from different threads:
https://spark.apache.org/docs/latest/job-scheduling.html#scheduling-within-an-application
and configure pool properties (set schedulingMode to FAIR):
https://spark.apache.org/docs/latest/job-scheduling.html#configuring-pool-properties
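As a minimal sketch of that approach, assuming a single application that processes several files concurrently from separate threads (the file paths and pool name are illustrative, not from the question):
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration.Duration
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("parallel-file-jobs")
  .config("spark.scheduler.mode", "FAIR")      // let concurrent jobs share the executors fairly
  .getOrCreate()

val files = Seq("file1.csv", "file2.csv")      // illustrative paths
val work = files.map { path =>
  Future {
    spark.sparkContext.setLocalProperty("spark.scheduler.pool", "csvPool")  // optional named pool
    val df = spark.read.option("header", "true").csv(path)
    df.count()                                 // stand-in for the real per-file processing
  }
}
Await.result(Future.sequence(work), Duration.Inf)
Each Future triggers its own Spark jobs inside the same application, so the FAIR scheduler can interleave them instead of running them strictly one after another.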
From Spark Doc:
https://spark.apache.org/docs/latest/spark-standalone.html#resource-scheduling:
The standalone cluster mode currently only supports a simple FIFO
scheduler across applications. However, to allow multiple concurrent
users, you can control the maximum number of resources each
application will use. By default, it will acquire all cores in the
cluster, which only makes sense if you just run one application at a
time. You can cap the number of cores by setting spark.cores.max ...
By default, Spark allocates all of the resources to a single job. We need to limit the resources so that there is room to run other jobs as well. Below is a command you can use to submit a Spark job.
bin/spark-submit --class classname --master spark://hjvm1:6066 --deploy-mode cluster --driver-memory 500M --conf spark.executor.memory=1g --conf spark.cores.max=1 /data/test.jar
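The same cap can also be set programmatically when the session is built; a sketch with example values, not taken from the question:
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("per-file-job")
  .config("spark.cores.max", "4")              // leave cores free for other concurrently submitted applications
  .config("spark.executor.memory", "4g")
  .getOrCreate()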

Container killed by YARN for exceeding memory limits. 14.8 GB of 6 GB physical memory used

I have a Spark job where I do the following:
1. Load the data from Parquet via Spark SQL and convert it to a pandas DataFrame. The data size is only 250 MB.
2. Run an rdd.foreach to iterate over a relatively small dataset (1000 rows), take the pandas DataFrame from step 1, and apply some transformation.
I get a "Container killed by YARN for exceeding memory limits" error after some iterations:
Container killed by YARN for exceeding memory limits. 14.8 GB of 6 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead
I am unable to understand why the error says "14.8 GB of 6 GB physical memory used".
I have tried increasing spark.yarn.executor.memoryOverhead and have used the following command:
spark-submit --master yarn --deploy-mode cluster --num-executors 4 --executor-cores 2 --executor-memory 2G --conf spark.yarn.executor.memoryOverhead=4096 --py-files test.zip app_main.py
I am using spark 2.3
yarn.scheduler.minimum-allocation-mb = 512 MB
yarn.nodemanager.resource.memory-mb = 126 GB
This is one of the common errors when the memoryOverhead option is used; it is better to use other options to tune the job.
This post, http://ashkrit.blogspot.com/2018/09/anatomy-of-apache-spark-job.html, talks about this issue and how to deal with it.
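For what it's worth, the 6 GB in the error message is simply the YARN container size requested for each executor under the settings above; a back-of-the-envelope sketch of that arithmetic (not a quote from Spark's source):
val executorMemoryMb = 2048                    // --executor-memory 2G
val memoryOverheadMb = 4096                    // spark.yarn.executor.memoryOverhead=4096
val containerLimitMb = executorMemoryMb + memoryOverheadMb   // = 6144 MB, i.e. the 6 GB that YARN enforces
The 14.8 GB actually consumed is therefore off-heap usage growing far beyond the container, most likely the pandas data pulled into each task rather than JVM heap.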

Executor is taking more memory than defined

spark-submit --num-executors 10 --executor-memory 5g --master yarn --executor-cores 3 --class com.octro.hbase.hbase_final /home/hadoop/testDir/nikunj/Hbase_data_maker/target/Hbase_data_maker-0.0.1-SNAPSHOT-jar-with-dependencies.jar main_user_profile
This is my command to execute my spark code on the cluster.
With this command, my YARN page shows the total memory allocated as 71 GB.
I tried searching the internet for the various reasons but didn't find any clear explanation.
Later I figured out that it is using the formula
Number of executors × (executor memory + 2 GB) + 1 GB
where the extra 1 GB is for the main container. But why that 2 GB by default?
It was because of the 2 GB memory overhead specified in Spark's configuration file.
That's why it was taking 2 GB more per executor.
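Plugging the numbers from the command above into that formula (arithmetic only; all values come from the question):
val executors = 10
val executorMemoryGb = 5
val overheadGb = 2                             // the 2 GB overhead set in Spark's configuration file
val driverContainerGb = 1
val totalGb = executors * (executorMemoryGb + overheadGb) + driverContainerGb   // = 71, matching the YARN page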

Spark-Submit execution time

I have developed a Scala program on Spark which connects to a MySQL database, pulls about 250K records, and processes them. When I execute the application from the IDE itself (IntelliJ), it takes about 1 minute to complete the job, whereas if I submit it through spark-submit from my terminal, it takes 4 minutes.
Scala Code
val sparkSession = SparkSession.builder()
  .appName("credithistory")
  .master("local[*]")
  .getOrCreate()
From Terminal
spark-submit --master local[*] .....
Do I have to make any changes, or is this normal behaviour? I have local[*] in the code, and I am also supplying it from the terminal.
local[K] Run Spark locally with K worker threads (ideally, set this to the number of cores on your machine).
It's from the Spark documentation (link).
You can adjust the number K,
for example "local[4]" or "local[8]", according to your CPU.
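As a small sketch, the same idea applied to the builder from the question (8 is just an example value):
val sparkSession = SparkSession.builder()
  .appName("credithistory")
  .master("local[8]")                          // pick K to match the number of cores you want to use
  .getOrCreate()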