In spark-env.sh, it's possible to configure the following environment variables:
# - SPARK_WORKER_MEMORY, to set how much memory to use (e.g. 1000m, 2g)
export SPARK_WORKER_MEMORY=22g
[...]
# - SPARK_MEM, to change the amount of memory used per node (this should
# be in the same format as the JVM's -Xmx option, e.g. 300m or 1g)
export SPARK_MEM=3g
If I start a standalone cluster with this:
$SPARK_HOME/bin/start-all.sh
I can see on the Spark master web UI that all the workers show only 3GB of their memory in use:
-- Workers Memory Column --
22.0 GB (3.0 GB Used)
22.0 GB (3.0 GB Used)
22.0 GB (3.0 GB Used)
[...]
However, I specified 22g as SPARK_WORKER_MEMORY in spark-env.sh
I'm somewhat confused by this. Probably I don't understand the difference between "node" and "worker".
Can someone explain the difference between the two memory settings and what I might have done wrong?
I'm using spark-0.7.0. See also here for more configuration info.
A standalone cluster can host multiple Spark "clusters" (each "cluster" is tied to a particular SparkContext), i.e. you can have one cluster running k-means, one cluster running Shark, and another one running some interactive data mining.
In this case, the 22GB is the amount of memory each worker offers to the standalone cluster (SPARK_WORKER_MEMORY), and your particular SparkContext instance is using 3GB per node (SPARK_MEM). So you could create 6 more SparkContexts at 3GB each, bringing the per-worker total to 21GB.
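For reference, a minimal spark-env.sh sketch for this setup (the 22g value comes from the question; the 20g for SPARK_MEM is only an illustrative choice for letting a single application claim most of each worker's memory):
# spark-env.sh (Spark 0.7.x, standalone mode)
# memory each worker daemon offers to the standalone cluster as a whole:
export SPARK_WORKER_MEMORY=22g
# memory a single application (one SparkContext) uses per node;
# raise this (illustrative value) if one job should get most of the worker's 22g:
export SPARK_MEM=20g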
Related
10-node cluster, each machine has 16 cores and 126.04 GB of RAM
The application's input dataset is around 1TB across 10-15 files, and there is some aggregation (groupBy)
The job will run with YARN as the resource scheduler
My question: how do I pick num-executors, executor-memory, executor-cores, driver-memory, and driver-cores?
I tend to use this tool for profiling my Spark jobs: http://spark-configuration.luminousmen.com/ . The process does take some trial and error, but it helps in the long run.
Additionally, you can read about how Spark memory works here: https://luminousmen.com/post/dive-into-spark-memory
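If you would rather reason it out by hand, the usual rule of thumb (leave a core and a few GB per node for the OS and Hadoop daemons, use about 5 cores per executor, reserve one executor's worth of resources for the YARN ApplicationMaster, and set aside roughly 7-10% of executor memory as overhead) works out as sketched below for 10 nodes with 16 cores and 126 GB each. Treat the numbers as a starting point to profile and adjust, not a definitive answer; the driver values in particular are assumptions.
# per node: 16 cores - 1 for OS/daemons     = 15 usable cores
#           126 GB  - ~6 GB for OS/daemons  = ~120 GB usable memory
# 5 cores per executor -> 15 / 5            = 3 executors per node
# cluster: 3 * 10 nodes = 30 executors, minus 1 for the ApplicationMaster = 29
# memory:  120 GB / 3   = 40 GB per executor, minus ~10% overhead -> ~35g + ~4g overhead
spark-submit \
  --num-executors 29 \
  --executor-cores 5 \
  --executor-memory 35g \
  --conf spark.executor.memoryOverhead=4g \
  --driver-memory 8g \
  --driver-cores 2 \
  <your application jar and arguments>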
I have a Spark job running on an EMR cluster with the following configuration:
Master: 1 × m4.2xlarge (32 GiB of memory, 8 vCPUs)
Core: 2 × m4.2xlarge (32 GiB of memory, 8 vCPUs)
Task nodes: up to 52 × r4.2xlarge (61 GiB of memory, 8 vCPUs)
Here is my spark-submit configuration, based on this blog post: https://blog.cloudera.com/how-to-tune-your-apache-spark-jobs-part-2/
spark.yarn.executor.memory=19g
spark.executor.cores=3
spark.yarn.driver.memoryOverhead=2g
spark.executor.memoryOverhead=2g
spark.dynamicAllocation.enabled=true
spark.dynamicAllocation.minExecutors=7
spark.dynamicAllocation.initialExecutors=7
spark.dynamicAllocation.maxExecutors=1000
spark.shuffle.service.enabled=true
spark.yarn.maxAttempts=1
I am running a cross join of two datasets for a use case, and I am trying to utilize every bit of memory and CPU available on the cluster with the settings above. I am able to use all of the memory available in the cluster, but not the CPU: even though 432 cores are available, the Spark job only utilizes 103 of them, as shown in the screenshot. I see the same behaviour whether the job runs in yarn-client mode (Zeppelin) or yarn-cluster mode.
I am not sure which setting is missing or incorrect. Any suggestions to resolve this are appreciated.
If you are seeing this in the YARN UI, you probably have to add this to yarn-site.xml:
yarn.scheduler.capacity.resource-calculator: org.apache.hadoop.yarn.util.resource.DominantResourceCalculator
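In yarn-site.xml that looks like the following (only the relevant property is shown; the rest of the file is assumed to already exist):
<property>
  <name>yarn.scheduler.capacity.resource-calculator</name>
  <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
</property>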
I had the same confusion. With the DefaultResourceCalculator, the YARN UI only accounts for memory usage; behind the scenes a container may be using more than one core, but the UI will show only one core used. The DominantResourceCalculator, on the other hand, takes both cores and memory into account for resource allocation and shows the actual number of cores and amount of memory.
You can also enable Ganglia or look at the EMR metrics for more detail.
I am running an instance group of 20 preemptible GCE instances to read ORC files on Google Cloud Storage. The data is partitioned by hour, with each hour roughly 2GB.
What type of instances should I use?
How much of the RAM should the JVM use?
I am using an autoscale configuration of 80% CPU and a 10-minute cooldown. Is there a more suitable configuration for Presto?
Is there a solution for server shutdowns due to lack of resources?
Partial responses will be appreciated as well.
As of PrestoDB version 0.199 there is no Google Cloud Storage connector for Presto, which makes it impossible to query GCS data directly.
Regarding hardware requirements, I'll cite the Teradata documentation here.
Memory
You should allocate a minimum of 16GB of RAM per node for Presto, but 64GB is recommended for most production workloads.
Network Bandwidth
It is recommended to have 10 Gigabit Ethernet between all the nodes in
the cluster.
Other Recommendations
Presto can be installed on any normally configured Hadoop cluster.
YARN should be configured to account for resources dedicated to
Presto. For example, if a node has 64GB of RAM, perhaps you would
normally allocate 60GB to YARN. If you install Presto on that node and
give Presto 32GB of RAM, then you should subtract 32GB from the 60GB
and let YARN only allocate 28GB per node. An optimized configuration
might choose to have separate Presto and Hadoop nodes. The optimized
configuration allows you to give more memory to Presto, and thus
perform larger join queries, for example.
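To make the "how much of the RAM should the JVM use" question concrete, a split that is often used on a dedicated 61 GiB worker (an r4.2xlarge, for example) is sketched below. The exact numbers are assumptions to tune against your workload, not figures from the Teradata documentation.
etc/jvm.config (heap for the Presto JVM, leaving headroom for the OS and off-heap usage):
-server
-Xmx50G
etc/config.properties (query memory limits kept below the heap size):
query.max-memory-per-node=30GB
query.max-memory=150GB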
Starting Spark in standalone client mode on a 10-node cluster using Spark-2.1.0-SNAPSHOT.
9 nodes are workers; the 10th runs the master and the driver. Each has 256GB of memory.
I'm having difficulty utilizing my cluster fully.
I'm setting the memory limit for the executors and the driver to 200GB, using the following parameters to spark-shell:
spark-shell --executor-memory 200g --driver-memory 200g --conf spark.driver.maxResultSize=200g
When my application starts I can see those values set as expected both in console and in spark web UI /environment/ tab.
But when I go to the /executors/ tab, I see that my nodes got only 114.3GB of storage memory assigned; see the screenshot below.
The total memory shown there is 1.1TB, while I would expect to have 2TB. I double-checked that no other processes were using the memory.
Any idea what the source of that discrepancy is? Did I miss some setting? Is it a bug in the /executors/ tab or in the Spark engine?
You are fully utilizing the memory, but the /executors/ tab only shows the storage portion of it. By default, that portion is 60% of the total memory.
From Spark Docs
Memory usage in Spark largely falls under one of two categories: execution and storage. Execution memory refers to that used for computation in shuffles, joins, sorts and aggregations, while storage memory refers to that used for caching and propagating internal data across the cluster.
As of Spark 1.6, execution memory and storage memory are shared, so it is unlikely that you would need to tune the spark.memory.fraction parameter.
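A rough back-of-the-envelope, assuming the Spark 2.x defaults (spark.memory.fraction = 0.6 and about 300MB permanently reserved by Spark), shows where the 114.3GB figure comes from; note that the heap value below is an estimate, since the JVM reports a usable max heap somewhat smaller than the -Xmx setting (one survivor space is excluded):
usable JVM heap reported for -Xmx200g   ≈ 191 GB
unified execution + storage memory      ≈ (191 GB - 300 MB) * 0.6 ≈ 114 GB
which is roughly the 114.3GB per executor shown in the /executors/ tab; the remaining ~40% of the heap is left for user data structures and Spark's internal metadata.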
If you're using YARN, the "Memory Used" and "Memory Total" figures on the ResourceManager's main page reflect the total memory usage.
I am using Cassandra 1.2 with the new Murmur3Partitioner on CentOS.
On a 2 node cluster both set up with num_tokens=256
I see that one node is using much more memory than the other after inserting a couple million rows with CQL3.
When I run the free command
it shows 6GB usage on the second node and 1GB on the seed node.
However, when running
ps -e -o pid,vsz,comm= | sort -n -k 2
It shows the java process using about 6.8GB on each node.
Note that I have
MAX_HEAP_SIZE="4GB"
HEAP_NEWSIZE="400M"
set in cassandra-env.sh on each node.
Can anyone provide some insight?
This is most likely related to the general difficulties around reporting accurate memory utilization in Linux, especially as it relates to Java processes. Since Java processes reserve and allocate memory automatically, what the operating system sees can be misleading. The best way to understand what a Java process is doing is using JMX to monitor heap utilization. Tools such as VisualVM and jconsole work well for this.
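For example (a sketch, assuming JMX is enabled on the default port and you know the Cassandra process ID; the 5-second sampling interval is arbitrary), you can compare what the OS reports with what the heap is actually doing:
nodetool -h 127.0.0.1 info           # reports heap memory used / total as seen over JMX
jstat -gcutil <cassandra_pid> 5000   # samples eden/survivor/old-gen occupancy every 5 seconds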