I am trying to read 1 TB of Parquet data from S3 into Spark DataFrames, and I have assigned 80 executors with 30 GB of memory and 5 cores each to process and ETL the data.
However, I am seeing that the data is not distributed equally among the executors, so the cores are not all used while reading. My understanding is that the input is divided into chunks which are then distributed evenly among the executors for processing. I am not using any shuffles or joins of any kind, and the explain plan does not show any hash partitioning or aggregations. Please suggest whether this is expected and how we can better redistribute the data to make use of all the cores.
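One thing that often helps here is shrinking the input split size so the read produces more tasks, or repartitioning explicitly right after the read. A minimal sketch, assuming a hypothetical s3://my-bucket/input/ path and the 80 x 5 = 400 task slots described above:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("parquet-etl").getOrCreate()

// Smaller max split size => more input partitions => more read tasks to spread
// across executors (the default is 128 MB).
spark.conf.set("spark.sql.files.maxPartitionBytes", "64m")

val df = spark.read.parquet("s3://my-bucket/input/") // hypothetical path

// 80 executors x 5 cores = 400 task slots; an explicit repartition adds a shuffle,
// but guarantees an even spread if the input splits are still too few or skewed.
val balanced = if (df.rdd.getNumPartitions < 400) df.repartition(400) else df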
I have the following use case:
50 students write their own code which consumes a preloaded dataset, and they will repeat it many times.
They all need to do the same task: read the data in order, and process it.
The dataset is a time series containing 600 million messages; each message is about 1.3 KB.
Processing will probably be in Spark, but not mandatory.
The dataset is fixed and ReadOnly.
The data should be read at a "reasonable speed", i.e. > 30 MB/s for each consumer.
I was thinking of setting up a Kafka cluster with 3+ brokers, 1 topic, and 50 partitions.
My issue with the above plan is that each student (== consumer) must read all the data, regardless of what other consumers do.
Is Kafka a good fit for this? If so, how?
What if I relax the requirement of reading the dataset in order? i.e. a consumer can read the 600M messages in any order.
Is it correct that in this case each consumer will simply pull the full topic (starting from "earliest")?
An alternative is to set up HDFS storage (we use Azure, so it's called a Storage Account) and simply supply a mount point. However, I do not have control over the throughput in this case.
Throughput calculation:
Let's say 25 consumers run concurrently, each reading at 30 MB/s -> 750 MB/s in total.
Assuming data is read from disk and the disk rate is 50 MB/s, I need to read concurrently from 750/50 = 15 disks.
Does that mean I need 15 brokers? I did not see how one broker can spread partitions across several disks attached to it.
Similar posts:
Kafka topic partitions to Spark streaming
How does one Kafka consumer read from more than one partition?
(Spring) Kafka appears to consume newly produced messages out of order
Kafka architecture many partitions or many topics?
Is it possible to read from multiple partitions using Kafka Simple Consumer?
Processing will probably be in Spark, but not mandatory
An alternative is to set up HDFS storage (we use Azure)
Spark can read from Azure Blob Storage, so I suggest you start with that first. You can easily scale up Spark executors in parallel for throughput.
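As a rough sketch of that first approach (the storage account name, container, key, and path below are all placeholders; this assumes the classic wasbs:// Blob Storage connector, with abfss:// for Data Lake Gen2 working similarly):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("student-job").getOrCreate()

// Placeholder account and key, supplied through the Hadoop configuration.
spark.sparkContext.hadoopConfiguration.set(
  "fs.azure.account.key.mystorageaccount.blob.core.windows.net",
  "<storage-account-key>")

val df = spark.read.parquet("wasbs://dataset@mystorageaccount.blob.core.windows.net/messages/")
println(df.rdd.getNumPartitions) // more files/blocks => more parallel read tasks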
If you want to use Kafka, don't base the consumption rate on disk speed alone, especially since Kafka can do zero-copy transfers. Use the kafka-consumer-perf-test script to test how fast your consumers can go with one partition. Or, better, if your data has some key other than the timestamp that you can order by, then use that.
It's not really clear whether each of the 50 students does the same processing on the data set or whether some pre-computation can be shared, but if it can, Kafka Streams KTables can be set up to aggregate some static statistics of the data as it is streamed through a topic. That way you can distribute the load for those queries and won't need 50 parallel consumers.
Otherwise, my first thought would be to simply use a TSDB such as OpenTSDB, Timescale or Influx, or maybe Druid, which could also be used with Spark or queried directly.
If you are using Apache Spark 3.0+, there are ways around the one-consumer-per-partition bound, since it can use more executor threads than there are partitions, so it is mostly a question of how fast your network and disks are.
Kafka keeps recently written data in the OS page cache, so for your use case most reads will probably be served from memory.
From the Spark documentation on the minPartitions option:
Desired minimum number of partitions to read from Kafka. By default, Spark has a 1-1 mapping of topicPartitions to Spark partitions consuming from Kafka. If you set this option to a value greater than your topicPartitions, Spark will divvy up large Kafka partitions to smaller pieces. Please note that this configuration is like a hint: the number of Spark tasks will be approximately minPartitions. It can be less or more depending on rounding errors or Kafka partitions that didn't receive any new data.
https://spark.apache.org/docs/3.0.1/structured-streaming-kafka-integration.html
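A minimal sketch of that option (the broker address and topic name are placeholders, and 400 is just an assumed target task count):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("kafka-read").getOrCreate()

// With 50 topic partitions, minPartitions = 400 asks Spark to split each Kafka
// partition into roughly 8 smaller Spark partitions on read.
val df = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "broker1:9092") // placeholder
  .option("subscribe", "dataset-topic")              // placeholder
  .option("startingOffsets", "earliest")
  .option("minPartitions", "400")
  .load()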
We have a Kafka Streams app doing a KStream-KTable inner join. Both topics are high volume with 256 partitions each. The Kafka Streams app is currently deployed on 8 nodes with an 8 GB heap each.
The state store (RocksDB) persists to disk, and we are running out of disk space on the containers. What are some options for consuming data from one of the topics as a KTable while limiting the amount of data kept on disk (say, only a day's worth of keys/data, or some other time frame), with the older state/files getting deleted?
What will the number of partitions be for a 10-node cluster with 20 executors and code reading a folder with 100 files?
It differs depending on the mode you are running in, and you can tune it using the spark.default.parallelism setting. From the Spark documentation:
For operations like parallelize with no parent RDDs, it depends on
the cluster manager:
Local mode: number of cores on the local machine
Mesos fine grained mode: 8
Others: total number of cores on all executor nodes or 2, whichever is larger
Link to related Documentation:
http://spark.apache.org/docs/latest/configuration.html#execution-behavior
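For illustration, a small sketch of setting it explicitly at startup; the value 40 here is just an assumption (e.g. 20 executors x 2 cores), not a recommendation:

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("partition-demo")
  .set("spark.default.parallelism", "40") // assumed value for illustration
val sc = new SparkContext(conf)

// parallelize has no parent RDD, so it falls back to spark.default.parallelism.
val rdd = sc.parallelize(1 to 1000000)
println(rdd.getNumPartitions) // 40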
You can also change the number of partitions yourself, depending on the data that you are reading. Some of the Spark APIs provide an additional parameter for the number of partitions.
Further, to check how many partitions are getting created, do as #Sandeep Purohit says:
rdd.getNumPartitions
It will return the number of partitions that were created.
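For example, a short sketch using one of the read APIs that takes a partition hint (the path is a placeholder, and sc is assumed to be an existing SparkContext):

// textFile accepts a minimum number of partitions as its second argument.
val rdd = sc.textFile("hdfs:///data/input/", 100) // placeholder path
println(rdd.getNumPartitions) // usually >= 100, depending on file/block layout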
You can also change the number of partitions after the RDD is created by using two APIs, namely coalesce and repartition.
Link to coalesce and repartition: Spark - repartition() vs coalesce()
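A brief sketch of the difference between the two, reusing the rdd from the sketch above (the partition counts are just for illustration):

// repartition can increase or decrease the partition count and always shuffles;
// coalesce is meant for decreasing it and avoids a full shuffle.
val wide   = rdd.repartition(200)
val narrow = rdd.coalesce(10)
println(wide.getNumPartitions)   // 200
println(narrow.getNumPartitions) // 10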
From Spark doc:
By default, Spark creates one partition for each block of the file
(blocks being 64MB by default in HDFS), but you can also ask for a
higher number of partitions by passing a larger value. Note that you
cannot have fewer partitions than blocks.
Number of partitions also depends upon the size of the file. If the file size is too big, you may choose to have more partitions.
The number of partitions of an RDD created from Scala/Java objects depends on the number of cores on the machines, while an RDD created from Hadoop input files depends on the HDFS block size (version dependent). You can find the number of partitions of an RDD as follows:
rdd.getNumPartitions
I am using Spark Streaming to process some events. It is deployed in standalone mode with 1 master and 3 workers. I have set the number of cores per executor to 4 and the total number of cores to 24, which means 6 executors will be spawned in total. I have set spread-out to true, so each worker machine gets 2 executors. My batch interval is 1 second, and I have repartitioned each batch into 21 partitions; the remaining 3 cores are for the receivers. While running, what I observe from the event timeline is that only 3 of the executors are being used; the other 3 are not. As far as I know, there is no parameter in Spark standalone mode to specify the number of executors. How do I make Spark use all the available executors?
Your streaming job probably does not produce enough partitions to fill all the executors on every 1-second mini-batch. Try repartition(24) as the first streaming transformation to use the full power of the Spark cluster.
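As a sketch of what that looks like in a DStream job (the receiver type, host, port, and processing logic are placeholders):

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

val conf = new SparkConf().setAppName("streaming-repartition-demo")
val ssc  = new StreamingContext(conf, Seconds(1))

val events = ssc.socketTextStream("stream-host", 9999) // placeholder receiver

events
  .repartition(24)               // spread each 1-second batch across all task slots
  .map(line => line.toUpperCase) // stand-in for the real per-event processing
  .foreachRDD(rdd => rdd.foreach(println))

ssc.start()
ssc.awaitTermination()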