Mongo Spark connector write issues - mongodb

We're observing a significant increase in write duration, which eventually results in timeouts.
We're using a replica-set-based MongoDB cluster.
It only happens on the peak days of the week, due to the high volume.
We've tried deploying additional nodes, but it hasn't helped.
Attaching the screenshots.
We're using the MongoDB Spark connector 2.2.1 on Databricks with Apache Spark 2.2.1.
Any recommendations to optimise write speed would be truly appreciated.

How many workers are there? Please check the DAG and executor metrics for the job. If all writes are happening from a single executor, try repartitioning the dataset based on the number of executors.
MongoSpark.save(dataset.repartition(50), writeConf);
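A minimal sketch of that suggestion (the slot count of 8 is an assumed example; writeConf is whatever WriteConfig your job already builds):

import com.mongodb.spark.MongoSpark

// Repartition to roughly (executors x cores per executor) so that every executor
// writes to MongoDB in parallel instead of funnelling inserts through one task.
val executorSlots = 8 // assumption: e.g. 4 executors x 2 cores; use your cluster's numbers
val balanced = dataset.repartition(executorSlots)

MongoSpark.save(balanced, writeConf)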

Related

ClickHouse: prioritize Kafka import threads over other queries

TL;DR: Is there any way to prioritize the Kafka-engine import threads over any other ClickHouse threads, or can I reserve CPUs for the Kafka consumers?
In my setup, the Kafka lag increases too much when a big query is issued. I guess this is because the import thread doesn't receive enough CPU time when there is too much CPU load. I tried setting a maximum thread cap for users as well as setting nice values. Nothing seems to work, so any advice is welcome.
Upgrade to 20.9.7.11.
Recreate the Kafka engine tables with kafka_num_consumers=5 (or 10) and kafka_thread_per_consumer=1.
Add background_schedule_pool_size=30 to the default profile (users.xml).

Do partitions increase performance when only using one computer/node?

I know that partitions boost performance by running tasks in parallel on different nodes in a cluster. But will partitions give me better performance when I am only using a single computer? I am using Spark and Scala.
Yes, it will increase performance.
Make sure your CPU has more than one core.
When you create your local SparkSession, make sure to use multiple cores:
local runs locally with one thread, local[N] runs locally with N threads; I suggest you use local[*].
Also make sure your RDD/Dataset has multiple partitions; a good number of partitions is 2 to 4 times the number of cores.
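As a minimal sketch (the input path is a placeholder, and 3x the core count is just one point inside the suggested 2-4x range):

import org.apache.spark.sql.SparkSession

// local[*] gives one worker thread per core available to the JVM on this machine.
val spark = SparkSession.builder
  .appName("single-node-parallelism")
  .master("local[*]")
  .getOrCreate()

val cores = Runtime.getRuntime.availableProcessors()

// Placeholder input path; repartition to 2-4x the core count so every thread has work.
val df = spark.read.parquet("/path/to/input.parquet").repartition(cores * 3)
println(s"cores = $cores, partitions = ${df.rdd.getNumPartitions}")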
Apache Spark scales both vertically (CPU, RAM, ...) and horizontally (nodes). I assume that your computer/node has a CPU with more than one core. The partitions are then processed in parallel.

kafka-connect - s3-connector - JVM heap - estimated heap size calculation

I'm trying to productionize Kafka Connect in our environment. For infrastructure planning purposes, I'm looking for how to estimate the required JVM heap size per node. I have two topics that I would like to sink to S3 with the S3 connector. I don't see any good articles for arriving at the estimates. Can someone please guide me?
There is no good guide because the connector is too configurable.
For example, each task (tasks.max) will batch records up to the flush size (flush.size), then dump them to storage.
If you are using the DefaultPartitioner, you could estimate how many records you're storing per partition, then how many tasks will be running per node, and then how many total topics you're consuming, and come up with a rough number.
If you're using the TimeBasedPartitioner, then you'll need to take into account the partition duration and the scheduled rotate interval. I can say that 8 GB of RAM is capable of writing multiple-GB files from a few partitions with an hourly partition duration, so I don't think you need much more heap than that to start.
As far as other documentation, there's a decent description in this issue https://github.com/confluentinc/kafka-connect-storage-cloud/issues/177
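A back-of-the-envelope sketch of that estimation in Scala, following the buffering model described above (every number here is a hypothetical placeholder; plug in your own record sizes, flush.size, partition counts, and tasks per node):

// Rough worst-case heap usage for one Connect worker node running the S3 sink,
// assuming each task buffers up to flush.size records per assigned topic-partition.
val avgRecordBytes    = 1024L    // hypothetical average serialized record size
val flushSize         = 50000L   // connector flush.size
val partitionsPerTask = 6        // topic-partitions assigned to one task
val tasksPerNode      = 4        // share of tasks.max running on this node

val bufferedBytes = avgRecordBytes * flushSize * partitionsPerTask * tasksPerNode
println(f"~${bufferedBytes / math.pow(1024, 3)}%.1f GiB buffered in the worst case")
// Size -Xmx with comfortable headroom on top of this figure (JVM overhead,
// compression buffers, other connectors on the same worker).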

having Spark process partitions concurrently, using a single dev/test machine

I'm naively testing for concurrency in local mode, with the following SparkSession:
SparkSession
  .builder
  .appName("local-mode-spark")
  .master("local[*]")
  .config("spark.executor.instances", 4)
  .config("spark.executor.cores", 2)
  .config("spark.network.timeout", "10000001") // to avoid shutdown during debug, avoid otherwise
  .config("spark.executor.heartbeatInterval", "10000000") // to avoid shutdown during debug, avoid otherwise
  .getOrCreate()
and a mapPartitions API call as follows:
import spark.implicits._
val inputDF : DataFrame = spark.read.parquet(inputFile)
val resultDF : DataFrame =
  inputDF.as[T].mapPartitions(sparkIterator => new MyIterator).toDF
This did surface one concurrency bug in my code inside MyIterator (not a bug in Spark's code). However, I'd like my application to crunch all available machine resources, both in production and during this testing, so that the chances of spotting additional concurrency bugs improve.
That is clearly not the case for me so far: my machine sits at very low CPU utilization throughout the heavy processing of inputDF, while there's plenty of free RAM and the JVM Xmx poses no real limitation.
How would you recommend testing for concurrency on a local machine? The objective is to verify that in production, Spark will not hit thread-safety or other concurrency issues in the code it runs from within MyIterator.
Can Spark, even in local mode, process separate partitions of my input dataframe in parallel? Can I get Spark to work concurrently on the same dataframe on a single machine, preferably in local mode?
Max parallelism
You are already running spark in local mode using .master("local[*]").
local[*] uses as many threads as the number of processors available to the Java virtual machine (it uses Runtime.getRuntime.availableProcessors() to determine the number).
Max memory available to all executors/threads
I see that you are not setting the driver memory explicitly. By default, the driver memory is 512M. If your local machine can spare more than this, set it explicitly. You can do that by either:
setting it in the properties file (default is spark-defaults.conf),
spark.driver.memory 5g
or by supplying configuration setting at runtime
$ ./bin/spark-shell --driver-memory 5g
Note that this cannot be achieved by setting it in the application, because by then it is already too late; the process has already started with some amount of memory.
Nature of Job
Check number of partitions in your dataframe. That will essentially determine how much max parallelism you can use.
inputDF.rdd.partitions.size
If the output of this is 1, your dataframe has only one partition, so you won't get concurrency when you operate on it. In that case, you might have to tweak some configuration, or repartition, to create more partitions so that tasks can run concurrently.
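For example, a small sketch of forcing more partitions when the count comes back too low (the target of 8 partitions is a placeholder; 2-4x your core count is a common rule of thumb):

// If the dataframe arrived in a single partition, spread it out so that several
// tasks (and therefore several local[*] threads) can run at once.
val df =
  if (inputDF.rdd.getNumPartitions == 1) inputDF.repartition(8)
  else inputDF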
Running local mode cannot simulate a production environment for the following reasons.
There is a lot of code that gets bypassed in local mode which would normally run under any other cluster manager. Among various issues, a few things I can think of:
a. Inability to detect bugs in the way shuffles are handled. (Shuffle data is handled in a completely different way in local mode.)
b. Serialization-related issues will not be detected, since all code is available to the driver and tasks run in the driver's JVM itself, so serialization problems never surface.
c. No speculative tasks (especially relevant for write operations).
d. Networking-related issues: all tasks are executed in the same JVM, so you cannot detect problems such as driver/executor communication issues or codegen-related issues.
Concurrency in local mode
a. The maximum concurrency that can be attained equals the number of cores in your local machine. (Link to code)
b. The job, stage, and task metrics shown in the Spark UI are not accurate, since they incur the overhead of running in the same JVM as the driver.
c. As for CPU/memory utilization, it depends on the operation being performed. Is the operation CPU/memory intensive?
When to use local mode
a. Testing of code that will run only on the driver
b. Basic sanity testing of the code that will get executed on the executors
c. Unit testing
tl;dr: Concurrency bugs that occur in local mode might not even be present under other cluster resource managers, since there is a lot of special handling in Spark's code for local mode (many code paths check isLocal and branch into a completely different code flow).
Yes!
Achieving parallelism in local mode is quite possible.
Check the amount of memory and CPU available on your local machine and supply values for the driver-memory and driver-cores configuration when submitting your Spark job.
Increasing executor-memory and executor-cores will not make a difference in this mode.
Once the application is running, open up the Spark UI for the job. You can go to the Executors tab to check the amount of resources your Spark job is actually utilizing.
You can monitor the tasks that get generated, and how many of them run concurrently, using the Jobs and Stages tabs.
In order to process data that is much larger than the available resources, ensure that you break your data into smaller partitions using repartition. This should allow your job to complete successfully.
Increase the default shuffle partitions if your job has aggregations or joins. Also, ensure sufficient space on the local file system, since Spark creates intermediate shuffle files and writes them to disk.
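For instance, a small sketch of that kind of tuning (both values are placeholders to size against your cores and data volume, and largeDF stands in for your own dataframe):

// Raise the shuffle partition count for jobs with joins/aggregations, and
// repartition oversized inputs before heavy stages.
spark.conf.set("spark.sql.shuffle.partitions", "64")
val chunked = largeDF.repartition(64)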
Hope this helps!

What happens with data persisted in memory when using autoscaling on task nodes?

I'm using AWS EMR with Spark/Scala. Say I have a large DataFrame that I choose to persist. The persist() method may be lazy, but let's say I trigger it right after with a .show():
df.persist()
df.show()
My understanding is that this stores the DataFrame in memory, so it's quicker to use next time. Let's say autoscaling kicks in and scales back half of my task nodes.
If I do a df.select after the task nodes are terminated, will it still work? Or are the blocks on the terminated nodes lost? Or do they get persisted to the core nodes?
In AWS EMR, only core nodes will store the data blocks. Task nodes only help with scaling up the compute power by reading data from core nodes.
Ideally, each executor will store a few partitions of your dataset in memory; when you lose an executor, the missing partitions will be recomputed once the work is reassigned to the remaining executors/resources.
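A small sketch of what that looks like in practice (the column name is a placeholder; MEMORY_AND_DISK is simply the DataFrame persist() default):

import org.apache.spark.storage.StorageLevel

// Cached blocks live on whichever executors computed them, including executors
// on task nodes.
df.persist(StorageLevel.MEMORY_AND_DISK)
df.show() // forces materialization of the cached blocks

// Later, after autoscaling has terminated some task nodes: partitions whose
// cached blocks sat on the lost executors are recomputed from the DataFrame's
// lineage, so the query still succeeds (just more slowly the first time).
df.select("some_column").show()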