I have an Apache Beam batch pipeline (written in Java) to transform raw analytics data from BigQuery into an aggregated form. Session records (which may now be extended by the next day's worth of page events) and a new set of page events are read from BigQuery. The pipeline then performs a groupByKey operation to group by user id (across both datasets) before the aggregation operation that creates session records. The groupByKey operation is performing very slowly (a throughput of ~50 per sec) on the larger dataset (~8,400,000 records), whereas the throughput for the other input (~1,000,000 records) was much higher (~10,000 per sec). Does anyone have any advice on how I can troubleshoot and ultimately improve the speed of the operation?
From research online I am aware that it can sometimes be more efficient to use a Combine operation rather than groupByKey (among others, this article), but I did not think that would be appropriate for the data I'm grouping (BQ TableRow records).
Further info that might be useful:
The groupByKey reduces the ~8,400,000 records to approximately 3,500,000 grouped records, with anywhere from 1 to ~2,000 records combined per key.
I fully acknowledge I am lacking a full understanding of the intricacies of Apache Beam and Dataflow, and I am keen to understand a lot more as I will be building out a number of different pipelines.
Below is a screenshot of the dataflow graph
Individual steps in Beam get fused together into stages when running on Dataflow, meaning the throughput of one step gets tied to others, so it's entirely possible it's not the GroupByKey but rather an adjacent DoFn that is causing the slowness. If you click on a step you can see, in the step-info tab, a field that gives the wall time for executing that particular step. I would check whether a particular step around that GroupByKey has a high wall time.
Related
I want to transfer data from Oracle to MongoDB using Apache NiFi. Oracle has a total of 9 million records.
I have created a NiFi flow using the QueryDatabaseTable and PutMongoRecord processors. This flow is working fine but has some performance issues.
After starting the flow, the records queued between SplitJson -> PutMongoRecord keep increasing.
Is there any way to slow down the rate at which the SplitJson processor puts records into the queue?
OR
Increase the rate of insertion in PutMongoRecord?
Right now 100k records are inserted in 30 minutes; how can I speed this process up?
@Vishal: the solution you are looking for is to increase the concurrency of PutMongoRecord:
You can also experiment with the BATCH size in the configuration tab:
You can also reduce the execution time of SplitJson. However, you should remember this processor is going to take 1 FlowFile and make a LOT of FlowFiles regardless of the timing.
How much you can increase concurrency is going to depend on how many NiFi nodes you have and how many CPU cores each node has. Be experimental and methodical here. Move up in single increments (1-2-3-etc.) and test your flow at each increment. If you only have 1 node, you may not be able to tune the flow to your performance expectations. In that case, tune the flow for stability and make it as fast as you can, then consider scaling.
How much you can increase concurrency and batch size is also going to depend on the MongoDB data source and the total number of connections you can get from NiFi to Mongo.
In addition to Steven's answer, there are two properties on QueryDatabaseTable that you should experiment with:
Max Results Per Flowfile
Use Avro logical types
With the latter, you might be able to do a direct shift from Oracle to MongoDB because it'll convert Oracle date types into Avro ones, and those should in turn be converted directly into proper Mongo date types. Max Results Per Flowfile should also allow you to specify appropriate batching without having to use the extra processors.
How does DataStage parallelism help with performance improvement? What is the relationship between parallelism and performance?
Thanks & Regards,
Subhasree
This question is very broad - please try to be more specific next time.
There are several different parallel approaches in DataStage:
Pipeline Parallelism: Imagine a job where data is read from a database, transformed, and written to another database. While data is still being read from the source, some rows are being transformed and others (which have already been transformed) are already being written to the target.
Because you do not have to wait for each single step to finish, this improves performance.
Partitioning Parallelism: Data is read, e.g. from a sequential file, and then split up into different data partitions (the number of partitions is determined by the configuration file). Parallel stages, although designed once, are instantiated once per partition, so extra threads are spawned. These threads run in parallel and again provide better performance (throughput).
Hope this helps.
I wonder if there is a way to specify the size of the mini-batch in Spark Structured Streaming. That is, rather than only stating the mini-batch interval (triggers), I would like to state how many rows can be in a mini-batch (DataFrame) per interval.
Is there a way to do that?
Aside from the general capability to do that, I particularly need to apply it in a testing scenario, where I have a MemoryStream. I would like Spark to consume a certain amount of data from the MemoryStream, instead of taking all of it at once, to actually see how the overall application behaves. My understanding is that the MemoryStream data structure needs to be filled before launching the job on it. Hence, how can I see the mini-batch processing behavior if Spark is able to ingest the entire content of the MemoryStream within the interval that I give?
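For context, here is a simplified sketch of the kind of test I mean (the types, data sizes and trigger interval are just placeholders): everything is added to the MemoryStream up front, and only then is the query started.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.execution.streaming.MemoryStream
import org.apache.spark.sql.streaming.Trigger

val spark = SparkSession.builder().master("local[2]").appName("memstream-test").getOrCreate()
import spark.implicits._
implicit val sqlCtx = spark.sqlContext

// The MemoryStream is filled completely before the query starts,
// so the very first trigger can see the entire content at once.
val input = MemoryStream[Int]
input.addData(1 to 100000)

val query = input.toDS()
  .writeStream
  .format("memory")
  .queryName("sink")
  .trigger(Trigger.ProcessingTime("5 seconds"))
  .start()
```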
EDIT1
In the Kafka Integration I have found the following:
maxOffsetsPerTrigger: Rate limit on maximum number of offsets processed per trigger interval. The specified total number of offsets will be proportionally split across topicPartitions of different volume.
But that is just for the Kafka integration. I have also seen
maxFilesPerTrigger: maximum number of new files to be considered in every trigger
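For reference, a limit like that is attached as a plain source option; a rough sketch (the broker, topic name and number are placeholders, and spark is an existing SparkSession):

```scala
// Each trigger/micro-batch reads at most 10,000 offsets in total,
// split proportionally across the topic's partitions.
val kafkaStream = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "broker1:9092")
  .option("subscribe", "page-events")
  .option("maxOffsetsPerTrigger", "10000")
  .load()
```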
So it seems these limits are defined per source type. Hence, is there a way to control how data is consumed from a MemoryStream[Row]?
Have a look at the settings below; they may solve your problem:
1. spark.streaming.backpressure.initialRate
2. spark.streaming.backpressure.enabled
I'm trying to understand how Spark partitions data. Suppose I have an execution DAG like the one in the picture (orange boxes are the stages). The two groupBy and the join operations are supposed to be very heavy if the RDDs are not partitioned.
Is it wise then to use .partitionBy(new HashPartitioner(properValue)) on P1, P2, P3 and P4 to avoid the shuffle? What's the cost of partitioning an existing RDD? When isn't it appropriate to partition an existing RDD? And doesn't Spark partition my data automatically if I don't specify a partitioner?
Thank you
tl;dr The answers to your questions respectively: Better to partition at the outset if you can; Probably less than not partitioning; Your RDD is partitioned one way or another anyway; Yes.
This is a pretty broad question. It takes up a good portion of our course! But let's try to address as much about partitioning as possible without writing a novel.
As you know, the primary reason to use a tool like Spark is because you have too much data to analyze on one machine without having the fan sound like a jet engine. The data get distributed among all the cores on all the machines in your cluster, so yes, there is a default partitioning, and it follows the data. Remember that the data are already distributed at rest (in HDFS, HBase, etc.), so Spark just partitions according to the same strategy by default to keep the data on the machines where they already are, with the default number of partitions equal to the number of cores on the cluster. You can override this default number by configuring spark.default.parallelism, and you generally want 2-3 partitions per CPU core.
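As a sketch only (the numbers are illustrative, not a recommendation for your cluster), that override is set when the session/context is built:

```scala
import org.apache.spark.sql.SparkSession

// Illustrative numbers only: for an assumed 16-core cluster, 2-3 partitions
// per core gives something in the range of 32-48.
val spark = SparkSession.builder()
  .appName("partitioning-demo")
  .config("spark.default.parallelism", "48")    // default for RDD shuffle operations
  .config("spark.sql.shuffle.partitions", "48") // the DataFrame/Dataset counterpart
  .getOrCreate()

val sc = spark.sparkContext // used by the RDD sketches below
```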
However, typically you want data that belong together (for example, data with the same key, where HashPartitioner would apply) to be in the same partition, regardless of where they are to start, for the sake of your analytics and to minimize shuffle later. Spark also offers a RangePartitioner, or you can roll your own for your needs fairly easily. But you are right that there is an upfront shuffle cost to go from default partitioning to custom partitioning; it's almost always worth it.
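Here is a minimal sketch of that idea, assuming the SparkContext sc from the snippet above; the file paths and key extraction are placeholders:

```scala
import org.apache.spark.HashPartitioner

val partitioner = new HashPartitioner(48)

// Give both pair RDDs the same partitioner so records with the same key
// land in the same partition (and therefore on the same machine).
val users = sc.textFile("hdfs:///data/users.csv")
  .map(line => (line.split(",")(0), line))   // (userId, full record)
  .partitionBy(partitioner)
  .cache()                                   // keep the partitioned layout around for reuse

val events = sc.textFile("hdfs:///data/events.csv")
  .map(line => (line.split(",")(0), line))
  .partitionBy(partitioner)

// Both sides share a partitioner, so this join needs no additional shuffle.
val joined = users.join(events)
```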
It is generally wise to partition at the outset (rather than delay the inevitable with partitionBy) and then repartition if needed later. Later on you may even choose to coalesce, which reduces the number of partitions without a full shuffle and can potentially leave some machines and cores idle, because the gain in network IO (after that upfront cost) is greater than the loss of CPU power.
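Continuing with the joined RDD from the sketch above (any RDD with too many small partitions works the same way):

```scala
// Shrink the partition count without a full shuffle: coalesce merges
// existing partitions rather than redistributing every record.
val fewer = joined.coalesce(8)

// repartition(n) is the shuffling alternative, for when you need to
// increase the partition count or rebalance skewed partitions.
val rebalanced = joined.repartition(96)
```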
(The only situation I can think of where you don't partition at the outset--because you can't--is when your data source is a compressed file.)
Note also that you can preserve the partitioning during a map-like transformation by using mapPartitions and mapPartitionsWithIndex with preservesPartitioning = true.
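A small sketch, reusing the users pair RDD from above (the transformation itself is just a placeholder):

```scala
// preservesPartitioning = true tells Spark that the keys were not changed,
// so the existing HashPartitioner (and any co-partitioning benefit) is kept.
val normalized = users.mapPartitions(
  iter => iter.map { case (id, record) => (id, record.toLowerCase) },
  preservesPartitioning = true
)
```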
Finally, keep in mind that as you experiment with your analytics while you work your way up to scale, there are diagnostic capabilities you can use:
toDebugString to see the lineage of RDDs
getNumPartitions to, shockingly, get the number of partitions
glom to see clearly how your data are partitioned
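A quick sketch of those three against the users RDD from the earlier snippet (collecting glom output is only sensible on small test data):

```scala
// Lineage: each indentation level marks a new stage, i.e. a shuffle boundary.
println(users.toDebugString)

// How many partitions the RDD currently has.
println(users.getNumPartitions)

// One array per partition, to eyeball how the keys are spread out.
users.glom().collect().zipWithIndex.foreach { case (part, i) =>
  println(s"partition $i holds ${part.length} records")
}
```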
And if you pardon the shameless plug, these are the kinds of things we discuss in Analytics with Apache Spark. We hope to have an online version soon.
By applying partitionBy preemptively you don't avoid the shuffle. You just push it to another place. This can be a good idea if the partitioned RDD is reused multiple times, but you gain nothing for a one-off join.
Doesn't Spark partition my data automatically if I don't specify a partitioner?
It will partition (a.k.a. shuffle) your data as part of the join and the subsequent groupBy (unless you keep the same key and use a transformation which preserves partitioning).
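To make that concrete, a rough sketch where left and right are placeholder pair RDDs keyed the same way:

```scala
import org.apache.spark.HashPartitioner

// One-off join: the shuffle happens either way; partitionBy only moves it earlier.
val oneOff = left.partitionBy(new HashPartitioner(48)).join(right)

// Reuse is where it pays off: shuffle once, cache, then run several
// key-based operations against the already-partitioned RDD.
val byKey   = left.partitionBy(new HashPartitioner(48)).cache()
val joined  = byKey.join(right)   // byKey itself is not re-shuffled
val grouped = byKey.groupByKey()  // reuses the existing partitioner, no shuffle
```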
We have a pipeline for which the initial stages are properly scalable - using several dozen workers apiece.
One of the last stages is
dataFrame.write.format(outFormat).mode(saveMode).
partitionBy(partColVals.map(_._1): _*).saveAsTable(tname)
For this stage we end up with a single worker. This clearly does not work for us - in fact the worker runs out of disk space - on top of being very slow.
Why would that command end up running on a single worker/single task only?
Update: the output format was Parquet. The number of partition columns did not affect the result (I tried one column as well as several columns).
Another update: none of the following conditions (as posited by an answer below) held:
coalesce or partitionBy statements
window / analytic functions
Dataset.limit
sql.shuffle.partitions
The problem is unlikely to be related in any way to saveAsTable.
A single task in a stage indicates that the input data (Dataset or RDD) has only one partition. This is in contrast to cases where there are multiple tasks but one or more have significantly higher execution times, which normally correspond to partitions containing positively skewed keys. Also, you should not confound a single-task scenario with low CPU utilization. The latter is usually a result of insufficient IO throughput (high CPU wait times are the most obvious indication of that), but in rare cases it can be traced to the usage of shared objects with low-level synchronization primitives.
Since standard data sources don't shuffle data on write (including cases where partitionBy and bucketBy options are used) it is safe to assume that data has been repartitioned somewhere in the upstream code. Usually it means that one of the following happened:
Data has been explicitly moved to a single partition using coalesce(1) or repartition(1).
Data has been implicitly moved to a single partition for example with:
Dataset.limit
Window function applications with a window definition lacking a PARTITION BY clause, for example:
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.row_number

// Without PARTITION BY the whole Dataset is sorted within a single partition.
df.withColumn(
  "row_number",
  row_number().over(Window.orderBy("some_column"))
)
The sql.shuffle.partitions option is set to 1 and the upstream code includes a non-local operation on a Dataset.
The Dataset is the result of applying a global aggregate function (without a GROUP BY clause). This is usually not an issue, unless the function is non-reducing (collect_list or comparable).
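For illustration only (df stands for the Dataset in question, some_column is a placeholder):

```scala
import org.apache.spark.sql.functions.collect_list

// A global aggregate (no GROUP BY) funnels the final result into a single task;
// with a non-reducing function like collect_list, that one task also has to
// hold every collected value, which is when it becomes a real problem.
val allValues = df.agg(collect_list("some_column"))
```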
While there is no evidence that it is the problem here, in the general case you should also consider the possibility that the data contains only a single partition all the way back to the source. This is usually the case when input is fetched using the JDBC source, but third-party formats can exhibit the same behavior.
To identify the source of the problem you should either check the execution plan for the input Dataset (explain(true)) or check the SQL tab of the Spark Web UI.
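A minimal sketch of that first check (df again stands for the input Dataset):

```scala
// Print the parsed, analyzed, optimized and physical plans; look for nodes
// such as Exchange, Coalesce or Window right before the write.
df.explain(true)

// Directly check how many partitions the Dataset resolves to.
println(df.rdd.getNumPartitions)
```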