I have a Kafka topic with N partitions. The record key is a cookie. Records are distributed across partitions with the formula hash(key) % N.
I want to process records from partitions in parallel. Let's say that each partition has M workers assigned to do the processing. An additional requirement is that the same cookie is always processed by a single worker thread (to keep the order of events).
I don't know the hash function used for partitioning records. Computing my_hash(key) % M may not work properly if I use my_hash == hash and gcd(N, M) > 1 (particularly when N = M).
Since I know partitionId for all records, my initial idea was to compute my_hash(key + "." + partitionId) % M but I am wondering whether this kind of bucketing is good enough. There is a chance that hash(key) == h1(key + "." + (h2(key) % N)) == my_hash(key + "." + partitionId).
I think I should generate a unique hash function. Do you know such generators?
EDIT
Example:
Partition1: (cookie1, ...), (cookie1, ...), (cookie3, ...)
Partition2: (cookie2, ...), (cookie2, ...), (cookie4, ...)
I want to have 2 threads (N = M) processing each partition. I don't know the hash function, so it can happen that I choose my_hash == hash.
Then I will get:
Partition1_Subpartition1: (cookie1, ...), (cookie1, ...), (cookie3, ...)
Partition1_Subpartition2: <always_empty>
Partition2_Subpartition1: <always_empty>
Partition2_Subpartition2: (cookie2, ...), (cookie2, ...), (cookie4, ...)
Instead of a better possible split, e.g.:
Partition1_Subpartition1: (cookie3, ...)
Partition1_Subpartition2: (cookie1, ...), (cookie1, ...)
Partition2_Subpartition1: (cookie4, ...)
Partition2_Subpartition2: (cookie2, ...), (cookie2, ...)
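To make the kind of bucketing I have in mind concrete, here is a sketch (Scala, MurmurHash3 with an explicit seed; the seed value is arbitrary, and the point is just to use a hash that is unlikely to coincide with the broker-side partitioner):
import scala.util.hashing.MurmurHash3

// Assign a cookie to one of M worker threads within a partition.
// Seeding the hash keeps my_hash independent of whatever hash the
// broker-side partitioner happens to use. The seed value is arbitrary.
def workerFor(cookie: String, workersPerPartition: Int, seed: Int = 42): Int = {
  val h = MurmurHash3.stringHash(cookie, seed)
  Math.floorMod(h, workersPerPartition) // floorMod avoids negative buckets
}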
A Kafka consumer needs to run in its own separate thread. It's not possible (or advisable) to share a thread among multiple consumers. So, if you have M threads, that implies you have M consumers. Now, let's come to your requirement:
I want to read the topic in a parallel way with M threads so that all
records with the same cookie are read by a single thread.
This statement itself looks a bit foggy to me, because the default hash function ensures that the same cookie will always land in the same partition, so your requirement is going to be satisfied anyway.
I want to have 2 threads (N = M) processing each partition.
Do you mean that you want each partition to be processed by two consumer threads? That's not possible, unless they are in different consumer groups, which I think isn't what you want.
Now, are you trying to redirect a particular key (cookie) to different partitions based on some function (maybe a timestamp, or whatever), and if you know that it can go to any of the partitions in the set (p1, p2, ... pn), you want a single consumer to consume all these n partitions? Then what would you gain compared to the situation where all occurrences of the same cookie came to the same partition? In the end it's the same Kafka consumer thread consuming it. And, along the same line, if your Kafka consumer thread delegates the processing job to a threadpool (which you might be talking about), then it also doesn't matter whether you consume the same key from the same partition or from a set of different partitions: the threadpool size will determine how much parallelism you achieve.
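To make the threadpool idea concrete, here is a sketch (Scala; the class and method names KeyedDispatcher, dispatch and process are made up for the example) of a single consumer thread handing records off to M single-threaded executors chosen by the key's hash, so that all records with the same cookie are processed by the same worker, in order:
import java.util.concurrent.{ExecutorService, Executors}

import org.apache.kafka.clients.consumer.ConsumerRecord

class KeyedDispatcher[K, V](workers: Int)(process: ConsumerRecord[K, V] => Unit) {
  // One single-threaded executor per worker slot preserves per-key ordering.
  private val pools: Array[ExecutorService] =
    Array.fill(workers)(Executors.newSingleThreadExecutor())

  def dispatch(record: ConsumerRecord[K, V]): Unit = {
    val bucket = Math.floorMod(record.key().hashCode, workers)
    pools(bucket).execute(() => process(record))
  }

  def shutdown(): Unit = pools.foreach(_.shutdown())
}
The part this sketch glosses over is offset management: offsets should only be committed once the handed-off records have actually been processed.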
In my Kafka Streams application, I have a task that sets up a punctuator scheduled on wall-clock time. The punctuator iterates over the entries of a store and does something with them, like this:
var store = context().getStateStore("MyStore");
var iter = store.all();
while (iter.hasNext()) {
    var entry = iter.next();
    // ... do something with the entry
}
iter.close(); // store iterators should be closed when done
// Print a summary (now):  N entries processed
// Print a summary (wish): N entries processed in partition P
Since I'm working with a single store here (which might be partitioned), I assume that every single execution of the punctuator is bound to a single partition of that store.
Is it possible to find out which partition the punctuator operates on? The Javadoc for ProcessorContext.partition() states that this method returns -1 within punctuators.
I've read Kafka Streams: Punctuate vs Process and the answers there. I can understand that a task is, in general, not tied to a particular partition. But an iterator should be tied IMO.
How can I find out the partition?
Or is my assumption wrong that a particular instance of a store iterator is tied to a partition?
What I need it for: I'd like to include the partition number in some log messages. For now, I have several nearly identical log messages stating that the punctuator does this and that. In order to make those messages "unique" I'd like to include the partition number into them.
Just to post here the answer that was provided in https://issues.apache.org/jira/browse/KAFKA-12328:
I just used context.taskId(). It contains the partition number at the end of the value, after the underscore. This was sufficient for me.
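For illustration, a small sketch of that approach (Scala; store and context refer to the objects from the question's snippet, and newer Kafka Streams versions also expose the partition directly on TaskId, so check your version first):
// Inside the punctuator body:
var processed = 0
val iter = store.all()
while (iter.hasNext()) {
  iter.next() // ... do something with the entry
  processed += 1
}
iter.close()

// Derive the partition from the task id string, e.g. "0_2" -> 2.
val taskId = context.taskId().toString
val partition = taskId.substring(taskId.lastIndexOf('_') + 1).toInt
println(s"$processed entries processed in partition $partition")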
I am trying to read a Cassandra table in parallel using Spark, but I am not able to achieve parallelism: only one read is happening at any given time. What approach should I follow to achieve this?
I'd recommend you go with the approach below (source: Russell Spitzer's blog).
Manually dividing our partitions using a union of partial scans:
Pushing the task to the end user is also a possibility (and the current workaround). Most end users already understand why they have long partitions and know in general the domain their column values fall in. This makes it possible for them to manually divide up a request so that it chops up large partitions.
For example, assume the user knows that clustering column c spans from 1 to 1000000; they could write code like:
val minRange = 0
val maxRange = 1000000
val numSplits = 10
val subSize = (maxRange - minRange) / numSplits
sc.union(
  (minRange to maxRange by subSize)
    .map(start =>
      sc.cassandraTable("ks", "tab")
        // note the s interpolator: without it the literal text "$start" is sent to Cassandra
        .where(s"c > $start and c < ${start + subSize}"))
)
Each RDD would contain a unique set of tasks drawing only portions of full partitions. The union operation joins all those disparate tasks into a single RDD. The maximum number of rows any single Spark Partition would draw from a single Cassandra partition would be limited to maxRange/ numSplits. This approach, while requiring user intervention, would preserve locality and would still minimize the jumps between disk sectors.
Also see the connector's read tuning parameters.
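Besides manual splitting, increasing the number of Spark partitions per scan by lowering the connector's input split size also helps. A minimal sketch, assuming the Spark Cassandra Connector's spark.cassandra.input.split.size_in_mb property (verify the exact name and default for your connector version; the host is a placeholder):
import org.apache.spark.{SparkConf, SparkContext}

// Smaller input splits -> more Spark partitions -> more parallel reads.
val conf = new SparkConf()
  .setAppName("cassandra-parallel-read")
  .set("spark.cassandra.connection.host", "127.0.0.1") // placeholder host
  .set("spark.cassandra.input.split.size_in_mb", "64")

val sc = new SparkContext(conf)
// sc.cassandraTable("ks", "tab") will now be split into more, smaller partitions.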
I have the following:
KTable<Integer, A> tableA = builder.table("A");
KStream<Integer, B> streamB = builder.stream("B");
Messages in streamB need to be enriched with data from tableA.
Example data:
Topic A: (1, {name=john})
Topic B: (1, {type=create,...}), (1, {type=update,...}), (1, {type=update...})
In a perfect world, I would like to do
streamB.join(tableA, (b, a) -> { b.name = a.name; return b; })
.selectKey((k,b) -> b.name)
.to("C");
Unfortunately this does not work for me because my data is such that every time a message is written to topic A, a corresponding message is also written to topic B (the source is a single DB transaction). Now, after this initial 'creation' transaction, topic B will keep receiving more messages. Sometimes several events per second will show up on topic B, but it is also possible to have consecutive events hours apart for a given key.
The reason the simple solution does not work is that the original 'creation' transaction causes a race condition: Topic A and B get their message almost simultaneously and if the B message reaches the 'join' part of the topology first (say a few ms before the A message gets there) the tableA will not yet contain a corresponding entry. At this point the event is lost. I can see this happening on topic C: some events show up, some don't (if I use a leftJoin, all events show up but some have null key which is equivalent to being lost). This is only a problem for the initial 'creation' transaction. After that every time an event arrives on topic B, the corresponding entry exists in tableA.
So my question is: how do you fix this?
My current solution is ugly. What I did was create a 'collection of B' and read topic B using:
B.groupByKey()
 .aggregate(() -> new CollectionOfB(), (id, b, agg) -> agg.add(b))
 .join(tableA, ...);
Now we have a KTable-KTable join, which is not susceptible to this race condition. The reason I consider this 'ugly' is because after each join, I have to send a special message back to topic B that essentially says "remove the event(s) that I just processed from the collection". If this special message is not sent to topic B, the collection will keep growing and every event in the collection will be reported on every join.
Currently I'm investigating whether a window join would work (read both A and B into KStreams and use a windowed join). I'm not sure that this will work either because there is no upper bound on the size of the window. I want to say, "window starts 1 second 'before' and ends infinity seconds 'after'". Even if I can somehow make this work, I am a bit concerned with the space requirement of having an unbounded window.
Any suggestion would be greatly appreciated.
Not sure what version you are using, but the latest release, Kafka 2.1, improves the stream-table join. Even before 2.1, the following holds:
the stream-table join is based on event-time
Kafka Streams processes messages based on event-time, but within each input in offset order (for two input streams, the stream with the smaller record timestamps is processed first)
if you want to ensure that the table is updated first, the table update record should have a smaller timestamp than the stream record
Since 2.1:
to allow for some delay, you can configure max.task.idle.ms to delay processing for the case that only one input topic has data available (see the sketch below)
The event-time processing order is implemented as best-effort in 2.0 and earlier versions, which can lead to the race condition you describe. In 2.1, the processing order is guaranteed and might only be violated if the max.task.idle.ms timeout is hit.
For details, see https://cwiki.apache.org/confluence/display/KAFKA/KIP-353%3A+Improve+Kafka+Streams+Timestamp+Synchronization
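For reference, a minimal sketch of setting this config per KIP-353 (Scala; the application id, bootstrap servers, and the 500 ms value are placeholders for the example):
import java.util.Properties
import org.apache.kafka.streams.StreamsConfig

val props = new Properties()
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "stream-table-join-app")
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")
// Wait up to 500 ms for data on the other input before processing one-sided (2.1+).
props.put(StreamsConfig.MAX_TASK_IDLE_MS_CONFIG, "500")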
My question is regarding the StatefulNetworkWordCount example:
https://github.com/apache/spark/blob/master/examples/src/main/scala/org/apache/spark/examples/streaming/StatefulNetworkWordCount.scala
Q1) Is the stateDstream RDD maintained by the driver or by the worker nodes, or does each worker node have its own local copy of the complete state RDD?
Q2) Why do we need a HashPartitioner in the following line :
val stateDstream = wordDstream.updateStateByKey[Int](newUpdateFunc,
  new HashPartitioner(ssc.sparkContext.defaultParallelism), true, initialRDD)
What is happening behind the scenes here ?
To answer both of your questions:
1) The RDDs produced by a DStream are distributed across the workers. As with non-streaming Spark, this means that the records of each RDD produced by the DStream are spread out across the cluster (which is why partitioning matters here).
2) Partitioning is important in this case because it determines how the records of each RDD iteration are split up. Especially with a transformation like updateStateByKey(), the keys of the RDDs tend to stay the same across batch intervals. So if the keys from each interval's RDD land on the same partitions, this function can work more efficiently and update the state for a key within a partition.
As an example, let us look at the word count program you linked, and consider the RDDs at two one-second intervals (rdd1 at t=1 and rdd2 at t=2). Say rdd1 is generated for the text "hello world" and rdd2 for the text "hello I'm world". Without partitioning, the records of each RDD can be sent to various partitions on various workers (the "hello" at t=1 and the "hello" at t=2 could be sent to separate locations). This implies that an update to the count state would need to reshuffle the records on each iteration to obtain the updated count. With a partitioner defined (and remembered, as indicated by one of the parameters!), the key "hello" (and likewise "world") lands on the same partition in every interval, thereby avoiding a shuffle and making the update more efficient.
It is important to also note here that because keys can change, there is a parameter to toggle whether or not to remember the partitioner.
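To make the remembered-partitioner point concrete, here is a small sketch (reusing wordDstream, newUpdateFunc, initialRDD and ssc from the linked example; whether a partitioner shows up on the state RDDs is exactly what that boolean flag controls):
import org.apache.spark.HashPartitioner

val partitioner = new HashPartitioner(ssc.sparkContext.defaultParallelism)
val stateDstream = wordDstream.updateStateByKey[Int](
  newUpdateFunc, partitioner, true /* rememberPartitioner */, initialRDD)

stateDstream.foreachRDD { rdd =>
  // With the partitioner remembered, rdd.partitioner is defined for every batch,
  // so the next update can reuse the partitioning instead of reshuffling.
  println(s"state partitioner: ${rdd.partitioner}")
}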
I have a text file consisting of a large number of random floating-point values separated by spaces.
I am loading this file into an RDD in Scala.
How does this RDD get partitioned?
Also, is there any method to generate custom partitions such that all partitions have an equal number of elements, along with an index for each partition?
val dRDD = sc.textFile("hdfs://master:54310/Data/input*")
val keyval = dRDD.map(x => process(x.trim().split(' ').map(_.toDouble), query_norm, m, r))
Here I am loading multiple text files from HDFS and process is a function I am calling.
Can I have a solution with mapPartitionsWithIndex, along with how I can access that index inside the process function? Map shuffles the partitions.
How does an RDD get partitioned?
By default a partition is created for each HDFS block, which is 64 MB by default in older Hadoop versions (128 MB in newer ones). Read more here.
How to balance my data across partitions?
First, take a look at the three ways one can repartition the data:
1) Pass a second parameter, the desired minimum number of partitions
for your RDD, into textFile(), but be careful:
In [14]: lines = sc.textFile("data")
In [15]: lines.getNumPartitions()
Out[15]: 1000
In [16]: lines = sc.textFile("data", 500)
In [17]: lines.getNumPartitions()
Out[17]: 1434
In [18]: lines = sc.textFile("data", 5000)
In [19]: lines.getNumPartitions()
Out[19]: 5926
As you can see, [16] doesn't do what one would expect, since the number of partitions the RDD already has is greater than the minimum number of partitions we requested.
2) Use repartition(), like this:
In [22]: lines = lines.repartition(10)
In [23]: lines.getNumPartitions()
Out[23]: 10
Warning: This will invoke a shuffle and should be used when you want to increase the number of partitions your RDD has.
From the docs:
The shuffle is Spark’s mechanism for re-distributing data so that it’s grouped differently across partitions. This typically involves copying data across executors and machines, making the shuffle a complex and costly operation.
3) Use coalesce(), like this:
In [25]: lines = lines.coalesce(2)
In [26]: lines.getNumPartitions()
Out[26]: 2
Here, Spark knows that you will shrink the RDD and takes advantage of it. Read more about repartition() vs coalesce().
But will all this guarantee that your data will be perfectly balanced across your partitions? Not really, as I experienced in How to balance my data across the partitions?
The loaded RDD is partitioned by the default partitioner: the hash code. To specify a custom partitioner, you can use rdd.partitionBy(), provided with your own partitioner.
I don't think it's OK to use coalesce() here because, per the API docs, coalesce() can only be used when we reduce the number of partitions, and we can't specify a custom partitioner with coalesce().
You can generate custom partitions using the coalesce function:
coalesce(numPartitions: Int, shuffle: Boolean = false): RDD[T]
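To address the mapPartitionsWithIndex part of the question, here is a hedged sketch (reusing process, query_norm, m and r from the question's snippet, which are assumed to be defined there) showing how to rebalance the data and access the partition index inside the closure:
val dRDD = sc.textFile("hdfs://master:54310/Data/input*")

// Shuffle into a fixed number of partitions so they hold roughly equal element counts.
val balanced = dRDD.repartition(sc.defaultParallelism)

val keyval = balanced.mapPartitionsWithIndex { (index, lines) =>
  // index is the partition number; here it is attached to each result,
  // but it could equally be passed into process() if that function is extended.
  lines.map { x =>
    val values = x.trim().split(' ').map(_.toDouble)
    (index, process(values, query_norm, m, r))
  }
}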