Kafka Operations on Windowed KTables - apache-kafka

I would like to do some further operations on a windowed KTable. To give some background, I have a topic with data in the form {clientId, txTimestamp, txAmount}. From this topic I have created a stream, partitioned by clientId, with the underlying record timestamp set to the txTimestamp event field. Starting from this stream, I want to aggregate the number of transactions per clientId in tumbling 1-hour windows. This is done with something similar to the following:
CREATE TABLE transactions_per_client WITH (kafka_topic='transactions_per_client_topic') AS SELECT clientId, COUNT(*) AS transactions_per_client, WINDOWSTART AS window_start, WINDOWEND AS window_end FROM transactions_stream WINDOW TUMBLING (SIZE 1 HOURS) GROUP BY clientId;
The aggregations work as expected and yield values similar to:
ClientId Transactions_per_client WindowStart WindowEnd
1 12 1 2
2 8 1 2
1 24 2 3
1 19 3 4
What I want to do now is further process this table to add a column that represents the difference in number of transactions per client between 2 adjacent windows for the same client. For the previous table, that would be something like this:
ClientId Transactions_per_client WindowStart WindowEnd Deviation
1 12 1 2 0
2 8 1 2 0
1 24 2 3 12
1 19 3 4 -5
What would be the best way to achieve this (either using Kafka Streams or ksqlDB)? I tried to use a user-defined aggregation function (UDAF) to create this column, but it can only be applied to a KStream, not to a KTable.

Just for future reference, the official answer at this time (April 2022) is that it cannot be done in Kafka Streams through the DSL, as "Windowed-TABLE are kind of a “dead end” in ksqlDB atm, and also for Kafka Streams, you cannot really use the DSL to further process the data" (answer on the Confluent forum here: https://forum.confluent.io/t/aggregations-on-windowed-ktables/4340). The suggestion there is to use the Processor API, which is indeed fairly straightforward to implement. At a high level, in pseudocode, it would look something like this:
topology.addSource(
    NAME_OF_SOURCE_IN_THE_NEW_TOPOLOGY,
    timeWindowedDeserializer,            // TimeWindowedDeserializer<Long> for the windowed clientId key
    new LongDeserializer(),              // value deserializer for the per-window count
    SOURCE_TOPIC);                       // the topic backing the windowed KTable
topology.addProcessor(
    NAME_OF_PROCESSOR_IN_THE_NEW_TOPOLOGY,
    () -> new Aggregator(storeName),
    NAME_OF_SOURCE_IN_THE_NEW_TOPOLOGY);
StoreBuilder<KeyValueStore<Windowed<Long>, Long>> storeBuilder =
    Stores.keyValueStoreBuilder(
        Stores.persistentKeyValueStore(storeName),
        timeWindowedSerde,               // serde for the windowed key
        Serdes.Long());                  // serde for the stored count
topology.addStateStore(storeBuilder, NAME_OF_PROCESSOR_IN_THE_NEW_TOPOLOGY);
topology.addSink(
    NAME_OF_SINK_IN_THE_NEW_TOPOLOGY,
    sinkTopic,
    timeWindowedSerializer,              // TimeWindowedSerializer<Long> for the key
    transactionPerNumericKeySerializer,  // serializer for the POJO that carries the deviation field
    NAME_OF_PROCESSOR_IN_THE_NEW_TOPOLOGY);
The Aggregator in the previous snippet is an org.apache.kafka.streams.processor.api.Processor implementation that keeps track of the values it has seen and can retrieve the previously seen value for a given key.
Again, at a high level, its process method would be something similar to this:
// previousWindow is the windowed key for the same client, one window size earlier
// (how that key is built is not shown in the original answer)
Long previousTransactionAggregate = kvStore.get(previousWindow);
long deviation;
if (previousTransactionAggregate != null) {
    deviation = kafkaRecord.value() - previousTransactionAggregate;
} else {
    deviation = 0L;
}
// remember the current window's count so the next window can compute its deviation
kvStore.put(kafkaRecord.key(), kafkaRecord.value());
Record<Windowed<Long>, TransactionPerNumericKey> newRecord =
    new Record<>(
        kafkaRecord.key(),
        new TransactionPerNumericKey(
            kafkaRecord.key().key(), kafkaRecord.value(), deviation),
        kafkaRecord.timestamp());
context.forward(newRecord);
TransactionPerNumericKey in the previous snippet is the structure for the enhanced windowed aggregation (it contains the deviation value).
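For illustration, a minimal sketch of what such a structure could look like (the actual class is not shown in the forum answer, so the field names and types here are assumptions; any serializer-friendly POJO or Avro/JSON schema would do):

public class TransactionPerNumericKey {
    private long clientId;
    private long transactionsPerClient;
    private long deviation;

    public TransactionPerNumericKey() {}   // no-arg constructor for serializers

    public TransactionPerNumericKey(long clientId, long transactionsPerClient, long deviation) {
        this.clientId = clientId;
        this.transactionsPerClient = transactionsPerClient;
        this.deviation = deviation;
    }

    public long getClientId() { return clientId; }
    public long getTransactionsPerClient() { return transactionsPerClient; }
    public long getDeviation() { return deviation; }
}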

Related

Spring batch, compare current processed record with the rest of chunk records

I need to plan a solution for this case. I have a table like the one below, and I have to reduce the number of records that share Product+Service+Origin to the minimum possible set of date ranges:
ID PRODUCT SERVICE ORIGIN STARTDATE ENDDATE
1 100001 500 1 10/01/2023 15/01/2023
2 100001 500 1 12/01/2023 18/01/2023
I have to read all the records and, while processing, check the date intervals in order to unify them:
RecordA (10/01/2023 - 15/01/2023) and RecordB (12/01/2023 - 18/01/2023) overlap, so the result is to update the record with ID 1 to cover the widest range spanned by the two records, 10/01/2023 - 18/01/2023 (extending one of the ranges to the "right" or to the "left" when necessary).
Another case:
ID PRODUCT SERVICE ORIGIN STARTDATE ENDDATE
1 100001 500 1 10/01/2023 15/01/2023
2 100001 500 1 12/01/2023 14/01/2023
In this case, the date range of record 1 is the widest and fully contains record 2, so we should delete record 2.
Duplicate date ranges are, of course, deleted as well.
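For reference, the core of this unification is the classic merge-overlapping-intervals pass. A minimal Java sketch, under the assumption that the records of one Product+Service+Origin group are already sorted by start date (the DateRange type and names are illustrative, not the real entities):

import java.time.LocalDate;
import java.util.ArrayList;
import java.util.List;

class DateRange {                        // stands in for the real record
    LocalDate start;
    LocalDate end;
    DateRange(LocalDate start, LocalDate end) { this.start = start; this.end = end; }
}

class RangeMerger {
    // Collapse overlapping ranges of one Product+Service+Origin group,
    // assuming the input is sorted by start date.
    static List<DateRange> merge(List<DateRange> sortedByStart) {
        List<DateRange> merged = new ArrayList<>();
        for (DateRange current : sortedByStart) {
            if (merged.isEmpty()
                    || current.start.isAfter(merged.get(merged.size() - 1).end)) {
                merged.add(current);             // no overlap: keep as a new range
            } else {
                DateRange last = merged.get(merged.size() - 1);
                if (current.end.isAfter(last.end)) {
                    last.end = current.end;      // overlap: extend the kept range to the right
                }                                // otherwise current is fully contained: drop it
            }
        }
        return merged;
    }
}

Applied to the first example this turns (10/01/2023 - 15/01/2023) and (12/01/2023 - 18/01/2023) into a single 10/01/2023 - 18/01/2023 range, and in the second example the contained record is simply dropped.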
Currently we have implemented a chunk step to make this possible:
Reader: reads the data ordered by the common fields (Product-Service-Origin).
Processor: saves the records in a HashMap<String, List> in the job context while the "Product+Service+Origin" combination stays the same. When a new combination is detected, it takes the current list, performs all the comparisons between its records, marks auxiliary properties on the records as "delete" or "update", and sends the full list to the writer (after starting a new list in the map for the new combination of common fields).
Writer: groups the records to delete and update and calls child writers to execute the statements.
Well, this was the software several years ago, but soon we will have to handle a massive number of records for each case, and the "solution" of keeping a map in the JobContext has to change.
I was wondering whether Spring Batch has features for processing this type of situation that I could use.
Alternatively, I am thinking about changing the step where we insert all these records and doing the date-range checks one by one in the processor, but I think the commit interval would then have to be 1 so that each record can be checked against all previously processed records (the table is initially empty when we execute this job). Any other commit interval would check against the database but not against the previously processed items, leading to incorrect processing.
All these cases can have 0-n records sharing Product+Service+Origin.
Sorry for my English, it is difficult to explain this in another language.

Handling and assigning the date into respective categories

I have an input file like the one below and I am trying to convert the multiple records per customer into their respective quarters, with one output record per customer. Once the quarter (like Q2 2019) is derived from the data, the latest one should go to TimeFrame4 and the older ones to TimeFrame3, 2, 1 in that order.
So far I was able to derive the quarters using a Transformer, but after that I was stuck on how to identify and assign them to the respective buckets (TimeFrame1 TimeFrame2 TimeFrame3 TimeFrame4). Any ideas on how to implement this effectively (the input has 50M records) in DataStage (11.3 parallel job)?
Input:
CustID Contacted_Time
1 2018-12-25
1 2019-06-15
1 2019-01-03
2 2019-02-24
2 2019-03-05
I need desired Output like below:
CustID TimeFrame1 TimeFrame2 TimeFrame3 TimeFrame4
1 null Q4 2018 Q1 2019 Q2 2019
2 null null null Q1 2019
You could sort the data by CustID and Contacted_Time descending, filter the data down to four contacts per customer (since I assume there could be more than 4), and, once you have the quarters, assign a helper column with the bucket number (also in a Transformer).
Finally, a Pivot stage can do the verticalization, or you could do this in a Transformer as well, i.e. with a loop.
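Outside of DataStage, the bucket-assignment logic itself can be sketched in plain Java to make the intent concrete (class and method names are illustrative only): derive a quarter label per contact, keep the four most recent distinct quarters per customer, and put the latest one into TimeFrame4.

import java.time.LocalDate;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

class QuarterBuckets {
    static String quarterLabel(LocalDate d) {
        return "Q" + ((d.getMonthValue() - 1) / 3 + 1) + " " + d.getYear();
    }

    // Returns {TimeFrame1, TimeFrame2, TimeFrame3, TimeFrame4}; nulls where a customer
    // has fewer than four distinct quarters.
    static String[] assign(List<LocalDate> contactsOfOneCustomer) {
        List<String> quarters = contactsOfOneCustomer.stream()
                .sorted(Comparator.reverseOrder())      // newest contact first
                .map(QuarterBuckets::quarterLabel)
                .distinct()
                .limit(4)
                .collect(Collectors.toList());
        String[] frames = new String[4];
        for (int i = 0; i < quarters.size(); i++) {
            frames[3 - i] = quarters.get(i);            // latest quarter goes to TimeFrame4
        }
        return frames;
    }
}

For CustID 1 in the sample this yields {null, "Q4 2018", "Q1 2019", "Q2 2019"}, matching the desired output.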

Spark window functions: how to implement complex logic with good performance and without looping

I have a data set that lends itself to window functions: 3M+ rows that, once ranked, can be partitioned into groups of ~20 or fewer rows. Here is a simplified example:
id date1 date2 type rank
171 20090601 20090601 attempt 1
171 20090701 20100331 trial_fail 2
171 20090901 20091101 attempt 3
171 20091101 20100201 attempt 4
171 20091201 20100401 attempt 5
171 20090601 20090601 fail 6
188 20100701 20100715 trial_fail 1
188 20100716 20100730 trial_success 2
188 20100731 20100814 trial_fail 3
188 20100901 20100901 attempt 4
188 20101001 20101001 success 5
The data is ranked by id and date1, and the window created with:
Window.partitionBy("id").orderBy("rank")
In this example the data has already been ranked by (id, date1). I could also work on the unranked data and rank it within Spark.
I need to implement some logic on these rows, for example, within a window:
1) Identify all rows that end during a failed trial (i.e. a row's date2 is between date1 and date2 of any previous row within the same window of type "trial_fail").
2) Identify all trials after a failed trial (i.e. any row with type "trial_fail" or "trial_success" after a row within the same window of type "trial_fail").
3) Identify all attempts before a successful attempt (i.e. any row with type "attempt" with date1 earlier than date1 of another later row of type "success").
The exact logic of these conditions is not important to my question (and there will be other, different conditions); what is important is that the logic depends on the values of many rows in the window at once. This can't be handled by simple Spark SQL functions like first, last, lag, lead, etc., and it isn't as simple as the typical example of finding the largest/smallest 1 or n rows in the window.
What is also important is that the partitions don't depend on one another, so this seems like a great candidate for Spark to process in parallel: 3 million rows with 150,000 partitions of ~20 rows each. In fact, I wonder if this is too many partitions.
I can implement this with a loop, something like (in pseudocode):
for i in 1..20:
    for j in 1..20:
        // compare window[j]'s type and dates to window[i]'s, etc.
        // add a Y/N flag to the DF to identify target rows
This would require 400+ iterations (the choice of 20 for the max i and j is an educated guess based on the data set and could actually be larger), which seems needlessly brute force.
However, I am at a loss for a better way to implement it. I think this would essentially collect() in the driver, which I suppose might be OK if it is not much data. I have also thought of trying to implement the logic as sub-queries, or of creating a series of sub-DataFrames, each with a subset or reduction of the data.
If anyone is aware of any APIs or techniques that I am missing, any info would be appreciated.
Edit: This is somewhat related:
Spark SQL window function with complex condition
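For what it's worth, one way to make a condition like (1) concrete without collecting to the driver is to gather each row's preceding rows into an array and test them in place. A hedged sketch (assuming Spark 2.4+ for the exists() higher-order function, df being the ranked DataFrame, and the column names from the sample above):

import static org.apache.spark.sql.functions.*;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.expressions.Window;
import org.apache.spark.sql.expressions.WindowSpec;

// All previous rows of the same id, ordered by rank.
WindowSpec w = Window.partitionBy("id").orderBy("rank")
        .rowsBetween(Window.unboundedPreceding(), -1);

// Flag rows whose date2 falls inside the [date1, date2] span of any earlier trial_fail row.
Dataset<Row> flagged = df
        .withColumn("prev_rows",
                collect_list(struct(col("type"), col("date1"), col("date2"))).over(w))
        .withColumn("ends_during_trial_fail",
                expr("exists(prev_rows, r -> r.type = 'trial_fail'"
                        + " AND date2 >= r.date1 AND date2 <= r.date2)"))
        .drop("prev_rows");

The other conditions can be expressed the same way with different predicates inside exists(), which keeps the pairwise comparison on the executors instead of looping in the driver.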

SparkSQL PostgresQL Dataframe partitions

I have a very simple setup with SparkSQL connecting to a Postgres DB, and I'm trying to get a DataFrame from a table with a given number X of partitions (let's say 2). The code would be the following:
Map<String, String> options = new HashMap<String, String>();
options.put("url", DB_URL);
options.put("driver", POSTGRES_DRIVER);
options.put("dbtable", "select ID, OTHER from TABLE limit 1000");
options.put("partitionColumn", "ID");
options.put("lowerBound", "100");
options.put("upperBound", "500");
options.put("numPartitions","2");
DataFrame housingDataFrame = sqlContext.read().format("jdbc").options(options).load();
For some reason, one partition of the DataFrame contains almost all the rows.
From what I understand, lowerBound/upperBound are the parameters used to fine-tune this. SparkSQL's documentation (Spark 1.4.0 - spark-sql_2.11) says they are used to define the stride, not to filter/range the partition column. But that raises several questions:
Is the stride the frequency (the number of elements returned per query) with which Spark will query the DB for each executor (partition)?
If not, what is the purpose of these parameters, what do they depend on, and how can I balance my DataFrame partitions in a stable way (not asking that all partitions contain the same number of elements, just that there is an equilibrium - for example, with 2 partitions and 100 elements, 55/45, 60/40 or even 65/35 would do)?
I can't seem to find a clear answer to these questions anywhere and was wondering if some of you could clear these points up for me, because right now it is affecting my cluster performance when processing X million rows, with all the heavy lifting going to one single executor.
Cheers and thanks for your time.
Essentially, the lower bound, the upper bound and the number of partitions are used to calculate the increment or stride for each parallel task.
Let's say the table has a partition column "year", and has data from 2006 to 2016.
If you define the number of partitions as 10, with lower bound 2006 and upper bound 2016, each task will fetch the data for its own year - the ideal case.
Even if you incorrectly specify the lower and / or upper bound, e.g. set lower = 0 and upper = 2016, there will be a skew in data transfer, but, you will not "lose" or fail to retrieve any data, because:
The first task will fetch data for year < 0.
The second task will fetch data for year between 0 and 2016/10.
The third task will fetch data for year between 2016/10 and 2*2016/10.
...
And the last task will have a where condition covering everything from 9*2016/10 upward.
Lower and upper bound are indeed used against the partitioning column; refer to this code (current version at the moment of writing):
https://github.com/apache/spark/blob/40ed2af587cedadc6e5249031857a922b3b234ca/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JDBCRelation.scala
The columnPartition function contains the code for the partitioning logic and the use of the lower/upper bound.
What lowerBound and upperBound do has been covered in the previous answers. A follow-up question is how to balance the data across partitions without looking at the min/max values, or when your data is heavily skewed.
If your database supports a "hash" function, it can do the trick.
partitionColumn = "hash(column_name)%num_partitions"
numPartitions = 10 // whatever you want
lowerBound = 0
upperBound = numPartitions
This will work as long as the modulus operation returns a uniform distribution over [0,numPartitions)
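To make that concrete in the same style as the question's code, here is a hedged sketch for Postgres (it reuses the question's DB_URL/POSTGRES_DRIVER constants and the Spark 1.x API; hashtext() and the bucket column are assumptions, the point is only to expose a uniform 0..numPartitions-1 value as the partition column):

int numPartitions = 10;
Map<String, String> options = new HashMap<String, String>();
options.put("url", DB_URL);
options.put("driver", POSTGRES_DRIVER);
// Compute a uniform bucket from the key inside a subquery; hashtext() is Postgres-specific.
options.put("dbtable", "(select ID, OTHER, abs(hashtext(ID::text)) % " + numPartitions
        + " as bucket from TABLE) as t");
options.put("partitionColumn", "bucket");
options.put("lowerBound", "0");
options.put("upperBound", String.valueOf(numPartitions));
options.put("numPartitions", String.valueOf(numPartitions));
DataFrame housingDataFrame = sqlContext.read().format("jdbc").options(options).load();

With these bounds the generated WHERE clauses split evenly on bucket, so each task should fetch roughly the same number of rows as long as the hash is uniform.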

pentaho distinct count over date

I am currently working with Pentaho and I have the following problem:
I want to get a "rolling" distinct count on a value, one that ignores the "group by" performed by Business Analytics. For instance:
Date Field
2013-01-01 A
2013-02-05 B
2013-02-06 A
2013-02-07 A
2013-03-02 C
2013-04-03 B
When I use a classical "distinct count" aggregator in my schema, sum it, and then add "month" to the columns, I get:
Month Count Sum
2013-01 1 1
2013-02 2 3
2013-03 1 4
2013-04 1 5
What I would like to get would be:
Month Sum
2013-01 1
2013-02 2
2013-03 3
2013-04 3
which is the distinct count of all Fields seen so far. Does anyone have any idea on this topic?
My database is PostgreSQL, and I'm looking for any solution under PDI, PSW, PBA or PME.
Thank you!
A naive approach in PDI is the following:
Sort the rows by the Field column
Add a sequence for changing values in the Field column
Map all sequence values > 1 to zero
These first 3 effectively flag the first time a value was seen (no matter the date).
Sort the rows by year/month
Sum the mapped sequence values by year+month
Get a Cumulative Sum of all the previous sums
These 3 steps aggregate the distinct values per month and then keep a cumulative sum. In PDI this might look something like the transformation I posted as a Gist here.
A more efficient solution is to parallelize the two sorts, then join at the latest point possible. I posted this one as it is easier to explain, but it shouldn't be too difficult to take this transformation and make it more parallel.
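Outside of PDI, the same logic can be sketched in plain Java to show the intended result (illustrative only, not PDI code): flag the first time each Field value is seen, sum those flags per month, then keep a running total.

import java.time.LocalDate;
import java.time.YearMonth;
import java.util.*;

class RollingDistinctCount {
    // rows: (date, field) pairs; returns month -> distinct count of all Fields seen so far
    static Map<YearMonth, Long> compute(List<Map.Entry<LocalDate, String>> rows) {
        rows.sort(Map.Entry.comparingByKey());            // order by date
        Set<String> seen = new HashSet<>();
        Map<YearMonth, Long> perMonth = new TreeMap<>();  // new distinct values per month
        for (Map.Entry<LocalDate, String> row : rows) {
            long isNew = seen.add(row.getValue()) ? 1L : 0L;
            perMonth.merge(YearMonth.from(row.getKey()), isNew, Long::sum);
        }
        Map<YearMonth, Long> cumulative = new LinkedHashMap<>();
        long running = 0L;
        for (Map.Entry<YearMonth, Long> e : perMonth.entrySet()) {
            running += e.getValue();
            cumulative.put(e.getKey(), running);          // distinct count so far
        }
        return cumulative;
    }
}

On the sample data this produces 2013-01 -> 1, 2013-02 -> 2, 2013-03 -> 3, 2013-04 -> 3, which matches the desired output.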