Kafka state store range query: key formation

I am trying to query a Kafka state store using a range query.
My key format is startTimeInMillis|endTimeInMillis|someUniqueIdInNumber, and I am using String to store the records.
I want to execute range queries like
store.range("fromSomeStartTimeInMillis","toCurrentTimeInMillis")
OR
store.range("fromSomeEndTimeInMillis","toCurrentTimeInMillis")
My expectation is that if I query by startTime, it should return only the records whose startTime lies between from and to; likewise for endTime.
e.g.
<1|2|9>, <2|1|8>, <3|2|7>, <100|2|4>, <100|5|11>
(say these are the stored keys)
Query: i) store.range("1|","3|") --startTime-- it should return <1|2|9>, <2|1|8>, <3|2|7>
ii) store.range("|1","2") --endTime-- it should return <1|2|9>, <2|1|8>, <3|2|7>, <100|2|4>
I have been trying queries like this, but the results are not as expected; I have also tried a custom Serde class to store the records, without any success.
Kindly suggest a string key format for this kind of range query (from and to), or even a custom key class with from and to fields.
Or is there any other way to handle this type of situation, where I want to divide work across nodes but there is no token service available to distribute the load? Kafka seems to be the only way to distribute the load, by partitioning the data.
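For reference, a minimal sketch of the usual workaround, assuming Kafka Streams interactive queries with the default String serde: zero-pad the timestamps to a fixed width so byte-wise lexicographic order matches numeric order, then range over the startTime prefix. The store name "my-store" and the helper names are illustrative assumptions; an endTime-based range would need a second store keyed by endTime first.

import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StoreQueryParameters;
import org.apache.kafka.streams.state.KeyValueIterator;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

public class StartTimeRangeQuery {

    // Pad to a fixed width so lexicographic order matches numeric order
    // (without padding, "100|..." sorts before "2|..." even though 100 > 2),
    // e.g. 0000000000001|0000000000002|000000009
    static String buildKey(long startMillis, long endMillis, long id) {
        return String.format("%013d|%013d|%09d", startMillis, endMillis, id);
    }

    static void queryByStartTime(KafkaStreams streams, long fromStart, long toStart) {
        ReadOnlyKeyValueStore<String, String> store = streams.store(
                StoreQueryParameters.fromNameAndType("my-store",
                        QueryableStoreTypes.keyValueStore()));

        // Lower bound: the smallest key carrying the fromStart prefix.
        String from = String.format("%013d|", fromStart);
        // Upper bound: '~' sorts after '|' and after all digits in UTF-8, so this
        // bound still captures every key whose startTime equals toStart.
        String to = String.format("%013d|~", toStart);

        try (KeyValueIterator<String, String> it = store.range(from, to)) {
            while (it.hasNext()) {
                KeyValue<String, String> kv = it.next();
                System.out.println(kv.key + " -> " + kv.value);
            }
        }
    }
}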

Related

How do I query PostgreSQL with IDs from a parquet file in a Data Factory pipeline

I have an Azure pipeline that moves data from one point to another in parquet files. I need to join in some data, by a unique ID, from a PostgreSQL database that is in an AWS tenancy. I am using a dataflow to create the unique ID I need from two separate columns using a concatenate. I am trying to create a where clause, e.g.
select * from tablename where unique_id in ('id1','id2','id3'...)
I can do a lookup query to the database, but I can't figure out how to create the list of IDs in a parameter that I can use in the select statement out of the dataflow output. I tried using a set variable and was going to put that into a for-each, but the set variable doesn't like the output of the dataflow (object instead of array): "The variable 'xxx' of type 'Array' cannot be initialized or updated with value of type 'Object'. The variable 'xxx' only supports values of types 'Array'." I've used a flatten to try to transform to an array, but I think the sink operation is putting it back into JSON?
What is a workable approach to getting the IDs into a string that I can put into a lookup query?
Some notes:
The parquet file has a small number of unique IDs compared to the total number of unique IDs in the database.
If this were an Azure PostgreSQL database I could just use a join in the dataflow, but the generic PostgreSQL driver isn't available in dataflows. I can't copy the entire database over to Azure just to do the join, and I need the dataflow in Azure for non-technical reasons.
Edit:
For clarity's sake, I am trying to replace local Python code that does the following:
import pandas as pd
# conn: an existing database connection

query = "select * from mytable where id_number in "
df = pd.read_parquet("input_file.parquet")
df['id_number'] = df.country_code + df.id
df_other_data = pd.read_sql(query + str(tuple(df.id_number)), conn)
I'd like to replace this locally executing code with ADF. In the ADF process, I have to replace the transformation of the IDs, which seems easy enough in a couple of different ways. Once I have the IDs in the proper format in a column in a dataset, I can't figure out how to query a database that isn't supported by Data Flow and restrict it to only the IDs I need, so I don't bring down the entire database.
Because ADF variables can only store simple types, we can define an Array type parameter in ADF instead and set a default value. ADF parameters support any type of element, including complex JSON structures.
For example:
Define a JSON array:
[{"name": "Steve","id": "001","tt_1": 0,"tt_2": 4,"tt3_": 1},{"name": "Tom","id": "002","tt_1": 10,"tt_2": 8,"tt3_": 1}]
Then define an Array type parameter and set this JSON array as its default value. That way we will not get the type error.

DynamoDB column with tilde and query using JPA

I have a table column with a tilde value like below:
vendorAndDate - column name
Chipotle~08-26-2020 - column value
I want to query by month ("vendorAndPurchaseDate like '%~08%2020'") and by year ending with 2020 ("vendorAndPurchaseDate like '%2020'"). I am using Spring Data JPA to query the values. I have not worked with columns containing tilde values before. Please point me in the right direction or to some examples.
You cannot.
If vendorAndPurchaseDate is your partition key, you need to pass the whole value.
If vendorAndPurchaseDate is your range key, you can only perform
=, >, <, >=, <=, between, and begins_with operations, along with a partition key.
Reference: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Query.html
DynamoDB does not support this type of wildcard query.
Let's consider a more DynamoDB way of handling this type of query. It sounds like you want to support 2 access patterns:
Get Item by month
Get Item by year
You don't describe your Primary Keys (Partition Key/Sort Key), so I'm going to make some assumptions to illustrate one way to address these access patterns.
Your attribute appears to be a composite key, consisting of <vendor>~<date>, where the date is expressed as MM-DD-YYYY. I would recommend storing your date fields in YYYY-MM-DD format, which would allow you to exploit the sortability of the date field. An example will make this much clearer: imagine a table with a partition key PK and a sort key SK.
I'm calling your vendorAndDate attribute SK, since I'm using it as a Sort Key in this example. This table structure allows me to implement your two access patterns by executing the following queries (in pseudocode, to remain language agnostic):
Access Pattern 1: Fetch all Chipotle records for August 2020
query from MyTable where PK = "Vendors" and SK between Chipotle~2020-08-00 and Chipotle~2020-08-31
Access Pattern 2: Fetch all Chipotle records for 2020
query from MyTable where PK = "Vendors" and SK between Chipotle~2020-01-01 and Chipotle~2020-12-31
Because dates stored in ISO8601 format (e.g. YYYY-MM-DD...) are lexicographically sortable, you can perform range queries in DynamoDB in this way.
Again, I've made some assumptions about your data and access patterns for the purpose of illustrating the technique of using lexicographically sortable timestamps to implement range queries.
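A sketch of Access Pattern 1 with the AWS SDK for Java v2; the table name MyTable and the PK/SK attribute names come from the example above and are assumptions about your real schema.

import java.util.Map;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.QueryRequest;
import software.amazon.awssdk.services.dynamodb.model.QueryResponse;

public class VendorQueries {
    public static void main(String[] args) {
        DynamoDbClient ddb = DynamoDbClient.create();

        // Access Pattern 1: fetch all Chipotle records for August 2020.
        QueryRequest request = QueryRequest.builder()
                .tableName("MyTable")
                .keyConditionExpression("PK = :pk AND SK BETWEEN :from AND :to")
                .expressionAttributeValues(Map.of(
                        ":pk", AttributeValue.builder().s("Vendors").build(),
                        ":from", AttributeValue.builder().s("Chipotle~2020-08-00").build(),
                        ":to", AttributeValue.builder().s("Chipotle~2020-08-31").build()))
                .build();

        QueryResponse response = ddb.query(request);
        // Widening the SK bounds to Chipotle~2020-01-01 .. Chipotle~2020-12-31 gives Access Pattern 2.
        response.items().forEach(item -> System.out.println(item.get("SK").s()));
    }
}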

Snowflake: clustering on a datetime key stored in a variant field does not work / does not do partition pruning

We are ingesting data into Snowflake via the Kafka connector.
To increase data read performance / scan fewer partitions, we decided to add a clustering key on a key (or combination of keys) stored in the RECORD_CONTENT variant field.
The data in the RECORD_CONTENT field looks like this:
{
"jsonSrc": {
"Integerfield": 1,
"SourceDateTime": "2020-06-30 05:33:08:345",
*REST_OF_THE_KEY_VALUE_PAIRS*
}
}
Now, the issue is that clustering on a datetime col like SourceDateTime does NOT work:
CLUSTER BY (to_date(RECORD_CONTENT:jsonSrc:loadDts::datetime))
...while clustering on a field like Integerfield DOES work:
CLUSTER BY (RECORD_CONTENT:jsonSrc:Integerfield::int )
Not working means: when using a filter on RECORD_CONTENT:jsonSrc:loadDts::datetime, it has no effect on the partitions scanned, while filtering on RECORD_CONTENT:jsonSrc:Integerfield::int does perform partition pruning.
What is wrong here? Is this a bug?
Note that:
There is enough data to do meaningful clustering on RECORD_CONTENT:jsonSrc:loadDts::datetime.
I validated that clustering on RECORD_CONTENT:jsonSrc:loadDts::datetime works by making a copy of the raw table with RECORD_CONTENT:jsonSrc:loadDts::datetime in a separate column loadDtsCol, and then adding a similar clustering key on that column: to_date(loadDtsCol).
For better pruning and less storage consumption, we recommend flattening your object and key data into separate relational columns if your semi-structured data includes:
Dates and timestamps, especially non-ISO 8601 dates and timestamps, as string values
Numbers within strings
Arrays
Non-native values such as dates and timestamps are stored as strings when loaded into a VARIANT column, so operations on these values could be slower and also consume more space than when stored in a relational column with the corresponding data type.
See this link: https://docs.snowflake.com/en/user-guide/semistructured-considerations.html#storing-semi-structured-data-in-a-variant-column-vs-flattening-the-nested-structure
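A hedged sketch of that flattening advice over JDBC; the table and column names (raw_events, events_flat, the loadDts path) are assumptions based on the question, not your actual schema.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class FlattenAndCluster {
    public static void main(String[] args) throws Exception {
        // Placeholders <account>, <user>, <password> must be filled in.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:snowflake://<account>.snowflakecomputing.com/", "<user>", "<password>");
             Statement stmt = conn.createStatement()) {
            // Materialize the datetime as a real TIMESTAMP_NTZ column and cluster on its date,
            // so filters on load_dts can prune micro-partitions.
            stmt.execute(
                "CREATE OR REPLACE TABLE events_flat " +
                "CLUSTER BY (to_date(load_dts)) AS " +
                "SELECT RECORD_CONTENT:jsonSrc:loadDts::timestamp_ntz AS load_dts, " +
                "       RECORD_CONTENT " +
                "FROM raw_events");
        }
    }
}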

Is it possible to Group By a field extracted from JSON input in a Siddhi Query?

I currently have a stream with one string attribute that contains a JSON event.
This stream receives different events, to which I want to apply JSON path expressions so I can use those attributes in filters and functions.
JsonPath extractors work like a charm in filters and selectors; unfortunately, I am not able to use them in the 'group by' part.
I am actually doing this in an embedded Siddhi app with the siddhi-execution-json extension added manually, but for the discussion, so everybody can easily check and test it, I will paste an example app that works on WSO2 Stream Processor.
The objective looks like the following App:
@App:name("Group_by_json_attribute")
define stream JsonStream(json string);
@sink(type='log')
define stream LogStream(myField string, count long);
@info(name='query1')
from JsonStream#window.time(10 sec)
select json:getString(json, '$.myField') as myField, count() as count
group by myField having count > 1
insert into LogStream;
and it can accept the following events:
{"myField": "my_value"}
However, this query will raise the error:
Cannot find attribute type as 'myField' does not exist in 'JsonStream'; define stream JsonStream(json string)
I have also tried to use the JSON extractor directly in the 'group by':
group by json:getString(json, '$.myField') as myField having count > 1
However the error now is:
mismatched input ':' expecting {',', ORDER, LIMIT, OFFSET, HAVING, INSERT, DELETE, UPDATE, RETURN, OUTPUT}
which seems to indicate that an extension is not expected here.
I am just wondering if it is possible to group by attributes not directly defined in the input stream. In this case it is a field extracted from a JSON object, but it could be any other function that generates another attribute.
I am also using versions from the Maven Central repository:
Siddhi: io.siddhi:siddhi-core:5.0.1
siddhi-execution-json: io.siddhi.extension.execution.json:siddhi-execution-json:2.0.1
(Edit) Clarification
The objective is to use attributes not directly defined in the stream in the group by.
The reason why is that I currently have an embedded app which defines the whole set of input streams coming from external sources formatted as JSON, and there is also a set of output streams to inform external components when a query matches.
This app allows users to create custom queries on this set of predefined streams, but they are not able to create streams of their own.
Many thanks!
It seems we expect the group by fields to come from the query's input stream, in this case JsonStream. Use another query before this one for the extraction, and do the aggregation and filtering in the following query:
@App:name("Group_by_json_attribute")
define stream JsonStream(json string);
@sink(type='log')
define stream LogStream(myField string, count long);
@info(name='extract_stream')
from JsonStream
select json:getString(json, '$.myField') as myField
insert into ExtractedStream;
@info(name='query1')
from ExtractedStream#window.time(10 sec)
select myField, count() as count
group by myField
having count > 1
insert into LogStream;
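For the embedded case mentioned in the question, a small sketch of running this two-query app with siddhi-core 5.x; it assumes siddhi-execution-json is on the classpath so json:getString() resolves.

import io.siddhi.core.SiddhiAppRuntime;
import io.siddhi.core.SiddhiManager;
import io.siddhi.core.stream.input.InputHandler;

public class GroupByJsonAttribute {
    public static void main(String[] args) throws Exception {
        String app = "@App:name('Group_by_json_attribute') "
                + "define stream JsonStream(json string); "
                + "@sink(type='log') define stream LogStream(myField string, count long); "
                + "@info(name='extract_stream') from JsonStream "
                + "select json:getString(json, '$.myField') as myField insert into ExtractedStream; "
                + "@info(name='query1') from ExtractedStream#window.time(10 sec) "
                + "select myField, count() as count group by myField having count > 1 "
                + "insert into LogStream;";

        SiddhiManager siddhiManager = new SiddhiManager();
        SiddhiAppRuntime runtime = siddhiManager.createSiddhiAppRuntime(app);
        InputHandler input = runtime.getInputHandler("JsonStream");
        runtime.start();

        // Two matching events so the 'having count > 1' condition fires and the log sink prints.
        input.send(new Object[]{"{\"myField\": \"my_value\"}"});
        input.send(new Object[]{"{\"myField\": \"my_value\"}"});

        Thread.sleep(500);  // give the window/sink a moment before shutting down
        runtime.shutdown();
        siddhiManager.shutdown();
    }
}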

Cassandra: making a data model / schema

(Not sure what it's called... model... schema... super model?)
I have 'n' (uniquely ID'd) sensors in 'm' (uniquely ID'd) homes. Each of these fires 0 to 'k' times per day (in blocks of 1-5). This data is currently stored in MySQL, with a table for each 'home' and a structure of:
time stamp
sensor id
firing count
I'm having trouble wrapping my mind around a 'NoSQL' model of this data that would allow me to find counts of firings by home, time, or sensor.
...Or maybe this isn't the right kind of data to push to NoSQL? Our current server is bogging down under the load (hundreds of millions of rows x hundreds of homes). I'm very interested in finding a data store that allows the scalability of Cassandra.
It depends. Think "Query first" approach:
identify the queries
model the data
So, while you might have a Column Family which is your physical model, you will also have one or more that provide the data as it is queried. And you can further take advantage of Cassandra features, such as:
Column names can contain data. You don't have to store a value; each of the names could be a timestamp, for example.
Cassandra is well suited to storing thousands of columns for each key, and the columns remain sorted and can be accessed in forward or reverse order; so, to continue the example above, you can easily get a list of all timestamps for a sensor.
Composite data types allow you to combine multiple bits of data into keys, names, or values, e.g. combine house id and sensor id.
Counter Columns provide a simple value increment, even for the initial value, so it is always just a write operation.
Indexes can be defined on static column names, which in effect provides a reverse Column Family with the key as the result; just be careful of bucket size (e.g. you might not want values down to the millisecond).
To store firing count by sensor and house:
House_Sensors <- Column family
house_id <- Key
sensor_id <- Column name
firing_count <- Column value
Data represented in JSON-ish notation
House_Sensors = {
house_1 : {
sensor_1: 3436,
sensor_2: 46,
sensor_3: 99,
...
},
house_2 : {
sensor_7: 0,
sensor_8: 444,
...
},
...
}
You may want to define another column family with sensor_id as the key to store the firing timestamps.
Think about what queries you need when designing the schema and denormalize as needed. Repeating data is fine; Cassandra inserts are very fast.
The timestamp of the firing is not stored in the House_Sensors column family; create a new column family for that, with sensor_id as the key.
That way you can use the House_Sensors family to query firing counts and which sensors belong to each house, and use the other column family to query the firing timestamps.
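In today's CQL terms (the answer above uses the older Thrift-era vocabulary), a rough sketch of the two tables and the per-firing writes with the DataStax Java driver 4.x; the keyspace, table, and column names mirror the example and are illustrative only.

import java.time.Instant;
import com.datastax.oss.driver.api.core.CqlSession;

public class SensorModel {
    public static void main(String[] args) {
        // Assumes a local Cassandra node and an existing 'sensors' keyspace.
        try (CqlSession session = CqlSession.builder().withKeyspace("sensors").build()) {
            // Firing counts per house and sensor (the House_Sensors column family above).
            session.execute(
                "CREATE TABLE IF NOT EXISTS house_sensors (" +
                "  house_id text, sensor_id text, firing_count counter," +
                "  PRIMARY KEY (house_id, sensor_id))");

            // Firing timestamps per sensor (the second column family suggested above).
            session.execute(
                "CREATE TABLE IF NOT EXISTS sensor_firings (" +
                "  sensor_id text, fired_at timestamp, firing_count int," +
                "  PRIMARY KEY (sensor_id, fired_at))");

            // On each firing: one counter increment and one timestamped row (denormalized on purpose).
            session.execute(
                "UPDATE house_sensors SET firing_count = firing_count + ? " +
                "WHERE house_id = ? AND sensor_id = ?",
                3L, "house_1", "sensor_1");
            session.execute(
                "INSERT INTO sensor_firings (sensor_id, fired_at, firing_count) " +
                "VALUES (?, ?, ?)",
                "sensor_1", Instant.now(), 3);
        }
    }
}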