KSQL table get old and new value - apache-kafka

Is it possible in KSQL to stream out the old and new values from a table? We'd like to use a table as a store of values and, when one changes, stream out a "reversal" record containing the previous value (tagged in some way) together with the new value, so that downstream systems only have to handle the delta.

Kafka tables are generally used for storing the latest value for a key. For example, if a record with key '123' already exists in the table and a new record with the same key '123' but a different column value arrives on the topic, it will override (upsert) the existing value in the table.
So it is probably not a great idea to do this with a table alone.
Your use case is not entirely clear to me, but my suggestion would be to handle the delta feed with some mechanism either at the source of the stream or by using timestamps.

Yes, it's possible, though it does require some juggling.
First, create a table to keep the last state:
create table v1_mux_connection_ping_ta
as
select
assetid,
LATEST_BY_OFFSET(pingable) pingable
from v1_mux_connection_ping_st_parse
group by assetid;
The problem is that this also emits no-change updates. A solution is to translate the table into a stream by reading its underlying topic:
CREATE STREAM v1_mux_connection_ping_ta_s
(assetId VARCHAR KEY, pingable VARCHAR)
WITH (kafka_topic='V1_MUX_CONNECTION_PING_TA', value_format='JSON');
To arrive at only the changed values:
create table d_opt_details as
select
s.assetId,
LATEST_BY_OFFSET(s.pingable) new,
LATEST_BY_OFFSET(s.pingable, 2)[1] old
from v1_mux_connection_ping_ta_s s
group by
s.assetId;
create table opt_details as
select
s.assetId, s.new as pingable
from d_opt_details s
where new != old;
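Downstream consumers can then subscribe to just the deltas. As a minimal sketch (assuming ksqlDB's push-query syntax and the opt_details table defined above):
-- emits a row only when pingable actually changed for an asset
SELECT assetId, pingable FROM opt_details EMIT CHANGES;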

Related

How to achieve reliable db read calls with concurrency

Consider the very simple table definition of:
CREATE TABLE first_name
(
id integer NOT NULL,
name varchar(10),
PRIMARY KEY (id)
);
Now consider you have two rows like:
id | name
---+-----
 1 | Dan
 2 | Jack
Imagine you have X processes that from time to time read the max(id) value and then decide what sequential id to write as the new record.
The problem is that with multiple processes doing this, another id can already have been inserted between the read and the write.
What is the best option in Postgres to guarantee an atomic "read the latest id, then write the next one" when multiple processes are doing the same thing all the time?
I know there is the serial type (like MySQL's auto_increment), which manages the field sequentially and automatically. How will it perform when multiple processes use it with no explicit locking, just the serial definition? Is that sufficient? Are we protected against the concurrency problem here?
Example for the second declaration from point 2:
CREATE TABLE first_name
(
id serial,
name varchar(10),
PRIMARY KEY (id)
);
To get the max just query the serial value:
select currval(pg_get_serial_sequence('first_name', 'id'));
for example:
clima=# CREATE TABLE first_name
(
id serial,
name varchar(10)
);
CREATE TABLE
clima=# insert into first_name(name) select 'Diego';
INSERT 0 1
clima=# select currval(pg_get_serial_sequence('first_name', 'id'));
currval
---------
1
(1 row)
Yes and no; it depends. Do you have transactions, and with which scope?
The whole point of SERIAL is that the database solves the concurrency issue for you.
With respect to postgresql:
Here's a page from the postgresql documentation (Data Types/Numeric Types/Serial Types) which tells you that SERIAL columns are built on sequences.
Note: Because smallserial, serial and bigserial are implemented using sequences...
Here we see sequence generators: CREATE SEQUENCE, a construct (present in PostgreSQL, though not unique to it) that lets you create your own integer sequences without tying them to an (identity) column. The documentation discusses the semantics, which include the property that not every sequential number will necessarily appear in your sequence (because sequence ids are consumed even if the row isn't actually added to the table, e.g., if the inserting transaction is rolled back).
Because nextval and setval calls are never rolled back, sequence objects cannot be used if "gapless" assignment of sequence numbers is needed. It is possible to build gapless assignment by using exclusive locking of a table containing a counter; but this solution is much more expensive than sequence objects, especially if many transactions need sequence numbers concurrently.
(Also you can "cache" sequence generation but then you have issues with non-sequential sequence ids).
Finally, here we see that you can also use GENERATED AS IDENTITY to the same effect; this is standard SQL.
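As a minimal sketch of that last option (assuming PostgreSQL 10 or later, which supports identity columns; the RETURNING clause is only there to show the generated value):
CREATE TABLE first_name
(
id integer GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
name varchar(10)
);
-- the id is drawn from the backing sequence, safely under concurrency
INSERT INTO first_name (name) VALUES ('Dan') RETURNING id;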

Storing duplicate data as a column in Postgres?

In a database project, I have a users table which has a computed value avg_service_rating, and another table called services with all the services associated with each user and the ratings for those services. Is there a computationally light way to maintain the avg_service_rating without updating it every time an INSERT is done on the services table? Perhaps something like a generated column, but with a function call instead? Any direct advice or links to resources will be greatly appreciated as well!
CREATE TABLE users (
username VARCHAR PRIMARY KEY,
avg_service_ratings NUMERIC, -- is it possible to store some function call for this column?
...
);
CREATE TABLE service (
username VARCHAR NOT NULL REFERENCES users (username),
service_date DATE NOT NULL,
rating INTEGER,
PRIMARY KEY (username, service_date)
);
If the values should be consistent, a generated column won't fit the bill, since it is only recomputed if the row itself is modified.
I see two solutions:
have a trigger on the services table that updates the users table whenever a rating is added or modified. That slows down data modifications, but not your queries.
Turn users into a view. The original users table would be renamed, and it loses the avg_service_rating column, which is computed on the fly by the view.
To make the illusion perfect, create an INSTEAD OF INSERT OR UPDATE OR DELETE trigger on the view that modifies the underlying table. Then your application does not need to be changed.
With this solution you pay a certain price both on SELECT and on data modifications, but the latter price will be lower, since you don't have to modify two tables (and users might receive fewer modifications than services). An added advantage is that you avoid data duplication.
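A minimal sketch of the view approach (assuming the table and column names from the question; the renamed base table users_base is my own name, and the INSTEAD OF trigger for writes is left out):
-- rename the physical table and drop the duplicated column
ALTER TABLE users RENAME TO users_base;
ALTER TABLE users_base DROP COLUMN avg_service_ratings;
-- recreate users as a view that computes the average on the fly
CREATE VIEW users AS
SELECT u.username,
(SELECT avg(s.rating) FROM service s WHERE s.username = u.username) AS avg_service_ratings
FROM users_base u;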
A generated column would only be useful if the source data is in the same table row.
Otherwise your options are a view (where you could call a function or calculate the value via a subquery), or an AFTER UPDATE OR INSERT trigger on the service table, which updates users.avg_service_ratings. With a trigger, if you get a lot of updates on the service table you'd need to consider possible concurrency issues, but it would mean the figure doesn't need to be calculated every time a row in the users table is accessed.
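A minimal sketch of the trigger variant (assuming PostgreSQL 11 or later; the function and trigger names are made up for illustration):
CREATE FUNCTION refresh_avg_service_rating() RETURNS trigger AS $$
BEGIN
-- recompute the average for the affected user only
UPDATE users
SET avg_service_ratings = (SELECT avg(rating) FROM service WHERE username = NEW.username)
WHERE username = NEW.username;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER service_rating_refresh
AFTER INSERT OR UPDATE ON service
FOR EACH ROW EXECUTE FUNCTION refresh_avg_service_rating();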

Way to migrate a create table with sequence from postgres to DB2

I need to migrate a DDL from Postgres to DB2, and I need it to work the same way it does in Postgres. There is a table that generates values from a sequence, but the values can also be given explicitly.
Postgres
create sequence hist_id_seq;
create table benchmarksql.history (
hist_id integer not null default nextval('hist_id_seq') primary key,
h_c_id integer,
h_c_d_id integer,
h_c_w_id integer,
h_d_id integer,
h_w_id integer,
h_date timestamp,
h_amount decimal(6,2),
h_data varchar(24)
);
(Note the sequence call in the hist_id column, which supplies the default value for the primary key.)
The business logic inserts into the table by explicitly providing an ID in some cases, and in others it leaves the database to choose the number.
If I change this in DB2 to GENERATED ALWAYS it will throw errors, because some values are provided explicitly. On the other hand, if I create the table with GENERATED BY DEFAULT, DB2 will throw an error (SQL0803N) when trying to insert an already used value, because the "internal sequence" does not take the already inserted values into account and does not retry with the next value.
And, I do not want to restart the sequence each time a provided ID was inserted.
This is the problem in BenchmarkSQL when trying to port it to DB2: https://sourceforge.net/projects/benchmarksql/ (File sqlTableCreates)
How can I implement the same database logic in DB2 as it does in Postgres (and apparently in Oracle)?
You're operating under a misconception: that sources external to the db get to dictate its internal keys. Ideally/conceptually, autogenerated ids will never need to be seen outside of the db, as conceptually there should be unique natural keys for export or reporting. Still, there are times when applications will need to manage some ids, often when setting up related entities (eg, JPA seems to want to work this way).
However, if you add an id value that you generated from a different source, the db won't be able to manage it. How could it? It's not efficient - for one thing, attempting to do so would do one of the following
Be unsafe in the face of multiple clients (attempt to add duplicate keys)
Serialize access to the table (for a potentially slow query, too)
(This usually shows up when people attempt something like: SELECT MAX(id) + 1, which would require locking the entire table for thread safety, likely including statements that don't even touch that column. If you try to find any "first-unused" id - trying to fill gaps - this gets more complicated and problematic)
Neither is ideal, so it's best to not have the problem in the first place. This is usually done by having id columns be autogenerated, but (as pointed out earlier) there are situations where we may need to know what the id will be before we insert the row into the table. Fortunately, there's a standard SQL object for this: SEQUENCE. This provides a db-managed, thread-safe, fast way to get ids. It appears that in PostgreSQL you can use sequences in the DEFAULT clause for a column, but DB2 doesn't allow it. If you don't want to specify an id every time (it should be autogenerated some of the time), you'll need another way; this is the perfect time to use a BEFORE INSERT trigger:
CREATE TRIGGER Add_Generated_Id
NO CASCADE BEFORE INSERT ON benchmarksql.history
REFERENCING NEW AS Incoming_Entity
FOR EACH ROW
WHEN (Incoming_Entity.hist_id IS NULL)
SET Incoming_Entity.hist_id = NEXT VALUE FOR hist_id_seq
(something like this - not tested. You didn't specify where in the project this would belong)
So, if you then add a row with something like:
INSERT INTO benchmarksql.history (hist_id, h_data) VALUES(null, 'a')
or
INSERT INTO benchmarksql.history (h_data) VALUES('a')
an id will be generated and attached automatically. Note that ALL ids added to the table must come from the given sequence (as @mustaccio pointed out, this appears to be true even in PostgreSQL), or any UNIQUE CONSTRAINT on the column will start throwing duplicate-key errors. So any time your application needs an id before inserting a row in the table, you'll need some form of
SELECT NEXT VALUE FOR hist_id_seq
FROM sysibm.sysdummy1
... and that's it, pretty much. This is completely thread and concurrency safe, will not maintain/require long-term locks, nor require serialized access to the table.
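For completeness, an untested sketch of what the corresponding DB2 DDL might look like (column types carried over from the Postgres version; the BEFORE INSERT trigger above then fills hist_id from the sequence whenever it comes in as NULL):
CREATE SEQUENCE hist_id_seq;
CREATE TABLE benchmarksql.history (
hist_id integer NOT NULL PRIMARY KEY,
h_c_id integer,
h_c_d_id integer,
h_c_w_id integer,
h_d_id integer,
h_w_id integer,
h_date timestamp,
h_amount decimal(6,2),
h_data varchar(24)
);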

How to select the last record from a time series in Cassandra?

I want to store some encoded 'data' into Cassandra, versioned by timestamp. My tentative schema is:
CREATE TABLE items (
item_id varchar,
timestamp timestamp,
data blob,
PRIMARY KEY (item_id, timestamp)
);
I would like to be able to return the list of items, returning only the latest (highest timestamp) row for each item_id. Is that possible with this schema?
It is not possible to express such a query in a single CQL statement for this table, so the answer is no.
You can try creating another table, e.g. latest_items, and only storing the last update there, so the schema would be:
CREATE TABLE latest_items (
item_id varchar,
timestamp timestamp,
data blob,
PRIMARY KEY (item_id)
);
If your rows are inserted in timestamp order, the table would naturally contain only the latest row for each item. Then you can just run select * from latest_items limit 10000000;. This will of course be expensive, because you're fetching all rows, but given your requirements where you actually want all of them, there is no way to avoid it.
This second table involves duplicating your data, but this is a common theme with Cassandra. You can avoid duplicating the blob by storing it indirectly, i.e. as a path or URL or somesuch.
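A minimal sketch of the dual write (assuming the two tables above; the item id, timestamp, and blob values are made up, and a logged batch is used here simply to keep the two writes together):
BEGIN BATCH
INSERT INTO items (item_id, timestamp, data) VALUES ('item-1', '2021-01-01 00:00:00', 0xCAFE);
INSERT INTO latest_items (item_id, timestamp, data) VALUES ('item-1', '2021-01-01 00:00:00', 0xCAFE);
APPLY BATCH;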

How to maintain record history on table with one-to-many relationships?

I have a "services" table for detailing services that we provide. Among the data that needs recording are several small one-to-many relationships (all with a foreign key constraint to the service_id) such as:
service_owners -- user_ids responsible for delivery of service
service_tags -- e.g. IT, Records Management, Finance
customer_categories -- ENUM value
provider_categories -- ENUM value
software_used -- self-explanatory
The problem I have is that I want to keep a history of updates to a service, for which I'm using an update trigger on the table, that performs an insert into a history table matching the original columns. However, if a normalized approach to the above data is used, with separate tables and foreign keys for each one-to-many relationship, any update on these tables will not be recognised in the history of the service.
Does anyone have any suggestions? It seems like I need to store child keys in the service table to maintain the integrity of the service history. Is a delimited text field a valid approach here, or, as I am using PostgreSQL, are arrays perhaps also a valid option? These feel somewhat dirty though!
Thanks.
If your table is:
create table T (
ix int identity primary key,
val nvarchar(50)
)
And your history table is:
create table THistory (
ix int identity primary key,
val nvarchar(50),
updateType char(1), -- C=Create, U=Update or D=Delete
updateTime datetime,
updateUsername sysname
)
Then you just need to put an update trigger on all tables of interest. You can then find out what the state of any or all of the tables was at any point in history, to determine what the relationships were at that time.
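A minimal sketch of such a trigger, in the same T-SQL flavour as the tables above (untested; it logs the pre-update values from the deleted pseudo-table, and in practice you would also add a column to THistory to store the original ix):
CREATE TRIGGER T_history_log ON T
AFTER UPDATE
AS
BEGIN
-- copy the old state of every modified row into the history table
INSERT INTO THistory (val, updateType, updateTime, updateUsername)
SELECT d.val, 'U', GETDATE(), SUSER_SNAME()
FROM deleted d;
END;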
I'd avoid using arrays in any database whenever possible.
I don't like updates for the exact reason you are describing here... you lose information as it's overwritten. My answer is quite simple... don't update. Not sure if you're at a point where this can be implemented, but if you can, I'd recommend using the main table itself to store history (no need for a second set of history tables).
Add a column to your main header table called 'active'. This can be a character or a bit (0 is off and 1 is on). Then it's a bit of trigger magic: when an update is performed, you insert a row into the table identical to the record being overwritten, with a status of '0' (inactive), and then update the existing row (this keeps the ID column of the active record the same; the newly inserted record is the inactive one with a new ID).
This way no data is ever lost (admittedly you are storing quite a few rows...) and the history can easily be viewed with a select where active = 0.
The pain here is if you are working on something already implemented: every existing query that hits this table will need to be updated to include a check on the active column. That makes this solution very easy to implement if you are designing a new system, but a pain if it's a long-standing application. Unfortunately, existing reports will include both inactive and active records (without throwing an error) until you can modify their where clauses.
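A minimal sketch of that trigger logic in PostgreSQL 11+ syntax (since the question mentions PostgreSQL); the table, column, and function names here are made up purely for illustration:
CREATE TABLE service_demo (
id serial PRIMARY KEY,
name text,
active int NOT NULL DEFAULT 1 -- 1 = current row, 0 = historical copy
);

CREATE FUNCTION keep_old_row() RETURNS trigger AS $$
BEGIN
-- copy the row being overwritten, flagged as inactive, under a new id
INSERT INTO service_demo (name, active) VALUES (OLD.name, 0);
RETURN NEW; -- the update of the active row then proceeds unchanged
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER service_demo_keep_old
BEFORE UPDATE ON service_demo
FOR EACH ROW EXECUTE FUNCTION keep_old_row();
The history can then be viewed with a plain select where active = 0, as described above.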