PipelineDB: continuous view output stream unexpectedly shows same (old) and (new) values - pipelinedb

I am using PipelineDB 0.9.7u3
I played a little with the continuous view output stream to find out if I could build a new continuous view from just some of the updates.
This is my test case.
CREATE STREAM stream_test(ticketid text, val int, status text);
-- simple continuous view on stream_test
CREATE CONTINUOUS VIEW cv_test AS
SELECT
ticketid,
min(val) as v0,
keyed_min(val, status) as v0_status
FROM stream_test
GROUP BY ticketid;
-- continuous view to keep cv_test's updates and insertions
CREATE CONTINUOUS VIEW cv_test_upin AS
SELECT
(new).ticketid,
(old).v0 as oldV0,
(old).v0_status as oldV0Status,
(new).v0 as newV0,
(new).v0_status as newV0Status
FROM output_of('cv_test');
-- continuous view to keep just some cv_test's updates
CREATE CONTINUOUS VIEW cv_test_up AS
SELECT
(new).ticketid,
(old).v0 as oldV0,
(old).v0_status as oldV0Status,
(new).v0 as newV0,
(new).v0_status as newV0Status
FROM output_of('cv_test')
WHERE (old).v0 != (new).v0;
Let's put some data.
INSERT INTO stream_test VALUES
('t1', 124, 'open'),
('t2', 190, 'pending');
And as expected:
select * from cv_test;
"t2";190;"pending"
"t1";124;"open"
select * from cv_test_upin;
"t2";;"";190;"pending"
"t1";;"";124;"open"
select * from cv_test_up;
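(The empty result from cv_test_up here is expected: for brand-new groups the old tuple is NULL, and NULL != x evaluates to NULL in SQL's three-valued logic, so the WHERE clause filters those rows out. A quick sketch of that behavior outside PipelineDB, using sqlite3 with made-up old/new values:)

```python
import sqlite3

# Minimal sketch (plain SQLite, not PipelineDB): rows where the old
# value is NULL are dropped by WHERE old_v0 != new_v0, because
# NULL != x is NULL, which is not true.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE deltas (old_v0 INT, new_v0 INT)")
conn.executemany("INSERT INTO deltas VALUES (?, ?)",
                 [(None, 190), (None, 124),   # inserts: old is NULL
                  (190, 160), (124, 100)])    # updates: old is set
rows = conn.execute(
    "SELECT old_v0, new_v0 FROM deltas WHERE old_v0 != new_v0").fetchall()
print(rows)  # only the update rows survive the filter
```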
Then, some updates.
INSERT INTO stream_test VALUES
('t2', 160, 'waiting'),
('t1', 100, 'pending');
And as expected:
select * from cv_test;
"t2";160;"waiting"
"t1";100;"pending"
select * from cv_test_upin;
"t2";;"";190;"pending"
"t1";;"";124;"open"
"t2";190;"pending";160;"waiting"
"t1";124;"open";100;"pending"
select * from cv_test_up;
"t2";190;"pending";160;"waiting"
"t1";124;"open";100;"pending"
Now, some new data and some updates.
INSERT INTO stream_test VALUES
('t2', 90, 'spam'),
('t3', 140, 'open'),
('t1', 80, 'closed');
select * from cv_test; returned the expected result, but select * from cv_test_upin; did not.
...
"t2";160;"waiting";90;"spam"
"t3";;"";140;"open"
"t1";80;"closed";80;"closed"
I expected the last "t1" row to be "t1";100;"pending";80;"closed"
Bug or expected behaviour?
Thanks.

After digging into this, it appears that you have indeed discovered some unexpected behavior, and it is most likely a bug. We are going to resolve it shortly; here is the issue:
https://github.com/pipelinedb/pipelinedb/issues/1797
After it's resolved we will publish an updated release.

Related

DB2 measure execution time of triggers

How can I best measure the execution time of DB2 triggers for insert or update?
It is needed for some performance issues; some of the triggers are behaving very slowly.
CREATE OR REPLACE TRIGGER CHECK
NO CASCADE BEFORE INSERT ON DAG
REFERENCING NEW AS OBJ
FOR EACH ROW MODE DB2SQL
WHEN (xyz)
SIGNAL SQLSTATE xxx
For compiled triggers (that is, with BEGIN ... END body):
SELECT
T.TRIGNAME
, M.SECTION_NUMBER
, M.STMT_EXEC_TIME
, M.NUM_EXEC_WITH_METRICS
-- Other M metrics
, M.*
, M.STMT_TEXT
FROM SYSCAT.TRIGGERS T
JOIN SYSCAT.TRIGDEP D
ON (D.TRIGSCHEMA, D.TRIGNAME) = (T.TRIGSCHEMA, T.TRIGNAME)
JOIN TABLE (MON_GET_PKG_CACHE_STMT (NULL, NULL, NULL, -2)) M
ON (M.PACKAGE_SCHEMA, M.PACKAGE_NAME) = (D.BSCHEMA, D.BNAME)
WHERE D.BTYPE = 'K'
-- Or use whatever filter on your triggers
AND (T.TABSCHEMA, T.TABNAME) = ('MYSCHEMA', 'MYTABLE')
ORDER BY 1, 2
For inlined triggers (that is, with BEGIN ATOMIC ... END body):
There is no way to get separate statistics for them: they are compiled and optimized together with the statement that fired them.

Get postgres query log statement and duration as one record

I have log_min_duration_statement=0 in the config.
When I check the log file, the SQL statement and its duration are saved in different rows.
(Not sure what I have wrong, but statement and duration are not saved together as this answer points out.)
As I understand it, the session_line_num of a duration record always equals the session_line_num of the relevant statement + 1, for the same session of course.
Is this correct? Is the query below reliable for correctly getting statement and duration in one row?
(The csv log is imported into a postgres_log table):
WITH
sql_cte AS(
SELECT session_id, session_line_num, message AS sql_statement
FROM postgres_log
WHERE
message LIKE 'statement%'
)
,durat_cte AS (
SELECT session_id, session_line_num, message AS duration
FROM postgres_log
WHERE
message LIKE 'duration%'
)
SELECT
t1.session_id,
t1.session_line_num,
t1.sql_statement,
t2.duration
FROM sql_cte t1
LEFT JOIN durat_cte t2
ON t1.session_id = t2.session_id AND t1.session_line_num + 1 = t2.session_line_num;
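As a quick cross-check of the session_line_num + 1 assumption, here is a small Python sketch with made-up csvlog rows (only session_id, session_line_num, and message are kept) that pairs each statement line with the duration line that follows it:

```python
# Synthetic csvlog rows: (session_id, session_line_num, message).
# The pairing assumes the duration row immediately follows its
# statement row in the same session -- verify against a real log,
# since interleaved messages in a session would break the +1 rule.
rows = [
    ("abc.1", 3, "statement: SELECT 1"),
    ("abc.1", 4, "duration: 0.523 ms"),
    ("abc.1", 7, "statement: SELECT 2"),
    ("abc.1", 8, "duration: 1.042 ms"),
]
stmts = {(sid, n): msg for sid, n, msg in rows if msg.startswith("statement")}
durs = {(sid, n): msg for sid, n, msg in rows if msg.startswith("duration")}
paired = [(msg, durs.get((sid, n + 1)))  # None if no adjacent duration row
          for (sid, n), msg in sorted(stmts.items())]
print(paired)
```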

INSERT INTO SELECT JOIN returning null from joined table (Postgres)

I'm using the following Knex statement to copy data from two tables into another table:
const insert = knex.from(knex.raw('?? (??, ??, ??, ??, ??, ??, ??)', ['carrier', 'docket_number', 'dot_number', 'legal_name', 'dba_name', 'nbr_power_unit', 'rating', 'carrier_operation']))
.insert(knex('carrier_temp as ct').leftJoin('carrier_census_temp as cen', 'ct.dot_number', 'cen.DOT_NUMBER')
.select(['ct.docket_number as docket_number',
'ct.dot_number as dot_number',
'ct.legal_name as legal_name',
'ct.dba_name as dba_name',
'cen.tot_pwr as nbr_power_unit',
'cen.RATING as rating',
knex.raw('CASE WHEN cen.crrinter = \'A\' THEN \'INTERSTATE\' ELSE \'INTRASTATE\' END as "carrier_operation"')])).toString()
const conflict = knex.raw('ON CONFLICT (docket_number) DO NOTHING;').toString()
const q = insert + conflict
await knex.raw(q).debug()
The generated sql is:
INSERT INTO "carrier"
(
"docket_number",
"dot_number",
"legal_name",
"dba_name",
"nbr_power_unit",
"rating",
"carrier_operation"
)
SELECT "ct"."docket_number" AS "docket_number",
"ct"."dot_number" AS "dot_number",
"ct"."legal_name" AS "legal_name",
"ct"."dba_name" AS "dba_name",
"cen"."tot_pwr" AS "nbr_power_unit",
"cen"."RATING" AS "rating",
CASE
WHEN cen.crrinter = 'A' THEN 'INTERSTATE'
ELSE 'INTRASTATE'
END AS "carrier_operation"
FROM "carrier_temp" AS "ct"
left join "carrier_census_temp" AS "cen"
ON "ct"."dot_number" = "cen"."DOT_NUMBER" ON conflict (docket_number) DO nothing;
The carrier table correctly receives all the columns referenced from carrier_temp (ct), but the columns pulled from carrier_census_temp (cen) all end up NULL (nbr_power_unit, rating, and carrier_operation). The exception is the CASE expression, which sets every row's carrier_operation to INTRASTATE. If I instead compare against NULL in the CASE expression, it still sets every row to INTRASTATE. Does anyone have any idea why this is?
See comments on the question. I totally missed something wrong in my data that meant nothing was ever joined, which is why the joined columns were always NULL.
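(This is the standard LEFT JOIN symptom: when no right-hand row matches, the joined columns come back NULL, and CASE WHEN cen.crrinter = 'A' is not true for NULL, so every row falls through to the ELSE branch. A minimal sqlite3 sketch with hypothetical data:)

```python
import sqlite3

# Minimal sketch (hypothetical data): a LEFT JOIN with no matching
# right-hand row yields NULL for the joined columns, and the CASE
# test against NULL falls through to ELSE ('INTRASTATE').
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ct (dot_number TEXT)")
conn.execute("CREATE TABLE cen (dot_number TEXT, crrinter TEXT)")
conn.execute("INSERT INTO ct VALUES ('123')")
conn.execute("INSERT INTO cen VALUES ('999', 'A')")  # never matches ct
row = conn.execute("""
    SELECT cen.crrinter,
           CASE WHEN cen.crrinter = 'A'
                THEN 'INTERSTATE' ELSE 'INTRASTATE' END
    FROM ct LEFT JOIN cen ON ct.dot_number = cen.dot_number
""").fetchone()
print(row)  # (None, 'INTRASTATE')
```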

Update query not working inside a function, although the same query works when run manually

I am creating a function in which the update command is used twice in a row; the first update works but the second one does not.
I have tried execute format() for the second update, but it still does not work.
When the function runs, the second update has no effect, yet when I run the same update command manually the table gets updated.
The code is as follows:
update edmonton.weekly_pmt_report
set permit_number = pmt.prnum
from (select permit_details,
             split_part(permit_details, '-', 1) as prnum
      from edmonton.weekly_pmt_report) as pmt
where edmonton.weekly_pmt_report.permit_details = pmt.permit_details;

execute format('update edmonton.weekly_pmt_report
    set address = ds_dt.adr,
        job_description = ds_dt.job,
        applicant = ds_dt.apnt
    from (select split_part(per_num, ''-'', 1) as job_id,
                 job_des as job, addr as adr, applic as apnt
          from edmonton.descriptive_details) as ds_dt
    where edmonton.weekly_pmt_report.permit_number = ds_dt.job_id');
It turned out that the second update only matched about 400 of 1000 rows, so the NULL columns sorted to the top, which made it look as if the update was not working.

Updating remote view with dynamic value throws an error

I have an updatable view (vwItem) being accessed via a linked server ([sql\dev].)
When I update the view with a static data, the underlying table gets updated.
UPDATE ci SET CertifiedNumber = '44444'
FROM [sql\dev].contact.dbo.vwItem ci WITH (NOLOCK)
WHERE ci.CertifiedBatchID IN ( 5829 )
But when I try to pass a dynamic value,
declare @lo_vch_CertifiedNumber varchar(50) = '1111111111222222222233333'
UPDATE ci
SET CertifiedNumber = @lo_vch_CertifiedNumber + '44444'
FROM [sql\dev].contact.dbo.vwItem ci
WITH (NOLOCK)
WHERE ci.CertifiedBatchID IN ( 5829 )
it fails, with following error message
The statement has been terminated. Msg 16932, Level 16, State 1,
Line 1 The cursor has a FOR UPDATE list and the requested column
to be updated is not in this list.
I don't even use a cursor, but the error message mentions one.
Here is the definition of "vwItem".
CREATE view [dbo].vwItem
with schemabinding
AS
select CertifiedItemID = cast(CertifiedItemID as varchar),
CertifiedNumber, [Service], Weight, Price, CertifiedBatchID
from dbo.tblItem (nolock)
Why does the error occur and what does it mean?
I got around the problem by implementing a stored procedure that updates vwItem instead of going through the updatable view directly.