H2 gives syntax error for SQL merge or update query copied from official H2 site - merge

I want to run an upsert query on an H2 database, so WHEN MATCHED then update, WHEN NOT MATCHED then insert.
I have table:
create TABLE TESTTABLE
(NAME VARCHAR2(100) NOT NULL,
NUMBER1 INT,
NUMBER2 INT
);
And 1 row in it:
Peter 1 2
I tried this code, copied from the official H2 site:
MERGE INTO TESTTABLE AS T USING DUAL ON NAME = 'Peter'
WHEN NOT MATCHED THEN INSERT VALUES ('Peter3', 1, 2)
WHEN MATCHED THEN UPDATE SET NUMBER1 = 2 and NUMBER2 = 3;
Error:
Syntax error in SQL statement "MERGE INTO TESTTABLE AS[*] T USING ...
Without the AS:
MERGE INTO TESTTABLE USING DUAL ON NAME = 'Peter'
WHEN NOT MATCHED THEN INSERT VALUES ('Peter3', 1, 2)
WHEN MATCHED THEN UPDATE SET NUMBER1 = 2 and NUMBER2 = 3
Error:
Syntax error in SQL statement "MERGE INTO TESTTABLE USING[*] DUAL ON NAME = 'Peter'...
Other try:
MERGE INTO TESTTABLE (NAME, NUMBER1, NUMBER2)
KEY(NAME) VALUES('PETER')
WHEN MATCHED THEN UPDATE SET NUMBER1 = 2, NUMBER2 = 3
WHEN NOT MATCHED THEN INSERT VALUES ('Peter3', 1, 2);
Error:
Syntax error in SQL statement "MERGE INTO TESTTABLE (NAME, NUMBER1, NUMBER2) KEY(NAME) VALUES('PETER') WHEN[*] MATCHED THEN";
It seems none of the code on h2database.com is right.
Any ideas?
Thanks for the help!

The documentation on the website is for the latest version of H2, currently 1.4.199.
Your error message definitely came from some old unsupported version that does not have the standard MERGE statement. You need to upgrade your version of H2 to the latest one.
Note that
MERGE INTO TESTTABLE (NAME, NUMBER1, NUMBER2)
KEY(NAME) VALUES('PETER')
WHEN MATCHED …
is not valid at all. There are two different, incompatible MERGE statements in H2: the H2-specific one, which has a KEY clause and a VALUES clause (or a subquery), and the standard one, which has an additional source table, an ON condition, and WHEN MATCHED / WHEN NOT MATCHED clauses. Both commands are described separately in the documentation. You can't mix them.
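For reference, once you are on a current H2 version, each form would look roughly like this against the question's TESTTABLE (a sketch, not copied from the documentation; exact values are just examples):
-- H2-specific MERGE: column list, KEY clause and VALUES (or a subquery), no WHEN clauses
MERGE INTO TESTTABLE (NAME, NUMBER1, NUMBER2)
KEY (NAME)
VALUES ('Peter', 2, 3);
-- Standard MERGE: source table, ON condition, WHEN MATCHED / WHEN NOT MATCHED clauses
MERGE INTO TESTTABLE T
USING (SELECT 'Peter' AS NAME) S
ON T.NAME = S.NAME
WHEN MATCHED THEN UPDATE SET NUMBER1 = 2, NUMBER2 = 3
WHEN NOT MATCHED THEN INSERT VALUES ('Peter3', 1, 2);
Note that the SET clause uses commas between assignments, not AND.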


PGSQL - How to efficiently flatten key/value table [duplicate]

Does anyone know how to create crosstab queries in PostgreSQL?
For example I have the following table:
Section Status Count
A Active 1
A Inactive 2
B Active 4
B Inactive 5
I would like the query to return the following crosstab:
Section Active Inactive
A 1 2
B 4 5
Is this possible?
Install the additional module tablefunc once per database, which provides the function crosstab(). Since Postgres 9.1 you can use CREATE EXTENSION for that:
CREATE EXTENSION IF NOT EXISTS tablefunc;
Improved test case
CREATE TABLE tbl (
section text
, status text
, ct integer -- "count" is a reserved word in standard SQL
);
INSERT INTO tbl VALUES
('A', 'Active', 1), ('A', 'Inactive', 2)
, ('B', 'Active', 4), ('B', 'Inactive', 5)
, ('C', 'Inactive', 7); -- ('C', 'Active') is missing
Simple form - not fit for missing attributes
crosstab(text) with 1 input parameter:
SELECT *
FROM crosstab(
'SELECT section, status, ct
FROM tbl
ORDER BY 1,2' -- needs to be "ORDER BY 1,2" here
) AS ct ("Section" text, "Active" int, "Inactive" int);
Returns:
Section | Active | Inactive
---------+--------+----------
A | 1 | 2
B | 4 | 5
C | 7 | -- !!
No need for casting and renaming.
Note the incorrect result for C: the value 7 is filled in for the first column. Sometimes, this behavior is desirable, but not for this use case.
The simple form is also limited to exactly three columns in the provided input query: row_name, category, value. There is no room for extra columns like in the 2-parameter alternative below.
Safe form
crosstab(text, text) with 2 input parameters:
SELECT *
FROM crosstab(
'SELECT section, status, ct
FROM tbl
ORDER BY 1,2' -- could also just be "ORDER BY 1" here
, $$VALUES ('Active'::text), ('Inactive')$$
) AS ct ("Section" text, "Active" int, "Inactive" int);
Returns:
Section | Active | Inactive
---------+--------+----------
A | 1 | 2
B | 4 | 5
C | | 7 -- !!
Note the correct result for C.
The second parameter can be any query that returns one row per attribute matching the order of the column definition at the end. Often you will want to query distinct attributes from the underlying table like this:
'SELECT DISTINCT attribute FROM tbl ORDER BY 1'
That's in the manual.
Since you have to spell out all columns in a column definition list anyway (except for pre-defined crosstabN() variants), it is typically more efficient to provide a short list in a VALUES expression like demonstrated:
$$VALUES ('Active'::text), ('Inactive')$$
Or (not in the manual):
$$SELECT unnest('{Active,Inactive}'::text[])$$ -- short syntax for long lists
I used dollar quoting to make quoting easier.
You can even output columns with different data types with crosstab(text, text) - as long as the text representation of the value column is valid input for the target type. This way you might have attributes of different kind and output text, date, numeric etc. for respective attributes. There is a code example at the end of the chapter crosstab(text, text) in the manual.
db<>fiddle here
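As a small illustration of that mixed-type case (a sketch with made-up table and column names, not the manual's example):
CREATE TEMP TABLE person_attr (person_id int, attr text, val text);
INSERT INTO person_attr VALUES
  (1, 'name', 'Alice'), (1, 'birthday', '1990-01-01'), (1, 'score', '42')
, (2, 'name', 'Bob'),   (2, 'birthday', '1985-06-15'), (2, 'score', '7');
SELECT *
FROM crosstab(
   'SELECT person_id, attr, val FROM person_attr ORDER BY 1'
 , $$VALUES ('name'::text), ('birthday'), ('score')$$
) AS ct (person_id int, name text, birthday date, score int);
-- the text values '1990-01-01' and '42' are cast to date and int on output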
Effect of excess input rows
Excess input rows - duplicate rows for the same ("row_name", "category") combination, (section, status) in the example above - are handled differently by the two forms.
The 1-parameter form fills in available value columns from left to right. Excess values are discarded.
Earlier input rows win.
The 2-parameter form assigns each input value to its dedicated column, overwriting any previous assignment.
Later input rows win.
Typically, you don't have duplicates to begin with. But if you do, carefully adjust the sort order to your requirements - and document what's happening.
Or get fast arbitrary results if you don't care. Just be aware of the effect.
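For instance (a sketch, not from the manual): if duplicates existed in tbl, you could make the row with the largest value win in the 2-parameter form by sorting it last, since later input rows overwrite earlier ones:
SELECT *
FROM crosstab(
   'SELECT section, status, ct
    FROM tbl
    ORDER BY 1, 2, ct'   -- within each (section, status) group the largest ct comes last and wins
 , $$VALUES ('Active'::text), ('Inactive')$$
) AS ct ("Section" text, "Active" int, "Inactive" int);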
Advanced examples
Pivot on Multiple Columns using Tablefunc - also demonstrating mentioned "extra columns"
Dynamic alternative to pivot with CASE and GROUP BY
\crosstabview in psql
Postgres 9.6 added this meta-command to its default interactive terminal psql. You can run the query you would use as first crosstab() parameter and feed it to \crosstabview (immediately or in the next step). Like:
db=> SELECT section, status, ct FROM tbl \crosstabview
Similar result as above, but it's a representation feature on the client side exclusively. Input rows are treated slightly differently, hence ORDER BY is not required. Details for \crosstabview in the manual. There are more code examples at the bottom of that page.
Related answer on dba.SE by Daniel Vérité (the author of the psql feature):
How do I generate a pivoted CROSS JOIN where the resulting table definition is unknown?
SELECT section,
SUM(CASE status WHEN 'Active' THEN count ELSE 0 END) AS active, -- here you pivot each status value as a separate column explicitly
SUM(CASE status WHEN 'Inactive' THEN count ELSE 0 END) AS inactive
FROM t
GROUP BY section;
You can use the crosstab() function of the additional module tablefunc - which you have to install once per database. Since PostgreSQL 9.1 you can use CREATE EXTENSION for that:
CREATE EXTENSION tablefunc;
In your case, I believe it would look something like this:
CREATE TABLE t (Section CHAR(1), Status VARCHAR(10), Count integer);
INSERT INTO t VALUES ('A', 'Active', 1);
INSERT INTO t VALUES ('A', 'Inactive', 2);
INSERT INTO t VALUES ('B', 'Active', 4);
INSERT INTO t VALUES ('B', 'Inactive', 5);
SELECT row_name AS Section,
category_1::integer AS Active,
category_2::integer AS Inactive
FROM crosstab('select section::text, status, count::text from t',2)
AS ct (row_name text, category_1 text, category_2 text);
DB Fiddle here:
Everything works: https://dbfiddle.uk/iKCW9Uhh
Without CREATE EXTENSION tablefunc; you get this error: https://dbfiddle.uk/j8W1CMvI
ERROR: function crosstab(unknown, integer) does not exist
LINE 4: FROM crosstab('select section::text, status, count::text fro...
^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
Solution with JSON aggregation:
CREATE TEMP TABLE t (
section text
, status text
, ct integer -- don't use "count" as column name.
);
INSERT INTO t VALUES
('A', 'Active', 1), ('A', 'Inactive', 2)
, ('B', 'Active', 4), ('B', 'Inactive', 5)
, ('C', 'Inactive', 7);
SELECT section,
(obj ->> 'Active')::int AS active,
(obj ->> 'Inactive')::int AS inactive
FROM (SELECT section, json_object_agg(status,ct) AS obj
FROM t
GROUP BY section
)X
Sorry this isn't complete because I can't test it here, but it may point you in the right direction. I'm translating from something I use that makes a similar query:
select mt.section, mt1.count as Active, mt2.count as Inactive
from mytable mt
left join (select section, count from mytable where status='Active')mt1
on mt.section = mt1.section
left join (select section, count from mytable where status='Inactive')mt2
on mt.section = mt2.section
group by mt.section,
mt1.count,
mt2.count
order by mt.section asc;
The code I'm working from is:
select m.typeID, m1.highBid, m2.lowAsk, m1.highBid - m2.lowAsk as diff, 100*(m1.highBid - m2.lowAsk)/m2.lowAsk as diffPercent
from mktTrades m
left join (select typeID,MAX(price) as highBid from mktTrades where bid=1 group by typeID)m1
on m.typeID = m1.typeID
left join (select typeID,MIN(price) as lowAsk from mktTrades where bid=0 group by typeID)m2
on m1.typeID = m2.typeID
group by m.typeID,
m1.highBid,
m2.lowAsk
order by diffPercent desc;
which will return a typeID, the highest price bid and the lowest price asked and the difference between the two (a positive difference would mean something could be bought for less than it can be sold).
There's a different dynamic method that I've devised, one that employs a dynamic record type (a temp table, built via an anonymous procedure) and JSON. This may be useful for an end user who can't install the tablefunc/crosstab extension, but can still create temp tables or run anonymous procedures.
The example assumes all the xtab columns are the same type (INTEGER), but the number of columns is data-driven and variadic. That said, JSON aggregate functions do allow for mixed data types, so there's potential for innovation via the use of embedded composite (mixed) types.
The real meat of it can be reduced to one step if you want to statically define the record type inside the JSON recordset function (via nested SELECTs that emit a composite type).
dbfiddle.uk
https://dbfiddle.uk/N1EzugHk
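The fiddle has the full code; the gist of the approach (my own minimal sketch, not the fiddle's exact code, assuming Postgres 9.5+ for jsonb_object_agg, the tbl table from above, and an assumed temp-table name xtab_rec) is roughly:
-- 1) build a temp table whose column list is data-driven, via an anonymous block
DO $$
DECLARE
  cols text;
BEGIN
  SELECT string_agg(format('%I integer', status), ', ' ORDER BY status)
  INTO cols
  FROM (SELECT DISTINCT status FROM tbl) s;
  EXECUTE format('CREATE TEMP TABLE xtab_rec (section text, %s)', cols);
END
$$;
-- 2) aggregate each section into a JSON object and pour it into that row type
SELECT x.*
FROM (
  SELECT jsonb_build_object('section', section) || jsonb_object_agg(status, ct) AS obj
  FROM tbl
  GROUP BY section
) j
CROSS JOIN LATERAL jsonb_populate_record(NULL::xtab_rec, j.obj) AS x;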
The crosstab function is available in the tablefunc extension. You'll have to create this extension once for the database.
CREATE EXTENSION tablefunc;
You can use the code below to create a pivot table with crosstab:
create table test_Crosstab( section text,
status text,
count numeric);
insert into test_Crosstab values ( 'A','Active',1)
,( 'A','Inactive',2)
,( 'B','Active',4)
,( 'B','Inactive',5);
select * from crosstab(
'select section
,status
,count
from test_crosstab
order by 1,2'  -- the source query should be ordered by row name, then category
) as ctab ("Section" text,"Active" numeric,"Inactive" numeric);

How to construct dynamic SQL where condition against JSON column

I have a SQL table that stores data in JSON format. I am using the sample data below to illustrate the issue. Each document type has its own JSON structure.
DocumentID DocumentTypeID Status JsonData
----------------------------------------------------------------------------
1 2 Active {"FirstName":"Foo","LastName":"Bar","States":"[OK]"}
2 2 Active {"FirstName":"James","LastName":"Smith","States":"[TX,NY]"}
3 3 Active {"Make":"Ford","Model":"Focus","Year":"[2020]"}
4 3 Active {"Make":"Tesla","Model":"X","Year":"[2012,2015,2019]"}
Then I have another JSON document that needs to be used in the WHERE condition:
@Condition = '{"FirstName": "James",LastName:"Smith","States":[TX]}'
I will also have DocumentTypeID as a parameter.
So in normal SQL, if I hard-code the property names, the query would look something like:
SELECT * FROM Documents d
WHERE
d.DocumentTypeID = @DocumentTypeID AND
JSON_VALUE(d.JsonData,'$.FirstName') = JSON_VALUE(@Condition,'$.FirstName') AND
JSON_VALUE(d.JsonData,'$.LastName') = JSON_VALUE(@Condition,'$.LastName') AND
JSON_QUERY(d.JsonData,'$.States') = JSON_QUERY(@Condition,'$.States') -- This line is wrong. I have
-- to check if one array is
-- subset of another array
Given
The property names in the JsonData column and in @Condition will match exactly for a given DocumentTypeID.
I already have another SQL table that stores each DocumentType and its properties. If it helps, I can store the JSON path for each property, which could be used in the above query to dynamically construct the WHERE condition:
DocumentTypeID PropertyName JsonPath DataType
---------------------------------------------------------------------------------
2 FirstName $.FirstName String
2 LastName $.LastName String
2 States $.States Array
3 Make $.Make String
3 Model $.Model String
3 Year $.Year Array
ISSUE
For each document type, @Condition will have a different JSON structure. How do I construct a dynamic WHERE condition? Is this even possible in SQL?
I am using C#/.NET, so I was thinking of constructing the SQL query in C# and just executing it. But before I go that route, I want to check whether it's possible to do this in T-SQL.
Unfortunately, JSON support was only added to SQL Server in the 2016 version, and it still has room for improvement. Working with JSON data that contains arrays is quite cumbersome, involving OPENJSON to get the data, and another OPENJSON to get the array data.
An SQL-based solution to this is possible - but, as I wrote, cumbersome.
First, create and populate sample table (Please save us this step in your future questions):
DECLARE @Documents AS TABLE (
[DocumentID] int,
[DocumentTypeID] int,
[Status] varchar(6),
[JsonData] varchar(100)
);
INSERT INTO @Documents ([DocumentID], [DocumentTypeID], [Status], [JsonData]) VALUES
(1, 2, 'Active', '{"FirstName":"Foo","LastName":"Bar","States":["OK"]}'),
(2, 2, 'Active', '{"FirstName":"James","LastName":"Smith","States":["TX","NY"]}'),
(2, 2, 'Active', '{"FirstName":"James","LastName":"Smith","States":["OK", "NY"]}'),
(2, 2, 'Active', '{"FirstName":"James","LastName":"Smith","States":["OH", "OK"]}'),
(3, 3, 'Active', '{"Make":"Ford","Model":"Focus","Year":[2020]}'),
(4, 3, 'Active', '{"Make":"Tesla","Model":"X","Year":[2012,2015,2019]}');
Note I've added a couple of rows to the sample data, to verify the condition is working properly.
Also, as a side note, some of the JSON data in the question was improperly formatted - I've had to fix that.
Then, declare the search parameters (Note: I still think sending a JSON string as a search condition is potentially risky):
DECLARE @DocumentTypeID int = 2,
@Condition varchar(100) = '{"FirstName": "James","LastName":"Smith","States":["TX", "OH"]}';
(Note: I've added another state - again to make sure the condition works as it should.)
Then, I've used a common table expression with OPENJSON and CROSS APPLY to convert the JSON condition to tabular data, and joined that CTE to the table:
WITH CTE AS
(
SELECT FirstName, LastName, [State]
FROM OPENJSON(@Condition)
WITH (
FirstName varchar(10) '$.FirstName',
LastName varchar(10) '$.LastName',
States nvarchar(max) '$.States' AS JSON
)
CROSS APPLY OPENJSON(States)
WITH (
[State] varchar(2) '$'
)
)
SELECT [DocumentID], [DocumentTypeID], [Status], [JsonData]
FROM @Documents
CROSS APPLY
OPENJSON([JsonData])
WITH(
-- Since we already have to use OPENJSON, no point of also using JSON_VALUE...
FirstName varchar(10) '$.FirstName',
LastName varchar(10) '$.LastName',
States nvarchar(max) '$.States' AS JSON
) As JD
CROSS APPLY OPENJSON(States)
WITH(
[State] varchar(2) '$'
) As JDS
JOIN CTE
ON JD.FirstName = CTE.FirstName
AND JD.LastName = CTE.LastName
AND JDS.[State] = CTE.[State]
WHERE DocumentTypeID = @DocumentTypeID
Results:
DocumentID DocumentTypeID Status JsonData
2 2 Active {"FirstName":"James","LastName":"Smith","States":["TX","NY"]}
2 2 Active {"FirstName":"James","LastName":"Smith","States":["OH", "OK"]}
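As for the dynamic part of the question: the string-valued comparisons could also be generated from the property table instead of being hard-coded. A rough, untested sketch (my own, assuming the property table is named DocumentTypeProperties with the columns shown in the question, reusing the @DocumentTypeID / @Condition parameters declared above, and running against the real Documents table; array properties would still need the OPENJSON join shown above):
DECLARE @sql nvarchar(max) =
    N'SELECT d.* FROM Documents d WHERE d.DocumentTypeID = @DocumentTypeID';
SELECT @sql += CASE DataType
                    WHEN 'String' THEN
                         ' AND JSON_VALUE(d.JsonData, ''' + JsonPath + ''')'
                       + ' = JSON_VALUE(@Condition, ''' + JsonPath + ''')'
                    ELSE ''   -- arrays are skipped here; see the OPENJSON approach above
               END
FROM DocumentTypeProperties
WHERE DocumentTypeID = @DocumentTypeID;
EXEC sys.sp_executesql
     @sql,
     N'@DocumentTypeID int, @Condition nvarchar(max)',
     @DocumentTypeID = @DocumentTypeID,
     @Condition = @Condition;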

Trying to manipulate string such as if '26169;#c785643', then the result should be like 'c785643'

I am trying to manipulate string data in a column such that if the given string is '20591;#e123456;#17507;#c567890;#15518;#e135791' or '26169;#c785643', then the result should be 'e123456;c567890;e135791' or 'c785643' respectively. The number of digits in between can be of any length.
Some of the things I have tried so far are:
select replace('20591;#e123456;#17507;#c567890;#15518;#e135791','#','');
This leaves me with '20591;e123456;17507;c567890;15518;e135791', which still includes the digits without 'e' or 'c' prefixed to them. I want to get rid of 20591, 17507 and 15518.
Creating a function that will keep a pattern of '%[#][ec][0-9][;]%' and get rid of the rest.
The most important advice is: do not store any data in a delimited string. This violates the most basic principle of relational database design (1NF).
The second hint is SO-related: please always tag your questions with the appropriate tool. The tag [tsql] points to SQL Server, but this might be wrong (which would invalidate both answers). Please tag the full product with its version (e.g. [sql-server-2012]). Especially with string splitting, there are very important product-related changes from version to version.
Now to your question.
Working with (almost) any version of SQL-Server
My suggestion uses a trick with XML:
(credits to Alan Burstein for the mockup)
DECLARE @table TABLE (someid INT IDENTITY, somestring VARCHAR(50));
INSERT @table VALUES ('20591;#e123456;#17507;#c567890;#15518;#e135791'),('26169;#c785643')
--the query
SELECT t.someid,t.somestring,A.CastedToXml
,STUFF(REPLACE(A.CastedToXml.query('/x[contains(text()[1],"#") and empty(substring(text()[1],2,100) cast as xs:int?)]')
.value('.','nvarchar(max)'),'#',';'),1,1,'') TheNewList
FROM @table t
CROSS APPLY(SELECT CAST('<x>' + REPLACE(t.somestring,';','</x><x>') + '</x>' AS XML)) A(CastedToXml);
The idea in short:
By replacing the ; with XML tags </x><x> we can transform your delimited list to XML. I included the intermediate XML into the result set. Just click it to see how this works.
In the next query I use an XQuery predicate: first to find entries which contain a #, and second, which do NOT cast to an integer without the #.
The third step is specific to XML again. The XPath . in .value() will return all content as one string.
Finally we have to replace the # with ; and cut away the leading ; using STUFF().
UPDATE The same idea, but a bit shorter:
You can try this as well
SELECT t.someid,t.somestring,A.CastedToXml
,REPLACE(A.CastedToXml.query('data(/x[empty(. cast as xs:int?)])')
.value('.','nvarchar(max)'),' ',';') TheNewList
FROM @table t
CROSS APPLY(SELECT CAST('<x>' + REPLACE(t.somestring,';#','</x><x>') + '</x>' AS XML)) A(CastedToXml);
Here I use ;# to split your string and data() to implicitly concatenate your values (blank-separated).
UPDATE 2 for v2017
If you have v2017+ I'd suggest a combination of a JSON splitter and STRING_AGG():
SELECT t.someid,STRING_AGG(A.[value],';') AS TheNewList
FROM @table t
CROSS APPLY OPENJSON(CONCAT('["',REPLACE(t.somestring,';#','","'),'"]')) A
WHERE TRY_CAST(A.[value] AS INT) IS NULL
GROUP BY t.someid;
You did not include the version of SQL Server you are on. If you are using 2016+ you can use STRING_SPLIT, otherwise a good T-SQL splitter will do.
Against a single variable:
DECLARE @somestring VARCHAR(1000) = '20591;#e123456;#17507;#c567890;#15518;#e135791';
SELECT NewString = STUFF((
SELECT ','+split.item
FROM STRING_SPLIT(@somestring,';') AS s
CROSS APPLY (VALUES(REPLACE(s.[value],'#',''))) AS split(item)
WHERE split.item LIKE '[a-z][0-9]%'
FOR XML PATH('')),1,1,'');
Returns:
NewString
----------------------
e123456,c567890,e135791
Against a table:
DECLARE @table TABLE (someid INT IDENTITY, somestring VARCHAR(50));
INSERT @table VALUES ('20591;#e123456;#17507;#c567890;#15518;#e135791'),('26169;#c785643')
SELECT t.*, fn.NewString
FROM @table AS t
CROSS APPLY
(
SELECT NewString = STUFF((
SELECT ','+split.item
FROM STRING_SPLIT(t.somestring,';') AS s
CROSS APPLY (VALUES(REPLACE(s.[value],'#',''))) AS split(item)
WHERE split.item LIKE '[a-z][0-9]%'
FOR XML PATH('')),1,1,'')
) AS fn;
Returns:
someid somestring NewString
----------- -------------------------------------------------- -----------------------------
1 20591;#e123456;#17507;#c567890;#15518;#e135791 e123456,c567890,e135791
2 26169;#c785643 c785643

Handling of subselects in PostgreSQL

In the following example we want to find duplicates with different ids
-- table msg
create table msg(
id int primary key,
msg varchar(10));
insert into msg
values (1, 'single'),
(2, 'twice'),
(3, 'twice'),
(4, 'threefold'),
(5, 'threefold'),
(6, 'threefold');
select * from msg;
Counting the number of identical messages with different ids
First example:
select id, msg, (select count(*) from msg m1 where m1.msg = msg) as cnt
from msg;
=> wrong result, since "msg" in the subselect resolves to "m1.msg", not "msg.msg"
Second example:
select id, msg, (select count(*) from msg m1 where msg.msg = m1.msg) as cnt
from msg;
=> correct result
Third example (may be easier to understand):
select m1.id, m1.msg, (select count(*) from msg m2 where m1.msg = m2.msg) as cnt
from msg m1;
=> correct result
Question:
Is the behaviour of the first example conform with the SQL standard?
We should always use a table alias and qualify column names with it. Otherwise the database resolves the column on its own, and we may not get the desired result.
Qualifying columns with a table alias can also help performance. Assume you have joined n tables and reference their columns in the SELECT and WHERE clauses. When you execute the query, the system parses it and, during parsing, checks which table each column belongs to. If you don't use alias names, it has to search all the tables to find which one has that column, which takes time. Qualifying each column with its table alias avoids that, and also avoids "column ambiguously defined" errors.
In a subquery, always qualify columns with the table alias.
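As a side note, a window function gives the same per-message count without any correlated subquery, so the aliasing question never comes up (a sketch, not part of the original answer; works on PostgreSQL 8.4+):
SELECT id, msg, count(*) OVER (PARTITION BY msg) AS cnt
FROM msg;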

Why can't I use WHERE NOT EXISTS with INSERT?

I want to insert a tag named "foo", unless it already exists. So I constructed the following query:
INSERT INTO "tag" ("name") VALUES ('foo')
WHERE NOT EXISTS (SELECT 1 FROM "tag" WHERE ("tag"."name" = 'foo'));
But this will fail with the following error:
ERROR: syntax error at or near "WHERE"
LINE 1: INSERT INTO "tag" ("name") VALUES ('foo') WHERE NOT EXISTS (...
^
I don't understand what the problem with that query is, especially since I can provide a subquery instead of VALUES and suddenly the query is perfectly fine:
INSERT INTO "tag" ("name") SELECT 'foo' AS name
WHERE NOT EXISTS (SELECT 1 FROM "tag" WHERE ("tag"."name" = 'foo'));
This results in:
Query returned successfully: 0 rows affected, 11 ms execution time.
It's 0 rows, because the tag already exists.
You can, you just need to use the INSERT INTO ... SELECT ... form.
INSERT INTO "tag" ("name")
SELECT 'foo'
WHERE NOT EXISTS (SELECT 1 FROM "tag" WHERE ("tag"."name" = 'foo'));
However, it doesn't do what you want. At least not under concurrent workloads. You can still get unique violations or duplicate inserts.
See:
How to UPSERT (MERGE, INSERT ... ON DUPLICATE UPDATE) in PostgreSQL?
Insert, on duplicate update in PostgreSQL?
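If you are on PostgreSQL 9.5 or later and "name" has a UNIQUE constraint (an assumption here), the race-free form is INSERT ... ON CONFLICT; a minimal sketch:
-- requires a unique constraint or unique index on "name"
INSERT INTO "tag" ("name") VALUES ('foo')
ON CONFLICT ("name") DO NOTHING;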