SBT throwing JdbcSQLException syntax error for a PostgreSQL query - postgresql

I am running a SQL query using Slick to write to a PostgreSQL DB. Why am I getting a "Syntax error in SQL statement" error? Please assume all configurations are correct.
I have imported slick.jdbc.PostgresProfile.api._ in the client and slick.jdbc.H2Profile.api._ in the query builder. I have also separated the PostgreSQL and MySQL statements into different builders.
import bbc.rms.client.programmes.util.MySqlStringEscaper
import org.joda.time.DateTime
import slick.jdbc.H2Profile.api._

abstract class PopularBlurProgrammesQueryBuilder extends QueryBuilder with MySqlStringEscaper {

  def incrementBlurScoreQuery(pid: String, date: DateTime): DBIO[Int] = {
    sqlu"""
      INSERT INTO radio.core_entity_popularity (pid, score, date)
      VALUES ($pid, 1, ${flooredSQLDateTimeString(date)})
      ON CONFLICT ON CONSTRAINT core_entity_popularity_pkey
      DO UPDATE
      SET score = core_entity_popularity.score + 1
    """
  }
}
import slick.jdbc.PostgresProfile.api._

class SlickPopularBlurProgrammesClient[T](database: Database)(implicit executionContext: ExecutionContext)
    extends PopularBlurProgrammesQueryBuilder
    with PopularBlurProgrammesClient[T] {

  override def writeBlurIncrementedScore(pid: String, date: DateTime): Future[Int] = {
    database.run(incrementBlurScoreQuery(pid, date))
  }
}
The expected result is that the exception is not thrown and the integration tests pass. Integration test:
val currentDate = dateTimeFormat.parseDateTime("2018-12-19 16:00:00")
client.writeBlurIncrementedScore("pid", currentDate)
whenReady(client.writeBlurIncrementedScore("pid", currentDate)) { updatedRows =>
  updatedRows must be equalTo 1
}
stack trace:
org.h2.jdbc.JdbcSQLException: Syntax error in SQL statement "
INSERT INTO radio.core_entity_popularity (pid, score, date)
VALUES(?, 1, ?
) ON[*] CONFLICT ON CONSTRAINT core_entity_popularity_pkey
DO UPDATE
SET score = core_entity_popularity.score + 1
"; SQL statement:
INSERT INTO radio.core_entity_popularity (pid, score, date)
VALUES(?, 1, ?
) ON CONFLICT ON CONSTRAINT core_entity_popularity_pkey
DO UPDATE
SET score = core_entity_popularity.score + 1
[42000-193]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:345)
at org.h2.message.DbException.get(DbException.java:179)
at org.h2.message.DbException.get(DbException.java:155)
at org.h2.message.DbException.getSyntaxError(DbException.java:191)
at org.h2.command.Parser.getSyntaxError(Parser.java:530)
at org.h2.command.Parser.prepareCommand(Parser.java:257)
at org.h2.engine.Session.prepareLocal(Session.java:561)
at org.h2.engine.Session.prepareCommand(Session.java:502)
at org.h2.jdbc.JdbcConnection.prepareCommand(JdbcConnection.java:1203)
at org.h2.jdbc.JdbcPreparedStatement.<init>(JdbcPreparedStatement.java:73)
at org.h2.jdbc.JdbcConnection.prepareStatement(JdbcConnection.java:287)
at slick.jdbc.JdbcBackend$SessionDef$class.prepareStatement(JdbcBackend.scala:336)
at slick.jdbc.JdbcBackend$BaseSession.prepareStatement(JdbcBackend.scala:448)
at slick.jdbc.StatementInvoker.results(StatementInvoker.scala:32)
at slick.jdbc.StatementInvoker.iteratorTo(StatementInvoker.scala:21)
at slick.jdbc.Invoker$class.first(Invoker.scala:30)
at slick.jdbc.StatementInvoker.first(StatementInvoker.scala:15)
at slick.jdbc.StreamingInvokerAction$HeadAction.run(StreamingInvokerAction.scala:52)
at slick.jdbc.StreamingInvokerAction$HeadAction.run(StreamingInvokerAction.scala:51)
at slick.basic.BasicBackend$DatabaseDef$$anon$2.liftedTree1$1(BasicBackend.scala:275)
at slick.basic.BasicBackend$DatabaseDef$$anon$2.run(BasicBackend.scala:275)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

The problem you are facing is that you are submitting a PostgreSQL-specific query to an H2 database.
The INSERT syntax for PostgreSQL allows the ON CONFLICT clause:
[ WITH [ RECURSIVE ] with_query [, ...] ]
INSERT INTO table_name [ AS alias ] [ ( column_name [, ...] ) ]
{ DEFAULT VALUES | VALUES ( { expression | DEFAULT } [, ...] ) [, ...] | query }
[ ON CONFLICT [ conflict_target ] conflict_action ]
[ RETURNING * | output_expression [ [ AS ] output_name ] [, ...] ]
where conflict_target can be one of:
( { index_column_name | ( index_expression ) } [ COLLATE collation ] [ opclass ] [, ...] ) [ WHERE index_predicate ]
ON CONSTRAINT constraint_name
and conflict_action is one of:
DO NOTHING
DO UPDATE SET { column_name = { expression | DEFAULT } |
( column_name [, ...] ) = ( { expression | DEFAULT } [, ...] ) |
( column_name [, ...] ) = ( sub-SELECT )
} [, ...]
[ WHERE condition ]
from PostgreSQL docs
While the H2 INSERT syntax is
INSERT INTO tableName
{ [ ( columnName [,...] ) ]
{ VALUES
{ ( { DEFAULT | expression } [,...] ) } [,...] | [ DIRECT ] [ SORTED ] select } } |
{ SET { columnName = { DEFAULT | expression } } [,...] }
from H2 DB docs
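To make the difference concrete, here is a minimal sketch (using a hypothetical table t, purely for illustration) of an upsert that PostgreSQL accepts but that the H2 1.4.x parser rejects at exactly the ON keyword flagged in the stack trace:
-- hypothetical table, for illustration only
CREATE TABLE t (id int PRIMARY KEY, score int);

-- valid in PostgreSQL; H2 1.4.x does not know the ON CONFLICT clause and fails to parse it
INSERT INTO t (id, score) VALUES (1, 1)
ON CONFLICT (id) DO UPDATE SET score = t.score + 1;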

The problem was that PostgreSQL is very strict about expressions and did not like the way the date was handled. It could not tell that the date value was a Timestamp, so I had to call the PostgreSQL function to_timestamp(text, text) explicitly and also use $ to interpolate the variables in the query.
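For reference, the statement that ended up working looked roughly like the following SQL (the literals stand in for the interpolated $pid and date values, and the format string passed to to_timestamp is an assumption that has to match whatever flooredSQLDateTimeString produces):
INSERT INTO radio.core_entity_popularity (pid, score, date)
VALUES ('pid', 1, to_timestamp('2018-12-19 16:00:00', 'YYYY-MM-DD HH24:MI:SS'))
ON CONFLICT ON CONSTRAINT core_entity_popularity_pkey
DO UPDATE SET score = core_entity_popularity.score + 1;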

Related

UPDATE SET with different value for each row

I have a Python dict mapping elements to their values. For example:
db_rows_values = {
<element_uuid_1>: 12,
<element_uuid_2>: "abc",
<element_uuid_3>: [123, 124, 125],
}
And I need to update them all in one query. I did it in Python by generating the query in a loop with CASE:
sql_query_elements_values_part = " ".join([f"WHEN '{element_row['element_id']}' "
f"THEN '{ujson.dumps(element_row['value'])}'::JSONB "
for element_row in db_rows_values])
query_part_elements_values_update = f"""
elements_value_update AS (
UPDATE m2m_entries_n_elements
SET value =
CASE element_id
{sql_query_elements_values_part}
ELSE NULL
END
WHERE element_id = ANY(%(elements_ids)s::UUID[])
AND entry_id = ANY(%(entries_ids)s::UUID[])
RETURNING element_id, entry_id, value
),
But now I need to rewrite it in PL/pgSQL. I can pass db_rows_values as an array of ROWTYPE or as JSON, but how can I build something like the WHEN ... THEN part?
OK, I can pass the dict as JSON, convert it to rows with json_to_recordset and replace WHEN ... THEN with SET value = (SELECT .. WHERE):
WITH input_rows AS (
SELECT *
FROM json_to_recordset(
'[
{"element_id": 2, "value":"new_value_1"},
{"element_id": 4, "value": "new_value_2"}
]'
) AS x("element_id" int, "value" text)
)
UPDATE table1
SET value = (SELECT value FROM input_rows WHERE input_rows.element_id = table1.element_id)
WHERE element_id IN (SELECT element_id FROM input_rows);
https://dbfiddle.uk/?rdbms=postgres_14&fiddle=f8b6cd8285ec7757e0d8f38a1becb960
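An equivalent that avoids the correlated subquery (a sketch assuming the same table1 and input shape as above) joins the derived rows directly with UPDATE ... FROM:
WITH input_rows AS (
    SELECT *
    FROM json_to_recordset(
        '[{"element_id": 2, "value": "new_value_1"},
          {"element_id": 4, "value": "new_value_2"}]'
    ) AS x(element_id int, value text)
)
UPDATE table1
SET value = input_rows.value
FROM input_rows
WHERE table1.element_id = input_rows.element_id;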

Is there a function similar to Oracle's json_mergepatch() in Postgres?

My endpoint accepts client requests with HTTP method PATCH and a payload content type of JSON Merge Patch (RFC 7396).
https://www.rfc-editor.org/rfc/rfc7396
We used Oracle, where it was very convenient to update JSON content in the database using the function json_mergepatch():
UPDATE table_name SET po_document =
json_mergepatch(po_document, json_by_rfc7396);
https://docs.oracle.com/en/database/oracle/oracle-database/19/adjsn/updating-json-document-json-merge-patch.html
I have not found a similar function in Postgres; jsonb_set() and the operators || and #- are not convenient for deep patches of JSON content.
What's the PostgreSQL best practice for deep patching json content?
Example:
SELECT json_merge_patch(
'{"root": {"k1": "v1", "k2": "v2"} }'::jsonb, -- source JSON
'{"root": {"k1": "upd", "k2": null, "k3": "new"} }'::jsonb -- JSON patch (RFC 7396)
)
Output
{"root": {"k1": "upd","k3": "new"} }
The spec is simple enough to follow with recursion.
create or replace function jsonb_merge_patch(v_basedoc jsonb, v_patch jsonb)
returns jsonb as $$
  -- patchexpand: recursively expand the patch into (path, value, type, depth) rows
  with recursive patchexpand as (
    select '{}'::text[] as jpath, v_patch as jobj, jsonb_typeof(v_patch) as jtype, 0 as lvl
    union all
    select p.jpath||o.key as jpath, p.jobj->o.key as jobj, jsonb_typeof(p.jobj->o.key) as jtype, p.lvl + 1 as lvl
    from patchexpand p
    cross join lateral jsonb_each(case when p.jtype = 'object' then p.jobj else '{}'::jsonb end) as o(key, value)
  ),
  -- pathnum: number the rows, shallowest paths first
  pathnum as (
    select *, row_number() over (order by lvl, jpath) as rn
    from patchexpand
  ),
  -- apply: starting from the base document, apply each patch row in order:
  -- objects are skipped (their children are applied later), nulls delete the key,
  -- anything else is written with jsonb_set
  apply as (
    select case
             when jsonb_typeof(v_basedoc) = 'object' then v_basedoc
             else '{}'::jsonb
           end as basedoc,
           p.rn
    from pathnum p
    where p.rn = 1
    union all
    select case
             when p.jtype = 'object' then a.basedoc
             when p.jtype = 'null' then a.basedoc #- p.jpath
             else jsonb_set(a.basedoc, p.jpath, p.jobj)
           end as basedoc,
           p.rn
    from apply a
    join pathnum p
      on p.rn = a.rn + 1
  )
  -- a non-object patch simply replaces the whole document (per RFC 7396)
  select case
           when jsonb_typeof(v_patch) != 'object' then v_patch
           else basedoc
         end
  from apply
  order by rn desc
  limit 1;
$$
language sql;
Testing with the example in the RFC:
select jsonb_pretty(jsonb_merge_patch('{
"title": "Goodbye!",
"author" : {
"givenName" : "John",
"familyName" : "Doe"
},
"tags":[ "example", "sample" ],
"content": "This will be unchanged"
}'::jsonb,
'{
"title": "Hello!",
"phoneNumber": "+01-123-456-7890",
"author": {
"familyName": null
},
"tags": [ "example" ]
}'::jsonb));
jsonb_pretty
------------------------------------------
{ +
"tags": [ +
"example" +
], +
"title": "Hello!", +
"author": { +
"givenName": "John" +
}, +
"content": "This will be unchanged",+
"phoneNumber": "+01-123-456-7890" +
}
(1 row)
Testing with the example in your question:
SELECT jsonb_merge_patch(
'{"root": {"k1": "v1", "k2": "v2"} }'::jsonb, -- source JSON
'{"root": {"k1": "upd", "k2": null, "k3": "new"} }'::jsonb -- JSON patch (RFC 7396)
);
jsonb_merge_patch
--------------------------------------
{"root": {"k1": "upd", "k3": "new"}}
(1 row)
Leaving my 2 cents for a more compact solution here, based on this post:
CREATE OR REPLACE FUNCTION json_merge_patch("target" jsonb, "patch" jsonb) RETURNS jsonb AS $$
BEGIN
  RETURN COALESCE(jsonb_object_agg(
    COALESCE("tkey", "pkey"),
    CASE
      WHEN "tval" ISNULL THEN "pval"
      WHEN "pval" ISNULL THEN "tval"
      WHEN jsonb_typeof("tval") != 'object' OR jsonb_typeof("pval") != 'object' THEN "pval"
      ELSE json_merge_patch("tval", "pval")
    END
  ), '{}'::jsonb)
  FROM jsonb_each("target") e1("tkey", "tval")
  FULL JOIN jsonb_each("patch") e2("pkey", "pval")
    ON "tkey" = "pkey"
  WHERE jsonb_typeof("pval") != 'null'
     OR "pval" ISNULL;
END;
$$ LANGUAGE plpgsql;
As far as I can tell, it follows RFC 7396.
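A quick sanity check against the example from the question (assuming the function above has been created) should produce the same result as the recursive version:
SELECT json_merge_patch(
    '{"root": {"k1": "v1", "k2": "v2"} }'::jsonb,
    '{"root": {"k1": "upd", "k2": null, "k3": "new"} }'::jsonb
);
-- expected: {"root": {"k1": "upd", "k3": "new"}}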

Is there a way to upload a 212-column CSV file into PostgreSQL?

I have a CSV file with 122 columns that I am trying to load into Postgres. I am trying this:
create table appl_train ();
\copy appl_train FROM '/path/ to /file' DELIMITER ',' CSV HEADER;
I get this error
ERROR: extra data after last expected column
CONTEXT: COPY application_train, line 2: "0,100001,Cash loans,F,N,Y,0,135000.0,568800.0,20560.5,450000.0,Unaccompanied,Working,Higher educatio..."
The error message means that the number of columns in your table is less than the number of columns in your CSV file.
If the DDL of your table is exactly what you reported, you created a table with no columns. You have to list (at least) every column name and column data type when creating the table, as shown in the documentation:
CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXISTS ] table_name ( [
{ column_name data_type [ COLLATE collation ] [ column_constraint [ ... ] ]
| table_constraint
| LIKE parent_table [ like_option ... ] }
[, ... ]
] )
[ INHERITS ( parent_table [, ... ] ) ]
[ WITH ( storage_parameter [= value] [, ... ] ) | WITH OIDS | WITHOUT OIDS ]
[ ON COMMIT { PRESERVE ROWS | DELETE ROWS | DROP } ]
[ TABLESPACE tablespace ]
In your code you should have something like this:
create table appl_train (
first_column_name integer,
second_column_name integer,
third_column_name character varying (20),
-- more fields here
)

Recursive JSONB postgres

I am trying to build a recursive CTE in Postgres that supports both arrays and objects, to return a list of key-value pairs, and I don't seem to be able to find a good example. This is my current code:
with recursive jsonRecurse as
(
select
j.key as Path
,j.key
,j.value
from jsonb_each(to_jsonb('{
"key1": {
"key2": [
{
"key3": "test3",
"key4": "test4"
}
]
},
"key5": [
{
"key6":
[
{
"key7": "test7"
}
]
}
]
}'::jsonb)) j
union all
select
jr.path || '.' || jr2.Key
,jr2.key
,jr2.value
from jsonRecurse jr
left join lateral jsonb_each(jr.value) jr2 on true
where jsonb_typeof(jr.value) = 'object'
)
select
*
from jsonRecurse;
As you can see, the code stops recursing as soon as it hits an array instead of an object. I've tried playing around with a CASE statement, putting the call to jsonb_each or jsonb_array_elements inside the CASE instead, but I get an error telling me to use lateral joins.
I have used this example table to make the query more readable:
create table my_table(id serial primary key, jdata jsonb);
insert into my_table (jdata) values
('{
"key1": {
"key2": [
{
"key3": "test3",
"key4": "test4"
}
]
},
"key5": [
{
"key6":
[
{
"key7": "test7"
}
]
}
]
}');
You have to join both jsonb_each(value) and jsonb_array_elements(value) conditionally, depending on the type of value:
with recursive extract_all as
(
    select
        key as path,
        value
    from my_table
    cross join lateral jsonb_each(jdata)
    union all
    select
        path || '.' || coalesce(obj_key, (arr_key - 1)::text),
        coalesce(obj_value, arr_value)
    from extract_all
    left join lateral
        jsonb_each(case jsonb_typeof(value) when 'object' then value end)
        as o(obj_key, obj_value)
        on jsonb_typeof(value) = 'object'
    left join lateral
        jsonb_array_elements(case jsonb_typeof(value) when 'array' then value end)
        with ordinality as a(arr_value, arr_key)
        on jsonb_typeof(value) = 'array'
    where obj_key is not null or arr_key is not null
)
select *
from extract_all;
Output:
path | value
--------------------+------------------------------------------------
key1 | {"key2": [{"key3": "test3", "key4": "test4"}]}
key5 | [{"key6": [{"key7": "test7"}]}]
key1.key2 | [{"key3": "test3", "key4": "test4"}]
key5.0 | {"key6": [{"key7": "test7"}]}
key1.key2.0 | {"key3": "test3", "key4": "test4"}
key5.0.key6 | [{"key7": "test7"}]
key1.key2.0.key3 | "test3"
key1.key2.0.key4 | "test4"
key5.0.key6.0 | {"key7": "test7"}
key5.0.key6.0.key7 | "test7"
(10 rows)
Elements of JSON arrays have no keys, so we should use their indexes to build a path. Therefore the function jsonb_array_elements() should be called WITH ORDINALITY. Per the documentation (see 7.2.1.4. Table Functions):
If the WITH ORDINALITY clause is specified, an additional column of type bigint will be added to the function result columns. This column numbers the rows of the function result set, starting from 1.
The function call
jsonb_array_elements(case jsonb_typeof(value) when 'array' then value end)
with ordinality as a(arr_value, arr_key)
returns pairs (value, ordinality) aliased as (arr_value, arr_key).
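If only the leaf key-value pairs are needed rather than every intermediate object and array, one option (a sketch built on the query above) is to filter the final SELECT by type:
-- replaces the final "select * from extract_all" of the query above
select path, value
from extract_all
where jsonb_typeof(value) not in ('object', 'array');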

How to create session level table in PostgreSQL?

I am working on an application using Spring, Hibernate, and PostgreSQL 9.1. The requirement is that users can upload bulk data from the browser.
Now the data uploaded by each user is very crude and requires lots of validation before it can be put into the actual transaction table. I want a temporary table to be created whenever a user uploads; after the data is successfully dumped into this temp table, I will call a procedure to do the actual work of validating and moving the data from the temp table to the transaction table. If an error is encountered anywhere, I will write logs to another table so the user can see the status of their upload from the browser.
Does PostgreSQL have anything like a temporary, session-level table?
From the 9.1 manual:
CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXISTS ] table_name ( [
{ column_name data_type [ COLLATE collation ] [ column_constraint [ ... ] ]
| table_constraint
| LIKE parent_table [ like_option ... ] }
[, ... ]
] )
[ INHERITS ( parent_table [, ... ] ) ]
[ WITH ( storage_parameter [= value] [, ... ] ) | WITH OIDS | WITHOUT OIDS ]
[ ON COMMIT { PRESERVE ROWS | DELETE ROWS | DROP } ]
[ TABLESPACE tablespace ]
The key word here is TEMPORARY, although it is not necessary for the table to be temporary. It could also be a permanent table that you truncate before inserting. The whole operation (inserting and validating) would have to be wrapped in a transaction.
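As a sketch of the temporary-table route (the table and column names here are made up), the staging table can be created per session and is cleaned up automatically:
-- hypothetical staging table: visible only to this session, dropped automatically at session end
CREATE TEMPORARY TABLE upload_staging (
    line_no     integer,
    raw_payload text
) ON COMMIT PRESERVE ROWS;   -- the default; DELETE ROWS or DROP ties the data or the table to the transaction

-- after the bulk load, validate and move the good rows in one transaction
INSERT INTO transaction_table (line_no, payload)   -- transaction_table is a hypothetical target
SELECT line_no, raw_payload
FROM upload_staging
WHERE raw_payload IS NOT NULL;   -- stand-in for the real validation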