Using jsonb_set in update with jOOQ - postgresql

I have a sql query for updating a status value in a data column of type
jsonb in Postgresql that looks like this:
update sample
set updated = now(),
data = jsonb_set(data, '{status}', jsonb '"CANCELLED"', true)
where id = 11;
I need to translate that to a working jOOQ query in my Kotlin project ... I
have this so far:
jooq.update(Tables.SAMPLE)
    .set(Tables.SAMPLE.UPDATED, OffsetDateTime.now())
    .set(Tables.SAMPLE.DATA, field("jsonb_set(data, '{status}', jsonb '\"CANCELLED\"', true)"))
    .where(Tables.SAMPLE.ID.eq(id))
    .execute()
But the second set() call fails to compile with a "None of the following functions can be called with the arguments supplied" error message... What is the correct signature of set that I can use here?
I am basing my jOOQ syntax on the answer that Lukas Eder provided in Using raw value-expressions in UPDATE with jooq

In an UPDATE statement, you have to match data types in your SET clause on both sides. I.e. SAMPLE.DATA is of type Field<T>, so the expression you're setting it to must also be of type Field<T>.
I'm assuming that SAMPLE.DATA is a Field<JSONB>, so it will be sufficient to write
.set(SAMPLE.DATA, field("jsonb_set(...)", JSONB::class.java))
Notice that jOOQ 3.12 introduced this JSONB type. In previous versions, lacking any out-of-the-box jOOQ representation for the JSON and JSONB types, the jOOQ code generator may have generated a Field<Object> type for your SAMPLE.DATA column, in which case your statement would have compiled.
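For illustration, here is a sketch of the full statement in Kotlin (untested), assuming SAMPLE.DATA was generated as a Field<JSONB>. It passes the pieces through jOOQ's plain SQL template placeholders {0}, {1}, {2} instead of concatenating them, which also keeps a literal '{status}' out of the template string, where curly braces have a special meaning to jOOQ:

import org.jooq.JSONB
import org.jooq.impl.DSL.array
import org.jooq.impl.DSL.field
import org.jooq.impl.DSL.value

jooq.update(Tables.SAMPLE)
    .set(Tables.SAMPLE.UPDATED, OffsetDateTime.now())
    .set(
        Tables.SAMPLE.DATA,
        // jsonb_set(target jsonb, path text[], new_value jsonb, create_missing boolean)
        field(
            "jsonb_set({0}, {1}, {2}, true)",
            JSONB::class.java,
            Tables.SAMPLE.DATA,
            array("status"),                      // bound as the text[] path
            value(JSONB.valueOf("\"CANCELLED\"")) // a JSON string needs its own quotes
        )
    )
    .where(Tables.SAMPLE.ID.eq(id))
    .execute()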

Related

Redshift Spectrum table doesn't recognize array

I have run a crawler on a JSON S3 file to update an existing external table.
Once it finished, I checked SVL_S3LOG to see the structure of the external table and saw that it was updated and that I have a new column with the Array<int> type, as expected.
When I tried to execute select * on the external table I got this error: "Invalid operation: Nested tables do not support '*' in the SELECT clause.;"
So I tried to spell out the select statement with all the column names:
select name, date, books.... (books is the Array<int> type)
from external_table_a1
and got this error:
Invalid operation: column "books" does not exist in external_table_a1;
I have also checked the table external_table_a1 under "AWS Glue" and saw that the column "books" is recognized and has the type Array<int>.
Can someone explain why my simple query is wrong?
What am I missing?
Querying JSON data is a bit of a hassle with Redshift: when parsing is enabled (e.g. using the appropriate SerDe configuration), the JSON is stored as a SUPER type. In your case that's the Array<int>.
The AWS documentation on querying semistructured data seems pretty straightforward, mentioning that PartiQL uses "dotted notation and array subscript for path navigation when accessing nested data". This doesn't work for me, though, and I can't find any reason for that in their SUPER limitations documentation.
Solution 1
What I had to do was set the flags set json_serialization_enable to true; and set json_serialization_parse_nested_strings to true;, which serialize the SUPER type as JSON (i.e. back to JSON). I can then use JSON functions to query the data. Unnesting the data gets even crazier, because on SUPER types you can only use the unnest syntax select item from table as t, t.items as item. I genuinely don't think this is the intended way to query and unnest SUPER objects, but it's the only approach that worked for me (a sketch follows below).
This is described in some older version of the "Amazon Redshift Developer Guide".
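A sketch of what Solution 1 looks like in practice, reusing the table and the books column from the question (exact behavior depends on your Redshift/Spectrum setup, so treat this as a starting point rather than a recipe):

set json_serialization_enable to true;
set json_serialization_parse_nested_strings to true;

-- the SUPER column can now be selected and comes back serialized as JSON text
select name, date, books
from external_table_a1;

-- unnesting the array with the FROM-clause navigation syntax
select t.name, b
from external_table_a1 as t, t.books as b;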
Solution 2
When you run a query, Redshift tries to fit the output into one of the basic column data types. If the result of your query does not match any of those types, Redshift will not process the query. Hence, in order to convert a SUPER value to a compatible type, you have to unnest it (using Redshift's rather peculiar unnest syntax).
For me this works in certain cases, but I'm not always able to properly index arrays, nor can I access the array index (using the my_table.array_column as array_entry at array_index syntax).
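For completeness, the at syntax mentioned above is supposed to expose the array index during unnesting. A sketch with the question's table, with the caveat that this is exactly the part that only worked for me in certain cases:

-- b is each array element, idx its position within t.books
select t.name, b, idx
from external_table_a1 as t, t.books as b at idx;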

Gorm Jsonb type stored as bytea

I'm using a locally hosted Postgres DB to test queries against a Postgres DB in production. The production database has an info field of type jsonb, and I'm trying to mimic this schema locally when using gorm's AutoMigrate. The model I've defined is below:
import "github.com/jinzhu/gorm/dialects/postgres"
type Event struct {
...
Info postgres.Jsonb
...
}
But when I query JSON attributes, e.g. stmt.Where("info->>'attr' = value"), I get the following error:
...
Message:"operator does not exist: bytea ->> unknown", Detail:"", Hint:"No operator matches the given name and argument type(s). You might need to add explicit type casts.",
...
This query works in the production environment, however. It seems that the Info field is being stored as bytea instead of jsonb. I'm aware that I could do stmt.Where("encode(info, 'escape')::jsonb->>'attr' = value"), but I'd prefer to mimic the production environment more closely (if possible) rather than change the query to support these unit tests.
I've tried using type tags in the model (e.g. gorm:"type=jsonb") as well as defining my own JSON type implementing the valuer, scanner, and GormDataTypeInterface as suggested here. None of these approaches have automigrated the type as jsonb.
Is there any way to ensure AutoMigrate creates a table with type jsonb? Thanks!
I was facing the same problem: the JsonB type was automigrated to bytea. I solved it by adding the tag gorm:"type:jsonb". This is also mentioned in your question, but you're using gorm:"type=jsonb", which is not correct.
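A minimal sketch of the corrected model: gorm parses struct tags as colon-separated key:value pairs, so a type=jsonb tag is not recognized as a type override, while type:jsonb takes effect.

import "github.com/jinzhu/gorm/dialects/postgres"

type Event struct {
    // AutoMigrate now creates this column as jsonb instead of bytea
    Info postgres.Jsonb `gorm:"type:jsonb"`
}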

How can I prevent SQL injection with arbitrary JSONB query string provided by an external client?

I have a basic REST service backed by a PostgreSQL database with a table with various columns, one of which is a JSONB column that contains arbitrary data. Clients can store data filling in the fixed columns and provide any JSON as opaque data that is stored in the JSONB column.
I want to allow the client to query the database with constraints on both the fixed columns and the JSONB. It is easy to translate some query parameters like ?field=value and convert that into a parameterized SQL query for the fixed columns, but I want to add an arbitrary JSONB query to the SQL as well.
This JSONB query string could contain SQL injection; how can I prevent this? I think that, because the structure of the JSONB data is arbitrary, I can't use a parameterized query for this purpose. All the documentation I can find suggests using parameterized queries, and I can't find any useful information on how to actually sanitize the query string itself, which seems like my only option.
For example a similar question is:
How to prevent SQL Injection in PostgreSQL JSON/JSONB field?
But I can't apply the same solution as I don't know the structure of the JSONB or the query, I can't assume the client wants to query a particular path using a particular operator, the entire JSONB query needs to be freely provided by the client.
I'm using golang, in case there are any existing libraries or code fragments that I can use.
edit: some example queries on the JSONB that the client might do:
(content->>'company') is NULL
(content->>'income')::numeric>80000
content->'company'->>'name'='EA' AND (content->>'income')::numeric>80000
content->'assets'#>'[{"kind":"car"}]'
(content->>'DOB')::TIMESTAMP<'2000-01-30T10:12:18.120Z'::TIMESTAMP
EXISTS (SELECT FROM jsonb_array_elements(content->'assets') asset WHERE (asset->>'value')::numeric > 100000)
Note that these don't cover all possible types of queries. Ideally I want to allow any query that PostgreSQL supports on the JSONB data. I just want to check the query to ensure it doesn't contain SQL injection. For example, a simplistic and probably inadequate solution would be to not allow any ";" in the query string.
You could allow the users to specify a path within the JSON document, and then parameterize that path within a call to a function like json_extract_path_text. That is, the WHERE clause would look like:
WHERE json_extract_path_text(data, $1) = $2
The path argument is just a string, easily parameterized, which describes the keys to traverse down to the given value, e.g. 'foo.bars[0].name'. The right-hand side of the clause would be parameterized along the same rules as you're using for fixed column filtering.
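A sketch of that approach in Go with lib/pq (the documents table, its columns, and the helper are hypothetical). jsonb_extract_path_text is the jsonb variant of the function named above; both the path and the comparison value travel as bind parameters, so neither can inject SQL:

package jsonquery

import (
	"database/sql"

	"github.com/lib/pq"
)

// FindByJSONPath filters on a client-supplied JSON path and value without
// ever interpolating either into the SQL text.
func FindByJSONPath(db *sql.DB, path []string, want string) (*sql.Rows, error) {
	// VARIADIC lets the whole path be bound as a single text[] parameter,
	// e.g. path = []string{"company", "name"} for content->'company'->>'name'.
	const q = `SELECT id, content
	           FROM documents
	           WHERE jsonb_extract_path_text(content, VARIADIC $1::text[]) = $2`
	return db.Query(q, pq.Array(path), want)
}

As the answer implies, this buys safety by restricting the grammar: an equality check on one extracted path, rather than the free-form predicates shown in the question's edit.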

Parametric query and hstore in PostgreSQL

I have a query with one parameter and am using jmoiron/sqlx to run it against a Nominatim database that has an hstore field "name". The query itself is like
SELECT place_id, parent_place_id, name->'name:ru' as name from placesx WHERE admin_level = 3 and parent_place_id IN (?)
The problem is that when I use the sqlx.In, sqlx.Bind and sqlx.Prepare functions, sqlx takes :ru as a named query parameter and complains about it.
The question is: how can this be avoided, so that I can retrieve a specific locale value ('name:en', 'name:de', etc.) from the hstore without this collision?
So far I use a regular expression, and I do not unmarshal the string into hstore's map[string]string, since I couldn't figure out how to retrieve a value from it by key.
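One way to avoid the collision (a sketch, assuming the parent IDs arrive as a slice such as []int64) is to bind the hstore key itself as a value, so that no literal :ru remains in the query text for sqlx to misparse; the hstore -> operator accepts a bound text key:

query, args, err := sqlx.In(
	`SELECT place_id, parent_place_id, name -> ? AS name
	 FROM placesx
	 WHERE admin_level = 3 AND parent_place_id IN (?)`,
	"name:ru",  // the locale key is bound as a value: "name:en", "name:de", ...
	parentIDs,  // the slice is expanded in place by sqlx.In
)
// handle err, then rewrite ? placeholders into $1, $2, ... for PostgreSQL
query = db.Rebind(query)
rows, err := db.Queryx(query, args...)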

Returning count of updated rows when upserting to a Postgres table using jOOQ

I am upserting some data to a Postgres table using jOOQ's insertInto() and onDuplicateKeyUpdate() methods. I want to know afterwards how many duplicates were in my data, and hence need to return whether each row was inserted or updated.
From my Postgres-specific research so far, I found RETURNING (not MY_TABLE.xmax = 0) AS updated to be a valid option. However, the auto-generated Java table classes from jOOQ don't seem to give me access to Postgres system columns like xmax.
Here is my query so far:
dsl.insertInto(MY_TABLE)
    .columns(
        // pkey columns
        MY_TABLE.SHIFT,
        MY_TABLE.DATE_UTC,
        MY_TABLE.TIME_UTC,
        MY_TABLE.DURATION,
    )
    .values(
        shiftId,
        utcDateId,
        utcTime,
        duration
    )
    .onDuplicateKeyUpdate()
    .set(MY_TABLE.DURATION, newDuration)
    .returning((MY_TABLE.XMAX = 0).`as`("inserted"))
    .execute()
This causes the following compile-time error:
Error: Kotlin: Unresolved reference: XMAX
I have rechecked my Maven jOOQ table generation configuration and I am not excluding any columns. I have also read through everything I could find on jOOQ's own website but found no useful information for this specific use-case.
Any tips on what I could do here?
In this case you should use jOOQ's plain SQL templating. Specifically, look at the DSL.field() method. Something like this: field("my_table.xmax", Int::class.java).eq(0). A fuller sketch follows below.
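A Kotlin sketch of how that could slot into the question's upsert (untested; it assumes the xmax = 0 trick from the question, i.e. that xmax is zero exactly for rows created by this statement, and uses jOOQ's field(sql, dataType) and returningResult() API):

import org.jooq.impl.DSL.field
import org.jooq.impl.SQLDataType

val inserted: Boolean? = dsl.insertInto(MY_TABLE)
    .columns(MY_TABLE.SHIFT, MY_TABLE.DATE_UTC, MY_TABLE.TIME_UTC, MY_TABLE.DURATION)
    .values(shiftId, utcDateId, utcTime, duration)
    .onDuplicateKeyUpdate()
    .set(MY_TABLE.DURATION, newDuration)
    // templated system column: true when the row was inserted,
    // false when the upsert took the update path
    .returningResult(field("(xmax = 0)", SQLDataType.BOOLEAN).`as`("inserted"))
    .fetchOne()
    ?.value1()

Counting how many rows were updated rather than inserted over a whole batch is then just a matter of counting the false values.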