Postgres where json column "in" casting json to uuid

Given the following data structure, I am trying to count the number of answers to question-type messages.
Questions are identified by a non-null node or options column; answers are identified by a non-null previous.
Ideally I'm hoping to return:
| message | answer | count |
|---------------------|---------------|-------|
| Stuffed crust? | Crunchy crust | 2 |
| Stuffed crust? | More cheese! | 1 |
| Pineapple on pizza? | No | 3 |
| Pineapple on pizza? | Yes | 2 |
I assume that once I work out how to get around the casting error below, I can work out the counting and grouping, but I can't seem to get that far yet.
Query 1 ERROR: operator does not exist: json = uuid
LINE 24: where previous->'id'::text in (
^
HINT: No operator matches the given name and argument types. You might need to add explicit type casts.
WITH data (
  id,
  message,
  node,
  options,
  previous
) AS (
  VALUES
    ('5f0a50c7-2736-45a2-81c0-fad1ca62cbdc'::uuid, 'No', null, null, '{"id": "20c98b37-6cf3-47d1-b93a-606b99bb341a", "node": "pineapple"}'::json),
    ('ec7cd365-e206-4f21-be37-495914458313'::uuid, 'Yes', null, null, '{"id": "20c98b37-6cf3-47d1-b93a-606b99bb341a", "node": "pineapple"}'::json),
    ('56240ea2-6bc7-435e-b76f-c874084a234c'::uuid, 'No', null, null, '{"id": "20c98b37-6cf3-47d1-b93a-606b99bb341a", "node": "pineapple"}'::json),
    ('670d6d09-89d6-4063-ace7-e606f18c2cc2'::uuid, 'Yes', null, null, '{"id": "20c98b37-6cf3-47d1-b93a-606b99bb341a", "node": "pineapple"}'::json),
    ('25acbc4c-dd27-412c-86b2-8882c80b9c73'::uuid, 'No', null, null, '{"id": "20c98b37-6cf3-47d1-b93a-606b99bb341a", "node": "pineapple"}'::json),
    ('e7ff8b2b-cc4d-4006-a3c4-9efdc8e458db'::uuid, 'More cheese!', null, null, '{"id": "b18059f0-6d38-4898-bbb7-ebdd7e175b82", "node": "stuffed_crust"}'::json),
    ('c3aee52f-e30e-4c83-8c90-9ff890dd0e72'::uuid, 'Crunchy crust', null, null, '{"id": "b18059f0-6d38-4898-bbb7-ebdd7e175b82", "node": "stuffed_crust"}'::json),
    ('965f9936-284f-4e57-838d-bcf90f119a9c'::uuid, 'Crunchy crust', null, null, '{"id": "b18059f0-6d38-4898-bbb7-ebdd7e175b82", "node": "stuffed_crust"}'::json),
    -- questions
    ('b18059f0-6d38-4898-bbb7-ebdd7e175b82'::uuid, 'Stuffed crust?', 'stuffed_crust', '["Crunchy crust","More cheese!"]'::json, null::json),
    ('20c98b37-6cf3-47d1-b93a-606b99bb341a'::uuid, 'Pineapple on pizza?', 'pineapple', '["Yes","No"]'::json, null::json)
)
SELECT * from data
where previous->'id'::uuid in (
  SELECT id::uuid FROM data WHERE options is not null
);
Update
Having had my casting question answered, the query I used to achieve the results I wanted is as follows:
select d2.message as question, data.message, count(data.message)
from data
join data as d2 on (data.previous->>'id')::uuid = d2.id
where (data.previous->>'id')::uuid in (
  SELECT id FROM data WHERE options is not null
)
group by question, data.message;

The :: operator has a higher precedence than the -> operator, so you need parentheses there. You also need to extract the ID as text with ->> rather than as json, since there is no direct cast from json to uuid:
where (previous->>'id')::uuid IN (...)
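To see the precedence problem in isolation (a quick sketch, not from the original post):
-- The cast binds tighter than ->, so this parses as previous -> ('id'::text)
-- and still yields json:
select pg_typeof('{"id": "abc"}'::json -> 'id'::text);  -- json
-- Parenthesize and use ->> to get text, which does cast to uuid:
select ('{"id": "5f0a50c7-2736-45a2-81c0-fad1ca62cbdc"}'::json ->> 'id')::uuid;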

Related

update jsonb column from values in another table

I created a migration to convert a jsonb column into a one-to-many table.
-- upgrade
insert into device_component (
  warranty_request_uuid,
  serial_number,
  component_type,
  description
)
select
  warranty_request.uuid,
  value->>'serial_number',
  value->>'type',
  value->>'description'
from
  warranty_request,
  jsonb_array_elements(warranty_request.device_components)
I need to provide a corresponding downgrade statement. To revert the migration, I am trying something like the below.
-- downgrade
update
  warranty_request
set
  device_components = jsonb_set(
    device_components,
    '{}',
    jsonb_build_object(
      'serial_number', device_component.serial_number,
      'type', device_component.component_type,
      'description', device_component.description
    )
  )
from
  device_component
where
  warranty_request.uuid = device_component.warranty_request_uuid
The problem is that the device_components column contains only null values after downgrading, so nothing is restored.
How should the downgrade statement look to make this work?
I want to upgrade and downgrade between these 2 formats.
Upgrade:
# warranty_request
uuid
-----
abc
# device_component
uuid | warranty_request_uuid | serial_number | device_type | description
-----|-----------------------|---------------|-------------|------------
efg | abc | 1 | foo | bar
hij | abc | 2 | foo | bar
Downgrade:
# warranty_request
uuid | device_components
------|----------------------------------------------------------------------------------------------------------------------
abc | [{"serial_number": 1, "type": "foo", "description": bar}, {"serial_number": 2, "type": "foo", "description": bar}]
You can try this:
UPDATE
  warranty_request AS w
SET
  device_components = a.device_components
FROM
  (
    SELECT d.warranty_request_uuid,
           jsonb_agg(jsonb_build_object(
             'serial_number', d.serial_number,
             'type', d.device_type,
             'description', d.description
           )) AS device_components
    FROM device_component AS d
    GROUP BY d.warranty_request_uuid
  ) AS a
WHERE w.uuid = a.warranty_request_uuid
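As for why the jsonb_set attempt produced only NULLs: jsonb_set is strict, so a NULL target (which device_components presumably is once the upgrade has run) yields NULL, and the empty path '{}' addresses no element anyway. A quick demonstration (my own illustration, not from the original answer):
select jsonb_set(null::jsonb, '{}', '{"serial_number": 1}'::jsonb); -- NULL
select jsonb_set('{"a": 1}'::jsonb, '{}', '"x"'::jsonb);            -- {"a": 1}, unchanged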

Postgresql: Can the minus operator not be used with a parameter? Only hardcoded values?

The following query deletes an entry using index:
const deleteGameQuery = `
  update users
  set games = games - 1
  where username = $1
`
If I pass the index as a parameter, nothing is deleted:
const gameIndex = rowsCopy[0].games.findIndex(obj => obj.game == gameID).toString();
const deleteGameQuery = `
  update users
  set games = games - $1
  where username = $2
`
const { rows } = await query(deleteGameQuery, [gameIndex, username]);
ctx.body = rows;
The gameIndex parameter is just a string, the same as if I typed it. So why doesn't it seem to read the value? Is this not allowed?
The column games is a jsonb data type with the following data:
[
  {
    "game": "cyberpunk-2077",
    "status": "Backlog",
    "platform": "Any"
  },
  {
    "game": "new-pokemon-snap",
    "status": "Backlog",
    "platform": "Any"
  }
]
The problem is that you're passing text instead of an integer. I'm not sure exactly how your database interface passes integers, but try removing toString() and ensuring gameIndex is a Number:
const gameIndex = rowsCopy[0].games.findIndex(obj => obj.game == gameID);
array - integer and array - text mean two different things.
array - 1 removes the second element from the array.
select '[1,2,3]'::jsonb - 1;
[1, 3]
array - '1' searches for the entry '1' and removes it.
select '["1","2","3"]'::jsonb - '1';
["2", "3"]
-- Here, nothing is removed because 1 != '1'.
select '[1,2,3]'::jsonb - '1';
[1, 2, 3]
When you pass in a parameter, query translates it according to its JavaScript type: a Number is sent as 1, a String as '1'. (Or at least that's how it should work; I'm not totally familiar with JavaScript database libraries.)
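If the driver does end up sending text, an explicit cast inside the SQL also forces the integer form of the operator (a sketch, assuming gameIndex arrives as a string):
update users
set games = games - $1::int  -- jsonb - integer: remove by array index
where username = $2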
As a side note, this sort of data is better handled as a join table.
create table games (
  id bigserial primary key,
  name text not null,
  status text not null,
  platform text not null
);
create table users (
  id bigserial primary key,
  username text not null
);
create table game_users (
  game_id bigint not null references games,
  user_id bigint not null references users,
  -- If a user can only have each game once.
  unique (game_id, user_id)
);
-- User 1 has games 1 and 2. User 2 has game 2.
insert into game_users (game_id, user_id) values (1, 1), (2, 1), (2,2);
-- User 1 no longer has game 1.
delete from game_users where game_id = 1 and user_id = 1;
You would also have a platforms table and a game_platforms join table.
Join tables are a little mind bending, but they're how SQL stores relationships. JSONB is very useful, but it is not a substitute for relationships.
You can try to avoid decomposing objects outside of Postgres and manipulate the jsonb structure inside the query, like this:
create table gameplayers as (
  select 1 as id,
         '[
            {"game": "cyberpunk-2077", "status": "Backlog", "platform": "Any"},
            {"game": "new-pokemon-snap", "status": "Backlog", "platform": "Any"},
            {"game": "gameone", "status": "Backlog", "platform": "Any"}
          ]'::jsonb as games
);
with
  ungrouped as (
    select *
    from gameplayers g, jsonb_to_recordset(g.games)
           as (game text, status text, platform text)
  ),
  filtered as (
    select id,
           jsonb_agg(
             json_build_object('game', game,
                               'status', status,
                               'platform', platform)
           ) as games
    from ungrouped
    where game not like 'cyberpunk-2077'
    group by id
  )
UPDATE gameplayers as g
SET games = f.games
FROM filtered f
WHERE f.id = g.id;

Different path formats for PostgreSQL JSONB functions

I'm confused by how paths use different formats depending on the function in the PostgreSQL JSONB documentation.
If I had a PostgreSQL table foo that looks like
| pk | json_obj |
|----|----------|
| 0 | {"values": [{"id": "a_b", "value": 5}, {"id": "c_d", "value": 6}]} |
| 1 | {"values": [{"id": "c_d", "value": 7}, {"id": "e_f", "value": 8}]} |
Why does this query give me these results?
SELECT json_obj,                                          -- {"values": [{"id": "a_b", "value": 5}, {"id": "c_d", "value": 6}]}
       json_obj @? '$.values[*].id',                      -- true
       json_obj #> '$.values[*].id',                      -- ERROR: malformed array literal
       json_obj #> '{values, 0, id}',                     -- "a_b"
       JSONB_SET(json_obj, '$.annotations[*].id', '"hi"') -- ERROR: malformed array literal
FROM foo;
Specifically, why does @? support $.values[*].id (described on that page in another section) while JSONB_SET uses some other path format like {bar,3,baz}?
Ultimately, what I would like to do and don't know how, is to remove non-alphanumeric characters (e.g. underscores in this example) in all id values represented by the path $.values[*].id.
The reason is that the operators have different data types on the right hand side.
SELECT oprname, oprright::regtype
FROM pg_operator
WHERE oprleft = 'jsonb'::regtype
  AND oprname IN ('@?', '#>');

 oprname | oprright
---------+----------
 #>      | text[]
 @?      | jsonpath
(2 rows)
Similarly, the second argument of jsonb_set is a text[].
Now '$.values[*].id' is a valid jsonpath, but not a valid text[] literal.
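To make the difference concrete (a minimal sketch, not from the original answer):
-- #> walks a text[] of keys and array indexes:
SELECT '{"values": [{"id": "a_b"}]}'::jsonb #> '{values,0,id}';   -- "a_b"
-- @? evaluates a jsonpath expression against the document:
SELECT '{"values": [{"id": "a_b"}]}'::jsonb @? '$.values[*].id';  -- true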
Thanks for the answers and comments about why the data types were different.
I wanted to post how I solved my problem:
Ultimately, what I would like to do and don't know how, is to remove
non-alphanumeric characters (e.g. underscores in this example) in all
id values represented by the path $.values[*].id.
WITH unnested AS (
  SELECT f.pk, JSONB_ARRAY_ELEMENTS(f.json_obj -> 'values') AS value
  FROM foo f
),
updated_values AS (
  SELECT un.pk,
         JSONB_SET(un.value, '{id}',
                   TO_JSONB(LOWER(REGEXP_REPLACE(un.value ->> 'id', '[^a-zA-Z0-9]', '', 'g'))),
                   FALSE) AS new_value
  FROM unnested un
  WHERE value -> 'id' IS NOT NULL -- Had some values that didn't have 'id' keys
)
UPDATE foo f2
SET json_obj = JSONB_SET(f2.json_obj, '{values}',
                         (SELECT JSONB_AGG(uv.new_value) FROM updated_values uv WHERE uv.pk = f2.pk),
                         FALSE)
WHERE JSONB_PATH_EXISTS(f2.json_obj, '$.values[*].id') -- Had some values that didn't have 'id' keys
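To sanity-check the rewrite afterwards, jsonb_path_query_array can pull the ids back out (a quick sketch against the sample rows above, not part of the original solution):
SELECT pk, JSONB_PATH_QUERY_ARRAY(json_obj, '$.values[*].id') FROM foo;
-- pk 0 should now yield ["ab", "cd"] instead of ["a_b", "c_d"]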

Cannot insert NULL as Primary Key value in Postgresql for Flask Sqlalchemy after switching from SQlite

I recently started porting a SQLite database over to PostgreSQL for a Flask site built with SQLAlchemy. I have my schemas in PostgreSQL and have even inserted the data into the database. However, I am unable to run my usual INSERT commands to add information to the database. Normally, I insert new records using SQLAlchemy by leaving the ID column NULL and just setting the other columns. However, that results in the following error:
sqlalchemy.exc.IntegrityError: (psycopg2.IntegrityError) null value in column "id" violates not-null constraint
DETAIL: Failing row contains (null, 2017-07-24 20:40:37.787393+00, 2017-07-24 20:40:37.787393+00, episode_length_list = [52, 51, 49, 50, 83]
sum_length = 0
for ..., 0, f, 101, 1, 0, 0, , null).
[SQL: 'INSERT INTO submission (date_created, date_modified, code, status, correct, assignment_id, course_id, user_id, assignment_version, version, url) VALUES (CURRENT_TIMESTAMP, CURRENT_TIMESTAMP, %(code)s, %(status)s, %(correct)s, %(assignment_id)s, %(course_id)s, %(user_id)s, %(assignment_version)s, %(version)s, %(url)s) RETURNING submission.id'] [parameters: {'code': 'episode_length_list = [52, 51, 49, 50, 83]\n\nsum_length = 0\n\nfor episode_length in episode_length_list:\n pass\n\nsum_length = sum_length + episode_length\n\nprint(sum_length)\n', 'status': 0, 'correct': False, 'assignment_id': 101, 'course_id': None, 'user_id': 1, 'assignment_version': 0, 'version': 0, 'url': ''}]
Here are my SQLAlchemy table declarations:
class Base(Model):
    __abstract__ = True

    @declared_attr
    def __tablename__(cls):
        return cls.__name__.lower()

    def __repr__(self):
        return str(self)

    id = Column(Integer(), primary_key=True)
    date_created = Column(DateTime, default=func.current_timestamp())
    date_modified = Column(DateTime, default=func.current_timestamp(),
                           onupdate=func.current_timestamp())

class Submission(Base):
    code = Column(Text(), default="")
    status = Column(Integer(), default=0)
    correct = Column(Boolean(), default=False)
    assignment_id = Column(Integer(), ForeignKey('assignment.id'))
    course_id = Column(Integer(), ForeignKey('course.id'))
    user_id = Column(Integer(), ForeignKey('user.id'))
    assignment_version = Column(Integer(), default=0)
    version = Column(Integer(), default=0)
    url = Column(Text(), default="")
I created the schema by calling db.create_all() in a script.
Checking the PostgreSQL side, we can see the constructed table:
Table "public.submission"
Column | Type | Modifiers | Storage | Stats target | Description
--------------------+--------------------------+-----------+----------+--------------+-------------
id | bigint | not null | plain | |
date_created | timestamp with time zone | | plain | |
date_modified | timestamp with time zone | | plain | |
code | text | | extended | |
status | bigint | | plain | |
correct | boolean | | plain | |
assignment_id | bigint | | plain | |
user_id | bigint | | plain | |
assignment_version | bigint | | plain | |
version | bigint | | plain | |
url | text | | extended | |
course_id | bigint | | plain | |
Indexes:
"idx_16881_submission_pkey" PRIMARY KEY, btree (id)
Foreign-key constraints:
"submission_course_id_fkey" FOREIGN KEY (course_id) REFERENCES course(id)
"submission_user_id_fkey" FOREIGN KEY (user_id) REFERENCES "user"(id)
Has OIDs: no
I'm still new to this, but shouldn't there be a sequence?
Any insight or suggestions on what to look for next would be super appreciated.
It is standard SQL that a PRIMARY KEY is UNIQUE and NOT NULL. PostgreSQL enforces the standard and does not allow even a single NULL in a primary key column. Some other databases allow one NULL there, hence the different behaviour.
PostgreSQL current documentation on Primary Keys clearly states it:
5.3.4. Primary Keys
A primary key constraint indicates that a column, or group of columns, can be used as a unique identifier for rows in the table. This requires that the values be both unique and not null.
If you want your PRIMARY KEY to be a synthetic (i.e.: not natural) sequence number, you should define it with type BIGSERIAL instead of BIGINT. I don't know the details on how this is achieved using SQLAlchemy, but look at the references.
When you then INSERT into your table, the id should NOT be in the INSERT column list (it should not be set to null, just not be there). I.e.:
This will generate a new id:
INSERT INTO public.submission (code) VALUES ('Some code') ;
will work.
This won't:
INSERT INTO public.submission (id, code) VALUES (NULL, 'Some code') ;
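Putting the two together, a self-contained sketch of the behaviour (hypothetical table name, not the asker's schema):
create table submission_demo (
  id   bigserial primary key,  -- bigint + an owned sequence + a nextval() default
  code text default ''
);
insert into submission_demo (code) values ('Some code');    -- id is generated
insert into submission_demo (id, code) values (null, 'x');  -- ERROR: null value in column "id"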
I guess SQLAlchemy should be smart enough to generate the proper SQL INSERT statements, once properly configured.
Reference:
Why isn't SQLAlchemy creating serial columns?
Ultimately, I discovered what went wrong, and it was definitely my fault. The process I used to load the old data into the database (pgloader) was doing more than just loading data - it was somehow overwriting parts of the table definitions! I was able to pg_dump the data out, reset the tables, and then load it back in - everything works as expected. Thanks for sanity checks!

Updating PostgreSQL JSONB key value by adding 1 to existing key value

I'm getting an error while updating JSON data.
CREATE TABLE testTable
AS
SELECT $${
"id": 1,
"value": 100
}$$::jsonb AS jsondata;
I want to update value to 101 by adding 1. After visiting many websites, I found this statement:
UPDATE testTable
SET jsondata = JSONB_SET(jsondata, '{value}', (jsondata->>'value')::int + 1);
but the above gives the error "cannot convert jsonb to int", and my expected output is:
{
  "id": 1,
  "value": 101
}
Look at the signature of jsonb_set (using \df jsonb_set)
Schema | Name | Result data type | Argument data types | Type
------------+-----------+------------------+----------------------------------------------------------------------------------------+--------
pg_catalog | jsonb_set | jsonb | jsonb_in jsonb, path text[], replacement jsonb, create_if_missing boolean DEFAULT true | normal
What you want is this: the replacement value must itself be jsonb, so wrap the incremented integer with to_jsonb:
UPDATE testTable
SET jsondata = jsonb_set(
  jsondata,
  ARRAY['value'],
  to_jsonb((jsondata->>'value')::int + 1)
);
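For completeness, reading the row back should show the incremented value (a quick check, not part of the original answer):
SELECT jsondata FROM testTable;
-- {"id": 1, "value": 101}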