Postgres Update Number Values in a JSONB field to be text

I have a table with a JSONB column, and some of the values in it are numbers, but I want all the values to be text. For example, I have {"budget": 500}, but I want it to be {"budget": "500"}. I have tried using the JSONB_SET function, but even after Postgres reports N rows updated, the values are still numbers when I retrieve the records. I was hoping that somebody may have encountered this issue. Here's what I've tried that isn't working:
UPDATE my_table
SET data = JSONB_SET(data, '{budget}', data->'budget'::text)
WHERE data ? 'budget' = true;
Since this is a very large table, hardcoding values is not feasible. If anybody knows why this isn't working or if there is something that does work, please let me know, thank you!

You can force the conversion of a JSONB number to text with the function quote_ident():
UPDATE my_table
SET data = jsonb_set(data, '{budget}', quote_ident(data->>'budget')::jsonb)
WHERE data ? 'budget'
-- you can add this condition to avoid updating non-numbers
-- AND jsonb_typeof(data->'budget') = 'number'
Note that data->'budget'::text does nothing, as the cast applies to the string literal 'budget' rather than to the extracted JSON value, so the expression is equivalent to data->'budget'.
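As a quick standalone check (no table required), you can see the conversion on a literal value:
SELECT jsonb_set('{"budget": 500}'::jsonb, '{budget}', quote_ident('500')::jsonb);
-- returns {"budget": "500"}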

Related

How to update an attribute inside a JSONB column in PostgreSQL while keeping the data already inside

I have a situation where, in a table called tokens, I have a column called data.
The data column consists of something like this as a '{}'::jsonb:
{"recipientId": "xxxxxxxx"}
My goal is to update the old data to the new DB design and requirements:
{"recipientIds": ["xxxxxxxx"]}
The reason is that the naming was changed and the value will be an array of recipients.
I don't know how to achieve this:
change recipientId to recipientIds
change the value format to an array without losing the data
Also, this needs to be done only where I have a type in ('INVITE_ECONSENT_SIGNATURE', 'INVITE_ECONSENT_RECIPIENT').
The table, shown below, is a simple one which contains a few columns.
The data column is the only one typed as '{}'::jsonb.
id | type  | data
---|-------|------
 1 | type1 | data1
 2 | type2 | data1
As an edit: what I tried partially solved my problem, but I cannot understand how to set the value to be [value]:
update "token"
set "data" = data - 'recipientId' || jsonb_build_object('recipientIds', data->'recipientId')
where "type" in ('INVITE_ECONSENT_RECIPIENT')
I now get recipientIds: value, but I need recipientIds: [value].
You were close; you need to pass an array as the second parameter of jsonb_build_object(), built with jsonb_build_array():
(data - 'recipientId') || jsonb_build_object(
    'recipientIds',
    jsonb_build_array(data -> 'recipientId')
)
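Put together as a full statement (the data ? 'recipientId' guard is an optional addition, to skip rows that never had the old key):
UPDATE "token"
SET "data" = (data - 'recipientId')
             || jsonb_build_object('recipientIds', jsonb_build_array(data -> 'recipientId'))
WHERE "type" IN ('INVITE_ECONSENT_SIGNATURE', 'INVITE_ECONSENT_RECIPIENT')
  AND data ? 'recipientId';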

Sequelize how to use aggregate function on Postgres JSONB column

I have created one table with a JSONB column called "data".
A sample value of that column is:
[{field_id:1, value:10},{field_id:2, value:"some string"}]
Now there are multiple rows like this.
What I want:
I want to use an aggregate function on the "data" column such that I get:
the sum of all value where field_id = 1;
the avg of value where field_id = 1;
I have searched a lot on Google but have not been able to find a proper solution.
Sometimes it says "Field doesn't exist" and sometimes it says "from clause missing".
I tried referring to it as data.value, then as data -> value, and lastly as data ->> value,
but nothing is working.
Please let me know the solution if anyone knows.
Thanks in advance.
Your attributes should be something like this, so you instruct it to run the function on a specific value:
attributes: [
[sequelize.fn('sum', sequelize.literal("data->>'value'")), 'json_sum'],
[sequelize.fn('avg', sequelize.literal("data->>'value'")), 'json_avg']
]
Then in WHERE, you reference field_id in a similar way, using literal():
where: sequelize.literal("data->>'field_id' = 1")
Your example also included a string for the value of "value" which of course won't work. But if the basic Sequelize setup works on a good set of data, you can enhance the WHERE clause to test for numeric "value" data, there are good examples here: Postgres query to check a string is a number
Hopefully this gets you close. In my experience with Sequelize + Postgres, it helps to run the program in such a way that you see what queries it creates, like in a terminal where the output is streaming. On the way to a working statement, you'll either create objects which Sequelize doesn't like, or Sequelize will create bad queries which Postgres doesn't like. If the query looks close, take it into pgAdmin for further work, then try to reproduce your adjustments in Sequelize. Good luck!
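If it helps to start in pgAdmin, the raw SQL those attributes and that WHERE aim at is roughly the following (the table name is an assumption, and the ::numeric casts are added here so the aggregates can run on values extracted as text with ->>; if the column really holds an array as in the sample, you would first need to unnest it with jsonb_array_elements):
-- Assumed table name; casts added so sum/avg work on text extracted from JSONB
SELECT sum((data->>'value')::numeric) AS json_sum,
       avg((data->>'value')::numeric) AS json_avg
FROM my_table
WHERE data->>'field_id' = '1';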

Set nextval sequence data type to integer only

I have an issue running around my mind regarding the default for the 'id' field in my PostgreSQL database. Here is the syntax:
nextval('unsub_keyword_id_seq'::regclass)
However, I don't really understand it even after reading the documentation, and I would like to set the value to integers (digits) only. I tried to alter the column by changing regclass to other OIDs, but each time it returns errors.
I'd really appreciate getting this solved soon.
Update:
After some trial & error with the code that produces the id for the column, a question about the column's data type came to mind:
Does integer (PostgreSQL, in this case) have its own default length or not?
If I need to insert a long id, should I set the column length?
Kindly advise.
Sorry if my questions are quite confusing; your comments may help me improve them.
From the comments:
I need to insert an id with a length of 50, consisting of 2 letters with the rest numeric. The problem occurs because the data type is integer, and the insert fails. Is it possible to insert my desired data while keeping the data type as integer?
If I understand this correctly, you probably need to format a string, e.g.
format('%s%s', 'XX', nextval('some_sequence_name'))
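For example, assuming the table is called unsub_keyword and you are willing to store the id as text rather than integer, a rough sketch could be:
-- Hypothetical sketch: table name assumed; id changed to text so a prefixed value fits
ALTER TABLE unsub_keyword ALTER COLUMN id DROP DEFAULT;
ALTER TABLE unsub_keyword ALTER COLUMN id TYPE text USING id::text;
ALTER TABLE unsub_keyword ALTER COLUMN id
    SET DEFAULT format('%s%s', 'XX', nextval('unsub_keyword_id_seq'));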

Temp Tables Calculating Fields

I am joining two tables and outputting to a CSV file. This has worked OK, but I would like to create a calculated field (an integer field multiplied by a decimal field) and output that as one of the columns.
I am struggling at the moment to calculate the field and store it.
CREATE TEMP-TABLE tth2.
tth2:CREATE-LIKE(buf-woins-hndl).
tth2:ADD-LIKE-FIELD("ttqtyhrs","work_order.est_ltime").
tth2:TEMP-TABLE-PREPARE("ordx2").
bh2 = tth2:DEFAULT-BUFFER-HANDLE.
FOR EACH wo_instr NO-LOCK:
bh2:BUFFER-CREATE.
bh2:BUFFER-COPY(buf-woins-hndl).
ASSIGN bh2:BUFFER-VALUE("ttqtyhrs") = bh2:BUFFER-VALUE ("craft_nbr") *
bh2:BUFFER-VALUE("std_hrs").
END.
I am trying to store the result of the calculation in the temp table field ttqtyhrs.
I get an error message
Invalid datatype for argument to method 'BUFFER-VALUE'. Expecting 'integer' (5442)
when I try to compile.
I would be grateful for any pointers
Andy
Most likely you want to do something like this:
ASSIGN
bh2:BUFFER-FIELD("ttqtyhrs"):BUFFER-VALUE() = bh2:BUFFER-FIELD("craft_nbr"):BUFFER-VALUE() * bh2:BUFFER-FIELD("std_hrs"):BUFFER-VALUE().
BUFFER-VALUE takes an integer argument representing the index if the field is an extent/array, which is why passing a field name there fails with error 5442. You need to pinpoint the field with BUFFER-FIELD first!

replacing characters in a CLOB column (db2)

I have a CLOB(2000000) field in a DB2 (v10) database, and I would like to run a simple UPDATE query on it to replace each occurrence of "foo" with "baaz".
Since the content of the field is more than 32k, I get the following error:
"{some char data from field}" is too long.. SQLCODE=-433, SQLSTATE=22001
How can I replace the values?
UPDATE:
The query was the following (changed UPDATE into SELECT for easier testing):
SELECT REPLACE(my_clob_column, 'foo', 'baaz') FROM my_table WHERE id = 10726
UPDATE 2
As mustaccio pointed out, REPLACE does not work on CLOB fields (or at least not without casting the data to VARCHAR, which in my case is not possible since the size of the data is more than 32k). The question is about finding an alternative way to achieve the REPLACE functionality for CLOB fields.
Thanks,
krisy
Finally, since I found no way to do this with an SQL query, I ended up exporting the table, editing its LOB content in Notepad++, and importing the table back again.
Not sure if this applies to your case: there are 2 different REPLACE functions offered by DB2, SYSIBM.REPLACE and SYSFUN.REPLACE. The version of REPLACE in SYSFUN accepts CLOBs and supports values up to 1 MByte. In case your values are longer, you would need to write your own (SQL-based?) function.
BTW: You can check function resolution by executing "values(current path)"
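So one thing to try before resorting to export/import (a sketch only, not verified on DB2 v10, using the table and column names from the question) is to qualify the SYSFUN version explicitly:
-- The SYSFUN version of REPLACE accepts CLOB arguments up to ~1 MB (per the note above)
UPDATE my_table
SET my_clob_column = SYSFUN.REPLACE(my_clob_column, 'foo', 'baaz')
WHERE id = 10726;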