Postgres: update value of TEXT column (CLOB)

I have a column of type TEXT which is supposed to represent a CLOB value and I'm trying to update its value like this:
UPDATE my_table SET my_column = TEXT 'Text value';
Normally this column is written and read by Hibernate and I noticed that values written with Hibernate are stored as integers (perhaps some internal Postgres reference to the CLOB data).
But when I try to update the column with the above SQL, the value is stored as a string and when Hibernate tries to read it, I get the following error: Bad value for type long : ["Text value"]
I tried all the options described in this answer but the result is always the same. How do I insert/update a TEXT column using SQL?

In order to update a CLOB created by Hibernate, you should use the functions for handling large objects.
The documentation can be found at the following links:
https://www.postgresql.org/docs/current/lo-interfaces.html
https://www.postgresql.org/docs/current/lo-funcs.html
Examples:
To query:
select mytable.*, convert_from(loread(lo_open(mycblobfield::int, x'40000'::int), x'40000'::int), 'UTF8') from mytable where mytable.id = 4;
Note:
x'40000' corresponds to read-only mode (INV_READ).
To Update:
select lowrite(lo_open(16425, x'60000'::int), convert_to('this an updated text','UTF8'));
Note:
x'60000' corresponds to read and write mode (INV_READ + INV_WRITE).
The number 16425 is an example loid (large object id) that already exists in a record in your table. It's the integer value you can see in the blob field created by Hibernate.
Also be aware that lowrite() starts writing at the beginning of the object but does not truncate it, so writing a shorter text than the existing one leaves the tail of the old value in place.
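If you don't know the loid for a row, you can read it straight from the table; a minimal sketch, assuming the question's my_table/my_column names and an id key column (an assumption):
SELECT my_column::oid AS loid  -- the text column stores the loid as a number
FROM my_table
WHERE id = 4;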
To Insert:
select lowrite(lo_open(lo_creat(-1), x'60000'::int), convert_to('this is a new text','UTF8'));
Note:
lo_creat(-1) generates a new large object and returns its loid.
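Putting it together for the question's table, a new row can be created in a single statement; this is a sketch, assuming my_table has an id column, that my_column stores the loid, and that lo_from_bytea() is available (Postgres 9.4+):
-- lo_from_bytea(0, ...) creates a new large object from a bytea value;
-- passing 0 as the first argument lets the server choose the loid
INSERT INTO my_table (id, my_column)
VALUES (1, lo_from_bytea(0, convert_to('Text value', 'UTF8')));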

Related

Postgres: Unable to extract data from a bytea column which stores json array data

I'm trying to extract data from a bytea column which stores JSON data, in Postgres 11.9.
However, my code is throwing an error:
ERROR: invalid input syntax for type json
DETAIL: Token "" is invalid.
CONTEXT: JSON data, line 1: ...
Here is the sample data:
create table EMPLOYEE (PAYMENT bytea,NAME character varying);
insert into EMPLOYEE
values ('[{"totalCode":{"code":"EMPLOYER_TAXES"},"totalValue":{"amount":122.5,"currencyCode":"USD"}},{"totalCode":{"code":"OTHER_PAYMENTS"},"totalValue":{"amount":0.0,"currencyCode":"USD"}},{"totalCode":{"code":"GROSS_PAY"},"totalValue":{"amount":1000.0,"currencyCode":"USD"}},{"totalCode":{"code":"TOTAL_HOURS"},"totalValue":{"amount":40.0}}]'::bytea,'Tom')
;
Here is my query:
SELECT *
FROM EMPLOYEE left outer join lateral
jsonb_array_elements(PAYMENT::text::jsonb) element1 on true ;
Please help me access the data from this array. The data is always in JSON format.
There was a restriction requiring bytea for this column.
You are making your life unnecessarily hard by storing JSON values in a bytea column. Just because this is the recommended way in Oracle doesn't mean it is a good choice for Postgres.
The correct solution is to change that column to jsonb. You will have to have a DBMS-specific layer in your application anyway, as the actual functions and operators you are using are very different.
Having said that, you can get away with this awful choice by using the convert_from() function:
select e.name, element1.*
from employee e
left join lateral jsonb_array_elements(convert_from(PAYMENT, 'UTF-8')::jsonb) element1 on true;
I also think you should change your INSERT statement to do an explicit conversion from text to bytea so that you can be sure the correct encoding is used:
insert into employee (payment, name)
values (convert_to('[{...}]', 'UTF-8'),'Tom');
But again: the only correct solution is to change that column to jsonb (or at least json).
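Should that restriction ever be lifted, the one-time migration can be a single statement; a sketch, assuming every stored value is valid UTF-8 encoded JSON:
ALTER TABLE employee
  ALTER COLUMN payment TYPE jsonb
  USING convert_from(payment, 'UTF-8')::jsonb;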

DB2: invalid use of one of the following: an untyped parameter marker, the DEFAULT keyword, or a null value

A user can only modify the ST_ASSMT_NM and CAN_DT columns in the ST_ASSMT_REF record. In our system, we keep history in the same table and we never really update a record; we just insert a new row to represent the updated record. As a result, the "active" record is the record with the greatest LAST_TS timestamp value for a VENDR_ID. To prevent the possibility of an update to columns that cannot be changed, I wrote the logical UPDATE so that it retrieves the non-changeable values from the original record and copies them to the new one being created. For the fields that can be modified, I pass them as params:
INSERT INTO GSAS.ST_ASSMT_REF
(
VENDR_ID
,ST_ASSMT_NM
,ST_CD
,EFF_DT
,CAN_DT
,LAST_TS
,LAST_OPER_ID
)
SELECT
ORIG_ST_ASSMT_REF.VENDR_ID
,#ST_ASSMT_NM
,ORIG_ST_ASSMT_REF.ST_CD
,ORIG_ST_ASSMT_REF.EFF_DT
,#CAN_DT
,CURRENT TIMESTAMP
,#LAST_OPER_ID
FROM
(
SELECT
ST_ASSMT_REF_ACTIVE_V.VENDR_ID
,ST_ASSMT_REF_ACTIVE_V.ST_ASSMT_NM
,ST_ASSMT_REF_ACTIVE_V.ST_CD
,ST_ASSMT_REF_ACTIVE_V.EFF_DT
,ST_ASSMT_REF_ACTIVE_V.CAN_DT
,CURRENT TIMESTAMP
,ST_ASSMT_REF_ACTIVE_V.LAST_OPER_ID
FROM
G2YF.ST_ASSMT_REF_ACTIVE_V ST_ASSMT_REF_ACTIVE_V --The view of only the most recent, active records
WHERE
ST_ASSMT_REF_ACTIVE_V.VENDR_ID = #VENDR_ID
) ORIG_ST_ASSMT_REF;
However, I am getting this error from the DB2 SP:
ERROR [42610] [IBM][DB2] SQL0418N The statement was not processed because the statement contains an invalid use of one of the following: an untyped parameter marker, the DEFAULT keyword, or a null value.
It appears as though DB2 will not allow me to use a variable in a SELECT statement. For example, when I do this in TOAD for DB2:
select 1, #vendorId from SYSIBM.SYSDUMMY1
I get a popup dialog box. When I provide any string value, I get the same error.
I usually use SQL Server and I'm pretty sure I wouldn't have an issue doing this there, but I am not sure how to handle it here.
Suggestions? I know that I could do this in two separate commands: a SELECT query to retrieve the original values, then supply the returned values and the modified ones to the INSERT command. But I should be able to do this in one. Why can't I?
As you mentioned in your comment, DB2 is really picky about data types, and it wants you to cast your variables into the right data types. Even if you are passing in NULLs, sometimes DB2 wants you to cast the NULL to the data type of the target column.
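Applied to the statement above, that means wrapping each parameter marker in a CAST to the target column's type; a sketch, where the VARCHAR lengths and the DATE type are assumptions to be replaced with the actual column definitions:
SELECT
ORIG_ST_ASSMT_REF.VENDR_ID
,CAST(#ST_ASSMT_NM AS VARCHAR(100))  -- assumed length
,ORIG_ST_ASSMT_REF.ST_CD
,ORIG_ST_ASSMT_REF.EFF_DT
,CAST(#CAN_DT AS DATE)               -- assumed type
,CURRENT TIMESTAMP
,CAST(#LAST_OPER_ID AS VARCHAR(30))  -- assumed length
FROM
... -- same ORIG_ST_ASSMT_REF subquery as above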
Here is another answer I have on the topic.

Show all numeric rows (or vice-versa) in PostgreSQL

I have a table named "temp_table" and a column named "temp_column" of type varchar. The problem is that "temp_column" must be of type integer. If I simply alter the column to type integer, it will generate an error, since some rows contain non-numeric data.
I want a query that will show all rows where "temp_column" has non-numeric values in it (or the other way around) and update or SET the values accordingly. I'm having a hard time since ISNUMERIC is not available in PostgreSQL.
How can I do this?
This will show all rows where you have non-integer values in that column. It uses a regular expression to find all values that contain anything other than digits:
select *
from temp_table
where temp_column ~ '[^0-9]';
This can also be used in an UPDATE statement:
update temp_table
set temp_column = null
where temp_column ~ '[^0-9]';
This will also filter out "numeric" values like 3.14 as those aren't integers.
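Once the offending values are nulled out, the column type itself can be changed in one step; a sketch, where nullif() guards against empty strings, which contain no non-digit characters and therefore survive the regex filter:
ALTER TABLE temp_table
ALTER COLUMN temp_column TYPE integer
USING nullif(temp_column, '')::integer;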

How to perform update operations on columns of type JSONB in Postgres 9.4

Looking through the documentation for the Postgres 9.4 datatype JSONB, it is not immediately obvious to me how to do updates on JSONB columns.
Documentation for JSONB types and functions:
http://www.postgresql.org/docs/9.4/static/functions-json.html
http://www.postgresql.org/docs/9.4/static/datatype-json.html
As an example, I have this basic table structure:
CREATE TABLE test(id serial, data jsonb);
Inserting is easy, as in:
INSERT INTO test(data) values ('{"name": "my-name", "tags": ["tag1", "tag2"]}');
Now, how would I update the 'data' column? This is invalid syntax:
UPDATE test SET data->'name' = 'my-other-name' WHERE id = 1;
Is this documented somewhere obvious that I missed? Thanks.
If you're able to upgrade to PostgreSQL 9.5, the jsonb_set function is available, as others have mentioned.
In each of the following SQL statements, I've omitted the where clause for brevity; obviously, you'd want to add that back.
Update name:
UPDATE test SET data = jsonb_set(data, '{name}', '"my-other-name"');
Replace the tags (as opposed to adding or removing tags):
UPDATE test SET data = jsonb_set(data, '{tags}', '["tag3", "tag4"]');
Replacing the second tag (0-indexed):
UPDATE test SET data = jsonb_set(data, '{tags,1}', '"tag5"');
Append a tag (originally this worked only as long as there were fewer than 999 tags; changing the index argument to 1000 or above generated an error. This no longer appears to be the case as of Postgres 9.5.3, and a much larger index can be used):
UPDATE test SET data = jsonb_set(data, '{tags,999999999}', '"tag6"', true);
Remove the last tag:
UPDATE test SET data = data #- '{tags,-1}';
Complex update (delete the last tag, insert a new tag, and change the name):
UPDATE test SET data = jsonb_set(
jsonb_set(data #- '{tags,-1}', '{tags,999999999}', '"tag3"', true),
'{name}', '"my-other-name"');
It's important to note that in each of these examples, you're not actually updating a single field of the JSON data. Instead, you're creating a temporary, modified version of the data, and assigning that modified version back to the column. In practice, the result should be the same, but keeping this in mind should make complex updates, like the last example, more understandable.
In the complex example, there are three transformations and three temporary versions: First, the last tag is removed. Then, that version is transformed by adding a new tag. Next, the second version is transformed by changing the name field. The value in the data column is replaced with the final version.
Ideally, you don't use JSON documents for structured, regular data that you want to manipulate inside a relational database. Use a normalized relational design instead.
JSON is primarily intended to store whole documents that do not need to be manipulated inside the RDBMS. Related:
JSONB with indexing vs. hstore
Updating a row in Postgres always writes a new version of the whole row. That's the basic principle of Postgres' MVCC model. From a performance perspective, it hardly matters whether you change a single piece of data inside a JSON object or all of it: a new version of the row has to be written.
Thus the advice in the manual:
JSON data is subject to the same concurrency-control considerations as
any other data type when stored in a table. Although storing large
documents is practicable, keep in mind that any update acquires a
row-level lock on the whole row. Consider limiting JSON documents to a
manageable size in order to decrease lock contention among updating
transactions. Ideally, JSON documents should each represent an atomic
datum that business rules dictate cannot reasonably be further
subdivided into smaller datums that could be modified independently.
The gist of it: to modify anything inside a JSON object, you have to assign a modified object to the column. Postgres supplies limited means to build and manipulate json data in addition to its storage capabilities. The arsenal of tools has grown substantially with every new release since version 9.2. But the principle remains: you always have to assign a complete modified object to the column, and Postgres always writes a new row version for any update.
Some techniques how to work with the tools of Postgres 9.3 or later:
How do I modify fields inside the new PostgreSQL JSON datatype?
This answer has attracted about as many downvotes as all my other answers on SO together. People don't seem to like the idea: a normalized design is superior for regular data. This excellent blog post by Craig Ringer explains in more detail:
"PostgreSQL anti-patterns: Unnecessary json/hstore dynamic columns"
Another blog post by Laurenz Albe, another official Postgres contributor like Craig and myself:
JSON in PostgreSQL: how to use it right
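For the question's example table, a normalized alternative might look like this; a sketch with illustrative table and column names:
CREATE TABLE test (id serial PRIMARY KEY, name text);
CREATE TABLE test_tag (test_id integer REFERENCES test(id), tag text);
-- updating a single field is then a plain column update:
UPDATE test SET name = 'my-other-name' WHERE id = 1;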
This is coming in 9.5 in the form of jsonb_set by Andrew Dunstan, based on an existing extension, jsonbx, that works with 9.4.
For those that run into this issue and want a very quick fix (and are stuck on 9.4.5 or earlier), here is a potential solution:
Creation of test table
CREATE TABLE test(id serial, data jsonb);
INSERT INTO test(data) values ('{"name": "my-name", "tags": ["tag1", "tag2"]}');
Update statement to change jsonb value
UPDATE test
SET data = replace(data::TEXT,': "my-name"',': "my-other-name"')::jsonb
WHERE id = 1;
Ultimately, the accepted answer is correct in that you cannot modify an individual piece of a jsonb object (in 9.4.5 or earlier); however, you can cast the jsonb column to a string (::TEXT) and then manipulate the string and cast back to the jsonb form (::jsonb).
There are two important caveats:
this will replace all values equaling "my-name" in the JSON (in case you have multiple objects with the same value)
this is not as efficient as jsonb_set would be if you are using 9.5
Update the 'name' attribute:
UPDATE test SET data=data||'{"name":"my-other-name"}' WHERE id = 1;
And if you want to remove, for example, the 'name' and 'tags' attributes:
UPDATE test SET data=data-'{"name","tags"}'::text[] WHERE id = 1;
This question was asked in the context of postgres 9.4,
however new viewers coming to this question should be aware that in postgres 9.5,
sub-document Create/Update/Delete operations on JSONB fields are natively supported by the database, without the need for extension functions.
See: JSONB modifying operators and functions
I wrote a small function for myself that works recursively in Postgres 9.4. I had the same problem (thankfully, they solved some of this headache in Postgres 9.5).
Anyway, here is the function (I hope it works well for you):
CREATE OR REPLACE FUNCTION jsonb_update(val1 JSONB, val2 JSONB)
RETURNS JSONB AS $$
DECLARE
    result JSONB;
    v RECORD;
BEGIN
    -- nothing to merge: return the original value unchanged
    IF jsonb_typeof(val2) = 'null'
    THEN
        RETURN val1;
    END IF;
    result = val1;
    -- walk every key of val2 and merge it into the result
    FOR v IN SELECT key, value FROM jsonb_each(val2) LOOP
        IF jsonb_typeof(val2->v.key) = 'object'
        THEN
            -- nested object: recurse so deeper keys are merged rather than replaced
            -- (coalesce guards against keys that don't exist in val1 yet)
            result = result || jsonb_build_object(v.key, jsonb_update(coalesce(val1->v.key, '{}'::jsonb), val2->v.key));
        ELSE
            -- scalar or array: take the new value as-is
            result = result || jsonb_build_object(v.key, v.value);
        END IF;
    END LOOP;
    RETURN result;
END;
$$ LANGUAGE plpgsql;
Here is a sample use:
select jsonb_update('{"a":{"b":{"c":{"d":5,"dd":6},"cc":1}},"aaa":5}'::jsonb, '{"a":{"b":{"c":{"d":15}}},"aa":9}'::jsonb);
jsonb_update
---------------------------------------------------------------------
{"a": {"b": {"c": {"d": 15, "dd": 6}, "cc": 1}}, "aa": 9, "aaa": 5}
(1 row)
As you can see, it recurses deep into the objects and updates/adds values where needed.
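In an UPDATE statement the function can be applied like this (assuming the test table from the question):
UPDATE test
SET data = jsonb_update(data, '{"name": "my-other-name"}'::jsonb)
WHERE id = 1;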
Maybe:
UPDATE test SET data = '"my-other-name"'::json WHERE id = 1;
It worked in my case, where data is a json-typed column.
Matheus de Oliveira created handy functions for JSON CRUD operations in PostgreSQL. They can be imported using the \i directive. Note the jsonb fork of the functions if jsonb is your data type.
9.3 json https://gist.github.com/matheusoliveira/9488951
9.4 jsonb https://gist.github.com/inindev/2219dff96851928c2282
Updating the whole column worked for me:
UPDATE test SET data='{"name": "my-other-name", "tags": ["tag1", "tag2"]}' where id=1;

How to update an XML column in DB2 with XMLQuery?

I want to update an XML column in DB2 with dynamic values, that is, with values that I'll pick from another table and insert into the XML column.
I know how to insert a node along with a value we provide by hard-coding it, e.g.
<data>some_value</data>
I want to do it in the following way:
UPDATE my_table SET my_table_column = XMLQuery(..... <data>???</data>)
WHERE my_table_id = other_table_id;
Where I've placed ???, I need some kind of SELECT statement that will come up with the actual value for the node.
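One possible shape for this, offered as an unverified sketch rather than a tested DB2 answer: build the node with XMLELEMENT and feed it a scalar subquery; other_table, its some_value column, and the join condition are all assumptions:
UPDATE my_table m
SET my_table_column = XMLELEMENT(
    NAME "data",
    (SELECT o.some_value           -- hypothetical source column
     FROM other_table o            -- hypothetical source table
     WHERE o.other_table_id = m.my_table_id))
WHERE EXISTS (SELECT 1 FROM other_table o WHERE o.other_table_id = m.my_table_id);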