I have a table with a uuid column, and some of the rows are missing data. I need to insert data into this uuid column. The data is entered manually, so we are suffixing it with data from another column to differentiate the rows, but this gives me an error.
UPDATE schema.table
SET uuid_column = CONCAT ('f7949f56-8840-5afa-8c6d-3b0f6e7f93e9', '-', id_column)
WHERE id_column = '1234';
Error: [42804] ERROR: column "uuid_column" is of type uuid but expression is of type text
Hint: You will need to rewrite or cast the expression.
Position: 45
I also tried
UPDATE schema.table
SET uuid_column = CONCAT ('f7949f56-8840-5afa-8c6d-3b0f6e7f93e9', '-', id_column)::uuid
WHERE id_column = '1234';
Error: [22P02] ERROR: invalid input syntax for uuid: "f7949f56-8840-5afa-8c6d-3b0f6e7f93e9-1234"
A UUID consists of exactly 16 bytes, which you see displayed in hexadecimal notation.
You cannot have a UUID with fewer or more bytes.
I recommend using the type bytea if you really need to do such a thing.
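If you really do want to keep the UUID-plus-suffix value, one option in line with that advice is a separate column. This is only a sketch: the column uuid_bytes is hypothetical and not part of your table, and a plain text column would serve just as well if you only need a human-readable identifier.

ALTER TABLE schema.table ADD COLUMN uuid_bytes bytea;  -- hypothetical column for the suffixed value

UPDATE schema.table
SET uuid_bytes = convert_to(CONCAT('f7949f56-8840-5afa-8c6d-3b0f6e7f93e9', '-', id_column), 'UTF8')
WHERE id_column = '1234';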
How can I concatenate a string inside of a concatenated jsonb object in PostgreSQL? In other words, I am using the jsonb concatenation operator as well as the text concatenation operator in the same query and running into trouble.
Or... if there is a totally different query I should be executing, I'd appreciate hearing suggestions. The goal is to update a row containing a jsonb column. We don't want to overwrite existing key-value pairs in the jsonb column that are not provided in the query, and we also want to update multiple rows at once.
My query:
update contacts as c
set data = data || '{"geomatch": "MATCH","latitude":'||v.latitude||'}'
from (values (16247746,40.814140),
             (16247747,20.900840),
             (16247748,20.890570)) as v(contact_id,latitude)
where c.contact_id = v.contact_id
The Error:
ERROR: invalid input syntax for type json
LINE 85: update contacts as c set data = data || '{"geomatch": "MATCH...
^
DETAIL: The input string ended unexpectedly.
CONTEXT: JSON data, line 1: {"geomatch": "MATCH","latitude":
SQL state: 22P02
Character: 4573
You might be looking for
SET data = data || ('{"geomatch": "MATCH","latitude":'||v.latitude||'}')::jsonb
-- data is jsonb; the quoted fragments and v.latitude concatenate as text, then the whole string is cast to jsonb
but that's not how one should build JSON objects - that v.latitude might not be a valid JSON literal, or it could even contain an injection like "", "otherKey": "oops". (Admittedly, in your example you control the values, and they're numbers, so it might be fine, but it's still bad practice.) Instead, use jsonb_build_object:
SET data = data || jsonb_build_object('geomatch', 'MATCH', 'latitude', v.latitude)
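Put together with the VALUES list from the question, the full statement might look roughly like this (a sketch, untested against your actual schema):

update contacts as c
set data = data || jsonb_build_object('geomatch', 'MATCH', 'latitude', v.latitude)
from (values (16247746, 40.814140),
             (16247747, 20.900840),
             (16247748, 20.890570)) as v(contact_id, latitude)
where c.contact_id = v.contact_id;

jsonb_build_object quotes and escapes keys and values for you, so the numeric latitude ends up as a proper JSON number.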
There are two problems. The first is the evaluation order of the || operators: the jsonb value is concatenated with only the first text fragment instead of the fully assembled string. The second is that the concatenated text pieces still require a cast to jsonb.
This should work:
update contacts as c
set data = data || ('{"geomatch": "MATCH","latitude":'||v.latitude||'}')::jsonb
from (values (16247746,40.814140),
(16247747,20.900840),
(16247748,20.890570)) as v(contact_id,latitude)
where c.contact_id = v.contact_id
;
I have the following sample data for demo:
Table:
create table tbl_json
(
id json
);
Some values:
insert into tbl_json values('[{"id":1},{"id":2},{"id":3}]');
Query: convert/cast the id values from the json column to integer.
Tried:
select json_array_elements(id)->>'id'::int ids
from tbl_json;
Getting an error:
ERROR: invalid input syntax for integer: "id"
The ::int cast is applied to 'id' because the cast has higher precedence than the ->> operator. Add parentheses so the extraction happens first:
select (json_array_elements(id)->>'id')::int ids
from tbl_json;
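If it helps, an equivalent formulation (just a sketch, same result) moves the set-returning function into the FROM clause, which some find easier to read:

select (e->>'id')::int as ids
from tbl_json
cross join lateral json_array_elements(id) as e;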
I have a table in Postgres with a column 'col' holding age values. This column also contains n/a values.
When I apply a condition of age < 15, I get the error below:
[Code: 0, SQL State: 22P02] ERROR: invalid input syntax for integer: "n/a"
I am using the query below to handle the n/a values, but I am still getting the same error:
ALTER TABLE tb
ADD COLUMN col CHARACTER VARYING;
UPDATE tb
Set col =
CASE
WHEN age::int <= 15
THEN 'true'
ELSE 'false'
END
;
Please note that 'age' is stored as text in my table. I have two questions here:
How can I set the datatype while creating the initial table (in the create table statement)?
How can I handle n/a values in above case statement?
Thanks
You should really fix your data model and store numbers in integer columns.
You can get around your current problem by converting your invalid "numbers" to null:
UPDATE tb
Set col = CASE
WHEN nullif(age, 'n/a')::int <= 15 THEN 'true'
ELSE 'false'
END;
And it seems col should be a boolean rather than a text column as well.
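For your first question, a minimal sketch of that cleaner model (column names taken from your snippets, the rest assumed) could be:

create table tb
(
    age integer,   -- store ages as integers; use NULL instead of 'n/a'
    col boolean    -- boolean instead of the text values 'true'/'false'
);

With age stored as integer and missing values as NULL, a filter like age < 15 simply excludes the NULL rows and no casting is needed.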
I'm trying to update a nullable date column with NULL, but for some reason Postgres treats the NULL as text and gives the error below.
UPDATE tbl
SET
order_date = data.order_date
FROM
(VALUES (NULL, 100))
AS data(order_date,id)
WHERE data.id = tbl.id
And the error shows:
[42804] ERROR: column "order_date" is of type date but expression is
of type text
Hint: You will need to rewrite or cast the expression.
I can fix this by explicitly converting NULL to date as below:
NULL::date
But, Is there a way to achieve this without explicit type conversion?
You can avoid the explicit cast by copying data types from the target table:
UPDATE tbl
SET order_date = data.order_date
FROM (
   VALUES
      ((NULL::tbl).order_date, (NULL::tbl).id),
      (NULL, 100)
   ) data(order_date, id)
WHERE data.id = tbl.id;
The added dummy row with NULL values is filtered by WHERE data.id = tbl.id.
Related answer with detailed explanation:
Casting NULL type when updating multiple rows
I have a table with two columns:
1) id SERIAL PRIMARY KEY
2) BYTEA
I am trying to fetch all the rows using PGresult *res = PQexecParams(conn, "select * from table", 0, NULL, NULL, NULL, NULL, 1); ==> the last argument = 1 specifies that the results should be in binary format.
Due to the last argument, I am able to fetch the BYTEA column properly, but the "id" column is also returned in a format that I can't understand (probably binary). Is there a way to convert the "id" value returned by the above-mentioned PQexecParams call to an integer? I am using the PQgetvalue API to fetch results.
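With binary result format, an int4 column arrives as 4 bytes in network byte order, which you would normally convert on the client (e.g. with ntohl). If you would rather avoid that, a sketch of an SQL-side workaround is to have the server send the id as text, so the binary-format result still contains readable digits (your_table and bytea_column below are placeholders for your actual table and BYTEA column names):

select id::text as id, bytea_column
from your_table;

The id value can then be parsed from its decimal text form on the client (e.g. with strtol), using PQgetlength for its length.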