Converting array types - PostgreSQL

I have a table column whose type is CHARACTER VARYING[] (that is, an array).
I need to concatenate the existing rows with another array.
This is my code:
UPDATE my_table SET
col = array_cat(col, ARRAY['5','6','7'])
It returned this error: function array_cat(character varying[], text[]) does not exist
The reason for the error is that the array types don't match, right?
Question: how do I convert the array ARRAY['5','6','7'] to the CHARACTER VARYING[] type?

Cast to varchar[]:
SELECT ARRAY['5','6','7']::varchar[], pg_typeof( ARRAY['5','6','7']::varchar[] );
  array  |      pg_typeof
---------+---------------------
 {5,6,7} | character varying[]
You can use the PostgreSQL-specific ::varchar[] or the standard CAST(colname AS varchar[])... though as arrays are not consistent across database implementations, there won't be much advantage to using the standard syntax.
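Applied to the UPDATE from the question (same table and column names), a minimal sketch:
UPDATE my_table SET
col = array_cat(col, ARRAY['5','6','7']::varchar[]);  -- the cast makes both arguments varchar[]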

Related

Text and jsonb concatenation in a single postgresql query

How can I concatenate a string inside of a concatenated jsonb object in postgresql? In other words, I am using the JSONb concatenate operator as well as the text concatenate operator in the same query and running into trouble.
Or... if there is a totally different query I should be executing, I'd appreciate hearing suggestions. The goal is to update a row containing a jsonb column. We don't want to overwrite existing key value pairs in the jsonb column that are not provided in the query and we also want to update multiple rows at once.
My query:
update contacts as c set data = data || '{"geomatch": "MATCH","latitude":'||v.latitude||'}'
from (values (16247746,40.814140),
(16247747,20.900840),
(16247748,20.890570)) as v(contact_id,latitude) where c.contact_id = v.contact_id
The Error:
ERROR: invalid input syntax for type json
LINE 85: update contacts as c set data = data || '{"geomatch": "MATCH...
^
DETAIL: The input string ended unexpectedly.
CONTEXT: JSON data, line 1: {"geomatch": "MATCH","latitude":
SQL state: 22P02
Character: 4573
You might be looking for
SET data = data || ('{"geomatch": "MATCH","latitude":'||v.latitude||'}')::jsonb
--              ^^ jsonb                              ^^ text     ^^ text
but that's not how one should build JSON objects - that v.latitude might not be a valid JSON literal, or could even contain an injection like 0, "otherKey": "oops". (Admittedly, in your example you control the values, and they're numbers, so it might be fine, but it's still bad practice.) Instead, use jsonb_build_object:
SET data = data || jsonb_build_object('geomatch', 'MATCH', 'latitude', v.latitude)
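Folded into the rest of the question's statement, a sketch:
update contacts as c
set data = data || jsonb_build_object('geomatch', 'MATCH', 'latitude', v.latitude)
from (values (16247746, 40.814140),
             (16247747, 20.900840),
             (16247748, 20.890570)) as v(contact_id, latitude)
where c.contact_id = v.contact_id;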
There are two problems. The first is operator precedence: without parentheses, the jsonb value is concatenated with only the first text fragment, which is not valid JSON on its own. The second is that the concatenated text pieces require a cast to jsonb.
This should work:
update contacts as c
set data = data || ('{"geomatch": "MATCH","latitude":'||v.latitude||'}')::jsonb
from (values (16247746,40.814140),
(16247747,20.900840),
(16247748,20.890570)) as v(contact_id,latitude)
where c.contact_id = v.contact_id
;

Check if character varying is between range of numbers

I have data in my database and I need to select all rows where one column's value is between 1 and 100.
I'm having problems because I can't use BETWEEN 1 AND 100; that column is character varying, not integer. But all the data are numbers (and I can't change the column to integer).
Code:
dst_db1.eachRow("Select length_to_fault from diags where length_to_fault between 1 AND 100")
Error - operator does not exist: character varying >= integer
Since your column is supposed to contain numeric values but is defined as text (or a variant of text), there will be times when it does not. You need two validations: that the column actually contains numeric data, and that the value falls within your restriction. So add the following predicates to your query:
and length_to_fault ~ '^\+?\d+(\.\d*)?$'
and length_to_fault::numeric <@ ('[1.0,100.0]')::numrange;
The first is a regexp that ensures the column holds a valid floating-point value. The second ensures the numeric value falls within the specified numeric range. See fiddle.
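Combined with the query from the question, a sketch:
SELECT length_to_fault
FROM diags
WHERE length_to_fault ~ '^\+?\d+(\.\d*)?$'                     -- only rows that parse as numbers
  AND length_to_fault::numeric <@ ('[1.0,100.0]')::numrange;  -- and that fall in 1..100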
I understand you cannot change the database, but this looks like a good place for a check constraint, especially if 'n/a' is the only non-numeric value allowed. You may want to talk with your DBA and see about the following constraint:
alter table diags
  add constraint length_to_fault_check
  check ( lower(length_to_fault) = 'n/a'
          or ( length_to_fault ~ '^\+?\d+(\.\d*)?$'
               and length_to_fault::numeric <@ ('[1.0,100.0]')::numrange
             )
        );
Then your query need only check that:
lower(length_to_fault) != 'n/a'
The PostgreSQL query below will work:
SELECT length_to_fault FROM diags WHERE regexp_replace(length_to_fault, '[\s+]', '', 'g')::numeric BETWEEN 1 AND 100;

How to use array operators for type bytea[]?

Is it possible to use array operators on a type of bytea[]?
For example:
CREATE TABLE test (
metadata bytea[]
);
SELECT * FROM test WHERE test.metadata && ANY($1);
-- could not find array type for data type bytea[]
If it's not possible, is there an alternative approach without changing the type from bytea[]?
postgresql 12.x
Do not use ANY; just compare the arrays directly, using an array constructor and array operators:
CREATE TABLE test (
metadata bytea[]
);
INSERT INTO public.test (metadata) VALUES('{"x","y"}');
SELECT * FROM test t WHERE metadata && array[E'\x78'::bytea];
When using ANY, the left-hand expression is evaluated and compared to each element of the right-hand array using the given operator, which must yield a Boolean result. So the original SQL was trying to do something like bytea[] && bytea.
This applies not only to bytea[], but to any array type, e.g. text[] or integer[].
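If the value arrives as a query parameter instead of a literal, the same approach applies; a sketch, assuming your driver can bind an array parameter:
SELECT * FROM test t
WHERE t.metadata && $1::bytea[];  -- bind $1 as a bytea array, not a single bytea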

Postgres: create index on attribute of attribute in JSONB column?

I'm working in Postgres 9.6.5. I have the following table:
id | integer
data | jsonb
The data in the data column is nested, in the form:
{ "identification": { "registration_number": "foo" }}
I'd like to index registration_number, so I can query on it. I've tried this (based on this answer):
CREATE INDEX ON mytable((data->>'identification'->>'registration_number'));
But got this:
ERROR: operator does not exist: text ->> unknown
LINE 1: CREATE INDEX ON psc((data->>'identification'->>'registration... ^
HINT: No operator matches the given name and argument type(s). You might need to add explicit type casts.
What am I doing wrong?
You want:
CREATE INDEX ON mytable((data -> 'identification' ->> 'registration_number'));
The -> operator returns the jsonb object under the key, and the ->> operator returns the jsonb object under the key as text. The most notable difference between the two operators is that ->> will "unwrap" string values (i.e. remove double quotes from the TEXT representation).
The error you're seeing is reported because data ->> 'identification' returns text, and the subsequent ->> is not defined for the text type.
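A quick illustration of the difference, using a throwaway literal rather than the question's table:
SELECT '{"k": "v"}'::jsonb -> 'k' AS as_jsonb,   -- "v" (jsonb, quotes kept)
       '{"k": "v"}'::jsonb ->> 'k' AS as_text;   -- v   (text, quotes stripped)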
Since version 9.3, Postgres has the #> and #>> operators. These operators let you specify a path (as an array of text keys) into a jsonb column to get the value.
You could use the #>> operator to achieve your goal in a simpler way:
CREATE INDEX ON mytable((data #>> '{identification, registration_number}'));
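Keep in mind that an expression index is only considered when a query uses the same expression; a sketch of a query that could use it:
SELECT * FROM mytable
WHERE data #>> '{identification, registration_number}' = 'foo';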

Postgres query error

I have a query in Postgres:
insert into c_d (select * from cd where ak = '22019763');
And I get the following error
ERROR: column "region" is of type integer but expression is of type character varying
HINT: You will need to rewrite or cast the expression.
An INSERT INTO table1 SELECT * FROM table2 depends entirely on the order of the columns, which is part of the table definition. It will line up each column of table1 with the column of table2 in the same ordinal position, regardless of names.
The problem you have here is that whatever column of cd sits in the same position as c_d's "region" column has an incompatible type, and an implicit typecast is not available to clear up the confusion.
INSERT INTO ... SELECT * statements are stylistically bad form unless the two tables are defined, and will forever be defined, exactly the same way. All it takes is a single extra column added to cd, and you'll start getting errors about extra columns.
If it is at all possible, I would suggest explicitly calling out the columns in the SELECT statement. You can call a function to change the type within each column reference (or you could define a new typecast to do this implicitly -- see CREATE CAST), and you can use AS to set the column label to match that of your target column.
If you can't do this for some reason, indicate that in your question.
Check out the PostgreSQL insert documentation. The syntax is:
INSERT INTO table [ ( column [, ...] ) ]
{ DEFAULT VALUES | VALUES ( { expression | DEFAULT } [, ...] ) | query }
which here would look something like:
INSERT INTO c_d (column1, column2...) select * from cd where ak = '22019763'
This is the syntax you want to use when inserting values from one table to another where the column types and order are not exactly the same.
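For example, assuming region is the mismatched column (ak and region come from the question; id is a hypothetical placeholder for the remaining columns), a sketch:
INSERT INTO c_d (id, region, ak)
SELECT id, region::integer, ak  -- explicit cast clears the type mismatch
FROM cd
WHERE ak = '22019763';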