Running PostgreSQL 11.3. Here's the SQL code:
create type _stats_agg_accum_type AS (
    cnt bigint,
    min double precision,
    max double precision,
    m1 double precision,
    m2 double precision,
    m3 double precision,
    m4 double precision,
    q double precision[],
    n double precision[],
    np double precision[],
    dn double precision[]
);
create aggregate stats_agg(double precision) (
    sfunc = _stats_agg_accumulator,
    stype = _stats_agg_accum_type,
    finalfunc = _stats_agg_finalizer,
    combinefunc = _stats_agg_combiner,
    parallel = safe,
    initcond = '(0,,, 0, 0, 0, 0, {}, {1,2,3,4,5}, {1,2,3,4,5}, {0,0.25,0.5,0.75,1})'
);
Which gives me
ERROR: malformed array literal: "{1"
DETAIL: Unexpected end of input.
SQL state: 22P02
The empty array literal works OK. I've also tried a one-element literal {1}, which works fine. Whenever I have two or more elements it gives me this error.
As a workaround I could pass in empty arrays and initialize them on the first pass, but that's ugly.
You need quotes around your arrays, because the initcond is parsed as the text representation of a row.
It's easy to test by taking your input as a row and seeing how Postgres formats it (single quotes are needed around the arrays here because {} is an array in text form):
SELECT ROW(0,NULL,NULL, 0, 0, 0, 0, '{}', '{1,2,3,4,5}', '{1,2,3,4,5}', '{0,0.25,0.5,0.75,1}')
Returns:
(0,,,0,0,0,0,{},"{1,2,3,4,5}","{1,2,3,4,5}","{0,0.25,0.5,0.75,1}")
Therefore you need to do:
...
initcond = '(0,,,0,0,0,0,{},"{1,2,3,4,5}","{1,2,3,4,5}","{0,0.25,0.5,0.75,1}")'
Why quotes are not required on an array which is empty or has only one value:
Multiple values in an array are comma-delimited, and fields within a row are also comma-delimited. If you supply a row as '(0,{1,2})', PG will interpret it as three fields: "0", "{1" and "2}". Naturally in that case you'll get an error about a malformed array, which is exactly the "{1" in your error message. Putting a field in quotes means everything within those quotes is one field, so '(0,"{1,2}")' is interpreted correctly as the two fields 0 and {1,2}. If the array is empty or contains only one value, there is no comma inside it, so there is no problem parsing that field correctly.
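As a quick sanity check (a sketch assuming the type from the question has already been created), you can cast the corrected literal to the accumulator type; if Postgres parses it without complaint, it is a valid initcond:

SELECT '(0,,,0,0,0,0,{},"{1,2,3,4,5}","{1,2,3,4,5}","{0,0.25,0.5,0.75,1}")'::_stats_agg_accum_type;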
In my SQLAlchemy ( sqlalchemy = "^1.4.36" ) query I have a clause:
.filter( some_model.some_field[2].in_(['item1', 'item2']) )
where some_field is jsonb and the value in the db is formatted like this:
["something","something","123"]
or
["something","something","0123"]
note: some_field[2] is always a digits-only, double-quoted string, sometimes with leading zeroes and sometimes without them.
The query works fine for cases like this:
.filter( some_model.some_field[2].in_(['123', '345']) )
and fails when the values in the in_ clause have leading zeroes:
e.g. .filter( some_model.some_field[2].in_(['0123', '0345']) ) fails.
The error it gives:
cursor.execute(statement, parameters)
psycopg2.errors.InvalidTextRepresentation: invalid input syntax for type json
LINE 3: ...d_on) = 2 AND (app_cache.value_metadata -> 2) IN ('0123'
                                                             ^
DETAIL: Token "0123" is invalid.
Again, in the case of '123' (or any string of digits without a leading zero) instead of '0123', the error is not thrown.
What is wrong with having leading zeroes for the strings in the list of in_ clause? Thanks.
UPDATE: basically, SQLAlchemy's in_ assumes int input and fails accordingly. There must be some reasoning behind this behavior; I can't tell what it is. I removed that filter from the query and did the filtering of the output in Python code afterwards.
The problem here is that the values in the IN clause are being interpreted by PostgreSQL as JSON representations of integers, and an integer with a leading zero is not valid JSON.
The IN clause has a value of type jsonb on the left-hand side. The values on the right-hand side are not explicitly typed, so Postgres tries to find the best match that will allow them to be compared with a jsonb value. That best match is jsonb itself, so Postgres attempts to cast the values to jsonb. This works for values without a leading zero, because digit strings without leading zeroes are valid JSON representations of integers:
test# select '123'::jsonb;
jsonb
═══════
123
(1 row)
but it doesn't work for values with leading zeroes, because they are not valid JSON:
test# select '0123'::jsonb;
ERROR: invalid input syntax for type json
LINE 1: select '0123'::jsonb;
^
DETAIL: Token "0123" is invalid.
CONTEXT: JSON data, line 1: 0123
Assuming that you expect some_field[2].in_(['123', '345']) and some_field[2].in_(['0123', '345']) to match ["something","something","123"] and ["something","something","0123"] respectively, you can either serialise the values to JSON yourself:
some_field[2].in_([json.dumps(x) for x in ['0123', '345']])
or use the contained_by operator (<@ in PostgreSQL) to test whether some_field[2] is present in the list of values:
some_field[2].contained_by(['0123', '345'])
or cast some_field[2] to text (that is, use the ->> operator) so that the values are compared as text, not JSON:
some_field[2].astext.in_(['0123', '345'])
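The difference between -> and ->> is easy to see in plain SQL (a sketch using a literal shaped like the values in the question):

SELECT '["something","something","0123"]'::jsonb -> 2;   -- "0123", a jsonb value, so comparisons are JSON comparisons
SELECT '["something","something","0123"]'::jsonb ->> 2;  -- 0123, plain text, so leading zeroes are harmless
SELECT ('["something","something","0123"]'::jsonb ->> 2) IN ('0123', '0345');  -- true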
I have two tables. I want to update the emodnet_code column of the table named 2018_01 based on the emodnet_type column of another table named shiptype_emodnet, matching on the aisshiptype column that both tables share. The query returns successfully but 0 rows are affected:
UPDATE "2018_01"
SET emodnet_code = shiptype_emodnet.emodnet_type
FROM "shiptype_emodnet"
WHERE '2018_01.aisshiptype' = 'shiptype_emodnet.aisshiptype';
You are comparing string constants in your WHERE clause, not columns. So your where clause:
WHERE '2018_01.aisshiptype' = 'shiptype_emodnet.aisshiptype';
is always false, because the string literal '2018_01.aisshiptype' is never the same as the string literal 'shiptype_emodnet.aisshiptype'. So your where condition is essentially the same as:
where false
Identifiers need to be quoted with double quotes ("). Single quotes (') are only for string literals.
UPDATE "2018_01"
SET emodnet_code = shiptype_emodnet.emodnet_type
FROM "shiptype_emodnet"
WHERE "2018_01".aisshiptype = shiptype_emodnet.aisshiptype;
And you only need the double quotes for tables or columns whose names are not legal SQL identifiers (like 2018_01, which starts with a digit) or that were created with double quotes and mixed case.
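To see why "2018_01" needs the double quotes at all (a quick sketch; any name starting with a digit behaves the same way):

SELECT * FROM 2018_01;    -- fails: an unquoted identifier cannot start with a digit
SELECT * FROM "2018_01";  -- works: quoting makes it a valid identifier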
Can you try:
UPDATE "2018_01" t
SET emodnet_code = (SELECT shiptype_emodnet.emodnet_type
                    FROM shiptype_emodnet
                    WHERE t.aisshiptype = shiptype_emodnet.aisshiptype
                    LIMIT 1);
You should add LIMIT 1 so the subquery returns at most one row for each updated row.
I simply want to compare double precision column values and get results that are also double precision.
Using PostgreSQL 9.6, I haven't found a way to compare arrays of float or numeric data types without converting them to integers.
select * from
(
    select
        t1."SubdSec-in-DCA_id",
        t1."float_range" as left_side,
        t2."float_range" as right_side,
        t1."float_range"::numeric[] - t2."float_range"::numeric[] as results
    from "TEST" t1
    cross join lateral (select "float_range" from "TEST") t2
) t3
order by "SubdSec-in-DCA_id", left_side, right_side
t1."float_range"::int[] & t2."float_range"::int[] as results,
works, but returns the values as integers.
result: "T_S07414_DCA4117","{197.598,205.382}","{146.5,146.9}","{198,205}"
t1."float_range"::float[] & t2."float_range"::float[] as results,
fails: ERROR: operator does not exist: double precision[] & double precision[]
t1."float_range"::numeric[] & t2."float_range"::numeric[] as results
fails: ERROR: operator does not exist: numeric[] & numeric[]
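For what it's worth, the & array-intersection operator is not built into core PostgreSQL; it comes from the intarray extension, which defines it for integer[] only, consistent with the errors above. A minimal illustration (assuming you can install the extension):

CREATE EXTENSION IF NOT EXISTS intarray;
SELECT '{1,2,3}'::int[] & '{2,3,4}'::int[];  -- {2,3}
-- No such operator exists for double precision[] or numeric[], hence the
-- "operator does not exist" errors for those casts.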
I've created an aggregate function with the following:
CREATE FUNCTION rtrim(mychar) RETURNS mychar
AS '$libdir/libmy_pgmod', 'mycharrtrim'
LANGUAGE C IMMUTABLE STRICT;
CREATE OR REPLACE FUNCTION mychar_max( mychar, mychar ) RETURNS mychar
AS '$libdir/libmy_pgmod', 'mychar_max'
LANGUAGE C IMMUTABLE STRICT;
CREATE OR REPLACE FUNCTION mychar_min( mychar, mychar ) RETURNS mychar
AS '$libdir/libmy_pgmod', 'mychar_min'
LANGUAGE C IMMUTABLE STRICT;
CREATE AGGREGATE MAX( mychar ) (
SFUNC = mychar_max,
STYPE = mychar,
SORTOP = >
);
CREATE AGGREGATE MIN( mychar ) (
SFUNC = mychar_min,
STYPE = mychar,
SORTOP = <
);
The mychar is a type that is defined with two type modifiers. The first type modifier is the length of the string and the second is the CCSID of the string, since we are trying to simulate a z/OS string. I then create a table like the following:
create table t1 (c1 mychar(20, 1208), c2 char(20));
Within my C code I then try to do a describe of the following statement:
select c1, max(c1), max(c2) from t1 group by c1;
The describe returns fine, however, when I try to retrieve the data from the describe using the following code:
char *colName = PQfname( result, hvNum );    /* column name */
int   colTmod = PQfmod( result, hvNum );     /* type modifier */
int   colSize = PQfsize( result, hvNum );    /* size in bytes */
Oid   oid     = PQftype( result, hvNum );    /* data type OID */
Oid   tblOid  = PQftable( result, hvNum );   /* source table OID */
For the first column I get the expected values (colName, colTmod, oid and tblOid). For the second column (max(c1)) it returns max as the colName (which I expected), and it also returns the correct oid. However, for colTmod it returns -1. Is there something that I need to do to get the proper colTmod value returned in this case? For the max(c2) column, which is a native char, it correctly returns everything as expected, including the colTmod of 24. There must be something I am doing incorrectly in my implementation of the type or the aggregate function that prevents the type modifier from being returned.
I am not 100% certain, but I am pretty sure that the result of an aggregate function has no type modifiers.
I tried your experiment with a column defined as numeric(7,2), and like you I got -1 for PQfmod when I queried the maximum.
numeric's max aggregate is defined using numeric_larger, which is defined as:
Datum
numeric_larger(PG_FUNCTION_ARGS)
{
    Numeric num1 = PG_GETARG_NUMERIC(0);
    Numeric num2 = PG_GETARG_NUMERIC(1);

    /*
     * Use cmp_numerics so that this will agree with the comparison operators,
     * particularly as regards comparisons involving NaN.
     */
    if (cmp_numerics(num1, num2) > 0)
        PG_RETURN_NUMERIC(num1);
    else
        PG_RETURN_NUMERIC(num2);
}
So it is as simple as it can get and returns one of the input values.
If type modifiers are not preserved there, I'd guess they are never preserved in an aggregate.
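You can observe the same typmod loss without going through libpq (a sketch using a throwaway view; pg_attribute records the typmod of each view column):

CREATE TEMP TABLE t (n numeric(7,2));
CREATE TEMP VIEW v AS SELECT n, max(n) OVER () AS max_n FROM t;
SELECT attname, atttypmod FROM pg_attribute
WHERE attrelid = 'v'::regclass AND attnum > 0;
-- n     | 458758   (encodes precision 7, scale 2)
-- max_n |     -1   (the aggregate result carries no typmod)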
I have a table column whose type is CHARACTER VARYING[] (that is, an array).
I need to concatenate the existing values with another array.
This is my code:
UPDATE my_table SET
col = array_cat(col, ARRAY['5','6','7'])
This returns the error: function array_cat(character varying[], text[]) does not exist
The reason for the error is that the array types don't match, right?
Question: how do I convert the array ARRAY['5','6','7'] to the CHARACTER VARYING[] type?
Cast to varchar[]:
SELECT ARRAY['5','6','7']::varchar[], pg_typeof( ARRAY['5','6','7']::varchar[] );
  array  |      pg_typeof
---------+---------------------
 {5,6,7} | character varying[]
You can use the PostgreSQL-specific ::varchar[] or the standard CAST(colname AS varchar[])... though as arrays are not consistent across database implementations, there won't be much advantage to using the standard syntax.
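Applied to the UPDATE from the question, either form should work:

UPDATE my_table
SET col = array_cat(col, ARRAY['5','6','7']::varchar[]);

-- or, equivalently, using the || array concatenation operator:
UPDATE my_table
SET col = col || ARRAY['5','6','7']::varchar[];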