PostgreSQL - How to find length of sub array - postgresql

I'm trying to get the length of an array nested inside an array.
RAISE NOTICE '%', a[1][2][1]; -- ok, returns proper value
RAISE NOTICE '%', array_length(a, 1); -- ok, returns array length
But this won't work:
RAISE NOTICE '%', array_length(a[1], 1); -- Error: function array_length(text, integer) does not exist
Why not, if a[1] is an array?

PostgreSQL's arrays are not arrays of arrays like in some other languages. PostgreSQL only has multidimensional arrays, and taking a slice of one is not straightforward. So your question makes no sense in Postgres: there simply are no sub-arrays.
You can take info about the dimensions of the array:
(2022-09-09 18:45:44) postgres=# select array_dims(ARRAY[[[1,2,3],[1,2,3]],[[2,3,4],[1,2,3]]]);
┌─────────────────┐
│ array_dims │
╞═════════════════╡
│ [1:2][1:2][1:3] │
└─────────────────┘
(1 row)
or you can check the sizes of the individual dimensions:
(2022-09-09 18:46:30) postgres=# select array_length(ARRAY[[[1,2,3],[1,2,3]],[[2,3,4],[1,2,3]]],1);
┌──────────────┐
│ array_length │
╞══════════════╡
│ 2 │
└──────────────┘
(1 row)
(2022-09-09 18:47:54) postgres=# select array_length(ARRAY[[[1,2,3],[1,2,3]],[[2,3,4],[1,2,3]]],2);
┌──────────────┐
│ array_length │
╞══════════════╡
│ 2 │
└──────────────┘
(1 row)
(2022-09-09 18:47:56) postgres=# select array_length(ARRAY[[[1,2,3],[1,2,3]],[[2,3,4],[1,2,3]]],3);
┌──────────────┐
│ array_length │
╞══════════════╡
│ 3 │
└──────────────┘
(1 row)
When you don't supply enough indexes, Postgres tries to produce a slice; but when the slice is not specified correctly, you get a NULL value (which is not an array), and then the function array_length cannot be used:
-- not fully specified cell in array
(2022-09-09 18:55:51) postgres=# select (array[[1,2,3],[4,5,6]])[1];
┌───────┐
│ array │
╞═══════╡
│ ∅ │
└───────┘
(1 row)
(2022-09-09 18:56:14) postgres=# select (array[[1,2,3],[4,5,6]])[1:1];
┌───────────┐
│ array │
╞═══════════╡
│ {{1,2,3}} │
└───────────┘
(1 row)
(2022-09-09 18:56:21) postgres=# select (array[[1,2,3],[4,5,6]])[1:2];
┌───────────────────┐
│ array │
╞═══════════════════╡
│ {{1,2,3},{4,5,6}} │
└───────────────────┘
(1 row)

Your slice is expressed incorrectly. Example on a three-dimensional 2x2x2 array:
select array_length(('{{{1,2},{3,4}},{{5,6},{7,8}}}'::integer[][][])[1:1],2);
Since the slice is still an array of arrays, you should ask for the second dimension.
See the docs: https://www.postgresql.org/docs/current/arrays.html
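Putting the pieces together, the length of the "sub array" from the original question can be obtained with a slice plus array_length. A sketch, assuming a is the three-dimensional array from the question's PL/pgSQL block:

```sql
-- a[1:1] is still a three-dimensional array,
-- so the length of the "sub array" a[1] is the slice's second dimension
RAISE NOTICE '%', array_length(a[1:1], 2);
```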

Related

Equivalent for ANYDATA datatype in PostgreSQL

While migrating oracle to Postgres, we have datatype ANYDATA.
Is there any equivalent in postgres for ANYDATA
Postgres has no type like this. Theoretically it should not be hard to implement such a type in an extension, but I don't know of any existing implementation.
Today almost all values can be effectively encoded into the jsonb data type. The exact information about the original data type is lost, but I think it can work.
(2023-01-24 06:23:21) postgres=# select to_jsonb(1);
┌──────────┐
│ to_jsonb │
╞══════════╡
│ 1 │
└──────────┘
(1 row)
(2023-01-24 06:24:34) postgres=# select to_jsonb(row(10,20,30));
┌────────────────────────────────┐
│ to_jsonb │
╞════════════════════════════════╡
│ {"f1": 10, "f2": 20, "f3": 30} │
└────────────────────────────────┘
(1 row)
(2023-01-24 06:24:46) postgres=# select to_jsonb(current_date);
┌──────────────┐
│ to_jsonb │
╞══════════════╡
│ "2023-01-24" │
└──────────────┘
(1 row)
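A minimal sketch of how this could work in practice: store the name of the original type alongside the jsonb value so the value can be cast back when reading. The table and column names here are hypothetical:

```sql
create table anyvalue (
    val       jsonb,
    orig_type text
);

-- store any value together with the name of its original type
insert into anyvalue
    select to_jsonb(current_date), pg_typeof(current_date)::text;

-- read it back: #>> '{}' extracts the jsonb scalar as text,
-- which can then be cast to the recorded type
select (val #>> '{}')::date as restored
  from anyvalue
 where orig_type = 'date';
```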

Counting the same positional bits in postgresql bitmasks

I am trying to count each same-position bit across multiple bitmasks in PostgreSQL. Here is an example of the problem:
Suppose I have three bitmasks (in binary) like:
011011011100110
100011010100101
110110101010101
Now what I want to do is get the total count of set bits in each separate column, considering the above masks as three rows and multiple columns.
E.g. the first column has a count of 2, the second one has a count of 2, the third one has a count of 1, and so on...
In actuality I have a total of 30 bits in each bitmask in my database. I want to do this in PostgreSQL. I am open to further explanation of the problem if needed.
You could do it by using the get_bit function and a couple of joins:
SELECT sum(bit) FILTER (WHERE i = 0) AS count_0,
       sum(bit) FILTER (WHERE i = 1) AS count_1,
       ...
       sum(bit) FILTER (WHERE i = 29) AS count_29
FROM bits
CROSS JOIN generate_series(0, 29) AS i
CROSS JOIN LATERAL get_bit(b, i) AS bit;
The column with the bit string is b in my example.
You could use the bitwise AND operator (&) and bigint arithmetic so long as your bitstrings contain 63 bits or fewer:
# create table bmasks (mask bit(15));
CREATE TABLE
# insert into bmasks values ('011011011100110'), ('100011010100101'), ('110110101010101');
INSERT 0 3
# with masks as (
    select (2 ^ x)::bigint::bit(15) as mask, x as posn
      from generate_series(0, 14) as gs(x)
)
select m.posn, m.mask, sum((b.mask & m.mask > 0::bit(15))::int) as set_bits
  from masks m
 cross join bmasks b
 group by m.posn, m.mask;
┌──────┬─────────────────┬──────────┐
│ posn │ mask │ set_bits │
├──────┼─────────────────┼──────────┤
│ 0 │ 000000000000001 │ 2 │
│ 1 │ 000000000000010 │ 1 │
│ 2 │ 000000000000100 │ 3 │
│ 3 │ 000000000001000 │ 0 │
│ 4 │ 000000000010000 │ 1 │
│ 5 │ 000000000100000 │ 2 │
│ 6 │ 000000001000000 │ 2 │
│ 7 │ 000000010000000 │ 2 │
│ 8 │ 000000100000000 │ 1 │
│ 9 │ 000001000000000 │ 2 │
│ 10 │ 000010000000000 │ 3 │
│ 11 │ 000100000000000 │ 1 │
│ 12 │ 001000000000000 │ 1 │
│ 13 │ 010000000000000 │ 2 │
│ 14 │ 100000000000000 │ 2 │
└──────┴─────────────────┴──────────┘
(15 rows)
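If you prefer one row per bit position instead of thirty columns, the two ideas above can be combined. For bit strings, get_bit numbers bits from the left starting at 0, so a sketch like this should give the same per-position counts (assuming the bmasks table created above):

```sql
select i as posn, sum(get_bit(mask, i)) as set_bits
  from bmasks
 cross join generate_series(0, 14) as i
 group by i
 order by i;
```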

How to create a recursive cte query that will push parent ids and grandparent ids into an array

I have a PostgreSQL table that I am trying to create. This is my CTE, and I am inserting values here:
BEGIN;

CREATE TABLE section (
    id SERIAL PRIMARY KEY,
    parent_id INTEGER REFERENCES section(id) DEFERRABLE,
    name TEXT NOT NULL UNIQUE
);

SET CONSTRAINTS ALL DEFERRED;

INSERT INTO section VALUES (1, NULL, 'animal');
INSERT INTO section VALUES (2, NULL, 'mineral');
INSERT INTO section VALUES (3, NULL, 'vegetable');
INSERT INTO section VALUES (4, 1, 'dog');
INSERT INTO section VALUES (5, 1, 'cat');
INSERT INTO section VALUES (6, 4, 'doberman');
INSERT INTO section VALUES (7, 4, 'dachshund');
INSERT INTO section VALUES (8, 3, 'carrot');
INSERT INTO section VALUES (9, 3, 'lettuce');
INSERT INTO section VALUES (10, 11, 'paradox1');
INSERT INTO section VALUES (11, 10, 'paradox2');

SELECT setval('section_id_seq', (select max(id) from section));

WITH RECURSIVE last_run(parent_id, id_list, name_list) AS (
    ???
)
SELECT id_list, name_list
FROM last_run ???
WHERE ORDER BY id_list;

ROLLBACK;
I know that a recursive query is the best possible way, but I am not sure exactly how to implement it. What exactly goes in the ???
What I'm trying to get is the table below:
id_list | name_list
---------+------------------------
{1} | animal
{2} | mineral
{3} | vegetable
{4,1} | dog, animal
{5,1} | cat, animal
{6,4,1} | doberman, dog, animal
{7,4,1} | dachshund, dog, animal
{8,3} | carrot, vegetable
{9,3} | lettuce, vegetable
{10,11} | paradox1, paradox2
{11,10} | paradox2, paradox1
You can use several recursive CTEs in a single query: one for the valid tree and another one for the paradoxes:
with recursive
cte as (
    select *, array[id] as ids, array[name] as names
      from section
     where parent_id is null
    union all
    select s.*, s.id || c.ids, s.name || c.names
      from section as s
      join cte as c on (s.parent_id = c.id)),
paradoxes as (
    select *, array[id] as ids, array[name] as names
      from section
     where id not in (select id from cte)
    union all
    select s.*, s.id || p.ids, s.name || p.names
      from section as s
      join paradoxes as p on (s.parent_id = p.id)
     where s.id <> all(p.ids) -- to break loops
)
select * from cte
union all
select * from paradoxes;
Result:
┌────┬───────────┬───────────┬─────────┬────────────────────────┐
│ id │ parent_id │ name │ ids │ names │
├────┼───────────┼───────────┼─────────┼────────────────────────┤
│ 1 │ ░░░░ │ animal │ {1} │ {animal} │
│ 2 │ ░░░░ │ mineral │ {2} │ {mineral} │
│ 3 │ ░░░░ │ vegetable │ {3} │ {vegetable} │
│ 4 │ 1 │ dog │ {4,1} │ {dog,animal} │
│ 5 │ 1 │ cat │ {5,1} │ {cat,animal} │
│ 8 │ 3 │ carrot │ {8,3} │ {carrot,vegetable} │
│ 9 │ 3 │ lettuce │ {9,3} │ {lettuce,vegetable} │
│ 6 │ 4 │ doberman │ {6,4,1} │ {doberman,dog,animal} │
│ 7 │ 4 │ dachshund │ {7,4,1} │ {dachshund,dog,animal} │
│ 10 │ 11 │ paradox1 │ {10} │ {paradox1} │
│ 11 │ 10 │ paradox2 │ {11} │ {paradox2} │
│ 11 │ 10 │ paradox2 │ {11,10} │ {paradox2,paradox1} │
│ 10 │ 11 │ paradox1 │ {10,11} │ {paradox1,paradox2} │
└────┴───────────┴───────────┴─────────┴────────────────────────┘
Demo
As you can see, the result includes two unwanted rows: {10}, {paradox1} and {11}, {paradox2}. It is up to you how to filter them out.
And it is not clear what the desired result would be if you appended yet another row, e.g. INSERT INTO section VALUES (12, 10, 'paradox3'); for instance.
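For instance, if the single-node seed rows for cycles are never wanted, one way to drop them is to require more than one element in the path for the paradoxes branch. This is only a sketch and assumes every cycle has length at least 2:

```sql
select * from cte
union all
select * from paradoxes
 where cardinality(ids) > 1;  -- drop the single-node seed rows {10} and {11}
```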

Could not choose a best candidate function. You might need to add explicit type casts

select
c_elementvalue.value AS "VALUE",
c_elementvalue.name AS "NAME",
rv_fact_acct.postingtype AS "POSTINGTYPE",
sum(rv_fact_acct.amtacct) AS "AMNT",
'YTDB' AS "TYPE",
c_period.enddate AS "ENDDATE",
max(ad_client.description) AS "COMPANY"
from
adempiere.c_period,
adempiere.rv_fact_acct,
adempiere.c_elementvalue,
adempiere.ad_client
where
(rv_fact_acct.ad_client_id = ad_client.ad_client_id ) and
(rv_fact_acct.c_period_id = c_period.c_period_id) and
(rv_fact_acct.account_id = c_elementvalue.c_elementvalue_id) and
(rv_fact_acct.dateacct BETWEEN to_date( to_char( '2017-03-01' ,'YYYY') ||'-04-01', 'yyyy-mm-dd') AND '2017-03-31' ) AND
(rv_fact_acct.ad_client_id = 1000000) and
(rv_fact_acct.c_acctschema_id = 1000000 )and
(rv_fact_acct.postingtype = 'B')and
(rv_fact_acct.accounttype in ('R','E') )
group by c_elementvalue.value , c_elementvalue.name , rv_fact_acct.postingtype , c_period.enddate
order by 5 asc, 1 asc
I got an error message when executing the above SQL statement (Postgres).
Error message:
[Err] ERROR: function to_char(unknown, unknown) is not unique
LINE 68: (rv_fact_acct.dateacct BETWEEN to_date( to_char( '2017-03-...
^
HINT: Could not choose a best candidate function. You might need to add explicit type casts.
This part of your query is problematic:
to_date( to_char( '2017-03-01' ,'YYYY') ||'-04-01', 'yyyy-mm-dd')
There is no to_char function that takes a string as its first parameter.
postgres=# \df to_char
List of functions
┌────────────┬─────────┬──────────────────┬───────────────────────────────────┬────────┐
│ Schema │ Name │ Result data type │ Argument data types │ Type │
╞════════════╪═════════╪══════════════════╪═══════════════════════════════════╪════════╡
│ pg_catalog │ to_char │ text │ bigint, text │ normal │
│ pg_catalog │ to_char │ text │ double precision, text │ normal │
│ pg_catalog │ to_char │ text │ integer, text │ normal │
│ pg_catalog │ to_char │ text │ interval, text │ normal │
│ pg_catalog │ to_char │ text │ numeric, text │ normal │
│ pg_catalog │ to_char │ text │ real, text │ normal │
│ pg_catalog │ to_char │ text │ timestamp without time zone, text │ normal │
│ pg_catalog │ to_char │ text │ timestamp with time zone, text │ normal │
└────────────┴─────────┴──────────────────┴───────────────────────────────────┴────────┘
(8 rows)
You can cast the string 2017-03-01 to the date type. PostgreSQL cannot do it by itself, because there are multiple candidate types: numeric, timestamp, ...
postgres=# select to_date( to_char( '2017-03-01'::date ,'YYYY') ||'-04-01', 'yyyy-mm-dd');
┌────────────┐
│ to_date │
╞════════════╡
│ 2017-04-01 │
└────────────┘
(1 row)
Usually, using string operations for date arithmetic is wrong. PostgreSQL (like all SQL databases) has great functions for date arithmetic.
For example, the task "get the first date of the following month" can be done with the expression:
postgres=# select date_trunc('month', current_date + interval '1month')::date;
┌────────────┐
│ date_trunc │
╞════════════╡
│ 2017-05-01 │
└────────────┘
(1 row)
You can write a custom function in the SQL language (effectively a macro):
postgres=# create or replace function next_month(date)
returns date as $$
select date_trunc('month', $1 + interval '1month')::date $$
language sql;
CREATE FUNCTION
postgres=# select next_month(current_date);
┌────────────┐
│ next_month │
╞════════════╡
│ 2017-05-01 │
└────────────┘
(1 row)
It isn't clear what logic you intend to use for filtering by the account date, but your current use of to_char() and to_date() appears to be the cause of the error. If you just want to grab records from March 2017, then use the following:
rv_fact_acct.dateacct BETWEEN '2017-03-01' AND '2017-03-31'
If you give us more information about what you are trying to do, this can be updated accordingly.
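Note that BETWEEN with a fixed end date is fragile if dateacct is a timestamp rather than a date, because rows with a time of day after midnight on March 31 would be missed. A half-open range is safer and works for both types:

```sql
rv_fact_acct.dateacct >= date '2017-03-01'
and rv_fact_acct.dateacct <  date '2017-04-01'
```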

Postgresql 9.6 concatenation of varchar

I wonder why the concatenation of two varchar values gives me a text type in the result.
select 'Plural'::varchar || 'sight'::varchar;
I see the result type 'text' of the concatenation in the output of PGAdmin3 (server: 9.4).
test=> \doS ||
List of operators
┌────────────┬──────┬───────────────┬────────────────┬─────────────┬─────────────────────────────────────┐
│ Schema │ Name │ Left arg type │ Right arg type │ Result type │ Description │
├────────────┼──────┼───────────────┼────────────────┼─────────────┼─────────────────────────────────────┤
│ pg_catalog │ || │ anyarray │ anyarray │ anyarray │ concatenate │
│ pg_catalog │ || │ anyarray │ anyelement │ anyarray │ append element onto end of array │
│ pg_catalog │ || │ anyelement │ anyarray │ anyarray │ prepend element onto front of array │
│ pg_catalog │ || │ anynonarray │ text │ text │ concatenate │
│ pg_catalog │ || │ bit varying │ bit varying │ bit varying │ concatenate │
│ pg_catalog │ || │ bytea │ bytea │ bytea │ concatenate │
│ pg_catalog │ || │ jsonb │ jsonb │ jsonb │ concatenate │
│ pg_catalog │ || │ text │ anynonarray │ text │ concatenate │
│ pg_catalog │ || │ text │ text │ text │ concatenate │
│ pg_catalog │ || │ tsquery │ tsquery │ tsquery │ OR-concatenate │
│ pg_catalog │ || │ tsvector │ tsvector │ tsvector │ concatenate │
└────────────┴──────┴───────────────┴────────────────┴─────────────┴─────────────────────────────────────┘
(11 rows)
There is no || operator for varchar. What happens is that PostgreSQL casts the varchar to text (that is the preferred type in this type category).
The result of the operation will then be text as well.
Table 9-8. SQL String Functions and Operators
string || string - return type text
Also read up on type conversion, e.g. "String Concatenation Operator Type Resolution".
Also, text is the default string type in Postgres, so when you mix string types the result will default to text.
Because the operators were created that way:
https://www.postgresql.org/docs/9.4/static/functions-string.html
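You can verify the resolved type with pg_typeof, and cast the result back to varchar if you really need that type:

```sql
select pg_typeof('Plural'::varchar || 'sight'::varchar);
-- text

select pg_typeof(('Plural'::varchar || 'sight'::varchar)::varchar(20));
-- character varying
```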