I wonder why concatenating two varchar values gives me a result of type text.
select 'Plural'::varchar || 'sight'::varchar;
I see the type text for the concatenation in the output of pgAdmin III (server: 9.4).
test=> \doS ||
List of operators
┌────────────┬──────┬───────────────┬────────────────┬─────────────┬─────────────────────────────────────┐
│ Schema │ Name │ Left arg type │ Right arg type │ Result type │ Description │
├────────────┼──────┼───────────────┼────────────────┼─────────────┼─────────────────────────────────────┤
│ pg_catalog │ || │ anyarray │ anyarray │ anyarray │ concatenate │
│ pg_catalog │ || │ anyarray │ anyelement │ anyarray │ append element onto end of array │
│ pg_catalog │ || │ anyelement │ anyarray │ anyarray │ prepend element onto front of array │
│ pg_catalog │ || │ anynonarray │ text │ text │ concatenate │
│ pg_catalog │ || │ bit varying │ bit varying │ bit varying │ concatenate │
│ pg_catalog │ || │ bytea │ bytea │ bytea │ concatenate │
│ pg_catalog │ || │ jsonb │ jsonb │ jsonb │ concatenate │
│ pg_catalog │ || │ text │ anynonarray │ text │ concatenate │
│ pg_catalog │ || │ text │ text │ text │ concatenate │
│ pg_catalog │ || │ tsquery │ tsquery │ tsquery │ OR-concatenate │
│ pg_catalog │ || │ tsvector │ tsvector │ tsvector │ concatenate │
└────────────┴──────┴───────────────┴────────────────┴─────────────┴─────────────────────────────────────┘
(11 rows)
There is no || operator for varchar. What happens is that PostgreSQL casts the varchar values to text (the preferred type in this type category).
The result of the operation is then text as well.
Table 9-8. SQL String Functions and Operators
string || string - return type text
Also read up on Type Conversion, e.g. String Concatenation Operator Type Resolution.
Also, text is the default string type in Postgres, so when you mix string data types the result defaults to text.
Because they were created that way:
https://www.postgresql.org/docs/9.4/static/functions-string.html
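You can verify the resolved type directly with pg_typeof, a sketch that should work on any reasonably recent server:

```sql
-- pg_typeof reports the type the expression resolved to
SELECT pg_typeof('Plural'::varchar || 'sight'::varchar);  -- → text
```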
We have an app which displays a table. This is what the data looks like in the database:
┌──────────┬──────────────┬─────────────┬────────────┬──────────┬──────────────────┐
│ BatchId │ ProductCode │ StageValue │ StageUnit │ StageId │ StageLineNumber │
├──────────┼──────────────┼─────────────┼────────────┼──────────┼──────────────────┤
│ 0B001 │ 150701 │ LEDI2B4015 │ │ 37222 │ 1 │
│ 0B001 │ 150701 │ 16.21 │ KG │ 37222 │ 1 │
│ 0B001 │ 150701 │ 73.5 │ │ 37222 │ 2 │
│ 0B001 │ 150701 │ LEDI2B6002 │ KG │ 37222 │ 2 │
└──────────┴──────────────┴─────────────┴────────────┴──────────┴──────────────────┘
I would like to query the database so that the output looks like this:
┌──────────┬──────────────┬────────────────────┬─────────────┬────────────┬──────────┬──────────────────┐
│ BatchId │ ProductCode │ LoadedProductCode │ StageValue │ StageUnit │ StageId │ StageLineNumber │
├──────────┼──────────────┼────────────────────┼─────────────┼────────────┼──────────┼──────────────────┤
│ 0B001 │ 150701 │ LEDI2B4015 │ 16.21 │ KG │ 37222 │ 1 │
│ 0B001 │ 150701 │ LEDI2B6002 │ 73.5 │ KG │ 37222 │ 2 │
└──────────┴──────────────┴────────────────────┴─────────────┴────────────┴──────────┴──────────────────┘
Is that even possible?
My PostgreSQL Server version is 14.X
I have looked at many threads about "merge two columns and add a new one", but none of them seems to be what I want.
DB Fiddle link
SQL Fiddle (in case) link
It's possible to get your output, but it's going to be error-prone. You should seriously rethink your data model if at all possible: storing floats as text and parsing them back is going to lead to many problems.
That said, here's a query that will work, at least for your sample data:
SELECT batchid,
       productcode,
       max(stagevalue) FILTER (WHERE stagevalue ~ '^[a-zA-Z].*') AS loadedproductcode,
       max(stagevalue::float) FILTER (WHERE stagevalue !~ '^[a-zA-Z].*') AS stagevalue,
       max(stageunit) AS stageunit,
       stageid,
       stagelinenumber
FROM datas
GROUP BY batchid, productcode, stageid, stagelinenumber;
Note that max is just used because you need an aggregate function to combine with the filter. You could replace it with min and get the same result, at least for these data.
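For reference, the same query can be tried as a self-contained sketch with the sample rows from the question inlined in a VALUES list (the table name datas is taken from the query above):

```sql
-- Self-contained version: sample data inlined as a CTE
WITH datas (batchid, productcode, stagevalue, stageunit, stageid, stagelinenumber) AS (
    VALUES ('0B001', '150701', 'LEDI2B4015', NULL, 37222, 1),
           ('0B001', '150701', '16.21',      'KG', 37222, 1),
           ('0B001', '150701', '73.5',       NULL, 37222, 2),
           ('0B001', '150701', 'LEDI2B6002', 'KG', 37222, 2)
)
SELECT batchid,
       productcode,
       max(stagevalue) FILTER (WHERE stagevalue ~ '^[a-zA-Z].*') AS loadedproductcode,
       max(stagevalue::float) FILTER (WHERE stagevalue !~ '^[a-zA-Z].*') AS stagevalue,
       max(stageunit) AS stageunit,
       stageid,
       stagelinenumber
FROM datas
GROUP BY batchid, productcode, stageid, stagelinenumber
ORDER BY stagelinenumber;
```

Note that max(stageunit) works here because aggregate functions ignore NULLs, so the one non-NULL 'KG' per group survives.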
Let's say I have a Table people with the following columns:
name/string, mothers_hierarchy/ltree
"josef", "maria.jenny.lisa"
How do I find all mothers of Josef in the people Table?
I'm looking for an expression like this one, but one that actually works:
SELECT * FROM people WHERE name IN (
    SELECT mothers_hierarchy FROM people WHERE name = 'josef'
)
You can cast the names to ltree and then use index() to see if they are contained:
# select * from people;
┌───────┬───────────────────────┐
│ name │ mothers_hierarchy │
├───────┼───────────────────────┤
│ josef │ maria.jenny.lisa │
│ maria │ maria │
│ jenny │ maria.jenny │
│ lisa │ maria.jenny.lisa │
│ kate │ maria.jenny.lisa.kate │
└───────┴───────────────────────┘
(5 rows)
# select *
from people j
join people m
on index(j.mothers_hierarchy, m.name::ltree) >= 0
where j.name = 'josef';
┌───────┬───────────────────┬───────┬───────────────────┐
│ name │ mothers_hierarchy │ name │ mothers_hierarchy │
├───────┼───────────────────┼───────┼───────────────────┤
│ josef │ maria.jenny.lisa │ maria │ maria │
│ josef │ maria.jenny.lisa │ jenny │ maria.jenny │
│ josef │ maria.jenny.lisa │ lisa │ maria.jenny.lisa │
└───────┴───────────────────┴───────┴───────────────────┘
(3 rows)
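An alternative sketch uses ltree's lquery pattern matching instead of index(): the pattern '*.name.*' matches any path that contains that label anywhere. Assuming the same people table:

```sql
-- Build an lquery from each candidate mother's name and match it
-- against josef's path; returns the same three ancestors
SELECT m.*
FROM people j
JOIN people m
  ON j.mothers_hierarchy ~ ('*.' || m.name || '.*')::lquery
WHERE j.name = 'josef';
```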
How can a select statement that produces an indented tree:
ROOT
└── 4C403FD6
└── CD7AF8E1
└── A5E58A3F
└── 84B543BE
└── 7FFFC907
└── 08302734
└── AB25CF41
└── 6BDCBEAF
└── 7AC84293
└── 235C1120
└── EA412283
└── 5C5E94E4
be revised to also include same-level connecting lines:
ROOT
└── 4C403FD6
│ └── CD7AF8E1
│ └── A5E58A3F
│ └── 84B543BE
│ │ └── 7FFFC907
│ │ │ └── 08302734
│ │ │ └── AB25CF41
│ │ │ └── 6BDCBEAF
│ │ │ └── 7AC84293
│ │ └── 235C1120
│ └── EA412283
└── 5C5E94E4
CREATE TABLE Dev.Tree ([Hierarchy] [sys].[hierarchyid] NOT NULL, Token NVARCHAR(10) NOT NULL);
INSERT INTO Dev.Tree (Hierarchy, Token)
VALUES
(N'/', N'ROOT'),
(N'/0/', N'4C403FD6'),
(N'/0/1/', N'CD7AF8E1'),
(N'/0/1/1/', N'A5E58A3F'),
(N'/0/1/1/1/', N'84B543BE'),
(N'/0/1/1/2/', N'EA412283'),
(N'/0/1/1/1/0/', N'7FFFC907'),
(N'/0/1/1/1/0/1/', N'08302734'),
(N'/0/1/1/1/0/1/', N'AB25CF41'),
(N'/0/1/1/1/0/1/1/', N'6BDCBEAF'),
(N'/0/1/1/1/0/1/1/', N'7AC84293'),
(N'/0/1/1/1/1/', N'235C1120'),
(N'/1/', N'5C5E94E4');
-- U+202F: Narrow No-Break Space ↓ (good w/Consolas)
SELECT IIF(Hierarchy.GetLevel()=0, Token, CONCAT(REPLICATE(N' ', Hierarchy.GetLevel()-1), N'└── ', Token)) AS Tree
FROM Dev.Tree
ORDER BY Hierarchy;
I have a table with below structure in postgres where id is the primary key.
┌──────────────────────────────────┬────────┬───────────┬──────────┬─────────┬──────────┬──────────────┬─────────────┐
│ Column                           │ Type   │ Collation │ Nullable │ Default │ Storage  │ Stats target │ Description │
├──────────────────────────────────┼────────┼───────────┼──────────┼─────────┼──────────┼──────────────┼─────────────┤
│ id                               │ bigint │           │          │         │ plain    │              │             │
│ requested_external_total_taxable │ bigint │           │          │         │ plain    │              │             │
│ requested_external_total_tax     │ bigint │           │          │         │ plain    │              │             │
│ store_address.country            │ text   │           │          │         │ extended │              │             │
│ store_address.city               │ text   │           │          │         │ extended │              │             │
│ store_address.postal_code        │ text   │           │          │         │ extended │              │             │
└──────────────────────────────────┴────────┴───────────┴──────────┴─────────┴──────────┴──────────────┴─────────────┘
I want to convert the store_address fields to a jsonb column.
┌──────────────────────────────────┬────────┬───────────┬──────────┬─────────┬──────────┬──────────────┬─────────────┐
│ Column                           │ Type   │ Collation │ Nullable │ Default │ Storage  │ Stats target │ Description │
├──────────────────────────────────┼────────┼───────────┼──────────┼─────────┼──────────┼──────────────┼─────────────┤
│ id                               │ bigint │           │          │         │ plain    │              │             │
│ requested_external_total_taxable │ bigint │           │          │         │ plain    │              │             │
│ requested_external_total_tax     │ bigint │           │          │         │ plain    │              │             │
│ store_address                    │ jsonb  │           │          │         │ extended │              │             │
└──────────────────────────────────┴────────┴───────────┴──────────┴─────────┴──────────┴──────────────┴─────────────┘
Is there an efficient way of doing this?
You will need to add a new column and UPDATE the table to populate the new jsonb column. After that you can drop the old columns:
alter table the_table
    add store_address jsonb;
update the_table
    set store_address = jsonb_build_object('country', "store_address.country",
                                           'city', "store_address.city",
                                           'postal_code', "store_address.postal_code");
alter table the_table
    drop "store_address.country",
    drop "store_address.city",
    drop "store_address.postal_code";
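If some of the address columns can be NULL, jsonb_build_object will store JSON nulls for those keys. A hedged variant (same assumed table name the_table) uses jsonb_strip_nulls to drop those keys instead:

```sql
-- Variant: omit NULL address parts from the resulting jsonb value
UPDATE the_table
   SET store_address = jsonb_strip_nulls(
         jsonb_build_object(
           'country',     "store_address.country",
           'city',        "store_address.city",
           'postal_code', "store_address.postal_code"));
```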
select
    c_elementvalue.value AS "VALUE",
    c_elementvalue.name AS "NAME",
    rv_fact_acct.postingtype AS "POSTINGTYPE",
    sum(rv_fact_acct.amtacct) AS "AMNT",
    'YTDB' AS "TYPE",
    c_period.enddate AS "ENDDATE",
    max(ad_client.description) AS "COMPANY"
from
    adempiere.c_period,
    adempiere.rv_fact_acct,
    adempiere.c_elementvalue,
    adempiere.ad_client
where
    (rv_fact_acct.ad_client_id = ad_client.ad_client_id) and
    (rv_fact_acct.c_period_id = c_period.c_period_id) and
    (rv_fact_acct.account_id = c_elementvalue.c_elementvalue_id) and
    (rv_fact_acct.dateacct BETWEEN to_date( to_char( '2017-03-01' ,'YYYY') ||'-04-01', 'yyyy-mm-dd') AND '2017-03-31') and
    (rv_fact_acct.ad_client_id = 1000000) and
    (rv_fact_acct.c_acctschema_id = 1000000) and
    (rv_fact_acct.postingtype = 'B') and
    (rv_fact_acct.accounttype in ('R','E'))
group by c_elementvalue.value, c_elementvalue.name, rv_fact_acct.postingtype, c_period.enddate
order by 5 asc, 1 asc
I got an error message when executing the above SQL statement (Postgres).
Error message:
[Err] ERROR: function to_char(unknown, unknown) is not unique
LINE 68: (rv_fact_acct.dateacct BETWEEN to_date( to_char( '2017-03-...
^
HINT: Could not choose a best candidate function. You might need to add explicit type casts.
This part of your query is problematic:
to_date( to_char( '2017-03-01' ,'YYYY') ||'-04-01', 'yyyy-mm-dd')
There is no to_char function that takes a string as its first parameter.
postgres=# \df to_char
List of functions
┌────────────┬─────────┬──────────────────┬───────────────────────────────────┬────────┐
│ Schema │ Name │ Result data type │ Argument data types │ Type │
╞════════════╪═════════╪══════════════════╪═══════════════════════════════════╪════════╡
│ pg_catalog │ to_char │ text │ bigint, text │ normal │
│ pg_catalog │ to_char │ text │ double precision, text │ normal │
│ pg_catalog │ to_char │ text │ integer, text │ normal │
│ pg_catalog │ to_char │ text │ interval, text │ normal │
│ pg_catalog │ to_char │ text │ numeric, text │ normal │
│ pg_catalog │ to_char │ text │ real, text │ normal │
│ pg_catalog │ to_char │ text │ timestamp without time zone, text │ normal │
│ pg_catalog │ to_char │ text │ timestamp with time zone, text │ normal │
└────────────┴─────────┴──────────────────┴───────────────────────────────────┴────────┘
(8 rows)
You can cast the string '2017-03-01' to the date type. PostgreSQL cannot do this by itself, because there is more than one candidate type: numeric, timestamp, ...
postgres=# select to_date( to_char( '2017-03-01'::date ,'YYYY') ||'-04-01', 'yyyy-mm-dd');
┌────────────┐
│ to_date │
╞════════════╡
│ 2017-04-01 │
└────────────┘
(1 row)
Using string operations for date arithmetic is usually wrong. PostgreSQL (like every SQL database) has good functions for date arithmetic.
For example, the task "get the first day of the following month" can be done with this expression:
postgres=# select date_trunc('month', current_date + interval '1month')::date;
┌────────────┐
│ date_trunc │
╞════════════╡
│ 2017-05-01 │
└────────────┘
(1 row)
You can write a custom function in the SQL language (it acts like a macro):
postgres=# create or replace function next_month(date)
returns date as $$
select date_trunc('month', $1 + interval '1month')::date $$
language sql;
CREATE FUNCTION
postgres=# select next_month(current_date);
┌────────────┐
│ next_month │
╞════════════╡
│ 2017-05-01 │
└────────────┘
(1 row)
It isn't clear what logic you intend to use for filtering by the account date, but your current use of to_char() and to_date() appears to be the cause of the error. If you just want to grab records from March 2017, then use the following:
rv_fact_acct.dateacct BETWEEN '2017-03-01' AND '2017-03-31'
If you give us more information about what you are trying to do, this can be updated accordingly.
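One hedged refinement, assuming the same table and column names: if dateacct is a timestamp rather than a date, BETWEEN ... AND '2017-03-31' silently excludes rows with a time-of-day later on March 31, so a half-open range is safer:

```sql
-- Half-open range: covers all of March 2017 regardless of time-of-day
rv_fact_acct.dateacct >= DATE '2017-03-01'
AND rv_fact_acct.dateacct <  DATE '2017-04-01'
```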