What's the meaning of select attributeName(tableName) from tableName in PostgreSQL?

Using PostgreSQL, I ran into apparently strange behavior that I don't understand.
Assume we have a simple table:
create table employee (
    number int primary key,
    surname varchar(20) not null,
    name varchar(20) not null);
The meaning of this is perfectly clear to me:
select name from employee
However, I also obtain all the names with
select name(employee) from employee
and I do not understand this last statement.
I'm using PostgreSQL 13 and pgAdmin 4.

I'd like to expand @Abelisto's answer with this quotation from the PostgreSQL docs:
Another special syntactical behavior associated with composite values is that we can use functional notation for extracting a field of a composite value. The simple way to explain this is that the notations field(table) and table.field are interchangeable. For example, these queries are equivalent:
SELECT c.name FROM inventory_item c WHERE c.price > 1000;
SELECT name(c) FROM inventory_item c WHERE price(c) > 1000;
...
This equivalence between functional notation and field notation makes it possible to use functions on composite types to implement “computed fields”. An application using the last query above wouldn't need to be directly aware that somefunc isn't a real column of the table.
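To make that concrete with the employee table from the question, here is a minimal sketch of such a "computed field" (the function full_name is made up for illustration):
create function full_name(e employee) returns text
    language sql
    as $$ select e.surname || ' ' || e.name $$;
-- Both notations now work as if full_name were a real column:
select full_name(employee) from employee;
select employee.full_name from employee;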

Just an assumption.
There are two syntactic ways in PostgreSQL to call a function that receives a row as its argument. For example:
create table t(x int, y int); insert into t values(1, 2);
create function f(a t) returns int language sql as 'select a.x+a.y';
select f(t), t.f from t;
┌───┬───┐
│ f │ f │
├───┼───┤
│ 3 │ 3 │
└───┴───┘
Probably the same notation is also implemented for columns, so the syntax stays uniform:
select f(t), t.f, x(t), t.x from t;
┌───┬───┬───┬───┐
│ f │ f │ x │ x │
├───┼───┼───┼───┤
│ 3 │ 3 │ 1 │ 1 │
└───┴───┴───┴───┘

Related

Formatting an hstore column in Postgres

I'm trying to find the best way to format an hstore column (see screenshot); my goal is to have the same format as the "updated_column" shown in the screenshot. I was thinking about a case statement like:
case when json_column -> 'id' then 'id:'
Any suggestion would be appreciated.
Migration approach:
Add a new column with type text, formatted the way you want it.
Make sure new data directly enters the new column as the string you want (pre-formatted at the backend).
Create a migration function that converts the json column data batchwise into your new string column (see the sketch below). You can use Postgres replace/regexp operations to reformat it, or an external Python script.
Remove the json column after the migration is done.
Let us know what you have tried and how, and then we can see how to improve on it or solve your issues.
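A minimal sketch of the migration step, where all names are hypothetical (table player, old column json_column, new column formatted_text, integer key id):
-- Hypothetical names; adjust to your schema.
-- Updating one id range at a time keeps each transaction small.
update player
set formatted_text = regexp_replace(json_column::text, '["{}]+', '', 'g')
where id between 1 and 10000
  and formatted_text is null;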
So I think I found a temporary solution that will work, but as @Bergi mentioned, a view might be more appropriate.
For now I will just use something like:
concat('id', ':', column -> 'id',
       ' ', 'auth_id', ':', column -> 'auth_id',
       ' ', 'type', ':', column -> 'type',
       ' ', 'transaction', ':', column -> 'transaction')
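As @Bergi suggested, the same formatting could also live in a view so it is defined once; a hypothetical sketch (the names player and json_column are made up):
create view player_formatted as
select id,
       concat('id', ':', json_column -> 'id',
              ' ', 'auth_id', ':', json_column -> 'auth_id',
              ' ', 'type', ':', json_column -> 'type',
              ' ', 'transaction', ':', json_column -> 'transaction') as formatted
from player;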
You can use a function to make it generic.
Let's start with an example:
select '{"a":1,"b":2}'::json;
┌───────────────┐
│ json │
├───────────────┤
│ {"a":1,"b":2} │
└───────────────┘
(1 row)
Back to text:
select '{"a":1,"b":2}'::json::text;
┌───────────────┐
│ text │
├───────────────┤
│ {"a":1,"b":2} │
└───────────────┘
(1 row)
Now, remove the undesired tokens {}" with a regex:
select regexp_replace('{"a":1,"b":2}'::json::varchar, '["{}]+', '', 'g');
┌────────────────┐
│ regexp_replace │
├────────────────┤
│ a:1,b:2 │
└────────────────┘
(1 row)
and you can wrap it into a function:
create function text_from_json(json) returns text as $$select regexp_replace($1::text, '["{}]+', '', 'g')$$ language sql;
CREATE FUNCTION
Testing the function now:
tsdb=> select text_from_json('{"a":1,"b":2}'::json);
┌────────────────┐
│ text_from_json │
├────────────────┤
│ a:1,b:2 │
└────────────────┘
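Applied to a table column, the call would look like this (hypothetical table player and column json_column):
select text_from_json(json_column) as formatted from player;
One caveat: the regex strips every {, } and " character wherever it occurs, including inside string values, so this is only safe for flat JSON whose values don't contain those characters.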

Can a Postgres daterange include infinity as an upper bound?

I can't see how to create a daterange with infinity as an inclusive upper bound. Postgres converts both inputs to an exclusive upper bound:
create table dt_overlap (
    id serial primary key,
    validity daterange not null
);
insert into dt_overlap (validity) values
    ('["2019-01-01", infinity]'),
    ('["2019-02-02", infinity)');
table dt_overlap;
 id │       validity
────┼───────────────────────
  1 │ [2019-01-01,infinity)
  2 │ [2019-02-02,infinity)
select id,
       upper(validity),
       upper_inf(validity),
       not isfinite(upper(validity)) as is_inf
from dt_overlap;
 id │  upper   │ upper_inf │ is_inf
────┼──────────┼───────────┼────────
  1 │ infinity │ f         │ t
  2 │ infinity │ f         │ t
That both values give the same results is kind of expected, since the inclusive upper bound infinity] was coerced to an exclusive upper bound infinity).
The same problem does not exist for the lower end of the range since the daterange keeps an inclusive lower bound and thus lower_inf() returns true.
Tested and reproduced with PostgreSQL 9.6.5 and PostgreSQL 10.3.
Any ideas?
Another way of creating an unbounded range is to leave out the upper bound completely, e.g. '["2019-01-01",)'. Note that upper_inf() reports true only for a missing (unbounded) bound: infinity is an ordinary date value as far as the range machinery is concerned, and daterange canonicalizes an inclusive upper bound into an exclusive one, which is why infinity] becomes infinity).
with dt_overlap (validity) as (
    values
        ('["2019-01-01", infinity]'::daterange),
        ('["2019-02-01",]'::daterange)
)
select validity,
       upper_inf(validity)
from dt_overlap;
results in
       validity        | upper_inf
-----------------------+-----------
 [2019-01-01,infinity) | false
 [2019-02-01,)         | true
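The practical difference shows up with the containment operator; a quick check (expected behavior, worth verifying on your version):
select daterange('2019-01-01', 'infinity') @> 'infinity'::date;  -- false: infinity is excluded by the ) bound
select daterange('2019-01-01', null) @> 'infinity'::date;        -- true: the range has no upper bound at all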

How to specify PostGIS geography value in a composite type literal?

I have a custom composite type:
CREATE TYPE place AS (
    name text,
    location geography(point, 4326)
);
I want to create a value of that type using a literal:
SELECT $$("name", "ST_GeogFromText('POINT(121.560800 29.901200)')")$$::place;
This fails with:
ERROR: parse error - invalid geometry
HINT: "ST" <-- parse error at position 2 within geometry
But this executes just fine:
SELECT ST_GeogFromText('POINT(121.560800 29.901200)');
I wonder what's the correct way to specify PostGIS geography value in a composite type literal?
You are trying to push a function call, ST_GeogFromText, into a text string. This is not allowed, as it would create a possibility for SQL injection.
In the second call you need ST_GeogFromText to mark the type of the input. For a composite type, you already did that in the type definition, so you can skip that part:
[local] gis@gis=# SELECT $$("name", "POINT(121.560800 29.901200)")$$::place;
┌───────────────────────────────────────────────────────────┐
│ place │
├───────────────────────────────────────────────────────────┤
│ (name,0101000020E610000032E6AE25E4635E40BB270F0BB5E63D40) │
└───────────────────────────────────────────────────────────┘
(1 row)
Time: 0,208 ms
Another option would be to use non-literal form, which allows function calls:
[local] gis@gis=# SELECT ('name', ST_GeogFromText('POINT(121.560800 29.901200)'))::place;
┌───────────────────────────────────────────────────────────┐
│ row │
├───────────────────────────────────────────────────────────┤
│ (name,0101000020E610000032E6AE25E4635E40BB270F0BB5E63D40) │
└───────────────────────────────────────────────────────────┘
(1 row)
Time: 5,004 ms
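For completeness, the non-literal form can also be written with the explicit ROW keyword, which is equivalent:
SELECT ROW('name', ST_GeogFromText('POINT(121.560800 29.901200)'))::place;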

Postgres using named column in where clause

In Postgres I'm struggling with this syntax. It works in MySQL, but I'm not sure what I'm doing wrong.
So let's say I have a json document. I want to select a key from that document and return the result as text.
So my query would look like this.
SELECT member_id, data->>'username' AS username
FROM player.player
Returns this as expected.
Now let's say I want to filter on a name, so my query would look like this.
SELECT member_id, data->>'username' AS username
FROM player.player WHERE username LIKE 'sam'
When I run the query I get an error saying the column does not exist.
Why does it do that? The json value is returned as the text data type, since I'm using ->> on the column.
PostgreSQL follows the SQL standard, and there it is not possible to use an alias at the same query level. You should use a derived table and filter at the higher level:
postgres=# select 1 as x where x = 1;
ERROR: column "x" does not exist
LINE 1: select 1 as x where x = 1;
^
postgres=# select * from (select 1 as x) s where x = 1;
┌───┐
│ x │
╞═══╡
│ 1 │
└───┘
(1 row)
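Applied to the query from the question, the derived-table version would be (a direct translation, untested against the original schema):
SELECT member_id, username
FROM (
    SELECT member_id, data->>'username' AS username
    FROM player.player
) s
WHERE username LIKE 'sam';
Alternatively, you can simply repeat the expression in the WHERE clause: WHERE data->>'username' LIKE 'sam'.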

pg_column_size reports vastly different sizes for table.* than for specific columns

I have a simple example where pg_column_size is reporting vastly different values. I think it has to do with whether or not it's considering TOASTed values, but I'm not sure. Here's the setup:
CREATE TABLE foo (bar TEXT);
INSERT INTO foo (bar) VALUES (repeat('foo', 100000));
SELECT pg_column_size(bar) as col, pg_column_size(foo.*) as table FROM foo;
What I'm seeing in Postgres 9.6 is,
 col  | table
------+--------
 3442 | 300028
There's an order of magnitude difference here. Thoughts? What's the right way for me to calculate the size of the row? One idea I have is,
SELECT pg_column_size(bar), pg_column_size(foo.*) - octet_length(bar) + pg_column_size(bar) FROM foo;
Which should subtract out the post-TOAST size and add in the TOAST size.
Edit: My proposed workaround only works on character columns; e.g., it won't work on JSONB.
The first value is the compressed size of the TOASTed value, while the second value is the uncompressed size of the whole row.
SELECT 'foo'::regclass::oid;
┌───────┐
│ oid │
├───────┤
│ 36344 │
└───────┘
(1 row)
SELECT sum(length(chunk_data)) FROM pg_toast.pg_toast_36344;
┌──────┐
│ sum │
├──────┤
│ 3442 │
└──────┘
(1 row)
foo.* (or foo, for that matter) is a “whole-row reference” in PostgreSQL; its data type is foo (a composite type created automatically when the table is created).
PostgreSQL knows that foo.bar is stored externally, so it returns its size as stored in the TOAST table; the whole-row value foo isn't stored externally, so you get the full uncompressed size.
See the relevant piece of code from src/backend/access/heap/tuptoaster.c:
Size
toast_datum_size(Datum value)
{
    struct varlena *attr = (struct varlena *) DatumGetPointer(value);
    Size        result;

    if (VARATT_IS_EXTERNAL_ONDISK(attr))
    {
        /*
         * Attribute is stored externally - return the extsize whether
         * compressed or not. We do not count the size of the toast pointer
         * ... should we?
         */
        struct varatt_external toast_pointer;

        VARATT_EXTERNAL_GET_POINTER(toast_pointer, attr);
        result = toast_pointer.va_extsize;
    }
    [...]
    else
    {
        /*
         * Attribute is stored inline either compressed or not, just calculate
         * the size of the datum in either case.
         */
        result = VARSIZE(attr);
    }
    return result;
}
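If the underlying goal is storage accounting rather than per-row arithmetic, the table-level size functions may be a better fit; a quick sketch:
SELECT pg_relation_size('foo') AS heap_bytes,          -- main heap only, without TOAST
       pg_total_relation_size('foo') AS total_bytes;   -- heap plus TOAST and indexes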