Unnesting arrays of different dimensions - postgresql

I am trying to use the unnest optimisation to bundle rows into a single insert and reduce the number of RTTs to the database.
My basic insert query looks like so:
INSERT INTO table (fielda, fieldb, fieldc)
SELECT fielda, fieldb, fieldc
FROM unnest($1, $2, $3) AS a (fielda, fieldb, fieldc)
And this works beautifully, so long as fieldc is a scalar value.
If fieldc happens to be a real[] for example, and thus $3 is a real[][], this no longer works because unnest unnests to reals, not real[]s.
Somehow I need to get unnest to produce rows, only unnested by 1 level. There is a very helpful existing answer here: https://stackoverflow.com/a/8142998/15531756 which implements a function for unnesting a single array, but this creates a set of values, rather than a set of rows containing everything I need. As far as I am aware I can't join this array onto the normal unnest for the other parameters.
So how would one insert bundled data like this where a column has an array value type?
At the moment I have resorted to dynamically generating a query
INSERT INTO table (fielda, fieldb, fieldc)
VALUES
($1, $2, $3),
($4, $5, $6)
... and so on
but this is markedly slower, even when re-using the same prepared statement.
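One possible direction (a sketch only, with hypothetical column types, and not tested against your schema; tbl stands for your target table): Postgres treats real[] and real[][] as the same type, so you can step along the first dimension with generate_subscripts, slice out one sub-array per input row, and re-align the slices with the scalar parameters via WITH ORDINALITY (Postgres 9.4+):
INSERT INTO tbl (fielda, fieldb, fieldc)
SELECT a.fielda, b.fieldb
     , (SELECT array_agg(v ORDER BY rn)  -- flatten the 1xN slice back to a plain real[]
        FROM   unnest(($3::real[])[g.i:g.i]) WITH ORDINALITY AS s(v, rn))
FROM   generate_subscripts($3::real[], 1) AS g(i)
JOIN   unnest($1::text[]) WITH ORDINALITY AS a(fielda, ord) ON a.ord = g.i
JOIN   unnest($2::int[])  WITH ORDINALITY AS b(fieldb, ord) ON b.ord = g.i;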

Return identifier of conflicted insert in postgres [duplicate]

I have the following UPSERT in PostgreSQL 9.5:
INSERT INTO chats ("user", "contact", "name")
VALUES ($1, $2, $3),
($2, $1, NULL)
ON CONFLICT("user", "contact") DO NOTHING
RETURNING id;
If there are no conflicts it returns something like this:
 id
----
 50
 51
(2 rows)
But if there are conflicts it doesn't return any rows:
 id
----
(0 rows)
I want to return the new id values if there are no conflicts, or the existing id values of the conflicting rows.
Can this be done? If so, how?
The currently accepted answer seems ok for a single conflict target, few conflicts, small tuples and no triggers. It avoids concurrency issue 1 (see below) with brute force. The simple solution has its appeal; the side effects may be less important.
For all other cases, though, do not update identical rows without need. Even if you see no difference on the surface, there are various side effects:
It might fire triggers that should not be fired.
It write-locks "innocent" rows, possibly incurring costs for concurrent transactions.
It might make the row seem new, though it's old (transaction timestamp).
Most importantly, with PostgreSQL's MVCC model UPDATE writes a new row version for every target row, no matter whether the row data changed. This incurs a performance penalty for the UPSERT itself, table bloat, index bloat, performance penalty for subsequent operations on the table, VACUUM cost. A minor effect for few duplicates, but massive for mostly dupes.
Plus, sometimes it is not practical or even possible to use ON CONFLICT DO UPDATE. The manual:
For ON CONFLICT DO UPDATE, a conflict_target must be provided.
A single "conflict target" is not possible if multiple indexes / constraints are involved. But here is a related solution for multiple partial indexes:
UPSERT based on UNIQUE constraint with NULL values
Back on the topic, you can achieve (almost) the same without empty updates and side effects. Some of the following solutions also work with ON CONFLICT DO NOTHING (no "conflict target"), to catch all possible conflicts that might arise - which may or may not be desirable.
Without concurrent write load
WITH input_rows(usr, contact, name) AS (
   VALUES
      (text 'foo1', text 'bar1', text 'bob1')  -- type casts in first row
    , ('foo2', 'bar2', 'bob2')
    -- more?
   )
, ins AS (
   INSERT INTO chats (usr, contact, name)
   SELECT * FROM input_rows
   ON CONFLICT (usr, contact) DO NOTHING
   RETURNING id  --, usr, contact  -- return more columns?
   )
SELECT 'i' AS source  -- 'i' for 'inserted'
     , id             --, usr, contact  -- return more columns?
FROM   ins
UNION  ALL
SELECT 's' AS source  -- 's' for 'selected'
     , c.id           --, usr, contact  -- return more columns?
FROM   input_rows
JOIN   chats c USING (usr, contact);  -- columns of unique index
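For illustration, with one newly inserted and one pre-existing input row, the result might look like this (ids hypothetical):
 source | id
--------+----
 i      | 52
 s      | 50
(2 rows)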
The source column is an optional addition to demonstrate how this works. You may actually need it to tell the difference between both cases (another advantage over empty writes).
The final JOIN chats works because newly inserted rows from an attached data-modifying CTE are not yet visible in the underlying table. (All parts of the same SQL statement see the same snapshots of underlying tables.)
Since the VALUES expression is free-standing (not directly attached to an INSERT) Postgres cannot derive data types from the target columns and you may have to add explicit type casts. The manual:
When VALUES is used in INSERT, the values are all automatically
coerced to the data type of the corresponding destination column. When
it's used in other contexts, it might be necessary to specify the
correct data type. If the entries are all quoted literal constants,
coercing the first is sufficient to determine the assumed type for all.
The query itself (not counting the side effects) may be a bit more expensive for few dupes, due to the overhead of the CTE and the additional SELECT (which should be cheap since the perfect index is there by definition - a unique constraint is implemented with an index).
May be (much) faster for many duplicates. The effective cost of additional writes depends on many factors.
But there are fewer side effects and hidden costs in any case. It's most probably cheaper overall.
Attached sequences are still advanced, since default values are filled in before testing for conflicts.
About CTEs:
Are SELECT type queries the only type that can be nested?
Deduplicate SELECT statements in relational division
With concurrent write load
Assuming default READ COMMITTED transaction isolation. Related:
Concurrent transactions result in race condition with unique constraint on insert
The best strategy to defend against race conditions depends on exact requirements, the number and size of rows in the table and in the UPSERTs, the number of concurrent transactions, the likelihood of conflicts, available resources and other factors ...
Concurrency issue 1
If a concurrent transaction has written to a row which your transaction now tries to UPSERT, your transaction has to wait for the other one to finish.
If the other transaction ends with ROLLBACK (or any error, i.e. automatic ROLLBACK), your transaction can proceed normally. Minor possible side effect: gaps in sequential numbers. But no missing rows.
If the other transaction ends normally (implicit or explicit COMMIT), your INSERT will detect a conflict (the UNIQUE index / constraint is absolute) and DO NOTHING, hence also not return the row. (Also cannot lock the row as demonstrated in concurrency issue 2 below, since it's not visible.) The SELECT sees the same snapshot from the start of the query and also cannot return the yet invisible row.
Any such rows are missing from the result set (even though they exist in the underlying table)!
This may be ok as is. Especially if you are not returning rows like in the example and are satisfied knowing the row is there. If that's not good enough, there are various ways around it.
You can check the row count of the output and repeat the statement if it does not match the row count of the input. May be good enough for the rare case. The point is to start a new query (can be in the same transaction), which will then see the newly committed rows.
Or check for missing result rows within the same query and overwrite those with the brute force trick demonstrated in Alextoni's answer.
WITH input_rows(usr, contact, name) AS ( ... )  -- see above
, ins AS (
   INSERT INTO chats AS c (usr, contact, name)
   SELECT * FROM input_rows
   ON CONFLICT (usr, contact) DO NOTHING
   RETURNING id, usr, contact  -- we need unique columns for later join
   )
, sel AS (
   SELECT 'i'::"char" AS source  -- 'i' for 'inserted'
        , id, usr, contact
   FROM   ins
   UNION  ALL
   SELECT 's'::"char" AS source  -- 's' for 'selected'
        , c.id, usr, contact
   FROM   input_rows
   JOIN   chats c USING (usr, contact)
   )
, ups AS (  -- RARE corner case
   INSERT INTO chats AS c (usr, contact, name)  -- another UPSERT, not just UPDATE
   SELECT i.*
   FROM   input_rows i
   LEFT   JOIN sel s USING (usr, contact)  -- columns of unique index
   WHERE  s.usr IS NULL                    -- missing!
   ON     CONFLICT (usr, contact) DO UPDATE  -- we've asked nicely the 1st time ...
   SET    name = c.name  -- ... this time we overwrite with old value
   -- SET name = EXCLUDED.name  -- alternatively overwrite with *new* value
   RETURNING 'u'::"char" AS source  -- 'u' for updated
           , id  --, usr, contact  -- return more columns?
   )
SELECT source, id FROM sel
UNION  ALL
TABLE  ups;
It's like the query above, but we add one more step with the CTE ups before we return the complete result set. That last CTE will do nothing most of the time; only if rows go missing from the returned result do we use brute force.
Still more overhead. The more conflicts with pre-existing rows, the more likely this will outperform the simple approach.
One side effect: the 2nd UPSERT writes rows out of order, so it re-introduces the possibility of deadlocks (see below) if three or more transactions writing to the same rows overlap. If that's a problem, you need a different solution - like repeating the whole statement as mentioned above.
Concurrency issue 2
If concurrent transactions can write to involved columns of affected rows, and you have to make sure the rows you found are still there at a later stage in the same transaction, you can lock existing rows cheaply in the CTE ins (which would otherwise go unlocked) with:
...
ON CONFLICT (usr, contact) DO UPDATE
SET name = name WHERE FALSE -- never executed, but still locks the row
...
And add a locking clause to the SELECT as well, like FOR UPDATE.
This makes competing write operations wait till the end of the transaction, when all locks are released. So be brief.
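For illustration, since FOR UPDATE is not allowed directly in a UNION arm, the locking SELECT can be split into its own CTE; a sketch based on the query above:
, sel_locked AS (
   SELECT c.id, c.usr, c.contact
   FROM   input_rows
   JOIN   chats c USING (usr, contact)
   FOR    UPDATE OF c  -- locks only the matched rows in chats
   )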
More details and explanation:
How to include excluded rows in RETURNING from INSERT ... ON CONFLICT
Is SELECT or INSERT in a function prone to race conditions?
Deadlocks?
Defend against deadlocks by inserting rows in consistent order. See:
Deadlock with multi-row INSERTs despite ON CONFLICT DO NOTHING
Data types and casts
Existing table as template for data types ...
Explicit type casts for the first row of data in the free-standing VALUES expression may be inconvenient. There are ways around it. You can use any existing relation (table, view, ...) as row template. The target table is the obvious choice for the use case. Input data is coerced to appropriate types automatically, like in the VALUES clause of an INSERT:
WITH input_rows AS (
(SELECT usr, contact, name FROM chats LIMIT 0) -- only copies column names and types
UNION ALL
VALUES
('foo1', 'bar1', 'bob1') -- no type casts here
, ('foo2', 'bar2', 'bob2')
)
...
This does not work for some data types. See:
Casting NULL type when updating multiple rows
... and names
This also works for all data types.
While inserting into all (leading) columns of the table, you can omit column names. Assuming table chats in the example only consists of the 3 columns used in the UPSERT:
WITH input_rows AS (
   SELECT * FROM (
      VALUES
         ((NULL::chats).*)         -- copies whole row definition
       , ('foo1', 'bar1', 'bob1')  -- no type casts needed
       , ('foo2', 'bar2', 'bob2')
      ) sub
   OFFSET 1
   )
...
Aside: don't use reserved words like "user" as identifier. That's a loaded footgun. Use legal, lower-case, unquoted identifiers. I replaced it with usr.
I had exactly the same problem, and I solved it using 'do update' instead of 'do nothing', even though I had nothing to update. In your case it would be something like this:
INSERT INTO chats ("user", "contact", "name")
VALUES ($1, $2, $3),
($2, $1, NULL)
ON CONFLICT("user", "contact")
DO UPDATE SET
name=EXCLUDED.name
RETURNING id;
This query will return all the rows, regardless of whether they were just inserted or already existed.
WITH e AS(
INSERT INTO chats ("user", "contact", "name")
VALUES ($1, $2, $3),
($2, $1, NULL)
ON CONFLICT("user", "contact") DO NOTHING
RETURNING id
)
SELECT * FROM e
UNION
SELECT id FROM chats WHERE "user" = $1 AND contact = $2;
The main purpose of using ON CONFLICT DO NOTHING is to avoid raising an error, but then no rows are returned for the conflicting inserts. So we need another SELECT to get the existing id.
In this SQL, if the insert fails on a conflict, it returns nothing and the second SELECT fetches the existing row; if it inserts successfully, both branches may produce the same record, so we need UNION (rather than UNION ALL) to merge the results.
Upsert, being an extension of the INSERT query, can be defined with two different behaviors in case of a constraint conflict: DO NOTHING or DO UPDATE.
INSERT INTO upsert_table VALUES (2, 6, 'upserted')
ON CONFLICT DO NOTHING RETURNING *;
id | sub_id | status
----+--------+--------
(0 rows)
Note as well that RETURNING returns nothing, because no tuples have been inserted. Now with DO UPDATE, it is possible to perform operations on the tuple there is a conflict with. First, note that it is important to specify the constraint that will be used to detect the conflict.
INSERT INTO upsert_table VALUES (2, 2, 'inserted')
ON CONFLICT ON CONSTRAINT upsert_table_sub_id_key
DO UPDATE SET status = 'upserted' RETURNING *;
id | sub_id | status
----+--------+----------
2 | 2 | upserted
(1 row)
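The examples above assume a table roughly like this (a hypothetical reconstruction, inferred from the constraint name upsert_table_sub_id_key and the outputs shown):
CREATE TABLE upsert_table (
   id     int PRIMARY KEY,
   sub_id int UNIQUE,  -- creates constraint upsert_table_sub_id_key
   status text
);
INSERT INTO upsert_table VALUES (2, 2, 'inserted');  -- pre-existing row both examples conflict with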
For insertions of a single item, I would probably use a coalesce when returning the id:
WITH new_chats AS (
INSERT INTO chats ("user", "contact", "name")
VALUES ($1, $2, $3)
ON CONFLICT("user", "contact") DO NOTHING
RETURNING id
) SELECT COALESCE(
(SELECT id FROM new_chats),
(SELECT id FROM chats WHERE user = $1 AND contact = $2)
);
For insertions of multiple items, you can put the values in a WITH clause and reference them later:
WITH chats_values("user", "contact", "name") AS (
VALUES ($1, $2, $3),
($4, $5, $6)
), new_chats AS (
INSERT INTO chats ("user", "contact", "name")
SELECT * FROM chats_values
ON CONFLICT("user", "contact") DO NOTHING
RETURNING id
) SELECT id
FROM new_chats
UNION
SELECT chats.id
FROM chats, chats_values
WHERE chats.user = chats_values.user
AND chats.contact = chats_values.contact;
Note: as per Erwin's comment, on the off-chance your application will try to 'upsert' the same data concurrently (two workers trying to insert <unique_field> = 1 at the same time), and such data does not exist in the table yet, you should change the isolation level of your transaction before running the 'upsert':
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
In that specific case, one of the two transactions will be aborted. If this happens a lot in your application, you might want to just do 2 separate queries; otherwise, handling the error and re-doing the query is easier and faster.
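A sketch of how that looks in practice; SET TRANSACTION must come before any query in the transaction, and SQLSTATE 40001 (serialization_failure) is the error the client should catch and retry on:
BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
-- ... the INSERT ... ON CONFLICT DO NOTHING ... RETURNING query from above ...
COMMIT;  -- may abort with SQLSTATE 40001; re-run the whole transaction then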
Building on Erwin's answer above (terrific answer btw, never would have gotten here without it!), this is where I ended up. It solves a couple additional potential problems - it allows for duplicates (which would otherwise throw an error) by doing a select distinct on the input set, and it ensures that the returned IDs exactly match the input set, including the same order and allowing duplicates.
Additionally, and one part that was important for me, it significantly reduces the number of unnecessary sequence advancements using the new_rows CTE to only try inserting the ones that aren't already in there. Considering the possibility of concurrent writes, it will still hit some conflicts in that reduced set, but the later steps will take care of that. In most cases, sequence gaps aren't a big deal, but when you're doing billions of upserts, with a high percentage of conflicts, it can make the difference between using an int or a bigint for the ID.
Despite being big and ugly, it performs extremely well. I tested it extensively with millions of upserts, high concurrency, high numbers of collisions. Rock solid.
I've packaged it as a function, but if that's not what you want it should be easy to see how to translate to pure SQL. I've also changed the example data to something simple.
CREATE TABLE foo
(
bar varchar PRIMARY KEY,
id serial
);
CREATE TYPE ids_type AS (id integer);
CREATE TYPE bars_type AS (bar varchar);
CREATE OR REPLACE FUNCTION upsert_foobars(_vals bars_type[])
RETURNS SETOF ids_type AS
$$
BEGIN
RETURN QUERY
WITH
all_rows AS (
SELECT bar, ordinality
FROM UNNEST(_vals) WITH ORDINALITY
),
dist_rows AS (
SELECT DISTINCT bar
FROM all_rows
),
new_rows AS (
SELECT d.bar
FROM dist_rows d
LEFT JOIN foo f USING (bar)
WHERE f.bar IS NULL
),
ins AS (
INSERT INTO foo (bar)
SELECT bar
FROM new_rows
ORDER BY bar
ON CONFLICT DO NOTHING
RETURNING bar, id
),
sel AS (
SELECT bar, id
FROM ins
UNION ALL
SELECT f.bar, f.id
FROM dist_rows
JOIN foo f USING (bar)
),
ups AS (
INSERT INTO foo AS f (bar)
SELECT d.bar
FROM dist_rows d
LEFT JOIN sel s USING (bar)
WHERE s.bar IS NULL
ORDER BY bar
ON CONFLICT ON CONSTRAINT foo_pkey DO UPDATE
SET bar = f.bar
RETURNING bar, id
),
fin AS (
SELECT bar, id
FROM sel
UNION ALL
TABLE ups
)
SELECT f.id
FROM all_rows a
JOIN fin f USING (bar)
ORDER BY a.ordinality;
END
$$ LANGUAGE plpgsql;
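A hypothetical call, passing a duplicate to show that the ids come back in input order:
SELECT id
FROM   upsert_foobars(ARRAY[ROW('a')::bars_type, ROW('b')::bars_type, ROW('a')::bars_type]);
-- returns the ids for 'a', 'b', 'a': three rows, in that order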
If all you want is to upsert a single row
Then you can simplify things rather significantly by using a simple EXISTS check:
WITH
extant AS (
SELECT id FROM chats WHERE ("user", "contact") = ($1, $2)
),
inserted AS (
INSERT INTO chats ("user", "contact", "name")
SELECT $1, $2, $3
WHERE NOT EXISTS (SELECT NULL FROM extant)
RETURNING id
)
SELECT id FROM inserted
UNION ALL
SELECT id FROM extant
Since there is no ON CONFLICT clause, there is no update – only an insert, and only if necessary. So no unnecessary updates, no unnecessary write locks, no unnecessary sequence increments. No casts required either.
If the write lock was a feature in your use case, you can use SELECT FOR UPDATE in the extant expression.
And if you need to know whether a new row was inserted, you can add a flag column in the top-level UNION:
SELECT id, TRUE AS inserted FROM inserted
UNION ALL
SELECT id, FALSE FROM extant
The simplest, most performant solution is
BEGIN;
INSERT INTO chats ("user", contact, name)
VALUES ($1, $2, $3), ($2, $1, NULL)
ON CONFLICT ("user", contact) DO UPDATE
SET name = excluded.name
WHERE false
RETURNING id;
SELECT id
FROM chats
WHERE ("user", contact) IN (($1, $2), ($2, $1));
COMMIT;
The DO UPDATE WHERE false locks but does not update the row, which is a feature, not a bug, since it ensures that another transaction cannot delete the row.
Some commenters wanted to distinguish between updated and created rows.
In that case, simply add txid_current() = xmin AS created to the select.
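One hedged spelling of that flag: xmin is of type xid, so it needs a cast for the comparison, and this simple form ignores transaction-id epoch wraparound on long-lived clusters:
SELECT id, xmin::text::bigint = txid_current() AS created
FROM   chats
WHERE  ("user", contact) IN (($1, $2), ($2, $1));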
I modified the amazing answer by Erwin Brandstetter, which won't increment the sequence, and also won't write-lock any rows. I'm relatively new to PostgreSQL, so please feel free to let me know if you see any drawbacks to this method:
WITH input_rows(usr, contact, name) AS (
VALUES
(text 'foo1', text 'bar1', text 'bob1') -- type casts in first row
, ('foo2', 'bar2', 'bob2')
-- more?
)
, new_rows AS (
SELECT
r.usr
, r.contact
, r.name
, c.id IS NOT NULL AS row_exists
FROM input_rows AS r
LEFT JOIN chats AS c ON r.usr = c.usr AND r.contact = c.contact
)
INSERT INTO chats (usr, contact, name)
SELECT usr, contact, name
FROM new_rows
WHERE NOT row_exists
RETURNING id, usr, contact, name
This assumes that the table chats has a unique constraint on columns (usr, contact).
Update: added the suggested revisions from spatar (below). Thanks!
Yet another update, per Revinand's comment:
WITH input_rows(usr, contact, name) AS (
VALUES
(text 'foo1', text 'bar1', text 'bob1') -- type casts in first row
, ('foo2', 'bar2', 'bob2')
-- more?
)
, new_rows AS (
INSERT INTO chats (usr, contact, name)
SELECT
r.usr
, r.contact
, r.name
FROM input_rows AS r
LEFT JOIN chats AS c ON r.usr = c.usr AND r.contact = c.contact
WHERE c.id IS NULL
RETURNING id, usr, contact, name
)
SELECT id, usr, contact, name, 'new' as row_type
FROM new_rows
UNION ALL
SELECT c.id, c.usr, c.contact, c.name, 'update' as row_type
FROM input_rows AS ir
INNER JOIN chats AS c ON ir.usr=c.usr AND ir.contact=c.contact
I haven't tested the above, but if you're finding that the newly inserted rows are being returned multiple times, then you can either change the UNION ALL to just UNION or, better, remove the first query altogether.

Transpose/Pivot a table in Postgres

I have been trying for hours to transpose one table into another one this way:
My idea is to take an expression (which can be a simple SELECT * FROM X INNER JOIN Y ...) and transpose it into a MATERIALIZED VIEW.
The problem is that the original table can have an arbitrary number of rows (hence columns in the transposed table). So I was not able to find a working solution, not even with colpivot.
Can this ever be done?
Use conditional aggregation:
select "user",
max(value) filter (where property = 'Name') as name,
max(value) filter (where property = 'Age') as age,
max(value) filter (where property = 'Address') as address
from the_table
group by "user";
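Both this answer and the JSON alternative below assume an EAV-style key/value layout for the_table; a hypothetical minimal version:
CREATE TABLE the_table ("user" text, property text, value text);
INSERT INTO the_table VALUES
  ('u1', 'Name',    'Alice')
, ('u1', 'Age',     '30')
, ('u1', 'Address', 'Main St 1');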
A fundamental restriction of SQL is that all columns of a query must be known to the database before it starts running that query.
There is no way you can have a "dynamic" number of columns (evaluated at runtime) in SQL.
Another alternative is to aggregate everything into a JSON value:
select "user",
jsonb_object_agg(property, value) as properties
from the_table
group by "user";
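With the hypothetical sample data above, this returns one row per user (jsonb orders keys by length, then lexically):
 user |                       properties
------+--------------------------------------------------------
 u1   | {"Age": "30", "Name": "Alice", "Address": "Main St 1"}
(1 row)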

Postgres in-filter complexity

I have a query like this:
SELECT var_2
FROM table_a
WHERE var_1 IN (<Large list of values>);
Let's say table_a has n rows and the large list of values has length t, which is less than n. What is the complexity of this query: O(n), O(n*log(t)), or O(n*t)? I'm using Postgres 10.
Sometimes, rewriting a large IN list as a JOIN against a VALUES list produces a better execution plan.
So a query like this:
select column_2
from the_table
where column_1 in (1,2,3,4);
If the list does not contain duplicate values, the above can be rewritten to:
select t.column_2
from the_table t
join (
values (1),(2),(3),(4)
) as v(c1) on v.c1 = t.column_1
To find out whether that improves the query, you will have to check the execution plan.
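For example, comparing both variants with EXPLAIN (the tables and values are from the snippets above):
EXPLAIN (ANALYZE, BUFFERS)
select t.column_2
from the_table t
join (values (1),(2),(3),(4)) as v(c1) on v.c1 = t.column_1;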

Casting rows to arrays in PostgreSQL

I need to query a table as in
SELECT *
FROM table_schema.table_name
only each row needs to be a TEXT[], with array values corresponding to the column values cast to TEXT, coming in the same order as in SELECT *. So, assuming the table has columns a, b and c, I need the result to look like
SELECT ARRAY[a::TEXT, b::TEXT, c::TEXT]
FROM table_schema.table_name
only it shouldn't explicitly list columns by name. Ideally it should look like
SELECT as_text_array(a)
FROM table_schema.table_name AS a
The best I came up with looks ugly and relies on the "hstore" extension
WITH columnz AS ( -- get ordered column name array
SELECT array_agg(attname::TEXT ORDER BY attnum) AS column_name_array
FROM pg_attribute
WHERE attrelid = 'table_schema.table_name'::regclass AND attnum > 0 AND NOT attisdropped
)
SELECT hstore(a)->(SELECT column_name_array FROM columnz)
FROM table_schema.table_name AS a
I have a feeling there must be a simpler way to achieve this.
UPDATE 1
Another query that achieves the same result, but is arguably as ugly and inefficient as the first one, is inspired by the answer by bspates. It may be even less efficient but doesn't rely on extensions:
SELECT r.text_array
FROM table_schema.table_name AS a
INNER JOIN LATERAL ( -- parse the ROW::TEXT representation of the row
SELECT array_agg(COALESCE(replace(val[1], '""', '"'), NULLIF(val[2], ''))) AS text_array
FROM regexp_matches(a::text, -- parse double-quoted and simple values separated by commas
'(?<=\A\(|,) (?: "( (?:[^"]|"")* )" | ([^,"]*) ) (?=,|\)\Z)', 'xg') AS t(val)
) AS r ON TRUE
It is still far from ideal
UPDATE 2
I tested all 3 options available at the moment:
Using JSON. It doesn't rely on any extensions, it is short to write, easy to understand, and the speed is OK.
Using hstore. This alternative is the fastest (>10 times faster than the JSON approach on a 100K dataset) but requires an extension. hstore in general is a very handy extension to have, though.
Using a regex to parse the TEXT representation of a ROW. This option is really slow.
A somewhat ugly hack is to convert the row to a JSON value, then unnest the values and aggregate it back to an array:
select array(select (json_each_text(to_json(t))).value) as row_value
from some_table t
Which is to some extent the same as your hstore hack.
If the order of the columns is important, json together with WITH ORDINALITY can be used to preserve it:
select array(select val
from json_each_text(to_json(t)) with ordinality as t(k,val,idx)
order by idx)
from the_table t
The easiest (read: hackiest) way I can think of is to convert the row to a string first, then parse that string into an array. Like so:
SELECT string_to_array(table_name::text, ',') FROM table_name
BUT depending on the size and type of the data in the table, this could perform very badly.
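It also breaks on values that contain commas or quotes, since it splits the row's text representation naively; a hypothetical demonstration:
CREATE TABLE demo (a text, b text);
INSERT INTO demo VALUES ('x,y', 'z');
SELECT demo::text FROM demo;                       -- ("x,y",z)
SELECT string_to_array(demo::text, ',') FROM demo; -- 3 mangled elements, with stray quotes and parentheses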

Why is "select table_name from table_name" valid [duplicate]

This syntax is valid for PostgreSQL:
select T from table_name as T
T seems to become a CSV list of values from all columns in table_name. select T from table_name as T works, and, for that matter, so does select table_name from table_name. Where is this syntax documented, and what is the datatype of T?
This syntax is not in SQL Server, and (AFAIK) does not exist in any other SQL variant.
If you create a table, Postgres creates a type with the same name in the background. The table is then essentially a "list of that type".
Postgres also allows to reference a complete row as a single "record" - a value built from multiple columns. Those records can be created dynamically through a row constructor.
Each row in the result of a SELECT statement is implicitly assigned a TYPE - if the row comes from a single table, it's the table's type. Otherwise it's an anonymous type.
When you use the table name in a place where a column would be allowed, it references the full row as a single record. If the table is aliased in the select, the type of that record is still the table's type.
So the statement:
select T
from table_name as T;
returns a result with a single column which is a record (of the table's type) containing each column of the table as a field. The default output format of a record is a comma separated list of the values enclosed in parentheses.
Assuming table_name has three columns c1, c2 and c3 the following would essentially do the same thing:
select row(c1, c2, c3)
from table_name;
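A minimal demonstration with hypothetical data:
CREATE TABLE table_name (c1 int, c2 text, c3 text);
INSERT INTO table_name VALUES (1, 'foo', 'bar');
SELECT t FROM table_name AS t;
--       t
-- -------------
--  (1,foo,bar)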
Note that a record reference can also be used in comparisons, e.g. finding rows that are different between two tables can be done in the following manner
select *
from table_one t1
full outer join table_two t2 on t1.id = t2.id
where t1 <> t2;