Which view has cache_value of sequences? - postgresql

I'm building a query which retrieves a list of sequences on pgsql-9.1.6. Please see my SQL session below...
mydb=# create sequence seq1 cache 10;
CREATE SEQUENCE
mydb=# create sequence seq2 cache 20;
CREATE SEQUENCE
mydb=# \ds
List of relations
Schema | Name | Type | Owner
--------+------+----------+-------
public | seq1 | sequence | pgdba
public | seq2 | sequence | pgdba
(2 rows)
mydb=# \x
Expanded display is on.
mydb=# select * from seq1;
-[ RECORD 1 ]-+--------------------
sequence_name | seq1
last_value | 1
start_value | 1
increment_by | 1
max_value | 9223372036854775807
min_value | 1
cache_value | 10
log_cnt | 0
is_cycled | f
is_called | f
mydb=# select * from seq2;
-[ RECORD 1 ]-+--------------------
sequence_name | seq2
last_value | 1
start_value | 1
increment_by | 1
max_value | 9223372036854775807
min_value | 1
cache_value | 20
log_cnt | 0
is_cycled | f
is_called | f
mydb=# select * from information_schema.sequences;
-[ RECORD 1 ]-----------+--------------------
sequence_catalog | mydb
sequence_schema | public
sequence_name | seq1
data_type | bigint
numeric_precision | 64
numeric_precision_radix | 2
numeric_scale | 0
start_value | 1
minimum_value | 1
maximum_value | 9223372036854775807
increment | 1
cycle_option | NO
-[ RECORD 2 ]-----------+--------------------
sequence_catalog | mydb
sequence_schema | public
sequence_name | seq2
data_type | bigint
numeric_precision | 64
numeric_precision_radix | 2
numeric_scale | 0
start_value | 1
minimum_value | 1
maximum_value | 9223372036854775807
increment | 1
cycle_option | NO
information_schema.sequences has no cache_value. Which view can I join to get cache_value with my sequence list?

Best I'm aware, you're actually viewing where this data is stored right there... The relation name is that of the sequence itself. There is no view for it in the information schema because it's an implementation detail related to Postgres.
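If you want cache_value alongside a list of all sequences on 9.1, one workaround is to read it from each sequence relation dynamically. A minimal sketch, assuming the current role can read every sequence (the function name is made up for illustration):
CREATE OR REPLACE FUNCTION seq_cache_values()
RETURNS TABLE (seq_name text, cache_value bigint)
LANGUAGE plpgsql AS
$$
DECLARE
    s regclass;
BEGIN
    FOR s IN
        SELECT c.oid::regclass
        FROM pg_catalog.pg_class c
        WHERE c.relkind = 'S'          -- 'S' = sequence
    LOOP
        -- each sequence is its own one-row relation holding cache_value
        RETURN QUERY EXECUTE
            format('SELECT %L::text, cache_value FROM %s', s::text, s);
    END LOOP;
END
$$;
-- SELECT * FROM seq_cache_values() ORDER BY seq_name;
(On later releases this becomes unnecessary: PostgreSQL 10 added the pg_sequences view, which exposes the setting as cache_size.)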
Side note: Postgres uses pg_catalog to build the views within the information schema. The latter really is a cross-platform convenience; the real details are in the catalog. Don't miss psql's --echo-hidden option to find out more about the internals:
http://www.postgresql.org/docs/current/static/app-psql.html
# Output using `psql -E`
test=# create sequence test;
CREATE SEQUENCE
test=# \d+ test
********* QUERY **********
SELECT c.oid,
n.nspname,
c.relname
FROM pg_catalog.pg_class c
LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE c.relname ~ '^(test)$'
AND pg_catalog.pg_table_is_visible(c.oid)
ORDER BY 2, 3;
**************************
********* QUERY **********
SELECT c.relchecks, c.relkind, c.relhasindex, c.relhasrules, c.relhastriggers, c.relhasoids, pg_catalog.array_to_string(c.reloptions || array(select 'toast.' || x from pg_catalog.unnest(tc.reloptions) x), ', ')
, c.reltablespace, CASE WHEN c.reloftype = 0 THEN '' ELSE c.reloftype::pg_catalog.regtype::pg_catalog.text END, c.relpersistence
FROM pg_catalog.pg_class c
LEFT JOIN pg_catalog.pg_class tc ON (c.reltoastrelid = tc.oid)
WHERE c.oid = '25356';
**************************
********* QUERY **********
SELECT * FROM public.test;
**************************
********* QUERY **********
SELECT a.attname,
pg_catalog.format_type(a.atttypid, a.atttypmod),
(SELECT substring(pg_catalog.pg_get_expr(d.adbin, d.adrelid) for 128)
FROM pg_catalog.pg_attrdef d
WHERE d.adrelid = a.attrelid AND d.adnum = a.attnum AND a.atthasdef),
a.attnotnull, a.attnum,
(SELECT c.collname FROM pg_catalog.pg_collation c, pg_catalog.pg_type t
WHERE c.oid = a.attcollation AND t.oid = a.atttypid AND a.attcollation <> t.typcollation) AS attcollation,
NULL AS indexdef,
NULL AS attfdwoptions,
a.attstorage,
CASE WHEN a.attstattarget=-1 THEN NULL ELSE a.attstattarget END AS attstattarget
FROM pg_catalog.pg_attribute a
WHERE a.attrelid = '25356' AND a.attnum > 0 AND NOT a.attisdropped
ORDER BY a.attnum;
**************************
********* QUERY **********
SELECT pg_catalog.quote_ident(nspname) || '.' ||
pg_catalog.quote_ident(relname) || '.' ||
pg_catalog.quote_ident(attname)
FROM pg_catalog.pg_class c
INNER JOIN pg_catalog.pg_depend d ON c.oid=d.refobjid
INNER JOIN pg_catalog.pg_namespace n ON n.oid=c.relnamespace
INNER JOIN pg_catalog.pg_attribute a ON (
a.attrelid=c.oid AND
a.attnum=d.refobjsubid)
WHERE d.classid='pg_catalog.pg_class'::pg_catalog.regclass
AND d.refclassid='pg_catalog.pg_class'::pg_catalog.regclass
AND d.objid=25356
AND d.deptype='a'
**************************
Sequence "public.test"
Column | Type | Value | Storage
---------------+---------+---------------------+---------
sequence_name | name | test | plain
last_value | bigint | 1 | plain
start_value | bigint | 1 | plain
increment_by | bigint | 1 | plain
max_value | bigint | 9223372036854775807 | plain
min_value | bigint | 1 | plain
cache_value | bigint | 1 | plain
log_cnt | bigint | 0 | plain
is_cycled | boolean | f | plain
is_called | boolean | f | plain
test=#

Related

Two oids reference the same table name. How to drop one of them?

I have a table with a primary key and in information_schema.table_constraints I see 2 entries with constraint_type = Primary Key.
mydatabase=> \d foo_table
Table "public.foo_table"
Column | Type | Collation | Nullable | Default
-------------+-----------------------------+-----------+----------+----------
timestamp | timestamp without time zone | | not null |
granularity | interval | | not null |
msisdn | text | | not null | 0
segment | text | | not null |
rating | numeric | | not null | 0
user | text | | not null | ''::text
Indexes:
"foo_table_tmp_pkey1" PRIMARY KEY, btree ("timestamp", granularity, "user", msisdn, segment)
"foo_table_tmp_idx2" btree (segment)
In the system views I see an unexpected constraint_name which seems to refer to foo_table
mydatabase=> select * from information_schema.table_constraints where constraint_name = 'foo_table_pk';
-[ RECORD 1 ]------+------------------------
constraint_catalog | mydatabase
constraint_schema | public
constraint_name | foo_table_pk
table_catalog | mydatabase
table_schema | public
table_name | foo_table
constraint_type | PRIMARY KEY
is_deferrable | NO
initially_deferred | NO
enforced | YES
mydatabase=> select * from information_schema.table_constraints where table_name = 'foo_table' and constraint_type = 'PRIMARY KEY';
-[ RECORD 1 ]------+-------------------------------
constraint_catalog | mydatabase
constraint_schema | public
constraint_name | foo_table_tmp_pkey1
table_catalog | mydatabase
table_schema | public
table_name | foo_table
constraint_type | PRIMARY KEY
is_deferrable | NO
initially_deferred | NO
enforced | YES
and if I try to drop the unexpected constraint_name I get an error
analytics=> alter table public.foo_table drop constraint foo_table_pk;
ERROR: constraint "foo_table_pk" of relation "foo_table" does not exist
How may I get rid of this pk?
Initially the primary key of the table was foo_table_pk; then I created another table and renamed it to foo_table, so the pk changed to foo_table_tmp_pkey1, but it seems that the old pk name still exists.
Finally from pg_class
mydatabase=> select * from pg_catalog.pg_class where relname= 'foo_table' or oid = 58450659;
-[ RECORD 1 ]-------+---------------------
oid | 58450659
relname | foo_table
relnamespace | 2200
reltype | 58450661
reloftype | 0
relowner | 16384
relam | 2
relfilenode | 58450659
reltablespace | 0
relpages | 0
reltuples | 0
relallvisible | 0
reltoastrelid | 58450662
relhasindex | t
relisshared | f
relpersistence | p
relkind | r
relnatts | 6
relchecks | 0
relhasrules | f
relhastriggers | f
relhassubclass | f
relrowsecurity | f
relforcerowsecurity | f
relispopulated | t
relreplident | d
relispartition | f
relrewrite | 0
relfrozenxid | 154813261
relminmxid | 6
relacl |
reloptions |
relpartbound |
-[ RECORD 2 ]-------+---------------------
oid | 58826168
relname | foo_table
relnamespace | 2200
reltype | 58826170
reloftype | 0
relowner | 16384
relam | 2
relfilenode | 58826168
reltablespace | 0
relpages | 0
reltuples | 0
relallvisible | 0
reltoastrelid | 58826171
relhasindex | t
relisshared | f
relpersistence | p
relkind | r
relnatts | 6
relchecks | 0
relhasrules | f
relhastriggers | f
relhassubclass | f
relrowsecurity | f
relforcerowsecurity | f
relispopulated | t
relreplident | d
relispartition | f
relrewrite | 0
relfrozenxid | 159974278
relminmxid | 6
relacl |
reloptions |
relpartbound |
and I notice that oid= 58450659 is the conrelid in pg_constraint when I search for foo_table_pk
mydatabase=> select * from pg_catalog.pg_constraint where conname = 'foo_table_pk';
-[ RECORD 1 ]-+------------------------
oid | 58450669
conname | foo_table_pk
connamespace | 2200
contype | p
condeferrable | f
condeferred | f
convalidated | t
conrelid | 58450659
contypid | 0
conindid | 58450668
conparentid | 0
confrelid | 0
confupdtype |
confdeltype |
confmatchtype |
conislocal | t
coninhcount | 0
connoinherit | t
conkey | {1,2,6,3,4}
confkey |
conpfeqop |
conppeqop |
conffeqop |
conexclop |
conbin |
So, it seems to me that I have to somehow drop the table with oid=58450659
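Before trying to drop anything, it may help to let the catalogs say which relation that stray constraint actually hangs on; a small diagnostic sketch based on the names shown above:
-- Which relation does the constraint named foo_table_pk belong to?
SELECT con.oid,
       con.conname,
       con.conrelid::regclass AS owning_relation,   -- schema-qualified if not on the search_path
       n.nspname              AS owning_schema
FROM pg_catalog.pg_constraint con
JOIN pg_catalog.pg_class     c ON c.oid = con.conrelid
JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE con.conname = 'foo_table_pk';
ALTER TABLE ... DROP CONSTRAINT only works against the relation shown as owning_relation, so if that resolves to something other than the foo_table you expect, the DROP has to target that relation (or the leftover relation itself has to be dropped).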
Update
I am doing the swap between tables like:
BEGIN;
DROP TABLE foo_table;
ALTER TABLE foo_table_tmp rename to foo_table;
SELECT rename_primary_key('foo_table', 'foo_table_pk');
COMMIT;
and rename_primary_key is a function that I have created:
CREATE OR REPLACE FUNCTION rename_primary_key(
    _tbl TEXT, _newpk TEXT, OUT success bool)
  LANGUAGE plpgsql AS
$$
DECLARE
    _pk TEXT;
BEGIN
    SELECT tc.constraint_name INTO _pk
    FROM information_schema.table_constraints tc
    WHERE tc.table_name = _tbl
      AND tc.constraint_type = 'PRIMARY KEY';

    IF NOT FOUND THEN
        success := FALSE;
    ELSIF _pk = _newpk THEN
        success := FALSE;
    ELSE
        EXECUTE 'ALTER TABLE ' || _tbl::regclass
             || ' RENAME CONSTRAINT ' || quote_ident(_pk)
             || ' TO ' || quote_ident(_newpk);
        success := TRUE;
    END IF;
END
$$;
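One detail worth double-checking in this function: the lookup filters on table_name only, so if information_schema.table_constraints returns more than one PRIMARY KEY row for that name (for example, a same-named table in another schema), SELECT ... INTO silently keeps one of them without raising an error. A sketch of a schema-qualified variant of just that lookup (assuming the table lives in public):
SELECT tc.constraint_name
FROM information_schema.table_constraints tc
WHERE tc.table_schema    = 'public'
  AND tc.table_name      = _tbl
  AND tc.constraint_type = 'PRIMARY KEY';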

How to verify that column data types are the same as foreign key data types in Postgres?

Is there a way to check whether all foreign key columns have the same data type as the columns they reference?
This code is valid and works until a user has an ID bigger than what int4 can handle.
CREATE SCHEMA test;
CREATE TABLE test.users (
id bigserial NOT NULL,
name varchar NULL,
CONSTRAINT user_pk PRIMARY KEY (id)
);
CREATE TABLE test.othertable (
blabla varchar NULL,
userid int4 NULL
);
ALTER TABLE test.othertable ADD CONSTRAINT newtable_fk FOREIGN KEY (userid) REFERENCES test.users(id);
An (incomplete) version, using the bare pg_catalog tables instead of the information_schema wrappers:
SELECT version();
DROP SCHEMA test CASCADE;
CREATE SCHEMA test;
SET search_path = test;
CREATE TABLE users (
id bigserial NOT NULL CONSTRAINT user_pk PRIMARY KEY
, name varchar NULL
);
CREATE TABLE othertable (
blabla varchar NULL
, userid int4 NULL CONSTRAINT bad_fk REFERENCES users(id)
, goodid bigint NULL CONSTRAINT good_fk REFERENCES users(id)
);
PREPARE insert_two(bigint, text, text) AS
WITH one AS (
INSERT INTO users (id, name)
VALUES ( $1, $2)
RETURNING id
)
INSERT INTO othertable (userid, goodid, blabla)
SELECT id, id, $3
FROM one
;
EXECUTE insert_two(1, 'one', 'bla1' );
EXECUTE insert_two(2, 'two', 'bla2' );
EXECUTE insert_two(10000000000::bigint, 'toobig', 'bigbla' );
SELECT * FROM users;
SELECT * FROM othertable;
SET search_path = pg_catalog;
-- EXPLAIN ANALYZE
WITH cat AS ( -- Class Attribute Type
SELECT cl.oid AS coid, cl.relname
, at.attnum AS cnum, at.attname
, ty.oid AS toid, ty.typname
FROM pg_class cl
JOIN pg_attribute at ON at.attrelid = cl.oid AND at.attnum > 0 -- suppress system columns
JOIN pg_type ty ON ty.oid = at.atttypid
)
SELECT ns.nspname
, co.*
, source.relname AS source_table, source.attname AS source_column, source.typname AS source_type
, target.relname AS target_table, target.attname AS target_column, target.typname AS target_type
FROM pg_constraint co
JOIN pg_namespace ns ON co.connamespace = ns.oid
-- NOTE: this only covers single-column FKs
JOIN cat source ON source.coid = co.conrelid AND co.conkey[1] = source.cnum
JOIN cat target ON target.coid = co.confrelid AND co.confkey[1] = target.cnum
WHERE 1=1
AND co.contype = 'f'
AND ns.nspname = 'test'
-- commented out the line below, to show the differences between "good" and "bad" FK constraints.
-- AND source.toid <> target.toid
;
Results (look at the operators in conpfeqop/conppeqop/conffeqop: the mismatched FK uses cross-type equality operators, which is a feature, not a bug!)
version
----------------------------------------------------------------------------------------------------------
PostgreSQL 11.6 on armv7l-unknown-linux-gnueabihf, compiled by gcc (Raspbian 8.3.0-6+rpi1) 8.3.0, 32-bit
(1 row)
NOTICE: drop cascades to 2 other objects
DETAIL: drop cascades to table test.users
drop cascades to table test.othertable
DROP SCHEMA
CREATE SCHEMA
SET
CREATE TABLE
CREATE TABLE
PREPARE
INSERT 0 1
INSERT 0 1
ERROR: integer out of range
id | name
----+------
1 | one
2 | two
(2 rows)
blabla | userid | goodid
--------+--------+--------
bla1 | 1 | 1
bla2 | 2 | 2
(2 rows)
SET
nspname | conname | connamespace | contype | condeferrable | condeferred | convalidated | conrelid | contypid | conindid | conparentid | confrelid | confupdtype | confdeltype | confmatchtype | conislocal | coninhcount | connoinherit | conkey | confkey | conpfeqop | conppeqop | conffeqop | conexclop | conbin | consrc | source_table | source_column | source_type | target_table | target_column | target_type
---------+---------+--------------+---------+---------------+-------------+--------------+----------+----------+----------+-------------+-----------+-------------+-------------+---------------+------------+-------------+--------------+--------+---------+-----------+-----------+-----------+-----------+--------+--------+--------------+---------------+-------------+--------------+---------------+-------------
test | good_fk | 211305 | f | f | f | t | 211317 | 0 | 211315 | 0 | 211308 | a | a | s | t | 0 | t | {3} | {1} | {410} | {410} | {410} | | | | othertable | goodid | int8 | users | id | int8
test | bad_fk | 211305 | f | f | f | t | 211317 | 0 | 211315 | 0 | 211308 | a | a | s | t | 0 | t | {2} | {1} | {416} | {410} | {96} | | | | othertable | userid | int4 | users | id | int8
(2 rows)
I made this query that checks this:
select
tc.table_schema,
tc.constraint_name,
tc.table_name,
kcu.column_name,
ccu.table_schema AS foreign_table_schema,
ccu.table_name AS foreign_table_name,
ccu.column_name AS foreign_column_name,
sc.data_type AS data_type,
dc.data_type AS foreign_data_type
FROM information_schema.table_constraints AS tc
JOIN information_schema.key_column_usage AS kcu
ON tc.constraint_name = kcu.constraint_name
AND tc.table_schema = kcu.table_schema
JOIN information_schema.columns sc ON sc.table_schema = kcu.table_schema and sc.table_name = kcu.table_name and sc.column_name = kcu.column_name
JOIN information_schema.constraint_column_usage AS ccu
ON ccu.constraint_name = tc.constraint_name
AND ccu.table_schema = tc.table_schema
JOIN information_schema.columns dc ON dc.table_schema = ccu.table_schema and dc.table_name = ccu.table_name and dc.column_name = ccu.column_name
WHERE tc.constraint_type = 'FOREIGN KEY'
and sc.data_type <> dc.data_type;
It is quite slow; any tips for optimisation are welcome.
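For comparison, a catalog-based variant that also handles multi-column foreign keys by unnesting conkey/confkey pairwise; a sketch assuming PostgreSQL 9.4+ (for multi-argument unnest ... WITH ORDINALITY). Note it compares atttypid, so typmod differences such as varchar lengths are ignored:
SELECT co.conname,
       co.conrelid::regclass  AS source_table,
       sa.attname             AS source_column,
       pg_catalog.format_type(sa.atttypid, sa.atttypmod) AS source_type,
       co.confrelid::regclass AS target_table,
       ta.attname             AS target_column,
       pg_catalog.format_type(ta.atttypid, ta.atttypmod) AS target_type
FROM pg_constraint co
CROSS JOIN LATERAL unnest(co.conkey, co.confkey)
                   WITH ORDINALITY AS k(attnum, fattnum, ord)
JOIN pg_attribute sa ON sa.attrelid = co.conrelid  AND sa.attnum = k.attnum
JOIN pg_attribute ta ON ta.attrelid = co.confrelid AND ta.attnum = k.fattnum
WHERE co.contype = 'f'
  AND sa.atttypid <> ta.atttypid;    -- keep only mismatching column pairs
This avoids the joins against information_schema views (which are themselves built over several catalogs), which is usually where most of the time goes.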

Postgresql sequencer not found

I'm creating an item record number generator. The goal is to have a table to house all record number sequencers for a variety of different types. For example, for a "Part" you may want a number like "110-00001-00". The SeqItem table would hold the definition of this number generator (SeqName, prefix, postfix, padding).
InventorySys=# SELECT * FROM information_schema.sequences;
sequence_catalog | sequence_schema | sequence_name | data_type | numeric_precision | numeric_precision_radix | numeric_scale | start_value | minimum_value | maximum_value | increment | cycle_option
------------------+-----------------+---------------+-----------+-------------------+-------------------------+---------------+-------------+---------------+---------------+-----------+--------------
(0 rows)
InventorySys=# \d "SeqItem"
Table "public.SeqItem"
Column | Type | Collation | Nullable | Default
---------+---------+-----------+----------+---------
SeqName | text | | not null |
prefix | text | | |
postfix | text | | |
padding | integer | | not null | 5
Indexes:
"SeqItem_pkey" PRIMARY KEY, btree ("SeqName")
"SeqName" UNIQUE CONSTRAINT, btree ("SeqName")
Triggers:
dropsqeitem AFTER DELETE ON "SeqItem" FOR EACH ROW EXECUTE FUNCTION "RemoveSeq"()
inssqeitem AFTER INSERT ON "SeqItem" FOR EACH ROW EXECUTE FUNCTION "CreateSeq"()
InventorySys=#
When a new record is added to this table, I want to create a new Sequence with the "SeqName". So, I've created the following Trigger/Function:
CREATE OR REPLACE FUNCTION public."CreateSeq"() RETURNS TRIGGER as $CreateSeq$
BEGIN
EXECUTE format('CREATE SEQUENCE %I INCREMENT BY 1 MINVALUE 1 NO MAXVALUE START WITH 1 NO CYCLE', NEW."SeqName");
RETURN NEW;
END
$CreateSeq$ LANGUAGE plpgsql;
CREATE TRIGGER insSqeItem AFTER INSERT ON "SeqItem"
FOR EACH ROW EXECUTE FUNCTION "CreateSeq"();
This works perfectly, and with each new record, I get a new sequencer created. I've also created another function/trigger to delete the sequencer if the row is deleted.
CREATE OR REPLACE FUNCTION public."RemoveSeq"() RETURNS TRIGGER as $RemoveSeq$
BEGIN
EXECUTE format('DROP SEQUENCE IF EXISTS %I', OLD."SeqName");
RETURN NEW;
END
$RemoveSeq$ LANGUAGE plpgsql;
CREATE TRIGGER dropSqeItem AFTER DELETE ON "SeqItem"
FOR EACH ROW EXECUTE FUNCTION "RemoveSeq"();
So far so good! Let's add a new record and see that the sequencer was added:
InventorySys=# INSERT into "SeqItem" ("SeqName", prefix, padding) Values ('testItem1', '115-',6);
INSERT 0 1
InventorySys=# SELECT * FROM "SeqItem";
SeqName | prefix | postfix | padding
-----------+--------+---------+---------
testItem1 | 115- | | 6
(1 row)
InventorySys=# SELECT * FROM information_schema.sequences;
sequence_catalog | sequence_schema | sequence_name | data_type | numeric_precision | numeric_precision_radix | numeric_scale | start_value | minimum_value | maximum_value | increment | cycle_option
------------------+-----------------+---------------+-----------+-------------------+-------------------------+---------------+-------------+---------------+---------------------+-----------+--------------
InventorySys | public | testItem1 | bigint | 64 | 2 | 0 | 1 | 1 | 9223372036854775807 | 1 | NO
(1 row)
InventorySys=#
However, when I try to use the newly created sequencer from the trigger, I get the following error saying that the sequencer is not found.
InventorySys=# select CONCAT("prefix", LPAD((select nextval("SeqItem"."SeqName"))::text, "padding", '0') , "postfix") from "SeqItem" where "SeqName" = 'testItem1' ;
ERROR: relation "testitem1" does not exist
InventorySys=#
If I create a new Sequencer without the Trigger, it works fine:
InventorySys=# CREATE SEQUENCE test1;
CREATE SEQUENCE
InventorySys=# SELECT NEXTVAL ('test1');
nextval
---------
1
(1 row)
InventorySys=#
And if I add that sequencer to my query, it works fine:
InventorySys=# select CONCAT("prefix", LPAD((select nextval('test1'))::text, "padding", '0') , "postfix") from "SeqItem" where "SeqName" = 'testItem1' ;
concat
------------
115-000002
(1 row)
InventorySys=#
Both sequencers look fine to me, but I cannot get the one created by the trigger to work...
InventorySys=# SELECT * FROM information_schema.sequences;
sequence_catalog | sequence_schema | sequence_name | data_type | numeric_precision | numeric_precision_radix | numeric_scale | start_value | minimum_value | maximum_value | increment | cycle_option
------------------+-----------------+---------------+-----------+-------------------+-------------------------+---------------+-------------+---------------+---------------------+-----------+--------------
InventorySys | public | testItem1 | bigint | 64 | 2 | 0 | 1 | 1 | 9223372036854775807 | 1 | NO
InventorySys | public | test1 | bigint | 64 | 2 | 0 | 1 | 1 | 9223372036854775807 | 1 | NO
(2 rows)
InventorySys=#
Any help would be greatly appreciated!
Ok, I think I figured out my problem. It appears that the sequencer name needs to be all lower case? Or, I should say that if I use all lower case it works just fine...
InventorySys=# INSERT into "SeqItem" ("SeqName", prefix, padding) Values ('testitem3', '110-',4);
INSERT 0 1
InventorySys=# select CONCAT("prefix", LPAD((select nextval("SeqItem"."SeqName"))::text, "padding", '0') , "postfix") from "SeqItem" where "SeqName" = 'testitem3' ;
concat
----------
110-0001
(1 row)
InventorySys=# select CONCAT("prefix", LPAD((select nextval("SeqItem"."SeqName"))::text, "padding", '0') , "postfix") from "SeqItem" where "SeqName" = 'testitem3' ;
concat
----------
110-0002
(1 row)
InventorySys=#
I'm not sure why it will not accept upper and lower case characters...
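What seems to be going on: format('%I', ...) in the trigger double-quotes a mixed-case name, so the sequence is created as "testItem1"; but nextval("SeqItem"."SeqName") passes the raw text testItem1, which gets folded to lower case when it is cast to regclass, hence relation "testitem1" does not exist. A sketch of the original query, quoting the stored name again at call time with quote_ident():
InventorySys=# select CONCAT("prefix", LPAD(nextval(quote_ident("SeqName"))::text, "padding", '0'), "postfix") from "SeqItem" where "SeqName" = 'testItem1';
Alternatively, keeping the SeqName values all lower case (as in the working example above) sidesteps the quoting question entirely.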

Postgresql - How to select multiple tables by specific columns and append them

I would like to select a number of tables, pull the geometry (geom) and Name columns from each of them, and append the results below each other. I have gotten as far as selecting the tables and their columns, as shown below:
SELECT TABLE_NAME, COLUMN_NAME
FROM INFORMATION_SCHEMA.columns
WHERE (TABLE_NAME LIKE '%HESA' OR
       TABLE_NAME LIKE '%HEWH') AND
      (COLUMN_NAME = 'geom' OR
       COLUMN_NAME = 'Name');
How do you then take the tables:
id | geom | Name | id | geom | Name |
____________________ ____________________
1 | geom1 | Name1 | 1 | geom4 | Name4 |
2 | geom2 | Name2 | 2 | geom5 | Name5 |
3 | geom3 | Name3 | 3 | geom6 | Name6 |
And append the second table below the first, like this:
id | geom | Name |
____________________
1 | geom1 | Name1 |
2 | geom2 | Name2 |
3 | geom3 | Name3 |
1 | geom4 | Name4 |
2 | geom5 | Name5 |
3 | geom6 | Name6 |
Do I use UNION ALL or something else?
https://www.db-fiddle.com/f/75fgQMEWf9LvPj4xYMGWvA/0
based on your sample data:
do
'
declare
    r record;
begin
    for r in (
        SELECT a.TABLE_NAME
        FROM INFORMATION_SCHEMA.columns a
        JOIN INFORMATION_SCHEMA.columns b
          on a.TABLE_NAME = b.TABLE_NAME
         and a.COLUMN_NAME = ''geom''
         and b.COLUMN_NAME = ''name''
        WHERE (a.TABLE_NAME LIKE ''oranges%'' OR a.TABLE_NAME LIKE ''%_db'')
    ) loop
        execute format(''insert into rslt select geom, name from %I'', r.table_name);
    end loop;
end;
';
Union All will do the job just fine:
SELECT
*
FROM (
(SELECT * FROM table_one)
UNION ALL
(SELECT * FROM table_two)
) AS tmp
ORDER BY name ASC;
I have added the outer SELECT to show you how you can order the whole result.
DB Fiddle can be found here
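If the list of matching tables is long, the UNION ALL can also be generated from information_schema.columns instead of being typed by hand; a sketch, assuming all matching tables are on the search_path and expose geom and "Name":
SELECT string_agg(
         format('SELECT %L AS source_table, geom, "Name" FROM %I', table_name, table_name),
         ' UNION ALL ')
FROM information_schema.columns
WHERE column_name = 'geom'
  AND (table_name LIKE '%HESA' OR table_name LIKE '%HEWH');
Run the statement this produces (or EXECUTE it, as in the DO block above) to get all rows appended in one result set.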

Size of temp tables created in a particular session

I created a temp table using the query below
Drop table if exists tmp_a;
Create temp table tmp_a
(
id int
);
Insert into tmp_a select generate_series(1,10000);
When I query pg_stat_activity, it shows "IDLE" in the current_query column for the above session.
I can get the size of all temp tables from the pg_class table using this query.
But I want the list of temp tables created in a particular session and the size of those temp tables, i.e. if I create two temp tables from two different sessions, the result should look like this:
procpid | temp table name | size | username
12345 | tmp_a | 20 | gpadmin
12346 | tmp_b | 30 | gpadmin
Please share the query if anyone has it
It's actually simpler than you think --
The temporary schema namespace is named after the session id --
So...
SELECT
a.procpid as ProcessID,
a.sess_id as SessionID,
n.nspname as SchemaName,
c.relname as RelationName,
CASE c.relkind
WHEN 'r' THEN 'table'
WHEN 'v' THEN 'view'
WHEN 'i' THEN 'index'
WHEN 'S' THEN 'sequence'
WHEN 's' THEN 'special'
END as RelationType,
pg_catalog.pg_get_userbyid(c.relowner) as RelationOwner,
pg_size_pretty(pg_relation_size(n.nspname ||'.'|| c.relname)) as RelationSize
FROM
pg_catalog.pg_class c
LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
LEFT JOIN pg_catalog.pg_stat_activity a ON 'pg_temp_' || a.sess_id::varchar = n.nspname
WHERE c.relkind IN ('r','s')
AND (n.nspname !~ '^pg_toast' and nspname like 'pg_temp%')
ORDER BY pg_relation_size(n.nspname ||'.'|| c.relname) DESC;
And you get --
processid | sessionid | schemaname | relationname | relationtype | relationowner | relationsize
-----------+-----------+------------+--------------+--------------+---------------+--------------
5006 | 9 | pg_temp_9 | tmp_a | table | gpadmin | 384 kB
5006 | 9 | pg_temp_9 | tmp_b | table | gpadmin | 384 kB
(2 rows)
Let's put that process to sleep -- and start up another....
gpadmin=#
[1]+ Stopped psql
[gpadmin#gpdb-sandbox ~]$ psql
psql (8.2.15)
Type "help" for help.
gpadmin=# SELECT nspname
FROM pg_namespace
WHERE oid = pg_my_temp_schema();
nspname
---------
(0 rows)
gpadmin=# Create temp table tmp_a( id int );
NOTICE: Table doesn't have 'DISTRIBUTED BY' clause -- Using column named 'id' as the Greenplum Database data distribution key for this table.
HINT: The 'DISTRIBUTED BY' clause determines the distribution of data. Make sure column(s) chosen are the optimal data distribution key to minimize skew.
CREATE TABLE
gpadmin=# SELECT nspname
FROM pg_namespace
WHERE oid = pg_my_temp_schema();
nspname
---------
pg_temp_10
(1 row)
... run the same query ...
processid | sessionid | schemaname | relationname | relationtype | relationowner | relationsize
-----------+-----------+------------+--------------+--------------+---------------+--------------
5006 | 9 | pg_temp_9 | tmp_a | table | gpadmin | 384 kB
5006 | 9 | pg_temp_9 | tmp_b | table | gpadmin | 384 kB
27365 | 10 | pg_temp_10 | tmp_a | table | gpadmin | 384 kB
(3 rows)
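On stock PostgreSQL (where pg_stat_activity exposes pid rather than procpid/sess_id, and the N in pg_temp_N is a backend slot number rather than a session id), the current session's temp tables can be listed more directly with pg_my_temp_schema(); a rough equivalent, as a sketch:
SELECT n.nspname AS temp_schema,
       c.relname AS temp_table,
       pg_size_pretty(pg_relation_size(c.oid)) AS size
FROM pg_catalog.pg_class c
JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND n.oid = pg_my_temp_schema()    -- only this session's temporary schema
ORDER BY pg_relation_size(c.oid) DESC;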