How to find table creation time? - postgresql

How can I find the table creation time in PostgreSQL?
For example, if I create a file I can find its creation time; in the same way, I want to know when a table was created.

I had a look through the pg_* catalogs, and I couldn't find any creation times in there. It's possible to locate the table files, but on Linux you can't get a file's creation time. So I think the answer is that you can only find this information on Windows, using the following steps (a combined query sketch follows the list):
1. Get the database id with select datname, datdba from pg_database;
2. Get the table filenode id with select relname, relfilenode from pg_class;
3. Find the table file and look up its creation time; I think the location should be something like <PostgreSQL folder>/main/base/<database id>/<table filenode id> (not sure what it is on Windows).
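A minimal sketch combining these catalog lookups into a single query (the relative path shown follows the standard Linux layout; 'my_table' is a placeholder):
-- Build the relative file path from the two catalog entries above:
-- <data_directory>/base/<database oid>/<table filenode>
SELECT d.oid         AS database_id,
       c.relfilenode AS filenode_id,
       'base/' || d.oid || '/' || c.relfilenode AS relative_path
FROM pg_database d, pg_class c
WHERE d.datname = current_database()
  AND c.relname = 'my_table';  -- placeholder table name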

You can't - the information isn't recorded anywhere. Looking at the table files won't necessarily give you the right information - there are table operations that will create a new file for you, in which case the date would reset.

I don't think it's possible from within PostgreSQL, but you'll probably find it in the underlying table file's creation time.

Suggested here:
SELECT oid FROM pg_database WHERE datname = 'mydb';
Then (assuming the oid is 12345):
ls -l $PGDATA/base/12345/PG_VERSION
This workaround assumes that PG_VERSION is the file least likely to be modified after creation.
NB: If PGDATA is not defined, check Where does PostgreSQL store the database?
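If you have superuser access, a hedged alternative is to read the file's timestamps from inside PostgreSQL with pg_stat_file() instead of shelling out (using the oid 12345 from above; the creation column is only populated on Windows):
-- The path is relative to the data directory; execution of pg_stat_file()
-- is restricted to superusers by default.
SELECT modification, creation
FROM pg_stat_file('base/12345/PG_VERSION');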

Check the data directory location:
SHOW data_directory;
Check the file path of the Postgres relation:
SELECT pg_relation_filepath('table_name');
This gives you the file path of your relation; then check the creation time of the file at <data-dir>/<relation-file-path>.
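Both steps can be combined into one statement (a sketch; restricted to superusers by default, and remember the caveat above that some operations rewrite the file and reset its date):
-- pg_relation_filepath() returns a path relative to the data directory,
-- which is exactly the form pg_stat_file() expects.
SELECT (pg_stat_file(pg_relation_filepath('table_name'))).modification;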

I tried a different approach to get the table creation date, which could help with keeping track of dynamically created tables. Suppose you have a table named inventory in your database where you save the creation date of the tables.
CREATE TABLE inventory (id SERIAL, tablename CHARACTER VARYING (128), created_at DATE);
Then, when a table you want to keep track of is created, it is added to your inventory.
CREATE TABLE temp_table_1 (id SERIAL); -- A dynamic table is created
INSERT INTO inventory VALUES (1, 'temp_table_1', '2020-10-07 10:00:00'); -- We add it into the inventory
Then you can take advantage of pg_tables to run something like this to get existing tables' creation dates:
SELECT pg_tables.tablename, inventory.created_at
FROM pg_tables
INNER JOIN inventory
ON pg_tables.tablename = inventory.tablename
/*
tablename | created_at
--------------+------------
temp_table_1 | 2020-10-07
*/
For my use case this is OK, because I work with a set of dynamic tables that I need to keep track of.
P.S.: Replace inventory with your own table name.
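A small refinement (my assumption, not part of the original approach): give created_at a default so the database stamps the date itself and inserts can't forget it:
CREATE TABLE inventory (
    id         SERIAL,
    tablename  CHARACTER VARYING (128),
    created_at DATE DEFAULT CURRENT_DATE  -- stamped automatically at insert time
);
INSERT INTO inventory (tablename) VALUES ('temp_table_1');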

I tried to follow a different way of obtaining this.
Starting from this discussion, my solution was:
DROP TABLE IF EXISTS t_create_history CASCADE;
CREATE TABLE t_create_history (
    gid serial primary key,
    object_type varchar(20),
    schema_name varchar(50),
    object_identity varchar(200),
    creation_date timestamp without time zone
);
--delete event trigger before dropping function
DROP EVENT TRIGGER IF EXISTS t_create_history_trigger;
--create history function
DROP FUNCTION IF EXISTS public.t_create_history_func();
CREATE OR REPLACE FUNCTION t_create_history_func()
RETURNS event_trigger
LANGUAGE plpgsql
AS $$
DECLARE
    obj record;
BEGIN
    FOR obj IN
        SELECT * FROM pg_event_trigger_ddl_commands()
        WHERE command_tag IN ('SELECT INTO', 'CREATE TABLE', 'CREATE TABLE AS')
    LOOP
        INSERT INTO public.t_create_history (object_type, schema_name, object_identity, creation_date)
        SELECT obj.object_type, obj.schema_name, obj.object_identity, now();
    END LOOP;
END;
$$;
--ALTER EVENT TRIGGER t_create_history_trigger DISABLE;
--DROP EVENT TRIGGER t_create_history_trigger;
CREATE EVENT TRIGGER t_create_history_trigger ON ddl_command_end
WHEN TAG IN ('SELECT INTO', 'CREATE TABLE', 'CREATE TABLE AS')
EXECUTE PROCEDURE t_create_history_func();
In this way you obtain a table that records all table creations.
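With the event trigger in place, a lookup like this sketch returns the recorded creation time for a given table (object_identity is stored schema-qualified):
SELECT object_identity, creation_date
FROM t_create_history
WHERE object_identity = 'public.my_table'  -- placeholder name
ORDER BY creation_date DESC;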

--query
select pslo.stasubtype, pc.relname, pslo.statime
from pg_stat_last_operation pslo
join pg_class pc on pc.oid = pslo.objid
where pslo.staactionname = 'CREATE'
order by pslo.statime desc;
will help to accomplish the desired results
(tried it on Greenplum)

You can get this from pg_stat_last_operation. Here is how to do it:
select * from pg_stat_last_operation where objid = 'table_name'::regclass order by statime;
This table stores the following operations:
select distinct staactionname from pg_stat_last_operation;
staactionname
---------------
ALTER
ANALYZE
CREATE
PARTITION
PRIVILEGE
VACUUM
(6 rows)
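So, to get just the creation time of one table from this catalog (a sketch combining the two queries above):
SELECT statime
FROM pg_stat_last_operation
WHERE objid = 'table_name'::regclass
  AND staactionname = 'CREATE';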


Postgres 11 throwing cache lookup failed for type errors

Here is the test case and results:
drop table if exists test1;
drop table if exists test2;
drop trigger if exists test1_tr on test1;
drop function if exists tf_test1;
create table test1 (name varchar(8) not null);
create table test2 (name varchar(8) not null);
\echo create trigger function tf_test1
CREATE OR REPLACE FUNCTION tf_test1() RETURNS trigger AS $BODY$
BEGIN
    IF TG_OP = 'INSERT' THEN
        INSERT INTO test2(name) VALUES (NEW.name);
    END IF;
    RETURN NEW;
END
$BODY$
LANGUAGE 'plpgsql';
\echo create trigger test1_tr
CREATE TRIGGER test1_tr
AFTER INSERT OR UPDATE OR DELETE ON test1 FOR EACH ROW
EXECUTE PROCEDURE tf_test1();
\echo Insert
insert into test1 (name) values ('NAME_001');
insert into test1 (name) values ('NAME_002');
insert into test1 (name) values ('NAME_003');
insert into test1 (name) values ('NAME_004');
\echo Select test1
select * from test1;
\echo Select test2
select * from test2;
---------------------------- output -------------------------------
DROP TABLE
DROP TABLE
DROP TABLE
DROP TABLE
DROP TRIGGER
DROP FUNCTION
CREATE TABLE
CREATE TABLE
create trigger function tf_test1
CREATE FUNCTION
create trigger test1_tr
CREATE TRIGGER
Insert
INSERT 0 1
psql:test3.sql:28: ERROR: cache lookup failed for type 113
CONTEXT: SQL statement "INSERT INTO test2(name) VALUES (NEW.name)"
PL/pgSQL function tf_test1() line 4 at SQL statement
INSERT 0 1
INSERT 0 1
Select test1
name
----------
NAME_001
NAME_003
NAME_004
(3 rows)
Select test2
name
----------
NAME_001
NAME_003
NAME_004
(3 rows)
We have several servers running various flavors of RHEL 7.x. All Postgres instances are v11. This is happening on about 1/2 of them. There doesn't seem to be any consistent RH version that is the culprit.
I have queried both pg_class and pg_type for the OID referenced as the missing type. In all cases, the result set is empty.
Any help is appreciated.
I would also appreciate an insight into what's happening with Postgres. I'm a long-time Oracle DBA, but fairly new to Postgres. It seems like an internal Postgres error and not really a code problem, but a web search doesn't turn up much.
Follow-up on this to provide some closure. We had increased our buffer and effective cache sizes in the postgresql.conf file and also turned auditing (the pgaudit extension) on full blast. On the machines where the PG memory parameters exceeded the physical memory and auditing was turned on, we would get cache lookup errors. A clue was that the errors would hop around in the job flow, were not consistent from machine to machine, and were effectively unsquashable bugs (dropping the offending trigger would just cause the cache error somewhere else in the job stream).
For now, we have increased the physical memory of the servers and turned auditing off. The cache lookup errors are gone. Further tuning is needed so we can eventually turn auditing back on.
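For anyone hitting the same symptom, the settings described above can be checked from any session and compared with the machine's physical RAM (assuming the standard parameter names shared_buffers and effective_cache_size for the values the follow-up mentions):
-- Compare these against the server's physical memory.
SHOW shared_buffers;
SHOW effective_cache_size;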

IF... ELSE... two mutually exclusive inserts INTO #temptable

I need to insert either set A or set B of records into a #temptable, depending on a certain condition.
My pseudo-code:
IF OBJECT_ID('tempdb..#t1') IS NOT NULL DROP TABLE #t1;
IF {some-condition}
    SELECT {columns}
    INTO #t1
    FROM {some-big-table}
    WHERE {some-filter}
ELSE
    SELECT {columns}
    INTO #t1
    FROM {some-other-big-table}
    WHERE {some-other-filter}
The two SELECTs above are mutually exclusive (guaranteed by the ELSE operator). However, the SQL compiler tries to outsmart me and throws the following message:
There is already an object named '#t1' in the database.
My idea of "fixing" this is to create #t1 upfront and then execute a simple INSERT INTO (instead of SELECT ... INTO). But I like minimalism and am wondering whether this can be achieved in an easier way, i.e. without an explicit CREATE TABLE #t1 upfront.
Btw, why is it NOT giving me an error on the conditional DROP TABLE in the first line? Just wondering.
You can't have two temp tables with the same name in a single SQL batch. One of the MSDN articles says: "If more than one temporary table is created inside a single stored procedure or batch, they must have different names". You can implement this logic with two different temp tables, or with a table variable/temp table declared outside the IF...ELSE block.
Using dynamic SQL, we can handle this situation. As a developer, it's not good practice; it's best to use a table variable or temp table.
IF 1=2
BEGIN
    EXEC ('SELECT 1 ID INTO #TEMP1
           SELECT * FROM #TEMP1')
END
ELSE
    EXEC ('SELECT 2 ID INTO #TEMP1
           SELECT * FROM #TEMP1')
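For comparison, a sketch of the approach the first answer recommends: create the temp table once, before the IF...ELSE, and use plain INSERTs (the column list and source tables here are invented for illustration):
IF OBJECT_ID('tempdb..#t1') IS NOT NULL DROP TABLE #t1;
CREATE TABLE #t1 (id INT, name VARCHAR(50));   -- hypothetical columns

IF 1 = 1  -- {some-condition}
    INSERT INTO #t1 (id, name)
    SELECT id, name FROM big_table_a;          -- hypothetical source
ELSE
    INSERT INTO #t1 (id, name)
    SELECT id, name FROM big_table_b;          -- hypothetical source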

Create a temp table (if not exists) for use into a custom procedure

I'm trying to get the hang of using temp tables:
CREATE OR REPLACE FUNCTION test1(user_id BIGINT) RETURNS BIGINT AS
$BODY$
BEGIN
    create temp table temp_table1
    ON COMMIT DELETE ROWS
    as SELECT table1.column1, table1.column2
       FROM table1
       INNER JOIN -- ............
    if exists (select * from temp_table1) then
        -- work with the result
        return 777;
    else
        return 0;
    end if;
END;
$BODY$
LANGUAGE plpgsql;
I want the rows of temp_table1 to be deleted immediately or as soon as possible; that's why I added ON COMMIT DELETE ROWS. Obviously, I got the error:
ERROR: relation "temp_table1" already exists
I tried to add IF NOT EXISTS, but I couldn't; I simply couldn't find a working example of the kind I'm looking for.
Your suggestions?
DROP the table each time before creating the TEMP table, as below:
BEGIN
DROP TABLE IF EXISTS temp_table1;
create temp table temp_table1
-- the rest of your code goes here
The problem with temp tables is that dropping and recreating them bloats pg_attribute heavily, and one sunny morning you will find db performance dead, with pg_attribute at 200+ GB while your db would be like 10 GB.
We are very heavy on temp tables (>500 rps and async I/O via nodejs) and thus experienced very heavy bloating of pg_attribute because of that. All you are left with is very aggressive vacuuming, which halts performance.
All the answers given here do not solve this, because they all bloat pg_attribute heavily.
So the solution is elegantly this:
create temp table if not exists my_temp_table (description) on commit delete rows;
So you go on playing with temp tables and save your pg_attribute.
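Applied to the function from the question, that looks roughly like this (a sketch; the column names and types are assumptions, since the question elides the real query):
CREATE OR REPLACE FUNCTION test1(user_id BIGINT) RETURNS BIGINT AS
$BODY$
BEGIN
    -- Created once per session; IF NOT EXISTS avoids the "already exists"
    -- error on later calls, and ON COMMIT DELETE ROWS empties it at commit.
    CREATE TEMP TABLE IF NOT EXISTS temp_table1 (
        column1 integer,  -- assumed type
        column2 text      -- assumed type
    ) ON COMMIT DELETE ROWS;

    INSERT INTO temp_table1 (column1, column2)
    SELECT table1.column1, table1.column2
    FROM table1;          -- the question's JOIN is elided here

    IF EXISTS (SELECT 1 FROM temp_table1) THEN
        RETURN 777;       -- work with the result
    ELSE
        RETURN 0;
    END IF;
END;
$BODY$
LANGUAGE plpgsql;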
You want to DROP the temp table after commit (not DELETE ROWS), so:
begin
create temp table temp_table1
on commit drop
...
Documentation

Create a temporary table from a selection or insert if table already exist

How to create a temporary table, if it does not already exist, and add the selected rows to it?
CREATE TABLE AS
is the simplest and fastest way:
CREATE TEMP TABLE tbl AS
SELECT * FROM tbl WHERE ... ;
Do not use SELECT INTO. See:
Combine two tables into a new one so that select rows from the other one are ignored
Not sure whether table already exists
CREATE TABLE IF NOT EXISTS ... was introduced in Postgres 9.1.
For older versions, use the function provided in this related answer:
PostgreSQL create table if not exists
Then:
INSERT INTO tbl (col1, col2, ...)
SELECT col1, col2, ...
Chances are, something is going wrong in your code if the temp table already exists. Make sure you don't duplicate data in the table or something. Or consider the following paragraph ...
Unique names
Temporary tables are only visible within your current session (not to be confused with transaction!). So the table name cannot conflict with other sessions. If you need unique names within your session, you could use dynamic SQL and utilize a SEQUENCE:
Create once:
CREATE SEQUENCE tablename_helper_seq;
You could use a DO statement (or a plpgsql function):
DO
$do$
BEGIN
    EXECUTE
        'CREATE TEMP TABLE tbl' || nextval('tablename_helper_seq'::regclass) || ' AS
         SELECT * FROM tbl WHERE ... ';
    RAISE NOTICE 'Temporary table created: "tbl%"', lastval();
END
$do$;
lastval() and currval(regclass) are instrumental in returning the dynamically created table name.
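For example, right after the DO block the generated name can be reconstructed like this:
-- currval() returns the value nextval() produced earlier in this session.
SELECT 'tbl' || currval('tablename_helper_seq') AS temp_table_name;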

postgresql: INSERT INTO ... (SELECT * ...) - II [duplicate]

I'm not sure if it's standard SQL:
INSERT INTO tblA
(SELECT id, time
FROM tblB
WHERE time > 1000)
What I'm looking for is: what if tblA and tblB are on different DB servers?
Does PostgreSQL give any utility, or have any functionality, that will help to use an INSERT query with the PGresult struct?
I mean, SELECT id, time FROM tblB ... will return a PGresult* when using PQexec. Is it possible to use this struct in another PQexec to execute an INSERT command?
EDIT:
If that's not possible, then I would go for extracting the values from the PGresult* and creating a multi-row INSERT statement like:
INSERT INTO films (code, title, did, date_prod, kind) VALUES
('B6717', 'Tampopo', 110, '1985-02-10', 'Comedy'),
('HG120', 'The Dinner Game', 140, DEFAULT, 'Comedy');
Is it possible to create a prepared statement out of this!! :(
As Henrik wrote, you can use dblink to connect to a remote database and fetch the result. For example:
psql dbtest
CREATE TABLE tblB (id serial, time integer);
INSERT INTO tblB (time) VALUES (5000), (2000);
psql postgres
CREATE TABLE tblA (id serial, time integer);
INSERT INTO tblA
SELECT id, time
FROM dblink('dbname=dbtest', 'SELECT id, time FROM tblB')
AS t(id integer, time integer)
WHERE time > 1000;
TABLE tblA;
id | time
----+------
1 | 5000
2 | 2000
(2 rows)
PostgreSQL has the record pseudo-type (only for a function's argument or result type), which allows you to query data from another (unknown) table.
Edit:
You can make it as prepared statement if you want and it works as well:
PREPARE migrate_data (integer) AS
INSERT INTO tblA
SELECT id, time
FROM dblink('dbname=dbtest', 'SELECT id, time FROM tblB')
AS t(id integer, time integer)
WHERE time > $1;
EXECUTE migrate_data(1000);
-- DEALLOCATE migrate_data;
Edit (yeah, another):
I just saw your revised question (closed as duplicate, or just very similar to this).
If my understanding is correct (postgres has tbla and dbtest has tblb and you want remote insert with local select, not remote select with local insert as above):
psql dbtest
SELECT dblink_exec
(
'dbname=postgres',
'INSERT INTO tbla
SELECT id, time
FROM dblink
(
''dbname=dbtest'',
''SELECT id, time FROM tblb''
)
AS t(id integer, time integer)
WHERE time > 1000;'
);
I don't like that nested dblink, but AFAIK I can't reference tblB in the dblink_exec body. Use LIMIT to select the top 20 rows, but I think you need to sort them using an ORDER BY clause first; see the sketch below.
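A hedged variant of the earlier migration example that pulls only the newest 20 rows:
INSERT INTO tblA
SELECT id, time
FROM dblink('dbname=dbtest',
            'SELECT id, time FROM tblB ORDER BY time DESC LIMIT 20')
AS t(id integer, time integer);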
If you want to insert into specific columns:
INSERT INTO table (time)
(SELECT time FROM
dblink('dbname=dbtest', 'SELECT time FROM tblB') AS t(time integer)
WHERE time > 1000
);
This notation (first seen here) looks useful too:
insert into postagem (
    resumopostagem,
    textopostagem,
    dtliberacaopostagem,
    idmediaimgpostagem,
    idcatolico,
    idminisermao,
    idtipopostagem
)
select
    resumominisermao,
    textominisermao,
    diaminisermao,
    idmediaimgminisermao,
    idcatolico,
    idminisermao,
    1
from minisermao
You can use dblink to create a view that is resolved in another database. This database may be on another server.
insert into TABLENAMEA (A,B,C,D)
select A::integer,B,C,D from TABLENAMEB
If you are looking for PERFORMANCE, put the WHERE condition inside the dblink query.
Otherwise it fetches all rows from the foreign table and applies the WHERE condition afterwards.
INSERT INTO tblA (id,time)
SELECT id, time FROM dblink('dbname=dbname port=5432 host=10.10.90.190 user=postgresuser password=pass123',
'select id, time from tblB where time>'''||1000||'''')
AS t1(id integer, time integer)
I am going to SELECT Database_One (10.0.0.10) data from Database_Two (10.0.0.20).
Connect to 10.0.0.20 and create the dblink extension:
CREATE EXTENSION dblink;
Test the connection for Database_One:
SELECT dblink_connect('host=10.0.0.10 user=postgres password=dummy dbname=DB_ONE');
Create foreign data wrapper and server for global authentication:
CREATE FOREIGN DATA WRAPPER postgres VALIDATOR postgresql_fdw_validator;
You can use this server object for cross-database queries:
CREATE SERVER dbonepostgres FOREIGN DATA WRAPPER postgres OPTIONS (hostaddr '10.0.0.10', dbname 'DB_ONE');
Mapping of user and server:
CREATE USER MAPPING FOR postgres SERVER dbonepostgres OPTIONS (user 'postgres', password 'dummy');
Test dblink:
SELECT dblink_connect('dbonepostgres');
Import data from 10.0.0.10 into 10.0.0.20:
INSERT INTO tableA
SELECT
    column1,
    column2,
    ...
FROM dblink('dbonepostgres', 'SELECT column1, column2, ... from public.tableA')
AS data(column1 DATATYPE, column2 DATATYPE, ...)
;
Here's an alternate solution, without using dblink.
Suppose B represents the source database and A represents the target database. Then:
1. Copy the table from the source DB to the target DB:
pg_dump -t <source_table> <source_db> | psql <target_db>
2. Open the psql prompt, connect to target_db, and use a simple INSERT:
psql
# \c <target_db>;
# INSERT INTO <target_table>(id, x, y) SELECT id, x, y FROM <source_table>;
3. At the end, delete the copy of source_table that you created in target_db:
# DROP TABLE <source_table>;