Do Postgres temporary tables exist between multiple connections open at once? - postgresql

Say I open a new NpgsqlConnection and create a new temporary table temp1, and then open another new connection. From my understanding, a temporary table is only available to the session that created it, and two open connections shouldn't share the same session. Here the connection strings are identical, and I tried turning pooling off, but that didn't change anything. The pseudo-code is:
var conn1 = new NpgsqlConnection(MyConnectionString);
var conn2 = new NpgsqlConnection(MyConnectionString);
conn1.Open();
conn2.Open();
// Execute here is e.g. Dapper's extension method; with plain Npgsql this would be
// new NpgsqlCommand("CREATE TEMP TABLE temp1(idx int)", conn1).ExecuteNonQuery()
conn1.Execute("CREATE TEMP TABLE temp1(idx int)");
If I execute the query SELECT COUNT(*) FROM pg_tables WHERE tablename = 'temp1' on both connections, it returns 1. Why would conn2 be able to access the temporary table created on conn1? Is there any way to prevent this?

Why would conn2 be able to access the temporary table created on conn1?
It can't.
The other connections can see that there is a table via the system catalog, but they cannot access it.
-- Connection 1
test=# SELECT schemaname FROM pg_tables WHERE tablename = 'temp1';
schemaname
------------
pg_temp_3
(1 row)
-- Connection 2
test=# select * from pg_temp_3.temp1;
ERROR: cannot access temporary tables of other sessions
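If the goal is to make that catalog check count only the current session's temporary table, one option is to filter by visibility, since another session's pg_temp_N schema is never in your search_path. A sketch, assuming PostgreSQL 9.1+ for the relpersistence column:
SELECT count(*)
FROM pg_class c
WHERE c.relname = 'temp1'
  AND c.relpersistence = 't'        -- temporary relations only
  AND pg_table_is_visible(c.oid);   -- false for other sessions' temp schemas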

Related

Retrieve tables name from several databases in Postgresql?

I have two databases, new_site and old_site. I'm connecting to the database server as the postgres user with full permissions, and I connect to the new_site db.
I need to get the table names for old_site, so I tried this:
SELECT table_name
FROM information_schema.tables
WHERE table_catalog = $$old_site$$;
but I get an empty result.
If I run this query:
SELECT table_name
FROM information_schema.tables
WHERE table_catalog = current_database();
I get back the table names and it works.
I expect the output to be the table names of the old_site db. How can I do this?
I was also reading some solutions here, like:
Selecting column name from other database table through function in PostgreSQL
but that's not quite my case.
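In PostgreSQL, information_schema.tables only lists objects of the database you are currently connected to, so filtering on table_catalog = 'old_site' while connected to new_site returns nothing. One workaround is to query the other database over dblink; a sketch, assuming the dblink extension is available, old_site is on the same server, and its tables live in the public schema:
CREATE EXTENSION IF NOT EXISTS dblink;
SELECT table_name
FROM dblink('dbname=old_site',
            'SELECT table_name FROM information_schema.tables
             WHERE table_schema = ''public''')
       AS t(table_name text);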

does dblink use a different session when performed on the same database?

I'm looking to get around the limitation described in how to force a postgres function to not be in a transaction.
I'd like to execute two separate transactions within a single function and am wondering if using dblink calls on loopback will solve this.
A dblink query uses its own connection, created either explicitly by dblink_connect or implicitly when a connection info string is passed to a dblink call such as dblink_exec. In terms of transactions, you can consider it independent of the transaction in which it runs.
Example setup:
create table test(id int, str text);
insert into test values(1, '');
Two transactions inside an outer one:
begin;
select dblink_connect('db1', 'dbname=test');
select dblink_connect('db2', 'dbname=test');
select dblink_exec('db1', 'begin');
select dblink_exec('db1', 'update test set str = ''db1'' where id = 1');
select dblink_exec('db1', 'commit'); -- UPDATE!
select dblink_exec('db2', 'begin');
select dblink_exec('db2', 'update test set str = ''db2'' where id = 1');
select dblink_exec('db2', 'rollback');
select dblink_disconnect('db1');
select dblink_disconnect('db2');
rollback;
Despite the last rollback, the update in transaction db1 was successful:
select *
from test;
id | str
----+-----
1 | db1
(1 row)
Note that while transactions are essentially independent, sessions are not. The commands in two internal sessions are executed one after the other in a linear manner, and no concurrency can be achieved. A possible conflict would create a deadlock.
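To do this from inside a function, as the question asks, the same calls can be wrapped in PL/pgSQL. A minimal sketch, assuming the dblink extension is installed and a loopback connection to the same test database; the function and connection names are illustrative:
create or replace function update_outside_tx() returns void
language plpgsql as $$
begin
  perform dblink_connect('loopback', 'dbname=test');
  -- dblink_exec runs on its own connection; without an explicit begin/commit
  -- it autocommits, independently of the caller's transaction
  perform dblink_exec('loopback', 'update test set str = ''from function'' where id = 1');
  perform dblink_disconnect('loopback');
end;
$$;
Even if the transaction that calls select update_outside_tx() is later rolled back, the update persists.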

Get all database names from multiple servers

We have multiple SQL Servers and most of them are standalone. I need to create a stored procedure / view that would insert all database names from all servers into a table.
Is there a way to do this via a stored procedure or a view? I do not have any PowerShell or .NET experience.
Here's what I have so far. I just can't figure out how to 'jump' from server to server and add all my results into a real table.
CREATE TABLE ##temp
(
DATABASE_NAME VARCHAR(100),
DATABASE_SIZE INT,
REMARKS VARCHAR(500)
)
INSERT into ##temp
EXEC [sp_databases]
--doing this to also get ServerName along with the db name.
--When I insert into a real table, I'll separate it into two columns plus remove "#_!_#"
update ##temp
set DATABASE_NAME = (select @@SERVERNAME) + '#_!_# ' + DATABASE_NAME
where DATABASE_NAME not like '%#_!_#%'
select DATABASE_NAME from ##temp
SQL Server Management Studio allows you to execute a query against multiple servers using the Registered Servers feature. This was added in SQL Server 2008, as this tutorial shows, so you shouldn't worry about compatibility.
Running multi-server queries is easy:
From the View menu, select Registered Servers. This will open a new window similar to the Object Explorer (which displays the objects of a single server).
Add connections for all your servers under the Local Server Groups folder.
Right-click on the Local Server Groups folder and select New Query. The query you enter here will run on all registered servers.
To find all databases run select * from sys.databases or just sp_databases
SSMS will collect the results from all servers and display them in a grid. If you want the results to go into a table on a single server though, you'll have to add that target server as a linked server on all the others and use a four-part name for the target table, e.g. INSERT INTO myManagementServer.MyDb.dbo.ThatTable...
SQL Server has even more powerful features for managing multiple servers. You can administer multiple servers through a Central Management Server and apply settings to multiple servers through policies. That feature was also added in 2008.
In SQL Server 2008 R2 the SQL Server Utility was added, which goes even further and collects diagnostics, metrics, and performance data from multiple servers, storing it in a management warehouse for reporting. Imagine being able to see, e.g., storage and query performance for multiple servers, or free space trends for the last X months.
The drawbacks are that historical data needs space. Collecting it also requires adding some stored procedures to all monitored servers, although this is done automatically.
For this kind of thing it's good to have at least one server that has a linked connection to all the servers you need information for. If you do then you can use this little script I just wrote:
-- (1) Create global temp table used to store results
IF OBJECT_ID('tempdb..##databases') IS NOT NULL DROP TABLE ##databases;
CREATE TABLE ##databases
(
    serverDBID int identity,
    serverName varchar(100),
    databaseName varchar(100),
    databaseSize decimal(20,6)
);
-- (2) Create and populate table variable used to collect server names
DECLARE @servers TABLE(id int identity, serverName varchar(100));
INSERT @servers(serverName)
SELECT name FROM sys.servers;
-- (3) Loop through each server and collect its database names into ##databases
DECLARE @i int = 1, @serverName varchar(100), @db varchar(100), @sql varchar(8000);
WHILE @i <= (SELECT COUNT(*) FROM @servers)
BEGIN
    SELECT @serverName = serverName FROM @servers WHERE id = @i;
    -- four-part name so each linked server reports its own databases
    SET @sql = 'INSERT ##databases(serverName, databaseName) SELECT '''+@serverName+
               ''', name FROM ['+@serverName+'].master.sys.databases';
    EXEC (@sql);
    SET @i += 1;
END;
-- (4) Collect database sizes
SET @i = 1; -- reset/re-use @i
WHILE @i <= (SELECT COUNT(*) FROM ##databases)
BEGIN
    SELECT @serverName = serverName, @db = databaseName
    FROM ##databases
    WHERE serverDBID = @i;
    SET @sql =
        'UPDATE ##databases
         SET databaseSize =
             (SELECT sum(size)/128. FROM ['+@serverName+'].['+@db+'].sys.database_files)
         WHERE serverDBID = '+CAST(@i AS varchar(4))+';'
    BEGIN TRY
        EXEC (@sql);
    END TRY
    BEGIN CATCH
        PRINT 'There was an error getting dbsize info for '+@serverName+' > '+@db;
    END CATCH;
    SET @i += 1;
END;
-- Final Output
SELECT * FROM ##databases;

Where is temporary table created?

Where can I find a created temporary table in the PostgreSQL folders? If I do select * from temp_table; I get a result, but I cannot see the table in the structure of my database in pgAdmin.
Temporary tables get put into a schema called "pg_temp_NNN", where "NNN" indicates which server backend you're connected to. This is implicitly added to your search path in the session that creates them.
Note that you can't access one connection's temp tables via another connection... so depending on how exactly pgAdmin organises its connections, even being able to find the tables in the object explorer might not be useful.
Here is one way to get the name of the pg_temp_nnn schema for your session:
select distinct 'pg_temp_'||sess_id from pg_stat_activity where procpid = pg_backend_pid()
This identifies the session that is running the SQL statement itself, and returns the session id it is running under.
You can then use this to list all your temporary tables:
select *
from information_schema.tables
where table_schema =
( select distinct 'pg_temp_'||sess_id
from pg_stat_activity
where procpid = pg_backend_pid()
)
Or to get the table structure:
select *
from information_schema.columns
where table_schema =
( select distinct 'pg_temp_'||sess_id
from pg_stat_activity
where procpid = pg_backend_pid()
)
and table_name = 'my_temp_table'
order by ordinal_position
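A caveat: these queries rely on a sess_id column in pg_stat_activity, which current stock PostgreSQL does not have, and on procpid, which was renamed to pid in PostgreSQL 9.2. On reasonably recent versions a simpler sketch is to ask for the session's temporary schema directly (pg_my_temp_schema() returns 0 if the session has not created any temporary objects yet):
select nspname
from pg_namespace
where oid = pg_my_temp_schema();
The resulting name can be used in place of the 'pg_temp_'||sess_id subqueries above.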

How to find table creation time?

How can I find the table creation time in PostgreSQL?
Example: if I create a file, I can find the file's creation time; similarly, I want to know the table creation time.
I had a look through the pg_* tables, and I couldn't find any creation times in there. It's possible to locate the table files, but then on Linux you can't get file creation time. So I think the answer is that you can only find this information on Windows, using the following steps:
get the database id with select datname, datdba from pg_database;
get the table filenode id with select relname, relfilenode from pg_class;
find the table file and look up its creation time; I think the location should be something like <PostgreSQL folder>/main/base/<database id>/<table filenode id> (not sure what it is on Windows).
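As a sketch, the first two steps can be combined into a single query that builds the expected file path (my_table is a hypothetical table name; the path assumes the default layout under the base directory and that you are allowed to read the data_directory setting):
SELECT current_setting('data_directory')
       || '/base/'
       || (SELECT oid FROM pg_database WHERE datname = current_database())
       || '/'
       || (SELECT relfilenode FROM pg_class WHERE relname = 'my_table');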
You can't - the information isn't recorded anywhere. Looking at the table files won't necessarily give you the right information - there are table operations that will create a new file for you, in which case the date would reset.
I don't think it's possible from within PostgreSQL, but you'll probably find it in the underlying table file's creation time.
Suggested here:
SELECT oid FROM pg_database WHERE datname = 'mydb';
Then (assuming the oid is 12345):
ls -l $PGDATA/base/12345/PG_VERSION
This workaround assumes that PG_VERSION is the file least likely to be modified after creation.
NB: If PGDATA is not defined, check Where does PostgreSQL store the database?
Check the data directory location:
SHOW data_directory;
Check the relation file path for your table:
SELECT pg_relation_filepath('table_name');
This gives you the file path of the relation.
Then check the creation time of the file <data-dir>/<relation-file-path>.
I tried a different approach to get the table creation date, which could help with keeping track of dynamically created tables. Suppose you have an inventory table in your database where you save the creation date of the tables.
CREATE TABLE inventory (id SERIAL, tablename CHARACTER VARYING (128), created_at DATE);
Then, when a table you want to keep track of is created, it's added to your inventory.
CREATE TABLE temp_table_1 (id SERIAL); -- A dynamic table is created
INSERT INTO inventory VALUES (1, 'temp_table_1', '2020-10-07 10:00:00'); -- We add it into the inventory
Then you can take advantage of pg_tables and run something like this to get the creation dates of existing tables:
SELECT pg_tables.tablename, inventory.created_at
FROM pg_tables
INNER JOIN inventory
ON pg_tables.tablename = inventory.tablename
/*
tablename | created_at
--------------+------------
temp_table_1 | 2020-10-07
*/
For my use-case it is ok because I work with a set of dynamic tables that I need to keep track of.
P.S.: Replace inventory with whatever name you prefer for the tracking table in your database.
I tried a different way to obtain this.
Starting from this discussion, my solution was:
DROP TABLE IF EXISTS t_create_history CASCADE;
CREATE TABLE t_create_history (
gid serial primary key,
object_type varchar(20),
schema_name varchar(50),
object_identity varchar(200),
creation_date timestamp without time zone
);
--delete event trigger before dropping function
DROP EVENT TRIGGER IF EXISTS t_create_history_trigger;
--create history function
DROP FUNCTION IF EXISTS public.t_create_history_func();
CREATE OR REPLACE FUNCTION t_create_history_func()
RETURNS event_trigger
LANGUAGE plpgsql
AS $$
DECLARE
obj record;
BEGIN
FOR obj IN SELECT * FROM pg_event_trigger_ddl_commands () WHERE command_tag in ('SELECT INTO','CREATE TABLE','CREATE TABLE AS')
LOOP
INSERT INTO public.t_create_history (object_type, schema_name, object_identity, creation_date) SELECT obj.object_type, obj.schema_name, obj.object_identity, now();
END LOOP;
END;
$$;
--ALTER EVENT TRIGGER t_create_history_trigger DISABLE;
--DROP EVENT TRIGGER t_create_history_trigger;
CREATE EVENT TRIGGER t_create_history_trigger ON ddl_command_end
WHEN TAG IN ('SELECT INTO','CREATE TABLE','CREATE TABLE AS')
EXECUTE PROCEDURE t_create_history_func();
In this way you obtain a table that records all table creations.
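A quick check of the trigger (demo_table is just an illustrative name):
CREATE TABLE demo_table (id int);
SELECT object_type, schema_name, object_identity, creation_date
FROM t_create_history;
The new row records the creation of public.demo_table with its timestamp.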
--query
select pslo.stasubtype, pc.relname, pslo.statime
from pg_stat_last_operation pslo
join pg_class pc on(pc.relfilenode = pslo.objid)
and pslo.staactionname = 'CREATE'
Order By pslo.statime desc
This will help accomplish the desired results
(tried on Greenplum; note that pg_stat_last_operation is a Greenplum catalog and is not available in stock PostgreSQL).
You can get this from pg_stat_last_operation. Here is how to do it:
select * from pg_stat_last_operation where objid = 'table_name'::regclass order by statime;
This table stores the following operations:
select distinct staactionname from pg_stat_last_operation;
staactionname
---------------
ALTER
ANALYZE
CREATE
PARTITION
PRIVILEGE
VACUUM
(6 rows)