PostgreSQL: combine data from different schema tables if that schema exists

Need to create a view based on these conditions:
There are several schemas and tables in the db.
We will create a union over tables from certain schemas.
If a schema doesn't exist, we should skip that schema in our union.
It is given that if a schema exists, the associated table definitely exists, so there is no need to check for that.
The query should not give an error if any of the schemas has not been created.
At the time of running the query, any schema could be missing; this is not known until the query is run.
So far, creating the view using unions is simple enough, but I can't figure out the best way to include the check for schema existence. I'm sorry if this is a trivial or duplicate question; any advice or reference would be helpful.
Thanks,
CJ

In PostgreSQL you can check whether a schema exists with:
SELECT schema_name FROM information_schema.schemata WHERE schema_name = 'name';
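One way to use that check is to build the view dynamically in a PL/pgSQL DO block, adding a branch to the union only for schemas that exist. A sketch, assuming hypothetical schemas s1, s2, s3 that each contain a table t with identical columns (note the view has to be re-created whenever a schema appears or disappears):

```sql
DO $$
DECLARE
    parts text[] := '{}';
    s text;
BEGIN
    -- Collect a SELECT for every candidate schema that actually exists
    FOREACH s IN ARRAY ARRAY['s1', 's2', 's3'] LOOP
        IF EXISTS (SELECT 1 FROM information_schema.schemata
                   WHERE schema_name = s) THEN
            parts := parts || format('SELECT * FROM %I.t', s);
        END IF;
    END LOOP;

    -- Only create the view if at least one schema was found
    IF array_length(parts, 1) > 0 THEN
        EXECUTE 'CREATE OR REPLACE VIEW combined_view AS '
                || array_to_string(parts, ' UNION ALL ');
    END IF;
END
$$;
```

format() with %I quotes each schema name as an identifier, so this stays safe even if a schema name needs quoting.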

Related

create (or copy) table schema using postgres_fdw or dblink

I have many tables in different databases and want to merge them into one database.
It seems like I have to create foreign tables in the target database (where I want to merge them all) matching the schemas of all the source tables.
I am sure there is a way to automate this (by the way, I am going to use the psql command), but I do not know where to start.
What I have found so far is that I can use
select * from information_schema.columns
where table_schema = 'public' and table_name = 'mytable'
I have added a more detailed explanation:
I want to copy tables from another database.
The tables have the same column names and data types.
Using postgres_fdw, I needed to set up the field names and data types for each table (the table names are also the same).
Then I want to union the tables that have the same name into one single table.
For that, I am going to add a prefix to each table:
for instance, mytable in db1, mytable in db2, mytable in db3 become
db1_mytable, db2_mytable, db3_mytable in my local database.
Thanks to Albe's comment, I managed it, and now I need to figure out how to do the 4th step using the psql command.
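Instead of typing out every column by hand, IMPORT FOREIGN SCHEMA (available since PostgreSQL 9.5) can read the column definitions from the remote side and create the foreign tables for you. A sketch, assuming hypothetical foreign servers db1_server and db2_server have already been set up with CREATE SERVER and CREATE USER MAPPING; one local schema per source database keeps the same-named tables apart:

```sql
-- One local schema per remote database
CREATE SCHEMA db1;
CREATE SCHEMA db2;

-- Import only the table we need; columns come from the remote catalog
IMPORT FOREIGN SCHEMA public LIMIT TO (mytable)
    FROM SERVER db1_server INTO db1;
IMPORT FOREIGN SCHEMA public LIMIT TO (mytable)
    FROM SERVER db2_server INTO db2;

-- Union the imported tables into one combined relation
CREATE VIEW all_mytable AS
SELECT * FROM db1.mytable
UNION ALL
SELECT * FROM db2.mytable;
```

Qualifying the tables by schema (db1.mytable, db2.mytable) gives the same effect as the db1_mytable prefix without renaming anything.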

What is the difference between syscat.tabauth and sysibm.systabauth

What is the difference between these two queries:
select * from syscat.tabauth
select * from sysibm.systabauth where tcreator='SYSCAT' and ttname='TABAUTH'
Are they the same?
EDIT:
1. select grantee from sysibm.systabauth where tcreator='SYSCAT' and ttname='TABAUTH' and selectauth='Y'
2. select grantee from syscat.tabauth where selectauth='Y'
Would there be any difference in the results of these two queries?
If I change selectauth to 'N' using sysibm.systabauth, does that change show up in query 2?
The main difference is that one is a table and the other is a read-only view.
Other differences exist, and they can be version specific.
Different permissions can also apply.
When your target databases are always on Linux/Unix/Windows, use the SYSCAT schema, as IBM tries to keep it unchanged even when the underlying objects change between versions (except where new columns get added). IBM describes the SYSCAT schema here.
The SYSCAT schema contains many views and is relevant for the Linux/Unix/Windows versions of Db2 servers.
The SYSIBM schema contains many tables and is present on both z/OS and LUW versions of Db2 servers.
So SYSCAT.TABAUTH is just a view over SYSIBM.SYSTABAUTH, and you can see the definition of the view in the catalog with a query like this:
select substr(text,1,4096) from syscat.views where viewschema='SYSCAT' and viewname='TABAUTH'
You use GRANT and REVOKE statements to alter the contents of the SYSIBM.SYSTABAUTH table directly; other statements like CREATE/DROP/ALTER TABLE can change its contents indirectly.

If a Postgres DB has unique IDs across its tables, how do you find a row using its ID without knowing its table?

Following the blog of Rob Conery, I have a set of unique IDs across the tables of my Postgres DB.
Now, using these unique IDs, is there a way to query a row in the DB without knowing which table it is in? Or can those tables be indexed such that if the row is not available in the current table, I just increase the index and query the next table?
In short: if you did not prepare for that, then no. You can prepare for it by generating your own UUIDs. Please look here. For instance, PG has UUIDs that preserve order. Also, UUID v5 has something like namespaces, so you can build a hierarchy. However, that is done by hashing the namespace, and I don't know of a tool to reverse that inside PG.
If you know all possible tables in advance you could prepare a query that simply UNIONs a search with a tagged type over all tables. In case of two tables named comments and news you could do something like:
PREPARE type_of_id(uuid) AS
SELECT id, 'comments' AS type
FROM comments
WHERE id = $1
UNION
SELECT id, 'news' AS type
FROM news
WHERE id = $1;
EXECUTE type_of_id('8ecf6bb1-02d1-4c04-8875-f1da62b7f720');
Automatically generating this could probably be done by querying pg_catalog.pg_tables and generating the relevant query on the fly.
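Rather than generating the query text, the same search can be done at runtime with a PL/pgSQL function that loops over the catalog. A sketch, assuming the convention that every searchable table has a uuid column named id (the function name is made up for illustration):

```sql
CREATE OR REPLACE FUNCTION find_table_of_id(p_id uuid)
RETURNS text LANGUAGE plpgsql AS $$
DECLARE
    r record;
    found boolean;
BEGIN
    -- Loop over every user table that has a uuid column named "id"
    FOR r IN
        SELECT c.table_schema, c.table_name
        FROM information_schema.columns c
        WHERE c.column_name = 'id'
          AND c.data_type = 'uuid'
          AND c.table_schema NOT IN ('pg_catalog', 'information_schema')
    LOOP
        EXECUTE format('SELECT EXISTS (SELECT 1 FROM %I.%I WHERE id = $1)',
                       r.table_schema, r.table_name)
        INTO found USING p_id;
        IF found THEN
            RETURN r.table_schema || '.' || r.table_name;
        END IF;
    END LOOP;
    RETURN NULL;  -- the id was not found in any table
END
$$;
```

This probes each table in turn, so it is fine for debugging or occasional lookups, but a prepared UNION like the one above will be faster if the set of tables is fixed.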

Alter the column type over several tables

In a PostgreSQL db I'm working on, half of the tables have one particular column, always named the same, that is of type varchar(5). The size became a bit too restricting and I want to change it to varchar(10).
The number of tables in my particular case is actually very manageable to do it by hand. But I was wondering how one could script this with a query for larger dbs. It generally should be possible in just a few steps.
Identify all the tables in the schema, then filter them by whether the column is present.
Create ALTER TABLE statements for each table found.
I have some idea of how to write a query that identifies all tables in the schema, but I wouldn't know how to filter them. And if I didn't filter them, I assume the generated ALTER TABLE statements would break.
Would be great if someone could share their knowledge on this.
Thanks to Abelisto for providing some guidance. Eventually, this is how I did it.
First, I created a query that in turn creates the ALTER TABLE statements. MyDB and MyColumn need to reflect actual values; the filter on column_name makes sure only tables that actually have the column are picked up.
SELECT
'ALTER TABLE '||columns.table_name||' ALTER COLUMN MyColumn TYPE varchar(10);'
FROM
information_schema.columns
WHERE
columns.table_catalog = 'MyDB' AND
columns.table_schema = 'public' AND
columns.column_name = 'MyColumn';
Then it was just a matter of executing the output as a new query. All done.
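The copy-and-run-the-output step can also be skipped entirely by executing each statement inside a DO block. A sketch along the same lines, with an extra filter on the current type so only the varchar(5) columns get touched (MyColumn is the placeholder column name from above):

```sql
DO $$
DECLARE
    r record;
BEGIN
    -- Find every table in public whose MyColumn is currently varchar(5)
    FOR r IN
        SELECT table_schema, table_name
        FROM information_schema.columns
        WHERE table_schema = 'public'
          AND column_name = 'MyColumn'
          AND data_type = 'character varying'
          AND character_maximum_length = 5
    LOOP
        EXECUTE format('ALTER TABLE %I.%I ALTER COLUMN %I TYPE varchar(10)',
                       r.table_schema, r.table_name, 'MyColumn');
    END LOOP;
END
$$;
```

format() with %I handles any table names that need quoting, which plain string concatenation would get wrong.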

Can two temporary tables with the same name exist in separate queries

I was wondering if it is possible to have two temp tables with the same name in two separate queries without them conflicting when called upon later in the queries.
Query 1: Create Temp Table Tmp1 as ...
Query 2: Create Temp Table Tmp1 as ...
Query 1: Do something with Tmp1 ...
I am wondering if PostgreSQL distinguishes between those two tables, maybe by addressing them as Query1.Tmp1 and Query2.Tmp1.
Each connection to the database gets its own special temporary schema name, and temp tables are created in that schema. So there will not be any conflict between concurrent queries from separate connections, even if the tables have the same names. See https://dba.stackexchange.com/a/5237 for more info.
The PostgreSQL docs for creating tables states:
Temporary tables exist in a special schema, so a schema name cannot be given when creating a temporary table.
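You can actually look up which private schema your session's temp tables landed in; a short sketch:

```sql
CREATE TEMP TABLE Tmp1 (id int);

-- Shows this session's private temp schema, e.g. pg_temp_3
SELECT nspname
FROM pg_namespace
WHERE oid = pg_my_temp_schema();

-- Unqualified names resolve to your own temp schema first,
-- so each session sees only its own Tmp1
SELECT * FROM Tmp1;
```

Two concurrent sessions running this will report different pg_temp_N schemas, which is why their Tmp1 tables never collide.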