PostgreSQL: How to select rows from multiple tables inside different schemas?

I am trying to select all rows from multiple tables inside various schemas.
These are the tables inside the different schemas:
schema_1 --> ABC_table_1, XYZ_table_2
schema_2 --> ABC_table_1, JLK_table_2
...
schema_N --> ABC_table_1, LMN_table_2
I am trying to select all rows from the *_table_2 tables across all schemas:
This query is giving me all the tables:
SELECT table_schema || '.' || table_name
FROM information_schema.tables
WHERE table_type = 'BASE TABLE'
  AND table_schema NOT IN ('pg_catalog', 'information_schema');
What I need to do is something like:
select * from schema_1.XYZ_table_2
union all
select * from schema_2.JLK_table_2
...
union all
select * from schema_N.LMN_table_2;

Check whether this option can work for you:
select row_to_json(row) from (select * from table1 ) row
UNION ALL
select row_to_json(row) from (select * from table2 ) row
;
A plain UNION ALL requires every SELECT to produce the same column list, which your *_table_2 tables may not. Wrapping each row in row_to_json collapses each table to a single json column, so the UNION ALL works regardless of the differences. If you don't want to work with JSON/hstore data manipulation, you can instead build a dynamic SQL SELECT that lists all possible columns across the schemas; then a plain UNION ALL still works. JSON makes this task easier, though, and is well supported for any further processing.
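For illustration, a minimal sketch of how the two ideas combine (the temp table name and the '%\_table\_2' pattern are assumptions based on your naming scheme): generate the row_to_json UNION ALL dynamically from information_schema.tables and execute it in a DO block:
DO $$
DECLARE
    union_sql text;
BEGIN
    -- one "SELECT row_to_json(t) FROM schema.table t" per matching table
    SELECT string_agg(
               format('SELECT row_to_json(t) AS r FROM %I.%I t',
                      table_schema, table_name),
               ' UNION ALL ')
      INTO union_sql
      FROM information_schema.tables
     WHERE table_type = 'BASE TABLE'
       AND table_schema NOT IN ('pg_catalog', 'information_schema')
       AND table_name LIKE '%\_table\_2';

    -- materialize the combined rows so they can be queried afterwards
    EXECUTE 'CREATE TEMP TABLE all_table_2_rows AS ' || union_sql;
END $$;

SELECT * FROM all_table_2_rows;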

Related

PostgreSQL information schema query for tables without a specific data type

I'm trying to write a PostgreSQL query to get all the tables (from a specified schema) which don't have any columns of a specific data type. For example, show all tables without any integer-type columns. So far I can only manage to get the table names, the data types of the columns they have, and their counts, but I feel like that's the wrong direction for what I want. Any help appreciated, thanks.
SELECT Table_Name, Data_Type, COUNT(Data_Type)
FROM Information_schema.Columns
WHERE Table_Schema = 'project'
GROUP BY Table_Name, Data_Type
You'll want to start with the tables table and then use an EXISTS subquery:
SELECT t.table_name
FROM information_schema.tables t
WHERE t.table_schema = 'project'
AND NOT EXISTS (
    SELECT *
    FROM information_schema.columns c
    WHERE c.table_schema = t.table_schema
    AND c.table_name = t.table_name
    AND c.data_type = 'integer'
);
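If you'd rather stay closer to your original aggregation query, a sketch using an aggregate FILTER clause (available since PostgreSQL 9.4) should give the same result:
SELECT table_name
FROM information_schema.columns
WHERE table_schema = 'project'
GROUP BY table_name
HAVING count(*) FILTER (WHERE data_type = 'integer') = 0;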

Execute result of a PostgreSQL query again to get the final result set?

I'm looking for a way to get the list of all json attributes across all my PostgreSQL tables dynamically.
I have Query 1 below, which generates a list of SQL statements, and I then want to run those statements to get the final output all in one go (like the dynamic SQL concept in SQL Server).
Query 1 looks like this:
create temporary table test (ordr int, field varchar(1000));
-- Step 1 Create temp table to insert all table/col/json attrbute info
insert into test(ordr,field)
select 0 ordr,'create temporary table temp_table
( table_schema varchar(200)
,table_name varchar(200)
,column_name varchar(200)
,json_attribute varchar(200)
,data_type varchar(50)
);'
union
-- Non json type columns
select 1 ordr, 'insert into temp_table(table_name, column_name,data_type,json_attribute)'
union
-- Json columns with data like json object
select
3 ordr,
concat('select distinct ''', t.table_name, ''' tbl, ''', c.column_name, ''' col, ''' , c.data_type,''' data_type, '
,'jsonb_object_keys(', c.column_name, ') json_attribute', ' from ', t.table_name,
' where jsonb_typeof(' , c.column_name, ') = ''object'' union') AS field
from information_schema.tables t
join information_schema.columns c on c.table_name = t.table_name
where t.table_schema not in ('information_schema', 'pg_catalog')
--and table_type = 'BASE TABLE'
and c.data_type ='jsonb';
--final sql statements to build temp table
--copy all the column "txt" to a separate window and execute it, it will create a temp table "temp_table" which will have all tables/cols/json attributes
select ordr
,(case when t.ordr = (select max(t2.ordr) from test t2) then replace(field,'union','') else field end) txt
from test t
union
select 9999, ';select * from temp_table;'
order by 1 ;
Query 1 output: a list of SQL statements.
I'm looking for a way to run Query 1 and then its output together, so I get the final result set all in one go.
Any lead or guidance will be really appreciated.
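A minimal sketch of one way to do this (assuming Query 1's fragments are already in the temp table test, as above): reassemble them in order inside a PL/pgSQL DO block, EXECUTE the result, and select from temp_table separately, since DO cannot return rows:
DO $$
DECLARE
    big_sql text;
BEGIN
    -- glue the fragments back together in order, stripping the trailing
    -- UNION from the last SELECT fragment as Query 1's final step does
    SELECT string_agg(
               case when ordr = (select max(ordr) from test)
                    then replace(field, 'union', '')
                    else field end,
               E'\n' ORDER BY ordr)
      INTO big_sql
      FROM test;

    EXECUTE big_sql;
END $$;

-- DO cannot return a result set, so query the temp table afterwards
select * from temp_table;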

PostgreSQL - Combine select

I have a database which stores multiple schemas with tables in them.
I want to get every schema name and at the same time check whether each schema has a table named 'status'.
I got two queries for that:
This query returns all schemas of the database:
select schema_name from information_schema.schemata
With the returned schema names I then check each schema for whether the table 'status' exists:
select exists(select * from information_schema.tables where table_schema = 'the_schema_name' and table_name = 'status')
My question is now: can I combine these two queries into one?
Use a correlated subquery:
select s.schema_name,
exists (select *
from information_schema.tables t
where t.table_schema = s.schema_name
and t.table_name = 'status') as status_exists
from information_schema.schemata s;
If you just want to find the schemas where the table does not exist, you can do that with the following query:
select s.schema_name
from information_schema.schemata s
where not exists (select *
                  from information_schema.tables t
                  where t.table_schema = s.schema_name
                  and t.table_name = 'status');

How to combine multiple tables together in postgresql that have the same columns but in different order?

I have multiple tables that all have the same columns, but in different order. I want to merge them all together. I've created an empty table with the standard columns in the order I would like. I've tried inserting with
insert into master_table select * from table1;
but that doesn't work because of the differing column order - some of the values end up in the wrong columns. What is the best way to create one table out of them all in the order specified in my empty master table?
If you are dealing with many columns and many tables, you can use the information_schema to get the columns. You can loop through all the tables you want to insert from and run this in a PL/pgSQL procedure, replacing table1 with a variable (a full loop sketch follows the statement below):
EXECUTE (
    SELECT 'insert into master_table ('
           || string_agg(quote_ident(column_name), ',' ORDER BY ordinal_position)
           || ') SELECT '
           || string_agg('p.' || quote_ident(column_name), ',' ORDER BY ordinal_position)
           || ' FROM table1 p'
    FROM information_schema.columns
    WHERE table_name = 'master_table');
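For illustration, a sketch of that loop wrapped in a DO block (the source table names in the array are placeholders):
DO $$
DECLARE
    src  text;
    cols text;
BEGIN
    -- master_table's columns, in its declared order
    SELECT string_agg(quote_ident(column_name), ',' ORDER BY ordinal_position)
      INTO cols
      FROM information_schema.columns
     WHERE table_name = 'master_table';

    -- naming the columns explicitly makes each source's physical order irrelevant
    FOREACH src IN ARRAY ARRAY['table1', 'table2', 'table3'] LOOP
        EXECUTE format('INSERT INTO master_table (%s) SELECT %s FROM %I',
                       cols, cols, src);
    END LOOP;
END $$;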
Just indicate the proper order in the SELECT instead of
select *
For example, if you want the third field in the second position:
select field1, field3, field2
Or you can use the INSERT syntax with an explicit column list:
INSERT INTO master_table (field1, field3, field2)
SELECT *
FROM table1;

SELECT ALL column_names in postgresql

I'm using PostgreSQL and I want to create a query that will display all column_names in a specific table.
Schema: codes
Table Name: watch_list
Here are the column_names in my table:
watch_list_id, watch_name, watch_description
I tried what I found on the web:
SELECT *
FROM information_schema.columns
WHERE table_schema = 'codes'
AND table_name = 'watch_list'
Its output is not what I wanted. It should be:
watch_list_id, watch_name, watch_description
How to do this?
If you want all column names in a single row, you need to aggregate those names:
SELECT table_name, string_agg(column_name, ', ' order by ordinal_position) as columns
FROM information_schema.columns
WHERE table_schema = 'codes'
AND table_name = 'watch_list'
GROUP BY table_name;
If you remove the condition on the table name, you get this for all tables in that schema.
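That variant spelled out (only the table-name condition dropped):
SELECT table_name, string_agg(column_name, ', ' order by ordinal_position) as columns
FROM information_schema.columns
WHERE table_schema = 'codes'
GROUP BY table_name;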
And if you just want the table names in a schema rather than their columns:
SELECT table_name FROM information_schema.tables WHERE table_schema = 'public';