FROM schema.table as SELECT in DB2 - db2

I want to execute a dynamic SELECT in DB2 where the schema and table are defined by a select statement.
I tried
SELECT * FROM SELECT Creator || '.' || Name FROM sysibm.systables where CREATOR = (SELECT "column" FROM schema.table where "column" = "value") and "column" = "value"
But it doesn't work.
I also tried
SELECT * (SELECT 'FROM' || ' ' || Creator || '.' || Name FROM sysibm.systables where CREATOR = (SELECT "column" FROM schema.table where "column" = "value") and "column" = "value")
Any idea? Or is this not possible in DB2?
CYA_D0c

Two issues with your approach:
1. Regarding "Or is this not possible in DB2?": yes, this is not possible in a single step. As @clockwork-muse said, this is not specific to DB2.
Also, realize that you want to select from several different tables, so you actually don't want a single SELECT statement, but several: one SELECT for each table that matches your criteria.
You have to perform this in two steps. The first query just generates the individual SELECT statements for each target table.
db2 -z Generated_SELECTs.sql -x SELECT 'SELECT * FROM "' || RTRIM(TABSCHEMA) || '"."' || RTRIM(TABNAME) || '" ;' FROM SYSCAT.TABLES WHERE <your condition>
This will dump the individual SELECT statements to the .sql file.
Then you can execute them all using db2 -tvf
db2 -tvf Generated_SELECTs.sql
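For illustration, if two hypothetical tables MYSCHEMA.ORDERS and MYSCHEMA.CUSTOMERS matched your condition, Generated_SELECTs.sql would contain:
SELECT * FROM "MYSCHEMA"."ORDERS" ;
SELECT * FROM "MYSCHEMA"."CUSTOMERS" ;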
2. The second issue with your approach is the use of CREATOR instead of SCHEMA.
The table prefix in DB2 is the SCHEMA. Most of the time the SCHEMA is the same as the CREATOR, but not always: IGOR can be the creator of a table inside schema 'Beta'.
The table would then be referenced as Beta.<table_name>, not IGOR.<table_name>.
So you really should use SCHEMA.
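If you want to see where schema and creator differ on your own system, a minimal sketch (assuming a DB2 LUW catalog, where SYSCAT.TABLES exposes both TABSCHEMA and OWNER):
-- List tables whose schema differs from the user who created them
SELECT TABSCHEMA, TABNAME, OWNER
FROM SYSCAT.TABLES
WHERE TABSCHEMA <> OWNER;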

Setting the Postgres 9.6 Current Sequence Values of All Tables to Max ID values of all Parent Tables [duplicate]

Postgres allows auto-increment of the primary key value when SERIAL is used in the DDL. With this, a sequence for the table is also created along with the table. However, when tuples are inserted with explicit ID values, the sequence's current value does not get updated.
For one table at a time we can use SQL like this:
SELECT setval('<table>_id_seq', COALESCE((SELECT MAX(id) FROM your_table), 1), false);
where we need to manually fill in the '<table>_id_seq' sequence name and the 'id' attribute name.
The problem becomes laborious if there are multiple tables. So is there any generalised method to set the current value of all sequences to MAX(<whatever_sequence_attribute_is>)?
Yup! Got it
First run the following SQL
SELECT 'SELECT setval('
       || quote_literal(quote_ident(sequence_namespace.nspname) || '.' || quote_ident(class_sequence.relname))
       || ', COALESCE((SELECT MAX(' || quote_ident(pg_attribute.attname) || ') FROM '
       || quote_ident(table_namespace.nspname) || '.' || quote_ident(class_table.relname)
       || '), 1), false)' || ';'
FROM pg_depend
INNER JOIN pg_class AS class_sequence
        ON class_sequence.oid = pg_depend.objid
       AND class_sequence.relkind = 'S'
INNER JOIN pg_class AS class_table
        ON class_table.oid = pg_depend.refobjid
INNER JOIN pg_attribute
        ON pg_attribute.attrelid = class_table.oid
       AND pg_depend.refobjsubid = pg_attribute.attnum
INNER JOIN pg_namespace AS table_namespace
        ON table_namespace.oid = class_table.relnamespace
INNER JOIN pg_namespace AS sequence_namespace
        ON sequence_namespace.oid = class_sequence.relnamespace
ORDER BY sequence_namespace.nspname, class_sequence.relname;
The result window will show as many SQL statements as there are sequences in the database. Copy all of the generated statements and execute them; the current value of every sequence will then be updated.
Note: the query above only generates the setval statements and does NOT update the sequences itself. You have to copy the statements from the result and execute them (see below for a psql shortcut).
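For a hypothetical table public.users with the sequence public.users_id_seq, each generated statement looks like:
SELECT setval('public.users_id_seq', COALESCE((SELECT MAX(id) FROM public.users), 1), false);
If you run the generator inside psql, you can skip the copy/paste step by terminating it with the \gexec meta-command (available since psql 9.6) instead of the semicolon; \gexec executes every statement the query returns.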

PostgreSQL : How to select rows from multiple tables inside different schemas?

I am trying to select all rows from multiple tables inside various schemas.
These are the tables inside different schemas
schema_1-->ABC_table_1,XYZ_table_2
schema_2-->ABC_table_1,JLK_table_2
.
.
schema_N-->ABC_table_1,LMN_table_2
I am trying to select all rows from table_2 from all schemas:
This query is giving me all the tables:
SELECT
table_schema || '.' || table_name
FROM
information_schema.tables
WHERE
table_type = 'BASE TABLE'
AND
table_schema NOT IN ('pg_catalog', 'information_schema');
What I need to do is something like:
select * from schema_1.XYZ_table_2
Union all
select * from schema_2.JLK_table_2
.
.
select * from schema_N.LMN_table_2
Check whether this option can work for you:
select row_to_json(row) from (select * from table1 ) row
UNION ALL
select row_to_json(row) from (select * from table2 ) row
;
If you don't want to work with JSON/hstore data manipulation, you can also try to generate a dynamic SQL SELECT over all possible columns in the schema; then UNION ALL can still work (a sketch follows below).
JSON looks like the easier way to do this task and is also well supported for any further processing.
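A minimal sketch of that dynamic generation, assuming the tables you want all have names ending in _table_2 and live in user schemas (adjust the LIKE pattern to your naming):
-- Build one row_to_json SELECT per matching table, glued together with UNION ALL
SELECT string_agg(
         format('select row_to_json(t) from %I.%I t', table_schema, table_name),
         ' union all ')
FROM information_schema.tables
WHERE table_type = 'BASE TABLE'
  AND table_name LIKE '%\_table\_2'
  AND table_schema NOT IN ('pg_catalog', 'information_schema');
Running the single statement this returns gives you all rows from every matching table as JSON.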

How to perform a database-wide full text search in PostgreSQL?

I have a PostgreSQL database with about 500 tables. Each table has a unique ID column named id and a user ID column named user_id. I would like to perform a full-text search of all varchar columns across all of these tables for a particular user. I do this today with ElasticSearch but I'd like to simplify my architecture. I understand that I can add full text search columns to all of the tables with things like stored generated columns and then add indices for fast full text search:
ALTER TABLE pgweb
ADD COLUMN textsearchable_index_col tsvector
GENERATED ALWAYS AS (to_tsvector('english', coalesce(title, '') || ' ' || coalesce(body, ''))) STORED;
CREATE INDEX textsearch_idx ON pgweb USING GIN (textsearchable_index_col);
However, I'm not familiar with how to do cross-table searches efficiently. Maybe a view across all textsearchable_index_col columns? I'd like the result to be something like the table name and id of the matching row. For example:
table_name | id
-------------+-------
table1 | 492
table42 | 20
If it matters, I'm using Ruby on Rails as the client with ActiveRecord. I'm using a managed PostgreSQL 13 database at Digital Ocean, so I won't be able to install custom PostgreSQL extensions.
Maybe it is not the answer you are looking for, because I am not sure whether there is a better approach, but I would first try to automate the process.
I will write two dynamic queries: the first one creates the textsearchable_index_col columns (in each table with at least one varchar column) and the second one creates an index on each of those columns (one index per table).
You could add a textsearchable_index_col column for each "character varying" column instead of a single one concatenating all "character varying" columns, but here I will create one textsearchable_index_col column per table, as you propose.
I assume table schema "public" but you can use the real one.
-- Create columns textsearchable_index_col:
SELECT 'ALTER TABLE ' || table_schema || '.' || table_name || E' ADD COLUMN textsearchable_index_col tsvector GENERATED ALWAYS AS (to_tsvector(\'english\', coalesce(' ||
string_agg(column_name, E', \'\') || \' \' || coalesce(') || E', \'\'))) STORED;'
FROM information_schema.columns
WHERE table_schema = 'public' AND data_type IN ('character varying')
GROUP BY table_schema, table_name;
-- Create indexes on textsearchable_index_col columns:
SELECT 'CREATE INDEX ' || table_name || '_textsearch_idx ON ' || table_schema || '.' || table_name || ' USING GIN (textsearchable_index_col);'
FROM information_schema.columns
WHERE table_schema = 'public' AND data_type IN ('character varying')
GROUP BY table_schema, table_name;
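For the pgweb table from your example (assuming title and body are its only character varying columns), the generated statements come out like this:
ALTER TABLE public.pgweb ADD COLUMN textsearchable_index_col tsvector GENERATED ALWAYS AS (to_tsvector('english', coalesce(title, '') || ' ' || coalesce(body, ''))) STORED;
CREATE INDEX pgweb_textsearch_idx ON public.pgweb USING GIN (textsearchable_index_col);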
Then I will use a dynamic query to create a query (using UNION ALL) that searches all those textsearchable_index_col columns.
You need to replace the question marks with your parameters (user_id and the searched text), and remove the trailing "UNION ALL":
SELECT E'SELECT \'' || table_name || E'\' AS table_name, id FROM ' || table_schema || '.' || table_name || E' WHERE user_id = ? AND textsearchable_index_col' || ' @@ to_tsquery(?) UNION ALL'
FROM information_schema.columns
WHERE table_schema = 'public' AND data_type IN ('character varying')
GROUP BY table_schema, table_name;
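Stitched together (trailing UNION ALL removed and the placeholders bound, here to a made-up user_id of 42 and search term 'foo'), the final query looks roughly like:
SELECT 'table1' AS table_name, id FROM public.table1 WHERE user_id = 42 AND textsearchable_index_col @@ to_tsquery('foo')
UNION ALL
SELECT 'table42' AS table_name, id FROM public.table42 WHERE user_id = 42 AND textsearchable_index_col @@ to_tsquery('foo');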

How to combine multiple tables together in postgresql that have the same columns but in different order?

I have multiple tables that all have the same columns, but in different order. I want to merge them all together. I've created an empty table with the standard columns in the order I would like. I've tried inserting with
insert into master_table select * from table1;
but that doesn't work because of the differing column order - some of the values end up in the wrong columns. What is the best way to create one table out of them all in the order specified in my empty master table?
If you are dealing with many columns and many tables, you can use the information_schema to get the columns. You can loop through all the tables you want to insert from and run this in a PL/pgSQL procedure, replacing table1 with a variable (a sketch of such a loop follows the query):
EXECUTE (
SELECT
'insert into master_table
(' || string_agg(quote_ident(column_name), ',') || ')
SELECT ' || string_agg('p.' || quote_ident(column_name), ',') || '
FROM table1 p '
FROM information_schema.columns raw
WHERE table_name = 'master_table');
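A minimal sketch of that loop as an anonymous DO block; the source table names in the array are placeholders, so replace them with your own (and adjust the schema filter if master_table is not in public):
DO $$
DECLARE
  src text;
BEGIN
  -- hypothetical list of source tables to merge into master_table
  FOREACH src IN ARRAY ARRAY['table1', 'table2', 'table3'] LOOP
    EXECUTE (
      SELECT 'insert into master_table ('
             || string_agg(quote_ident(column_name), ',' ORDER BY ordinal_position)
             || ') select '
             || string_agg('p.' || quote_ident(column_name), ',' ORDER BY ordinal_position)
             || ' from ' || quote_ident(src) || ' p'
      FROM information_schema.columns
      WHERE table_schema = 'public'
        AND table_name = 'master_table');
  END LOOP;
END $$;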
Just indicate the proper order in the SELECT
instead of
select *
For example, if you want the third field in the second position:
select field1, field3, field2
Or you can use the INSERT syntax with an explicit column list:
INSERT INTO master_table (field1, field3, field2)
SELECT * FROM table1;

Query to select from all tables that have the same value in the same column

The database is called 'school'. It has one table per classroom, i.e. 'room1' is a table, 'room2' is another table, etc., and each classroom table has a 'student_name' column.
I want to select all 'classroom_names' where it has a 'student_name' called 'John'.
So far I can only select all the classroom table names from the database, like this:
select * from syscat.tables
I'd suggest the following. (Which tries to fix some of the problems with your design)
Create the following View
CREATE OR REPLACE VIEW SCHOOL.STUDENT_LIST AS
SELECT 'Room1' as CLASSROOM, student_name FROM SCHOOL.Room1
UNION ALL
SELECT 'Room2' as CLASSROOM, student_name FROM SCHOOL.Room2
UNION ALL
SELECT 'Room3' as CLASSROOM, student_name FROM SCHOOL.Room3
UNION ALL
SELECT 'Room4' as CLASSROOM, student_name FROM SCHOOL.Room4
UNION ALL
-- etc
SELECT 'RoomN' as CLASSROOM, student_name FROM SCHOOL.RoomN
Now you can say
SELECT CLASSROOM FROM STUDENT_LIST WHERE student_name = 'John'
You can build a dynamic query from the DB2 catalog:
db2 "select 'select ' || trim(tabname) || ' classroom, student_name from '
|| tabname || ' where student_name = ''John'';'
from syscat.tables
where tabname like 'room%'" | db2 +p -tv
The last part (db2 +p -tv) lets you execute the generated output directly. If it does not work (e.g. because of buffer size), just remove that part and copy/paste the output instead.
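Each generated statement (and therefore what gets piped into the second db2 call) then looks like this, assuming tables room1 and room2 exist:
select 'room1' classroom, student_name from room1 where student_name = 'John';
select 'room2' classroom, student_name from room2 where student_name = 'John';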