I have a Postgres database with 30 tables. The main table "general_data" has a column "building_code", which all the other tables use as a foreign key.
What I need is to synchronise the "building_code" columns in all tables, meaning that if a row is added to "general_data", a row is created in ALL tables with the same value in "building_code" (the other columns remain empty).
Is there an SQL function that does that?
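There is no single built-in function for this, but one way to get the behaviour in Postgres is an AFTER INSERT trigger on general_data. A minimal sketch, where details and measurements stand in for two of the 29 other tables:
-- Trigger function: for every new row in general_data, create a stub row
-- (only building_code filled in) in each dependent table.
-- "details" and "measurements" are placeholder table names.
create or replace function propagate_building_code() returns trigger as $$
begin
    insert into details (building_code) values (new.building_code);
    insert into measurements (building_code) values (new.building_code);
    -- ...one insert per remaining table...
    return new;
end;
$$ language plpgsql;
create trigger sync_building_code
after insert on general_data
for each row execute procedure propagate_building_code();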
Related
My goal is to append all the tables in the schema. Is there a way to loop through the tables in one specific schema and append them to each other to create a bigger table? (Here, all my tables have the same column layout.)
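A minimal sketch of one way to do this with a DO block and dynamic SQL, assuming the tables live in a schema called source_schema and the target is called combined_table (both names are placeholders, and all tables are assumed to have the same column order):
-- Create the target table with the same column layout as the others;
-- "one_of_the_tables" is a placeholder for any existing table.
create table combined_table (like source_schema.one_of_the_tables);
-- Loop over every base table in the schema and append its rows.
do $$
declare
    tbl record;
begin
    for tbl in
        select table_name
        from information_schema.tables
        where table_schema = 'source_schema'
          and table_type = 'BASE TABLE'
          and table_name <> 'combined_table'
    loop
        execute format('insert into combined_table select * from source_schema.%I', tbl.table_name);
    end loop;
end;
$$;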
There is a set of values to update. Example: table t1 has a column c1 whose value has to be changed from 1 to x. There are around 300 such old/new pairs available in a file, and around 15 such tables with over 100k records.
What is the optimal way of doing this?
Approaches I can think of are:
an individual update statement per old/new pair in all tables
programmatically reading the file and generating dynamic update statements
using the merge into syntax
In one of the tables the column is a primary key, with other tables referencing it as a foreign key.
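If loading the file into the database is an option, here is a sketch of the staging-table variant (value_map, old_val and new_val are made-up names, and the update ... from join syntax is Postgres-specific):
-- Load the ~300 old/new pairs from the file into a staging table,
-- e.g. with COPY. Adjust the column types to match c1.
create table value_map (old_val integer, new_val integer);
-- Then each of the 15 tables needs a single set-based statement
-- instead of 300 individual updates.
update t1
set    c1 = m.new_val
from   value_map m
where  t1.c1 = m.old_val;
For the table where c1 is a primary key, the referencing tables have to be changed as well, either in the same transaction or by temporarily switching the foreign keys to on update cascade.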
When I insert a row into a table that already has some rows in Oracle, the new row appears somewhere in the middle instead of at the bottom of the table. When I insert another new row, it appears below the row just added.
Why does this happen?
Tables in databases typically represent an unordered collection of data. In Oracle, tables are by default heap-organized tables and do not store data in order.
If ordering is important in your data, consider an index-organized table. For Oracle, more information on that can be found here: Overview of Tables
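For illustration, a minimal sketch of an index-organized table (the table and column names are invented):
-- Rows of an index-organized table are stored in primary key order.
create table orders_iot (
    order_id   number primary key,
    order_note varchar2(100)
) organization index;
-- Regardless of table organization, a query only guarantees ordered
-- results when it has an ORDER BY clause.
select * from orders_iot order by order_id;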
Sharing your table definition would help in confirming that for you.
We have two copies of a simple application that is based on SQLite. The application has 10 tables with a variety of relations between them. We would like to merge the two databases into a single Postgres database with the same schema. We can use Talend to facilitate this; however, the issue is that there would be duplicate keys, as the two source databases are independent. Is there a systematic method by which we can insert the data into Postgres with the original key plus an offset determined after loading the first database?
Step 1. Restore the first database.
Step 2. Change the foreign keys of all tables by adding the on update cascade option.
For example, if the column table_b.a_id refers to the column table_a.id:
alter table table_b
drop constraint table_b_a_id_fkey,
add constraint table_b_a_id_fkey
foreign key (a_id) references table_a(id)
on update cascade;
Step 3. Update primary keys of the tables by adding the desired offset, e.g.:
update table_a
set id = 10000 + id;
Step 4. Restore the second database.
If you have the possibility to edit the script with the database schema (or to do the transfer manually with your own script), you can merge steps 1 and 2 and edit the script before the restore, adding the on update cascade option to the foreign keys in the table declarations.
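For example, the declaration of table_b in the edited schema script would then look something like this (the column types are assumptions):
-- The cascade option is declared inline, so no separate alter table
-- is needed after the restore.
create table table_b (
    id   integer primary key,
    a_id integer references table_a(id) on update cascade
);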
I have a Postgres database that contains over 200 tables, all with the same column names and datatypes. I would like to combine all of them into one table. How can I achieve this?
I have Postgres 9.4 and pgAdmin setup.
If the tables have identical column names and types, then you can create a parent table and arrange for all of the other tables to inherit from the parent table. After this, queries on the parent table will automatically query all of the child tables.
First create an empty parent table with the same definition as the 200 tables you already have.
Then, use ALTER TABLE on each of the 200 tables to make them inherit from the parent table.
CREATE TABLE myparenttable( LIKE mychildtable1 );
-- Repeat this for each of the child tables
ALTER TABLE mychildtable1 INHERIT myparenttable;
See also: Inheritance in the PostgreSQL manual.
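Running the ALTER TABLE by hand for 200 tables is tedious; here is a sketch of automating it with a DO block, assuming the only tables in the public schema are the 200 child tables and the parent, and that the parent is called myparenttable:
-- Attach every base table in the schema (except the parent itself)
-- as a child of myparenttable.
DO $$
DECLARE
    tbl record;
BEGIN
    FOR tbl IN
        SELECT table_name
        FROM information_schema.tables
        WHERE table_schema = 'public'
          AND table_type = 'BASE TABLE'
          AND table_name <> 'myparenttable'
    LOOP
        EXECUTE format('ALTER TABLE %I INHERIT myparenttable', tbl.table_name);
    END LOOP;
END;
$$;
After that, SELECT * FROM myparenttable returns the rows of all 200 child tables.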