Almost all the information I have ever needed about a database I could find in information_schema.
This time I need to read the details of all foreign keys in a database through a single query. I found almost everything in information_schema.KEY_COLUMN_USAGE, but I could not find the referential rules such as ON DELETE and ON UPDATE.
I could run SHOW CREATE TABLE for each individual table, but is there any way to get these details through a SELECT query like this?
SELECT CONSTRAINT_NAME, TABLE_NAME, COLUMN_NAME,
       REFERENCED_TABLE_NAME, REFERENCED_COLUMN_NAME
FROM information_schema.KEY_COLUMN_USAGE
WHERE table_schema = 'mydbname' AND referenced_column_name IS NOT NULL
It does the job well but is just missing the ON DELETE and ON UPDATE rules. How can I get those values as well, so that I can get all the information about foreign keys in a single query?
UPDATE_RULE and DELETE_RULE are the things you asked for; they live in information_schema.REFERENTIAL_CONSTRAINTS.
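A minimal sketch of what that looks like, reusing the schema name 'mydbname' from the question:
SELECT CONSTRAINT_NAME, UPDATE_RULE, DELETE_RULE
FROM information_schema.REFERENTIAL_CONSTRAINTS
WHERE CONSTRAINT_SCHEMA = 'mydbname';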
It's a little bit late, but it could help someone else. Here is the solution:
SELECT tb1.CONSTRAINT_NAME, tb1.TABLE_NAME, tb1.COLUMN_NAME,
       tb1.REFERENCED_TABLE_NAME, tb1.REFERENCED_COLUMN_NAME,
       tb2.MATCH_OPTION, tb2.UPDATE_RULE, tb2.DELETE_RULE
FROM information_schema.KEY_COLUMN_USAGE AS tb1
INNER JOIN information_schema.REFERENTIAL_CONSTRAINTS AS tb2
        ON tb1.CONSTRAINT_NAME = tb2.CONSTRAINT_NAME
       AND tb1.TABLE_SCHEMA = tb2.CONSTRAINT_SCHEMA
WHERE tb1.TABLE_SCHEMA = 'sfa' AND tb1.REFERENCED_COLUMN_NAME IS NOT NULL
Update: The information_schema.REFERENTIAL_CONSTRAINTS table was added in MySQL 5.1, so from 5.1 onwards we can get information about all constraints. The accepted answer gives the solution as a query.
Earlier
Before MySQL 5.1 (e.g. MySQL 5.0) we could not get this information; we could only use SHOW CREATE TABLE for each individual table.
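For example (mytable being whichever table you are interested in):
SHOW CREATE TABLE mytable;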
If you are looking for primary, foreign, or unique keys, see the TABLE_CONSTRAINTS table:
http://dev.mysql.com/doc/refman/5.5/en/table-constraints-table.html
Now you can also find the foreign key constraint details (including the update and delete rules) in the INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS table:
http://dev.mysql.com/doc/refman/5.5/en/referential-constraints-table.html
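A minimal sketch against TABLE_CONSTRAINTS, reusing the schema name from the question:
SELECT CONSTRAINT_NAME, TABLE_NAME, CONSTRAINT_TYPE
FROM information_schema.TABLE_CONSTRAINTS
WHERE TABLE_SCHEMA = 'mydbname'
  AND CONSTRAINT_TYPE IN ('PRIMARY KEY', 'FOREIGN KEY', 'UNIQUE');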
Related
Scenario: I have two tables, Table A and Table B; both have exactly the same columns. My task is to create a master table. I need to ensure no duplicates end up in the master table unless it is a new record.
Problem: whoever built the tables did not assign a primary key to either table.
Attempts: I tried running an INSERT INTO ... WHERE NOT EXISTS query (the example below, not the actual query I ran).
Question: the portion of the query below, WHERE t2.id = t1.id, confuses me. My table has a multitude of columns and there is no id column; as I said, there is no primary key to anchor the match. So, in a scenario where all I have are values without primary keys, how can I append only new records? Also, perhaps I am going about this the wrong way, but are there any other functions or options in T-SQL worth considering? Maybe not an INSERT INTO statement, or perhaps something else? My SQL skills aren't yet that advanced, so I am not asking for a solution, just ideas or other methods worth considering. Any ideas are welcome.
INSERT INTO TABLE_2 (id, name)
SELECT t1.id, t1.name
FROM TABLE_1 t1
WHERE NOT EXISTS (SELECT id
                  FROM TABLE_2 t2
                  WHERE t2.id = t1.id)
If I understand your question correctly, you would need to amend the SQL sample you posted by changing the condition t2.id = t1.id to whatever columns you do have.
Say your two tables have name and brand columns and you don't want duplicates; just change the sample to:
WHERE t2.name = t1.name
AND t2.brand = t1.brand
This will ensure you don't insert any rows into table 2 from table 1 which are duplicates. You would have to make sure the WHERE condition contains all of the columns (you said the table schemas are identical).
Also, the above code sample copies everything into table 2 - but you said you want a master table - so you'd have to change it to insert into the master table, not table 2.
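Putting both changes together, a sketch under those assumptions (master_table, name and brand are placeholder names; in practice the NOT EXISTS condition would need to list every column):
INSERT INTO master_table (name, brand)
SELECT t1.name, t1.brand
FROM TABLE_1 t1
WHERE NOT EXISTS (SELECT 1
                  FROM master_table t2
                  WHERE t2.name = t1.name
                    AND t2.brand = t1.brand)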
Is it possible to add a new column to an existing table from another table using INSERT or UPDATE in conjunction with a FULL OUTER JOIN?
In my main table I am missing some records in one column; the other table has all of those records, and I want to pull the full record set into the maintable. Something like this:
UPDATE maintable
SET all_records= othertable.records
FROM
FULL JOIN othertable on maintable.col = othertable.records;
Where maintable.col has the same id as othertable.records.
I know I could simply join the tables, but I have a lot of comments in the maintable that I don't want to have to copy and paste back in if possible. As I understand it, using WHERE is the equivalent of a left join, so it won't show me what I'm missing.
EDIT:
What I want is effectively a new maintable.col with all the records, which I can then pare down based on the presence of records in other columns from other tables.
Try this:
UPDATE maintable
SET all_records = o.records
FROM othertable o
WHERE maintable.col = o.records;
This is the general syntax to use in postgres when updating via a join.
HTH
EDIT
BTW you will need to change this - I used your example, but you are updating the maintable with the column used for the join! Your set needs to be something like SET missingcol = o.extracol
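A corrected sketch along those lines (missingcol and extracol are placeholder column names):
UPDATE maintable
SET missingcol = o.extracol
FROM othertable o
WHERE maintable.col = o.records;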
AMENDED GENERALISED ANSWER (following off-line chat)
To take a simplified example, suppose that you have two tables maintable and subtable, each with the same columns, but where the subtable has extra records. For both tables id is the primary key. To fill maintable with the missing records, for pre 9.5 versions of Postgres you must use the following syntax:
INSERT INTO maintable (SELECT * FROM subtable s WHERE NOT EXISTS
(SELECT 1 FROM maintable m WHERE m.id = s.id));
Since 9.5 there is a (preferred) alternative:
INSERT INTO maintable (SELECT * FROM subtable) ON CONFLICT DO NOTHING;
This is preferred because (apart from being simpler) it avoids the situation that has been known to arise in the former, where a race condition is created between the INSERT and the sub-SELECT.
Obviously when the columns are different, you need to specify in the INSERT statement which columns are inserted from which. Something like:
INSERT INTO maintable (id, ColA, ColB)
(SELECT id, ColE, ColG FROM subtable ....)
Similarly the common field might not be id in both tables. However, the simplified example should be enough to point you in the right direction.
Good Day,
I'm currently using PostgreSQL as my backend and I have to make huge changes to my table fields.
I will be using two tables.
Table 1: Old Index, Product Id, Address, Contact no
Table 2: New Index, Old Index, Product Id, Address, Contact no, Email
I have to migrate all details from Table 1 to Table 2. I'm using a different index for Table 2.
For my other tables to recognize my old index, I used this query:
UPDATE Table2 SET OldIndex = Table2.index
FROM (SELECT OldIndex FROM Table1) AS new, Table1
WHERE Table1.ProductId = Table2.ProductId
I have other tables related to Table 1, so my goal is to replace the old index with the new index and have those other tables see the changes too.
But I'm not sure I'm doing this right. My query is slow; I hope someone can test my query and point me in the right direction if I'm doing it all wrong. Thank you in advance.
Would you mind trying MERGE?
MERGE INTO Table2 AS b
USING Table1 AS p
   ON p.product_id = b.product_id
WHEN MATCHED THEN
    UPDATE SET b.OldIndex = b.NewIndex
I do not know how it works for postgresql, but you can find some samples here: https://wiki.postgresql.org/wiki/MergeTestExamples
The way to do this in PostgreSQL is to use a writable CTE (available in 9.1 and later).
In this way you would do something like:
WITH up AS (
    UPDATE table2
    SET ....
    FROM table1 t1
    WHERE t1.product_id = table2.product_id
    RETURNING table2.product_id
)
INSERT INTO table2 (...)
SELECT ...
FROM table1
WHERE product_id NOT IN (SELECT product_id FROM up);
You can find some examples here.
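As a concrete sketch of the same pattern with made-up column names (old_index, address and contact_no stand in for whatever the real columns are):
WITH up AS (
    UPDATE table2
    SET old_index = t1.old_index
    FROM table1 t1
    WHERE t1.product_id = table2.product_id
    RETURNING table2.product_id
)
INSERT INTO table2 (product_id, old_index, address, contact_no)
SELECT product_id, old_index, address, contact_no
FROM table1
WHERE product_id NOT IN (SELECT product_id FROM up);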
I have an Oracle database schema with fairly complicated foreign key relationships. I need to populate test data into all the tables. Due to the foreign key constraints, I am finding it difficult to work out the hierarchy of tables. Can anyone suggest a package or method to accomplish this?
Thanks in advance
It would be helpful if you could let us know what form you want the output to take. You may want to start with Frank Kulash's example of a hierarchical query against the DBA_CONSTRAINTS table to show the path.
If you are looking for a way to determine what order to load tables, that's identical to a question that was asked on dba.stackexchange (can't mark this question as a duplicate because DBA is still in beta). Something like
WITH constraint_tree AS
(
    SELECT DISTINCT
           a.table_name AS table_name,
           b.table_name AS parent_table_name
    FROM dba_constraints a
    LEFT OUTER JOIN dba_constraints b
      ON a.r_constraint_name = b.constraint_name
     AND a.owner = b.owner
    WHERE a.owner = 'SCOTT'
)
SELECT table_name, lvl
FROM (
    SELECT a.*,
           rank() OVER (PARTITION BY table_name ORDER BY lvl DESC) rnk
    FROM (
        SELECT table_name, level lvl
        FROM constraint_tree
        START WITH parent_table_name IS NULL
        CONNECT BY NOCYCLE parent_table_name = PRIOR table_name
    ) a
) b
WHERE rnk = 1
ORDER BY lvl, table_name
/
will give you the tables in the order they should be loaded (assuming there are no cycles in the data). If you want to load in parallel, all tables with the same LVL can be loaded simultaneously.
If the hierarchy of tables is very complicated, and if you can get sole access to the schema (i.e. impose some "down time" on the users), you could disable all the foreign key constraints, load the data, then re-enable the constraints.
Another alternative is to use deferrable constraints, and only defer them for the session that is loading the data; but there are disadvantages to this, one being that you'd first have to drop all the constraints in order to make them deferrable if they're not already.
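A rough sketch of the disable/re-enable route, assuming the schema is SCOTT: generate the ALTER statements, run them, load your data, then run the same statements with ENABLE in place of DISABLE.
SELECT 'ALTER TABLE ' || owner || '.' || table_name ||
       ' DISABLE CONSTRAINT ' || constraint_name || ';' AS ddl
FROM dba_constraints
WHERE owner = 'SCOTT'
  AND constraint_type = 'R';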
In PostgreSQL, is there a way to get all of the tables that a view/table depends on based on its use of foreign keys and access to a given table?
Basically, I want to be able to copy the structure of a view/table using a script and want to be able to automatically get the list of tables that I would also need to copy in order for everything to still work right.
This response appears to be headed in the right direction, but doesn't give me the results that I expect/need. Any suggestions?
Using the info from Andy Lester, I was able to come up with the following queries to retrieve the information that I needed.
Get Tables that Foreign Keys refer to:
SELECT cl2.relname AS ref_table
FROM pg_constraint as co
JOIN pg_class AS cl1 ON co.conrelid=cl1.oid
JOIN pg_class AS cl2 ON co.confrelid=cl2.oid
WHERE co.contype='f' AND cl1.relname='TABLENAME'
ORDER BY cl2.relname;
Get Tables that a View or Rules from a Table refer to:
SELECT cl_d.relname AS ref_table
FROM pg_rewrite AS r
JOIN pg_class AS cl_r ON r.ev_class=cl_r.oid
JOIN pg_depend AS d ON r.oid=d.objid
JOIN pg_class AS cl_d ON d.refobjid=cl_d.oid
WHERE cl_d.relkind IN ('r','v') AND cl_r.relname='TABLENAME'
GROUP BY cl_d.relname
ORDER BY cl_d.relname;
Assuming you have your foreign keys set up correctly, use pg_dump to dump the table definitions.
pg_dump -s -t TABLENAME
I think it is quite a bad idea. Just copy the whole database; the application most likely needs all of the data, not only the data from one table.
What's more, there are also triggers that could depend on some tables, and to detect that you would have to do some non-trivial code analysis.
In psql, adding + to the usual \d gives you a "Referenced by" list along with the table definition.
\d+ tablename