Let's say I have two tables. Is there a way to insert rows into both simultaneously? When I had one table I could simply do:
INSERT INTO users (first_name, last_name, email) VALUES ('Tom', 'Prats', 'tom@tomprats.com'), ('Tim', 'Prats', 'tim@tomprats.com');
If I now have one table that stores the email and an associated table that stores first and last name, how do I insert into both at once? Keep in mind I'm importing a lot of data.
Table "users"
Column | Type | Modifiers
---------------------+---------------+-----------
id | character(36) | not null
email | text |
attributes_id | character(36) |
Indexes:
"pk_users" PRIMARY KEY, btree (id)
Foreign-key constraints:
"fk_users_attributes" FOREIGN KEY (attributes_id) REFERENCES user_attributes(id)
Table "user_attributes"
Column | Type | Modifiers
-----------------+---------------+-----------
id | character(36) | not null
first_name | text |
last_name | text |
Indexes:
"pk_user_attributes" PRIMARY KEY, btree (id)
Referenced by:
TABLE "users" CONSTRAINT "fk_users_attributes" FOREIGN KEY (attributes_id) REFERENCES user_attributes(id)
You can't do this with a single INSERT statement.
If you must do it in a single statement, the best way is to write a short function that performs the two inserts. Each call then runs in a single transaction (if that was the main concern).
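For example, a minimal sketch of such a function (the name insert_user is hypothetical), assuming the character(36) ids hold UUIDs generated with gen_random_uuid(), which is built into PostgreSQL 13 and later and available from the pgcrypto extension before that:
CREATE FUNCTION insert_user(p_email text, p_first text, p_last text)
RETURNS void
LANGUAGE plpgsql AS $$
DECLARE
    -- generate the attributes id up front so both inserts can share it
    v_attr_id text := gen_random_uuid()::text;
BEGIN
    INSERT INTO user_attributes (id, first_name, last_name)
    VALUES (v_attr_id, p_first, p_last);

    INSERT INTO users (id, email, attributes_id)
    VALUES (gen_random_uuid()::text, p_email, v_attr_id);
END;
$$;

-- usage: one call, one transaction
SELECT insert_user('tom@tomprats.com', 'Tom', 'Prats');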
You can also use data-modifying statements in WITH (a common table expression).
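A sketch of the same batch insert with a data-modifying CTE, under the same UUID assumption as above. Note that it joins the inserted attribute rows back to the source rows by name, so names must be unique within the batch:
WITH data (email, first_name, last_name) AS (
    VALUES ('tom@tomprats.com', 'Tom', 'Prats'),
           ('tim@tomprats.com', 'Tim', 'Prats')
),
attrs AS (
    -- insert the names first and hand the generated ids back out
    INSERT INTO user_attributes (id, first_name, last_name)
    SELECT gen_random_uuid()::text, first_name, last_name
    FROM data
    RETURNING id, first_name, last_name
)
INSERT INTO users (id, email, attributes_id)
SELECT gen_random_uuid()::text, d.email, a.id
FROM data d
JOIN attrs a ON (a.first_name, a.last_name) = (d.first_name, d.last_name);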
I have one table, which is heavily updated in my system by process A. This is the simplified table:
db=# \d employee;
Table "public.employee"
Column | Type | Collation | Nullable | Default
-----------------+-----------------------------+-----------+----------+---------------------------------------------
id | integer | | not null | nextval('employee_id_seq'::regclass)
name | character varying | | |
Indexes:
"employee_pkey" PRIMARY KEY, btree (id)
And I have a table which is referencing that table:
db=# \d employee_property;
Table "public.employee_property"
Column | Type | Collation | Nullable | Default
-----------------+-----------------------------+-----------+----------+---------------------------------------------
id | integer | | not null | nextval('employee_property_id_seq'::regclass)
type | character varying | | |
value | character varying | | |
employee_id | integer | | not null |
Indexes:
"employee_property_pkey" PRIMARY KEY, btree (id)
"employee_property_employee_id_type_value_key" UNIQUE CONSTRAINT, btree (employee_id, type, value)
"ix_employee_property_employee_id" btree (employee_id)
Foreign-key constraints:
"employee_property_employee_id_fkey" FOREIGN KEY (employee_id) REFERENCES employee(employee_id) ON DELETE CASCADE DEFERRABLE
I am trying to understand: if process B heavily updates the employee_property table, might that cause locks or other side effects that affect process A, which updates the employee table?
If you insert a row in employee_property or update the employee_id column of an existing row, a FOR KEY SHARE lock is placed on the row the new employee_id refers to.
This lock will block any concurrent attempt to delete the referenced employee row or update any PRIMARY KEY or UNIQUE columns. Updates to the locked employee row that do not modify a key column will work, because they only require a FOR NO KEY UPDATE lock on the row, which is compatible with FOR KEY SHARE.
The reason for this is that PostgreSQL must ensure that the referenced row cannot vanish while the transaction that modifies employee_property is still in progress. Simply checking that the referenced row exists in employee at the time of the insert won't be enough, because the effects of a transaction that is still in progress are not visible outside the transaction itself.
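To illustrate, a sketch with two concurrent sessions, assuming an employee row with id = 1 exists (the property values are made up):
-- session A
BEGIN;
INSERT INTO employee_property (type, value, employee_id)
VALUES ('badge', '42', 1);   -- takes FOR KEY SHARE on the employee row with id 1

-- session B, in a separate connection
UPDATE employee SET name = 'Bob' WHERE id = 1;   -- proceeds: FOR NO KEY UPDATE is compatible
DELETE FROM employee WHERE id = 1;               -- blocks until session A commits or rolls back

-- session A
COMMIT;   -- session B's DELETE now proceeds (and cascades to employee_property)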
I am working on an ETL where we get data from Hive and dump it into Postgres. To ensure the data is not corrupt, I first store it in a temp table (created like the main table, with all the indexes and constraints) and, once the data is validated, copy it to the main table.
But this has been taking too long, as the data is huge.
I am now thinking of dropping the main table once the data is validated, and then renaming the temp table to the main table.
Will renaming a table in Postgres drop the indexes, constraints, and defaults defined on it?
In a word: no, it will not drop any indexes, constraints, or defaults. Here's a quick demo:
db=> CREATE TABLE mytab (
id INT PRIMARY KEY,
col_uniq INT UNIQUE,
col_not_null INT NOT NULL DEFAULT 123
);
CREATE TABLE
db=> \d mytab
Table "public.mytab"
Column | Type | Modifiers
--------------+---------+----------------------
id | integer | not null
col_uniq | integer |
col_not_null | integer | not null default 123
Indexes:
"mytab_pkey" PRIMARY KEY, btree (id)
"mytab_col_uniq_key" UNIQUE CONSTRAINT, btree (col_uniq)
db=> ALTER TABLE mytab RENAME TO mytab_renamed;
ALTER TABLE
db=> \d mytab_renamed
Table "public.mytab_renamed"
Column | Type | Modifiers
--------------+---------+----------------------
id | integer | not null
col_uniq | integer |
col_not_null | integer | not null default 123
Indexes:
"mytab_pkey" PRIMARY KEY, btree (id)
"mytab_col_uniq_key" UNIQUE CONSTRAINT, btree (col_uniq)
I am working in Postgres 9.1 and I want to create a foreign key relationship for two tables that don't currently have one.
These are my tables:
# \d frontend_item;
Table "public.frontend_item"
Column | Type | Modifiers
-------------------+-------------------------+--------------------------------------------------------------------
id | integer | not null default nextval('frontend_prescription_id_seq'::regclass)
presentation_code | character varying(15) | not null
pct_code | character varying(3) | not null
Indexes:
"frontend_item_pkey" PRIMARY KEY, btree (id)
# \d frontend_pct;
Table "public.frontend_pct"
Column | Type | Modifiers
------------+--------------------------+-----------
code | character varying(3) | not null
Indexes:
"frontend_pct_pkey" PRIMARY KEY, btree (code)
"frontend_pct_code_1df55e2c36c298b2_like" btree (code varchar_pattern_ops)
This is what I'm trying:
# ALTER TABLE frontend_item ADD CONSTRAINT pct_fk
FOREIGN KEY (pct_code) REFERENCES frontend_pct(code) ON DELETE CASCADE;
But I get this error:
ERROR: insert or update on table "frontend_item" violates
foreign key constraint "pct_fk"
DETAIL: Key (pct_code)=(5HQ) is not present in table "frontend_pct"
I guess this makes sense, because currently the frontend_pct table is empty, while the frontend_item has values in it.
Firstly, is the syntax of my ALTER TABLE correct?
Secondly, is there an automatic way to create the required values in frontend_pct? It would be great if there were some way to tell Postgres "create the foreign key, and insert values into the foreign key table if they don't exist".
Your syntax seems correct.
No, there is not an automatic way to insert the required values.
You can only do it manually before adding the constraint. In your case it would be something like:
INSERT INTO frontend_pct (code)
SELECT DISTINCT pct_code
FROM frontend_item
WHERE pct_code NOT IN (SELECT code FROM frontend_pct);
Note: the query can be heavy if you have a lot of data.
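Once the backfill has run, the ALTER TABLE from the question should succeed:
ALTER TABLE frontend_item ADD CONSTRAINT pct_fk
FOREIGN KEY (pct_code) REFERENCES frontend_pct(code) ON DELETE CASCADE;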
I'm going through Seven Databases in Seven Weeks.
In PostgreSQL, I created a venues table that has a SERIAL venue_id column.
output of \d venues
Table "public.venues"
Column | Type | Modifiers
----------------+------------------------+-----------------------------------------------------------
venue_id | integer | not null default nextval('venues_venue_id_seq'::regclass)
name | character varying(255) |
street_address | text |
type | character(7) | default 'public'::bpchar
postal_code | character varying(9) |
country_code | character(2) |
Indexes:
"venues_pkey" PRIMARY KEY, btree (venue_id)
Check constraints:
"venues_type_check" CHECK (type = ANY (ARRAY['public'::bpchar, 'private'::bpchar]))
Foreign-key constraints:
"venues_country_code_fkey" FOREIGN KEY (country_code, postal_code) REFERENCES cities(country_code, postal_code) MATCH FULL
The next step is to create an event table that references venue_id with a foreign key.
I'm trying this:
CREATE TABLE events (
event_id SERIAL PRIMARY KEY,
title text,
starts timestamp,
ends timestamp,
FOREIGN KEY (venue_id) REFERENCES venues (venue_id));
And I get this error:
ERROR: column "venue_id" referenced in foreign key constraint does not exist
What's wrong?
You need to declare the foreign key column in the table too. See the docs: http://www.postgresql.org/docs/current/static/ddl-constraints.html#DDL-CONSTRAINTS-FK
Source & credit: @mu is too short.
I'm going through the second edition of this book, so things might have changed slightly.
To create the table, you explicitly have to declare venue_id as a column in your table, just like the rest of your columns:
CREATE TABLE events (
event_id SERIAL PRIMARY KEY,
title text,
starts timestamp,
ends timestamp,
venue_id integer, -- this is the line you're missing!
FOREIGN KEY (venue_id)
REFERENCES venues (venue_id) MATCH FULL
);
Once you have executed that, the table is created:
7dbs=# \dt
List of relations
Schema | Name | Type | Owner
--------+-----------+-------+----------
public | cities | table | postgres
public | countries | table | postgres
public | events | table | postgres
public | venues | table | postgres
I've got some linked tables in a Postgres database, as follows:
Table "public.key"
Column | Type | Modifiers
--------+------+-----------
id | text | not null
name | text |
Referenced by:
TABLE "enumeration_value" CONSTRAINT "enumeration_value_key_id_fkey" FOREIGN KEY (key_id) REFERENCES key(id)
Table "public.enumeration_value"
Column | Type | Modifiers
--------+------+-----------
id | text | not null
key_id | text |
Foreign-key constraints:
"enumeration_value_key_id_fkey" FOREIGN KEY (key_id) REFERENCES key(id)
Referenced by:
TABLE "classification_item" CONSTRAINT "classification_item_value_id_fkey" FOREIGN KEY (value_id) REFERENCES enumeration_value(id)
Table "public.classification_item"
Column | Type | Modifiers
----------------+------+-----------
id | text | not null
transaction_id | text |
value_id | text |
Foreign-key constraints:
"classification_item_transaction_id_fkey" FOREIGN KEY (transaction_id) REFERENCES transaction(id)
"classification_item_value_id_fkey" FOREIGN KEY (value_id) REFERENCES enumeration_value(id)
I want to
delete all classification_items associated with a certain transaction
delete all enumeration_values associated with those classification_items
and finally, delete all key items associated with those enumeration_values.
The difficulty is that the key items are NOT unique to enumeration_values associated (via classification_item) with a certain transaction. They get created independently, and can exist across multiple of these transactions.
So I know how to do the second two of these steps, but not the first one:
delete from key where id in (select key_id from enumeration_value where id in (select value_id from "classification_item" where transaction_id in (select id from "transaction" where slice_id = (select id from slice where name = 'barnet'))));
-- In the statement above: help! How do I make sure these keys are ONLY used with these values?
delete from enumeration_value where id in (select value_id from "classification_item" where transaction_id in (select id from "transaction" where slice_id = (select id from slice where name = 'barnet')));
delete from classification_item where transaction_id in (select id from "transaction" where slice_id = (select id from slice where name = 'barnet'));
If only postgres had a CASCADE DELETE statement....
If only postgres had a CASCADE DELETE statement....
PostgreSQL has had this option for a long time, since version 8.0. Just use it.
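For example, a sketch that recreates the transaction_id foreign key with ON DELETE CASCADE, so that deleting a transaction also removes its classification_items (constraint and table names taken from the \d output in the question):
ALTER TABLE classification_item
    DROP CONSTRAINT classification_item_transaction_id_fkey;
ALTER TABLE classification_item
    ADD CONSTRAINT classification_item_transaction_id_fkey
    FOREIGN KEY (transaction_id) REFERENCES "transaction"(id)
    ON DELETE CASCADE;
Note that cascading stops where sharing begins: since key rows are shared across transactions, deleting them would still need a separate, guarded DELETE.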