Column <tablename>_id referenced in foreign key not found - postgresql

I'm going through 7 Databases in 7 Weeks.
In PostgreSQL, I created a venues table that has a SERIAL venue_id column.
output of \d venues
Table "public.venues"
Column | Type | Modifiers
----------------+------------------------+-----------------------------------------------------------
venue_id | integer | not null default nextval('venues_venue_id_seq'::regclass)
name | character varying(255) |
street_address | text |
type | character(7) | default 'public'::bpchar
postal_code | character varying(9) |
country_code | character(2) |
Indexes:
"venues_pkey" PRIMARY KEY, btree (venue_id)
Check constraints:
"venues_type_check" CHECK (type = ANY (ARRAY['public'::bpchar, 'private'::bpchar]))
Foreign-key constraints:
"venues_country_code_fkey" FOREIGN KEY (country_code, postal_code) REFERENCES cities(country_code, postal_code) MATCH FULL
The next step is to create an event table that references venue_id with a foreign key.
I'm trying this:
CREATE TABLE events (
event_id SERIAL PRIMARY KEY,
title text,
starts timestamp,
ends timestamp,
FOREIGN KEY (venue_id) REFERENCES venues (venue_id));
And I get this error:
ERROR: column "venue_id" referenced in foreign key not found
What's wrong?

You need to declare the foreign key column itself as well. See the documentation on foreign key constraints: http://www.postgresql.org/docs/current/static/ddl-constraints.html#DDL-CONSTRAINTS-FK
Source and credit: mu is too short

I'm going through the second edition of this book, so things might have changed slightly.
To create the table, you have to explicitly declare venue_id as a column in your table, just like the rest of your columns:
CREATE TABLE events (
event_id SERIAL PRIMARY KEY,
title text,
starts timestamp,
ends timestamp,
venue_id integer, -- this is the line you're missing!
FOREIGN KEY (venue_id)
REFERENCES venues (venue_id) MATCH FULL
);
Once you have executed that, the table is created:
7dbs=# \dt
List of relations
Schema | Name | Type | Owner
--------+-----------+-------+----------
public | cities | table | postgres
public | countries | table | postgres
public | events | table | postgres
public | venues | table | postgres
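To check the new foreign key, you can try a hypothetical insert (assuming a venue with venue_id = 1 already exists):
INSERT INTO events (title, starts, ends, venue_id)
VALUES ('Test event', '2018-02-15 17:30:00', '2018-02-15 19:30:00', 1);
-- succeeds if venue 1 exists; otherwise it fails with a foreign-key violation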


How to upgrade PostgreSQL SERIAL columns to IDENTITY (or an alternative)?

NB: Sorry for the long post.
I host a PeerTube instance on OpenBSD 6.7, with PostgreSQL 12.3.
I regularly have trouble with the database because, for a reason I don't understand, the SERIAL id columns cause problems:
error[13/07/2020 à 23:01:04] Cannot create video views for video 1004826 in hour 22.
{
"err": {
"stack": "SequelizeDatabaseError: null value in column \"id\" violates not-null constraint\n at Query.formatError (/var/www/peertube/versions/peertube-v2.3.0-rc.1/node_modules/sequelize/lib/dialects/postgres/query.js:366:16)\n
...
"message": "null value in column \"id\" violates not-null constraint",
Here we can see that the statement does pass DEFAULT for the id column, which should draw the next value from the sequence:
"sql": "INSERT INTO \"video\" (\"id\",\"uuid\",\"name\",\"category\",\"licence\",\"language\",\"privacy\",\"nsfw\",\"description\",\"support\",\"duration\",\"views\",\"likes\",\"dislikes\",\"remote\",\"url\",\"commentsEnabled\",\"downloadEnabled\",\"waitTranscoding\",\"state\",\"publishedAt\",\"originallyPublishedAt\",\"channelId\",\"createdAt\",\"updatedAt\") VALUES (DEFAULT,$1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12,$13,$14,$15,$16,$17,$18,$19,$20,$21,$22,$23,$24) RETURNING *;",
After a bit of research, I found out about IDENTITY columns and proceeded to change a few of them, just to see if it improves the situation.
So I changed the id column to IDENTITY. But I forgot to drop the SEQUENCE (the one that was there in the first place to make things work). Let's try:
peertube_prod2=> DROP SEQUENCE "videoView_id_seq";
ERROR: cannot drop sequence "videoView_id_seq" because column id of table "videoView" requires it
HINT: You can drop column id of table "videoView" instead.
But I cannot drop the id column. So instead I copied the id column's contents into another column, dropped the original id column, and replaced it with an IDENTITY column (basically a swap through a temporary column, then renaming back to the original name). But I still cannot drop that SEQUENCE.
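For reference, here is a minimal sketch (not from the original post) of the usual statement order for turning a plain serial column into an identity column in one pass, using the videoView table and sequence named above and assuming the column has not been converted yet:
BEGIN;
ALTER TABLE "videoView" ALTER COLUMN id DROP DEFAULT;   -- detach the old nextval() default
ALTER SEQUENCE "videoView_id_seq" OWNED BY NONE;        -- release the ownership that blocks DROP SEQUENCE
DROP SEQUENCE "videoView_id_seq";
-- the column must already be NOT NULL (it is, per \d below)
ALTER TABLE "videoView" ALTER COLUMN id ADD GENERATED BY DEFAULT AS IDENTITY;
-- continue numbering after the existing rows
SELECT setval(pg_get_serial_sequence('"videoView"', 'id'), COALESCE(max(id), 1)) FROM "videoView";
COMMIT;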
The same operation is also not possible on some key tables (ones linked to half the rest of the database).
So the problem has only moved: I still cannot update rows, because the SEQUENCE is still there, even though id is now an IDENTITY column:
warn[02/08/2020 à 11:02:26] Cannot process activity Update.
{
"err": {
"stack": "SequelizeDatabaseError: more than one owned sequence found\n at Query.formatError (/var/www/peertube/versions/peertube-v2.3.0/node_modules/sequelize/lib/dialects/postgres/query.js:366:16)\n
...
"message": "more than one owned sequence found",
Structure of a table after updating to IDENTITY:
peertube_prod2=> \d "videoView"
Table "public.videoView"
Column | Type | Collation | Nullable | Default
-----------+--------------------------+-----------+----------+----------------------------------
startDate | timestamp with time zone | | not null |
endDate | timestamp with time zone | | not null |
views | integer | | not null |
videoId | integer | | not null |
createdAt | timestamp with time zone | | not null |
col | integer | | |
id | integer | | not null | generated by default as identity
Indexes:
"video_view_start_date" btree ("startDate")
"video_view_video_id" btree ("videoId")
Foreign-key constraints:
"videoView_videoId_fkey" FOREIGN KEY ("videoId") REFERENCES video(id) ON UPDATE CASCADE ON DELETE CASCADE
Structure of a table still in its original state:
peertube_prod2=> \d "videoComment"
Table "public.videoComment"
Column | Type | Collation | Nullable | Default
--------------------+--------------------------+-----------+----------+--------------------------------------------
id | integer | | not null | nextval('"videoComment_id_seq"'::regclass)
url | character varying(2000) | | not null |
text | text | | not null |
originCommentId | integer | | |
inReplyToCommentId | integer | | |
videoId | integer | | not null |
accountId | integer | | |
createdAt | timestamp with time zone | | not null |
updatedAt | timestamp with time zone | | not null |
deletedAt | timestamp with time zone | | |
Indexes:
"videoComment_pkey" PRIMARY KEY, btree (id)
"video_comment_url" UNIQUE, btree (url)
"video_comment_account_id" btree ("accountId")
"video_comment_created_at" btree ("createdAt" DESC)
"video_comment_video_id" btree ("videoId")
"video_comment_video_id_origin_comment_id" btree ("videoId", "originCommentId")
Foreign-key constraints:
"videoComment_accountId_fkey" FOREIGN KEY ("accountId") REFERENCES account(id) ON UPDATE CASCADE ON DELETE CASCADE
"videoComment_inReplyToCommentId_fkey" FOREIGN KEY ("inReplyToCommentId") REFERENCES "videoComment"(id) ON UPDATE CASCADE ON DELETE CASCADE
"videoComment_originCommentId_fkey" FOREIGN KEY ("originCommentId") REFERENCES "videoComment"(id) ON UPDATE CASCADE ON DELETE CASCADE
"videoComment_videoId_fkey" FOREIGN KEY ("videoId") REFERENCES video(id) ON UPDATE CASCADE ON DELETE CASCADE
Referenced by:
TABLE ""userNotification"" CONSTRAINT "userNotification_commentId_fkey" FOREIGN KEY ("commentId") REFERENCES "videoComment"(id) ON UPDATE CASCADE ON DELETE CASCADE
TABLE ""videoComment"" CONSTRAINT "videoComment_inReplyToCommentId_fkey" FOREIGN KEY ("inReplyToCommentId") REFERENCES "videoComment"(id) ON UPDATE CASCADE ON DELETE CASCADE
TABLE ""videoComment"" CONSTRAINT "videoComment_originCommentId_fkey" FOREIGN KEY ("originCommentId") REFERENCES "videoComment"(id) ON UPDATE CASCADE ON DELETE CASCADE
I thought I could build a new, empty database with IDENTITY columns everywhere and then import the data from the current database. Can one do that?
Or can I upgrade the current database directly?
Or... what did I miss?

Can updating a column with a foreign key constraint lock the referenced table?

I have one table, which is heavily updated in my system by process A. This is the simplified table:
db=# \d employee;
Table "public.employee"
Column | Type | Collation | Nullable | Default
-----------------+-----------------------------+-----------+----------+---------------------------------------------
id | integer | | not null | nextval('employee_id_seq'::regclass)
name | character varying | | |
Indexes:
"employee_pkey" PRIMARY KEY, btree (id)
And I have a table which is referencing that table:
db=# \d employee_property;
Table "public.employee_property"
Column | Type | Collation | Nullable | Default
-----------------+-----------------------------+-----------+----------+---------------------------------------------
id | integer | | not null | nextval('employee_property_id_seq'::regclass)
type | character varying | | |
value | character varying | | |
employee_id | integer | | not null |
Indexes:
"employee_property_pkey" PRIMARY KEY, btree (id)
"employee_property_employee_id_type_value_key" UNIQUE CONSTRAINT, btree (employee_id, type, value)
"ix_employee_property_employee_id" btree (employee_id)
Foreign-key constraints:
"employee_property_employee_id_fkey" FOREIGN KEY (employee_id) REFERENCES employee(employee_id) ON DELETE CASCADE DEFERRABLE
I am trying to understand: if process B updates the employee_property table heavily, might that cause locks or other side effects that affect process A, which updates the employee table?
If you insert a row in employee_property or update the employee_id column of an existing row, a FOR KEY SHARE lock is placed on the row the new employee_id refers to.
This lock will block any concurrent attempt to delete the referenced employee row or update any PRIMARY KEY or UNIQUE columns. Updates to the locked employee row that do not modify a key column will work, because they only require a FOR NO KEY UPDATE lock on the row, which is compatible with FOR KEY SHARE.
The reason for this is that PostgreSQL must ensure that the referenced row cannot vanish while the transaction that modifies employee_property is still in progress. Simply checking that the referenced row exists in employee at that moment would not be enough, because the effects of a transaction that is still in progress are not visible outside the transaction itself.
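A hypothetical two-session sketch of that behavior, assuming an employee row with id = 1 exists:
-- session B (modifies employee_property)
BEGIN;
INSERT INTO employee_property (type, value, employee_id)
VALUES ('badge', '42', 1);                        -- takes FOR KEY SHARE on the employee row with id = 1

-- session A (modifies employee), running concurrently
UPDATE employee SET name = 'Alice' WHERE id = 1;  -- proceeds: FOR NO KEY UPDATE is compatible with FOR KEY SHARE
UPDATE employee SET id = 100 WHERE id = 1;        -- blocks: changing the key needs FOR UPDATE, which conflicts

-- session B
COMMIT;                                           -- releases the lock; session A's second UPDATE can now proceed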

postgresql: search returns no result on id

DB: postgresql 9.5
It returns a result when searching by the name field, but no result when searching by id.
somedb=# select id from users where name='abc';
id
------
123
(1 row)
somedb=# select id from users where id=123;
id
----
(0 rows)
bigquant_jupyterhub=# \d users;
Table "public.users"
Column | Type | Modifiers
---------------+-----------------------------+----------------------------------------------------
id | integer | not null default nextval('users_id_seq'::regclass)
name | character varying(1023) |
_server_id | integer |
admin | boolean |
last_activity | timestamp without time zone |
cookie_id | character varying(1023) |
state | text |
auth_state | text |
Indexes:
"users_pkey" PRIMARY KEY, btree (id)
Foreign-key constraints:
"users__server_id_fkey" FOREIGN KEY (_server_id) REFERENCES servers(id)
Referenced by:
TABLE "api_tokens" CONSTRAINT "api_tokens_user_id_fkey" FOREIGN KEY (user_id) REFERENCES users(id)
TABLE "user_group_map" CONSTRAINT "user_group_map_user_id_fkey" FOREIGN KEY (user_id) REFERENCES users(id)
Thanks for all your help.

In Postgres, how can I delete a row from table B when a row from table A is deleted?

I’m using Postgres 9.5.0. I have the following table
myproject=> \d my_objects;
Table "public.my_objects"
Column | Type | Modifiers
---------------------+-----------------------------+-------------------------------------
name | character varying |
day | date |
distance | double precision |
user_id | integer |
created_at | timestamp without time zone | not null
updated_at | timestamp without time zone | not null
distance_unit_id | integer |
import_completed | boolean |
id | character varying | not null default uuid_generate_v4()
linked_my_object_time_id | character varying |
web_crawler_id | integer |
address_id | character varying |
Indexes:
"my_objects_pkey" PRIMARY KEY, btree (id)
"index_my_objects_on_user_id_and_day_and_name" UNIQUE, btree (user_id, day, name)
"index_my_objects_on_user_id" btree (user_id)
"index_my_objects_on_web_crawler_id" btree (web_crawler_id)
Foreign-key constraints:
"fk_rails_5287d445c0" FOREIGN KEY (address_id) REFERENCES addresses(id) ON DELETE CASCADE
"fk_rails_970b2325bf" FOREIGN KEY (distance_unit_id) REFERENCES distance_units(id)
"fk_rails_dda3297b57" FOREIGN KEY (linked_my_object_time_id) REFERENCES my_object_times(id) ON DELETE CASCADE
"fk_rails_ebd32625bc" FOREIGN KEY (web_crawler_id) REFERENCES web_crawlers(id)
"fk_rails_fa07601dff" FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE
Right now, each my_object has an address_id field. What I would like is that when I delete a my_object, the corresponding addresses entry is deleted as well. Without moving the address_id column out of the my_objects table, is it possible to set something up such that when I delete a row from the my_objects table, any corresponding address data is deleted as well? Obviously, the foreign key I have set up will not get the job done, since its ON DELETE CASCADE works in the opposite direction (deleting an address deletes the my_object).
You can do this with a trigger:
CREATE OR REPLACE FUNCTION remove_address() RETURNS trigger
LANGUAGE plpgsql AS
$$BEGIN
DELETE FROM public.addresses WHERE id = OLD.address_id;
RETURN OLD;
END;$$;
CREATE TRIGGER remove_address
AFTER DELETE ON public.my_objects FOR EACH ROW
EXECUTE PROCEDURE remove_address();
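With the trigger in place, deleting a my_objects row also removes the addresses row it pointed to (the id value here is a made-up placeholder):
DELETE FROM my_objects WHERE id = 'some-my-object-uuid';
-- the AFTER DELETE trigger fires for the deleted row and removes the addresses row matching OLD.address_id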

Using INSERT INTO with association in Postgres

Let's say I have two tables. Is there a way to insert rows into both simultaneously? When I had one table I could simply do:
INSERT INTO users (first_name, last_name, email) VALUES ('Tom', 'Prats', 'tom@tomprats.com'), ('Tim', 'Prats', 'tim@tomprats.com');
If I now have one table that stores the email and an associated table that stores the first and last name, how do I insert into both at once, keeping in mind that I'm importing a lot of data?
Table "users"
Column | Type | Modifiers
---------------------+---------------+-----------
id | character(36) | not null
email | text |
attributes_id | character(36) |
Indexes:
"pk_users" PRIMARY KEY, btree (id)
Foreign-key constraints:
"fk_users_attributes" FOREIGN KEY (attributes_id) REFERENCES user_attributes(id)
Table "user_attributes"
Column | Type | Modifiers
-----------------+---------------+-----------
id | character(36) | not null
first_name | text |
last_name | text |
Indexes:
"pk_user_attributes" PRIMARY KEY, btree (id)
Referenced by:
TABLE "users" CONSTRAINT "fk_users_attributes" FOREIGN KEY (attributes_id) REFERENCES user_attributes(id)
You can't do this with a single plain INSERT statement.
If you must do it in one statement, the simplest approach is to write a short function that performs the two inserts; each call then runs in a single transaction (if that was the main concern).
You can also use data-modifying statements in WITH, as sketched below.
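A minimal sketch of the WITH approach, assuming the ids are 36-character UUID strings supplied by the caller (replace the literals with real UUIDs, or with a UUID-generating function such as uuid_generate_v4() if that extension is installed):
WITH new_attributes AS (
    INSERT INTO user_attributes (id, first_name, last_name)
    VALUES ('11111111-1111-1111-1111-111111111111', 'Tom', 'Prats')
    RETURNING id
)
INSERT INTO users (id, email, attributes_id)
SELECT '22222222-2222-2222-2222-222222222222', 'tom@tomprats.com', id
FROM new_attributes;
Both inserts run in the same statement, and therefore in the same transaction, and the pattern also works when the CTE inserts many rows at once for a bulk import.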