We are trying to move our application to a new PostgreSQL cluster.
While doing that, we noticed the application threw an exception like this:
[2017-06-02 14:43:34,530] ........ (psycopg2.IntegrityError) duplicate key value violates unique constraint "items_url"
DETAIL: Key (url)=(http://www.domainname.ru/ap_module/content/article/400-professional/140-professional/11880) already exists.
[SQL: 'UPDATE items SET status=%(status)s WHERE items.id IN ....
It's very strange because:
the application writes to columns of items, not to items_url; items_url is actually a unique index on items;
the UPDATE only changes the status field, which has no unique flag and is not a primary key.
table items:
id | integer | not null default nextval(('public.items_id_seq'::text)::regclass)
ctime | timestamp without time zone | not null default now()
pubdate | timestamp without time zone | not null default now()
resource_id | integer | not null default 0
url | text |
title | text |
description | text |
body | text |
status | smallint | not null default 0
image | text |
orig_id | integer | not null default 0
mtime | timestamp without time zone | not null default now()
checksum | text |
video_url | text |
audio_url | text |
content_type | smallint | default 0
author | text |
video | text |
fulltext_status | smallint | default 0
summary | text |
image_id | integer |
video_id | integer |
priority | smallint |
Indexes:
"items_pkey" PRIMARY KEY, btree (id)
"items_url" UNIQUE, btree (url)
"items_resource_id" btree (resource_id)
"ndx__items__ctime" btree (ctime)
"ndx__items__image" btree (image_id)
"ndx__items__mtime" btree (mtime)
"ndx__items__pubdate" btree (pubdate)
"ndx__items__video" btree (video_id)
Foreign-key constraints:
"items_fkey1" FOREIGN KEY (image_id) REFERENCES images(id) ON UPDATE CASCADE ON DELETE SET NULL
"items_fkey2" FOREIGN KEY (video_id) REFERENCES videos(id) ON UPDATE CASCADE ON DELETE SET NUL
Well, the question is: why does this happen, and how can I troubleshoot it?
Thank you.
UPD1:
I tried to reproduce it on 9.4: reproduced.
I played with client_encoding; the encoding is the same everywhere.
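For the record, the first check I plan to run on the new cluster is a duplicate scan of the table itself (a minimal sketch, using the items/url names from the schema above):

SELECT url, count(*) AS dupes, array_agg(id) AS ids
FROM items
GROUP BY url
HAVING count(*) > 1;

If that returns rows even though items_url is UNIQUE, the index no longer matches the table contents (which can happen after moving data between clusters, for example across collation changes), and REINDEX INDEX items_url would be the next thing to try.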
NB: Sorry for the long post.
I host a PeerTube instance on OpenBSD 6.7, with PostgreSQL 12.3.
I regularly have trouble with the database because, for reasons I don't understand, the SERIAL id columns cause problems:
error[13/07/2020 à 23:01:04] Cannot create video views for video 1004826 in hour 22.
{
"err": {
"stack": "SequelizeDatabaseError: null value in column \"id\" violates not-null constraint\n at Query.formatError (/var/www/peertube/versions/peertube-v2.3.0-rc.1/node_modules/sequelize/lib/dialects/postgres/query.js:366:16)\n
...
"message": "null value in column \"id\" violates not-null constraint",
Here we can see that the query is indeed supposed to pass DEFAULT for the id column:
"sql": "INSERT INTO \"video\" (\"id\",\"uuid\",\"name\",\"category\",\"licence\",\"language\",\"privacy\",\"nsfw\",\"description\",\"support\",\"duration\",\"views\",\"likes\",\"dislikes\",\"remote\",\"url\",\"commentsEnabled\",\"downloadEnabled\",\"waitTranscoding\",\"state\",\"publishedAt\",\"originallyPublishedAt\",\"channelId\",\"createdAt\",\"updatedAt\") VALUES (DEFAULT,$1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12,$13,$14,$15,$16,$17,$18,$19,$20,$21,$22,$23,$24) RETURNING *;",
After a bit of research I found out about IDENTITY columns and proceeded to change a few of them, just to see if it would improve the situation.
So I changed the id column to IDENTITY, but I forgot to drop the SEQUENCE first (the one that was there in the first place to make things work...). Let's try:
peertube_prod2=> DROP SEQUENCE "videoView_id_seq";
ERROR: cannot drop sequence "videoView_id_seq" because column id of table "videoView" requires it
HINT: You can drop column id of table "videoView" instead.
But I cannot drop the id column. So instead I copied the id column's contents into another column, dropped the original id column, and replaced it with an identity column (basically a swap through a temporary column and back to the original name). But I still cannot drop the damn SEQUENCE.
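One thing I have not tried yet is detaching the sequence from the column before dropping it; from my reading of the docs this should work, but I have not tested it on my instance:

ALTER SEQUENCE "videoView_id_seq" OWNED BY NONE;
DROP SEQUENCE "videoView_id_seq";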
The same operation cannot happen either on some key tables (ones with links to half the rest of the DB).
So the problem has only moved: I still cannot update the thing, because the SEQUENCE is still there even though id is now an IDENTITY column:
warn[02/08/2020 à 11:02:26] Cannot process activity Update.
{
"err": {
"stack": "SequelizeDatabaseError: more than one owned sequence found\n at Query.formatError (/var/www/peertube/versions/peertube-v2.3.0/node_modules/sequelize/lib/dialects/postgres/query.js:366:16)\n
...
"message": "more than one owned sequence found",
Structure of a table after updating to IDENTITY:
peertube_prod2=> \d "videoView"
Table "public.videoView"
Column | Type | Collation | Nullable | Default
-----------+--------------------------+-----------+----------+----------------------------------
startDate | timestamp with time zone | | not null |
endDate | timestamp with time zone | | not null |
views | integer | | not null |
videoId | integer | | not null |
createdAt | timestamp with time zone | | not null |
col | integer | | |
id | integer | | not null | generated by default as identity
Indexes:
"video_view_start_date" btree ("startDate")
"video_view_video_id" btree ("videoId")
Foreign-key constraints:
"videoView_videoId_fkey" FOREIGN KEY ("videoId") REFERENCES video(id) ON UPDATE CASCADE ON DELETE CASCADE
Structure of a table from the start:
peertube_prod2=> \d "videoComment"
Table "public.videoComment"
Column | Type | Collation | Nullable | Default
--------------------+--------------------------+-----------+----------+--------------------------------------------
id | integer | | not null | nextval('"videoComment_id_seq"'::regclass)
url | character varying(2000) | | not null |
text | text | | not null |
originCommentId | integer | | |
inReplyToCommentId | integer | | |
videoId | integer | | not null |
accountId | integer | | |
createdAt | timestamp with time zone | | not null |
updatedAt | timestamp with time zone | | not null |
deletedAt | timestamp with time zone | | |
Indexes:
"videoComment_pkey" PRIMARY KEY, btree (id)
"video_comment_url" UNIQUE, btree (url)
"video_comment_account_id" btree ("accountId")
"video_comment_created_at" btree ("createdAt" DESC)
"video_comment_video_id" btree ("videoId")
"video_comment_video_id_origin_comment_id" btree ("videoId", "originCommentId")
Foreign-key constraints:
"videoComment_accountId_fkey" FOREIGN KEY ("accountId") REFERENCES account(id) ON UPDATE CASCADE ON DELETE CASCADE
"videoComment_inReplyToCommentId_fkey" FOREIGN KEY ("inReplyToCommentId") REFERENCES "videoComment"(id) ON UPDATE CASCADE ON DELETE CASCADE
"videoComment_originCommentId_fkey" FOREIGN KEY ("originCommentId") REFERENCES "videoComment"(id) ON UPDATE CASCADE ON DELETE CASCADE
"videoComment_videoId_fkey" FOREIGN KEY ("videoId") REFERENCES video(id) ON UPDATE CASCADE ON DELETE CASCADE
Referenced by:
TABLE ""userNotification"" CONSTRAINT "userNotification_commentId_fkey" FOREIGN KEY ("commentId") REFERENCES "videoComment"(id) ON UPDATE CASCADE ON DELETE CASCADE
TABLE ""videoComment"" CONSTRAINT "videoComment_inReplyToCommentId_fkey" FOREIGN KEY ("inReplyToCommentId") REFERENCES "videoComment"(id) ON UPDATE CASCADE ON DELETE CASCADE
TABLE ""videoComment"" CONSTRAINT "videoComment_originCommentId_fkey" FOREIGN KEY ("originCommentId") REFERENCES "videoComment"(id) ON UPDATE CASCADE ON DELETE CASCADE
I thought I could build a new empty database with IDENTITY columns everywhere and then import the data from the current database. Can one do that?
Or can I upgrade the current database directly?
Or ... what did I miss?
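(If the rebuild-and-import route is viable, my understanding is that GENERATED BY DEFAULT AS IDENTITY columns accept explicit ids on import, and each identity sequence would then need to be resynced afterwards, something like:

SELECT setval(pg_get_serial_sequence('"videoComment"', 'id'), (SELECT max(id) FROM "videoComment"));

but I am not sure that is the cleanest way.)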
I have a problem.
standard_price is a computed field and is not stored in the product_template or product_product table. How can I get the standard price field value in an Odoo XLSX report?
The error is:
Record does not exist or has been deleted.: None
I would appreciate any solution or idea.
Check the cost field of the product_price_history table; I think that is what you are looking for. This table is related to the product_product table through the product_id field:
base=# \dS product_price_history
Table "public.product_price_history"
Column | Type | Modifiers
-------------+-----------------------------+--------------------------------------------------------------------
id | integer | not null default nextval('product_price_history_id_seq'::regclass)
create_uid | integer |
product_id | integer | not null
company_id | integer | not null
datetime | timestamp without time zone |
cost | numeric |
write_date | timestamp without time zone |
create_date | timestamp without time zone |
write_uid | integer |
Indexes:
"product_price_history_pkey" PRIMARY KEY, btree (id)
Foreign-key constraints:
"product_price_history_company_id_fkey" FOREIGN KEY (company_id) REFERENCES res_company(id) ON DELETE SET NULL
"product_price_history_create_uid_fkey" FOREIGN KEY (create_uid) REFERENCES res_users(id) ON DELETE SET NULL
"product_price_history_product_id_fkey" FOREIGN KEY (product_id) REFERENCES product_product(id) ON DELETE CASCADE
"product_price_history_write_uid_fkey" FOREIGN KEY (write_uid) REFERENCES res_users(id) ON DELETE SET NULL
I have 1.3 billion rows in a PostgreSQL table sku_comparison that looks like this:
id1 (INTEGER) | id2 (INTEGER) | (10 SMALLINT columns) | length1 (SMALLINT) | length2 (SMALLINT) | length_difference (SMALLINT)
The id1 and id2 columns reference a table called sku, which contains about 300,000 rows; each row there has an associated varchar(25) value in a column called code.
There is a btree index built on id1 and id2, and a compound index of id1 and id2 in sku_comparison. There is a btree index on the id column of sku, as well.
My goal is to update the length1 and length2 columns with the lengths of the corresponding code column from the sku table. However, I ran the following code for over 20 hours, and it did not complete the update:
UPDATE sku_comparison SET length1=length(sku.code) FROM sku
WHERE sku_comparison.id1=sku.id;
All of the data is stored on a single hard disk on a local computer, and the processor is fairly modern. Constructing this table, which required much more complicated string comparisons in Python, only took about 30 hours or so, so I am not sure why something like this would take as long.
Edit: here are the formatted table definitions:
Table "public.sku"
Column | Type | Modifiers
------------+-----------------------+--------------------------------------------------
id | integer | not null default nextval('sku_id_seq'::regclass)
sku | character varying(25) |
pattern | character varying(25) |
pattern_an | character varying(25) |
firsttwo | character(2) | default ' '::bpchar
reference | character varying(25) |
Indexes:
"sku_pkey" PRIMARY KEY, btree (id)
"sku_sku_idx" UNIQUE, btree (sku)
"sku_firstwo_idx" btree (firsttwo)
Referenced by:
TABLE "sku_comparison" CONSTRAINT "sku_comparison_id1_fkey" FOREIGN KEY (id1) REFERENCES sku(id)
TABLE "sku_comparison" CONSTRAINT "sku_comparison_id2_fkey" FOREIGN KEY (id2) REFERENCES sku(id)
Table "public.sku_comparison"
Column | Type | Modifiers
---------------------------+----------+-------------------------
id1 | integer | not null
id2 | integer | not null
consec_charmatch | smallint |
consec_groupmatch | smallint |
consec_fieldtypematch | smallint |
consec_groupmatch_an | smallint |
consec_fieldtypematch_an | smallint |
general_charmatch | smallint |
general_groupmatch | smallint |
general_fieldtypematch | smallint |
general_groupmatch_an | smallint |
general_fieldtypematch_an | smallint |
length1 | smallint | default 0
length2 | smallint | default 0
length_difference | smallint | default '-999'::integer
Indexes:
"sku_comparison_pkey" PRIMARY KEY, btree (id1, id2)
"ssd_id1_idx" btree (id1)
"ssd_id2_idx" btree (id2)
Foreign-key constraints:
"sku_comparison_id1_fkey" FOREIGN KEY (id1) REFERENCES sku(id)
"sku_comparison_id2_fkey" FOREIGN KEY (id2) REFERENCES sku(id)
Would you consider using an anonymous code block?
Sketched as PL/pgSQL (note: the question's prose calls the varchar column code, while the \d output above shows it as sku; adjust the name to match your schema):

DO $$
DECLARE
    v_rec RECORD;
BEGIN
    FOR v_rec IN SELECT id, length(code) AS code_length FROM sku
    LOOP
        UPDATE sku_comparison
        SET length1 = v_rec.code_length
        WHERE id1 = v_rec.id;
    END LOOP;
END;
$$;
This would break the whole thing into smaller pieces of work, and you would not be evaluating the length of sku.code once per sku_comparison row.
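One caveat: a plain DO block still runs in a single transaction, so if the goal is literally smaller transactions, drive the same loop from the client instead, or use a procedure with COMMIT inside the loop on PostgreSQL 11 or later.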
I'm using Play 2.3 and trying to generate a relational database via evolutions for PostgreSQL 9.4.
I have the following statements in my conf/evolutions/default/1.sql script:
ALTER TABLE ONLY round
ADD CONSTRAINT round_event_id_fkey FOREIGN KEY (event_id) REFERENCES event(id);
ALTER TABLE ONLY round
ADD CONSTRAINT round_event_id UNIQUE (event_id);
Following is my event table description:
Table "public.event"
Column | Type | Modifiers
-------------------------------+-----------------------------+----------------------------------------------------
id | integer | not null default nextval('event_id_seq'::regclass)
related_event_hash | character varying(45) |
start_time | timestamp without time zone |
end_time | timestamp without time zone |
name | character varying(45) |
status | character varying(45) | not null
owner_id | bigint | not null
venue_id | bigint |
participation_hash | character varying(45) |
number_of_participants | integer |
number_of_backup_participants | integer |
created | timestamp without time zone | not null
updated | timestamp without time zone | not null
Indexes:
"event_pkey" PRIMARY KEY, btree (id)
"index_event_name" btree (name)
"index_event_status" btree (status)
"index_start_time" btree (start_time)
Foreign-key constraints:
"event_owner_id_fkey" FOREIGN KEY (owner_id) REFERENCES person(id)
"event_venue_id_fkey" FOREIGN KEY (venue_id) REFERENCES venue(id)
Referenced by:
TABLE "anonymous_person" CONSTRAINT "anonymous_person_event_id_fkey" FOREIGN KEY (event_id) REFERENCES event(id)
TABLE "mix_game" CONSTRAINT "mix_game_event_id_fkey" FOREIGN KEY (event_id) REFERENCES event(id)
TABLE "participant" CONSTRAINT "participant_event_id_fkey" FOREIGN KEY (event_id) REFERENCES event(id)
When I start the application in a browser I get this error:
Database 'default' is in an inconsistent state!
While trying to run this SQL script, we got the following error:
ERROR: there is no unique constraint matching given keys for referenced table "round" [ERROR:0, SQLSTATE:42830]
What could be wrong? How can I fix this error and add the foreign key constraints?
Note that the round table itself is generated as follows, without the foreign key constraint.
Table "public.round"
Column | Type | Modifiers
------------------+-----------------------+----------------------------------------------------
id | integer | not null default nextval('round_id_seq'::regclass)
round_no | integer | not null
event_id | bigint | not null
state | character varying(20) | not null
team_composition | character(12) | not null
result | character varying(20) |
description | character varying(45) |
play_time | integer | not null
shift_time | integer |
change_time | integer |
Indexes:
"round_pkey" PRIMARY KEY, btree (id)
"round_event_id" UNIQUE CONSTRAINT, btree (event_id)
Take a look at the documentation.
As you can see, you have to delimit both the Ups and Downs sections using
comments in your SQL script.
Also, do not edit the 1.sql file, because it is updated by the evolutions mechanism. Start your own evolutions at 2.sql.
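For reference, a 2.sql built from the statements in the question would look roughly like this (the Downs section is my assumption about how you would want to roll back):

# --- !Ups
ALTER TABLE round ADD CONSTRAINT round_event_id UNIQUE (event_id);
ALTER TABLE round ADD CONSTRAINT round_event_id_fkey FOREIGN KEY (event_id) REFERENCES event(id);

# --- !Downs
ALTER TABLE round DROP CONSTRAINT round_event_id_fkey;
ALTER TABLE round DROP CONSTRAINT round_event_id;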
I'm trying to get this query to run faster. It seems like sorting by the quality field is what really slows it down (the table has about 5 million rows); maybe there is an index I can use for that?
Query:
SELECT "connectr_twitterpassage"."id", "connectr_twitterpassage"."third_party_id", "connectr_twitterpassage"."third_party_created", "connectr_twitterpassage"."source", "connectr_twitterpassage"."text", "connectr_twitterpassage"."author", "connectr_twitterpassage"."raw_data", "connectr_twitterpassage"."retweet_count", "connectr_twitterpassage"."favorited_count", "connectr_twitterpassage"."lang", "connectr_twitterpassage"."location", "connectr_twitterpassage"."author_followers_count", "connectr_twitterpassage"."is_retweet", "connectr_twitterpassage"."url", "connectr_twitterpassage"."author_fk_id", "connectr_twitterpassage"."quality", "connectr_twitterpassage"."is_top_tweet", "connectr_twitterpassage"."created", "connectr_twitterpassage"."modified"
FROM "connectr_twitterpassage"
INNER JOIN "connectr_twitterpassage_words" ON ("connectr_twitterpassage"."id" = "connectr_twitterpassage_words"."twitterpassage_id")
WHERE "connectr_twitterpassage_words"."word_id" = 18974807
ORDER BY "connectr_twitterpassage"."quality"
DESC LIMIT 100
Here is the EXPLAIN ANALYZE: http://explain.depesz.com/s/7zb
And the table definitions:
\d connectr_twitterpassage
Column | Type | Modifiers
------------------------+--------------------------+----------------------------------------------------------------------
id | integer | not null default nextval('connectr_twitterpassage_id_seq'::regclass)
third_party_id | character varying(10000) | not null
source | character varying(10000) | not null
text | character varying(10000) | not null
author | character varying(10000) | not null
raw_data | character varying(10000) | not null
created | timestamp with time zone | not null
modified | timestamp with time zone | not null
third_party_created | timestamp with time zone |
retweet_count | integer | not null
favorited_count | integer | not null
lang | character varying(10000) | not null
location | character varying(10000) | not null
author_followers_count | integer | not null
is_retweet | boolean | not null
url | character varying(10000) | not null
author_fk_id | integer |
quality | bigint |
is_top_tweet | boolean | not null
Indexes:
"connectr_passage_pkey" PRIMARY KEY, btree (id)
"connectr_twitterpassage_third_party_id_uniq" UNIQUE CONSTRAINT, btree (third_party_id)
"connectr_passage_author_followers_count" btree (author_followers_count)
"connectr_passage_favorited_count" btree (favorited_count)
"connectr_passage_retweet_count" btree (retweet_count)
"connectr_passage_source" btree (source)
"connectr_passage_source_like" btree (source varchar_pattern_ops)
"connectr_passage_third_party_id" btree (third_party_id)
"connectr_passage_third_party_id_like" btree (third_party_id varchar_pattern_ops)
"connectr_twitterpassage_author_fk_id" btree (author_fk_id)
"connectr_twitterpassage_created" btree (created)
"connectr_twitterpassage_is_top_tweet" btree (is_top_tweet)
"connectr_twitterpassage_quality" btree (quality)
"connectr_twitterpassage_third_party_created" btree (third_party_created)
"id_to_quality_sorted" btree (id, quality DESC NULLS LAST)
Foreign-key constraints:
"author_fk_id_refs_id_074720a5" FOREIGN KEY (author_fk_id) REFERENCES connectr_twitteruser(id) DEFERRABLE INITIALLY DEFERRED
Referenced by:
TABLE "connectr_passageviewevent" CONSTRAINT "passage_id_refs_id_892b36a6" FOREIGN KEY (passage_id) REFERENCES connectr_twitterpassage(id) DEFERRABLE INITIALLY DEFERRED
TABLE "connectr_connection" CONSTRAINT "twitter_from_id_refs_id_8adbab24" FOREIGN KEY (twitter_from_id) REFERENCES connectr_twitterpassage(id) DEFERRABLE INITIALLY DEFERRED
TABLE "connectr_connection" CONSTRAINT "twitter_to_id_refs_id_8adbab24" FOREIGN KEY (twitter_to_id) REFERENCES connectr_twitterpassage(id) DEFERRABLE INITIALLY DEFERRED
TABLE "connectr_twitterpassage_words" CONSTRAINT "twitterpassage_id_refs_id_720f772f" FOREIGN KEY (twitterpassage_id) REFERENCES connectr_twitterpassage(id) DEFERRABLE INITIALLY DEFERRED
connectr=# \d connectr_twitterpassage_words
Table "public.connectr_twitterpassage_words"
Column | Type | Modifiers
-------------------+---------+----------------------------------------------------------------------------
id | integer | not null default nextval('connectr_twitterpassage_words_id_seq'::regclass)
twitterpassage_id | integer | not null
word_id | integer | not null
Indexes:
"connectr_twitterpassage_words_pkey" PRIMARY KEY, btree (id)
"connectr_twitterpassage_twitterpassage_id_613c80271f09fba8_uniq" UNIQUE CONSTRAINT, btree (twitterpassage_id, word_id)
"connectr_twitterpassage_words_twitterpassage_id" btree (twitterpassage_id)
"connectr_twitterpassage_words_word_id" btree (word_id)
"word_to_twitterpassage_id" btree (word_id, twitterpassage_id)
Foreign-key constraints:
"twitterpassage_id_refs_id_720f772f" FOREIGN KEY (twitterpassage_id) REFERENCES connectr_twitterpassage(id) DEFERRABLE INITIALLY DEFERRED
"word_id_refs_id_64f49629" FOREIGN KEY (word_id) REFERENCES connectr_word(id) DEFERRABLE INITIALLY DEFERRED
connectr=# \d connectr_word
Table "public.connectr_word"
Column | Type | Modifiers
---------------------+--------------------------+------------------------------------------------------------
id | integer | not null default nextval('connectr_word_id_seq'::regclass)
word | character varying(10000) | not null
created | timestamp with time zone | not null
modified | timestamp with time zone | not null
frequency | double precision |
is_username | boolean | not null
is_hashtag | boolean | not null
cloud_eligible | boolean | not null
passage_count | integer |
avg_quality | double precision |
last_twitter_search | timestamp with time zone |
cloud_approved | boolean | not null
display_word | character varying(10000) | not null
is_trend | boolean | not null
Indexes:
"connectr_word_pkey" PRIMARY KEY, btree (id)
"connectr_word_word_uniq" UNIQUE CONSTRAINT, btree (word)
"connectr_word_avg_quality" btree (avg_quality)
"connectr_word_cloud_eligible" btree (cloud_eligible)
"connectr_word_last_twitter_search" btree (last_twitter_search)
"connectr_word_passage_count" btree (passage_count)
"connectr_word_word" btree (word)
Referenced by:
TABLE "connectr_passageviewevent" CONSTRAINT "source_word_id_refs_id_178d46eb" FOREIGN KEY (source_word_id) REFERENCES connectr_word(id) DEFERRABLE INITIALLY DEFERRED
TABLE "connectr_wordmatchrewardevent" CONSTRAINT "tapped_word_id_refs_id_c2ffb369" FOREIGN KEY (tapped_word_id) REFERENCES connectr_word(id) DEFERRABLE INITIALLY DEFERRED
TABLE "connectr_connection" CONSTRAINT "word_id_refs_id_00cccde2" FOREIGN KEY (word_id) REFERENCES connectr_word(id) DEFERRABLE INITIALLY DEFERRED
TABLE "connectr_twitterpassage_words" CONSTRAINT "word_id_refs_id_64f49629" FOREIGN KEY (word_id) REFERENCES connectr_word(id) DEFERRABLE INITIALLY DEFERRED
Looking at the explain output, the sort is taking very little of the time; it is gathering the data to be sorted that takes the time.
You must be spending a fair amount of time hitting the disk. If you could get your data better cached, the same query should speed up a lot.
Otherwise, your best bet may be to denormalize the data by adding the quality field to the connectr_twitterpassage_words table and having an index on (word_id, quality,...)
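A sketch of that denormalization (the index name is illustrative, and you would still need to keep the copied column in sync, via a trigger or in the application):

ALTER TABLE connectr_twitterpassage_words ADD COLUMN quality bigint;

UPDATE connectr_twitterpassage_words w
SET quality = p.quality
FROM connectr_twitterpassage p
WHERE w.twitterpassage_id = p.id;

CREATE INDEX words_word_id_quality ON connectr_twitterpassage_words (word_id, quality DESC);

With that index, the top-100-by-quality rows for a given word_id can be read straight off the index instead of fetching and sorting every matching row.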