I have a table whose primary key is based on a sequence (START WITH 1 INCREMENT BY 1).
Primary key works great, incrementing as needed.
However, I have another column that is UNIQUE. When I attempt to insert a new row that fails the UNIQUE check, the primary key is still incremented. I expected the primary key not to be incremented, because the INSERT failed.
To test this, I inserted two rows; the primary keys were 1 and 2 respectively. I then attempted to insert the data from row two again, and it failed due to the unique constraint. I then inserted another row with a unique value, and the primary key jumped from 2 to 4, skipping the 3 that would have been used had the unique constraint not failed.
Is this expected behaviour for Postgres?
That is normal behavior.
The documentation explains that:
To avoid blocking concurrent transactions that obtain numbers from the same sequence, a nextval operation is never rolled back; that is, once a value has been fetched it is considered used and will not be returned again. This is true even if the surrounding transaction later aborts, or if the calling query ends up not using the value. For example an INSERT with an ON CONFLICT clause will compute the to-be-inserted tuple, including doing any required nextval calls, before detecting any conflict that would cause it to follow the ON CONFLICT rule instead. Such cases will leave unused “holes” in the sequence of assigned values. Thus, PostgreSQL sequence objects cannot be used to obtain “gapless” sequences.
For details and examples, you can read my blog.
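If you want to see it for yourself, here is a minimal sketch (table and column names are made up) that reproduces the gap:

create table seq_demo (id serial primary key, val text unique);

insert into seq_demo (val) values ('a');  -- id = 1
insert into seq_demo (val) values ('b');  -- id = 2
insert into seq_demo (val) values ('b');  -- fails the unique check, but nextval was already consumed
insert into seq_demo (val) values ('c');  -- id = 4, leaving a hole at 3

select id, val from seq_demo;             -- returns ids 1, 2 and 4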
I have a table 'client', which has 3 columns - id, siebel_id, phone_number.
phone_number has a unique constraint. If I save a new client with an existing number, I get the error ERROR: duplicate key value violates unique constraint "phone_number_unique".
Is it possible to make PostgreSQL or MyBatis show the 'siebel_id' of the record where the phone number is already saved?
I mean, to get a message like:
'ERROR: duplicate key value violates unique constraint "phone_number_unique"
Detail: Key (phone_number)=(+79991234567) already exists on siebel_id...'
No, it's not possible to tweak the internal message that the PostgreSQL database engine returns accompanying an error. Well... unless you recompiled PostgreSQL itself from source, and I would assume this is off the table.
However, you can easily search for the offending row using SQL, as in:
select siebel_id from client where phone_number = '+79991234567';
Is it possible to prevent Postgres from including the DETAIL section of its error messages?
This is to prevent the detail from reaching our less-secure logging setup, while still including enough info to debug errors.
Example exception:
duplicate key value violates unique constraint "tbl_statement_labels_pkey"
DETAIL: Key (statement_id, label_id)=(1, 11) already exists.
I only want the
duplicate key value violates unique constraint "tbl_statement_labels_pkey"
part.
Thanks!
I have a JPA entity with a composite unique key, and I wrote a scheduler that loads data into the table of this entity. The problem is that the load throws an exception when there is a unique key violation. I want to suppress any unique constraint violation exception from my database and continue loading all the other objects. I am using transactions, and I would not want to lock the whole table to verify uniqueness, as other users are using it.
I think you can iterate over all the objects and wrap the method that loads an object into the table in a
try { ... } catch (PersistenceException e) { /* log and continue */ }
With this, if an exception occurs while loading any particular item, the iteration continues and the other objects are still processed.
In Postgres I have created a table named twitter_tweets. In this table I have added a constraint on the tweet_text column using the command:
ALTER TABLE ONLY twitter_tweets
ADD CONSTRAINT twitter_tweets_pkey PRIMARY KEY (tweet_text);
The constraint was applied; I got the message ALTER TABLE.
But while parsing the data it shows a runtime exception:
java.lang.RuntimeException: Failed to execute insert query insert into twitter_tweets (tweet_created_at, tweet_id, tweet_id_str, tweet_text, tweet_source, tweet_truncated, tweet_in_reply_to_status_id, tweet_in_reply_to_status_id_str, tweet_in_reply_to_user_id, tweet_in_reply_to_user_id_str, tweet_in_reply_to_screen_name, tweet_geo, tweet_coordinates, tweet_at_reply, tweet_is_quote_status, tweet_retweet_count, tweet_favorite_count, tweet_favorited, tweet_retweeted, tweet_lang, tweet_possibly_sensitive, tweet_filter_level, tweet_scopes_S) values (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)
    at Demo.JdbcClient.executeInsertQuery(JdbcClient.java:62)
    at Demo.PsqlBolt.execute(PsqlBolt.java:91)
    at backtype.storm.daemon.executor$fn__5694$tuple_action_fn__5696.invoke(executor.clj:690)
    at backtype.storm.daemon.executor$mk_task_receiver$fn__5615.invoke(executor.clj:436)
    at backtype.storm.disruptor$clojure_handler$reify__5189.onEvent(disruptor.clj:58)
    at backtype.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:132)
    at backtype.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:106)
    at backtype.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:80)
    at backtype.storm.daemon.executor$fn__5694$fn__5707$fn__5758.invoke(executor.clj:819)
    at backtype.storm.util$async_loop$fn__545.invoke(util.clj:479)
    at clojure.lang.AFn.run(AFn.java:22)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.postgresql.util.PSQLException: ERROR: duplicate key value violates unique constraint "twitter_tweets_pkey"
  Detail: Key (tweet_text)=() already exists.
    at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2198)
    at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1927)
    at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:405)
    at org.postgresql.jdbc2.AbstractJdbc2Statement.executeBatch(AbstractJdbc2Statement.java:2892)
    at com.zaxxer.hikari.proxy.StatementProxy.executeBatch(StatementProxy.java:116)
    at com.zaxxer.hikari.proxy.PreparedStatementJavassistProxy.executeBatch(PreparedStatementJavassistProxy.java)
    at Demo.JdbcClient.executeInsertQuery(JdbcClient.java:50)
    ... 11 more
The image below [image1] shows the table to which I applied the constraint.
This is my output after adding the constraint:
Your problem is described here:
ERROR: duplicate key value violates unique constraint "twitter_tweets_pkey"
Detail: Key (tweet_text)=() already exists.
You set tweet_text to be your PRIMARY KEY (PK), and as a PK it can't hold duplicated data.
At some point you already inserted the value that you are trying to insert now into this column (tweet_text).
Now, why not create an auto-incremented integer column, something like id, and make that the primary key? The way it is now, you are telling me that no one can post the same text that was posted by another user.
For example, if user A posts a tweet with content (tweet_text) "Hello World", no other user can post the same content.
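A minimal sketch of that change (assuming the table already exists as you created it above, and that dropping the current key is acceptable):

ALTER TABLE twitter_tweets DROP CONSTRAINT twitter_tweets_pkey;   -- remove the key on tweet_text
ALTER TABLE twitter_tweets ADD COLUMN id bigserial;               -- auto-incremented surrogate key
ALTER TABLE twitter_tweets ADD CONSTRAINT twitter_tweets_pkey PRIMARY KEY (id);

That keeps tweet_text as ordinary data, so two users can post the same text, while every row still has a guaranteed-unique key.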
Unique Constraint Violation
You asked for a primary key. A primary key in Postgres automatically creates an index and a UNIQUE constraint.
Then you inserted rows of data. At least two of those rows had the same value in that primary key field. The duplicate data violated the UNIQUE constraint. Postgres then did its duty in refusing to store the offending data. That refusal is reported back to you, the Java programmer, as an Exception.
At least that is my guess based on this excerpt from the middle of your error text:
Caused by: org.postgresql.util.PSQLException: ERROR: duplicate key value violates unique constraint "twitter_tweets_pkey"
Detail: Key (tweet_text)=() already exists.
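Note that the Detail line says Key (tweet_text)=(), so the duplicate value is the empty string. If you want to see the existing row that is blocking the insert, a query along these lines (a sketch using the column names from your insert statement) will find it:

select tweet_id, tweet_text from twitter_tweets where tweet_text = '';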