Duplicate key value violates unique constraint in Postgres

I have set up a table like so:
CREATE TABLE teams
(
    ID SERIAL PRIMARY KEY,
    league_name VARCHAR
);
When running an insert I get the error: duplicate key value violates unique constraint "teams_pkey"
My insert looks like this:
async function saveMissingTeamData(teamObject) {
    let response;
    try {
        response = await pool.query('INSERT INTO teams (league_name) VALUES ($1) RETURNING *', [teamObject.league_name]);
    } catch (e) {
        console.error('Error Occurred in saveMissingTeamData', e);
        throw e;
    }
    return response.rows;
}
I was under the impression (I could be wrong) that by specifying SERIAL, auto-incrementing of the id would be handled by Postgres.
What's strange is that it works in my production environment but not in my staging environment (which is a copy of live data).
How can I rectify this?

The problem is that, for some hard-to-discover reason, the sequence associated with the column id does not match the actual contents of the column. You can fix it with this one-time intervention:
select setval('teams_id_seq', coalesce((select id + 1 from teams order by id desc limit 1), 1), false);
Read about Sequence Manipulation Functions.
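If you would rather not hard-code the sequence name, here is a sketch of the same check and fix, assuming only the teams table from the question (pg_get_serial_sequence looks up the sequence backing a SERIAL column):
-- Compare where the sequence is against the actual data:
SELECT last_value FROM teams_id_seq;
SELECT max(id) FROM teams;
-- Reset the sequence from the current max id (back to 1 if the table is empty):
SELECT setval(pg_get_serial_sequence('teams', 'id'),
              coalesce(max(id) + 1, 1), false)
FROM teams;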

Related

PostgresPagingQueryProvider returns 0 columns

I have a PostgresPagingQueryProvider with a simple query:
Select id from action
This is expected to return one column. When I run the batch, I get an exception saying:
PreparedStatementCallback; SQL [SELECT id FROM action WHERE 1=1 ORDER BY id ASC LIMIT 1000];
The column index is out of range: 1, number of columns: 0.; nested exception is org.postgresql.util.PSQLException:
The column index is out of range: 1, number of columns: 0.
This is the query provider:
@Bean
public PostgresPagingQueryProvider queryProvider() {
    PostgresPagingQueryProvider provider = new PostgresPagingQueryProvider();
    HashMap<String, Order> sorting = new HashMap<>();
    sorting.put("id", Order.ASCENDING);
    provider.setSelectClause("Select id");
    provider.setFromClause("from action");
    provider.setWhereClause("where 1=1");
    provider.setSortKeys(sorting);
    return provider;
}
And this is how the table is created:
CREATE TABLE action (
    id int4 NOT NULL,
    "timestamp" timestamp NULL,
    CONSTRAINT action_pkey PRIMARY KEY (id)
);
When using the same query with an ItemReader, this works just fine.
OK, I have figured it out.
The problem was that, due to a copy-paste of mine, I was setting the first parameter of the query to STOPPED.
As one might notice, the query does not accept any parameters, so deleting that binding fixed the issue.
In addition, I thought the part where I pass in the HashMap of parameter values was unrelated, so it was not included in the question; I guess no one could have answered it without that.
I am going to leave this here as a tribute to the monumental errors and time-wasting a copy-paste can lead to.

R2DBC: Why do RowsFetchSpec<T> operators (.all(), .first(), .one()) signal onComplete although the record is stored in the database?

I am using DatabaseClient to build a custom repository. After I insert or update an Item, I need that row data to return the saved/updated Item. I just can't wrap my head around why .all(), .first() and .one() are not returning the result map, although I can see that the data is inserted/updated in the database. They just signal onComplete. But .rowsUpdated() returns 1 row updated.
I observed this behaviour with both H2 and MS SQL Server.
I'm new to R2DBC. What am I missing? Any ideas?
@Transactional
public Mono<Item> insertItem(Item entity) {
    return dbClient
        .sql("insert into items (creationdate, name, price, traceid, referenceid) VALUES (:creationDate, :name, :price, :traceId, :referenceId)")
        .bind("creationDate", entity.getCreationDate())
        .bind("name", entity.getName())
        .bind("price", entity.getPrice())
        .bind("traceId", entity.getTraceId())
        .bind("referenceId", entity.getReferenceId())
        .fetch()
        .first() //.all() //.one()
        .map(Item::new)
        .doOnNext(item -> LOGGER.info(String.format("Item: %s", item)));
}
The table looks like this:
CREATE TABLE [dbo].[items](
    [creationdate] [bigint] NOT NULL,
    [name] [nvarchar](32) NOT NULL,
    [price] [int] NOT NULL,
    [traceid] [nvarchar](64) NOT NULL,
    [referenceid] [int] NOT NULL,
    PRIMARY KEY (name, referenceid)
)
Thanks!
This is the behavior of an INSERT/UPDATE statement in the database: it does not return the inserted/updated rows.
It returns the number of inserted/updated rows.
It may also return some values generated by the database (such as an auto-increment ID or a generated UUID) if you add the following line:
.filter(statement -> statement.returnGeneratedValues())
where you may name specific generated columns as parameters. However, this has limitations depending on the database (for example, MySQL can only return the last generated ID of an auto-increment column, even if you insert multiple rows).
If you want to get the inserted/updated values from the database, you need to do a select.
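For comparison, some databases can make the insert itself produce a result set at the SQL level. Here is a sketch against the items table from the question, with made-up values (SQL Server spells it OUTPUT, PostgreSQL spells it RETURNING); whether a given R2DBC driver surfaces these rows through .fetch() is driver-dependent:
-- SQL Server: OUTPUT streams the inserted row(s) back to the client
INSERT INTO items (creationdate, name, price, traceid, referenceid)
OUTPUT inserted.*
VALUES (1600000000, 'lager', 5, 'trace-1', 42);

-- PostgreSQL equivalent
INSERT INTO items (creationdate, name, price, traceid, referenceid)
VALUES (1600000000, 'lager', 5, 'trace-1', 42)
RETURNING *;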

Fetching the id field of an entity in a SQL transaction

I have the following schema:
Person(pid, pname)
Beer(bid, bname)
Likes(pid, bid)
I would like to insert a Likes item. However, I am accepting the following format for new users: (pid, pname, bid, bname).
I would like to create a transaction for this to avoid conflicts (this is a highly simplified version of my real problem, but the issue is the same). In my Person table, I set pid to auto-increment (SERIAL in PostgreSQL). The same goes for bid.
I am stuck at a point where I know the person does not exist but the beer does. So I have to create a person, then add an entity to the Likes relation.
As far as I know, when I use setAutoCommit(false), the transaction won't be saved until the commit. So should I change the DB design:
Change the auto-increment field to a normal integer, not-null field.
In the transaction, after autoCommit(false) has begun, read the last entry of the Person table.
Increment it by one while creating the new person.
Then create the Likes relation.
Or is there another way around this, or am I missing something about transactions?
Here is what I have done so far:
try {
    String add_person_sql = "INSERT INTO Person (name) VALUES(?)";
    PreparedStatement add_person_statement = mydb.prepareStatement(add_person_sql);
    String add_likes_sql = "INSERT INTO Likes (pid, bid) VALUES(?, ?)";
    PreparedStatement add_likes_statement = mydb.prepareStatement(add_likes_sql);
    mydb.setAutoCommit(false);
    add_person_statement.setString(1, pname);
    // The problem is, without saving the person I cannot know the id of the person
    // AFAIK, this execution is not finished until commit occurs
    add_person_statement.executeQuery();
    // How can I fetch the person's id?
    add_likes_statement.setString(1, pid);
    add_likes_statement.setString(2, bid);
    add_likes_statement.executeQuery();
    mydb.commit();
}
catch (Exception e) {
    System.out.println(e);
    mydb.rollback();
}
You can tell JDBC to return the generated ID from the insert statement, then you can use that ID to insert into the likes table:
mydb.prepareStatement(add_person_sql, new String[]{"pid"});
The second parameter tells the driver to return the generated value for the pid column.
Alternatively you can use
mydb.prepareStatement(add_person_sql, Statement.RETURN_GENERATED_KEYS);
which tells the driver to detect the auto-increment columns.
Then run the insert using executeUpdate():
add_person_statement.setString(1, pname);
add_person_statement.executeUpdate();

int newPid = -1;
ResultSet idResult = add_person_statement.getGeneratedKeys();
if (idResult.next()) {
    newPid = idResult.getInt(1);
}

add_likes_statement.setInt(1, newPid);
add_likes_statement.setString(2, bid);
add_likes_statement.executeUpdate();
mydb.commit();
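Since the question mentions PostgreSQL, a Postgres-specific sketch is also possible (not what the answer above uses): let the insert hand back the generated key with RETURNING and run it via executeQuery(), or collapse both inserts into one statement with a CTE. The values 'Alice' and 42 are made up:
-- Run with executeQuery(); the generated pid comes back as a one-row result set
INSERT INTO Person (name) VALUES (?) RETURNING pid;

-- Or do both steps server-side in a single statement:
WITH new_person AS (
    INSERT INTO Person (name) VALUES ('Alice') RETURNING pid
)
INSERT INTO Likes (pid, bid)
SELECT pid, 42 FROM new_person;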

How to insert data with an explicit id property that has a null value?

I have a JS object with an explicit id property, like this:
const data = {
    user_id: null,
    user_email: faker.internet.email()
};
The value of user_id is null, and there is a users table using user_id as its primary key.
I want to insert this data correctly, and I hope knex will obey the primary key increment rule.
Here is my code:
async function insert(user: any) {
    return await knex('users')
        .insert(user)
        .returning('*');
}
When I try to insert this data, I get an error:
error: null value in column "user_id" violates not-null constraint
How can I solve this?
You can set the value of user_id to undefined. knex then lets the column fall back to its database default, so the data is inserted and the primary key increment rule is obeyed:
const data = {
    user_id: undefined,
    user_email: faker.internet.email()
};
Check the inserted row in the users table: the value of user_id is 1.
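To see why null fails while undefined works, compare the SQL each case boils down to (a sketch, assuming a users table whose user_id is a serial primary key):
-- Binding null sends an explicit NULL, overriding the serial default:
INSERT INTO users (user_id, user_email) VALUES (NULL, 'a@example.com');  -- not-null violation
-- Omitting the column (roughly what happens for undefined) lets the sequence fire:
INSERT INTO users (user_email) VALUES ('a@example.com');
-- Spelling out DEFAULT has the same effect:
INSERT INTO users (user_id, user_email) VALUES (DEFAULT, 'a@example.com');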

Implementing if-not-exists-insert using Entity Framework without race conditions

Using LINQ-to-Entities 4.0, is there a correct pattern or construct for safely implementing "if not exists then insert"?
For example, I currently have a table that tracks "user favorites" - users can add or remove articles from their list of favorites.
The underlying table is not a true many-to-many relationship, but instead tracks some additional information such as the date the favorite was added.
CREATE TABLE UserFavorite
(
    FavoriteId int not null identity(1,1) primary key,
    UserId int not null,
    ArticleId int not null,
    DateAdded datetime not null
);
CREATE UNIQUE INDEX IX_UserFavorite_1 ON UserFavorite (UserId, ArticleId);
Inserting two favorites with the same User/Article pair results in a duplicate key error, as desired.
I've currently implemented the "if not exists then insert" logic in the data layer using C#:
if (!entities.FavoriteArticles.Any(
    f => f.UserId == userId &&
         f.ArticleId == articleId))
{
    FavoriteArticle favorite = new FavoriteArticle();
    favorite.UserId = userId;
    favorite.ArticleId = articleId;
    favorite.DateAdded = DateTime.Now;
    entities.AddToFavoriteArticles(favorite);
    entities.SaveChanges();
}
The problem with this implementation is that it's susceptible to race conditions. For example, if a user double-clicks the "add to favorites" link two requests could be sent to the server. The first request succeeds, while the second request (the one the user sees) fails with an UpdateException wrapping a SqlException for the duplicate key error.
With T-SQL stored procedures I can use transactions with lock hints to ensure a race condition never occurs. Is there a clean method for avoiding the race condition in Entity Framework without resorting to stored procedures or blindly swallowing exceptions?
You can also write a stored procedure that uses some newer tricks from SQL Server 2005+:
Use your combined unique key (UserId + ArticleId) in an UPDATE statement, then check the @@ROWCOUNT function. If it is 1 (or more), the update found a row matching your UserId and ArticleId; if it is 0, you are all clear to insert.
e.g.
Update tablex
Set userID = @UserID, ArticleID = @ArticleID -- you could set more columns here, as long as the Where uses the combined unique key
Where userID = @UserID and ArticleID = @ArticleID

If (@@ROWCOUNT = 0)
Begin
    Insert Into tablex ...
End
Best of all, it's all done in one call, so you don't have to first compare the data and then determine whether to insert. And of course it will stop any duplicate inserts and won't throw any errors (gracefully?).
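For comparison, here is a sketch that collapses the check and the insert into a single statement, using the UserFavorite table from the question:

Insert Into UserFavorite (UserId, ArticleId, DateAdded)
Select @UserId, @ArticleId, GETDATE()
Where Not Exists (
    Select 1 From UserFavorite
    Where UserId = @UserId And ArticleId = @ArticleId
);

Note that under the default READ COMMITTED isolation level two concurrent calls can still both pass the NOT EXISTS test, which is why the unique index IX_UserFavorite_1 (or the lock hints the question mentions, inside a transaction) is still needed as a backstop.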
You could try to wrap it in a transaction combined with the 'famous' try/catch pattern:
using (var scope = new TransactionScope())
{
    try
    {
        // ...do your thing...
        scope.Complete();
    }
    catch (UpdateException ex)
    {
        // here the second request ends up...
    }
}