duplicate key value violates unique constraint "pk_user_governments" - postgresql

I am trying to insert a record with a many-to-many relationship in EF Core into a PostgreSQL table.
Adding a simple record to Users works, but when I introduced the 1:N relationship with User_Governments it started failing with: duplicate key value violates unique constraint "pk_user_governments"
I have tried a few things:
SELECT MAX(user_government_id) FROM user_governments;
SELECT nextval('users_gov_user_id_seq');
The nextval keeps incrementing every time I run it in Postgres, but the issue does not go away.
I am inserting it as follows:
User user = new();
user.Organisation = organisation;
user.Name = userName;
user.Email = email;
user.IsSafetyDashboardUser = isSafetyFlag;

if (isSafetyFlag)
{
    List<UserGovernment> userGovernments = new List<UserGovernment>();
    foreach (var govId in lgas)
    {
        userGovernments.Add(new UserGovernment()
        {
            LocalGovId = govId,
            StateId = 7
        });
    }
    user.UserGovernments = userGovernments;
}

_context.Users.Add(user);
int rows_affected = _context.SaveChanges();
The table definition in the DB is as follows:
CREATE TABLE IF NOT EXISTS user_governments
(
    user_government_id integer NOT NULL GENERATED BY DEFAULT AS IDENTITY ( INCREMENT 1 START 1 MINVALUE 1 MAXVALUE 2147483647 CACHE 1 ),
    user_id integer NOT NULL,
    state_id integer NOT NULL,
    local_gov_id integer NOT NULL,
    CONSTRAINT pk_user_governments PRIMARY KEY (user_government_id),
    CONSTRAINT fk_user_governments_local_govs_local_gov_id FOREIGN KEY (local_gov_id)
        REFERENCES local_govs (local_gov_id) MATCH SIMPLE
        ON UPDATE NO ACTION
        ON DELETE CASCADE,
    CONSTRAINT fk_user_governments_states_state_id FOREIGN KEY (state_id)
        REFERENCES states (state_id) MATCH SIMPLE
        ON UPDATE NO ACTION
        ON DELETE CASCADE,
    CONSTRAINT fk_user_governments_users_user_id FOREIGN KEY (user_id)
        REFERENCES users (user_id) MATCH SIMPLE
        ON UPDATE NO ACTION
        ON DELETE CASCADE
)
WITH (
    OIDS = FALSE
)
TABLESPACE pg_default;
I have also tried running the following command, as per this post:
SELECT SETVAL((SELECT PG_GET_SERIAL_SEQUENCE('user_governments', 'user_government_id')), (SELECT (MAX("user_government_id") + 1) FROM "user_governments"), FALSE);
but I get the error:
ERROR: relation "user_governments" does not exist

An IDENTITY column is an auto-increment that is integrated into the table itself. There is no need to use PG_GET_SERIAL_SEQUENCE, which is dedicated to SEQUENCEs, the other way of getting auto-increment values from an object outside the table. So you cannot use a query like:
SELECT SETVAL((SELECT PG_GET_SERIAL_SEQUENCE('user_governments', 'user_government_id')),
              (SELECT (MAX("user_government_id") + 1) FROM "user_governments"), FALSE)
If your purpose is to set the seed of an IDENTITY column, the way to do that is a syntax like this one:
ALTER TABLE user_governments
    ALTER COLUMN user_government_id RESTART WITH 42;  -- replace 42 with MAX(user_government_id) + 1 from your table
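Note that RESTART WITH only accepts a constant, so if you want to derive the value from the table itself you need dynamic SQL. A minimal sketch, assuming the table and column names above:
DO $$
DECLARE
    next_id bigint;
BEGIN
    -- compute the next value from the existing data
    SELECT COALESCE(MAX(user_government_id), 0) + 1 INTO next_id FROM user_governments;
    -- RESTART WITH needs a literal, so build the statement dynamically
    EXECUTE format('ALTER TABLE user_governments ALTER COLUMN user_government_id RESTART WITH %s', next_id);
END
$$;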

It turned out that I had not built the model correctly.
The user_governments table has an auto-incrementing key, but I had defined the model as follows:
modelBuilder.Entity<UserGovernment>()
    .HasKey(bc => new { bc.UserId, bc.LocalGovId });
I replaced it with:
modelBuilder.Entity<UserGovernment>()
    .HasKey(bc => new { bc.UserGovernmentId });
The Journey :)
Initially I found that once I commented out the following line:
_context.UserGovernments.AddRange(userGovernments);
it just inserted the data with user_government_id as 0.
Then I tried manually giving a value to user_government_id and that also went through successfully, which led me to check my model builder code!
With the composite key mapping, EF Core was presumably not treating UserGovernmentId as a database-generated column, so it sent the default 0 for every row and the second row violated pk_user_governments.
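For reference, you can confirm from psql that the identity column assigns a value whenever it is omitted from the insert (the ids below are placeholders and must reference existing users, states and local_govs rows):
INSERT INTO user_governments (user_id, state_id, local_gov_id)
VALUES (1, 7, 1)
RETURNING user_government_id;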

Related

Updating key constraints on multiple records simultaneously

We have a table with a unique key which gets updated by ‘aging’ older records, as mentioned by @Tony O’Hagan here.
The table looks as follows:
-- auto-generated definition
create table abc
(
    key   uuid    not null,
    hash  text    not null,
    age   integer not null,
    value varchar(50),
    constraint abc_pkey
        primary key (key, age)
);
We can simulate an ‘aged’ record with the following dummy data:
INSERT INTO public.abc (key, hash, age, value) VALUES ('bec619bb-451c-49d8-b555-4d16e1f724fb', 'asdf', 0, '1');
INSERT INTO public.abc (key, hash, age, value) VALUES ('bec619bb-451c-49d8-b555-4d16e1f724fb', 'asdf', 1, '2');
INSERT INTO public.abc (key, hash, age, value) VALUES ('bec619bb-451c-49d8-b555-4d16e1f724fb', 'asdf', 2, '3');
When I want to add a new record, I must first ‘age’ the older records before inserting a new record with age = 0.
However, I get the following error message when I run the query below:
[23505] ERROR: duplicate key value violates unique constraint "abc_pkey" Detail: Key (key, age)=(bec619bb-451c-49d8-b555-4d16e1f724fb, 2) already exists.
UPDATE abc
SET age = age + 1
WHERE key IN (
'bec619bb-451c-49d8-b555-4d16e1f724fb'
)
How can I update/age these records?
We can defer the constraint checks with the command:
SET CONSTRAINTS ALL DEFERRED
which lets us run our update:
UPDATE public.abc SET age = age + 1;
3 rows affected
We can then reactivate the constraints with:
SET CONSTRAINTS ALL IMMEDIATE
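Note that SET CONSTRAINTS only affects constraints that were declared DEFERRABLE, and a PRIMARY KEY is NOT DEFERRABLE by default, so with the abc definition above the constraint would first have to be recreated as deferrable. A sketch, assuming nothing else depends on abc_pkey:
BEGIN;
ALTER TABLE abc DROP CONSTRAINT abc_pkey;
ALTER TABLE abc ADD CONSTRAINT abc_pkey PRIMARY KEY (key, age) DEFERRABLE INITIALLY IMMEDIATE;
COMMIT;
Also, SET CONSTRAINTS ... DEFERRED only has an effect inside a transaction block, because the deferred check runs at commit:
BEGIN;
SET CONSTRAINTS ALL DEFERRED;
UPDATE public.abc SET age = age + 1;
COMMIT;  -- the primary key is checked here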

Kafka/KsqlDb : Why is PRIMARY KEY appending chars?

I intend to create a TABLE called WEB_TICKETS where the PRIMARY KEY is equal to the key->ID value. For some reason, when I run the CREATE TABLE statement the PRIMARY KEY value is prefixed with the characters 'J0' - why is this happening?
KsqlDb Statements
These work as expected
CREATE STREAM STREAM_WEB_TICKETS (
ID_TICKET STRUCT<ID STRING> KEY
)
WITH (KAFKA_TOPIC='web.mongodb.tickets', FORMAT='AVRO');
CREATE STREAM WEB_TICKETS_REKEYED
WITH (KAFKA_TOPIC='web_tickets_by_id') AS
SELECT *
FROM STREAM_WEB_TICKETS
PARTITION BY ID_TICKET->ID;
PRINT 'web_tickets_by_id' FROM BEGINNING LIMIT 1;
key: 5d0c2416b326fe00515408b8
The following successfully creates the table but the PRIMARY KEY value isn't what I expect:
CREATE TABLE web_tickets (
id_pk STRING PRIMARY KEY
)
WITH (KAFKA_TOPIC = 'web_tickets_by_id', VALUE_FORMAT = 'AVRO');
select id_pk from web_tickets EMIT CHANGES LIMIT 1;
|ID_PK|
|J05d0c2416b326fe00515408b8|
As you can see, the ID_PK value has the characters J0 prepended to it. Why is this?
It appears as though I wasn't properly setting the KEY FORMAT: presumably the rekeyed stream was created with FORMAT='AVRO' (which covers both key and value), while the table only declared VALUE_FORMAT='AVRO', so the Avro-serialised key was being read with the default KAFKA format and its serialisation bytes showed up as a prefix. The following statement produces the expected result.
CREATE TABLE web_tickets_test_2 (
id_pk VARCHAR PRIMARY KEY
)
WITH (KAFKA_TOPIC = 'web_tickets_by_id', FORMAT = 'AVRO');
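To verify (a sketch; the exact key value depends on your data), the same projection against the new table should now return the raw key with no prefix:
select id_pk from web_tickets_test_2 EMIT CHANGES LIMIT 1;
|ID_PK|
|5d0c2416b326fe00515408b8|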

How to reset the auto generated primary key in PostgreSQL

My class for the topics table is below. The primary key is an auto-generated serial key. While testing, I deleted rows from the table and was trying to re-insert them. The UUID is not getting reset.
class Topics(db.Model):
    """ User Model for different topics """
    __tablename__ = 'topics'
    uuid = db.Column(db.Integer, primary_key=True)
    topics_name = db.Column(db.String(256), index=True)

    def __repr__(self):
        return '<Post %r>' % self.topics_name
I tried the command below to reset the key:
ALTER SEQUENCE topics_uuid_seq RESTART WITH 1;
It did not work.
I would appreciate any suggestions!
If it's indeed a serial ID, you can reset the owned SEQUENCE with:
SELECT setval(pg_get_serial_sequence('topics', 'uuid'), max(uuid)) FROM topics;
See:
How to reset postgres' primary key sequence when it falls out of sync?
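One caveat for this test scenario (an assumption, since it is not stated how many rows were deleted): if the table is empty, max(uuid) is NULL and setval() would fail, so a null-safe variant is:
SELECT setval(pg_get_serial_sequence('topics', 'uuid'), COALESCE(max(uuid), 1), max(uuid) IS NOT NULL) FROM topics;
With an empty table this makes the next nextval() return 1; otherwise the next value is max(uuid) + 1.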
But why would the name be uuid? UUIDs are not integer numbers and not serial. Also, it's not entirely clear what's going wrong when you write:
The UUID is not getting reset.
About ALTER SEQUENCE ... RESTART:
Postgres manually alter sequence
In order to avoid duplicate-id errors that may arise when resetting the sequence, try:
UPDATE table SET id = DEFAULT;
ALTER SEQUENCE seq RESTART;
UPDATE table SET id = DEFAULT;
For added context:
'table' = your table name
'id' = your id column name
'seq' = find the name of your sequence with:
SELECT pg_get_serial_sequence('table', 'id');
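Applied to the topics table from the question above, that would look something like this (a sketch, assuming the default sequence name topics_uuid_seq and no foreign keys referencing uuid):
-- confirm the sequence name first
SELECT pg_get_serial_sequence('topics', 'uuid');
-- push existing rows to fresh high values, restart the sequence, then renumber from 1
UPDATE topics SET uuid = DEFAULT;
ALTER SEQUENCE topics_uuid_seq RESTART;
UPDATE topics SET uuid = DEFAULT;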

Auto increment non primary column in SQLAlchemy

In my DB model I need userId and forumPostId to be a composite primary key, but I also need id to be an auto-incremented value. When I try to insert a new row into the table I get None in id instead of an auto-incremented integer.
class ForumPostFollow(db.Model):
    __tablename__ = "forum_post_follow"
    id = db.Column(db.Integer, autoincrement=True, nullable=False, unique=True)
    userId = db.Column(db.Integer, db.ForeignKey('user.id'), primary_key=True)
    forumPostId = db.Column(db.Integer, db.ForeignKey('forum_post.id'), primary_key=True)
    active = db.Column(db.Boolean, nullable=False)
My package versions: Flask-SQLAlchemy==2.3.2, SQLAlchemy>=1.3.0.
This question is similar to this question, but that one is for version 1.1.
Updated Question
I've changed my id column to a serial from the terminal:
ALTER TABLE forum_post_follow DROP COLUMN id;
ALTER TABLE forum_post_follow ADD COLUMN id SERIAL;
My altered column then looks like this, but I still get the same error:
sqlalchemy.exc.IntegrityError: (psycopg2.errors.NotNullViolation) null value in column "id" violates not-null constraint
DETAIL: Failing row contains (1, 1, t, null).
[SQL: INSERT INTO forum_post_follow (id, "userId", "forumPostId", active) VALUES (%(id)s, %(userId)s, %(forumPostId)s, %(active)s)]
[parameters: {'id': None, 'userId': 1, 'forumPostId': 1, 'active': True}]
I had a similar problem and found the following in the SQLAlchemy 1.3 documentation:
For the case where this default generation of IDENTITY is not desired, specify False for the Column.autoincrement flag, on the first integer primary key column:
So you might want to try this:
class ForumPostFollow(db.Model):
    __tablename__ = "forum_post_follow"
    id = db.Column(db.Integer, autoincrement=False, primary_key=True, nullable=False, unique=True)
    userId = db.Column(db.Integer, db.ForeignKey('user.id'), primary_key=True)
    forumPostId = db.Column(db.Integer, db.ForeignKey('forum_post.id'), primary_key=True)
    active = db.Column(db.Boolean, nullable=False)

I'm trying to insert tuples into a table A (from table B) if the primary key of the table B tuple doesn't exist in table A

Here is what I have so far:
INSERT INTO Tenants (LeaseStartDate, LeaseExpirationDate, Rent, LeaseTenantSSN, RentOverdue)
SELECT CURRENT_DATE, NULL, NewRentPayments.Rent, NewRentPayments.LeaseTenantSSN, FALSE from NewRentPayments
WHERE NOT EXISTS (SELECT * FROM Tenants, NewRentPayments WHERE NewRentPayments.HouseID = Tenants.HouseID AND
NewRentPayments.ApartmentNumber = Tenants.ApartmentNumber)
So, HouseID and ApartmentNumber together make up the primary key. If there is a tuple in table B (NewRentPayments) that doesn't exist in table A (Tenants) based on the primary key, then it needs to be inserted into Tenants.
The problem is, when I run my query, it doesn't insert anything (I know for a fact there should be 1 tuple inserted). I'm at a loss, because it looks like it should work.
Thanks.
Your subquery is not correlated; it is just a non-correlated join query.
Per the description of your problem, you don't need that join inside the subquery.
Try this:
insert into Tenants (LeaseStartDate, LeaseExpirationDate, Rent, LeaseTenantSSN, RentOverdue)
select current_date, null, p.Rent, p.LeaseTenantSSN, FALSE
from NewRentPayments p
where not exists (
    select *
    from Tenants t
    where p.HouseID = t.HouseID
      and p.ApartmentNumber = t.ApartmentNumber
)
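Since (HouseID, ApartmentNumber) is the primary key of Tenants, an alternative on PostgreSQL 9.5+ is ON CONFLICT DO NOTHING. A sketch, assuming you also want to carry HouseID and ApartmentNumber over from NewRentPayments (they have to be supplied somewhere, being the primary key):
insert into Tenants (HouseID, ApartmentNumber, LeaseStartDate, LeaseExpirationDate, Rent, LeaseTenantSSN, RentOverdue)
select p.HouseID, p.ApartmentNumber, current_date, null, p.Rent, p.LeaseTenantSSN, FALSE
from NewRentPayments p
on conflict (HouseID, ApartmentNumber) do nothing;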