I'm using a GORM unique index to enforce a certain ordering of elements:
type Element struct {
    gorm.Model
    ID       int `gorm:"primary_key"`
    Index    int `gorm:"uniqueIndex:unique_order"`
    ParentID int `gorm:"uniqueIndex:unique_order"`
}
The problem I'm running into is that when I create and delete multiple elements, I start to get an error like:
ERROR: duplicate key value violates unique constraint "unique_order" (SQLSTATE 23505)
This is happening because Gorm soft deletes elements by setting deleted_at, but the rows still exist in the database and the unique index is still enforced.
I know I can get around this using a partial index in PostgreSQL, but I'm wondering if there's a way I can handle this with GORM?
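One possible way to keep this inside GORM, sketched below under two assumptions: that GORM v2's index tags accept a where: option on a named composite uniqueIndex (the tag option itself is documented for partial indexes), and that the dialect is Postgres. If the tag route doesn't pan out, issuing the partial index as raw SQL after AutoMigrate should work regardless. This is an untested sketch, not a confirmed answer.

type Element struct {
    gorm.Model // already supplies ID, CreatedAt, UpdatedAt, DeletedAt
    // Assumption: where: is honored on a named composite uniqueIndex,
    // making it a partial index that ignores soft-deleted rows.
    Index    int `gorm:"uniqueIndex:unique_order,where:deleted_at IS NULL"`
    ParentID int `gorm:"uniqueIndex:unique_order"`
}

// Fallback: create the partial index manually after AutoMigrate
// ("index" needs quoting, as it is a reserved word in Postgres).
db.Exec(`CREATE UNIQUE INDEX IF NOT EXISTS unique_order
    ON elements ("index", parent_id)
    WHERE deleted_at IS NULL`)

Either way, uniqueness is then only enforced among live rows, so soft-deleted elements no longer collide with new ones.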
Problem:
I'd like to make a composite primary key from columns id and user_id for a PostgreSQL table. Column user_id is a foreign key with an integer type, whereas id is a string. Will this cause a conflict because the types are different?
Edit: Also, are there combinations of types that would cause problems?
Context:
I obviously should match the type of the User.id field for its foreign key. And, the id for my table will be derived from a uuid to prevent data leaks. So I would prefer not to change the types of either field I want in this table.
Research:
I am using SQLAlchemy. Their documentation mentions how to create a composite primary key, but it doesn't discuss dealing with different types for each column.
No, this won't be a problem.
Your question seems to indicate that you think the values of the indexed columns are somehow concatenated and then stored in the index as a single value. That is not the case: each column value is stored independently, side by side, much as the column values are stored in the table row itself.
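As a concrete illustration (table and column names invented for the example, mirroring the question's setup), a composite key mixes column types just as freely as a table does:

CREATE TABLE user_documents (
    id      text    NOT NULL,                       -- string key derived from a UUID
    user_id integer NOT NULL REFERENCES users (id), -- integer foreign key
    PRIMARY KEY (id, user_id)                       -- mixed types: no conflict
);

Each key entry stores the text value and the integer value side by side, so no type coercion or concatenation ever happens.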
I have a table ideas with columns idea_id, element_id and element_value.
Initially, I had created a composite primary key (ideas_pkey) using all three columns, but I started facing size-limit issues with the index associated with the primary key, because the element_value column held very large values.
Hence, I created another unique index, hashing the potentially large column:
CREATE UNIQUE INDEX ideas_pindex ON public.ideas USING btree (idea_id, element_id, md5(element_value))
Now I have deleted the initial primary key ideas_pkey and want to recreate it using this newly created index, like so:
alter table ideas add constraint ideas_pkey PRIMARY KEY ("idea_id", "element_id", "element_value") USING INDEX ideas_pindex;
But this fails with the following error:
ERROR: syntax error at or near "ideas_pindex"
LINE 2: ...a_id", "element_id", "element_value") USING INDEX ideas_...
^
SQL state: 42601
Character: 209
What am I doing wrong?
A primary key index can't be a functional index. You can instead just keep a unique index on your table, or create another column storing the md5() of your larger column and include that column in the PK.
That being said, there is also another error in your query: if you want to specify an index name with USING INDEX, you can't also list the PK columns (they are derived from the underlying index). And if you want to list the PK columns, you can't specify the index name or definition, as the index will be created automatically. See the documentation.
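A sketch of both repairs (index and column names are placeholders; the hash column here is backfilled once by hand, so application code or a trigger would need to keep it in sync afterwards):

-- Correct USING INDEX form: no column list, and the index must be a
-- plain (non-functional) unique index on the table.
ALTER TABLE ideas ADD CONSTRAINT ideas_pkey
    PRIMARY KEY USING INDEX some_plain_unique_index;

-- Alternative: materialize the hash in a real column and build the PK on it.
ALTER TABLE ideas ADD COLUMN element_value_md5 text;
UPDATE ideas SET element_value_md5 = md5(element_value);
ALTER TABLE ideas ALTER COLUMN element_value_md5 SET NOT NULL;
ALTER TABLE ideas ADD CONSTRAINT ideas_pkey
    PRIMARY KEY (idea_id, element_id, element_value_md5);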
I have a table 'client', which has 3 columns - id, siebel_id, phone_number.
phone_number has a unique constraint. If I save a new client with an existing number, I get the error ERROR: duplicate key value violates unique constraint "phone_number_unique".
Is it possible to make PostgreSQL or MyBatis show the siebel_id of the record where the phone number is already saved?
I mean to get a message like
'ERROR: duplicate key value violates unique constraint "phone_number_unique"
Detail: Key (phone_number)=(+79991234567) already exists on siebel_id...'
No, it's not possible to tweak the internal message that the PostgreSQL database engine returns alongside an error. Well... unless you recompiled PostgreSQL itself from source, and I assume that's off the table.
However, you can easily search for the offending row using SQL, as in:
select siebel_id from client where phone_number = '+79991234567';
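If you want the siebel_id in the error message itself, one hedged workaround is to route inserts through a small PL/pgSQL wrapper that traps the unique violation and re-raises it with that lookup folded in (the function name and column types are assumptions for this sketch):

CREATE OR REPLACE FUNCTION insert_client(p_siebel_id text, p_phone text)
RETURNS void AS $$
DECLARE
    v_owner text;
BEGIN
    INSERT INTO client (siebel_id, phone_number) VALUES (p_siebel_id, p_phone);
EXCEPTION WHEN unique_violation THEN
    -- Look up the existing owner and raise a message of our own.
    SELECT siebel_id INTO v_owner FROM client WHERE phone_number = p_phone;
    RAISE EXCEPTION 'duplicate phone %: already exists on siebel_id %',
        p_phone, v_owner;
END;
$$ LANGUAGE plpgsql;

Calling SELECT insert_client('42', '+79991234567'); then fails with your custom text, which MyBatis surfaces like any other database error.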
How do you add a compound / composite index on a PostgreSQL table with TimescaleDB installed?
Following https://docs.timescale.com/latest/using-timescaledb/schema-management, you can add a compound/composite index in TimescaleDB by simply doing:
CREATE INDEX ON conditions (time DESC, cityid)
WHERE cityid IS NOT NULL;
time is a column with timestamps (the one used as the primary time dimension in TimescaleDB).
cityid is a column holding a city identifier we often want to query for (as a second filter after the time-series dates).
This can be done before or after converting the table to a hypertable.
The WHERE cityid IS NOT NULL clause is there to avoid bloating the index when cityid is often NULL. Use it by default, unless you often search for missing data (cityid IS NULL).
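For example, a query shaped like the one below (only time and cityid are real columns from this answer; the literal 42 assumes an integer identifier) can be served directly by that index:

SELECT time, cityid
FROM conditions
WHERE cityid = 42
ORDER BY time DESC
LIMIT 100;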
When I create my tables with GORM, it adds columns to the table that I don't want. I'm not sure how these extra fields are being added. This causes me to run into an error that says, "pq: null value in column "user_id" violates not-null constraint". "user_id" is the unwanted column that gets added. I'm using GORM and PostgreSQL.
I have a many-to-many relationship between my two tables. My first table is created properly, and my second table, stores, is created with the provided fields plus two unwanted fields: "user_id" and "stores_id". I've tried removing the many-to-many relationship to see if that was the issue, and I've tried dropping the tables and recreating them with different fields. Regardless, I have not been able to get rid of the two extra columns.
The first (working) table:
type User struct {
    gorm.Model
    ID     int     `json:"u_id"`
    Name   string  `json:"u_name"`
    Stores []Store `gorm:"many2many:stores;" json:"stores"`
}
When I execute '\d users', I get the following columns: id, created_at, updated_at, deleted_at, name.
The second (problematic) table:
type Stores struct {
    gorm.Model
    ID    int    `json:"s_id"`
    NUM   int    `gorm:"unique" json:"s_num"`
    Users []User `gorm:"many2many:user" json:"users"`
}
When I execute '\d stores', I get the following columns: user_id, vehicle_id, id, created_at, updated_at, deleted_at, num.
I'm executing the creation of these tables through the following code:
db.AutoMigrate(&User{})
db.AutoMigrate(&Store{})
On another note, if I add gorm:"primary_key;auto_increment" to my ID values in my structs, I get the error "pq: column "user_id" appears twice in primary key constraint". I was able to get around this by removing the primary_key and auto_increment attributes, running AutoMigrate(), and then adding them back in and running AutoMigrate() again - this was totally fine and working.
I've also tried manually inserting a user_id and store_id. This works fine, except that I'd have to generate new ones every time because they require uniqueness. I understand that the error "pq: null value in column "user_id" violates not-null constraint" is caused by the fact that I'm not providing a user_id or store_id when I'm creating my store. I'm simply confused why a user_id and store_id column is being generated at all, and I'm hoping I can fix that.
This is what gorm.Model looks like:
type Model struct {
    ID        uint `gorm:"primary_key"`
    CreatedAt time.Time
    UpdatedAt time.Time
    DeletedAt *time.Time
}
When we embed gorm.Model inside a struct, it means we are adding the default fields of gorm.Model to our struct:
type Stores struct {
    gorm.Model
    ....
}
so your User model will effectively look something like:
type User struct {
    ID        uint `gorm:"primary_key"` // from gorm.Model
    CreatedAt time.Time
    UpdatedAt time.Time
    DeletedAt *time.Time
    ID        int     `json:"u_id"` // your own ID - clashes with the embedded one
    Name      string  `json:"u_name"`
    Stores    []Store `gorm:"many2many:stores;" json:"stores"`
}
That error is likely due to the duplicated primary key. Try renaming ID int `json:"u_id"` to UserID. You need to update Stores too.
Fixed the duplicate ID errors by removing gorm.Model, as @Akshaydeep Girl pointed out what having gorm.Model entails. As for the random 'user_id' and 'store_id' columns that kept being added automatically, they came from the many2many gorm relationship. I was able to remove them by switching the order of migration.
func DBMigrate(db *gorm.DB) *gorm.DB {
    db.AutoMigrate(&Store{})
    db.AutoMigrate(&User{})
    return db
}
When I dropped both tables and re-compiled/ran my project with the new migration order, the stores table was created without the random 'user_id' and 'store_id' columns, and the users table didn't have them either.
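For reference, a sketch of what the de-duplicated models could look like after dropping gorm.Model (reconstructed from the answers above, not the poster's confirmed final code; a single consistent join-table name, user_stores, is assumed here, since the original tags named two different ones, and the "time" import is required):

type User struct {
    ID        int `gorm:"primary_key" json:"u_id"`
    CreatedAt time.Time
    UpdatedAt time.Time
    DeletedAt *time.Time
    Name      string  `json:"u_name"`
    Stores    []Store `gorm:"many2many:user_stores;" json:"stores"`
}

type Store struct {
    ID        int `gorm:"primary_key" json:"s_id"`
    CreatedAt time.Time
    UpdatedAt time.Time
    DeletedAt *time.Time
    Num       int    `gorm:"unique" json:"s_num"`
    Users     []User `gorm:"many2many:user_stores;" json:"users"`
}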