Assuming the following data model
model Card {
  id   Int    @id @default(autoincrement())
  name String
}
Let's then say the Card table contains the following rows:
id: 0, name: "0"
id: 1, name: "1"
For some reason, let's say that I have to change id:1 to id:2.
id: 0, name: "0"
id: 2, name: "1"
If I then invoke prisma.card.create(<some info here>...), I will get an error like this:
Unhandled Rejection (Error): GraphQL error:
Invalid `prisma.card.create()` invocation:
Unique constraint failed on the fields: (`id`)
Which makes perfect sense. However, is there a way I can make autoincrement skip an existing id? That is, skip to 3 such that:
id: 0, name: "0"
id: 2, name: "1"
id: 3, name: "3"
The autoincrement() attribute in Prisma maps directly to the native auto-increment behavior of the underlying database, so this is really a question about your specific database rather than about Prisma.
That said, even at the database level it is rarely advisable to change the seed of a table's ID column.
There is a DDL command to reset a table's ID sequence, but you should not use it as part of the normal lifecycle of your application.
For example, this is how to do it on Postgres: Reset auto increment counter in postgres
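For illustration, here is a minimal sketch of re-syncing the sequence from Prisma Client with a raw query. It assumes PostgreSQL, a recent Prisma Client version that exposes $queryRawUnsafe, and the Card model from above; adjust the table and column identifiers to your schema:

const { PrismaClient } = require('@prisma/client')
const prisma = new PrismaClient()

// Move the sequence backing "Card".id to MAX(id) + 1 so the next
// INSERT picks a free id. pg_get_serial_sequence resolves the
// sequence name, so it does not have to be hard-coded.
async function resyncCardIdSequence() {
  await prisma.$queryRawUnsafe(
    `SELECT setval(pg_get_serial_sequence('"Card"', 'id'), COALESCE(MAX(id), 0) + 1, false) FROM "Card";`
  )
}

This is a one-off maintenance step, not something to run as part of normal request handling.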
It's been a few months since I started using Prisma and I'm still confused.
In a normal database, the foreign key data also lives in the table itself. However, according to the Prisma documentation, in Prisma this data does not exist at the database level.
So where is it stored? It seems that the relations I create with connect: { id: 1 } are stored in the Prisma Client. If I delete the Prisma dependency and install it again with npm install, will all of this relational data be deleted too? How can I make this as safe as possible?
It also seems too dangerous when I migrate later. What am I misunderstanding?
ADDED
const user = await prisma.user.create({
  data: {
    email: 'vlad@prisma.io',
    posts: {
      connect: [{ id: 8 }, { id: 9 }, { id: 10 }],
    },
  },
  include: {
    posts: true, // Include all posts in the returned object
  },
})
In this case, where are id 8, id 9 and id 10 stored? Is there any way to check other than Prisma Studio or a SELECT query? I don't know where they are physically stored. They aren't even in the PlanetScale database.
// In the workbench, the foreign key is actually saved and can be exported. I don't understand how it is "not at the database level", and where it is referenced and stored.
Considering this Schema:
generator client {
  provider = "prisma-client-js"
}

datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}

model User {
  id    Int    @id @default(autoincrement())
  name  String
  email String @unique
  posts Post[]
}

model Post {
  id        Int     @id @default(autoincrement())
  title     String
  published Boolean @default(true)
  author    User    @relation(fields: [authorId], references: [id])
  authorId  Int
}
There is a one-to-many relationship between User and Posts.
according to the prisma document, in prisma, the data does not exist
at the database level.
Only the relation fields do not exist at the database level; in this case, posts on the User model and author on the Post model have no corresponding columns. The foreign key, however, does exist at the database level, so authorId is actually stored in the database.
Based on the create query you have shared:
in this case, id 8, id 9, id 10, Where are all these stored?
The connect statement in the create query essentially links the existing records.
To elaborate, the posts with ids 8, 9 and 10 will have their authorId set to the id of the newly created user record.
So the data is stored in the database, and you can always check which posts were created by a specific author: just query all the posts whose authorId is set to the id you are looking for.
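As an illustration, here is a minimal sketch using the Prisma Client generated from the schema above (user is assumed to be the record returned by the create call shown earlier):

// Fetch every post whose foreign key column authorId points at this user.
const postsByAuthor = await prisma.post.findMany({
  where: { authorId: user.id },
})

// Equivalently, read the relation through the user record itself.
const userWithPosts = await prisma.user.findUnique({
  where: { id: user.id },
  include: { posts: true },
})

Both queries resolve through the authorId column that the migration created in the Post table; the relation data lives nowhere else.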
I'm new to Go and backend development, and I'm trying to set up a many-to-many relation between tables. I used this repo as a model: https://github.com/harranali/gorm-relationships-examples/tree/main/many-to-many
I used GORM with PostgreSQL.
My model:
type Book struct {
	gorm.Model
	Title       string         `json:"title"`
	Author      string         `json:"author"`
	Description string         `json:"description"`
	Category    string         `json:"Category"`
	Publisher   string         `json:"publisher"`
	AuthorsCard []*AuthorsCard `gorm:"many2many:book_authorscard;" json:"authorscard"`
}

type AuthorsCard struct {
	gorm.Model
	Name        string `json:"name"`
	Age         int    `json:"age"`
	YearOfBirth int    `json:"year"`
	Biography   string `json:"biography"`
}
After connecting to the database and running AutoMigrate:
func init() {
	config.Connect()
	db = config.GetDB()
	db.AutoMigrate(&models.Book{}, &models.AuthorsCard{})
}
I've created a function to see how the relation works:
func TestCreate() {
	var AuthorsCard = []models.AuthorsCard{
		{
			Age:         23,
			Name:        "test",
			YearOfBirth: 1999,
			Biography:   "23fdgsdddTEST",
		},
	}
	db.Create(&AuthorsCard)

	var testbook = models.Book{
		Title:       "Test",
		Author:      "tst",
		Description: "something",
	}
	db.Create(&testbook)

	db.Model(&testbook).Association("AuthorsCard").Append(&AuthorsCard)
}
But I got this error:
panic: reflect: call of reflect.Value.Interface on zero Value [recovered]
panic: reflect: call of reflect.Value.Interface on zero Value
How can I deal with this "null" problem and make a proper relation?
UPD: The first part of the problem was related to the GORM version. After I switched from the old version (github.com/jinzhu/gorm v1.9.16) to the new one (gorm.io/gorm v1.23.6), the reflect error went away.
But now, when I want to create a new book, I get this error:
/go/pkg/mod/gorm.io/driver/postgres@v1.3.7/migrator.go:119 ERROR: there is no unique constraint matching given keys for referenced table "authors_cards" (SQLSTATE 42830)
[28.440ms] [rows:0] CREATE TABLE "book_authorscard" ("book_id" bigint,"authors_card_id" bigint,PRIMARY KEY ("book_id","authors_card_id"),CONSTRAINT "fk_book_authorscard_authors_card" FOREIGN KEY ("authors_card_id") REFERENCES "authors_cards"("id"),CONSTRAINT "fk_book_authorscard_book" FOREIGN KEY ("book_id") REFERENCES "books"("id"))
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.
UPD 2:
I decided to do a Migrator().DropTable(). That kind of worked, and all the errors are gone, but I still get "authorscard": null as a response.
Judging by the GORM v2 release notes (https://gorm.io/docs/v2_release_note.html), I think you are trying to use a v2 feature with an old version (<v2). Try using the latest GORM version.
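For reference, a rough sketch of what a GORM v2 setup for this code might look like. The package layout, DSN and function names are assumptions based on the config.Connect()/config.GetDB() calls in the question; only the gorm.io/gorm and gorm.io/driver/postgres modules already mentioned above are used:

package config

import (
	"gorm.io/driver/postgres" // v2 Postgres driver
	"gorm.io/gorm"            // v2 core, replaces github.com/jinzhu/gorm
)

var db *gorm.DB

// Connect opens the database with the GORM v2 API; association helpers
// such as Append expect this version.
func Connect() error {
	// Placeholder DSN: adjust host, credentials and database name.
	dsn := "host=localhost user=postgres password=postgres dbname=books port=5432 sslmode=disable"
	var err error
	db, err = gorm.Open(postgres.Open(dsn), &gorm.Config{})
	return err
}

// GetDB returns the shared *gorm.DB handle.
func GetDB() *gorm.DB {
	return db
}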
I want to add a column to a table, but the migration is failing:
The model:
model Notification {
  id             Int          @id @default(autoincrement())
  movie          Movie?       @relation(fields: [movieId], references: [id])
  movieId        Int?
  movieRating    MovieRating? @relation(fields: [movieRatingId], references: [id])
  movieRatingId  Int?
  user           User?        @relation(fields: [userId], references: [id])
  userId         Int?
  followedUser   User?        @relation("FollowedUser", fields: [followedUserId], references: [id])
  followedUserId Int?
  action         String
  value          String?
  watched        Boolean      @default(false)
}
The error:
Database error:
ERROR: column "value" of relation "Notification" contains null values
DbError { severity: "ERROR", parsed_severity: Some(Error), code: SqlState("23502"), message: "column "value" of relation "Notification" contains null values", detail: None, hint: None, position: None, where_: None, schema: Some("public"), table: Some("Notification"), column: Some("value"), datatype: None, constraint: None, file: Some("tablecmds.c"), line: Some(5450), routine: Some("ATRewriteTable") }
Screengrab of the table:
I understand that this would fail if the value column I want to add were required, but the String? makes it optional, right?
I took a closer look at my migrations: I had added the value column in two steps. First I added it without the optional ?, but since my local db was empty there was no issue. Then I made it optional. On production, however, it was the first migration (adding value as non-optional) that failed.
My first migration was:
-- AlterTable
ALTER TABLE "Notification" ADD COLUMN "value" TEXT NOT NULL;
After that I created another migration:
ALTER TABLE "Notification" ALTER COLUMN "value" DROP NOT NULL;
So this is what caused the error.
I resolved it by rolling back the first migration: npx prisma migrate resolve --rolled-back "20220515174738_added_value_field_to_notificaion"
Then I changed the first migration file to:
-- AlterTable
ALTER TABLE "Notification" ADD COLUMN "value" TEXT;
and removed the second migration.
Now I can deploy the migrations again :)
I'm facing a unique constraint violation when doing an upsert, because the UPDATE query built by Sequelize ignores the partial index defined on the model (unless it simply doesn't matter here). I'm new to Node + Sequelize, so I might be missing something obvious, but I have gone through all the likely places for an answer, including the Sequelize source, and I can't find what I'm looking for. I really appreciate your help!
My current versions:
"pg": "7.9.0",
"sequelize": "5.21.3"
I have a model with a primary key id and two other unique indexes, one of which is on a nullable field.
module.exports.Entities = sequelize.define('entities', {
  id: { type: Sequelize.UUID, defaultValue: Sequelize.UUIDV4, allowNull: false, primaryKey: true },
  cId: { type: Sequelize.STRING, allowNull: false },
  pId: { type: Sequelize.UUID, allowNull: false },
  eKey: { type: Sequelize.INTEGER, allowNull: true }
}, {
  indexes: [
    {
      name: 'unique_c_id_p_id',
      fields: ['c_id', 'p_id'],
      unique: true
    },
    {
      name: 'unique_e_key',
      fields: ['e_key'],
      unique: true,
      where: {
        eKey: {
          [Op.not]: null
        }
      }
    }
  ]
})
and the table itself looks like below:
CREATE TABLE public.entities (
  id UUID DEFAULT uuid_generate_v4 (),
  c_id UUID NOT NULL,
  p_id UUID NOT NULL,
  e_key INTEGER DEFAULT NULL,
  CONSTRAINT ENTITY_SERVICE_PKEY PRIMARY KEY (id),
  CONSTRAINT unique_c_id_p_id UNIQUE (c_id, p_id)
);

CREATE UNIQUE INDEX unique_e_key ON public.entities (e_key) WHERE e_key IS NOT NULL;
The upsert method call looks like:
module.exports.upsert = async (Model, values) => Model.upsert(values, {returning: true})
I pass the Entities model above and the value below as arguments to this function.
{
  "id": "3169d4e2-8e2d-451e-8be0-40c0b28e2aa9",
  "c_id": "00000000-0000-0000-0000-000000000000",
  "p_id": "78bce392-4a15-4a8a-986b-c9398787345f",
  "e_key": null
}
Issue: SequelizeUniqueConstraintError
When we attempt to update an existing record using the upsert method, Sequelize performs an insert followed by an update query.
The insert hits a conflict because the record already exists, so the upsert call proceeds to the update query.
However, the UPDATE query it builds looks something like this:
"SQL statement UPDATE entities SET id='3169d4e2-8e2d-451e-8be0-40c0b28e2aa9',c_id='00000000-0000-0000-0000-000000000000',p_id='78bce392-4a15-4a8a-986b-c9398787345f',e_key=NULL
WHERE (id = '3169d4e2-8e2d-451e-8be0-40c0b28e2aa9'
OR e_key IS NULL
OR (c_id = '00000000-0000-0000-0000-000000000000' AND p_id = '78bce392-4a15-4a8a-986b-c9398787345f'))
RETURNING id\nPL/pgSQL function pg_temp_5.sequelize_upsert() line 1 at SQL statement"
Now, I do understand why it throws the unique constraint violation: because e_key is null, Sequelize adds OR e_key IS NULL to the WHERE clause, which can match more than one record, and the SET then tries to write the same values to every matched row, violating the primary key and unique constraints.
What I would like to understand is:
Why does Sequelize not exclude the e_key unique constraint based on the partial index, given that it builds the WHERE clause from the constraints defined on the model and its indexes?
Is there anything I could do to get past this issue?
Or am I missing something obvious that I could fix and try?
I really appreciate you taking the time to read and respond. Thanks!
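For what it's worth, one way to sidestep the generated WHERE clause entirely is to emulate the upsert by hand on the composite unique key. The following is only a rough sketch, not from the original post: it assumes the lookup is meant to happen on (cId, pId), that values uses the model's camelCase attribute names, and it relies only on standard Sequelize v5 calls (findOne, instance update, create):

// Hedged sketch: emulate the upsert manually so Sequelize never builds a
// WHERE clause that touches the partially indexed e_key column.
module.exports.upsertByCidPid = async (Model, values) => {
  // Look up the row by the composite unique key (c_id, p_id).
  const existing = await Model.findOne({
    where: { cId: values.cId, pId: values.pId },
  })
  if (existing) {
    // Update only the matched row; the generated WHERE targets its primary key.
    return existing.update(values)
  }
  return Model.create(values)
}

Unlike a native ON CONFLICT upsert this takes two round trips and is not atomic, so wrap it in a transaction or retry on a unique constraint error if concurrent writers are a concern.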
I had an existing PostgreSQL database with a table created like this:
CREATE TABLE product (id SERIAL PRIMARY KEY, name VARCHAR(100) DEFAULT NULL)
This table is described in a YML Doctrine2 file within a Symfony2 project:
Acme\DemoBundle\Entity\Product:
  type: entity
  table: product
  fields:
    id:
      id: true
      type: integer
      nullable: false
      generator:
        strategy: SEQUENCE
    name:
      type: string
      length: 100
      nullable: true
When I run the Doctrine Migrations diff task for the first time, I should get a version file with empty up and down methods. But what I get instead is this:
// ...
class Version20120807125808 extends AbstractMigration
{
    public function up(Schema $schema)
    {
        // this up() migration is autogenerated, please modify it to your needs
        $this->abortIf($this->connection->getDatabasePlatform()->getName() != "postgresql");
        $this->addSql("ALTER TABLE product ALTER id DROP DEFAULT");
    }

    public function down(Schema $schema)
    {
        // this down() migration is autogenerated, please modify it to your needs
        $this->abortIf($this->connection->getDatabasePlatform()->getName() != "postgresql");
        $this->addSql("CREATE SEQUENCE product_id_seq");
        $this->addSql("SELECT setval('product_id_seq', (SELECT MAX(id) FROM product))");
        $this->addSql("ALTER TABLE product ALTER id SET DEFAULT nextval('product_id_seq')");
    }
}
Why are differences detected? How can I avoid this? I tried several sequence strategies with no success.
A little update on this question.
Using Doctrine 2.4, the solution is to use the IDENTITY generator strategy:
Acme\DemoBundle\Entity\Product:
  type: entity
  table: product
  id:
    id:
      type: integer
      generator:
        strategy: IDENTITY
  fields:
    name:
      type: string
      length: 100
      nullable: true
To avoid a DROP DEFAULT on fields that have a default value in the database, the default option on the field is the way to go. Of course this could also be done with lifecycle callbacks, but keeping the default value in the database is necessary if the database is used by other apps.
For a default value like DEFAULT NOW(), the solution is the following:
Acme\DemoBundle\Entity\Product:
  type: entity
  table: product
  id:
    id:
      type: integer
      generator:
        strategy: IDENTITY
  fields:
    creation_date:
      type: datetime
      nullable: false
      options:
        default: CURRENT_TIMESTAMP
Doctrine 2.0 does not support the SQL DEFAULT keyword and will always try to drop a Postgres default value.
I have found no solution to this problem; I just let Doctrine handle the sequences itself.
This is an open bug, registered here:
http://www.doctrine-project.org/jira/browse/DBAL-903