I did some manual modifications to my database table, so now Prisma won't let me run migrate dev. I want to undo my changes so that I'm back in sync with what Prisma wants me to have.
I had a lot of changes, most of which I've managed to fix, but there's still one left that I don't know how to handle:
[*] Changed the `Product` table
[*] Altered column `id` (sequence changed)
This is a Postgres database. How do I reset the sequence to whatever value Prisma wants it to be? And how do I know what value Prisma wants?
The problem was that the id column had been re-created with an INTEGER type. The original table had a SERIAL type. After fixing that I could successfully run my migration.
The correct types to use were found in my very first migration.sql file in the migrations folder:
-- CreateTable
CREATE TABLE "Product" (
    "id" SERIAL NOT NULL,
    "name" TEXT NOT NULL,
    "desc" TEXT,
    "longDesc" TEXT,
    "price" INTEGER,
    "imgUrl" TEXT,
    "createdAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,

    PRIMARY KEY ("id")
);
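For anyone who wants to repair the column by hand instead of recreating the table, a minimal sketch of restoring the SERIAL behaviour. It assumes PostgreSQL's default sequence name for a SERIAL column ("Product_id_seq"); adjust if yours differs:

-- Recreate the sequence backing the id column and attach it to the column,
-- mirroring what SERIAL would have set up.
CREATE SEQUENCE IF NOT EXISTS "Product_id_seq" OWNED BY "Product"."id";
ALTER TABLE "Product"
    ALTER COLUMN "id" SET DEFAULT nextval('"Product_id_seq"');

-- Re-seed the sequence so the next generated id is one past the current maximum.
SELECT setval('"Product_id_seq"', COALESCE(MAX("id"), 0) + 1, false)
FROM "Product";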
I have a simple Prisma schema (showing only the relevant part):
enum ApprovalStatus {
  APPROVED
  DENIED
  PENDING
}

model Attendee {
  user       User           @relation(fields: [user_id], references: [id])
  user_id    BigInt
  event      Event          @relation(fields: [event_id], references: [id])
  event_id   BigInt
  status     ApprovalStatus @default(APPROVED)
  created_at DateTime       @default(now())
  updated_at DateTime?      @updatedAt
  deleted_at DateTime?

  @@id([user_id, event_id])
  @@unique([user_id, event_id])
  @@map("attendees")
}
After saving the schema I run npx prisma migrate dev, and it creates the migration and migrates successfully. A quick peek in Postgres shows that the table was created, and \dT+ shows that the new type, with its 3 entries, was added as well.
Then I noticed that subsequent runs of migrate started adding some odd ALTER TABLE lines for the attendees table. I checked the migration and saw nothing to justify them. Here's the migration for the attendee table; as you can see, the status column is quite clearly defined:
-- CreateTable
CREATE TABLE "attendees" (
    "user_id" BIGINT NOT NULL,
    "event_id" BIGINT NOT NULL,
    "status" "ApprovalStatus" NOT NULL DEFAULT 'APPROVED',
    "created_at" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,
    "updated_at" TIMESTAMP(3),
    "deleted_at" TIMESTAMP(3),

    CONSTRAINT "attendees_pkey" PRIMARY KEY ("user_id","event_id")
);
And now, even if there were no changes to anything in the schema and all previous migrations were properly applied, running npx prisma migrate dev (with or without --create-only) will always generate a migration with the following:
/*
  Warnings:

  - The `status` column on the `attendees` table would be dropped and recreated. This will lead to data loss if there is data in the column.

*/
-- AlterTable
ALTER TABLE "attendees" DROP COLUMN "status",
ADD COLUMN "status" "ApprovalStatus" NOT NULL DEFAULT 'APPROVED';
It's acting as if the type or name of the column had changed, even though there were no changes to the model, or to the entire schema for that matter. If I run the command more times, it creates this same migration each time with exactly the same content. I thought it might have something to do with migration order, but unless it's applying migrations randomly, the ApprovalStatus migration comes before the attendees one. I really see no reason for it to behave this way, but I'm unsure how to proceed. Any advice would be welcome.
EDIT: Additional info
"prisma": "^4.6.0"
"express": "^4.17.2"
"typescript": "^4.8.4"
psql (15.0, server 12.13 (Debian 12.13-1.pgdg110+1))
There was a regression introduced in Prisma v4.6.0 where Prisma drops and recreates an enum field, as can be seen in this issue. This was fixed in Prisma v4.6.1. Kindly update to the latest version and you should not experience this issue with Prisma Migrate.
I'm not really sure what I've done wrong, but it caused the db migrations of the other services to break.
I'm hoping someone can help me find the cause.
Thank you!
We have a db migration script that creates a table
V6__add_subscription_tables.sql
CREATE TABLE plan_subscription (
    id bigint NOT NULL,
    version bigint NOT NULL,
    team_id bigint NOT NULL,
    plan_id bigint NOT NULL,
    PRIMARY KEY (id),
    FOREIGN KEY (plan_id) REFERENCES plan (id),
    UNIQUE (team_id)
);
We added another script that inserts a plan_subscription row into the dev environment.
But in my current task the migration will fail on a fresh database, so I deleted the insertion:
V5002__add_test_data.sql
-- There is other test data here

/* THIS IS THE DATA THAT I DELETED
INSERT INTO plan_subscription VALUES (nextval('plan_subscription_sequence'), 0, 3, currval('plan_sequence'));
*/
And since I have to alter the table and add a column with a constraint, I moved the insertion of the test data into the new db migration script.
There was no apparent error, but it messed something up, and I'm not sure what the cause is.
V5004__add_date_occurred_in_plan_subscription.sql
ALTER TABLE plan_subscription
    ADD date_occurred TIMESTAMP WITHOUT TIME ZONE NOT NULL;

INSERT INTO plan_subscription VALUES
    (nextval('plan_subscription_sequence'), 0, 3, currval('plan_sequence'), current_date);
So what I did was simply remove the NOT NULL constraint and revert the deletion of the old test data.
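If the column does eventually need to be NOT NULL, I assume the usual pattern would be to add it nullable, backfill the existing rows, and only then tighten it. A minimal sketch of that idea (my assumption, not what the original script did):

-- Add the column without the constraint so the statement succeeds
-- even on a table that already contains rows.
ALTER TABLE plan_subscription
    ADD date_occurred TIMESTAMP WITHOUT TIME ZONE;

-- Backfill rows that existed before this migration.
UPDATE plan_subscription
SET date_occurred = current_date
WHERE date_occurred IS NULL;

-- Now the constraint can be added safely, fresh database or not.
ALTER TABLE plan_subscription
    ALTER COLUMN date_occurred SET NOT NULL;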
I know this is kind of long and convoluted, but I'm hoping someone will know the reason.
Thank you!
HeidiSQL generated the following creation code:
CREATE TABLE "books" (
"id" BIGINT NOT NULL,
"creation_date" TIMESTAMP NOT NULL,
"symbol" VARCHAR NOT NULL,
PRIMARY KEY ("id"),
UNIQUE INDEX "symbol" ("symbol")
)
;
COMMENT ON COLUMN "books"."id" IS E'';
COMMENT ON COLUMN "books"."creation_date" IS E'';
COMMENT ON COLUMN "books"."symbol" IS E'';
And when I try to submit, I get a syntax error.
Is it a HeidiSQL bug with PostgreSQL?
There is a recommendation note on how you should create a UNIQUE INDEX in PostgreSQL:
The preferred way to add a unique constraint to a table is ALTER TABLE ... ADD CONSTRAINT. The use of indexes to enforce unique constraints could be considered an implementation detail that should not be accessed directly. One should, however, be aware that there's no need to manually create indexes on unique columns; doing so would just duplicate the automatically-created index.
There are a couple of ways of creating a UNIQUE INDEX in PostgreSQL.
The first way is, as mentioned above, using ALTER TABLE on a previously created table:
ALTER TABLE books ADD UNIQUE ("symbol");
OR
ALTER TABLE books ADD CONSTRAINT UQ_SYMBOL UNIQUE ("symbol");
Note: this approach uses PostgreSQL's auto-generation of indexes (that is, PostgreSQL will identify that the column is unique and add an index to it).
The second way is using CREATE INDEX:
CREATE UNIQUE INDEX "symbol" ON books("symbol");
Last but not least, you can simply omit the INDEX keyword in the table creation and let PostgreSQL do the magic (create the index) for you:
CREATE TABLE "books" (
"id" BIGINT NOT NULL,
"creation_date" TIMESTAMP NOT NULL,
"symbol" VARCHAR NOT NULL,
PRIMARY KEY ("id"),
UNIQUE ("symbol")
);
Note: the syntax UNIQUE "symbol"("symbol") could be a confusion with method 2, where it is required to specify both the table and the column name (books("symbol")).
Therefore, that is not a PostgreSQL bug, but rather a HeidiSQL one.
Edit: I was able to reproduce the bug and opened an issue on GitHub.
I want to add revisioning for records in an existing application which stores data in a PostgreSQL database. I read about strategies e.g. in this question, this question and this blog post.
I think the approach of creating a second history table, which will rarely be queried, will work best. However, I do have some practical problems. Let's say this is the table I want to add revision control to:
create table people(
    id serial not null primary key,
    name varchar(255) not null
);
For this very simple table my history table could look like this:
create table people_history(
    peopleId int not null references people(id) on delete cascade on update restrict,
    revision int not null,
    revisionTimestamp timestamptz not null default current_timestamp,
    name character varying(255) not null,
    primary key(peopleId, revision)
);
And this brings up the first problems:
How do I generate the revision number?
Of course I could create a sequence from which I request revision numbers, which would be easy. However, since all people would share the same sequence, that would leave large gaps between the revisions of a single person, and it would feel more natural if the revision numbers were ascending, gap-free numbers per person.
So I am tempted to find my revision number with select max(revision)+1 from ... where peopleId=.... However, that could lead to a race condition if two threads ask for the next revision number and try to insert. That is very unlikely, I have to admit (especially in my case, where only few updates happen anyway), and it would not corrupt data, since the duplicate primary key would cause a transaction rollback, but it is not pretty either. I wonder if there is a prettier solution.
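For illustration, this is roughly the max()+1 approach as a single statement (a sketch only, and still subject to the race described above if two sessions insert a revision for the same person concurrently):

-- Insert a history row for person 1, computing the next per-person
-- revision number inline; revisionTimestamp takes its default.
INSERT INTO people_history(peopleId, revision, name)
SELECT p.id,
       COALESCE((SELECT max(h.revision)
                 FROM people_history h
                 WHERE h.peopleId = p.id), 0) + 1,
       p.name
FROM people p
WHERE p.id = 1;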
How do I insert data into the history table?
Two ways come to mind: manually, on every statement that updates the main table, or using a trigger. A trigger sounds less error-prone, as it is less likely that I forget about a query somewhere. However, with a trigger I cannot communicate to the application exactly which revision number was just created, can I? So if I want to create a couple of event tables like this:
create table peopleUserEditEvent (
    peopleId int not null,
    revision int not null,
    userId int not null references users(id) on delete set null on update restrict,
    comment text not null default '',
    primary key(peopleId, revision),
    foreign key (peopleId, revision) references people_history
);
Each such table lists some metadata for a revision, explaining why the change was made. In this case a user with a specific ID edited the data and might have supplied a comment.
In another case (and another event table) a cronjob might have changed something and documented the event, probably with no userId and no comment, but with other metadata.
To add that event data I need the revision number, and if it was created by a trigger it will be difficult to find out (or is there a practical way to do so?).
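For reference, the trigger variant I have in mind would look something like this (a minimal sketch, assuming the people and people_history tables above; the function and trigger names are mine). The revision is computed inside the trigger, which is exactly why the application never learns it:

-- Record every insert/update on people as a new history revision.
CREATE OR REPLACE FUNCTION people_history_log() RETURNS trigger AS $$
BEGIN
    INSERT INTO people_history(peopleId, revision, name)
    VALUES (NEW.id,
            COALESCE((SELECT max(revision)
                      FROM people_history
                      WHERE peopleId = NEW.id), 0) + 1,
            NEW.name);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER people_history_insert
AFTER INSERT OR UPDATE ON people
FOR EACH ROW EXECUTE PROCEDURE people_history_log();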
Well, you need one revisioning strategy for all the tables and columns you have. You can create a single table to hold all changes and insert into it every time you run an UPDATE, INSERT, or DELETE statement. Maybe this example, the changelog table from the iDempiere framework, can help you:
CREATE TABLE ad_changelog (
    ad_changelog_id NUMERIC(10,0) NOT NULL,
    ad_session_id NUMERIC(10,0) NOT NULL,
    ad_table_id NUMERIC(10,0) NOT NULL,
    ad_column_id NUMERIC(10,0) NOT NULL,
    isactive CHAR(1) DEFAULT 'Y'::bpchar NOT NULL,
    created TIMESTAMP WITHOUT TIME ZONE DEFAULT now() NOT NULL,
    createdby NUMERIC(10,0) NOT NULL,
    updated TIMESTAMP WITHOUT TIME ZONE DEFAULT now() NOT NULL,
    updatedby NUMERIC(10,0) NOT NULL,
    record_id NUMERIC(10,0) NOT NULL,
    oldvalue VARCHAR(2000),
    newvalue VARCHAR(2000),
    undo CHAR(1),
    redo CHAR(1),
    iscustomization CHAR(1) DEFAULT 'N'::bpchar NOT NULL,
    description VARCHAR(255),
    ad_changelog_uu VARCHAR(36) DEFAULT NULL::character varying,
    CONSTRAINT adcolumn_adchangelog FOREIGN KEY (ad_column_id)
        REFERENCES adempiere.ad_column(ad_column_id)
        MATCH PARTIAL ON DELETE CASCADE ON UPDATE NO ACTION
        DEFERRABLE INITIALLY DEFERRED,
    CONSTRAINT adsession_adchangelog FOREIGN KEY (ad_session_id)
        REFERENCES adempiere.ad_session(ad_session_id)
        MATCH PARTIAL ON DELETE NO ACTION ON UPDATE NO ACTION
        DEFERRABLE INITIALLY DEFERRED,
    CONSTRAINT adtable_adchangelog FOREIGN KEY (ad_table_id)
        REFERENCES adempiere.ad_table(ad_table_id)
        MATCH PARTIAL ON DELETE CASCADE ON UPDATE NO ACTION
        DEFERRABLE INITIALLY DEFERRED
)
WITH (oids = false);
CREATE INDEX ad_changelog_speed ON adempiere.ad_changelog
USING btree (ad_table_id, record_id);
CREATE UNIQUE INDEX ad_changelog_uu_idx ON adempiere.ad_changelog
USING btree (ad_changelog_uu COLLATE pg_catalog."default");
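Recording a single column change could then look roughly like this (a hypothetical row: every id below is made up for illustration and would in practice come from the framework's session and data-dictionary metadata, which the foreign keys reference):

INSERT INTO ad_changelog
    (ad_changelog_id, ad_session_id, ad_table_id, ad_column_id,
     createdby, updatedby, record_id, oldvalue, newvalue)
VALUES
    (1000001, 42, 318, 2155, 100, 100, 5001, 'Old name', 'New name');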
Right now I have two tables: one that contains a compound primary key, and another that references one of the values of that primary key in what is a one-to-many relationship between Product and Mapping. The following is an idea of the setup:
CREATE TABLE dev."Product"
(
"Id" serial NOT NULL,
"ShortCode" character(6),
CONSTRAINT "ProductPK" PRIMARY KEY ("Id")
)
CREATE TABLE dev."Mapping"
(
"LookupId" integer NOT NULL,
"ShortCode" character(6) NOT NULL,
CONSTRAINT "MappingPK" PRIMARY KEY ("LookupId", "ShortCode")
)
Since the ShortCode is displayed to the user as a six-character string, I don't want to add another table just to get a proper foreign key reference, but creating one with the current design is not allowed by PostgreSQL. As such, how can I add a check so that the short code in the Mapping table is verified to exist?
Depending on the fine print of your requirements and your version of Postgres, I would suggest a TRIGGER or a NOT VALID CHECK constraint; the trigger variant is sketched below.
We have just discussed the matter in depth in this related question on dba.SE:
Disable all constraints and table checks while restoring a dump
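For illustration, the trigger variant could look roughly like this (a minimal sketch, assuming the dev."Product" and dev."Mapping" tables above; the function and trigger names are mine):

-- Reject a Mapping row whose ShortCode has no counterpart in Product.
CREATE OR REPLACE FUNCTION dev.check_shortcode_exists() RETURNS trigger AS $$
BEGIN
    IF NOT EXISTS (SELECT 1 FROM dev."Product" p
                   WHERE p."ShortCode" = NEW."ShortCode") THEN
        RAISE EXCEPTION 'ShortCode % not found in Product', NEW."ShortCode";
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER mapping_shortcode_check
    BEFORE INSERT OR UPDATE ON dev."Mapping"
    FOR EACH ROW EXECUTE PROCEDURE dev.check_shortcode_exists();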
If I understand you correctly, you need a UNIQUE constraint on "Product"."ShortCode". Surely it should be declared NOT NULL, too.
CREATE TABLE dev."Product"
(
"Id" serial NOT NULL,
"ShortCode" character(6) NOT NULL UNIQUE,
CONSTRAINT "ProductPK" PRIMARY KEY ("Id")
);
CREATE TABLE dev."Mapping"
(
"LookupId" integer NOT NULL,
"ShortCode" character(6) NOT NULL REFERENCES dev."Product" ("ShortCode"),
CONSTRAINT "MappingPK" PRIMARY KEY ("LookupId", "ShortCode")
);
Your original "Product" table will allow this INSERT statement to succeed, but it shouldn't.
insert into dev."Product" ("ShortCode") values
(NULL), (NULL), ('ABC'), ('ABC'), ('ABC');
Data like that is just about useless.
select * from dev."Product"
id ShortCode
--
1
2
3 ABC
4 ABC
5 ABC