I've been trying to use the mattes/migrate package but I can't seem to get it to actually do anything. The database runs on postgres and I interact with it through sqlx.
I've gone through the readme on github, and applied the following code:
// use synchronous versions of migration functions ...
allErrors, ok := migrate.UpSync("postgres://<my_url_string>", "./app/database/migrations")
if !ok {
    fmt.Println("Oh no ...")
    // do sth with allErrors slice
}
My schema is initialized like this:
// sqlx's initialized DB is imported in the database package
func SyncSchemas() {
    database.DB.MustExec(schema)
}
var schema =
`CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
CREATE TABLE IF NOT EXISTS examples (
created_at TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT NOW(),
deleted_at text DEFAULT NULL,
id UUID PRIMARY KEY UNIQUE NOT NULL DEFAULT uuid_generate_v4()
);`
type Example struct {
    ID        string     `json:"id" db:"id"`
    CreatedAt time.Time  `json:"created_at" db:"created_at"`
    DeletedAt *time.Time `json:"deleted_at" db:"deleted_at"`
}
It runs without error, but that's all it seems to be doing at the moment. Shouldn't it keep track of my schemas or something? I previously used gorm as my ORM, and there I just had to run AutoMigrate() on a schema and it automatically created and applied migrations as I changed the schema. I would like the same functionality with mattes/migrate, but I can't seem to find an example or tutorial that shows how to do it.
How am I supposed to do this?
One suggestion whenever you code in Go: check the errors. Do it this way:
if allErrors, ok := migrate.UpSync("postgres://<my_url_string>", "./migrations"); !ok {
    fmt.Println("Oh no ...", allErrors)
    // do sth with allErrors slice
}
Since I don't know what your error is, at least this way you can see what the error actually says.
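Also note that mattes/migrate does not generate migrations from your Go structs the way gorm's AutoMigrate does: it only applies the versioned SQL files it finds in the directory you point it at and keeps track of the applied version in the database. If ./app/database/migrations is empty, UpSync will most likely succeed without doing anything. Here is a minimal sketch of what that directory could contain; the file names are made up, following the version_name.up.sql / .down.sql convention from the package's README, and the SQL simply mirrors the schema from the question:
-- ./app/database/migrations/0001_create_examples.up.sql
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
CREATE TABLE IF NOT EXISTS examples (
    created_at TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT NOW(),
    deleted_at TEXT DEFAULT NULL,
    id UUID PRIMARY KEY NOT NULL DEFAULT uuid_generate_v4()
);

-- ./app/database/migrations/0001_create_examples.down.sql
DROP TABLE IF EXISTS examples;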
I have a simple Prisma schema (I'm only showing the relevant part):
enum ApprovalStatus {
  APPROVED
  DENIED
  PENDING
}
model Attendee {
  user       User           @relation(fields: [user_id], references: [id])
  user_id    BigInt
  event      Event          @relation(fields: [event_id], references: [id])
  event_id   BigInt
  status     ApprovalStatus @default(APPROVED)
  created_at DateTime       @default(now())
  updated_at DateTime?      @updatedAt
  deleted_at DateTime?

  @@id([user_id, event_id])
  @@unique([user_id, event_id])
  @@map("attendees")
}
After saving the schema I run npx prisma migrate dev, and it creates the migration and migrates successfully. A quick peek in Postgres shows that the table is created, and a \dT+ shows that the new enum type and its 3 values have been added as well.
Then I noticed that subsequent migration runs started adding strange ALTER TABLE statements for the attendees table, for no reason. I checked the schema and there was nothing to justify it. Here's the migration for the attendees table; as you can see, the status column is quite clearly defined:
-- CreateTable
CREATE TABLE "attendees" (
    "user_id" BIGINT NOT NULL,
    "event_id" BIGINT NOT NULL,
    "status" "ApprovalStatus" NOT NULL DEFAULT 'APPROVED',
    "created_at" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,
    "updated_at" TIMESTAMP(3),
    "deleted_at" TIMESTAMP(3),
    CONSTRAINT "attendees_pkey" PRIMARY KEY ("user_id","event_id")
);
And now, even if there were no changes to anything in the schema, and all previous migrations were properly applied, running npx prisma migrate dev (with or without --create-only) will always generate a migration with following:
/*
Warnings:
- The `status` column on the `attendees` table would be dropped and recreated. This will lead to data loss if there is data in the column.
*/
-- AlterTable
ALTER TABLE "attendees" DROP COLUMN "status",
ADD COLUMN "status" "ApprovalStatus" NOT NULL DEFAULT 'APPROVED';
It's acting as if the type or name of the column had changed, even though there were no changes to the model, or even to the entire schema for that matter. If I run the command again, it creates the same migration each time with exactly the same content. I thought it might have something to do with migration order, but unless it applies migrations in random order, the ApprovalStatus migration comes before the attendees one. I really see no reason for it to behave this way, but I'm uncertain how to proceed. Any advice would be welcome.
EDIT: Additional info
"prisma": "^4.6.0"
"express": "^4.17.2"
"typescript": "^4.8.4"
psql (15.0, server 12.13 (Debian 12.13-1.pgdg110+1))
There was a regression introduced in Prisma v4.6.0 where Prisma drops and recreates an enum field as can be seen in this issue. This was fixed in Prisma v4.6.1. Kindly update to the latest version and you should not experience this issue with Prisma migrate.
When I create my tables in the gorm database, it adds columns to the tables that I don't want. I'm not sure how it's adding these extra fields. This causes me to run into an error that says "pq: null value in column "user_id" violates not-null constraint"; "user_id" is the unwanted column that gets added. I'm using gorm and PostgreSQL.
I have a many-to-many relationship between my two tables. My first table is created properly, and my second table, stores, is created with the provided fields plus two unwanted fields: "user_id" and "stores_id". I've tried removing the many-to-many relationship to see if that was the issue, and I've tried dropping the tables and recreating them with different fields. Regardless, I have not been able to get rid of the two extra columns.
The first (working) table:
type User struct {
    gorm.Model
    ID     int     `json:"u_id"`
    Name   string  `json:"u_name"`
    Stores []Store `gorm:"many2many:stores;" json:"stores"`
}
When I execute '\d users', I get the following columns: id, created_at, updated_at, deleted_at, name.
The second (problematic) table:
type Stores struct {
    gorm.Model
    ID    int    `json:"s_id"`
    NUM   int    `gorm:"unique" json:"s_num"`
    Users []User `gorm:"many2many:user" json:"users"`
}
When I execute '\d stores', I get the following columns: user_id, vehicle_id, id, created_at, updated_at, deleted_at, num.
I'm executing the creation of these tables through the following code:
db.AutoMigrate(&User{})
db.AutoMigrate(&Store{})
On another note, if I add gorm:"primary_key;auto_increment" to the ID values in my structs, I get the error "pq: column "user_id" appears twice in primary key constraint". I was able to get around this by removing the primary_key and auto_increment attributes, running AutoMigrate(), then adding them back in and running AutoMigrate() again - that worked fine.
I've also tried manually inserting a user_id and store_id. This works fine, except that I'd have to generate new ones every time because they require uniqueness. I understand that the error "pq: null value in column "user_id" violates not-null constraint" is caused by the fact that I'm not providing a user_id or store_id when I'm creating my store. I'm simply confused why a user_id and store_id column is being generated at all, and I'm hoping I can fix that.
This is what gorm.Model looks like
type Model struct {
    ID        uint `gorm:"primary_key"`
    CreatedAt time.Time
    UpdatedAt time.Time
    DeletedAt *time.Time
}
When we embed gorm.Model inside a struct, we are adding gorm.Model's default fields to our struct.
type Stores struct {
    gorm.Model
    ....
so your User model will effectively look something like this:
type User struct {
    ID        uint `gorm:"primary_key"`
    CreatedAt time.Time
    UpdatedAt time.Time
    DeletedAt *time.Time
    ID     int     `json:"u_id"`
    Name   string  `json:"u_name"`
    Stores []Store `gorm:"many2many:stores;" json:"stores"`
}
That error may be due to the duplicate primary key. Try renaming ID int `json:"u_id"` to UserID. You need to update Stores too.
Fixed the duplicate ID errors by removing gorm.Model, as Akshaydeep Girl pointed out what embedding gorm.Model entails. As for the random 'user_id' and 'store_id' that kept being added automatically, they came from the many2many gorm relationship. I was able to remove them by switching the order of migration.
func DBMigrate(db *gorm.DB) *gorm.DB {
    db.AutoMigrate(&Store{})
    db.AutoMigrate(&User{})
    return db
}
When I dropped both tables and re-compiled/ran my project with the new order of migration, the stores table was created without the random 'user_id' and 'store_id', and the users table didn't have those either.
I'm using a postgraphql autogenerated mutation to create a row, via GraphQL, in a Postgres table that has a column of datatype UUID[].
However, there doesn't seem to be a way to save this UUID[] data with GraphQL. Is this a datatype that GraphQL doesn't account for, or am I forming the array incorrectly?
When I go to create the row in graphiql:
mutation {
  createJob(
    input: {
      user_ids: ["5b7323ac-e235-4edb-bbf9-97495d9a42a1"],
      instructions: "Job Instructions",
      publishedDate: "2017-06-07"
    }
  )
}
I get the following error:
"message": "column \"user_ids\" is of type uuid[] but expression is of type text[]"
Is a UUID technically not stored like text? I've tried different ways of forming the UUID array, but nothing seems to work.
You can probably work around this problem by creating an implicit cast:
CREATE CAST (text[] AS uuid[])
WITH INOUT
AS IMPLICIT ;
PostgreSQL will then automatically do the conversion from text[] to uuid[], which seems to be exactly the step that's missing.
NOTE: The above code must be executed by the owner of type uuid[] or text[], which usually means postgres or the database owner.
To check that it works, you may just do:
CREATE TABLE t
(
ids uuid[]
) ;
INSERT INTO t
(ids)
VALUES
(ARRAY['5b7323ac-e235-4edb-bbf9-97495d9a42a1','5b7323ac-e235-4edb-bbf9-97495d9a42a2']),
(ARRAY['6b7323ac-e235-4edb-bbf9-97495d9a42a1']) ;
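If you want to be sure the new implicit cast is what performs the conversion (rather than the unquoted literals simply being parsed as uuids), you can also push the values through text[] explicitly; without the cast this fails with the same "is of type uuid[] but expression is of type text[]" error, and with the cast in place it should succeed:
INSERT INTO t
(ids)
VALUES
(ARRAY['5b7323ac-e235-4edb-bbf9-97495d9a42a1']::text[]) ;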
... and then, test it with GraphQL
This was a bug in PostGraphQL; it's been fixed in version 4 (which has not been released yet, but you can install it via npm install postgraphql@next).
More information: https://github.com/postgraphql/postgraphql/issues/516
I have a table in PostgreSQL represented as the following Go struct:
type AppLog struct {
    ID         int // set to auto increment in DB, also the primary key
    Event      string
    CreateTime time.Time
}
I configured monthly table partitioning with the above as the base table and an insert trigger that routes the data into the child table for the current month, using the create-time value as the partition key.
[the trigger function etc are omitted for brevity]
When I try to insert into the AppLog table, Postgres routes the operation to the appropriate child table, say AppLog_2017-05 (the current month's table), but the insert fails with the following error:
INSERT INTO "app_logs" ("event", "create_time")
VALUES ('device /dev/sdc is now ready','2017-05-26T15:04:30+05:30')
RETURNING "app_logs"."id"
ERROR: sql: no rows in result set
When the same query is run in the Postgres Shell, it runs fine.
Can someone help me understand how to do inserts using GORM in PostgreSQL where table partitioning is configured behind the scenes? I am not sure if this is a problem with GORM, the Go PostgreSQL driver, or Go's database/sql package. Or if I am missing anything.
Any help will be highly appreciated.
I found the answer. It was not an issue with GORM or go-pq. I had made a mistake in my trigger function when returning the inserted values from the child table.
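For anyone hitting the same error: with trigger-based partitioning, the routing trigger usually ends with RETURN NULL so that the row is not also stored in the parent table. That means INSERT ... RETURNING on the parent reports zero rows, which database/sql (and therefore GORM) surfaces as "sql: no rows in result set". A rough sketch of such a trigger function; the function and child-table names here are made up, not the asker's actual code:
CREATE OR REPLACE FUNCTION app_logs_insert_trigger()
RETURNS trigger AS $$
BEGIN
    -- route the row to the child table for the current month
    INSERT INTO app_logs_2017_05 VALUES (NEW.*);
    -- RETURN NULL cancels the insert on the parent table, so the
    -- client's RETURNING clause gets no row back
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;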
I am using a Postgres DB and I imported data from a CSV file. When I try to add a new entry from the Grails create page, it gives:
ERROR: duplicate key value violates unique constraint "course_pkey" Detail: Key (id)=(34) already exists.
I have 697 entries already in the table. How can I have Grails continue to save the entry with id=698?
Thank you in advance.
You most likely want to use a SERIAL type for your primary key in the Postgres database. A SERIAL type auto-increments and sets the column's default to the next value. Grails defaults to using Hibernate's native id generator strategy (see section 5.1.2.2.1), which will use the underlying database. Then you simply need to not specify the id in your domain instance and let it be generated.
However, if you really need your exact use case of trying the next id value, you can use a try/catch block.
try {
    course.save(flush: true)
} catch (DuplicateKeyException e) { // not sure what the exact exception is
    course.id++
    course.save(flush: true)
}
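If the ids were imported from the CSV with explicit values, the sequence backing the id column may still be behind the existing data. Assuming the table is course and its id column is backed by a serial sequence (as suggested above), you can bump the sequence to the current maximum like this:
SELECT setval(pg_get_serial_sequence('course', 'id'),
              (SELECT MAX(id) FROM course));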
You can check the id sequence from Hibernate:
select * from hibernate_sequence ;
http://grails.1312388.n4.nabble.com/primary-key-constraint-violations-td1356352.html
For PostgreSQL, you can edit the last_value field in pgAdmin.
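Editing last_value by hand is equivalent to resetting the sequence in SQL. Assuming the hibernate_sequence above is the one handing out ids, and given that the highest existing id is 697:
ALTER SEQUENCE hibernate_sequence RESTART WITH 698;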