GORM foreign key doesn't seem to add proper fields - postgresql

I have the following models:
type Drink struct {
    gorm.Model // Adds some metadata fields to the table
    ID       uuid.UUID `gorm:"type:uuid;primary key"`
    Name     string    `gorm:"index;not null;"`
    Volume   float64   `gorm:"not null;type:decimal(10,2)"`
    ABV      float64   `gorm:"not null;type:decimal(10,2);"`
    Price    float64   `gorm:"not null;type:decimal(10,2);"`
    Location Location  `gorm:"ForeignKey:DrinkID;"`
}
type Location struct {
    gorm.Model // Adds some metadata fields to the table
    ID      uuid.UUID `gorm:"primary key;type:uuid"`
    DrinkID uuid.UUID
    Name    string `gorm:"not null;"`
    Address string `gorm:"not null;type:decimal(10,2)"`
    Phone   int    `gorm:"not null;type:decimal(10,0);"`
}
However, when I run the program, both tables are created, but there is no location field in the Drink table.
My database looks the same after the migrations regardless of whether I drop the tables first.
I have a sneaking feeling it might be because I am not using the default gorm ID. If that's the case, can anyone point me to the proper way to override the default ID with a UUID instead of a uint? And if that's not the issue, please help; I've been working on this for a few days now and I really don't want to take the "easy" road of just using the defaults gorm provides. I actually want to understand what is going on here and how to properly do what I am trying to do. I get no errors when running the API, and the migration appears to run as well; the fields I have defined are simply not showing up in the database, which means the frontend won't be able to add data properly.
What I WANT to happen here is that a list of stores will be available in the front-end, and when a user adds a drink, they will have to select from that list of stores. Each drink should have only one store, since drink prices differ between stores. So technically there would be many "repeated" drinks in the drink table, each connected to a different Location.

First point: since you are using a custom primary key, you should not use gorm.Model, as it already contains an ID field. Reference
Second point: according to your description, a store (location) has a one-to-many relationship with drinks. That means a store can have multiple drinks, but a drink belongs to only one store. In a one-to-many relationship, the reference (relation) id lives on the many side, which in your case is the drink table. Your structs should then look like this:
MyModel Struct
type MyModel struct {
    CreatedAt time.Time
    UpdatedAt time.Time
    DeletedAt gorm.DeletedAt `gorm:"index"`
}
Location Struct (Store)
type Location struct {
    MyModel
    ID uuid.UUID `gorm:"primary key;type:uuid"`
    // other columns...
    Drinks []Drink
}
Drink Struct
type Drink struct {
    MyModel
    ID uuid.UUID `gorm:"type:uuid;primary key"`
    // other columns...
    LocationID uuid.UUID // This is important
}
Then gorm will automatically treat LocationID in the drink table as a reference to the ID field of the Location table. You can also instruct this explicitly with gorm:"foreignKey:LocationID;references:ID" on the Drinks slice field in the Location struct.
Reference
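As a hedged illustration, here is a minimal end-to-end sketch assuming GORM v2 (gorm.io/gorm), its PostgreSQL driver, and github.com/google/uuid for the UUID primary keys; the DSN and names are placeholders:

package main

import (
    "time"

    "github.com/google/uuid"
    "gorm.io/driver/postgres"
    "gorm.io/gorm"
)

type MyModel struct {
    CreatedAt time.Time
    UpdatedAt time.Time
    DeletedAt gorm.DeletedAt `gorm:"index"`
}

type Location struct {
    MyModel
    ID     uuid.UUID `gorm:"type:uuid;primaryKey"`
    Name   string    `gorm:"not null"`
    Drinks []Drink   `gorm:"foreignKey:LocationID;references:ID"`
}

type Drink struct {
    MyModel
    ID         uuid.UUID `gorm:"type:uuid;primaryKey"`
    Name       string    `gorm:"index;not null"`
    LocationID uuid.UUID `gorm:"type:uuid"` // reference on the "many" side
}

func main() {
    // Placeholder DSN; adjust for your environment.
    db, err := gorm.Open(postgres.Open("dbname=drinks sslmode=disable"), &gorm.Config{})
    if err != nil {
        panic(err)
    }

    // Creates both tables; the drinks table gets a location_id column.
    if err := db.AutoMigrate(&Location{}, &Drink{}); err != nil {
        panic(err)
    }

    // Creating a store together with its drinks fills in LocationID automatically.
    store := Location{
        ID:   uuid.New(),
        Name: "Corner Shop",
        Drinks: []Drink{
            {ID: uuid.New(), Name: "Cola"},
        },
    }
    db.Create(&store)
}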


How to scan into nested structs with sqlx?

Let's assume that I have two models,
type Customer struct {
    Id      int     `json:"id" db:"id"`
    Name    string  `json:"name" db:"name"`
    Address Address `json:"address"`
}
type Address struct {
    Street string `json:"street" db:"street"`
    City   string `json:"city" db:"city"`
}
// ...
customer := models.Customer{}
err := db.Get(&customer, `select * from users where id=$1 and name=$2`, id, name)
But this scan throws an error: missing destination name street in *models.Customer.
Am I doing something wrong? As you can see, I already added the corresponding db tags. I double-checked, so case sensitivity shouldn't be the problem.
Or is it just not possible with https://github.com/jmoiron/sqlx?
I can see it in the documentation but still couldn't figure out how to solve it.
http://jmoiron.github.io/sqlx/#advancedScanning
The users table is declared as:
CREATE TABLE `users` (
    `id` varchar(256) NOT NULL,
    `name` varchar(150) NOT NULL,
    `street` varchar(150) NOT NULL,
    `city` varchar(150) NOT NULL
)
The very link you posted gives you a hint about how to do this:
StructScan is deceptively sophisticated. It supports embedded structs, and assigns to fields using the same precedence rules that Go uses for embedded attribute and method access
So given your DB schema, you can simply embed Address into Customer:
type Customer struct {
    Id   int    `json:"id" db:"id"`
    Name string `json:"name" db:"name"`
    Address
}
In your original code, Address was a regular field with its own db tag. That doesn't work, and in any case your schema has no address column at all (it appears you edited it out of your code snippet).
By embedding the struct into Customer instead, the Address fields, including their tags, are promoted into Customer, and sqlx is able to populate them from your query result.
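For illustration, a minimal sketch of the full scan with the embedded struct; the DSN and query arguments are placeholders, and the types mirror the ones above:

package main

import (
    "fmt"

    "github.com/jmoiron/sqlx"
    _ "github.com/lib/pq"
)

type Address struct {
    Street string `json:"street" db:"street"`
    City   string `json:"city" db:"city"`
}

type Customer struct {
    Id   int    `json:"id" db:"id"`
    Name string `json:"name" db:"name"`
    Address // embedded: street and city are promoted into Customer
}

func main() {
    // Placeholder DSN; adjust for your environment.
    db := sqlx.MustConnect("postgres", "dbname=shop sslmode=disable")

    var customer Customer
    // StructScan matches the street and city columns against the
    // promoted fields of the embedded Address.
    err := db.Get(&customer, `select * from users where id=$1 and name=$2`, 1, "foo")
    if err != nil {
        panic(err)
    }
    fmt.Println(customer.Street, customer.City)
}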
Warning: embedding the field will also flatten the output of any JSON marshalling. It will become:
{
    "id": 1,
    "name": "foo",
    "street": "bar",
    "city": "baz"
}
If you want to place street and city into a nested JSON address object, as in your original struct tags, the easiest way is probably to remap the DB struct to your original type.
You could also scan the query result into a map[string]interface{} but then you have to be careful about how Postgres data types are represented in Go.
I had the same problem and came up with a slightly more elegant solution than blackgreen's.
He's right that the easiest way is to embed the structs, but I do it in a temporary object instead of making the original messier.
You then add a function to convert the temp (flat) object into your real (nested) one.
type Customer struct {
    Id      int     `json:"id" db:"id"`
    Name    string  `json:"name" db:"name"`
    Address Address `json:"address"`
}
type Address struct {
    Street string `json:"street" db:"street"`
    City   string `json:"city" db:"city"`
}
type tempCustomer struct {
    Customer
    Address
}
func (c *tempCustomer) ToCustomer() Customer {
    customer := c.Customer
    customer.Address = c.Address
    return customer
}
Now you can scan into tempCustomer and simply call tempCustomer.ToCustomer before you return. This keeps your JSON clean and doesn't require a custom scan function.
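A short sketch of how the temporary type could be used in a query helper (the function name is hypothetical; db is an *sqlx.DB as in the question):

// GetCustomer is a hypothetical helper: it scans the flat row into the
// temporary embedded struct, then converts it to the nested shape.
func GetCustomer(db *sqlx.DB, id int, name string) (Customer, error) {
    var tmp tempCustomer
    if err := db.Get(&tmp, `select * from users where id=$1 and name=$2`, id, name); err != nil {
        return Customer{}, err
    }
    return tmp.ToCustomer(), nil
}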

gorm many2many and additional fields in association table

I have a many2many association (it is used to return JSON). It's declared in a model:
// models/school.go
type School struct {
    ID             int             `gorm:"primary_key"`
    Name           string          `gorm:"not null"`
    Accreditations []Accreditation `gorm:"many2many:school_accreditation;"`
}
It works well; the association is returned in the JSON. The problem is that the school_accreditation table has an additional field, and it isn't included in the response.
I have tried declaring a model for the association, as proposed in this answer:
// models/schoolAccreditation.go
package models

import "time"

// many to many
type SchoolAccreditation struct {
    StartedAt time.Time `gorm:"not null"`
}
But it doesn't work so far. Is there some additional configuration to declare or to modify?
Answering myself: I marked the extra field in the linked model as ignored and it works; the column is automatically retrieved from the association table.
type Accreditation struct {
    // "accreditation" table
    ID          int `gorm:"primary_key"`
    Name        string
    Description string
    // from the "school_accreditation" table, so the field is set as ignored with "-"
    EndAt time.Time `gorm:"-"`
}
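For reference, newer GORM versions (the gorm.io/gorm v2 API, which is an assumption here, since the question uses v1-style tags) support this case directly: you model the join table explicitly and register it with SetupJoinTable, making extra columns such as started_at first-class fields. A minimal sketch:

package main

import (
    "time"

    "gorm.io/driver/postgres"
    "gorm.io/gorm"
)

type School struct {
    ID             int             `gorm:"primaryKey"`
    Name           string          `gorm:"not null"`
    Accreditations []Accreditation `gorm:"many2many:school_accreditation;"`
}

type Accreditation struct {
    ID          int `gorm:"primaryKey"`
    Name        string
    Description string
}

// Explicit model for the school_accreditation join table,
// carrying the extra started_at column.
type SchoolAccreditation struct {
    SchoolID        int       `gorm:"primaryKey"`
    AccreditationID int       `gorm:"primaryKey"`
    StartedAt       time.Time `gorm:"not null"`
}

func main() {
    // Placeholder DSN; adjust for your environment.
    db, err := gorm.Open(postgres.Open("dbname=schools sslmode=disable"), &gorm.Config{})
    if err != nil {
        panic(err)
    }
    // Register the custom join table before migrating or querying.
    if err := db.SetupJoinTable(&School{}, "Accreditations", &SchoolAccreditation{}); err != nil {
        panic(err)
    }
    if err := db.AutoMigrate(&School{}, &Accreditation{}); err != nil {
        panic(err)
    }
}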

Update fields create new foreignkey items instead of updating them

I've got a one-to-one relationship, Location, working with PostgreSQL:
type App struct {
    gorm.Model
    PersoID    string   `gorm:"primary_key;unique" json:"perso_id"`
    Location   Location `gorm:"foreignkey:location_id"`
    LocationID *uint    `json:"-" gorm:"type:integer REFERENCES locations(id)"`
    Users      []User   `json:"users,omitempty" gorm:"many2many:app_users;"`
}
type Location struct {
    ID        uint      `json:"-" gorm:"primary_key;unique"`
    CreatedAt time.Time `json:"-"`
    UpdatedAt time.Time
    Lat       float64 `json:"lat"`
    Long      float64 `json:"long"`
    Address   string  `json:"address"`
    Country   string  `json:"country"`
}
But as you can see, the App structure has its own primary key, a string id (in addition to the GORM one), and this blocks gorm from doing a simple db.Save().
So I've tried to do:
func (previousModel *App) UpdateAndReturnedUpdated(db *gorm.DB, appUpdated *App) error {
    if err := db.Model(previousModel).Update(appUpdated).Error; err != nil {
        return err
    }
    return db.Preload("Location").First(previousModel, App{PersoID: appUpdated.PersoID}).Error
}
With this, the App is correctly updated (fantastic!), but not the Location.
The location is re-created in the db with the new values, and the foreign key LocationID on the App is updated to point at the new row.
This is super annoying, because if I want to update only the address without specifying the other fields, they are just re-created with their zero values (lat/long will be 0).
Any ideas about how to update the foreign key? Thanks!
You can use this to avoid modifying the associated entities:
db.Set("gorm:save_associations", false).Save(previousModel)
or to save only one field of the struct:
db.Model(previousModel).UpdateColumn("PersoID", appUpdated.PersoID)
and if you want to update an associated entity, and not the relation itself, you will need to update the entity directly:
db.Model(Location{ID: *previousModel.LocationID}).Update(appUpdated.Location)
Well, once more (but let's not blame the maintainer, he works alone), the main problem comes from incomplete documentation.
All my problems were solved once I made sure to preload the foreign object and set its ID; the missing ID is what caused the foreign key row to be inserted instead of updated.
Then, to update it properly, I had to do:
db.
    Model(previousModel).
    UpdateColumns(appUpdated).
    Model(&previousModel.Location).
    Updates(appUpdated.Location)
Was very painful, but at least it's solved.
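For readers hitting the same issue, the chained call above is equivalent to two separate updates, which may read more clearly. A sketch under the same assumptions (the Location was preloaded and its ID is set):

func (previousModel *App) UpdateAndReturnedUpdated(db *gorm.DB, appUpdated *App) error {
    // Update the App's own columns first.
    if err := db.Model(previousModel).UpdateColumns(appUpdated).Error; err != nil {
        return err
    }
    // Then update the existing Location row in place rather than letting
    // gorm insert a new one; this relies on previousModel.Location having its ID set.
    if err := db.Model(&previousModel.Location).Updates(appUpdated.Location).Error; err != nil {
        return err
    }
    return db.Preload("Location").First(previousModel, App{PersoID: appUpdated.PersoID}).Error
}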

How to work with multiple types in single collection with mgo

I'm very new to Go and am trying to write a simple event-sourcing user-management web API using MongoDB as the backing database. I have a User, which looks something like this:
type User struct {
    Id       bson.ObjectId `json:"id" bson:"_id"`
    UserName string        `json:"username" bson:"username"`
    Email    string        `json:"email" bson:"email"`
    PwdHash  string        `json:"pwd_hash" bson:"pwd_hash"`
    FullName string        `json:"fullname" bson:"fullname"`
}
and three events that happen to a user when somebody uses the API:
type UserCreatedEvent struct {
    UserId bson.ObjectId `json:"id" bson:"_id"`
    // time when the event was issued
    CreatedEpoch time.Time `json:"created_epoch" bson:"created_epoch"`
    // id of the user that made the change
    IssuedByUserId bson.ObjectId `json:"issuedby_userid" bson:"issuedby_userid"`
}
type UserDeletedEvent struct {
    UserId       bson.ObjectId `json:"id" bson:"_id"`
    CreatedEpoch time.Time     `json:"created_epoch" bson:"created_epoch"`
    // id of the user that made the change
    IssuedByUserId bson.ObjectId `json:"issuedby_userid" bson:"issuedby_userid"`
}
type UserUpdatedEvent struct {
    UserId       bson.ObjectId `json:"id" bson:"_id"`
    CreatedEpoch time.Time     `json:"created_epoch" bson:"created_epoch"`
    // id of the user that made the change
    IssuedByUserId       bson.ObjectId `json:"issuedby_userid" bson:"issuedby_userid"`
    ChangedFieldName     string        `json:"changed_field_name" bson:"changed_field_name"`
    NewChangedFieldValue string        `json:"new_changed_field_value" bson:"new_changed_field_value"`
}
Now I'm stuck on saving and retrieving events from the db. I want to store them in a single collection, so that I have a full, plain history of user modifications. But I can't figure out how to correctly store the event type name as a mongo document field and then use it in searches. What is the idiomatic Go way to do this?
I'll be grateful for any help.
What's nice about a non-relational database is that a little redundancy is OK, and retrieval is faster because you're not joining things together. You could just add your bottom three objects as properties on your User in whatever fashion makes sense for you and your data. Then you wouldn't need to store the UserId on those objects either.
If you need to search over the events quickly, you could create another collection to hold them. You'd be inserting into two collections, but retrieval times and logic should be pretty good and easy to write.
Your new Event would be something like:
type UserEventType int

const (
    Created UserEventType = iota
    Deleted
    Updated
)

type UserEvent struct {
    UserId       bson.ObjectId `json:"id" bson:"_id"`
    CreatedEpoch time.Time     `json:"created_epoch" bson:"created_epoch"`
    // id of the user that made the change
    IssuedByUserId       bson.ObjectId `json:"issuedby_userid" bson:"issuedby_userid"`
    ChangedFieldName     string        `json:"changed_field_name,omitempty" bson:"changed_field_name,omitempty"`
    NewChangedFieldValue string        `json:"new_changed_field_value,omitempty" bson:"new_changed_field_value,omitempty"`
    EventType            UserEventType `json:"user_event_type" bson:"user_event_type"`
}
Notice the omitempty on the fields that are optional depending on the type of event.
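A sketch of how such events could be written and read back with mgo; the database name, collection name, and adminId are placeholders:

// logUserEvent inserts one event into the shared collection and then
// looks events up again by type.
func logUserEvent(session *mgo.Session, adminId bson.ObjectId) error {
    c := session.DB("app").C("user_events")

    // Insert an event, tagging it with its type.
    err := c.Insert(UserEvent{
        UserId:         bson.NewObjectId(),
        CreatedEpoch:   time.Now(),
        IssuedByUserId: adminId,
        EventType:      Created,
    })
    if err != nil {
        return err
    }

    // Search the single collection by the stored event type field.
    var created []UserEvent
    return c.Find(bson.M{"user_event_type": Created}).All(&created)
}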
You really aren't supposed to store different objects in the same collection. Here's a line from the Mongo documentation:
MongoDB stores documents in collections. Collections are analogous to tables in relational databases.
In case you aren't familiar with relational databases, a table would essentially represent one type of object.

`gorm` Ignoring `sql:"index"` Tags

Why is gorm ignoring the sql:"index" tags? No indexes get created.
The database in use here is PostgreSQL (importing _ "github.com/lib/pq"). This Model struct is used, because the default gorm.Model uses an auto-incrementing number (serial) as the primary key and I wanted to set the id myself:
type Model struct {
    ID        int64 `sql:"type:bigint PRIMARY KEY;default:0"`
    CreatedAt time.Time
    UpdatedAt time.Time
    DeletedAt *time.Time `sql:"index"`
}
And one of the actual models is:
type TUHistory struct {
    Model
    TUID int64 `json:"tu_id,string" gorm:"column:tu_id" sql:"index"`
}
func (x *TUHistory) TableName() string {
    return "tu_history"
}
The table is created by db.CreateTable(&TUHistory{}), which creates it correctly except for the indexes.
As a temporary workaround, I do db.Model(&TUHistory{}).AddIndex("ix_tuh_tu_id", "tu_id") to create the indexes.
From my experience, db.CreateTable only creates the table and its fields. You are better off using the AutoMigrate function with the model structure you want to migrate:
db, err := gorm.Open("postgres", connectionString)
if err != nil {
    // handle the error
}
db.AutoMigrate(&TUHistory{})
Also, I tried AutoMigrating the model you posted and got an error saying that multiple primary keys are not allowed, so I changed the model to:
type Model struct {
    Id        int64 `sql:"type:bigint;default:0"`
    CreatedAt time.Time
    UpdatedAt time.Time
    DeletedAt *time.Time `sql:"index"`
}
and the AutoMigration created all PKs and indexes just fine.
Edit:
Checking GORM's README, in this example the Email structure goes as:
type Email struct {
    ID         int
    UserID     int    `sql:"index"` // Foreign key (belongs to); the `index` tag will create an index for this field when using AutoMigrate
    Email      string `sql:"type:varchar(100);unique_index"` // Set the field's sql type; the `unique_index` tag will create a unique index
    Subscribed bool
}
Notice the comment on the UserID field saying the index will be created when using AutoMigrate.
Also, it's worth taking a look at how AutoMigrate does its job:
// Automating Migration
db.AutoMigrate(&User{})
db.Set("gorm:table_options", "ENGINE=InnoDB").AutoMigrate(&User{})
db.AutoMigrate(&User{}, &Product{}, &Order{})
// Feel free to change your struct, AutoMigrate will keep your database up-to-date.
// AutoMigrate will ONLY add *new columns* and *new indexes*,
// WON'T update current column's type or delete unused columns, to protect your data.
// If the table is not existing, AutoMigrate will create the table automatically.
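Putting the pieces together, a minimal sketch of a migration that creates the indexes, assuming the v1 gorm API used in the question (github.com/jinzhu/gorm) and a placeholder connection string:

package main

import (
    "time"

    "github.com/jinzhu/gorm"
    _ "github.com/lib/pq"
)

type Model struct {
    Id        int64 `sql:"type:bigint;default:0"`
    CreatedAt time.Time
    UpdatedAt time.Time
    DeletedAt *time.Time `sql:"index"`
}

type TUHistory struct {
    Model
    TUID int64 `json:"tu_id,string" gorm:"column:tu_id" sql:"index"`
}

func (x *TUHistory) TableName() string { return "tu_history" }

func main() {
    // Placeholder DSN; adjust for your environment.
    db, err := gorm.Open("postgres", "dbname=mydb sslmode=disable")
    if err != nil {
        panic(err)
    }
    defer db.Close()

    // Per the answer above, AutoMigrate creates the table together with
    // the primary key and the sql:"index" indexes.
    db.AutoMigrate(&TUHistory{})
}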