Insert Array of Structs into DB using gorm - postgresql

I need to insert into a Postgresql database using gorm. But the struct in question has fields containing arrays of custom types:
type VideoFile struct {
    UID      string
    FileName string
    Format   string
    Path     string
}

type myStruct struct {
    ID                string
    UserId            uint64
    UserType          int
    FaceVideoFiles    []VideoFile
    MeetingVideoFiles []VideoFile
}
func (r *videoRepo) CreateRecord(input *myStruct) error {
    tx := r.base.GetDB().Begin()
    err := tx.Create(input).Error
    if err != nil {
        tx.Rollback()
        return err
    }
    tx.Commit()
    return nil
}
The ID field is the primary key in the database. I am not sure which tags to add to the struct fields, so I leave them empty here (e.g. gorm:"primaryKey", json:"id", etc.). If FaceVideoFiles is not a slice of structs, things seem to work, but that is not the case with a slice of structs. What should I do so that I can insert those slices of structs (not individual structs) into my database?

You have to use a has-many (one-to-many) relationship. For more information, follow the documentation: https://gorm.io/docs/has_many.html#Has-Many
type Dog struct {
    ID   int
    Name string
    Toys []Toy `gorm:"polymorphic:Owner;"`
}

type Toy struct {
    ID        int
    Name      string
    OwnerID   int
    OwnerType string
}

db.Create(&Dog{Name: "dog1", Toys: []Toy{{Name: "toy1"}, {Name: "toy2"}}})
// INSERT INTO `dogs` (`name`) VALUES ("dog1")
// INSERT INTO `toys` (`name`,`owner_id`,`owner_type`) VALUES ("toy1","1","dogs"), ("toy2","1","dogs")

Related

Golang pass a slice of structs to a stored procedure as a array of user defined types

I have a slice of structs that I want to pass into a stored procedure as an array of user-defined types, but I can't figure out a way of doing this in Go.
For example the structs in go:
type a struct {
    ID      int       `db:"id"`
    Name    string    `db:"name"`
    Created time.Time `db:"created"`
    IsNew   bool      `db:"is_new"`
}
And the create statement for the user defined type
CREATE TYPE custom_type AS
(
    id      int,
    name    varchar,
    created timestamp,
    is_new  boolean
)
and then the stored procedure
create or replace procedure custom_procedure(
    input custom_type[]
)
So far I have tried doing
func Save(records []a) error {
    _, err := p.client.Exec("CALL custom_procedure($1)", pq.Array(records))
    return err
}
but I just get an error "sql: converting argument $1 type: unsupported type a, a struct"
You'll have to implement the driver.Valuer interface on the a type and have the Value method return a postgres composite type literal of the instance of a.
You can read this documentation on how to properly construct composite row type values. Just keep in mind that, since you're using pq.Array, which will quote the output of the Value method, you yourself SHOULD NOT put quotes around the output and also you SHOULD NOT use the ROW keyword.
For example:
type a struct {
    ID      int       `db:"id"`
    Name    string    `db:"name"`
    Created time.Time `db:"created"`
    IsNew   bool      `db:"is_new"`
}

func (v a) Value() (driver.Value, error) {
    s := fmt.Sprintf("(%d,%q,%s,%t)",
        v.ID,
        v.Name,
        v.Created.Format("2006-01-02 15:04:05"),
        v.IsNew,
    )
    return []byte(s), nil
}

Using LEFT JOIN on Golang and mapping to a struct

I have the following structs on Go:
type Struct1 struct {
    ID         int64 `db:"id"`
    InternalID int64 `db:"internal_id"`
    Structs2   []Struct2
}

type Struct2 struct {
    ID         int64  `db:"id"`
    InternalID int64  `db:"internal_id"`
    SomeText   string `db:"some_text"`
}
The relation between these two is: there can be only one Struct1, connected to N Struct2 rows by the internal_id. So I am doing this query:
SELECT *
FROM struct1 st1
LEFT JOIN struct2 st2
    ON st1.internal_id = st2.internal_id
LIMIT 10
OFFSET (1 - 1) * 10;
By executing this query in Go, I want to know if I can create an array of Struct1, correctly mapping the array of Struct2 onto it. If it is possible, how can I do it? Thanks in advance.
I'm using Postgres and sqlx.
This is entirely dependent on the library/client you're using to connect to the database, however, I have never seen a library support what you're trying to do so I'll provide a custom implementation that you can do. Full disclosure, I did not test this but I hope it gives you the general idea and anyone should feel free to add edits.
package main

type Struct1 struct {
    ID         int64 `db:"id"`
    InternalID int64 `db:"internal_id"`
    Structs2   []Struct2
}

type Struct2 struct {
    ID         int64  `db:"id"`
    InternalID int64  `db:"internal_id"`
    SomeText   string `db:"some_text"`
}

type Row struct {
    Struct1
    Struct2
}

func main() {
    var rows []*Row
    // decode the response into the rows variable using whatever SQL client you use

    // We'll group the struct1s into a map, then iterate over all the rows, taking the
    // struct2 contents off of rows we've already added to the map and then appending
    // the struct2 to them. This effectively turns rows into one-to-many relationships
    // between struct1 and struct2.
    mapped := map[int64]*Struct1{}
    for _, r := range rows {
        // The key of the map is going to be the internal ID of struct1 (that's the ONE
        // in the one-to-many relationship)
        if _, ok := mapped[r.Struct1.InternalID]; !ok {
            // Make sure to initialize the key if this is the first row with struct1's
            // internal ID.
            mapped[r.Struct1.InternalID] = &r.Struct1
        }
        // Append the struct2 (the MANY in the one-to-many relationship) to the struct1's
        // array of struct2s.
        mapped[r.Struct1.InternalID].Structs2 = append(mapped[r.Struct1.InternalID].Structs2, r.Struct2)
    }

    // Then convert it to a slice if needed
    results := make([]*Struct1, len(mapped))
    i := 0
    for _, v := range mapped {
        results[i] = v
        i++
    }
}

How to range over bson.D primitive.A slice mongo-go-driver?

My data structure is
{
    _id: ObjectID,
    ...
    fields: [
        { name: "Aryan" },
        { books: [ 1, 2, 3 ] },
    ]
}
In our application a user can define his own fields data structure but with a key value structure. So, we had no way of knowing the structure of the data.
So in our document struct we had
type Document struct {
    Fields map[string]interface{}
}
As the second parameter returned by mongo was primitive.A ([]interface{} under the hood), each individual item could have been an array, a map, anything. But we couldn't range over it directly, since its static type is just interface{}.
How can I get the individual values, like the book ids [1,2,3] or the name value "Aryan"?
After a couple of attempts at solving this, I was at a state where my current element itself was an interface{} ([1,2,3] in this case) and I couldn't get to the individual 1, 2, 3 values.
But I finally managed to solve it:
for k, val := range doc.Fields {
    v := reflect.ValueOf(val)
    switch v.Kind() {
    case reflect.Slice:
        // getting the individual ids
        for i := 0; i < v.Len(); i++ {
            fmt.Println(k, v.Index(i))
        }
    case reflect.String:
        fmt.Println(k, v.String())
    default:
        // handle other kinds
    }
}
Note: v.Index(i) returns a reflect.Value. To get the underlying value back as its concrete type, use a type assertion on Interface():
v.Index(i).Interface().(string)  // string
v.Index(i).Interface().(float64) // float64
Ideally you should avoid working with the interface{} type, since it's very error prone and the compiler can't help you. The idiomatic way is to define a struct for your model with BSON tags, like in this example:
type MyType struct {
    ID     primitive.ObjectID `bson:"_id,omitempty"`
    Fields []Field            `bson:"fields,omitempty"`
}

type Field struct {
    Name  string `bson:"name,omitempty"`
    Books []int  `bson:"books,omitempty"`
}
Field here is defined as the union of all possible fields, which again is not ideal, but at least the compiler can help you and developers know what to expect from the database document.

Map Mongo _id in two different struct fields in Golang

I am working on a project that uses combination of Go and MongoDB. I am stuck at a place where I have a struct like:
type Booking struct {
    // booking fields
    Id             int `json:"_id,omitempty" bson:"_id,omitempty"`
    Uid            int `json:"uid,omitempty" bson:"uid,omitempty"`
    IndustryId     int `json:"industry_id,omitempty" bson:"industry_id,omitempty"`
    LocationId     int `json:"location_id,omitempty" bson:"location_id,omitempty"`
    BaseLocationId int `json:"base_location_id,omitempty" bson:"base_location_id,omitempty"`
}
In this struct, the field Id is of int type. But as we know, MongoDB's default id is of ObjectId type. Sometimes the system generates the default MongoDB id in the Id field.
To overcome this I have modified the struct like this:
type Booking struct {
    // booking fields
    Id             int           `json:"_id,omitempty" bson:"_id,omitempty"`
    BsonId         bson.ObjectId `json:"bson_id" bson:"_id,omitempty"`
    Uid            int           `json:"uid,omitempty" bson:"uid,omitempty"`
    IndustryId     int           `json:"industry_id,omitempty" bson:"industry_id,omitempty"`
    LocationId     int           `json:"location_id,omitempty" bson:"location_id,omitempty"`
    BaseLocationId int           `json:"base_location_id,omitempty" bson:"base_location_id,omitempty"`
}
In the above struct, I have mapped the same _id field to two different struct fields: Id (type int) and BsonId (type bson.ObjectId). I want that if an id of integer type comes, it maps to Id; otherwise to BsonId.
But this is giving the following error:
Duplicated key '_id' in struct models.Booking
How can I implement this type of thing with Go structs?
Update:
Here is the code I have written for custom marshaling/unmarshaling:
func (booking *Booking) SetBSON(raw bson.Raw) (err error) {
    type bsonBooking Booking
    if err = raw.Unmarshal((*bsonBooking)(booking)); err != nil {
        return
    }
    booking.BsonId, err = booking.Id
    return
}

func (booking *Booking) GetBSON() (interface{}, error) {
    booking.Id = Booking.BsonId
    type bsonBooking *Booking
    return bsonBooking(booking), nil
}
But this is giving a type mismatch error on the Id and BsonId fields. What should I do now?
In your original struct you used this tag for the Id field:
bson:"_id,omitempty"
This means if the value of the Id field is 0 (zero value for the int type), then that field will not be sent to MongoDB. But the _id property is mandatory in MongoDB, so in this case the MongoDB server will generate an ObjectId for it.
To overcome this, the easiest is to ensure the Id is always non-zero; or, if 0 is a valid id, remove the omitempty option from the tag.
If your collection must allow ids of mixed type (int and ObjectId), then the easiest would be to define the Id field with type interface{}, so it can accommodate both (actually all) types of key values:
Id interface{} `json:"_id,omitempty" bson:"_id,omitempty"`
Yes, working with this may be a little more cumbersome (e.g. if you explicitly need the id as int, you need to use type assertion), but this will solve your issue.
If you do need 2 id fields, one with int type and another with ObjectId type, then your only option will be to implement custom BSON marshaling and unmarshaling. This basically means to implement the bson.Getter and / or bson.Setter interfaces (one method for each) on your struct type, in which you may do anything you like to populate your struct or assemble the data to be actually saved / inserted. For details and example, see Accessing MongoDB from Go.
Here's an example how using custom marshaling it may look like:
Leave out the Id and BsonId fields from marshaling (using the bson:"-" tag), and add a 3rd, "temporary" id field:
type Booking struct {
    Id     int           `bson:"-"`
    BsonId bson.ObjectId `bson:"-"`
    TempId interface{}   `bson:"_id"`
    // rest of your fields...
}
So whatever id you have in your MongoDB, it will end up in TempId, and only this id field will be sent and saved in MongoDB's _id property.
Use the GetBSON() method to set TempId from the other id fields (whichever is set) before your struct value gets saved / inserted, and use SetBSON() method to "copy" TempId's value to one of the other id fields based on its dynamic type after the document is retrieved from MongoDB:
func (b *Booking) GetBSON() (interface{}, error) {
    if b.Id != 0 {
        b.TempId = b.Id
    } else {
        b.TempId = b.BsonId
    }
    // Convert to a local type that doesn't have the GetBSON method,
    // otherwise marshaling would call GetBSON again, recursively.
    type t Booking
    return (*t)(b), nil
}

func (b *Booking) SetBSON(raw bson.Raw) (err error) {
    // Same trick: the local type drops the SetBSON method, avoiding recursion.
    type t Booking
    if err = raw.Unmarshal((*t)(b)); err != nil {
        return
    }
    if intId, ok := b.TempId.(int); ok {
        b.Id = intId
    } else if bsonId, ok := b.TempId.(bson.ObjectId); ok {
        b.BsonId = bsonId
    } else {
        err = errors.New("invalid or missing id")
    }
    return
}
Note: if you dislike the TempId field in your Booking struct, you may create a copy of Booking (e.g. tempBooking) and only add TempId to that, and use tempBooking for marshaling / unmarshaling. You may use embedding (tempBooking may embed Booking) so you can even avoid repetition.

how to get distinct values in mongodb using golang

I am trying to retrieve distinct values from my collection. I have a collection with fields: name, age, city, and rank. I want to get the distinct 'city' values from MongoDB using Golang.
My struct code
type exp struct {
    name string `bson:"name"`
    age  int    `bson:"age"`
    city string `bson:"city"`
    rank int    `bson:"rank"`
}
With the following code to retrieve results from mongodb:
var result []exp // my struct type
err = coll.Find(bson.M{"City": bson.M{}}).Distinct("City", &result)
fmt.Println(result)
With this code I get an empty array as the result. How would I get all the cities?
Try this code
var result []string
err = c.Find(nil).Distinct("city", &result)
if err != nil {
    log.Fatal(err)
}
fmt.Println(result)
Due to restrictions in reflection, mgo (as well as encoding/json and other similar packages) is unable to use unexported fields to marshal or unmarshal data. What you need to do is export your fields by capitalizing the first letter:
type exp struct {
    Name string `bson:"name"`
    Age  int    `bson:"age"`
    City string `bson:"city"`
    Rank int    `bson:"rank"`
}
A side note: you do not need to specify the bson tags if the desired name is the same as the lowercase field name. The documentation for bson states:
The lowercased field name is used as the key for each exported field,
but this behavior may be changed using the respective field tag.
Edit:
I just realized you did get an empty slice and not a slice with empty struct fields. My answer is then not an actual answer to the question, but it is still an issue that you need to consider.