I have a Postgres DB that I would like to generate tables for and write to using Gorp. However, I get an error message when I try to insert, due to the slices contained within my structs: "sql: converting argument $4 type: unsupported type []core.EmbeddedStruct, a slice of struct".
My structs look as follows:
type Struct1 struct {
    ID             string
    Name           string
    Location       string
    EmbeddedStruct []EmbeddedStruct
}

type EmbeddedStruct struct {
    ID              string
    Name            string
    struct1Id       string
    EmbeddedStruct2 []EmbeddedStruct2
}

type EmbeddedStruct2 struct {
    ID               string
    Name             string
    embeddedStructId string
}
func (repo *PgStruct1Repo) Write(t *core.Struct1) error {
    trans, err := createTransaction(repo.dbMap)
    if err != nil {
        return err
    }
    defer closeTransaction(trans)

    // Check to see if struct1 item already exists
    exists, err := repo.exists(t.ID, trans)
    if err != nil {
        return err
    }

    if !exists {
        log.Debugf("saving new struct1 with ID %s", t.ID)
        err = trans.Insert(t)
        if err != nil {
            return err
        }
    }
    return nil
}
Does anyone have any experience with, or know whether, Gorp supports inserting slices? From what I've read, it seems to only support slices for SELECT statements.
Gorp's Insert takes a variadic number of records, so if you have a slice of records (as a []interface{}; Go only expands a slice into a ...interface{} parameter when it already has that type), you can do:
err = db.Insert(records...)
However, from your question it seems you want to save a single record that has a slice struct field.
https://github.com/go-gorp/gorp
gorp doesn't know anything about the relationships between your structs (at least not yet).
So, you have to handle the relationship yourself. The way I personally would solve this issue is to have Gorp ignore the slice on the parent:
type Struct1 struct {
    ID             string
    Name           string
    Location       string
    EmbeddedStruct []EmbeddedStruct `db:"-"`
}
And then use the PostInsert hook to save the EmbeddedStruct slice (side note: this is a poor name, as it is not actually an embedded struct):
func (s *Struct1) PostInsert(sql gorp.SqlExecutor) error {
    // Insert takes ...interface{} and expects pointers to structs, so
    // collect pointers to the children in a []interface{} before expanding.
    rows := make([]interface{}, len(s.EmbeddedStruct))
    for i := range s.EmbeddedStruct {
        s.EmbeddedStruct[i].struct1Id = s.ID
        rows[i] = &s.EmbeddedStruct[i]
    }
    return sql.Insert(rows...)
}
And then repeat the process on EmbeddedStruct2.
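For example, a minimal sketch of the same hook on EmbeddedStruct (assuming, as an illustration, that EmbeddedStruct2 is likewise ignored with a db:"-" tag on its parent):

func (s *EmbeddedStruct) PostInsert(sql gorp.SqlExecutor) error {
    rows := make([]interface{}, len(s.EmbeddedStruct2))
    for i := range s.EmbeddedStruct2 {
        s.EmbeddedStruct2[i].embeddedStructId = s.ID
        rows[i] = &s.EmbeddedStruct2[i]
    }
    return sql.Insert(rows...)
}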
Take care to set up the relationships properly on the DB side to ensure referential integrity (e.g. ON DELETE CASCADE / RESTRICT), and it would probably be a good idea to wrap the whole thing in a transaction.
I have a Go struct which contains a slice of strings which I'd like to save as a jsonb object in Postgres with GORM.
I've come across a solution which requires using the GORM-specific type (postgres.Jsonb), which I'd like to avoid.
When I try to run AutoMigrate with a slice in my model, it panics and won't start; when I wrap this slice in a struct (which I'm okay doing), it runs without error but won't create the column in Postgres.
type User struct {
    gorm.Model
    Data []string `sql:"type:jsonb" json:"data"`
} // Panics

type User struct {
    gorm.Model
    Data struct {
        NestedData []string
    } `sql:"type:jsonb" json:"data"`
} // Doesn't crash but doesn't create my column
Has anyone been able to manipulate jsonb with GORM without using the postgres.Jsonb type in models?
The simplest way to use JSONB in Gorm is to use pgtype.JSONB.
Gorm uses pgx as its driver, and pgx has a package called pgtype, which has a type named pgtype.JSONB.
If you have already installed pgx, as Gorm instructs, you don't need to install any other package.
This method should be the best practice, since it uses the underlying driver and no custom code is needed. It can also be used for any JSONB value beyond []string.
type User struct {
    gorm.Model
    Data pgtype.JSONB `gorm:"type:jsonb;default:'[]';not null"`
}
Get value from DB
u := User{}
db.Find(&u)

var data []string
err := u.Data.AssignTo(&data)
if err != nil {
    t.Fatal(err)
}
Set value to DB
u := User{}
err := u.Data.Set([]string{"abc", "def"})
if err != nil {
    return
}
db.Updates(&u)
Maybe:
type DataJSONB []string

func (dj DataJSONB) Value() (driver.Value, error) {
    return json.Marshal(dj)
}

func (dj *DataJSONB) Scan(value interface{}) error {
    b, ok := value.([]byte)
    if !ok {
        return fmt.Errorf("[]byte assertion failed")
    }
    return json.Unmarshal(b, dj)
}
// Your bit
type User struct {
    gorm.Model
    Data DataJSONB `sql:"type:jsonb" json:"data"`
}
Define a new type:
type Data map[string]interface{}
And implement the Valuer and Scanner interfaces on it, which allow the field to be converted to a value for the database, and scanned back into the field, respectively:
// Value converts into []byte
func (d Data) Value() (driver.Value, error) {
    j, err := json.Marshal(d)
    return j, err
}

// Scan puts the []byte back into Data
func (d *Data) Scan(src interface{}) error {
    source, ok := src.([]byte)
    if !ok {
        return errors.New("Type assertion .([]byte) failed.")
    }
    var i interface{}
    if err := json.Unmarshal(source, &i); err != nil {
        return err
    }
    *d, ok = i.(map[string]interface{})
    if !ok {
        return errors.New("Type assertion .(map[string]interface{}) failed.")
    }
    return nil
}
Then you can define your field in your model like this:
type User struct {
    gorm.Model
    Data Data `sql:"type:jsonb not null default '{}'::jsonb"`
}
Using the underlying map[string]interface{} type is nice too, as you can Unmarshal/Marshal any JSON to/from it.
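For illustration, a minimal usage sketch (assuming the User model above and an initialized *gorm.DB named db; the keys and values here are made up):

u := User{Data: Data{"labels": []interface{}{"abc", "def"}, "count": 2}}
if err := db.Create(&u).Error; err != nil {
    panic(err)
}

var loaded User
if err := db.First(&loaded, u.ID).Error; err != nil {
    panic(err)
}
fmt.Println(loaded.Data["labels"]) // JSON arrays unmarshal back as []interface{}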
I would like to know if there's any approach that would allow me to ignore null types while unmarshalling a MongoDB document into a Go struct.
Right now I have some auto-generate Go structs, something like this:
type User struct {
    Name  string `bson:"name"`
    Email string `bson:"email"`
}
Changing the types declared in this struct is not an option, and here's the problem: in a MongoDB database, over which I do not have total control, some of the documents have been inserted with null values where originally I was not expecting nulls. Something like this:
{
    "name": "John Doe",
    "email": null
}
As the string types declared inside my struct are not pointers, they can't receive a nil value, so whenever I try to unmarshal this document into my struct, it returns an error.
Preventing the insertion of this kind of document into the database would be the ideal solution, but for my use case, ignoring the null values would also be acceptable. So after unmarshalling the document, my User instance would look like this:
User{
    Name:  "John Doe",
    Email: "",
}
I'm trying to find either some annotation flag, an option that could be passed to the Find/FindOne methods, or maybe even a query parameter to prevent the database from returning any field containing null values, without any success so far.
Are there any built-in solutions in the mongo-go-driver for this problem?
The problem is that the current bson codecs do not support encoding / decoding string into / from null.
One way to handle this is to create a custom decoder for the string type in which we handle null values: we just use the empty string (and, more importantly, don't report an error).
Custom decoders are described by the type bsoncodec.ValueDecoder. They can be registered at a bsoncodec.Registry, using a bsoncodec.RegistryBuilder for example.
Registries can be set / applied at multiple levels, even to a whole mongo.Client, or to a mongo.Database or just to a mongo.Collection, when acquiring them, as part of their options, e.g. options.ClientOptions.SetRegistry().
First let's see how we can do this for string, and next we'll see how to improve / generalize the solution to any type.
1. Handling null strings
First things first, let's create a custom string decoder that can turn a null into a(n empty) string:
import (
    "errors"
    "fmt"
    "reflect"

    "go.mongodb.org/mongo-driver/bson/bsoncodec"
    "go.mongodb.org/mongo-driver/bson/bsonrw"
    "go.mongodb.org/mongo-driver/bson/bsontype"
)

type nullawareStrDecoder struct{}

func (nullawareStrDecoder) DecodeValue(dctx bsoncodec.DecodeContext, vr bsonrw.ValueReader, val reflect.Value) error {
    if !val.CanSet() || val.Kind() != reflect.String {
        return errors.New("bad type or not settable")
    }
    var str string
    var err error
    switch vr.Type() {
    case bsontype.String:
        if str, err = vr.ReadString(); err != nil {
            return err
        }
    case bsontype.Null: // THIS IS THE MISSING PIECE TO HANDLE NULL!
        if err = vr.ReadNull(); err != nil {
            return err
        }
    default:
        return fmt.Errorf("cannot decode %v into a string type", vr.Type())
    }
    val.SetString(str)
    return nil
}
OK, and now let's see how to apply this custom string decoder to a mongo.Client:
clientOpts := options.Client().
    ApplyURI("mongodb://localhost:27017/").
    SetRegistry(
        bson.NewRegistryBuilder().
            RegisterDecoder(reflect.TypeOf(""), nullawareStrDecoder{}).
            Build(),
    )
client, err := mongo.Connect(ctx, clientOpts)
From now on, using this client, whenever you decode results into string values, this registered nullawareStrDecoder decoder will be called to handle the conversion, which accepts bson null values and sets the Go empty string "".
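For example, a hypothetical lookup with this client (the database, collection, and filter names are made up; User is the struct from the question):

var u User
err = client.Database("mydb").Collection("users").
    FindOne(ctx, bson.M{"name": "John Doe"}).
    Decode(&u)
// A document with "email": null now decodes into Email: "" instead of failing.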
But we can do better... Read on...
2. Handling null values of any type: "type-neutral" null-aware decoder
One way would be to create a separate, custom decoder and register it for each type we wish to handle. That seems to be a lot of work.
What we may (and should) do instead is create a single, "type-neutral" custom decoder which handles just nulls, and if the BSON value is not null, should call the default decoder to handle the non-null value.
This is surprisingly simple:
type nullawareDecoder struct {
    defDecoder bsoncodec.ValueDecoder
    zeroValue  reflect.Value
}

func (d *nullawareDecoder) DecodeValue(dctx bsoncodec.DecodeContext, vr bsonrw.ValueReader, val reflect.Value) error {
    if vr.Type() != bsontype.Null {
        return d.defDecoder.DecodeValue(dctx, vr, val)
    }
    if !val.CanSet() {
        return errors.New("value not settable")
    }
    if err := vr.ReadNull(); err != nil {
        return err
    }
    // Set the zero value of val's type:
    val.Set(d.zeroValue)
    return nil
}
We just have to figure out what to use for nullawareDecoder.defDecoder. For this we may use the default registry, bson.DefaultRegistry, to look up the default decoder for individual types. Cool.
So what we do now is register a value of our nullawareDecoder for all types we want to handle nulls for. It's not that hard: we just list the types (or values of those types) we want this for, and we can take care of all of them with a simple loop:
customValues := []interface{}{
    "",       // string
    int(0),   // int
    int32(0), // int32
}

rb := bson.NewRegistryBuilder()
for _, v := range customValues {
    t := reflect.TypeOf(v)
    defDecoder, err := bson.DefaultRegistry.LookupDecoder(t)
    if err != nil {
        panic(err)
    }
    rb.RegisterDecoder(t, &nullawareDecoder{defDecoder, reflect.Zero(t)})
}

clientOpts := options.Client().
    ApplyURI("mongodb://localhost:27017/").
    SetRegistry(rb.Build())
client, err := mongo.Connect(ctx, clientOpts)
In the example above I registered null-aware decoders for string, int and int32, but you may extend this list to your liking: just add values of the desired types to the customValues slice above.
You can go through the $exists operator and Query for Null or Missing Fields for a detailed explanation.
In the mongo-go-driver, you can try the query below:
The email => nil query matches documents that either contain the email field whose value is nil or that do not contain the email field at all.
cursor, err := coll.Find(
    context.Background(),
    bson.D{
        {"email", nil},
    })
Just add the $ne operator to the above query to get the records that do have the email field with a value other than nil; see the MongoDB documentation on the $ne operator for more details. A sketch of that variant follows.
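For instance, a minimal sketch of that variant (same assumed coll and context as above):

cursor, err := coll.Find(
    context.Background(),
    bson.D{
        {"email", bson.D{{"$ne", nil}}},
    })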
If you know ahead of time which fields could potentially be null in your MongoDB records, you can use pointers in your structs instead:
type User struct {
    Name  string  `bson:"name"`  // Will still fail to decode if null in Mongo
    Email *string `bson:"email"` // Will be nil in Go if null in Mongo
}
Just remember that now you'll need to code more defensively around anything that uses this value after decoding from Mongo, e.g.:
var reliableVal string
if user.Email != nil {
    reliableVal = *user.Email
} else {
    reliableVal = ""
}
I have a table in Postgres with a jsonb column:
Create Table Business(
    id serial not null primary key,
    id_category integer not null,
    name varchar(50) not null,
    owner varchar(200) not null,
    coordinates jsonb not null,
    reason varchar(300) not null,
    foreign key(id_category) references Category(id)
);
As you can see, I store the coordinates as jsonb. For example:
Insert into Business(id_category, name, owner, coordinates, reason)
values
(1, 'MyName', 'Owner', '[{"latitude": 12.1268142, "longitude": -86.2754}]', 'Description')
The way that I extract the data and assign it is like this:
type Business struct {
    ID          int           `json:"id,omitempty"`
    Name        string        `json:"name,omitempty"`
    Owner       string        `json:"owner,omitempty"`
    Category    string        `json:"category,omitempty"`
    Departments []string      `json:"departments,omitempty"`
    Location    []Coordinates `json:"location,omitempty"`
    Reason      string        `json:"reason,omitempty"`
}

type Coordinates struct {
    Latitude  float64 `json:"latitude,omitempty"`
    Longitude float64 `json:"longitude,omitempty"`
}
func (a Coordinates) Value() (driver.Value, error) {
    return json.Marshal(a)
}

func (a *Coordinates) Scan(value []interface{}) error {
    b, ok := value.([]byte)
    if !ok {
        return errors.New("type assertion to []byte failed")
    }
    return json.Unmarshal(b, &a)
}
However, I keep receiving this message.
sql: Scan error on column index 3, name "coordinates": unsupported
Scan, storing driver.Value type []uint8 into type *models.Coordinates
And the controller that I use to extract the information is this.
func (b *BusinessRepoImpl) Select() ([]models.Business, error) {
    business_list := make([]models.Business, 0)
    rows, err := b.Db.Query("SELECT business.id, business.name, business.owner, business.coordinates, business.reason_froggy, category.category FROM business INNER JOIN category on category.id = business.id_category group by business.id, business.name, business.owner, business.coordinates, business.reason_froggy, category.category")
    if err != nil {
        return business_list, err
    }
    for rows.Next() {
        business := models.Business{}
        err := rows.Scan(&business.ID, &business.Name, &business.Owner, &business.Location, &business.Reason, &business.Category)
        if err != nil {
            break
        }
        business_list = append(business_list, business)
    }
    err = rows.Err()
    if err != nil {
        return business_list, err
    }
    return business_list, nil
}
Can anyone please tell me how to solve this issue, i.e. how to retrieve the JSON array of objects and assign it to the coordinates field inside Business?
1.
As you can see from the documentation, the Scanner interface, to be satisfied, requires the method
Scan(src interface{}) error
But your *Coordinates type implements a different method
Scan(value []interface{}) error
The types interface{} and []interface{} are two very different things.
2.
The Scanner interface must be implemented on the type of the field which you want to pass as an argument to rows.Scan. That is, you've implemented your Scan method on *Coordinates but the type of the Location field is []Coordinates.
Again, same thing, the types *Coordinates and []Coordinates are two very different things.
So the solution is to implement the interface properly and on the proper type.
Note that since Go doesn't allow adding methods to unnamed types, and []Coordinates is an unnamed type, you need to declare a new type that you'll then use in place of []Coordinates.
type CoordinatesSlice []Coordinates

func (s *CoordinatesSlice) Scan(src interface{}) error {
    switch v := src.(type) {
    case []byte:
        return json.Unmarshal(v, s)
    case string:
        return json.Unmarshal([]byte(v), s)
    }
    return errors.New("type assertion failed")
}
// ...
type Business struct {
    // ...
    Location CoordinatesSlice `json:"location,omitempty"`
    // ...
}
NOTE
If the business location will always have only one pair of coordinates, store it into the db as a jsonb object, change the Location type from CoordinatesSlice to Coordinates, and accordingly move the Scanner implementation from *CoordinatesSlice to *Coordinates; see the sketch below.
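For illustration, a minimal sketch of that single-object variant (assuming the coordinates column then stores a JSON object rather than an array):

func (c *Coordinates) Scan(src interface{}) error {
    switch v := src.(type) {
    case []byte:
        return json.Unmarshal(v, c)
    case string:
        return json.Unmarshal([]byte(v), c)
    }
    return errors.New("type assertion failed")
}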
I know that this solution is really unoptimized, but it was the only way I got it to work. Basically I have to obtain the JSON as a plain string and then unmarshal it into the Location attribute:

var location string
// location is filled from the coordinates column by rows.Scan
if err := json.Unmarshal([]byte(location), &business.Location); err != nil {
    panic(err)
}
I am fairly new to Go programming and the MongoDB interface.
I've got a database of records created by another application. I am trying to walk the database and examine specific fields of each record. I can decode the full records as BSON, but I cannot get the specific values.
This struct defines the 3 fields I would like to extract:
type myDbaseRec struct {
    aid        string `bson:"pon-util-aid"`
    ingressPct string `bson:"ingress-bucket-percent"`
    egressPct  string `bson:"egress-bucket-percent"`
}
Here is my code to iterate through the cursor after the collection.Find(ctx, queryFilter) and decode the results, both as bson and as my struct:
var myResult myDbaseRec
var bsonMResult bson.M
var count int

for cursor.Next(ctx) {
    err := cursor.Decode(&myResult)
    if err != nil {
        fmt.Println("cursor.Next() error:", err)
        panic(err)
        // If there are no cursor.Decode errors
    } else {
        fmt.Println("\nresult type:", reflect.TypeOf(myResult))
        fmt.Printf("result: %+v\n", myResult)
    }

    err = cursor.Decode(&bsonMResult)
    if err != nil {
        fmt.Println("bson decode error:", err)
        panic(err)
        // If there are no cursor.Decode errors
    } else {
        fmt.Println("\nresult type:", reflect.TypeOf(bsonMResult))
        fmt.Println("\nresult:", bsonMResult)
    }
}
Here is an example of one iteration of the loop. The bson decode appears to work, but my struct is empty:
result type: internal.myDbaseRec
result: {aid: ingressPct: egressPct:}
result type: primitive.M
result: map[pon-util-aid:ROLT-1-MONTREAL/1/1/xp2 _id:ObjectID("5d70b4d1b3605301ef72228b")
admitted-assured-upstream-bw:0 admitted-excess-upstream-bw:0 admitted-fixed-upstream-bw:0
assured-upstream-bytes:0 available-excess-upstream-bw:0 available-fixed-upstream-bw:622080
app_counters_key_field:ROLT-1-MONTREAL/1/1/xp2 app_export_time:1567665626 downstream-octets:52639862633214
egress-bucket-bps:8940390198 egress-bucket-percent:91 egress-bucket-seconds:559
excess-upstream-bytes:0 fixed-upstream-bytes:0 ingress-bucket-bps:8253153852
ingress-bucket-percent:84 ingress-bucket-seconds:559 sample-time:0 upstream-octets:48549268162714]
I would have expected to get
result: {aid:"ROLT-1-MONTREAL/1/1/xp2" ingressPct:84 egressPct:91}
Any suggestion on how to properly find these 3 fields from each record?
=== UPDATE: The first comment below answered my question.
First, in Go only fields starting with a (Unicode) upper case letter are exported. See also Exported identifiers. The default decoder will try to decode only into exported fields. So you should change the struct to:
type myDbaseRec struct {
    Aid        string `bson:"pon-util-aid"`
    IngressPct int32  `bson:"ingress-bucket-percent"`
    EgressPct  int32  `bson:"egress-bucket-percent"`
}
Also notice that the struct above uses type int32 for IngressPct and EgressPct. This is because the values in the document are represented as numbers (int/double), not strings. You may change it to another number type accordingly, i.e. int16, int64, etc.
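With the exported field names and numeric types in place, the same cursor loop from the question should print something like result: {Aid:ROLT-1-MONTREAL/1/1/xp2 IngressPct:84 EgressPct:91} for the sample document shown above.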