I have a Go struct containing a slice of strings that I'd like to save as a jsonb object in Postgres with GORM. I've come across a solution that requires using the GORM-specific type (postgres.Jsonb), which I'd like to avoid.
When I run AutoMigrate with a slice in my model, it panics and won't start; when I wrap this slice in a struct (which I'm okay doing), it runs without error but won't create the column in Postgres.
type User struct {
    gorm.Model
    Data []string `sql:"type:jsonb" json:"data"`
} // Panics

type User struct {
    gorm.Model
    Data struct {
        NestedData []string
    } `sql:"type:jsonb" json:"data"`
} // Doesn't crash but doesn't create my column
Has anyone been able to manipulate jsonb with GORM without using the postgres.Jsonb type in models?
The simplest way to use JSONB in GORM is to use pgtype.JSONB.
GORM uses pgx as its driver, and pgx has a package called pgtype, which includes a type named pgtype.JSONB. If you have already installed pgx as GORM instructs, you don't need to install any other package.
This approach is arguably the best practice, since it uses the underlying driver and requires no custom code. It also works for any JSONB shape beyond []string.
type User struct {
    gorm.Model
    Data pgtype.JSONB `gorm:"type:jsonb;default:'[]';not null"`
}
Get a value from the DB:

u := User{}
db.First(&u)

var data []string
err := u.Data.AssignTo(&data)
if err != nil {
    // handle error
}
Set a value in the DB:

u := User{}
err := u.Data.Set([]string{"abc", "def"})
if err != nil {
    return
}
db.Updates(&u)
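Since pgtype's Set falls back to json.Marshal for values it doesn't recognize (and AssignTo falls back to json.Unmarshal), the same field can carry other JSON shapes too, as far as I can tell. A quick sketch (the map payload is made up for illustration):

// Hypothetical example: storing a map instead of a []string.
err := u.Data.Set(map[string]interface{}{"level": "admin", "tags": []string{"a", "b"}})
if err != nil {
    // handle error
}

var settings map[string]interface{}
if err := u.Data.AssignTo(&settings); err != nil {
    // handle error
}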
Maybe:
type DataJSONB []string

func (dj DataJSONB) Value() (driver.Value, error) {
    return json.Marshal(dj)
}

func (dj *DataJSONB) Scan(value interface{}) error {
    b, ok := value.([]byte)
    if !ok {
        return fmt.Errorf("[]byte assertion failed")
    }
    return json.Unmarshal(b, dj)
}
// Your bit
type User struct {
    gorm.Model
    Data DataJSONB `sql:"type:jsonb" json:"data"`
}
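For illustration, a rough usage sketch (assuming an open *gorm.DB named db; error handling elided):

// Create a row whose Data column is stored as jsonb via Value().
u := User{Data: DataJSONB{"abc", "def"}}
db.Create(&u)

// Read it back; Scan() decodes the jsonb bytes into the slice.
var loaded User
db.First(&loaded, u.ID)
fmt.Println(loaded.Data) // [abc def]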
Define a new type:
type Data map[string]interface{}
And implement the Valuer and Scanner interfaces on it, which allow the field to be converted into a value for the database and scanned back into the field, respectively:
// Value converts Data into []byte
func (d Data) Value() (driver.Value, error) {
    j, err := json.Marshal(d)
    return j, err
}

// Scan puts the []byte back into Data
func (d *Data) Scan(src interface{}) error {
    source, ok := src.([]byte)
    if !ok {
        return errors.New("type assertion .([]byte) failed")
    }
    var i interface{}
    if err := json.Unmarshal(source, &i); err != nil {
        return err
    }
    *d, ok = i.(map[string]interface{})
    if !ok {
        return errors.New("type assertion .(map[string]interface{}) failed")
    }
    return nil
}
Then you can define the field in your model like this:

type User struct {
    gorm.Model
    Data Data `sql:"type:jsonb not null default '{}'::jsonb"`
}
Using the underlying map[string]interface{} type is nice too, as you can Unmarshal/Marshal any JSON to/from it.
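For instance, a quick sketch (the JSON payload here is made up for illustration):

var d Data
if err := json.Unmarshal([]byte(`{"city": "Oslo", "tags": ["a", "b"]}`), &d); err != nil {
    // handle error
}
d["visited"] = true // mutate it like any map

out, _ := json.Marshal(d)
fmt.Println(string(out)) // {"city":"Oslo","tags":["a","b"],"visited":true}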
I have a model like this in model.go:

type yourTableName struct {
    Name             string `gorm:"type:varchar(50)" json:"name"`
    Email            string `gorm:"type:varchar(50)" json:"email"`
    FieldNameOfJsonb JSONB  `gorm:"type:jsonb" json:"fieldnameofjsonb"`
}
I want to insert FieldNameOfJsonb as an array of objects in Postgres using GORM, like this:
{
  "name": "james",
  "email": "james@gmail.com",
  "FieldNameOfJsonb": [
    {
      "someField1": "value",
      "someField2": "somevalue"
    },
    {
      "Field1": "value1",
      "Field2": "value2"
    }
  ]
}
Just add the code below in model.go:
import (
    "database/sql/driver"
    "encoding/json"
    "errors"
)

// JSONB is the type for the JSONB field of the yourTableName table
type JSONB []interface{}

// Value marshals into a driver.Value
func (a JSONB) Value() (driver.Value, error) {
    return json.Marshal(a)
}

// Scan unmarshals the database value
func (a *JSONB) Scan(value interface{}) error {
    b, ok := value.([]byte)
    if !ok {
        return errors.New("type assertion to []byte failed")
    }
    return json.Unmarshal(b, a)
}
Now you can insert data using DB.Create(&yourTableName).
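For example, a hedged sketch of such an insert (DB is assumed to be your open *gorm.DB; the field values echo the JSON above):

row := yourTableName{
    Name:  "james",
    Email: "james@gmail.com",
    FieldNameOfJsonb: JSONB{
        map[string]interface{}{"someField1": "value", "someField2": "somevalue"},
        map[string]interface{}{"Field1": "value1", "Field2": "value2"},
    },
}
if err := DB.Create(&row).Error; err != nil {
    // handle error
}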
I have answered a similar question in https://stackoverflow.com/a/71636216/13719636. In short, the simplest way to use JSONB in GORM is pgtype.JSONB, exactly as described in the answer above.
You can use the gorm-jsonb package.
I have a table in Postgres that has a jsonb column:
Create Table Business(
    id serial not null primary key,
    id_category integer not null,
    name varchar(50) not null,
    owner varchar(200) not null,
    coordinates jsonb not null,
    reason varchar(300) not null,
    foreign key(id_category) references Category(id)
);
As you can see, I store the coordinates as jsonb, e.g.:
Insert into Business(id_category, name, owner, coordinates, reason)
values
(1, 'MyName', 'Owner', '[{"latitude": 12.1268142, "longitude": -86.2754}]', 'Description');
The way I extract the data and assign it is like this:
type Business struct {
    ID          int           `json:"id,omitempty"`
    Name        string        `json:"name,omitempty"`
    Owner       string        `json:"owner,omitempty"`
    Category    string        `json:"category,omitempty"`
    Departments []string      `json:"departments,omitempty"`
    Location    []Coordinates `json:"location,omitempty"`
    Reason      string        `json:"reason,omitempty"`
}

type Coordinates struct {
    Latitude  float64 `json:"latitude,omitempty"`
    Longitude float64 `json:"longitude,omitempty"`
}

func (a Coordinates) Value() (driver.Value, error) {
    return json.Marshal(a)
}

func (a *Coordinates) Scan(value []interface{}) error {
    b, ok := value.([]byte)
    if !ok {
        return errors.New("type assertion to []byte failed")
    }
    return json.Unmarshal(b, &a)
}
However, I keep receiving this message:

sql: Scan error on column index 3, name "coordinates": unsupported Scan, storing driver.Value type []uint8 into type *models.Coordinates
And the controller that I use to extract the information is this.
func (b *BusinessRepoImpl) Select() ([]models.Business, error) {
    business_list := make([]models.Business, 0)
    rows, err := b.Db.Query("SELECT business.id, business.name, business.owner, business.coordinates, business.reason_froggy, category.category FROM business INNER JOIN category on category.id = business.id_category group by business.id, business.name, business.owner, business.coordinates, business.reason_froggy, category.category")
    if err != nil {
        return business_list, err
    }
    for rows.Next() {
        business := models.Business{}
        err := rows.Scan(&business.ID, &business.Name, &business.Owner, &business.Location, &business.Reason, &business.Category)
        if err != nil {
            break
        }
        business_list = append(business_list, business)
    }
    err = rows.Err()
    if err != nil {
        return business_list, err
    }
    return business_list, nil
}
Can anyone please tell me how to solve this issue: retrieve the JSON array of objects and assign it to the coordinates field inside Business?
1.
As you can see from the documentation, the Scanner interface, to be satisfied, requires the method
Scan(src interface{}) error
But your *Coordinates type implements a different method
Scan(value []interface{}) error
The types interface{} and []interface{} are two very different things.
2.
The Scanner interface must be implemented on the type of the field which you want to pass as an argument to rows.Scan. That is, you've implemented your Scan method on *Coordinates but the type of the Location field is []Coordinates.
Again, same thing, the types *Coordinates and []Coordinates are two very different things.
So the solution is to implement the interface properly and on the proper type.
Note that since Go doesn't allow adding methods to unnamed types, and []Coordinates is an unnamed type, you need to declare a new type that you'll then use in place of []Coordinates.
type CoordinatesSlice []Coordinates

func (s *CoordinatesSlice) Scan(src interface{}) error {
    switch v := src.(type) {
    case []byte:
        return json.Unmarshal(v, s)
    case string:
        return json.Unmarshal([]byte(v), s)
    }
    return errors.New("type assertion failed")
}

// ...

type Business struct {
    // ...
    Location CoordinatesSlice `json:"location,omitempty"`
    // ...
}
NOTE
If the business location will always have only one pair of coordinates, store it in the db as a jsonb object rather than an array, change the Location type from CoordinatesSlice to Coordinates, and move the Scanner implementation from *CoordinatesSlice to *Coordinates accordingly.
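A minimal sketch of that single-object variant, mirroring the slice version above:

// Location Coordinates `json:"location,omitempty"` // the field type becomes Coordinates

func (c *Coordinates) Scan(src interface{}) error {
    switch v := src.(type) {
    case []byte:
        return json.Unmarshal(v, c)
    case string:
        return json.Unmarshal([]byte(v), c)
    }
    return errors.New("type assertion failed")
}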
I know that this solution is really unoptimized, but it was the only way I got it to work: obtain the jsonb column as a string, then unmarshal it into the Location attribute.

var location string // scanned from the jsonb column, e.g. rows.Scan(..., &location, ...)
if err := json.Unmarshal([]byte(location), &business.Location); err != nil {
    panic(err)
}
I have a Postgres db that I would like to generate tables for and write to using Gorp. However, I get an error message when I try to insert, due to the slices contained within my structs: "sql: converting argument $4 type: unsupported type []core.EmbeddedStruct, a slice of struct".
My structs look as follows:
type Struct1 struct {
    ID             string
    Name           string
    Location       string
    EmbeddedStruct []EmbeddedStruct
}

type EmbeddedStruct struct {
    ID              string
    Name            string
    struct1Id       string
    EmbeddedStruct2 []EmbeddedStruct2
}

type EmbeddedStruct2 struct {
    ID               string
    Name             string
    embeddedStructId string
}
func (repo *PgStruct1Repo) Write(t *core.Struct1) error {
    trans, err := createTransaction(repo.dbMap)
    if err != nil {
        return err
    }
    defer closeTransaction(trans)

    // Check to see if struct1 item already exists
    exists, err := repo.exists(t.ID, trans)
    if err != nil {
        return err
    }

    if !exists {
        log.Debugf("saving new struct1 with ID %s", t.ID)
        err = trans.Insert(t)
        if err != nil {
            return err
        }
    }
    return nil
}
Does anyone have experience with, or know whether, Gorp supports inserting slices? From what I've read, it seems to support slices only for SELECT statements.
Gorp supports inserting a variadic number of records, so if you have a slice records (of type []interface{}), you can do:
err = db.Insert(records...)
However, from your question it seems you want to save a single record that has a slice struct field.
https://github.com/go-gorp/gorp
gorp doesn't know anything about the relationships between your structs (at least not yet), so you have to handle the relationship yourself. The way I personally would solve this issue is to have gorp ignore the slice on the parent:
type Struct1 struct {
    ID             string
    Name           string
    Location       string
    EmbeddedStruct []EmbeddedStruct `db:"-"`
}
And then use the PostInsert hook to save the EmbeddedStruct records (side note: this is a poor name, as it is not actually an embedded struct):
func (s *Struct1) PostInsert(sql gorp.SqlExecutor) error {
    for i := range s.EmbeddedStruct {
        s.EmbeddedStruct[i].struct1Id = s.ID
    }
    // gorp's Insert takes ...interface{}, so a typed slice must be converted first
    rows := make([]interface{}, len(s.EmbeddedStruct))
    for i := range s.EmbeddedStruct {
        rows[i] = &s.EmbeddedStruct[i]
    }
    return sql.Insert(rows...)
}
And then repeat the process on EmbeddedStruct2.
Take care to set up the relationships properly on the DB side to ensure referential integrity (e.g. ON DELETE CASCADE / RESTRICT), and it would probably be a good idea to wrap the whole thing in a transaction, as sketched below.
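For example, a rough sketch of a transactional write (writeStruct1 and the dbmap handle are my own names here; gorp fires the PostInsert hook through the transaction's SqlExecutor):

func writeStruct1(dbmap *gorp.DbMap, t *Struct1) error {
    trans, err := dbmap.Begin()
    if err != nil {
        return err
    }
    // Insert triggers PostInsert, which saves the child rows on the same transaction.
    if err := trans.Insert(t); err != nil {
        trans.Rollback()
        return err
    }
    return trans.Commit()
}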
I've been doing a lot of searching, and although I can find a bunch of good articles that explain how to work with the pq package directly, I'm at a loss when working in the context of go-gorm and the PostgreSQL dialect.
If I use ChecksMap in checks.go, it doesn't let me insert, but it will let me find. If I use postgres.Jsonb, it lets me insert and query, but the found records are of type Jsonb.
GORM uses the struct of the pointer to determine the db table and schema. This causes headaches when using a generic searchHandler utility that returns a JSON response from the API. For any non-jsonb types, GORM works with the proper structs and uses the JSON tags, but for jsonb, since it doesn't have a reference to the jsonb's "struct", it can't use the JSON tags. This causes the returned API JSON to have capitalized keys:
{
  "results": {
    "id": "123",
    "someId": "456",
    "results": [
      {
        "Description": "foobar"
      }
    ]
  }
}
Is there an elegant way to handle this sort of thing so the jsonb results column will be of the correct struct and use the lowercase JSON tags? Or am I just trying to do things that shouldn't be done within the context of go-gorm?
POSTGRESQL DDL
CREATE TABLE checks (
    id text,
    some_id text,
    results jsonb
);
checks.go
type CheckRules struct {
    Description string `json:"description"`
}

type ChecksMap map[string]CheckRules

type Checks struct {
    ID      string         `gorm:"primary_key" json:"id"`
    SomeID  *string        `json:"someId"`
    Results postgres.Jsonb `json:"results"` // <-- this
    // Results ChecksMap `gorm:"type:jsonb" json:"results"` // <-- or this
}

// func (cm *ChecksMap) Value() (driver.Value, error) {...}
// func (cm *ChecksMap) Scan(val interface{}) error {...}
insertChecks.go
var resultsVal = getResultsValue() // simplified
resJson, _ := json.Marshal(resultsVal)

someID := "123"
checks := Checks{
    SomeID:  &someID,
    Results: postgres.Jsonb{RawMessage: json.RawMessage(resJson)},
}
err := db.Create(&checks).Error
// ... some error handling
getChecks.go
var checks Checks
err := db.Find(&checks).Error
// ... some error handling
searchHandler.go
func SearchHandler(db *gorm.DB, model, results interface{}) func(c echo.Context) error {
    return func(c echo.Context) error {
        err := db.Find(results).Error
        // ... some error handling

        jsnRes, _ := json.Marshal(results) // <-- uppercase "keys"

        return c.JSON(http.StatusOK, struct {
            Results interface{} `json:"results"`
        }{
            Results: string(jsnRes),
        })
    }
}
You can use the custom ChecksMap type but implement the driver.Valuer interface on its value receiver, not the pointer receiver.
So, instead of:
func (cm *ChecksMap) Value() (driver.Value, error) { ...
You would write this:
func (cm ChecksMap) Value() (driver.Value, error) {
    if cm == nil {
        return nil, nil
    }
    return json.Marshal(cm)
}
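The Scan counterpart (the one commented out in the question) can keep the pointer receiver; a minimal sketch:

func (cm *ChecksMap) Scan(val interface{}) error {
    b, ok := val.([]byte)
    if !ok {
        return errors.New("type assertion to []byte failed")
    }
    return json.Unmarshal(b, cm)
}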
Alternatively, you can probably make it work with the pointer implementation, but then you'll have to turn the field into a pointer, e.g.:
type Checks struct {
    ID      string     `gorm:"primary_key" json:"id"`
    SomeID  *string    `json:"someId"`
    Results *ChecksMap `json:"results"`
}
(although I haven't tested this so I'm not 100% sure how gorm will handle this case)
I have a struct that contains math/big.Int fields. I would like to save the struct in MongoDB using mgo. Saving the numbers as strings is good enough in my situation.
I have looked at the available field tags and nothing seems to allow a custom serializer. I was expecting to implement an interface similar to encoding/json.Marshaler, but I have found no such interface in the documentation.
Here is a trivial example of what I need:
package main

import (
    "labix.org/v2/mgo"
    "math/big"
)

type Point struct {
    X, Y *big.Int
}

func main() {
    session, err := mgo.Dial("localhost")
    if err != nil {
        panic(err)
    }
    defer session.Close()

    c := session.DB("test").C("test")
    err = c.Insert(&Point{big.NewInt(1), big.NewInt(1)})
    if err != nil { // should not panic
        panic(err)
    }
    // The code runs as expected, but the fields X and Y are empty in mongo.
}
Thanks!
The similar interface is named bson.Getter:
http://labix.org/v2/mgo/bson#Getter
It can look similar to this:
func (point *Point) GetBSON() (interface{}, error) {
    return bson.D{{"x", point.X.String()}, {"y", point.Y.String()}}, nil
}
And there's also the counterpart interface in the setter side, if you're interested:
http://labix.org/v2/mgo/bson#Setter
For using it, note that the bson.Raw type provided as a parameter has an Unmarshal method, so you could have a type similar to:
type dbPoint struct {
    X string
    Y string
}
and unmarshal it conveniently:
var dbp dbPoint
err := raw.Unmarshal(&dbp)
and then use the dbp.X and dbp.Y strings to put the big ints back into the real (point *Point) being unmarshalled.
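Putting that together, a sketch of the Setter side (parsing with base 10 matches the String() output used in GetBSON above):

func (point *Point) SetBSON(raw bson.Raw) error {
    var dbp dbPoint
    if err := raw.Unmarshal(&dbp); err != nil {
        return err
    }
    point.X, point.Y = new(big.Int), new(big.Int)
    if _, ok := point.X.SetString(dbp.X, 10); !ok {
        return fmt.Errorf("invalid big.Int string %q", dbp.X)
    }
    if _, ok := point.Y.SetString(dbp.Y, 10); !ok {
        return fmt.Errorf("invalid big.Int string %q", dbp.Y)
    }
    return nil
}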