I have a model like this in model.go:
type yourTableName struct {
	Name             string `gorm:"type:varchar(50)" json:"name"`
	Email            string `gorm:"type:varchar(50)" json:"email"`
	FieldNameOfJsonb JSONB  `gorm:"type:jsonb" json:"fieldnameofjsonb"`
}
I want to insert FieldNameOfJsonb as an array of objects into Postgres using GORM, like the example given below:
{
    "name": "james",
    "email": "james@gmail.com",
    "FieldNameOfJsonb": [
        {
            "someField1": "value",
            "someField2": "somevalue"
        },
        {
            "Field1": "value1",
            "Field2": "value2"
        }
    ]
}
Just add the code below to model.go:
import (
	"database/sql/driver"
	"encoding/json"
	"errors"
)

// JSONB is the type for the jsonb field of yourTableName.
type JSONB []interface{}

// Value marshals the value into JSON for storage.
func (a JSONB) Value() (driver.Value, error) {
	return json.Marshal(a)
}

// Scan unmarshals the stored JSON back into the JSONB value.
func (a *JSONB) Scan(value interface{}) error {
	b, ok := value.([]byte)
	if !ok {
		return errors.New("type assertion to []byte failed")
	}
	return json.Unmarshal(b, a)
}
Now you can insert data using db.Create(&yourTableName).
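For example, a minimal insert might look like the sketch below (db is assumed to be an initialized *gorm.DB; the field values mirror the JSON sample above):

// Sketch only: db is an assumed, already-opened *gorm.DB.
row := yourTableName{
	Name:  "james",
	Email: "james@gmail.com",
	FieldNameOfJsonb: JSONB{
		map[string]interface{}{"someField1": "value", "someField2": "somevalue"},
		map[string]interface{}{"Field1": "value1", "Field2": "value2"},
	},
}
if err := db.Create(&row).Error; err != nil {
	// handle error
}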
I have answered a similar question at https://stackoverflow.com/a/71636216/13719636.
The simplest way to use JSONB in Gorm is to use pgtype.JSONB.
Gorm uses pgx as its driver, and pgx has a package called pgtype, which has a type named pgtype.JSONB.
If you have already installed pgx as Gorm instructs, you don't need to install any other package.
This method should be the best practice, since it uses the underlying driver and no custom code is needed.
type User struct {
	gorm.Model
	Data pgtype.JSONB `gorm:"type:jsonb;default:'[]';not null"`
}
Get a value from the DB:
u := User{}
db.Find(&u)

var data []string
err := u.Data.AssignTo(&data)
if err != nil {
	// handle error
}
Set a value in the DB:
u := User{}
err := u.Data.Set([]string{"abc", "def"})
if err != nil {
	// handle error
}
db.Updates(&u)
You can use the gorm-jsonb package.
I have a Go struct which contains a slice of strings which I'd like to save as a jsonb object in Postgres with GORM.
I've come across a solution which requires using the GORM-specific type (postgres.Jsonb), which I'd like to avoid.
When I try to run AutoMigrate with a slice in my model, it panics and won't start, although when I wrap this slice in a struct (which I'm okay doing) it runs without error but won't create the column in Postgres.
type User struct {
	gorm.Model
	Data []string `sql:"type:jsonb" json:"data"`
} // Panics

type User struct {
	gorm.Model
	Data struct {
		NestedData []string
	} `sql:"type:jsonb" json:"data"`
} // Doesn't crash but doesn't create my column
Has anyone been able to manipulate jsonb with GORM without using the postgres.Jsonb type in models?
The simplest way to use JSONB in Gorm is to use pgtype.JSONB.
Gorm uses pgx as its driver, and pgx has a package called pgtype, which has a type named pgtype.JSONB.
If you have already installed pgx as Gorm instructs, you don't need to install any other package.
This method should be the best practice, since it uses the underlying driver and no custom code is needed. It can also be used for any JSONB value, not just []string.
type User struct {
	gorm.Model
	Data pgtype.JSONB `gorm:"type:jsonb;default:'[]';not null"`
}
Get a value from the DB:
u := User{}
db.Find(&u)

var data []string
err := u.Data.AssignTo(&data)
if err != nil {
	// handle error
}
Set a value in the DB:
u := User{}
err := u.Data.Set([]string{"abc", "def"})
if err != nil {
	// handle error
}
db.Updates(&u)
Maybe:
import (
	"database/sql/driver"
	"encoding/json"
	"fmt"
)

type DataJSONB []string

func (dj DataJSONB) Value() (driver.Value, error) {
	return json.Marshal(dj)
}

func (dj *DataJSONB) Scan(value interface{}) error {
	b, ok := value.([]byte)
	if !ok {
		return fmt.Errorf("[]byte assertion failed")
	}
	return json.Unmarshal(b, dj)
}
// Your bit
type User struct {
	gorm.Model
	Data DataJSONB `sql:"type:jsonb" json:"data"`
}
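A rough usage sketch, assuming db is an already-opened *gorm.DB with the Postgres dialect (the sample values are illustrative):

db.AutoMigrate(&User{}) // the custom type lets the jsonb column be created

// Value() marshals the slice to JSON on insert.
u := User{Data: DataJSONB{"alpha", "beta"}}
if err := db.Create(&u).Error; err != nil {
	// handle error
}

// Scan() unmarshals the jsonb column back into out.Data on read.
var out User
if err := db.First(&out, u.ID).Error; err != nil {
	// handle error
}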
Define a new type:
type Data map[string]interface{}
And implement the Valuer and Scanner interfaces on it, which allow the field to be converted to a value for the database and scanned back into the field, respectively:
// Value converts Data into []byte for storage
func (d Data) Value() (driver.Value, error) {
	j, err := json.Marshal(d)
	return j, err
}

// Scan puts the []byte back into Data
func (d *Data) Scan(src interface{}) error {
	source, ok := src.([]byte)
	if !ok {
		return errors.New("type assertion .([]byte) failed")
	}

	var i interface{}
	if err := json.Unmarshal(source, &i); err != nil {
		return err
	}

	*d, ok = i.(map[string]interface{})
	if !ok {
		return errors.New("type assertion .(map[string]interface{}) failed")
	}
	return nil
}
Then you can define your field in your model like this:
type User struct {
	gorm.Model
	Data Data `gorm:"type:jsonb not null default '{}'::jsonb"`
}
Using the underlying map[string]interface{} type is nice too, as you can Unmarshal/Marshal any JSON to/from it.
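For example, a sketch only (db is an assumed *gorm.DB and the JSON payload is made up):

// Any JSON object can be decoded into Data and then stored as jsonb.
var d Data
if err := json.Unmarshal([]byte(`{"theme":"dark","tags":["a","b"]}`), &d); err != nil {
	// handle error
}

u := User{Data: d}
if err := db.Create(&u).Error; err != nil {
	// handle error
}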
I've been doing a lot of searching, and although I can find a bunch of good articles that explain how to work with the pq package directly, I'm at a loss working in the context of go-gorm and the PostgreSQL dialect.
If I use ChecksMap in checks.go, it doesn't let me insert but will let me find. If I use postgres.Jsonb, it lets me insert and query, but the found records will be of type postgres.Jsonb.
Gorm uses the struct behind the pointer to determine the db table and schema. This is causing headaches when using a generic searchHandler utility which returns a JSON response from the API. For any non-jsonb types gorm works with the proper structs and uses the json tags, but for jsonb, since it doesn't have a reference to the jsonb's "struct", it can't use the json tags. This causes the returned API JSON to have capitalized keys.
{
    "results": {
        "id": "123",
        "someId": "456",
        "results": [
            {
                "Description": "foobar"
            }
        ]
    }
}
Is there an elegant way of handling this sort of thing so the jsonb results column will be of the correct struct and use the lowercased json tags? Am I just trying to do things which shouldn't be done within the context of go-gorm?
POSTGRESQL DDL
CREATE TABLE checks (
id text,
some_id text,
results jsonb
);
checks.go
type CheckRules struct {
	Description string `json:"description"`
}

type ChecksMap map[string]CheckRules

type Checks struct {
	ID      string         `gorm:"primary_key" json:"id"`
	SomeID  *string        `json:"someId"`
	Results postgres.Jsonb `json:"results"` // <-- this
	// Results ChecksMap `gorm:"type:jsonb" json:"results"` // <-- or this
}

// func (cm *ChecksMap) Value() (driver.Value, error) {...}
// func (cm *ChecksMap) Scan(val interface{}) error {...}
insertChecks.go
var resultsVal = getResultsValue() // simplified
resJson, _ := json.Marshal(resultsVal)
someID := "123"
checks := Checks{
	SomeID:  &someID,
	Results: postgres.Jsonb{RawMessage: json.RawMessage(resJson)},
}
err := db.Create(&checks).Error
// ... some error handling
getChecks.go
var checks Checks
err := db.Find(&checks).Error
// ... some error handling
searchHandler.go
func SearchHandler(db *gorm.DB, model, results interface{}) func(c echo.Context) error {
	return func(c echo.Context) error {
		err := db.Find(results).Error
		// ... some error handling
		jsnRes, _ := json.Marshal(results) // <-- uppercase "keys"

		return c.JSON(http.StatusOK, struct {
			Results interface{} `json:"results"`
		}{
			Results: string(jsnRes),
		})
	}
}
You can use the custom ChecksMap type but implement the driver.Valuer interface on its value receiver, not the pointer receiver.
So, instead of:
func (cm *ChecksMap) Value() (driver.Value, error) { ...
You would write this:
func (cm ChecksMap) Value() (driver.Value, error) {
	if cm == nil {
		return nil, nil
	}
	return json.Marshal(cm)
}
Alternatively, you can probably make it work with the pointer implementation, but then you'll have to turn the field into a pointer, e.g.:
type Checks struct {
	ID      string     `gorm:"primary_key" json:"id"`
	SomeID  *string    `json:"someId"`
	Results *ChecksMap `json:"results"`
}
(although I haven't tested this so I'm not 100% sure how gorm will handle this case)
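For reference, here is a sketch of what the full pair of methods on ChecksMap could look like (the error message wording is illustrative, not from the original question):

// Value is on the value receiver so database/sql sees it whether the
// field is ChecksMap or *ChecksMap.
func (cm ChecksMap) Value() (driver.Value, error) {
	if cm == nil {
		return nil, nil
	}
	return json.Marshal(cm)
}

// Scan is on the pointer receiver so it can populate the map in place.
func (cm *ChecksMap) Scan(val interface{}) error {
	b, ok := val.([]byte)
	if !ok {
		return fmt.Errorf("unsupported type %T for ChecksMap", val)
	}
	return json.Unmarshal(b, cm)
}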
I have the following models
type User struct {
	gorm.Model
	Languages []Language `gorm:"many2many:user_languages;"`
}

type Language struct {
	gorm.Model
	Name  string
	Users []User `gorm:"many2many:user_languages;"`
}
and for creating a language I do:
func CreateLanguage(db *gorm.DB, w http.ResponseWriter, r *http.Request) {
	language := models.Language{}

	decoder := json.NewDecoder(r.Body)
	if err := decoder.Decode(&language); err != nil {
		respondWithError(w, http.StatusBadRequest, err.Error())
		return
	}
	defer r.Body.Close()

	if err := db.Save(&language).Error; err != nil {
		respondWithError(w, http.StatusInternalServerError, err.Error())
		return
	}
	respondWithJSON(w, http.StatusCreated, language)
}
When I check the database, the Language table is filled with the language I created, but user_languages was not filled. I thought gorm was in charge of updating the intermediate table when you use gorm:"many2many:user_languages;", and that the engine would figure out how to manage creations.
So the question: how do you manage creation with gorm when you have many2many relationships?
Gorm has a feature to auto-save associations and their references from the struct. In your case you need to pass a correct JSON object; for example, if you pass:
{
"Name": "EN",
"Users": [
{
"ID": 1
}
]
}
Gorm will create a new language with the name "EN" and join it with the user row found by ID 1 by creating a new row in the user_languages table.
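Equivalently, building the struct directly in Go, something like the sketch below should create both rows (it assumes a user with ID 1 already exists):

language := models.Language{
	Name: "EN",
	Users: []models.User{
		{Model: gorm.Model{ID: 1}}, // reference an existing user by primary key
	},
}
// Save creates the language row and inserts the join row into user_languages.
if err := db.Save(&language).Error; err != nil {
	// handle error
}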
Read more about Gorm associations: http://gorm.io/docs/associations.html
My JSON looks like the following:
[
{
"key1": 1,
"key2": "val2"
},
{
"key1": 2,
"key2": "val2"
}
]
This JSON comes as a string, and I want the objects in the JSON array to be inserted as individual records in MongoDB. I referred to https://labix.org/mgo but wasn't able to find enough examples of the above use case. I'd appreciate your thoughts on finding a solution.
Unmarshal the JSON to []interface{} and insert the result in the database. Assuming that c is an mgo.Collection and data is a []byte containing the JSON value, use the following code:
var v []interface{}
if err := json.Unmarshal(data, &v); err != nil {
	// handle error
}
if err := c.Insert(v...); err != nil {
	// handle error
}
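Put together, a minimal sketch (the connection string and the database/collection names are placeholders):

package main

import (
	"encoding/json"
	"log"

	"gopkg.in/mgo.v2"
)

func main() {
	data := []byte(`[{"key1":1,"key2":"val2"},{"key1":2,"key2":"val2"}]`)

	session, err := mgo.Dial("localhost") // placeholder connection string
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	c := session.DB("test").C("items") // placeholder db/collection names

	var v []interface{}
	if err := json.Unmarshal(data, &v); err != nil {
		log.Fatal(err)
	}
	// Insert(v...) writes each array element as its own document.
	if err := c.Insert(v...); err != nil {
		log.Fatal(err)
	}
}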
In this example I will store the mixed array
test_string := `[[1,"a","b",2,"000000",[[1,2,3],[1,2,3]],"\"x","[y","'z",[[1,2,3],[1,2,3]]]]`
inside MongoDB as the JSON:
{datum: [[1,"a","b",2,"000000",[[1,2,3],[1,2,3]],"\"x","[y","'z",[[1,2,3],[1,2,3]]]]}
package main

import (
	"context"
	"encoding/json"
	"fmt"
)

type datum2 struct {
	Datum interface{} `json:"datum" bson:"datum"`
}

// get collection "users" from db(), which returns a *mongo.Client
var userCollection = db().Database("goTest").Collection("users")

func typeinterface2mongo() {
	var datum2_instance datum2
	var interfacevalue []interface{}

	test_string := `[[1,"a","b",2,"000000",[[1,2,3],[1,2,3]],"\"x","[y","'z",[[1,2,3],[1,2,3]]]]`

	if err := json.Unmarshal([]byte(test_string), &interfacevalue); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(test_string)
	fmt.Println(interfacevalue)

	datum2_instance.Datum = interfacevalue
	if _, err := userCollection.InsertOne(context.TODO(), datum2_instance); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(datum2_instance)
	fmt.Println(datum2_instance.Datum)
}
If you already have JSON data, please skip to step 2.
If you have XML data, you first need to convert it to JSON using the package github.com/basgys/goxml2json.

type JsonFileResponse struct {
	JsonData string `bson:"JsonData" json:"JsonData"`
}

Step 1: convert the XML to JSON.

jsonData, err := xml2json.Convert(xml)
if err != nil {
	panic(err) // error while converting XML to JSON
}

Step 2: open a session using your MongoDB credentials and insert the document.

collection := session.DB("database name").C("collection name")
err = collection.Insert(JsonFileResponse{JsonData: jsonData.String()})
if err != nil {
	log.Fatal(err)
}
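Put end to end, a rough sketch (the connection string, database, and collection names are placeholders, and xml2json.Convert is assumed to accept an io.Reader as shown in that package's README):

package main

import (
	"log"
	"strings"

	xml2json "github.com/basgys/goxml2json"
	"gopkg.in/mgo.v2"
)

type JsonFileResponse struct {
	JsonData string `bson:"JsonData" json:"JsonData"`
}

func main() {
	xml := strings.NewReader(`<note><to>Alice</to><body>Hello</body></note>`)

	// Step 1: XML -> JSON.
	jsonData, err := xml2json.Convert(xml)
	if err != nil {
		log.Fatal(err)
	}

	// Step 2: connect and insert (placeholder connection/database/collection names).
	session, err := mgo.Dial("localhost")
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	collection := session.DB("test").C("documents")
	if err := collection.Insert(JsonFileResponse{JsonData: jsonData.String()}); err != nil {
		log.Fatal(err)
	}
}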
The MongoDB documentation says:
The fields and values of both the <query> and <update> parameters if the <update> parameter contains only update operator expressions. The update creates a base document from the equality clauses in the <query> parameter, and then applies the update expressions from the <update> parameter.
And the mgo documentation says:
Upsert finds a single document matching the provided selector document and modifies it according to the update document. If no document matching the selector is found, the update document is applied to the selector document and the result is inserted in the collection.
But if I do an upsert like this:
session.UpsertId(data.Code, data)
I end up with an entry which has an ObjectId generated automatically by MongoDB, instead of data.Code.
Does this mean that UpsertId expects data to be formatted with update operators and that you can't use an arbitrary struct? Or what am I missing here?
P.D.: Mongo 2.4.9, mgo v2, Go version devel +f613443bb13a
EDIT:
This is a sample of what I mean, using the sample code from Neil Lunn:
package main

import (
	"fmt"

	"gopkg.in/mgo.v2"
	// "gopkg.in/mgo.v2/bson"
)

type Person struct {
	Code string
	Name string
}

func main() {
	session, err := mgo.Dial("admin:admin@localhost")
	if err != nil {
		fmt.Println("Error: ", err)
		return
		// panic(err)
	}
	defer session.Close()
	session.SetMode(mgo.Monotonic, true)
	c := session.DB("test").C("people")

	var p = Person{
		Code: "1234",
		Name: "Bill",
	}

	_, err = c.UpsertId(p.Code, &p)

	result := Person{}
	err = c.FindId(p.Code).One(&result)
	if err != nil {
		fmt.Println("FindId Error: ", err)
		return
		// panic(err)
	}
	fmt.Println("Person", result)
}
I found that the MongoDB documentation was right. The correct way to do this is to wrap the struct to be inserted in an update operator.
The sample code provided by Neil Lunn would then look like:
package main

import (
	"fmt"

	"gopkg.in/mgo.v2"
	"gopkg.in/mgo.v2/bson"
)

type Person struct {
	Code string
	Name string
}

func main() {
	session, err := mgo.Dial("admin:admin@localhost")
	if err != nil {
		fmt.Println("Error: ", err)
		return
	}
	defer session.Close()
	session.SetMode(mgo.Monotonic, true)
	c := session.DB("test").C("people")

	var p = Person{
		Code: "1234",
		Name: "Bill",
	}

	upsertdata := bson.M{"$set": p}

	info, err2 := c.UpsertId(p.Code, upsertdata)
	fmt.Println("UpsertId -> ", info, err2)

	result := Person{}
	err = c.FindId(p.Code).One(&result)
	if err != nil {
		fmt.Println("FindId Error: ", err)
		return
	}
	fmt.Println("Person", result)
}
Thank you very much for your interest and help, Neil.
You seem to be talking about assigning a struct with a custom _id field here. This really comes down to how you define your struct. Here is a quick example:
package main

import (
	"fmt"

	"gopkg.in/mgo.v2"
	"gopkg.in/mgo.v2/bson"
)

type Person struct {
	ID   string `bson:"_id"`
	Name string
}

func main() {
	session, err := mgo.Dial("127.0.0.1")
	if err != nil {
		panic(err)
	}
	defer session.Close()
	session.SetMode(mgo.Monotonic, true)
	c := session.DB("test").C("people")

	var p = Person{
		ID:   "1",
		Name: "Bill",
	}

	_, err = c.UpsertId(p.ID, &p)

	result := Person{}
	err = c.Find(bson.M{"_id": p.ID}).One(&result)
	if err != nil {
		panic(err)
	}
	fmt.Println("Person", result)
}
So in the custom definition here, I am mapping the ID field to the bson _id and defining its type as string. As shown in the example, this is exactly what happens when the document is serialized via UpsertId and then retrieved.
Now that you have elaborated, I'll point out the difference in the struct definition.
What I have produces this:
{ "_id": 1, "name": "Bill" }
What you have ( without the same mapping on the struct ) does this:
{ "_id": ObjectId("53cfa557e248860d16e1f7e0"), "code": 1, "name": "Bill" }
As you see, the _id given in the upsert will never match because none of your fields in the struct are mapped to _id. You need the same as I have:
type Person struct {
	Code string `bson:"_id"`
	Name string
}
That maps a field to the mandatory _id field, otherwise one is automatically produced for you.