Losing underscores in field names when inserting in mongo with mgo [duplicate] - mongodb

Some JSON data I am getting has spaces in the key names. I am using the standard encoding/json library to unmarshal the data. However, it is unable to understand the keys with spaces in the schema. For example, the following code:
package main

import (
    "encoding/json"
    "fmt"
)

func main() {
    var jsonBlob = []byte(`[
{"Na me": "Platypus", "Order": "Monotremata"},
{"Na me": "Quoll", "Order": "Dasyuromorphia"}
]`)
    type Animal struct {
        Name  string `json: "Na me"`
        Order string `json: "Order,omitempty"`
    }
    var animals []Animal
    err := json.Unmarshal(jsonBlob, &animals)
    if err != nil {
        fmt.Println("error:", err)
    }
    fmt.Printf("%+v", animals)
}
Gives the output as:
[{Name: Order:Monotremata} {Name: Order:Dasyuromorphia}]
So in the schema the library removes the space (from "Na me") and tries to find the key "Name", which is obviously not present. Any suggestions on what I can do here?

Your json tag specification is incorrect; that's why the encoding/json library defaults to the field name, which is Name. But since there is no JSON key "Name", Animal.Name keeps its zero value (the empty string "").
Unmarshaling Order still works, because the json package falls back to the field name if the json tag specification is missing or unusable (it tries an exact match first, then a case-insensitive one). Since the field name is identical to the JSON key, it works without an extra JSON tag mapping.
You can't have a space in the tag specification after the colon and before the quotation mark:
type Animal struct {
    Name  string `json:"Na me"`
    Order string `json:"Order,omitempty"`
}
With this simple change, it works (try it on the Go Playground):
[{Name:Platypus Order:Monotremata} {Name:Quoll Order:Dasyuromorphia}]
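For completeness, here is a minimal runnable version of the question's program with only the tag whitespace removed (the struct and data are unchanged):
package main

import (
    "encoding/json"
    "fmt"
)

func main() {
    var jsonBlob = []byte(`[
{"Na me": "Platypus", "Order": "Monotremata"},
{"Na me": "Quoll", "Order": "Dasyuromorphia"}
]`)
    // Note: no space between the colon and the opening quote in the tags.
    type Animal struct {
        Name  string `json:"Na me"`
        Order string `json:"Order,omitempty"`
    }
    var animals []Animal
    if err := json.Unmarshal(jsonBlob, &animals); err != nil {
        fmt.Println("error:", err)
    }
    // Prints: [{Name:Platypus Order:Monotremata} {Name:Quoll Order:Dasyuromorphia}]
    fmt.Printf("%+v", animals)
}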

Related

Unable to INSERT/UPDATE data with custom type in postgresql using golang

I am trying to insert/update data in PostgreSQL using jackc/pgx into a table that has a column of a custom type. This is the custom type written as a Go struct:
// Added this struct as a type in PSQL
type DayPriceModel struct {
    Date  time.Time `json:"date"`
    High  float32   `json:"high"`
    Low   float32   `json:"low"`
    Open  float32   `json:"open"`
    Close float32   `json:"close"`
}

// The 2 columns in my table
type SecuritiesPriceHistoryModel struct {
    Symbol  string          `json:"symbol"`
    History []DayPriceModel `json:"history"`
}
I have written this code for inserting data:
func insertToDB(data SecuritiesPriceHistoryModel) {
    DBConnection := config.DBConnection
    _, err := DBConnection.Exec(context.Background(), "INSERT INTO equity.securities_price_history (symbol, history) VALUES ($1, $2)", data.Symbol, data.History)
    if err != nil {
        log.Println(err)
    }
}
But I am unable to insert the custom data type (DayPriceModel).
I am getting an error
Failed to encode args[1]: unable to encode
The error is very long and mostly shows my data so I have picked out the main part.
How do I INSERT data into PSQL with such custom data types?
PS: An implementation using jackc/pgx is preferred, but database/sql would do just fine.
I'm not familiar enough with pgx to know how to set up support for arrays of composite types. But, as already mentioned in the comments, you can implement the driver.Valuer interface and have that implementation produce a valid literal. This also applies if you are storing slices of structs: you just need to declare a named slice type, have that implement the valuer, and then use it instead of the unnamed slice.
// named slice type
type DayPriceModelList []DayPriceModel

// The syntax for an array-of-composites literal looks like
// this: '{"(foo,123)", "(bar,987)"}'. So the implementation
// below must return the slice contents in that format.
func (l DayPriceModelList) Value() (driver.Value, error) {
    // nil slice? produce NULL
    if l == nil {
        return nil, nil
    }
    // empty slice? produce empty array
    if len(l) == 0 {
        return []byte{'{', '}'}, nil
    }

    out := []byte{'{'}
    for _, v := range l {
        // This assumes that the date field in the pg composite
        // type accepts the default time.Time format. If that is
        // not the case then simply provide v.Date in a
        // format which the composite's field understands, e.g.
        // v.Date.Format("<layout that the pg composite understands>")
        x := fmt.Sprintf(`"(%s,%f,%f,%f,%f)",`,
            v.Date,
            v.High,
            v.Low,
            v.Open,
            v.Close)
        out = append(out, x...)
    }
    out[len(out)-1] = '}' // replace the trailing "," with "}"
    return out, nil
}
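As a quick sanity check of the literal that Value produces, here is a minimal sketch (it assumes the DayPriceModel and DayPriceModelList definitions above are in the same package; the exact date format must still match what your composite type expects):
package main

import (
    "fmt"
    "log"
    "time"
)

func main() {
    list := DayPriceModelList{
        {Date: time.Date(2021, 1, 4, 0, 0, 0, 0, time.UTC), High: 10.5, Low: 9.8, Open: 10.0, Close: 10.2},
    }
    v, err := list.Value()
    if err != nil {
        log.Fatal(err)
    }
    // Prints something like:
    // {"(2021-01-04 00:00:00 +0000 UTC,10.500000,9.800000,10.000000,10.200000)"}
    fmt.Printf("%s\n", v)
}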
And when you are writing the insert query, make sure to add an explicit cast right after the placeholder, e.g.
type SecuritiesPriceHistoryModel struct {
    Symbol  string            `json:"symbol"`
    History DayPriceModelList `json:"history"` // use the named slice type
}

// ...

_, err := db.Exec(ctx, `INSERT INTO equity.securities_price_history (
    symbol
    , history
) VALUES (
    $1
    , $2::my_composite_type[])
`, data.Symbol, data.History)
// replace my_composite_type with the name of the composite type in the database
NOTE#1: Depending on the exact definition of your composite type in postgres the above example may or may not work, if it doesn't, simply adjust the code to make it work.
NOTE#2: The general approach in the example above is valid, however it is likely not very efficient. If you need the code to be performant do not use the example verbatim.

How to range over bson.D primitive.A slice mongo-go-driver?

My data structure is
{
    _id: ObjectID,
    ...
    fields: [
        { name: "Aryan" },
        { books: [ 1, 2, 3 ] },
    ]
}
In our application a user can define his own fields, but always in a key-value structure. So we had no way of knowing the structure of the data in advance.
So in our document struct we had
type Document struct {
    Fields map[string]interface{}
}
As the second parameter returned by mongo was a primitive.A ([]interface{} under the hood), the individual item could have been an array, a map, anything.
But we couldn't range over it, because as far as the compiler is concerned it is just an interface{}.
How can I get the individual values, like the book ids [1,2,3] or the name value "Aryan"?
After a couple of attempts at solving this, I was at a state where the current element itself was an interface{} ([1,2,3] in this case) and I couldn't get at the individual 1, 2, 3 values.
But I finally managed to solve it:
for k, val := range doc.Fields {
    v := reflect.ValueOf(val)
    switch v.Kind() {
    case reflect.Slice:
        // getting the individual ids
        for i := 0; i < v.Len(); i++ {
            fmt.Println(v.Index(i))
        }
    case reflect.String:
        // plain string values like "Aryan"
    default:
        // handle
    }
}
Note: v.Index(i) returns a reflect.Value.
To get the concrete value back, use a type assertion on the result of Interface():
v.Index(i).Interface().(string)  // string
v.Index(i).Interface().(float64) // float64
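An alternative that avoids the reflect package is a plain type switch over the concrete BSON types. A minimal sketch (printFields is a hypothetical helper name; it assumes the values are decoded as primitive.A, string, etc. and that go.mongodb.org/mongo-driver/bson/primitive is imported):
func printFields(doc Document) {
    for k, val := range doc.Fields {
        switch x := val.(type) {
        case primitive.A: // []interface{} under the hood
            for _, item := range x {
                fmt.Println(k, item) // e.g. the book ids 1, 2, 3
            }
        case string:
            fmt.Println(k, x) // e.g. "Aryan"
        default:
            // other BSON types (embedded documents, numbers, ...)
        }
    }
}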
Ideally you should avoid working with the interface{} type since it's very error prone and the compiler can't help you. The idiomatic way is to define a struct for your model with BSON tags, like in this example:
type MyType struct {
    ID     primitive.ObjectID `bson:"_id,omitempty"`
    Fields []Field            `bson:"fields,omitempty"`
}

type Field struct {
    Name  string `bson:"name,omitempty"`
    Books []int  `bson:"books,omitempty"`
}
Field here is defined as the union of all possible fields, which again is not ideal, but at least the compiler can help you and developers know what to expect from the database document.
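For reference, a minimal sketch of reading a document into that typed model with the official driver (collection, ctx, and id are placeholders for your own values):
var result MyType
if err := collection.FindOne(ctx, bson.M{"_id": id}).Decode(&result); err != nil {
    log.Fatal(err)
}
for _, f := range result.Fields {
    fmt.Println(f.Name, f.Books)
}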

go mongodb driver and struct, find messes with uppercase and lowercase

var Messages []Token
c2 := session.DB("mydatabase").C("pages")
query2 := c2.Find(bson.M{}).All(&Messages)
fmt.Print(Messages)
Here's the structure in my Mongo DB:
_id
pageUrl
token
pageId
I first tried the structure as this:
type Token struct {
    PageUrl string
    Token   string
    PageId  string
}
but only the token was being printed, perhaps because it's all lowercase. The other two fields were not being retrieved because they contain uppercase characters. Then I tried this:
type Token struct {
    PageUrl string `json: "pageUrl" bson: "pageUrl"`
    Token   string `json: "token" bson: "token"`
    PageId  string `json: "pageId" bson: "pageId"`
}
What are those bson and json things? I've only put them there because I've seen them on the internet, but it doesn't work, I still get only the token field.
UPDATE with solution and tested example for nested documents
I've seen that there were no posts regarding this question, so remember: the solution was to remove the spaces between json:/bson: and the opening quotation mark.
Also, to help someone who might be wondering how to do it for nested structs, here are two structures that worked for me:
type Token struct {
    PageUrl string `json:"pageUrl" bson:"pageUrl"`
    Token   string `json:"token" bson:"token"`
    PageId  string `json:"pageId" bson:"pageId"`
}

type Message struct {
    Sender struct {
        Id string `json:"id" bson:"id"`
    } `json:"sender" bson:"sender"`
    Recipient struct {
        Id string `json:"id" bson:"id"`
    } `json:"recipient" bson:"recipient"`
    Message struct {
        Mid     string `json:"mid" bson:"mid"`
        Seq     int    `json:"seq" bson:"seq"`
        Message string `json:"text" bson:"text"`
    }
}
These json and bson things are called tags.
My best guess is that because Go requires a field's first character to be upper case for it to be exported (visible outside its package), serialization packages like json or bson can only see fields whose names start with a capital letter. The name the field should have in the serialized form is then defined with a tag (to work around that restriction).
The space between bson: and "token" seems to have caused the problem.
I tried the following code snippet and it seems to work fine:
type Token struct {
    PageUrl string `json:"pageUrl" bson:"pageUrl"`
    Token   string `json:"token" bson:"token"`
    PageId  string `json:"pageId" bson:"pageId"`
}
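With the corrected tags, the original query from the question should now fill all three fields. A minimal retrieval sketch, assuming an established mgo session like in the question:
var messages []Token
c := session.DB("mydatabase").C("pages")
if err := c.Find(bson.M{}).All(&messages); err != nil {
    log.Fatal(err)
}
fmt.Printf("%+v\n", messages)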

how to get distinct values in mongodb using golang

I am trying to retrieve documents with unique values from my collection.
I have a collection with the fields: name, age, city, and rank. I want to get the distinct 'city' values from MongoDB using golang.
My struct code
type exp struct {
    name string `bson:"name"`
    age  int    `bson:"age"`
    city string `bson:"city"`
    rank int    `bson:"rank"`
}
With the following code to retrieve results from mongodb:
var result []exp //my struct type
err = coll.Find(bson.M{"City":bson.M{}}).Distinct("City",&result)
fmt.Println(result)
With this code I get an empty array as the result. How would I get all the cities?
Try this code
var result []string
err = c.Find(nil).Distinct("city", &result)
if err != nil {
    log.Fatal(err)
}
fmt.Println(result)
Due to restrictions in reflection, mgo (as well as encoding/json and other similar packages) is unable to use unexported fields to marshal or unmarshal data. What you need to do is export your fields by capitalizing the first letter:
type exp struct {
    Name string `bson:"name"`
    Age  int    `bson:"age"`
    City string `bson:"city"`
    Rank int    `bson:"rank"`
}
A side note: you do not need to specify the bson tags if the desired name is the same as the lowercase field name. The documentation for bson states:
The lowercased field name is used as the key for each exported field,
but this behavior may be changed using the respective field tag.
Edit:
I just realized you did get an empty slice and not a slice with empty struct fields. My answer is then not an actual answer to the question, but it is still an issue that you need to consider.
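Putting the two answers together, a minimal sketch with exported fields and the Distinct query (distinctCities is a hypothetical helper name; it assumes an established mgo session and an existing collection):
type exp struct {
    Name string
    Age  int
    City string
    Rank int
}

// distinctCities returns the unique "city" values from the collection.
func distinctCities(coll *mgo.Collection) ([]string, error) {
    var cities []string
    err := coll.Find(nil).Distinct("city", &cities)
    return cities, err
}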

Go mgo not storing object

Using mgo I'm unable to store any meaningful data. Only the _id gets stored
type Person struct {
    name string
    age  int
}

func main() {
    session, err := mgo.Dial("localhost")
    if err != nil {
        log.Fatal(err)
    }
    defer session.Close()

    p := Person{"Joe", 50}
    ppl := session.DB("rest").C("people")
    ppl.Insert(p)
}
The result in Mongo is just the _id field - no sign of "Joe".
Using go 1.1.2 on Arch linux, MongoDB 2.4.6.
type Person struct {
    name string
    age  int
}
The mgo package can't access unexported (lowercase) fields of your struct (i.e. no other package than the one the struct is defined in can). You need to export them (first letter must be upper case), like this:
type Person struct {
    Name string
    Age  int
}
If you wish to have the field names in lower case in the DB you must provide a struct tag for them, like this:
type Person struct {
    Name string `bson:"name"`
    Age  int    `bson:"age"`
}
See the documentation on names:
Names are as important in Go as in any other language. They even have
semantic effect: the visibility of a name outside a package is
determined by whether its first character is upper case. [...]
EDIT:
Gustavo Niemeyer (author of the mgo and bson packages) noted in the comments that unlike the json package, the bson marshaller will lowercase all struct field names when committing to the database, effectively making the last step in this answer superfluous.
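For completeness, a minimal corrected version of the program from the question, with exported fields and the insert error checked (per the note above, the bson tags are optional because the marshaller lowercases the field names anyway):
package main

import (
    "log"

    "gopkg.in/mgo.v2"
)

type Person struct {
    Name string
    Age  int
}

func main() {
    session, err := mgo.Dial("localhost")
    if err != nil {
        log.Fatal(err)
    }
    defer session.Close()

    p := Person{Name: "Joe", Age: 50}
    ppl := session.DB("rest").C("people")
    if err := ppl.Insert(p); err != nil {
        log.Fatal(err)
    }
}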