Go's zero value for the bool type is false. In Postgres, a BOOL column can also be undefined, represented as NULL. This leads to problems when fetching a BOOL value from Postgres in Go:
rows, err := db.Query("SELECT UNNEST('{TRUE,FALSE,NULL}'::BOOL[])")
if err != nil {
    log.Fatal(err)
}
for rows.Next() {
    var value bool
    if err := rows.Scan(&value); err != nil {
        log.Fatal(err)
    }
    fmt.Printf("Value: %t\n", value)
}
Output:
Value: true
Value: false
2014/11/17 09:34:26 sql: Scan error on column index 0: sql/driver: couldn't convert <nil> (<nil>) into type bool
What's the most idiomatic way around this problem? Neither of the two solutions I have come up with is very attractive:
Don't use Go's bool type. Instead I would probably use a string and do my own conversion that accounts for nil.
Always make sure BOOLs are either TRUE or FALSE in Postgres by using COALESCE() or some other means.
See http://golang.org/pkg/database/sql/#NullBool in the standard library.
NullBool represents a bool that may be null. NullBool implements the
Scanner interface so it can be used as a scan destination, similar to
NullString.
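A minimal sketch of how sql.NullBool could be used with the query from the question (assuming the same db handle and imports as above):

rows, err := db.Query("SELECT UNNEST('{TRUE,FALSE,NULL}'::BOOL[])")
if err != nil {
    log.Fatal(err)
}
defer rows.Close()
for rows.Next() {
    // sql.NullBool reports Valid == false when the column is NULL.
    var value sql.NullBool
    if err := rows.Scan(&value); err != nil {
        log.Fatal(err)
    }
    if value.Valid {
        fmt.Printf("Value: %t\n", value.Bool)
    } else {
        fmt.Println("Value: NULL")
    }
}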
My preference is using a pointer:
for rows.Next() {
    var value *bool
    if err := rows.Scan(&value); err != nil {
        log.Fatal(err)
    }
    if value == nil {
        // Handle NULL (value is nil); skip to avoid dereferencing a nil pointer.
        continue
    }
    fmt.Printf("Value: %t\n", *value)
}
When using SQL + GraphQL (gqlgen), my preference is to use pointers as well.
Why?
Because when I generate a schema, the generated output uses pointers, and that is much easier to connect to the corresponding struct that handles the DB operations when it contains pointers too, as sketched below.
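For illustration, a sketch of what that pairing might look like (the type and field names here are hypothetical, not taken from a real generated schema):

// gqlgen-generated model: optional GraphQL fields become pointers.
type UserModel struct {
    Email *string `json:"email"`
}

// Struct used for DB operations: nullable columns are pointers too,
// so the two can be assigned field by field without extra conversion.
type UserRow struct {
    Email *string `db:"email"`
}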
Related
I would like to know if there's any approach that would allow me to ignore null types while unmarshalling a MongoDB document into a Go struct.
Right now I have some auto-generate Go structs, something like this:
type User struct {
    Name  string `bson:"name"`
    Email string `bson:"email"`
}
Changing the types declared in this struct is not an option, and here's the problem: in a MongoDB database over which I do not have total control, some of the documents have been inserted with null values where I was originally not expecting nulls. Something like this:
{
    "name": "John Doe",
    "email": null
}
As the string types declared inside my struct are not pointers, they can't receive a nil value, so whenever I try to unmarshal this document into my struct, it returns an error.
Preventing the insertion of this kind of document into the database would be the ideal solution, but for my use case, ignoring the null values would also be acceptable. So after unmarshalling the document, my User instance would look like this:
User {
    Name:  "John Doe",
    Email: "",
}
I'm trying to find either some annotation flag, an option that could be passed to the Find/FindOne methods, or maybe even a query parameter that prevents the database from returning any field containing null values, without any success so far.
Are there any built-in solutions in the mongo-go-driver for this problem?
The problem is that the current bson codecs do not support encoding / decoding a string into / from null.
One way to handle this is to create a custom decoder for the string type in which we handle null values: we just use the empty string (and, more importantly, don't report an error).
Custom decoders are described by the type bsoncodec.ValueDecoder. They can be registered at a bsoncodec.Registry, using a bsoncodec.RegistryBuilder for example.
Registries can be set / applied at multiple levels, even to a whole mongo.Client, or to a mongo.Database or just to a mongo.Collection, when acquiring them, as part of their options, e.g. options.ClientOptions.SetRegistry().
First let's see how we can do this for string, and next we'll see how to improve / generalize the solution to any type.
1. Handling null strings
First things first, let's create a custom string decoder that can turn a null into a(n empty) string:
import (
    "errors"
    "fmt"
    "reflect"

    "go.mongodb.org/mongo-driver/bson/bsoncodec"
    "go.mongodb.org/mongo-driver/bson/bsonrw"
    "go.mongodb.org/mongo-driver/bson/bsontype"
)

type nullawareStrDecoder struct{}

func (nullawareStrDecoder) DecodeValue(dctx bsoncodec.DecodeContext, vr bsonrw.ValueReader, val reflect.Value) error {
    if !val.CanSet() || val.Kind() != reflect.String {
        return errors.New("bad type or not settable")
    }
    var str string
    var err error
    switch vr.Type() {
    case bsontype.String:
        if str, err = vr.ReadString(); err != nil {
            return err
        }
    case bsontype.Null: // THIS IS THE MISSING PIECE TO HANDLE NULL!
        if err = vr.ReadNull(); err != nil {
            return err
        }
    default:
        return fmt.Errorf("cannot decode %v into a string type", vr.Type())
    }
    val.SetString(str)
    return nil
}
OK, and now let's see how to apply this custom string decoder to a mongo.Client:
clientOpts := options.Client().
    ApplyURI("mongodb://localhost:27017/").
    SetRegistry(
        bson.NewRegistryBuilder().
            RegisterDecoder(reflect.TypeOf(""), nullawareStrDecoder{}).
            Build(),
    )
client, err := mongo.Connect(ctx, clientOpts)
From now on, whenever this client decodes results into string values, the registered nullawareStrDecoder will be called to handle the conversion; it accepts BSON null values and sets the Go empty string "".
But we can do better... Read on...
2. Handling null values of any type: "type-neutral" null-aware decoder
One way would be to create a separate, custom decoder and register it for each type we wish to handle. That seems to be a lot of work.
What we may (and should) do instead is create a single, "type-neutral" custom decoder which handles just nulls and, if the BSON value is not null, calls the default decoder to handle the non-null value.
This is surprisingly simple:
type nullawareDecoder struct {
    defDecoder bsoncodec.ValueDecoder
    zeroValue  reflect.Value
}

func (d *nullawareDecoder) DecodeValue(dctx bsoncodec.DecodeContext, vr bsonrw.ValueReader, val reflect.Value) error {
    if vr.Type() != bsontype.Null {
        return d.defDecoder.DecodeValue(dctx, vr, val)
    }
    if !val.CanSet() {
        return errors.New("value not settable")
    }
    if err := vr.ReadNull(); err != nil {
        return err
    }
    // Set the zero value of val's type:
    val.Set(d.zeroValue)
    return nil
}
We just have to figure out what to use for nullawareDecoder.defDecoder. For this we may use the default registry, bson.DefaultRegistry, to look up the default decoder for individual types.
So what we do now is register a value of our nullawareDecoder for all the types we want to handle nulls for. It's not that hard. We just list the types (or values of those types) we want this for, and we can take care of them all with a simple loop:
customValues := []interface{}{
    "",       // string
    int(0),   // int
    int32(0), // int32
}

rb := bson.NewRegistryBuilder()
for _, v := range customValues {
    t := reflect.TypeOf(v)
    defDecoder, err := bson.DefaultRegistry.LookupDecoder(t)
    if err != nil {
        panic(err)
    }
    rb.RegisterDecoder(t, &nullawareDecoder{defDecoder, reflect.Zero(t)})
}

clientOpts := options.Client().
    ApplyURI("mongodb://localhost:27017/").
    SetRegistry(rb.Build())
client, err := mongo.Connect(ctx, clientOpts)
In the example above I registered null-aware decoders for string, int and int32, but you may extend this list to your liking; just add values of the desired types to the customValues slice above.
You can go through the $exists operator and Query for Null or Missing Fields for a detailed explanation.
In the mongo-go-driver, you can try the query below:
The {"email", nil} query matches documents that either contain an email field whose value is null or do not contain the email field at all.
cursor, err := coll.Find(
    context.Background(),
    bson.D{
        {"email", nil},
    })
To instead get the records that do contain the email field with a non-null value, just wrap the condition in the $ne operator in the above query (see the documentation of the $ne operator for more details), for example:
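A sketch based on the query above (same coll handle and imports):

// Matches documents where the email field exists and is not null.
cursor, err := coll.Find(
    context.Background(),
    bson.D{
        {"email", bson.D{{"$ne", nil}}},
    })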
If you know ahead of time which fields could potentially be null in your MongoDB records, you can use pointers in your structs instead:
type User struct {
    Name  string  `bson:"name"`  // Will still fail to decode if null in Mongo
    Email *string `bson:"email"` // Will be nil in Go if null in Mongo
}
Just remember that now you'll need to code more defensively around anything that uses this value after decoding from Mongo, e.g.:
var reliableVal string
if user.Email != nil {
    reliableVal = *user.Email
} else {
    reliableVal = ""
}
I have a small Go program which uses a PostgreSQL database. In it there is a query which can return no rows, and the code I'm using to deal with this isn't working correctly.
// Get the karma value for nick from the database.
func getKarma(nick string, db *sql.DB) string {
    var karma int
    err := db.QueryRow("SELECT SUM(delta) FROM karma WHERE nick = $1", nick).Scan(&karma)
    var karmaStr string
    switch {
    case err == sql.ErrNoRows:
        karmaStr = fmt.Sprintf("%s has no karma.", nick)
    case err != nil:
        log.Fatal(err)
    default:
        karmaStr = fmt.Sprintf("Karma for %s is %d.", nick, karma)
    }
    return karmaStr
}
This logic is taken directly from the Go documentation. When there are no rows corresponding to nick, the following error occurs:
2016/07/24 19:37:07 sql: Scan error on column index 0: converting driver.Value type <nil> ("<nil>") to a int: invalid syntax
I must be doing something stupid - clues appreciated.
I believe your issue is that you're getting a NULL value back from the database, which Go translates into nil. However, you're scanning into an integer, which has no concept of nil. One thing you can do is scan into a type that implements the sql.Scanner interface (and can handle NULL values), e.g., sql.NullInt64.
In the example code in the documentation, I'd assume they have a NOT NULL constraint on the username column. I think the reason for this is that they didn't want to lead people to believe that you have to use NULL-able types across the board.
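For instance, a minimal sketch of the sql.NullInt64 approach applied to the function above (keeping the surrounding logic as in the question):

var karma sql.NullInt64
err := db.QueryRow("SELECT SUM(delta) FROM karma WHERE nick = $1", nick).Scan(&karma)

var karmaStr string
switch {
case err != nil:
    log.Fatal(err)
case !karma.Valid: // SUM over zero rows yields a NULL row, not sql.ErrNoRows
    karmaStr = fmt.Sprintf("%s has no karma.", nick)
default:
    karmaStr = fmt.Sprintf("Karma for %s is %d.", nick, karma.Int64)
}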
I reworked the code to get the results I wanted.
// Get the karma value for nick from the database.
func getKarma(nick string, db *sql.DB) string {
    var karma int
    rows, err := db.Query("SELECT SUM(delta) FROM karma WHERE nick = $1", nick)
    if err != nil {
        log.Fatal(err)
    }
    defer rows.Close()
    karmaStr := fmt.Sprintf("%s has no karma.", nick)
    if rows.Next() {
        // Scan fails when SUM returns NULL (no matching rows); keep the default message then.
        if err := rows.Scan(&karma); err == nil {
            karmaStr = fmt.Sprintf("Karma for %s is %d.", nick, karma)
        }
    }
    return karmaStr
}
Tempted to submit a documentation patch of some sort to the database/sql package.
I am using GOB encoding for my project and I figured out (after a long fight) that empty strings are not encoded/decoded correctly. In my code I use an error message (string) to report any problems; this error message is empty most of the time. If I encode an empty string, it becomes nothing, and this gives me a problem with decoding. I don't want to alter the encoding/decoding because these parts are used the most.
How can I tell Go how to encode/decode empty strings?
Example:
Playground working code.
Playground not working code.
The problem isn't the encoding/gob module, but instead the custom MarshalBinary/UnmarshalBinary methods you've declared for Msg, which can't correctly round trip an empty string. There are two ways you could go here:
Get rid of the MarshalBinary/UnmarshalBinary methods and rely on GOB's default encoding for structures. This change alone won't be enough because the fields of the structure aren't exported. If you're happy to export the fields then this is the simplest option: https://play.golang.org/p/rwzxTtaIh2
Use an encoding that can correctly round trip empty strings. One simple option would be to use GOB itself to encode the struct fields:
func (m Msg) MarshalBinary() ([]byte, error) {
    var b bytes.Buffer
    enc := gob.NewEncoder(&b)
    if err := enc.Encode(m.x); err != nil {
        return nil, err
    }
    if err := enc.Encode(m.y); err != nil {
        return nil, err
    }
    if err := enc.Encode(m.z); err != nil {
        return nil, err
    }
    return b.Bytes(), nil
}

// UnmarshalBinary modifies the receiver so it must take a pointer receiver.
func (m *Msg) UnmarshalBinary(data []byte) error {
    dec := gob.NewDecoder(bytes.NewBuffer(data))
    if err := dec.Decode(&m.x); err != nil {
        return err
    }
    if err := dec.Decode(&m.y); err != nil {
        return err
    }
    return dec.Decode(&m.z)
}
You can experiment with this example here: https://play.golang.org/p/oNXgt88FtK
The first option is obviously easier, but the second might be useful if your real example is a little more complex. Be careful with custom encoders though: GOB includes a few features that are intended to detect incompatibilities (e.g. if you add a field to a struct and try to decode old data), which are missing from this custom encoding.
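For reference, a minimal sketch of the first option, assuming the fields of Msg can simply be exported (the field names are illustrative, and the usual bytes and encoding/gob imports are assumed):

// With exported fields, gob's default struct encoding round-trips empty strings:
// a zero-value field is simply omitted on encode and left at its zero value on decode.
type Msg struct {
    X, Y, Z string
}

func roundTrip(in Msg) (Msg, error) {
    var buf bytes.Buffer
    if err := gob.NewEncoder(&buf).Encode(in); err != nil {
        return Msg{}, err
    }
    var out Msg
    err := gob.NewDecoder(&buf).Decode(&out)
    return out, err
}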
I have a struct that contains math/big.Int fields. I would like to save the struct in MongoDB using mgo. Saving the numbers as strings is good enough in my situation.
I have looked at the available field tags and nothing seems to allow a custom serializer. I was expecting to implement an interface similar to encoding/json.Marshaler, but I have found no such interface in the documentation.
Here is a trivial example of what I need.
package main

import (
    "labix.org/v2/mgo"
    "math/big"
)

type Point struct {
    X, Y *big.Int
}

func main() {
    session, err := mgo.Dial("localhost")
    if err != nil {
        panic(err)
    }
    defer session.Close()
    c := session.DB("test").C("test")
    err = c.Insert(&Point{big.NewInt(1), big.NewInt(1)})
    if err != nil { // should not panic
        panic(err)
    }
    // The code runs as expected, but the fields X and Y are empty in mongo.
}
Thanks!
The similar interface is named bson.Getter:
http://labix.org/v2/mgo/bson#Getter
It can look similar to this:
func (point *Point) GetBSON() (interface{}, error) {
    return bson.D{{"x", point.X.String()}, {"y", point.Y.String()}}, nil
}
And there's also the counterpart interface in the setter side, if you're interested:
http://labix.org/v2/mgo/bson#Setter
For using it, note that the bson.Raw type provided as a parameter has an Unmarshal method, so you could have a type similar to:
type dbPoint struct {
    X string
    Y string
}
and unmarshal it conveniently:
var dbp dbPoint
err := raw.Unmarshal(&dbp)
and then use the dbp.X and dbp.Y strings to put the big ints back into the real (point *Point) being unmarshalled.
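Putting it together, the setter could look something like this (a sketch along the lines described above, assuming the usual bson, math/big, and fmt imports):

func (point *Point) SetBSON(raw bson.Raw) error {
    // Unmarshal into the helper type holding the string representations.
    var dbp dbPoint
    if err := raw.Unmarshal(&dbp); err != nil {
        return err
    }
    // Convert the strings back into big.Int values.
    point.X = new(big.Int)
    if _, ok := point.X.SetString(dbp.X, 10); !ok {
        return fmt.Errorf("invalid big.Int string for x: %q", dbp.X)
    }
    point.Y = new(big.Int)
    if _, ok := point.Y.SetString(dbp.Y, 10); !ok {
        return fmt.Errorf("invalid big.Int string for y: %q", dbp.Y)
    }
    return nil
}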