Mocking MongoDB response in Go

I'm fetching a document from MongoDB and passing it into the function transform, e.g.
var doc map[string]interface{}
err := collection.FindOne(context.TODO(), filter).Decode(&doc)
result := transform(doc)
I want to write unit tests for transform, but I'm not sure how to mock a response from MongoDB. Ideally I want to set something like this up:
func TestTransform(t *testing.T) {
byt := []byte(`
{"hello": "world",
"message": "apple"}
`)
var doc map[string]interface{}
>>> Some method here to Decode byt into doc like the code above <<<
out := transform(doc)
expected := ...
if diff := deep.Equal(expected, out); diff != nil {
t.Error(diff)
}
}
One way would be to json.Unmarshal into doc, but this sometimes gives different results. For example, if the document in MongoDB has an array in it, then that array is decoded into doc as a bson.A type, not a []interface{} type.

A member of my team recently found a hidden gem inside the official MongoDB driver for Go: https://pkg.go.dev/go.mongodb.org/mongo-driver@v1.9.1/mongo/integration/mtest. Although the package is experimental and no backward compatibility is guaranteed for it, it can help you perform unit testing, at least with this version of the driver.
You can check this cool article with plenty of examples of how to use it: https://medium.com/@victor.neuret/mocking-the-official-mongo-golang-driver-5aad5b226a78. Additionally, here is the repository with the code samples for this article: https://github.com/victorneuret/mongo-go-driver-mock.
So, based on your example and the samples from the article, I think you could try something like the following (of course, you might need to tweak and experiment with this):
func TestTransform(t *testing.T) {
mt := mtest.New(t, mtest.NewOptions().ClientType(mtest.Mock))
defer mt.Close()
mt.Run("find & transform", func(mt *mtest.T) {
myCollection = mt.Coll // package-level collection assumed to be used by myFindFunction below
expected := myStructure{...}
mt.AddMockResponses(mtest.CreateCursorResponse(1, "foo.bar", mtest.FirstBatch, bson.D{
{"_id", expected.ID},
{"field-1", expected.Field1},
{"field-2", expected.Field2},
}))
response, err := myFindFunction(expected.ID)
if err != nil {
t.Error(err)
}
out := transform(response)
if diff := deep.Equal(expected, out); diff != nil {
t.Error(diff)
}
})
}
Alternatively, you can perform more realistic testing in an automated way via integration tests with Docker containers. There are a few good packages that can help you with this:
https://github.com/ory/dockertest
https://github.com/testcontainers/testcontainers-go
I have followed this approach with the dockertest library to automate a full integration-testing environment that can be set up and torn down via the go test -v -run Integration command. See a full example here: https://github.com/AnhellO/learn-dockertest/tree/master/mongo.
Hope this helps.
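As a lighter-weight option for the placeholder in your test, the driver's extended-JSON unmarshaller (bson.UnmarshalExtJSON from go.mongodb.org/mongo-driver/bson, also shown in a later question below) decodes with the same type mapping as Decode, so nested arrays come back as bson.A rather than []interface{}. A sketch of how it could slot into your test (a tags array is added here just to show the bson.A behavior; transform and the expected value are placeholders from the question):
func TestTransform(t *testing.T) {
    byt := []byte(`
    {"hello": "world",
    "message": "apple",
    "tags": ["a", "b"]}
    `)

    // Decode with the driver's own type mapping: arrays become bson.A,
    // just as they do when calling FindOne(...).Decode(&doc).
    var doc map[string]interface{}
    if err := bson.UnmarshalExtJSON(byt, true, &doc); err != nil {
        t.Fatal(err)
    }
    if _, ok := doc["tags"].(bson.A); !ok {
        t.Fatalf("expected tags to decode as bson.A, got %T", doc["tags"])
    }

    out := transform(doc)                // transform is the function under test from the question
    expected := map[string]interface{}{} // placeholder: fill in what transform should return
    if diff := deep.Equal(expected, out); diff != nil {
        t.Error(diff)
    }
}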

The best solution for writing testable code would be to extract your data access into a DAO or data repository. You define an interface that returns what you need; this way, you can simply use a mocked version for testing.
// repository.go
type ISomeRepository interface {
Get(string) (*SomeModel, error)
}
type SomeRepository struct { ... }
func (r *SomeRepository) Get(id string) (*SomeModel, error) {
// Handling a real repository access and returning your Object
}
When you need to mock it, just create a Mock-Struct and implement the interface:
// repository_test.go
type SomeMockRepository struct { ... }
func (r *SomeMockRepository) Get(id string) (*SomeModel, error) {
return &SomeModel{...}, nil
}
func TestSomething(t *testing.T) {
// You can use your mock as ISomeRepository
var repo ISomeRepository = &SomeMockRepository{}
someModel, err := repo.Get("123")
_, _ = someModel, err // assert on these as needed
}
This works best with some kind of dependency injection, i.e. passing the repository into the consuming code as an ISomeRepository.
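A minimal sketch of what that wiring could look like (SomeService, NewSomeService and Describe are made-up names for illustration):
// service.go
type SomeService struct {
    repo ISomeRepository // injected dependency
}

func NewSomeService(repo ISomeRepository) *SomeService {
    return &SomeService{repo: repo}
}

func (s *SomeService) Describe(id string) (string, error) {
    m, err := s.repo.Get(id)
    if err != nil {
        return "", err
    }
    return fmt.Sprintf("%+v", m), nil
}

// service_test.go
func TestDescribe(t *testing.T) {
    svc := NewSomeService(&SomeMockRepository{})
    out, err := svc.Describe("123")
    if err != nil {
        t.Fatal(err)
    }
    if out == "" {
        t.Error("expected a non-empty description")
    }
}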

You can use the monkey library to patch (hook) any function of the mongo driver in tests.
For example:
func insert(collection *mongo.Collection) (int, error) {
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
u := User{
Name: "kevin",
Age: 20,
}
res, err := collection.InsertOne(ctx, u)
if err != nil {
log.Printf("error: %v", err)
return 0, err
}
id := res.InsertedID.(int)
return id, nil
}
func TestInsert(t *testing.T) {
var c *mongo.Collection
var guard *monkey.PatchGuard
guard = monkey.PatchInstanceMethod(reflect.TypeOf(c), "InsertOne",
func(c *mongo.Collection, ctx context.Context, document interface{}, opts ...*options.InsertOneOptions) (*mongo.InsertOneResult, error) {
guard.Unpatch()
defer guard.Restore()
log.Printf("record: %+v, collection: %s, database: %s", document, c.Name(), c.Database().Name())
res := &mongo.InsertOneResult{
InsertedID: 100,
}
return res, nil
})
collection := client.Database("db").Collection("person") // client is assumed to be a connected *mongo.Client
id, err := insert(collection)
require.NoError(t, err)
assert.Equal(t, id, 100)
}

Related

What is the bson syntax for $set in UpdateOne for the official mongo-go-driver

I am trying to get some familiarity with the official mongo-go-driver and the right syntax for UpdateOne.
My simplest full example follows:
(NOTE: in order to use this code you will need to substitute in your own user and server names as well as export the login password to the environment as MONGO_PW):
package main
import (
"context"
"fmt"
"os"
"go.mongodb.org/mongo-driver/bson"
"go.mongodb.org/mongo-driver/bson/primitive"
"go.mongodb.org/mongo-driver/mongo"
"go.mongodb.org/mongo-driver/mongo/options"
)
type DB struct {
User string
Server string
Database string
Collection string
Client *mongo.Client
Ctx context.Context
}
var db = DB{
User: <username>,
Server: <server_IP>,
Database: "test",
Collection: "movies",
Ctx: context.TODO(),
}
type Movie struct {
ID primitive.ObjectID `bson:"_id" json:"id"`
Name string `bson:"name" json:"name"`
Description string `bson:"description" json:"description"`
}
func main() {
if err := db.Connect(); err != nil {
fmt.Println("error: unable to connect")
os.Exit(1)
}
fmt.Println("connected")
// The code assumes the original entry for dunkirk is the following
// {"Name":"dunkirk", "Description":"a world war 2 movie"}
updatedMovie := Movie{
Name: "dunkirk",
Description: "movie about the british evacuation in WWII",
}
res, err := db.UpdateByName(updatedMovie)
if err != nil {
fmt.Println("error updating movie:", err)
os.Exit(1)
}
if res.MatchedCount < 1 {
fmt.Println("error: update did not match any documents")
os.Exit(1)
}
}
// UpdateByName changes the description for a movie identified by its name
func (db *DB) UpdateByName(movie Movie) (*mongo.UpdateResult, error) {
filter := bson.D{{"name", movie.Name}}
res, err := db.Client.Database(db.Database).Collection(db.Collection).UpdateOne(
db.Ctx,
filter,
movie,
)
if err != nil {
return nil, err
}
return res, nil
}
// Connect assumes that the database password is stored in the
// environment variable MONGO_PW
func (db *DB) Connect() error {
pw, ok := os.LookupEnv("MONGO_PW")
if !ok {
fmt.Println("error: unable to find MONGO_PW in the environment")
os.Exit(1)
}
mongoURI := fmt.Sprintf("mongodb+srv://%s:%s@%s", db.User, pw, db.Server)
// Set client options and verify connection
clientOptions := options.Client().ApplyURI(mongoURI)
client, err := mongo.Connect(db.Ctx, clientOptions)
if err != nil {
return err
}
err = client.Ping(db.Ctx, nil)
if err != nil {
return err
}
db.Client = client
return nil
}
The function signature for UpdateOne from the package docs is:
func (coll *Collection) UpdateOne(ctx context.Context, filter interface{},
update interface{}, opts ...*options.UpdateOptions) (*UpdateResult, error)
So I am clearly making some sort of mistake in creating the update interface{} argument to the function because I am presented with this error
error updating movie: update document must contain key beginning with '$'
The most popular answer here shows that I need to use a document sort of like this
{ $set: {"Name" : "The Matrix", "Decription" "Neo and Trinity kick butt" } }
but taken verbatim this will not compile in the mongo-go-driver.
I think I need some form of a bson document to comply with the Go syntax. What is the best and/or most efficient syntax to create this bson document for the update?
After playing around with this for a little while longer, and a lot of trial and error, I was able to solve the problem using the MongoDB bson package by changing the UpdateByName function in my code above as follows:
// UpdateByName changes the description for a movie identified by its name
func (db *DB) UpdateByName(movie Movie) (*mongo.UpdateResult, error) {
filter := bson.D{{"name", movie.Name}}
update := bson.D{{"$set",
bson.D{
{"description", movie.Description},
},
}}
res, err := db.Client.Database(db.Database).Collection(db.Collection).UpdateOne(
db.Ctx,
filter,
update,
)
if err != nil {
return nil, err
}
return res, nil
}
Note the use of bson.D{{"$set", ...}}. Unfortunately, given the way MongoDB has implemented the bson package, this syntax still does not pass go vet. If anyone has a comment on how to fix the lint conflict below, it would be appreciated.
go.mongodb.org/mongo-driver/bson/primitive.E composite literal uses unkeyed fields
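One way that should satisfy go vet is to write the bson.D elements with keyed struct fields (primitive.E has Key and Value fields); using bson.M, as suggested below, is another option. A sketch for the $set document above:
update := bson.D{{Key: "$set", Value: bson.D{
    {Key: "description", Value: movie.Description},
}}}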
In many cases you can replace the construction
filter := bson.D{{"name", movie.Name}}
with
filter := bson.M{"name": movie.Name}
if the order of the keys doesn't matter (bson.M is an unordered map, so the vet warning does not apply).

too many open files in mgo go server

I'm getting these errors in the logs:
Accept error: accept tcp [::]:80: accept4: too many open files;
for a Go server on Ubuntu that talks to MongoDB via mgo. They start appearing after it's been running for about a day.
code:
package main
import (
"encoding/json"
"io"
"net/http"
"gopkg.in/mgo.v2/bson"
)
var (
Database *mgo.Database
)
func hello(w http.ResponseWriter, r *http.Request) {
io.WriteString(w, "hello")
}
func setTile(w http.ResponseWriter, r *http.Request) {
var requestJSON map[string]interface{}
err := json.NewDecoder(r.Body).Decode(&requestJSON)
if err != nil {
http.Error(w, err.Error(), 400)
return
}
collection := Database.C("tiles")
if requestJSON["tileId"] != nil {
query := bson.M{"tileId": requestJSON["tileId"]}
collection.RemoveAll(query)
collection.Insert(requestJSON)
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(200)
js, _ := json.Marshal(map[string]string{"result": "ok"})
w.Write(js)
} else {
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(200)
js, _ := json.Marshal(map[string]string{"result": "missing tileId"}) // response body assumed; the original referenced an undefined js here
w.Write(js)
}
}
func getTile(w http.ResponseWriter, r *http.Request) {
var requestJSON map[string]interface{}
err := json.NewDecoder(r.Body).Decode(&requestJSON)
if err != nil {
http.Error(w, err.Error(), 400)
return
}
collection := Database.C("tiles")
var result []map[string]interface{}
if requestJSON["tileId"] != nil {
query := bson.M{"tileId": requestJSON["tileId"]}
collection.Find(query).All(&result)
}
if len(result) > 0 {
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(200)
js, _ := json.Marshal(result[0])
w.Write(js)
} else {
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(200)
js, _ := json.Marshal(map[string]string{"result": "tile id not found"})
w.Write(js)
}
}
func main() {
session, _ := mgo.Dial("localhost")
Database = session.DB("mapdb")
mux := http.NewServeMux()
mux.HandleFunc("/", hello)
mux.HandleFunc("/setTile", setTile)
mux.HandleFunc("/getTile", getTile)
http.ListenAndServe(":80", mux)
}
Is there something in there that needs closing? Or is it structured wrong in some way?
There seem to be lots of places to set the open-file limits, so I'm not sure how to find out what the limits actually are. But it seems like increasing the limit isn't the real fix anyway; surely something is being opened on every request and not closed.
This is not how you store and use a MongoDB connection in Go.
You have to store an mgo.Session, not an mgo.Database instance. Whenever you need to interact with MongoDB, you acquire a copy or a clone of the session (e.g. with Session.Copy() or Session.Clone()) and close it when you no longer need it (preferably using a defer statement). This ensures you don't leak connections.
You also religiously omit checking for errors; please don't do that. Whatever returns an error, check it and act on it properly (at the very least, print or log it).
So basically what you need to do is something like this:
var session *mgo.Session
func init() {
var err error
if session, err = mgo.Dial("localhost"); err != nil {
log.Fatal(err)
}
}
func someHandler(w http.ResponseWriter, r *http.Request) {
sess := session.Copy()
defer sess.Close() // Must close!
c := sess.DB("mapdb").C("tiles")
// Do something with the collection, e.g.
var tile bson.M
if err := c.FindId("someTileID").One(&tile); err != nil {
// Tile does not exist, send back error, e.g.:
log.Printf("Tile with ID not found: %v, err: %v", "someTileID", err)
http.NotFound(w, r)
return
}
// Do something with tile
}
See related questions:
mgo - query performance seems consistently slow (500-650ms)
Concurrency in gopkg.in/mgo.v2 (Mongo, Go)
You are missing:
defer r.Body.Close()
Make sure it is deferred near the top of the handler, before any return statement.
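Putting both suggestions together, the setTile handler from the question could be reworked along these lines (a sketch, untested; it reuses the package-level session variable from the snippet above):
func setTile(w http.ResponseWriter, r *http.Request) {
    defer r.Body.Close()

    var requestJSON map[string]interface{}
    if err := json.NewDecoder(r.Body).Decode(&requestJSON); err != nil {
        http.Error(w, err.Error(), http.StatusBadRequest)
        return
    }
    if requestJSON["tileId"] == nil {
        http.Error(w, "missing tileId", http.StatusBadRequest)
        return
    }

    // Copy the session per request and always close the copy,
    // so connections are returned to the pool instead of leaking.
    sess := session.Copy()
    defer sess.Close()

    collection := sess.DB("mapdb").C("tiles")
    if _, err := collection.RemoveAll(bson.M{"tileId": requestJSON["tileId"]}); err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    if err := collection.Insert(requestJSON); err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }

    w.Header().Set("Content-Type", "application/json")
    js, _ := json.Marshal(map[string]string{"result": "ok"})
    w.Write(js)
}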

How to marshal json string to bson document for writing to MongoDB?

What I am looking for is the equivalent of Document.parse()
in Go, something that allows me to create BSON from JSON directly. I do not want to create intermediate Go structs for marshaling.
The gopkg.in/mgo.v2/bson package has a function called UnmarshalJSON which does exactly what you want.
The data parameter should hold your JSON string as a []byte value.
func UnmarshalJSON(data []byte, value interface{}) error
UnmarshalJSON unmarshals a JSON value that may hold non-standard syntax as defined in BSON's extended JSON specification.
Example:
var bdoc interface{}
err = bson.UnmarshalJSON([]byte(`{"id": 1,"name": "A green door","price": 12.50,"tags": ["home", "green"]}`),&bdoc)
if err != nil {
panic(err)
}
err = c.Insert(&bdoc)
if err != nil {
panic(err)
}
mongo-go-driver has a function bson.UnmarshalExtJSON that does the job.
Here's the example:
var doc interface{}
err := bson.UnmarshalExtJSON([]byte(`{"foo":"bar"}`), true, &doc)
if err != nil {
// handle error
}
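If the goal is to insert the decoded document with the official driver, the result can be passed straight to InsertOne; a minimal sketch (collection is assumed to be an already-obtained *mongo.Collection):
var doc interface{}
if err := bson.UnmarshalExtJSON([]byte(`{"foo": "bar"}`), true, &doc); err != nil {
    log.Fatal(err)
}
if _, err := collection.InsertOne(context.TODO(), doc); err != nil {
    log.Fatal(err)
}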
There is no longer a way to do this directly with supported libraries (e.g. the mongo-go-driver). You would need to write your own converter based on the bson spec.
Edit: here's one that by now has seen a few Terabytes of use in prod.
https://github.com/dustinevan/mongo/blob/main/bsoncv/bsoncv.go
I do not want to create intermediate Go structs for marshaling
If you do want or need to create intermediate Go BSON structs, you could use a conversion module such as github.com/sindbach/json-to-bson-go. For example:
import (
"fmt"
"github.com/sindbach/json-to-bson-go/convert"
"github.com/sindbach/json-to-bson-go/options"
)
func main() {
doc := `{"foo": "buildfest", "bar": {"$numberDecimal":"2021"} }`
opt := options.NewOptions()
result, _ := convert.Convert([]byte(doc), opt)
fmt.Println(result)
}
Will produce output:
package main
import "go.mongodb.org/mongo-driver/bson/primitive"
type Example struct {
Foo string `bson:"foo"`
Bar primitive.Decimal128 `bson:"bar"`
}
This module is compatible with the official MongoDB Go driver, and as you can see it supports Extended JSON formats.
You can also visit https://json-to-bson-map.netlify.app to try the module in action. You can paste a JSON document, and see the Go BSON structs as output.
A simple converter that uses go.mongodb.org/mongo-driver/bson/bsonrw:
func JsonToBson(message []byte) ([]byte, error) {
reader, err := bsonrw.NewExtJSONValueReader(bytes.NewReader(message), true)
if err != nil {
return []byte{}, err
}
buf := &bytes.Buffer{}
writer, _ := bsonrw.NewBSONValueWriter(buf)
err = bsonrw.Copier{}.CopyDocument(writer, reader)
if err != nil {
return []byte{}, err
}
marshaled := buf.Bytes()
return marshaled, nil
}
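For completeness, a small usage sketch for the converter above; the returned bytes are raw BSON, so they can be unmarshalled into a bson.M (or passed wherever a raw document is accepted):
raw, err := JsonToBson([]byte(`{"foo": "bar"}`))
if err != nil {
    log.Fatal(err)
}
var doc bson.M
if err := bson.Unmarshal(raw, &doc); err != nil {
    log.Fatal(err)
}
fmt.Println(doc) // the decoded document, e.g. map[foo:bar]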

How do you select all records from a mongodb collection in golang using mgo

In MongoDB doing something like db.mycollection.find() returns all documents in a collection.
When working in Go using the package labix.org/v2/mgo, if I do, for example:
query := db.C("client").Find();
It complains that it requires input in the form of an interface. All I need to do is retrieve all documents and iterate through them and display each one for now. How do I achieve this effect? All examples I have seen seem to have filters in place.
Found a solution:
var results []client
err := db.C("client").Find(nil).All(&results)
if err != nil {
// TODO: Do something about the error
} else {
fmt.Println("Results All: ", results)
}
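If the collection is large, it may be preferable to stream the documents rather than load them all at once with All; a sketch using mgo's iterator (the same db handle and client type as above are assumed):
iter := db.C("client").Find(nil).Iter()
var doc client
for iter.Next(&doc) {
    fmt.Println(doc) // display each document as it is read
}
if err := iter.Close(); err != nil {
    // TODO: Do something about the error
}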
func (uc UserController) GetUsersList(w http.ResponseWriter,r *http.Request,p httprouter.Params){
var u []models.User
// Fetch user
if err := uc.session.DB("mydb").C("users").Find(nil).All(&u); err != nil {
w.WriteHeader(404)
fmt.Println("Results All: ", u)
return
}
uj, _ := json.Marshal(u)
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(200)
fmt.Fprintf(w, "%s", uj)
}

How to insert math/big.Int in mongo via mgo in golang

I have a struct that contains math/big.Int fields. I would like to save the struct in MongoDB using mgo. Saving the numbers as strings is good enough in my situation.
I have looked at the available field tags and nothing seems to allow a custom serializer. I was expecting to implement an interface similar to encoding/json.Marshaler, but I have found no such interface in the documentation.
Here is a trivial example of what I need.
package main
import (
"labix.org/v2/mgo"
"math/big"
)
type Point struct {
X, Y *big.Int
}
func main() {
session, err := mgo.Dial("localhost")
if err != nil {
panic(err)
}
defer session.Close()
c := session.DB("test").C("test")
err = c.Insert(&Point{big.NewInt(1), big.NewInt(1)})
if err != nil { // should not panic
panic(err)
}
// The code runs as expected, but the fields X and Y are empty in mongo
}
Thanks!
The similar interface is named bson.Getter:
http://labix.org/v2/mgo/bson#Getter
It can look similar to this:
func (point *Point) GetBSON() (interface{}, error) {
return bson.D{{"x", point.X.String()}, {"y", point.Y.String()}}, nil
}
And there's also the counterpart interface on the setter side, if you're interested:
http://labix.org/v2/mgo/bson#Setter
To use it, note that the bson.Raw type provided as a parameter has an Unmarshal method, so you could have a type similar to:
type dbPoint struct {
X string
Y string
}
and unmarshal it conveniently:
var dbp dbPoint
err := raw.Unmarshal(&dbp)
and then use the dbp.X and dbp.Y strings to put the big ints back into the real (point *Point) being unmarshalled.
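A minimal sketch of what that setter side could look like, following the dbPoint approach just described (untested; base-10 strings are assumed, matching the String() encoding used in GetBSON above):
func (point *Point) SetBSON(raw bson.Raw) error {
    var dbp dbPoint
    if err := raw.Unmarshal(&dbp); err != nil {
        return err
    }
    // Rebuild the big.Ints from their string representation.
    point.X, point.Y = new(big.Int), new(big.Int)
    if _, ok := point.X.SetString(dbp.X, 10); !ok {
        return fmt.Errorf("invalid value for x: %q", dbp.X)
    }
    if _, ok := point.Y.SetString(dbp.Y, 10); !ok {
        return fmt.Errorf("invalid value for y: %q", dbp.Y)
    }
    return nil
}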