Unit test CountDocuments with MongoDB mongo-driver in Go

Currently I am trying to unit test a MongoDB adapter written in Go.
I use the mtest package of the mongo-driver.
I successfully handle Update, Find and so on, but I am having a hard time creating a working mock response for CountDocuments.
I tried different responses but always got "invalid response from server, no 'n' field".
I also can't find any good documentation about this.
func Test_Case(t *testing.T) {
    // DbInit before
    ...
    mt := mtest.New(t, mtest.NewOptions().ClientType(mtest.Mock))
    defer mt.Close()
    mt.Run(mt.Name(), func(mt *mtest.T) {
        itemId := "item-id-to-count"
        mt.AddMockResponses(mtest.CreateCursorResponse(1, "foo.bar", mtest.FirstBatch, bson.D{
            {Key: "n", Value: bson.D{{Key: "Key", Value: "123"}}},
        }))
        memberCount, err := dbCollection.CountDocuments(context.TODO(), bson.M{"_id": itemId})
        if err != nil {
            mt.Error("did not expect an error, got: ", err)
        }
        ...
    })
}
Can someone tell me what the mtest.CreateCursorResponse(1, "...) call should look like to make this work?
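For reference, CountDocuments is implemented by the driver as an aggregation (a $match stage followed by a $group that sums into a field named n), so the driver expects each document in the mocked first batch to carry a numeric n field. A minimal sketch of a response that satisfies that shape (the "foo.bar" namespace and the count 123 are placeholders carried over from the question):

```go
// CountDocuments decodes {"n": <count>} from the aggregation result, so the
// mocked batch document must contain a numeric "n", not a nested document.
mt.AddMockResponses(mtest.CreateCursorResponse(1, "foo.bar", mtest.FirstBatch, bson.D{
    {Key: "n", Value: 123},
}))
```

With a response of this shape, CountDocuments should return 123 without the "no 'n' field" error; the nested bson.D in the original attempt is what the driver's response check rejects.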

Related

Creating an index causes unauthorized error

I'm working on a project using Go microservices connecting to an Azure Cosmos DB.
On the dev/stage environment I'm using the MongoDB API 3.6, for production 4.0.
The microservices create indexes on the collections. On the dev/stage environment everything works fine, but on production I get the following error:
(Unauthorized) Error=13, Details='Response status code does not
indicate success, Number of regions attempted:1
I've checked the connection string twice, and currently there are no firewall rules on the production DB.
My code looks similar to this:
package repository

import (
    "context"
    "log"

    "go.mongodb.org/mongo-driver/mongo"
)

func Collection(db *mongo.Database, c string, indices ...mongo.IndexModel) *mongo.Collection {
    col := db.Collection(c)
    if indices != nil {
        _, err := col.Indexes().CreateMany(context.Background(), indices)
        if err != nil {
            log.Fatal(err)
        }
    }
    return col
}
// .....
package service

import (
    "go.mongodb.org/mongo-driver/bson"
    "go.mongodb.org/mongo-driver/mongo"
    "go.mongodb.org/mongo-driver/mongo/options"

    "repository"
)

col := repository.Collection(db, "my_col", []mongo.IndexModel{
    {
        Keys:    bson.M{"uuid": 1},
        Options: options.Index().SetUnique(true),
    },
}...)
Does anyone have an idea what causes this error?
I've contacted the Microsoft support and this is what they replied:
This is a limitation of accounts with Point in Time Restore. The collection must be created with a unique index.
https://learn.microsoft.com/en-us/azure/cosmos-db/continuous-backup-restore-introduction
You can use a command such as this to create the collection with the unique index already present (from the Mongo shell, or Robo3T, or another client)
MongoDB extension commands to manage data in Azure Cosmos DB’s API for MongoDB | Microsoft Docs
For example:
db.runCommand({
    customAction: "CreateCollection",
    collection: "my_collection",
    shardKey: "my_shard_key",
    offerThroughput: 100,
    indexes: [
        { key: { _id: 1 }, name: "_id_1" },
        { key: { a: 1, b: 1 }, name: "a_1_b_1", unique: true }
    ]
})
So now my code looks like this:
func Collection(db *mongo.Database, c string, indices []bson.M) *mongo.Collection {
    ctx, cls := context.WithTimeout(context.Background(), time.Second*15)
    defer cls()
    if names, _ := db.ListCollectionNames(ctx, bson.M{"name": c}); len(names) < 1 {
        cmd := bson.D{{Key: "customAction", Value: "CreateCollection"}, {Key: "collection", Value: c}}
        if indices != nil {
            cmd = append(cmd, bson.E{Key: "indexes", Value: indices})
        }
        res := db.RunCommand(ctx, cmd)
        if res.Err() != nil {
            log.Fatal(res.Err())
        }
    }
    return db.Collection(c)
}
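Since the custom CreateCollection action takes index documents rather than mongo.IndexModel values, the call site changes accordingly. A hedged sketch of what the caller might now look like (the index-document fields follow the extension-command format shown above; "my_col", the uuid key, and the index name are assumptions carried over from the earlier snippet):

```go
// Index documents in the Cosmos extension-command format: key, name, unique.
col := repository.Collection(db, "my_col", []bson.M{
    {"key": bson.M{"uuid": 1}, "name": "uuid_1", "unique": true},
})
```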

How to handle validations for corrupted data on update?

I have thousands of documents, some of which are older, and the collection's validation rules have changed, so now when I want to update the old data I get "document validation failed".
My first approach was to find a way to ignore validation when updating, but I don't know how to do it, and I'm also not sure if it's the best way.
Is it a good approach to ignore validation when updating, and if so, how do I achieve it?
What I've tried:
filter := bson.M{"status": models.TICKET_STATUS_ACTIVE, "expire_at": bson.M{"$lte": time.Now()}}
update := bson.D{{"$set", bson.M{"status": models.TICKET_STATUS_EXPIRED}}}
updatedRows, err := collection.UpdateMany(dbCtx, filter, update)
if err != nil {
    fmt.Println("update error ", err)
    return
}
fmt.Println("updated rows: ", updatedRows)
Alternative solutions are appreciated.
There is a SetBypassDocumentValidation method on the update options; if it is set to true, document validation is skipped for the update.
E.g:
updatedRows, err := collection.UpdateMany(dbCtx, filter, update, options.Update().SetBypassDocumentValidation(true))
if err != nil {
    fmt.Println("update error ", err)
    return
}

Setting a TTL to expire data from a collection

Is there a correct way to configure self-deletion of data by key using the official mongo driver? The only method I found in the mongo-driver module is ExpireAfterSeconds, but I'm not sure how to use it correctly.
Here's the repository with what is ready at the moment.
You need to create a TTL index on the field that should trigger removal after n seconds.
In the code snippet below, I have created an expirationTime field on which the TTL is set. 60 seconds after the expirationTime stored in the record, the record will be removed.
The following code creates the TTL index:
ttl := int32(60) // note: `var ttl *int32; *ttl = 60` would panic with a nil pointer dereference
keys := bsonx.Doc{{Key: "expirationTime", Value: bsonx.Int32(1)}}
idx := mongo.IndexModel{Keys: keys, Options: &options.IndexOptions{ExpireAfterSeconds: &ttl}}
_, err := collection.Indexes().CreateOne(context.Background(), idx)
if err != nil {
    fmt.Println("Error occurred while creating index", err)
} else {
    fmt.Println("Index creation success")
}
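Once the index exists, the TTL clock runs from the value stored in expirationTime. A minimal sketch of inserting a document that would be purged roughly 60 seconds later (the collection handle is the one above; the payload field and value are hypothetical):

```go
// The TTL monitor removes this document about 60 seconds after expirationTime.
// Note the monitor runs periodically (roughly once a minute), so deletion is not instant.
doc := bson.M{
    "name":           "session-123", // hypothetical payload
    "expirationTime": time.Now(),    // TTL countdown starts from this timestamp
}
if _, err := collection.InsertOne(context.Background(), doc); err != nil {
    log.Println("insert failed:", err)
}
```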

Why are my mgo queries running slower than they do in PHP

I've swapped an endpoint from our PHP 7 app to a new Go service. The service takes a geographic bounding box and returns properties from a mongo database. The problem is that it's currently taking 4-5 times as long as the old PHP service took to do the same thing. ~90% of the time is spent in the GetProps function below.
var session *mgo.Session

func connectToDB() *mgo.Session {
    dialInfo := &mgo.DialInfo{
        Addrs:    []string{"xxx1.mongodb.net:27017", "xxx2.mongodb.net:27017", "xxx3.mongodb.net:27017"},
        Database: "admin",
        Username: "me",
        Password: "xxx",
        DialServer: func(addr *mgo.ServerAddr) (net.Conn, error) {
            return tls.Dial("tcp", addr.String(), &tls.Config{})
        },
        Timeout: time.Second * 10,
    }
    session, err := mgo.DialWithInfo(dialInfo)
    if err != nil {
        log.Panic(err)
    }
    session.SetMode(mgo.Monotonic, true)
    return session
}
func GetProps(propRequest Request) []Property {
    results := make([]Property, 0)
    sessionCopy := session.Copy()
    defer sessionCopy.Close()
    props := sessionCopy.DB("mapov").C("properties")
    err := props.Find(bson.M{
        "geo": bson.M{
            "$geoWithin": bson.M{
                "$geometry": bson.M{
                    "type":        "Polygon",
                    "coordinates": propRequest.BoundingPoly,
                },
            },
        },
    }).Sort("-rank").Limit(propRequest.CpPropsRequired).All(&results)
    if err != nil {
        log.Println("query failed:", err)
    }
    return results
}
func init() {
    session = connectToDB()
}
The PHP 7 service does pretty much the same thing -
$collection = $mapovdb->properties;
$query = ['geo' => [
    '$geoWithin' => [
        '$geometry' => [
            'type' => 'Polygon',
            'coordinates' => $boundingPoly
        ]
    ]
]];
$cursor = $collection->find($query, $queryOptions); // $queryOptions includes the matching sort and limit
But it's way quicker (I ran the two services side by side for 12 hours, randomising the traffic between them).
I tried changing my Property struct so that it only decodes a single field, but that didn't seem to affect the performance.
type Property struct {
    Name string `bson:"name" json:"name"`
}
What am I doing wrong? Surely I should be able to match the performance of the PHP 7 driver?
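One thing worth ruling out before blaming the driver is the query plan itself; mgo can return the server's explain output for exactly this query. A hedged sketch, reusing the props handle and request fields from GetProps above:

```go
// Ask the server how it executes the geo query; in the returned plan, look for
// an IXSCAN over the 2dsphere index rather than a COLLSCAN.
var plan bson.M
err := props.Find(bson.M{
    "geo": bson.M{"$geoWithin": bson.M{"$geometry": bson.M{
        "type":        "Polygon",
        "coordinates": propRequest.BoundingPoly,
    }}},
}).Sort("-rank").Limit(propRequest.CpPropsRequired).Explain(&plan)
if err == nil {
    log.Printf("plan: %#v", plan)
}
```

If both services produce the same plan, the difference is more likely in connection handling than in the query.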
UPDATE
I've swapped out the built-in net/http library for fasthttp. This seems to have made everything faster. I haven't had time to work out why yet (but will come back here when I do). My current theory is that the built-in http library creates a new goroutine for each new TCP connection rather than for each new HTTP request, and this is causing my db queries to queue, either because the load balancer is reusing TCP connections, or because the client is reusing them (HTTP/2?).

Mongo writes multiple documents via query.UpsertId before I can validate with query.Count()

Description: I'm using MongoDB on my project. This is the short logic for the handler that runs when a user tries to put his item up for sale. Before putting the offer into Mongo I validate it, so there would be no active offers with the same assetId.
Using:
mgo.v2
mongo 3.6
golang 1.10
Problem: If the user clicks really fast and sends several requests to my handler (say, a fast double click), the validation doesn't work, since the first offer is not in the collection yet; as a result I get 2-3 offers with the same assetId.
I tried:
Setting mongoUrl?replicaSet=rs0, so our master and slaves would know about each other
Adding time.Sleep(200 * time.Millisecond) after validation
Question:
Is there any way I can handle this with Mongo's own instruments, or could someone suggest some other workaround?
Thank you in advance!
count, _ := r.DB.C(sellOfferCollectionName).Find(
    bson.M{
        "state":     someState,
        "asset_id":  assetId,
        "seller_id": sellerId,
    },
).Count()
if count > 0 {
    return
}
id := uuid.New().String()
offer := Offer{
    Id:           id,
    AssetId:      assetId,
    State:        someState,
    SellerId:     sellerId,
    CreatingDate: time.Now(),
}
if _, err := r.DB.C(sellOfferCollectionName).UpsertId(offer.Id, offer); err != nil {
    return err
}
UPDATE
To reproduce the problem more reliably, I wrote this little test program; it managed to write 60 documents before the validation (count > 0) kicked in. This example fully recreates my problem.
type User struct {
    Id        string `bson:"_id"`
    FirstName string `bson:"first_name"`
    LastName  string `bson:"last_name"`
    State     bool   `bson:"state"`
}

func main() {
    mongoSession, mgErr := mgo.Dial("127.0.0.1:27018")
    if mgErr != nil {
        panic(mgErr)
    }
    var mongoDbSession = mongoSession.DB("test_db")
    for i := 0; i < 1000; i++ {
        go func() {
            count, _ := mongoDbSession.C("users").Find(
                bson.M{
                    "state":      true,
                    "first_name": "testUser",
                },
            ).Count()
            if count > 0 {
                return
            }
            user := User{
                Id:        uuid.New().String(),
                FirstName: "testUser",
                LastName:  "testLastName",
                State:     true,
            }
            if _, err := mongoDbSession.C("users").UpsertId(user.Id, user); err != nil {
                panic(err) // was panic(mgErr), which would panic with a nil error
            }
        }()
    }
    count, _ := mongoDbSession.C("users").Find(
        bson.M{
            "state":      true,
            "first_name": "testUser",
        },
    ).Count()
    fmt.Println(count)
    fmt.Scanln()
}
First thing would be to disable the "Send" button at client side while the call is in progress, so if the user double or triple clicks, that will have no effect, as the second and subsequent calls will target a disabled button, hence nothing will happen.
Unless the same order may come from multiple places and you want to save it multiple times, this is already enough and the correct way to do it.
If the ID also comes from the client, and only a single order may exist with a given ID, then the next thing you should do is simply use the order ID as the document ID in MongoDB: assign and use this value as the MongoDB _id field. This gives you the guarantee that multiple documents with the same order ID cannot exist; the 2nd attempt to insert the order would return an error. Note that Query.UpsertId() will always succeed, inserting the document if it does not exist and updating it if it does, while Query.Insert() inserts the document if it does not exist and returns an error if it already does. So neither UpsertId() nor Insert() will result in multiple documents with the same ID.
If for some reason you can't or don't want to use the order ID as the document ID, then define a unique index for the property which stores the order ID, for details see MongoDB Unique Indexes.
Note that using the MongoDB _id field, or another field with a unique index, in itself ensures you can't insert multiple documents with the same order ID (this is enforced by MongoDB). Also note that this works even if you have a cluster with multiple MongoDB instances, as writes (including inserts) always happen at the master node. So nothing else is required for this to work in a multi-server cluster environment.
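With mgo, the unique-index variant can be sketched like this (the collection and field names are taken from the question; treat this as an assumption-laden sketch, not the poster's actual code):

```go
// Enforce at the database level that no two documents share an asset_id.
// A second insert with the same asset_id fails with a duplicate-key error.
index := mgo.Index{
    Key:    []string{"asset_id"},
    Unique: true,
}
if err := r.DB.C(sellOfferCollectionName).EnsureIndex(index); err != nil {
    return err
}
```

One caveat: this makes asset_id unique across all offers regardless of state; if sold offers must keep their asset_id, a compound key such as []string{"asset_id", "state"} might be closer to what's needed.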
Eventually, after closely investigating the bug, we found that the reason was that each user request was handled in its own goroutine, so many requests meant many concurrent goroutines. As a result, our validator (which checks whether the offer is already in the collection) couldn't find it, because it was not in Mongo yet. In the end, we decided to use Redis as our validator.
Here is short implementation:
incr, err := redisClient.Incr(offer.AssetId).Result()
if err != nil {
    return err
}
if incr > 1 {
    return errors.New("ASSET_ALREADY_ON_SALE")
}
redisClient.Expire(offer.AssetId, time.Second*10)
Hope it will help someone facing the same issue.
Link on implementation description: