Is there a correct way to configure data self-deletion by key using the official mongo driver? The only method that I found in the Mongo-driver module is ExpireAfterSeconds, but I'm not sure how to use it correctly.
Here's the repository with what is ready at the moment.
You need to create a TTL index on the field that determines when the document should be removed. In the code snippet below, a TTL is set on an expirationTime field: 60 seconds after the expirationTime stored in a record, that record will be removed.
Following is the code to create a TTL index:
ttl := int32(60) // note: writing through an uninitialized pointer (var ttl *int32; *ttl = 60) would panic
keys := bsonx.Doc{{Key: "expirationTime", Value: bsonx.Int32(1)}}
idx := mongo.IndexModel{Keys: keys, Options: options.Index().SetExpireAfterSeconds(ttl)}
_, err := collection.Indexes().CreateOne(context.Background(), idx)
if err != nil {
	fmt.Println("Error occurred while creating index:", err)
} else {
	fmt.Println("Index creation success")
}
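To exercise the TTL, a record can be inserted with its expirationTime set to the current time. This is only an illustrative sketch that assumes the collection handle from the snippet above; the name field is made up:

```go
// Illustrative: insert a document carrying the indexed expirationTime
// field. The TTL monitor removes it roughly 60 seconds after this
// timestamp (the monitor runs about once a minute, so deletion is
// not instantaneous).
doc := bson.D{
	{Key: "name", Value: "example"},
	{Key: "expirationTime", Value: time.Now()},
}
if _, err := collection.InsertOne(context.Background(), doc); err != nil {
	fmt.Println("Error occurred while inserting document:", err)
}
```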
Currently I am trying to unit test a MongoDB adapter written in Go.
I use the mtest package of the mongo-driver.
I successfully handle Update, Find and so on, but have a hard time creating a working mock response for CountDocuments.
I tried different responses but always got "invalid response from server, no 'n' field".
I also can't find any good documentation about this.
func Test_Case(t *testing.T) {
	// DbInit before
	...
	mt := mtest.New(t, mtest.NewOptions().ClientType(mtest.Mock))
	defer mt.Close()
	mt.Run(mt.Name(), func(mt *mtest.T) {
		itemId := "item-id-to-count"
		mt.AddMockResponses(mtest.CreateCursorResponse(1, "foo.bar", mtest.FirstBatch, bson.D{
			{Key: "n", Value: bson.D{{Key: "Key", Value: "123"}}},
		}))
		memberCount, err := dbCollection.CountDocuments(context.TODO(), bson.M{"_id": itemId})
		if err != nil {
			mt.Error("did not expect an error, got: ", err)
		}
		...
	})
}
Can someone tell me what the mtest.CreateCursorResponse(1, ...) call should look like to make it work?
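For what it's worth, the driver reads the count from a plain numeric n field in the first batch document, so the likely problem above is that n holds a nested document instead of a number. A sketch under that assumption (the 123 value is arbitrary):

```go
// The value of "n" should be a plain integer, not an embedded document.
mt.AddMockResponses(mtest.CreateCursorResponse(1, "foo.bar", mtest.FirstBatch, bson.D{
	{Key: "n", Value: int32(123)},
}))
```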
I have thousands of documents; some of them are older and the collection's validation rules have changed since, so now when I want to update the old data I get "document validation failed".
My first approach was to find a way to ignore validation when updating, but I don't know how to do that, and I'm also not sure it's the best way.
Is it a good approach to ignore validation when updating and, if so, how can I achieve it?
What I've tried:
filter := bson.M{"status": models.TICKET_STATUS_ACTIVE, "expire_at": bson.M{"$lte": time.Now()}}
update := bson.D{{"$set", bson.M{"status": models.TICKET_STATUS_EXPIRED}}}
updatedRows, err := collection.UpdateMany(dbCtx, filter, update)
if err != nil {
	fmt.Println("update error ", err)
	return
}
fmt.Println("updated rows: ", updatedRows)
Alternative solutions are appreciated.
There is a SetBypassDocumentValidation method on the update options; if it is set to true, document validation is skipped for that update.
E.g:
updatedRows, err := collection.UpdateMany(dbCtx, filter, update, options.Update().SetBypassDocumentValidation(true))
if err != nil {
	fmt.Println("update error ", err)
	return
}
Description: I'm using MongoDB in my project. This is the short logic for the handler that runs when a user tries to put his item up for sale. Before putting the offer into Mongo, I validate it, so there would be no active offers with the same assetId.
Using:
mgo.v2
mongo 3.6
golang 1.10
Problem: If the user clicks really fast and sends several requests to my handler (say, by double-clicking quickly), the validation doesn't work, as the first offer is seemingly not in the collection yet; as a result I get 2-3 offers with the same assetId.
I tried:
Setting mongoUrl?replicaSet=rs0, so our master and slaves would know about each other
Adding time.Sleep(200 * time.Millisecond) after validation
Question:
Is there any way I can handle this with mongo instruments, or someone would suggest some other workaround?
Thank you in advance!
count, _ := r.DB.C(sellOfferCollectionName).Find(
	bson.M{
		"state":     someState,
		"asset_id":  assetId,
		"seller_id": sellerId,
	},
).Count()
if count > 0 {
	return
}
id := uuid.New().String()
offer := Offer{
	Id:           id,
	AssetId:      assetId,
	State:        someState,
	SellerId:     sellerId,
	CreatingDate: time.Now(),
}
if _, err := r.DB.C(sellOfferCollectionName).UpsertId(offer.Id, offer); err != nil {
	return err
}
UPDATE
To investigate further, I wrote this little test program; it managed to write 60 documents before the validation (count > 0) kicked in. This example fully reproduces my problem.
package main

import (
	"fmt"

	"github.com/google/uuid"
	mgo "gopkg.in/mgo.v2"
	"gopkg.in/mgo.v2/bson"
)

type User struct {
	Id        string `bson:"_id"`
	FirstName string `bson:"first_name"`
	LastName  string `bson:"last_name"`
	State     bool   `bson:"state"`
}

func main() {
	mongoSession, mgErr := mgo.Dial("127.0.0.1:27018")
	if mgErr != nil {
		panic(mgErr)
	}
	var mongoDbSession = mongoSession.DB("test_db")
	for i := 0; i < 1000; i++ {
		go func() {
			count, _ := mongoDbSession.C("users").Find(
				bson.M{
					"state":      true,
					"first_name": "testUser",
				},
			).Count()
			if count > 0 {
				return
			}
			user := User{
				Id:        uuid.New().String(),
				FirstName: "testUser",
				LastName:  "testLastName",
				State:     true,
			}
			if _, err := mongoDbSession.C("users").UpsertId(user.Id, user); err != nil {
				panic(err)
			}
		}()
	}
	count, _ := mongoDbSession.C("users").Find(
		bson.M{
			"state":      true,
			"first_name": "testUser",
		},
	).Count()
	fmt.Println(count)
	fmt.Scanln()
}
The first thing would be to disable the "Send" button on the client side while the call is in progress; if the user double- or triple-clicks, the second and subsequent clicks will target a disabled button, so nothing will happen.
If the same order may legitimately come from multiple places (and you do want to save it multiple times), this is already enough and the correct way to do it.
If the ID also comes from the client, and only a single order may exist with a given ID, then the next thing you should do is simply use the order ID as the document ID in MongoDB: assign and use this value as the MongoDB _id field. This gives you the guarantee that multiple items with the same order ID cannot exist; a 2nd attempt to insert the order would return an error. Note that Collection.UpsertId() always succeeds, inserting the document if it does not exist and updating it if it does, while Collection.Insert() inserts the document if it does not exist and returns an error if it already does. Neither UpsertId() nor Insert() will result in multiple documents with the same ID.
If for some reason you can't or don't want to use the order ID as the document ID, then define a unique index on the property that stores the order ID; for details see MongoDB Unique Indexes.
Note that using the MongoDB _id field, or another field with a unique index, in itself ensures you can't insert multiple documents with the same order ID (this is enforced by MongoDB). Also note that this works even if you have a cluster with multiple MongoDB instances, as writes (including inserts) always happen on the master node. So nothing else is required for this to work in a multi-server cluster environment.
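Since the question's code uses mgo.v2, such a unique index could be ensured like this; the collection and field names are taken from the question's snippet, and the index layout is only a sketch:

```go
// Ensure at most one offer per asset id; a second insert with the
// same asset_id fails with a duplicate-key error.
index := mgo.Index{
	Key:    []string{"asset_id"},
	Unique: true,
}
if err := r.DB.C(sellOfferCollectionName).EnsureIndex(index); err != nil {
	return err
}
```

If the same asset may legitimately be listed again after an earlier offer completes, uniqueness would have to be restricted to active offers (for example with a partial index), which this sketch does not cover.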
Eventually, after close investigation of the bug, we found out the reason: when a user sent a request, it was handled in a goroutine, so many requests meant many concurrent goroutines. Our validator (which checks whether the offer is already in the collection) couldn't find it, as it was not in Mongo yet. In the end, we decided to use Redis as our validator.
Here is short implementation:
incr, err := redisClient.Incr(offer.AssetId).Result()
if err != nil {
	return err
}
if incr > 1 {
	return errors.New("ASSET_ALREADY_ON_SALE")
}
redisClient.Expire(offer.AssetId, time.Second*10)
Hope it will help someone facing the same issue.
Link to the implementation description:
how do I create a TTL (time to live) index with golang and mongodb?
This is how I'm trying to do it currently:
sessionTTL := mgo.Index{
	Key:         []string{"created"},
	Unique:      false,
	DropDups:    false,
	Background:  true,
	ExpireAfter: session_expire, // session_expire is a time.Duration
}
if err := db.C("session").EnsureIndex(sessionTTL); err != nil {
	panic(err)
}
But if I look it up using:
db.session.getIndexes()
session_expire is set to 5*time.Second. The "created" field in the document is set to the current date using time.Now(), so I expected the documents to be deleted after 5 seconds.
So the issue was that I had to drop the collection: the index already existed, so it was not recreated with the expiration constraint.
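A note for later readers: dropping the whole collection is not strictly necessary; dropping just the stale index lets EnsureIndex recreate it with the TTL option. A sketch, reusing the names from the snippet above:

```go
// Drop the old index on "created" (mgo names single-field ascending
// indexes after the field), then recreate it with ExpireAfter set.
if err := db.C("session").DropIndex("created"); err != nil {
	fmt.Println("drop index (may not have existed):", err)
}
if err := db.C("session").EnsureIndex(sessionTTL); err != nil {
	panic(err)
}
```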
I was trying to use the answer to this question, and ran into a problem. Consider the following small change:
sessionTTL := mgo.Index{
	Key:         []string{"created"},
	Unique:      false,
	DropDups:    false,
	Background:  true,
	ExpireAfter: 60 * 60, // one hour
}
if err := db.C("session").EnsureIndex(sessionTTL); err != nil {
	panic(err)
}
The problem with this is that the code silently misbehaves if ExpireAfter is not a proper time.Duration: the untyped constant 60 * 60 is interpreted as 3600 nanoseconds, not one hour.
I had to change to:
ExpireAfter: time.Duration(60 * 60) * time.Second,
I'm currently starting with Go and MongoDB. I'm writing a small web application, a blog to be more specific (which is the first webapp I write when trying a new language). Everything works fine with mgo, even if I had some trouble at first. But now I'd like to access each blog entry (articles will be referred to as entries, to stick with my models) separately. I could use the ObjectId in the URL, but that's damn ugly. For example:
mydomain.com/entries/543fd8940e82533995000002/
That's not user friendly. I did a lot of research on the internet to find a suitable solution, because with any other database engine I could just use the id (and that would be fine).
Could someone help me with the creation of a custom (public) id that would auto-increment when I insert a new entry and that I could use in the URL?
Here is the code of my model for now :
package models

import (
	"time"

	"labix.org/v2/mgo"
	"labix.org/v2/mgo/bson"
)

type (
	Entries []Entry

	Entry struct {
		ID      bson.ObjectId `bson:"_id,omitempty"`
		Title   string        `bson:"title"`
		Short   string        `bson:"short"`
		Content string        `bson:"content"`
		Posted  time.Time     `bson:"posted"`
	}
)

// Insert an entry to the database
func InsertEntry(database *mgo.Database, entry *Entry) error {
	entry.ID = bson.NewObjectId()
	return database.C("entries").Insert(entry)
}

// Find an entry by id
func GetEntryByID(database *mgo.Database, id string) (entry Entry, err error) {
	bid := bson.ObjectIdHex(id)
	err = database.C("entries").FindId(bid).One(&entry)
	return
}

// Retrieves all the entries
func AllEntries(database *mgo.Database) (entries Entries, err error) {
	err = database.C("entries").Find(nil).All(&entries)
	return
}

// Retrieve all the entries sorted by date.
func AllEntriesByDate(database *mgo.Database) (entries Entries, err error) {
	err = database.C("entries").Find(nil).Sort("-posted").All(&entries)
	return
}

// Counts all the entries.
func CountAllEntries(database *mgo.Database) (count int, err error) {
	count, err = database.C("entries").Find(nil).Count()
	return
}
As you know, the _id is a mandatory field that is automatically filled in by the driver when you do not set it. This is the behavior you have in your current application/code. You can find information about this type and its generation here: http://docs.mongodb.org/manual/reference/object-id/
However, you can create your own _id and set its value to anything that makes sense for your business.
This is why I do not understand the following statement:
I did a lot of research on the internet to find a suitable solution, because with any other database engine I could just use the id (and that would be fine).
You can use any value you want, as long as it is unique in your collection.
About auto-increment: MongoDB does not provide an auto-increment field, so you have to implement it yourself and call the increment from your application.
For example, you create a new collection that contains your "sequences/counters" (showing shell commands, not Go):
{
	_id : "entry",
	seq : 0
}
Then, when you want a new id for your document, you first have to update the counter document you created with a findAndModify, using a simple $inc operation:
var ret = db.counters.findAndModify(
	{
		query: { _id: "entry" },
		update: { $inc: { seq: 1 } },
		new: true
	}
);
You can then use the returned value in your new document as its _id.
This pattern is documented here:
http://docs.mongodb.org/manual/tutorial/create-an-auto-incrementing-field/
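In Go with mgo (which the question's model code uses), the same findAndModify can be expressed with Query.Apply. The counters collection and the "entry" key follow the shell example above; the PublicID field is a hypothetical int field you would add to Entry to hold the URL-friendly id:

```go
// Atomically increment the counter and read back the new value.
change := mgo.Change{
	Update:    bson.M{"$inc": bson.M{"seq": 1}},
	ReturnNew: true,
}
var counter struct {
	Seq int `bson:"seq"`
}
if _, err := database.C("counters").Find(bson.M{"_id": "entry"}).Apply(change, &counter); err != nil {
	return err
}
entry.PublicID = counter.Seq // hypothetical field for the auto-incremented public id
```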