Create a custom ID with Mgo - mongodb

I'm currently starting with GoLang and MongoDB. I'm writing a small web application, a blog to be more specific (which is the first web app I write when I try new languages). Everything works fine with mgo even though I had some trouble at first. But now I'd like to access each blog entry (articles will be referred to as entries to stick with my models) separately. I could use the ObjectId in the URL, but that's damn ugly. For example:
mydomain.com/entries/543fd8940e82533995000002/
That's not user friendly. I did a lot of research on the internet to find a suitable solution, because using any other database engine I could just use the id (and that would be fine).
Could someone help me with the creation of a custom (public) id which would auto-increment when I insert a new entry and that I could use in the URL?
Here is the code of my model for now :
package models

import (
    "time"

    "labix.org/v2/mgo"
    "labix.org/v2/mgo/bson"
)

type (
    Entries []Entry

    Entry struct {
        ID      bson.ObjectId `bson:"_id,omitempty"`
        Title   string        `bson:"title"`
        Short   string        `bson:"short"`
        Content string        `bson:"content"`
        Posted  time.Time     `bson:"posted"`
    }
)

// InsertEntry inserts an entry into the database.
func InsertEntry(database *mgo.Database, entry *Entry) error {
    entry.ID = bson.NewObjectId()
    return database.C("entries").Insert(entry)
}

// GetEntryByID finds an entry by id.
func GetEntryByID(database *mgo.Database, id string) (entry Entry, err error) {
    // Guard against malformed ids: bson.ObjectIdHex panics on invalid hex input.
    if !bson.IsObjectIdHex(id) {
        err = mgo.ErrNotFound
        return
    }
    err = database.C("entries").FindId(bson.ObjectIdHex(id)).One(&entry)
    return
}

// AllEntries retrieves all the entries.
func AllEntries(database *mgo.Database) (entries Entries, err error) {
    err = database.C("entries").Find(nil).All(&entries)
    return
}

// AllEntriesByDate retrieves all the entries sorted by date.
func AllEntriesByDate(database *mgo.Database) (entries Entries, err error) {
    err = database.C("entries").Find(nil).Sort("-posted").All(&entries)
    return
}

// CountAllEntries counts all the entries.
func CountAllEntries(database *mgo.Database) (count int, err error) {
    count, err = database.C("entries").Find(nil).Count()
    return
}

As you know, the _id is a mandatory field that is automatically filled by the driver when you do not set it. This is the behavior you have in your current application/code. You can find information about this type and its generation here: http://docs.mongodb.org/manual/reference/object-id/
However, you can create your own _id and set the value to anything that makes sense for your business.
This is why I do not understand the following statement:
I did a lot of research on the internet to find a suitable solution, because using any other database engine I could just use the id (and that would be fine).
You can use any value you want, as long as it is unique within your collection.
About the auto-increment: MongoDB does not provide an auto-increment field, so you have to implement it yourself and call the increment from your application.
For example, you create a new collection that contains your "sequences/counters" (showing shell commands, not Go):
{
    _id : "entry",
    seq : 0
}
Then, when you want a new id for your document, you first have to update the counter document you created, using findAndModify with a simple $inc operation:
var ret = db.counters.findAndModify(
    {
        query: { _id: "entry" },
        update: { $inc: { seq: 1 } },
        new: true
    }
);
You can then use the returned value in your new document as its _id.
This pattern is documented here:
http://docs.mongodb.org/manual/tutorial/create-an-auto-incrementing-field/
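In Go with mgo, the same findAndModify pattern is available through Query.Apply. Below is a minimal sketch, assuming a "counters" collection and a variant of the entry model whose _id is the counter value; NextSequence, NumberedEntry and the "counters" collection name are illustrative, not part of the original code:

// NextSequence atomically increments and returns the counter stored
// under the given name in the (assumed) "counters" collection.
func NextSequence(database *mgo.Database, name string) (int, error) {
    var doc struct {
        Seq int `bson:"seq"`
    }
    change := mgo.Change{
        Update:    bson.M{"$inc": bson.M{"seq": 1}},
        Upsert:    true,
        ReturnNew: true,
    }
    _, err := database.C("counters").Find(bson.M{"_id": name}).Apply(change, &doc)
    return doc.Seq, err
}

// NumberedEntry is a hypothetical variant of Entry whose _id is the
// auto-incremented, URL-friendly number.
type NumberedEntry struct {
    ID     int       `bson:"_id"`
    Title  string    `bson:"title"`
    Posted time.Time `bson:"posted"`
}

func InsertNumberedEntry(database *mgo.Database, entry *NumberedEntry) error {
    seq, err := NextSequence(database, "entry")
    if err != nil {
        return err
    }
    entry.ID = seq
    return database.C("entries").Insert(entry)
}

The URL can then carry the small integer (mydomain.com/entries/42/), and a GetEntryByID-style lookup could use FindId with that number after converting the path segment with strconv.Atoi.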

Related

Check if a new field was added in a specific document like documentChanges for a collection in Firestore

I use this code to load comments in a table view:
func observePostComments(postId: String, completion: @escaping (String) -> Void) {
    let db = Firestore.firestore()
    db.collection("post-comments").document(postId).addSnapshotListener { (snapshot, err) in
        if snapshot!.exists {
            for key in (snapshot?.data()!.keys)! {
                completion(key)
            }
        } else {
            return
        }
    }
}
It works like it should, but every time a user creates a new comment, all comments are added again. I know how it works for a collection with:
querySnapshot?.documentChanges.forEach { diff in
if (diff.type == .added) { ....
But I cannot figure out how to implement that functionality on a document/field level. If I try to do the same on a document level, I receive
Value of type 'DocumentSnapshot?' has no member 'documentChanges'.
How can I track changes on a specific document level, i.e. when a new key-value pair is added to a document?
Firestore's change detection only works on complete documents. If you need to know what changed inside a document, you will have to detect this in your own code, for example by comparing the previous DocumentSnapshot with the new one.
The exact way to do this depends a bit on what data you store, but there are two broad approaches:
You take something that is unique about each comment, and check if that's already present in your UI. This can for example be the ID of each comment, but anything else that's unique works too.
You store a timestamp for each comment, and keep track of the most recent timestamp you've already processed. Then in an update, you skip all comments up until that timestamp.
Another approach would be to clear the UI before adding the same comments to it. So something like:
db.collection("post-comments").document(postId).addSnapshotListener { (snapshot, err) in
if snapshot!.exists {
clearCommentsFromUI() // this is a function you will have to implement
for key in (snapshot?.data()!.keys)! {
completion(key)
}

FireStore - how to get around array "does-not-contain" queries

After some research, it seems clear that I cannot use FireStore to query for items a given array does NOT contain. Does anyone have a workaround for this use case?
After a user signs up, the app fetches a bunch of cards that each have a corresponding "card" document in FireStore. After a user interacts with a card, the card document adds the user's uid to a field array (ex: usersWhoHaveSeenThisCard: [userUID]) and the "user" document adds the card's uid to a field array (ex: cardsThisUserHasSeen: [cardUID]). The "user" documents live in a "user" collection and the "card" documents live in a "card" collection.
Currently, I'd like to fetch all cards that a user has NOT interacted with. However, this is problematic, as I only know the cards that a user has interacted with, so a .whereField(usersWhoHaveSeenThisCard, arrayContains: currentUserUID) will not work, as I'd need an "arrayDoesNotContain" statement, which does not exist.
Finally, a user cannot own a card, so I cannot create a true/false boolean field in the card document (ex: userHasSeenThisCard: false) and search on that criteria.
The only solution I can think of would be to create a new field array on the card document that includes every user who has NOT seen a card (ex: usersWhoHaveNotSeenThisCard: [userUID]), but that means that every user who signs up would have to write their uid to 1000+ card documents, which would eat up my data.
I might just be out of luck, but am hoping someone more knowledgeable with NoSQL / FireStore could provide some insight.
// If any code sample would help, please let me know and I'll update - I think this is largely conceptual as of now
As you've discovered from query limitations, there is no easy workaround for this using Cloud Firestore alone. You will need to somehow store a list of documents seen, load that into memory in the client app, then manually subtract those documents from the query results of all potential documents.
You might want to consider augmenting your app with another database that can do this sort of operation more cleanly (such as a SQL database that can perform joins and subqueries), and maintain them in parallel.
Either that, or require all the documents to be seen in a predictable order, such as by timestamp. Then all you have to store is the timestamp of the last document seen, and use that to filter the results.
There is an accepted and good answer; however, it doesn't provide a direct solution to the question, so here goes... (this may or may not be helpful, but it does work)
I don't know exactly what your Firestore structure is so here's my assumption:
cards
    card_id_0
        usersWhoHaveSeenThisCard
            0: uid_0
            1: uid_1
            2: uid_2
    card_id_1
        usersWhoHaveSeenThisCard
            0: uid_2
            1: uid_3
    card_id_2
        usersWhoHaveSeenThisCard
            0: uid_1
            1: uid_3
Suppose we want to know which cards uid_2 has not seen - which in this case is card_id_2
func findCardsUserHasNotSeen(uidToCheck: String, completion: @escaping ([String]) -> Void) {
    let ref = self.db.collection("cards")
    ref.getDocuments(completion: { snapshot, err in
        if let err = err {
            print(err.localizedDescription)
            return
        }
        guard let docs = snapshot?.documents else {
            print("no docs")
            return
        }
        var documentsIdsThatDoNotContainThisUser = [String]()
        for doc in docs {
            let uidArray = doc.get("usersWhoHaveSeenThisCard") as! [String]
            if uidArray.contains(uidToCheck) == false {
                documentsIdsThatDoNotContainThisUser.append(doc.documentID)
            }
        }
        completion(documentsIdsThatDoNotContainThisUser)
    })
}
Then, the use case looks like this:
func checkUserAction() {
    let uid = "uid_2" // the user id to check
    self.findCardsUserHasNotSeen(uidToCheck: uid, completion: { result in
        if result.count == 0 {
            print("user: \(uid) has seen all cards")
            return
        }
        for docId in result {
            print("user: \(uid) has not seen: \(docId)")
        }
    })
}
and the output
user: uid_2 has not seen: card_id_2
This code goes through the documents, gets the array of uids stored within each document's usersWhoHaveSeenThisCard node, and determines whether the uid is in the array. If not, it adds that documentID to the documentsIdsThatDoNotContainThisUser array. Once all docs have been checked, the array of documentIDs that do not contain the user id is returned.
Knowing how fast Firestore is, I ran the code against a large dataset and the results were returned very quickly so it should not cause any kind of lag for most use cases.

Mongo writes query.UpsertId multiple documents before I can validate with query.Count()

Description: I'm using MongoDB in my project. This is the short logic for the handler that runs when a user tries to put an item up for sale. Before putting the offer into Mongo, I validate it, so there would be no active offers with the same assetId.
Using:
mgo.v2
mongo 3.6
golang 1.10
Problem: If a user clicks really fast and sends several requests to my handler (let's say he double-clicks the mouse quickly), the validation doesn't work: it seems the first offer is not in the collection yet, so as a result I get 2-3 offers with the same assetId.
I tried:
Setting mongoUrl?replicaSet=rs0, so our master and slaves would know about each other
Adding time.Sleep(200 * time.Millisecond) after the validation
Question:
Is there any way I can handle this with mongo instruments, or someone would suggest some other workaround?
Thank you in advance!
count, _ := r.DB.C(sellOfferCollectionName).Find(
    bson.M{
        "state":     someState,
        "asset_id":  assetId,
        "seller_id": sellerId,
    },
).Count()
if count > 0 {
    return
}

id := uuid.New().String()
offer := Offer{
    Id:           id,
    AssetId:      assetId,
    State:        someState,
    SellerId:     sellerId,
    CreatingDate: time.Now(),
}
if _, err := r.DB.C(sellOfferCollectionName).UpsertId(offer.Id, offer); err != nil {
    return err
}
UPDATE
To reproduce the problem, I wrote this little test program; it managed to write 60 documents before the validation (count > 0) kicked in. This example fully reproduces my problem.
package main

import (
    "fmt"

    "github.com/google/uuid" // assumed import path; the original snippet omits its imports
    "gopkg.in/mgo.v2"
    "gopkg.in/mgo.v2/bson"
)

type User struct {
    Id        string `bson:"_id"`
    FirstName string `bson:"first_name"`
    LastName  string `bson:"last_name"`
    State     bool   `bson:"state"`
}

func main() {
    mongoSession, mgErr := mgo.Dial("127.0.0.1:27018")
    if mgErr != nil {
        panic(mgErr)
    }
    var mongoDbSession = mongoSession.DB("test_db")

    for i := 0; i < 1000; i++ {
        go func() {
            count, _ := mongoDbSession.C("users").Find(
                bson.M{
                    "state":      true,
                    "first_name": "testUser",
                },
            ).Count()
            if count > 0 {
                return
            }
            user := User{
                Id:        uuid.New().String(),
                FirstName: "testUser",
                LastName:  "testLastName",
                State:     true,
            }
            if _, err := mongoDbSession.C("users").UpsertId(user.Id, user); err != nil {
                panic(err)
            }
        }()
    }

    count, _ := mongoDbSession.C("users").Find(
        bson.M{
            "state":      true,
            "first_name": "testUser",
        },
    ).Count()
    fmt.Println(count)
    fmt.Scanln()
}
The first thing would be to disable the "Send" button on the client side while the call is in progress, so if the user double- or triple-clicks, it will have no effect: the second and subsequent clicks will target a disabled button, hence nothing will happen.
If the same order may come from multiple places and you do want to save those submissions multiple times, this is already enough and the correct way to do it.
If the ID also comes from the client, and only a single order may exist with a given ID, then the next thing you should do is simply use the order ID as the document ID in MongoDB: assign and use this value as the MongoDB _id field. This gives you the guarantee that multiple items with the same order ID cannot exist; a second attempt to insert the same order would return an error. Note that Collection.UpsertId() will always succeed, inserting the document if it does not exist and updating it if it does, while Collection.Insert() inserts the document if it does not exist and returns an error if it already does. Neither UpsertId() nor Insert() will result in multiple documents with the same ID.
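A rough sketch of that idea with mgo, reusing the Offer struct from the question and assuming the client supplies a stable offer/order ID (orderID here is illustrative) and that the Id field is mapped to _id via its bson tag; mgo.IsDup reports whether an error is a duplicate-key error:

offer := Offer{
    Id:           orderID, // client-supplied ID stored as the MongoDB _id
    AssetId:      assetId,
    State:        someState,
    SellerId:     sellerId,
    CreatingDate: time.Now(),
}
err := r.DB.C(sellOfferCollectionName).Insert(offer)
if mgo.IsDup(err) {
    // A document with this _id already exists, so this is a duplicate request.
    return errors.New("OFFER_ALREADY_EXISTS")
}
if err != nil {
    return err
}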
If for some reason you can't or don't want to use the order ID as the document ID, then define a unique index on the property which stores the order ID; for details see MongoDB Unique Indexes.
Note that using the MongoDB _id field, or another field with a unique index, in itself ensures you can't insert multiple documents with the same order ID (this is enforced by MongoDB). Also note that this works even if you have a cluster with multiple MongoDB instances, as writes (including inserts) always happen on the master node. So nothing else is required for this to work in a multi-server cluster environment.
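For the unique-index route, mgo exposes Collection.EnsureIndex. A minimal sketch, assuming uniqueness should be enforced on asset_id (field name taken from the question); note that a plain unique index also blocks re-listing an asset after its offer completes, so a real setup may need a compound or partial index, which is out of scope here:

// Run once at startup (or in a migration), not on every request.
index := mgo.Index{
    Key:    []string{"asset_id"},
    Unique: true,
}
if err := r.DB.C(sellOfferCollectionName).EnsureIndex(index); err != nil {
    return err
}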
Eventually, after investigating the bug closely, we found out that the reason was that each user request was handled in its own goroutine. A lot of requests means a lot of concurrent goroutines, so our validator (which checks whether the offer is already in the collection) couldn't find it, as it was not in Mongo yet. So, in the end, we decided to use Redis as our validator.
Here is a short implementation:
incr, err := redisClient.Incr(offer.AssetId).Result()
if err != nil {
    return err
}
if incr > 1 {
    return errors.New("ASSET_ALREADY_ON_SALE")
}
redisClient.Expire(offer.AssetId, time.Second*10)
Hope it will help someone facing the same issue.
Link on implementation description:

MongoDB (Mgo v2) Projection returns parent struct

I have a Building object with an array of Floor objects inside it.
When projecting, my goal is to return or count the number of Floor objects inside a Building object after matching the elements accordingly. The code is as follows:
Objects:
type Floor struct {
    // Binary JSON Identity
    ID bson.ObjectId `bson:"_id,omitempty"`
    // App-level Identity
    FloorUUID string `bson:"f"`
    // Floor Info
    FloorNumber int `bson:"l"`
    // Units
    FloorUnits []string `bson:"u"`
    // Statistics
    Created time.Time `bson:"y"`
}

type Building struct {
    // Binary JSON Identity
    ID bson.ObjectId `bson:"_id,omitempty"`
    // App-level Identity
    BldgUUID string `bson:"b"`
    // Address Info
    BldgNumber  string `bson:"i"` // Street Number
    BldgStreet  string `bson:"s"` // Street
    BldgCity    string `bson:"c"` // City
    BldgState   string `bson:"t"` // State
    BldgCountry string `bson:"x"` // Country
    // Building Info
    BldgName      string `bson:"w"`
    BldgOwner     string `bson:"o"`
    BldgMaxTenant int    `bson:"m"`
    BldgNumTenant int    `bson:"n"`
    // Floors
    BldgFloors []Floor `bson:"p"`
    // Statistics
    Created time.Time `bson:"z"`
}
Code:
func InsertFloor(database *mgo.Database, bldg_uuid string, fnum int) error {
    fmt.Println(bldg_uuid)
    fmt.Println(fnum) // Floor Number
    var result Floor  // result := Floor{}
    database.C("buildings").Find(bson.M{"b": bldg_uuid}).Select(
        bson.M{"p": bson.M{"$elemMatch": bson.M{"l": fnum}}}).One(&result)
    fmt.Printf("AHA %s", result)
    return errors.New("x")
}
It turns out that no matter what I try, the query returns a Building object, not a Floor object. What changes do I need to make in order to have the query fetch and count Floors and not Buildings?
This is done to check whether a Floor already exists inside a Building before insertion. If there's a better approach, I'll replace mine with it!
Thanks!
You are querying for a Building document, so Mongo returns that to you even though you try to mask some of its fields using projection.
I don't know of a way to count the number of elements of a Mongo array in a find query, but you can use the aggregation framework, where the $size operator does exactly this. So you should send a query like this to Mongo:
db.buildings.aggregate([
    {
        "$match": {
            "_id": buildingID,
            "p": {
                "$elemMatch": { "l": fNum }
            }
        }
    },
    {
        "$project": {
            nrOfFloors: {
                "$size": "$p"
            }
        }
    }
])
Which in Go would look like:
result := []bson.M{}
match := bson.M{"$match": bson.M{"b": bldg_uuid, "p": bson.M{"$elemMatch": bson.M{"l": fnum}}}}
count := bson.M{"$project": bson.M{"nrOfFloors": bson.M{"$size": "$p"}}}
operations := []bson.M{match, count}
pipe := sess.DB("mgodb").C("buildings").Pipe(operations)
pipe.All(&result)
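To turn this into the existence check from the question, you could then inspect the result slice after running the pipeline; a small sketch (fmt is assumed to be imported, and error handling on pipe.All is omitted as above):

if len(result) > 0 {
    // The building matched and contains at least one floor with that number.
    fmt.Println("number of floors:", result[0]["nrOfFloors"])
} else {
    // Either no building with that uuid exists, or it has no floor with that number yet.
    fmt.Println("floor not found, safe to insert")
}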

Subscribing to Meteor.Users Collection

// in server.js
Meteor.publish("directory", function () {
return Meteor.users.find({}, {fields: {emails: 1, profile: 1}});
});
// in client.js
Meteor.subscribe("directory");
I now want to query the directory listings from the client, e.g. directory.findOne() from the browser's console. // Testing purposes
Doing directory = Meteor.subscribe('directory') or directory = Meteor.Collection('directory') and then calling directory.findOne() doesn't work, but when I do directory = new Meteor.Collection('directory') it works and returns undefined, and I bet it CREATES a mongo collection on the server, which I don't like, because the users collection already exists and this points to a new collection rather than the users collection.
NOTE: I don't want to mess with how the Meteor.users collection handles its functions... I just want to retrieve some specific data from it using a different handle that will only return the specified fields and not override its default behavior.
Ex:
Meteor.users.findOne() // will return the currently logged-in user's data
directory.findOne() // will return different fields taken from the Meteor.users collection
If you want this setup to work, you need to do the following:
Meteor.publish('thisNameDoesNotMatter', function () {
  var self = this;
  var handle = Meteor.users.find({}, {
    fields: {emails: 1, profile: 1}
  }).observeChanges({
    added: function (id, fields) {
      self.added('thisNameMatters', id, fields);
    },
    changed: function (id, fields) {
      self.changed('thisNameMatters', id, fields);
    },
    removed: function (id) {
      self.removed('thisNameMatters', id);
    }
  });
  self.ready();
  self.onStop(function () {
    handle.stop();
  });
});
Now on the client side you need to define a client-side-only collection:
directories = new Meteor.Collection('thisNameMatters');
and subscribe to the corresponding data set:
Meteor.subscribe('thisNameDoesNotMatter');
This should work now. Let me know if you think this explanation is not clear enough.
EDIT
Here, the self.added/changed/removed methods act more or less as an event dispatcher. Briefly speaking they give instructions to every client who called
Meteor.subscribe('thisNameDoesNotMatter');
about the updates that should be applied to the client's collection named thisNameMatters, assuming that this collection exists. The name, passed as the first parameter, can be chosen almost arbitrarily, but if there's no corresponding collection on the client side, all the updates will be ignored. Note that this collection can be client-side-only, so it does not necessarily have to correspond to a "real" collection in your database.
Returning a cursor from your publish method is only a shortcut for the above code, with the only difference that the name of the actual collection is used instead of our thisNameMatters. This mechanism actually allows you to create as many "mirrors" of your datasets as you wish. In some situations this might be quite useful. The only problem is that these "collections" will be read-only (which totally makes sense BTW), because if they're not defined on the server, the corresponding insert/update/remove methods do not exist.
The collection is called Meteor.users and there is no need to declare a new one on either the server or the client.
Your publish/subscribe code is correct:
// in server.js
Meteor.publish("directory", function () {
return Meteor.users.find({}, {fields: {emails: 1, profile: 1}});
});
// in client.js
Meteor.subscribe("directory");
To access documents in the users collection that have been published by the server you need to do something like this:
var usersArray = Meteor.users.find().fetch();
or
var oneUser = Meteor.users.findOne();