Does MongoDB have something similar to an Oracle trigger? When I insert a document, I want it to automatically update the fields createdTimestamp and modifiedTimestamp.
Example:
After inserting/updating this data:
{
'name': 'bobo',
'age': 17
}
the final data should end up as below; that is, the trigger maintains the time fields for me:
{
'name': 'bobo',
'age': 17,
'createdTimestamp': ISODate('2020-09-17T09:31:14.416+00:00'),
'modifiedTimestamp': ISODate('2020-09-17T09:31:14.440+00:00')
}
My current solution is to use $currentDate and $setOnInsert, and to update with upsert=true, code as below:
created_modified_timestamp_operation = {
  '$currentDate': {
    'modifiedTimestamp': true
  },
  '$setOnInsert': {
    'createdTimestamp': new Date()
  }
}
But with this solution I would need to modify a lot of data operations, so I want to know whether there is anything similar to an Oracle trigger, where I just write one trigger that watches for modifications to the database.
Thanks~~
Well, database-level triggers are only available in MongoDB Atlas. But you can create triggers if you are using something like Mongoose, since it supports pre/post save and update hooks; these are application-level rather than database-level, but they can be of great help. And yes, there are Change Streams; you can read about them as well.
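To illustrate the application-level approach, here is a minimal plain-JavaScript sketch of the hook idea. The `saveUser` wrapper and the in-memory `users` map are made up for this example (they stand in for a real collection); with Mongoose you would put the same logic in a `schema.pre('save', ...)` hook instead:

```javascript
// Application-level "trigger": a save wrapper that maintains
// createdTimestamp / modifiedTimestamp automatically.
// The in-memory Map stands in for a real MongoDB collection.
const users = new Map();

function saveUser(doc) {
  const now = new Date();
  const existing = users.get(doc.name);
  const saved = {
    ...doc,
    // set createdTimestamp only on first insert (like $setOnInsert)
    createdTimestamp: existing ? existing.createdTimestamp : now,
    // always refresh modifiedTimestamp (like $currentDate)
    modifiedTimestamp: now
  };
  users.set(doc.name, saved);
  return saved;
}
```

Because every write goes through the wrapper (or the Mongoose hook), callers never have to touch the timestamp fields themselves.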
We need to cache records for a service with a terrible API.
This service provides an API to query for data about our employees, but it does not tell us whether employees are new or have been updated, nor can we filter our queries by that information.
Our proposed solution to the problems this creates for us is to periodically (e.g. every 15 minutes) query all our employee data and upsert it into a Mongo database. Then, when we write to the MongoDb, we would like to include an additional property which indicates whether the record is new or whether the record has any changes since the last time it was upserted (obviously not including the field we are using for the timestamp).
The idea is, instead of querying the source directly, which we can't filter by such timestamps, we would instead query our cache which would include said timestamp and use it for a filter.
(Ideally, we'd like to write this in C# using the MongoDb driver, but more important right now is whether we can do this in an upsert call or whether we'd need to load all the records into memory, do comparisons, and then add the timestamps before upserting them....)
There might be a way of doing that, but how efficient it is remains to be seen. The update command in MongoDB can take an aggregation pipeline to perform an update operation. We can use the $addFields stage to add a new field denoting the update status, and we can use $function to compute its value. A short example is:
db.collection.update(
  { key: 1 },
  [
    {
      "$addFields": {
        "changed": {
          "$function": {
            "lang": "js",
            "args": [
              "$$ROOT",
              { "key": 1, "data": "somedata" }
            ],
            "body": "function(originalDoc, newDoc) { return JSON.stringify(originalDoc) !== JSON.stringify(newDoc) }"
          }
        }
      }
    }
  ],
  { upsert: true }
)
Here's the playground link.
Some points to consider here, are:
If the order of fields in the old and new versions of the doc is not the same, JSON.stringify will produce different strings, so unchanged documents can be incorrectly reported as changed.
The function specified in $function runs on the server side, so ideally it should be lightweight. If a large number of documents get upserted, it may become a bottleneck.
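The field-order caveat can be seen, and worked around, in plain JavaScript. The `stableStringify` helper below is a hypothetical illustration (not part of the driver): it sorts object keys before serializing, so documents with the same content compare equal regardless of field order:

```javascript
// JSON.stringify is sensitive to key order, so two documents with
// identical content can compare as "changed". Sorting keys first
// makes the comparison order-insensitive.
function stableStringify(obj) {
  if (obj === null || typeof obj !== 'object') return JSON.stringify(obj);
  if (Array.isArray(obj)) {
    return '[' + obj.map(stableStringify).join(',') + ']';
  }
  return '{' + Object.keys(obj).sort()
    .map(k => JSON.stringify(k) + ':' + stableStringify(obj[k]))
    .join(',') + '}';
}

// Same content, different field order:
const a = { key: 1, data: 'somedata' };
const b = { data: 'somedata', key: 1 };
```

A body like this could be used inside $function in place of the plain JSON.stringify comparison, at some extra server-side cost.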
I have a collection that with something like:
{
_id: 'abc',
_remoteId: 'xyz',
submitted_on: ISODate('2015-01-24T15:00:39.171Z')
}
Where _remoteId is a reference to another collection. What I need is to publish the latest of documents, grouped by _remoteId. I think I need to use the $group aggregate, but the only examples (example here) seem to not return a Cursor, and thus do not seem to be reactive. Is there a way to publish a group aggregate in such a way to be reactive, either by returning a Cursor directly or by observing on the server and setting up the updates manually?
The second code snippet in the example that you reference shows how you would create a reactive cursor. He missed returning it, though: at the end of previousInviteContacts, he should have returned:
return self.ready();
Other than that, to consume it, just subscribe to the previousInviteContacts publication and query the contacts collection.
I am new to Sails and MongoDB. Currently I am trying to implement a CRUD function using Sails, where I want to save user details in MongoDB. In the model I have the following attributes:
"id": {
  type: 'Integer',
  min: 100,
  autoincrement: true
},
attributes: {
  name: {
    type: 'String',
    required: true,
    unique: true
  },
  email_id: {
    type: 'EMAIL',
    required: false,
    unique: false
  },
  age: {
    type: 'Integer',
    required: false,
    unique: false
  }
}
I want to ensure that the _id is overridden with my values starting from 100 and is auto-incremented with each new entry. I am using the Waterline model, and when I call the API in DHC, I get the following output:
"name": "abc",
"age": 30,
"email_id": "abc@gmail.com",
"id": "5587bb76ce83508409db1e57"
Here the id given is the ObjectId. Can somebody tell me how to override the ObjectId with an integer starting from 100 that is auto-incremented with every new value?
Attention: a Mongo _id should be as unique as possible in order to scale well. The default ObjectId consists of a timestamp, a machine ID, a process ID and a random incrementing value. Keeping only the latter would make it collision-prone.
However, sometimes you badly want to prettify the never-ending ObjectId value (e.g. to show it in a URL after encoding). Then you should consider using an appropriate atomic increment strategy.
Overriding the _id example:
db.testSOF.insert({_id:"myUniqueValue", a:1, b:1})
Making an Auto-Incrementing Sequence:
Use a Counters Collection: basically a separate collection which keeps track of the last number in the sequence. Personally, I have found it more cohesive to store the findAndModify function in the system.js collection, although it lacks version control's capabilities.
Optimistic Loop
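A minimal sketch of the counters-collection idea follows. The `getNextSequence` name follows the pattern in the MongoDB docs, but the in-memory `counters` map below is a stand-in for the real collection: with MongoDB you would use findAndModify with `$inc` and `upsert: true` so the increment is atomic.

```javascript
// Counters-collection pattern: each named sequence remembers its last
// value, and every insert increments it to get the next integer _id.
// An in-memory Map stands in for the real counters collection here.
const counters = new Map();

function getNextSequence(name, start = 100) {
  const next = counters.has(name) ? counters.get(name) + 1 : start;
  counters.set(name, next);
  return next;
}

// Every new user gets the next integer _id, starting from 100.
function makeUser(name) {
  return { _id: getNextSequence('userid'), name: name };
}
```

In a real deployment the atomicity of findAndModify is what prevents two concurrent inserts from receiving the same sequence number; the Map version above is only safe because Node.js runs it single-threaded.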
Edit:
I've found an issue in which the owner of sails-mongo said:
MongoDb doesn't have an auto incrementing attribute because it doesn't
support it without doing some kind of manual sequence increment on a
separate collection or document. We don't currently do this in the
adapter but it could be added in the future or if someone wants to
submit a PR. We do something similar for sails-disk and sails-redis to
get support for autoIncremeting fields.
He mentions the first technique I listed in this answer: use a counters collection. In the same issue, lewins shows a workaround.
I've started developing an app recently and have finally got my node.js server communicating with my mongodb database.
I want to insert a bunch of JSON objects that look something like this:
{
'Username': 'Bob',
'longitude': '58.3',
'latitude': '0.3'
}
If this Object is inserted into myCollection, and then I try to insert an object again with the Username Bob, but with different coordinates, I want the latest 'Username': 'Bob' object to replace the earlier one. There can only be one object in myCollection with the 'Username': 'Bob' basically.
If this was a relational database I would make Bob a primary key or something, but I was wondering what the best way to do this with mongoDb would be. Should I use the update+upsert method? I tried that and it didn't seem to work!
Apologies if this seems like a silly question, but I am still new to all of this.
Yes, a simple update query with the upsert option should satisfy your use case:
db.collection.update(
{username:"Bob"},
{$set:{'longitude': '58.3', 'latitude': '0.3'}},
{ upsert: true}
)
When you run the above query the first time (i.e., Bob doesn't exist in the collection), a new document is created. But when you run it the second time with new values for lat/long, the existing document is updated with the new lat/long values.
You can also create a unique index on the username field to prevent multiple records for 'Bob' from being created even accidentally:
db.collection.ensureIndex( { "username": 1 }, { unique: true } )
EDIT:
db.collection.ensureIndex() is now deprecated and is an alias for db.collection.createIndex(), so use db.collection.createIndex() to create indexes.
I have a problem with some simple select-and-update logic:
task = Task.queueing.where(conditions).order(:created_at.asc).first
if task
task.set(:status=>2)
end
It's simple, right?
BUT, the problem is: 100+ requests come in at the same time.
So many clients get the same record, which is what I DON'T want.
In MySQL, I can do something like this to avoid duplicate loads:
rnd_str = 10000000 * rand
Task.update(status:rnd_str).limit(1) # this may be wrong code
task = Task.where(status:rnd_str).first
task.set(status:2)
render :json=>task
BUT HOW DO I UPDATE ONE RECORD WITH A QUERY in MongoMapper?
thx!
An update in MongoDB will only update one record by default. You have the option to update all the matching documents with the {multi:true} option, but by default an update modifies only one document.
So what you have to do is combine your query into your update statement so they execute atomically (just like you do in SQL), rather than performing two separate operations. In shell syntax, something like:
db.queueing.update({conditions}, {$set:{status:2}})
Now, if you also need the task document you updated to work with then you can use findAndModify to update and return the document in one atomic operation. Like this:
task = db.queueing.findAndModify({
  query: { /* your conditions */ },
  sort: { /* your ordering */ },
  update: { $set: { status: 2 } }
});