I have working code below that inserts data if it does not exist (but does not update it if it does exist). In the implementation below I am looping the upsert, and it works just fine.
My question is: how do I get only the newly inserted data, excluding data that already exists? Do you have an idea how to achieve this as concisely as possible?
I did some research and found a possible GitHub solution, but I don't see how it helps, because it also returns data even when it already exists.
this.data = await prisma.$transaction(
  temp.map((provider) =>
    prisma.provider.upsert({
      where: {
        user_id_api_key: {
          user_id: provider.user_id,
          api_key: provider.api_key
        }
      },
      create: provider,
      update: {}
    })
  )
)
console.log(this.data) // it still returns data even if it already exists
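For context, one workaround I'm considering is to look up which composite keys already exist before running the upserts and then diff afterwards, but I'm hoping for something shorter. This is only a sketch; temp and the user_id_api_key compound key are the same ones used in the code above:

// before running the upserts above, look up which composite keys already exist
const existing = await prisma.provider.findMany({
  where: {
    OR: temp.map((provider) => ({
      user_id: provider.user_id,
      api_key: provider.api_key
    }))
  },
  select: { user_id: true, api_key: true }
});
const existingKeys = new Set(existing.map((p) => `${p.user_id}:${p.api_key}`));

// ...run the $transaction with the upserts here...

// anything whose key was not already present must have been newly inserted
const newlyInserted = this.data.filter(
  (p) => !existingKeys.has(`${p.user_id}:${p.api_key}`)
);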
My app isn't letting me create more than one profile for some reason. Here's the setup in the service file:
// This finds the profile if it exists
async getProfile(user) {
  let profile = await dbContext.Profile.findOne({
    email: user.email
  });
  profile = await createProfileIfNeeded(profile, user);
  await mergeSubsIfNeeded(profile, user);
  return profile;
}

// This is supposed to create one if one doesn't exist
async function createProfileIfNeeded(profile, user) {
  if (!profile) {
    profile = await dbContext.Profile.create({
      ...user,
      subs: [user.sub]
    });
  }
  return profile;
}
It works for the first user, but when I make another, I get the error:
{"error":{"message":"MongoError: E11000 duplicate key error collection: TownMiner.profiles index: info.subs_1 dup key: { info.subs: null }","status":400},"url":"/api/profile"}
What's confusing is that the subs are set via Auth0. When I look at it with a breakpoint in the server, it shows all the info there. Also, when I look in my MongoDB collections, nowhere does it say that any of the values are "null". I've used this same setup for a few projects now and they've all worked perfectly (and this new project is cloned from the same template). I also made sure that the sub info is all different, and it is.
This is the MongoDB collection:
_id: ObjectId("***")
subs: Array
  0: "auth0|***dda6a"
  1: "auth0|***aa288"
name: "kevin#test.com"
picture: "https://s.gravatar.com/avatar/c6788456e2639d2d10823298cc219aaf?s=480&r..."
email: "kevin#test.com"
createdAt: 2020-08-07T21:23:05.867+00:00
updatedAt: 2020-08-17T17:24:05.583+00:00
__v: 1
I've looked at the other answers for similar questions on here but couldn't quite find where it fit into this project. Any help would be great. Thanks!
There is a unique index in the TownMiner.profiles collection on {info.subs:1}.
That sample document doesn't include an info field, so the value entered in the index for that document would be null.
Since the index is tagged unique, the mongod will not permit you to insert any other document that would also be entered into the info.subs index using null.
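If you want to confirm or fix this from the mongo shell, a sketch might look like the following (the index name comes from the error message; whether you want to recreate it at all depends on your schema):

// list the indexes on the collection to see the stray one
db.profiles.getIndexes()

// drop the unique index that was created on info.subs
db.profiles.dropIndex("info.subs_1")

// if a unique index is actually wanted, a sparse one skips documents
// that don't have the field, so they aren't all indexed under null
db.profiles.createIndex({ "info.subs": 1 }, { unique: true, sparse: true })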
Turns out the error was because I went on the MongoDB site and manually added a collection, and probably set it up wrong. I deleted it and let my app build the collection itself, and it seems to be working fine. Thank you for taking time to help! Always appreciated!
I am attempting to use pre('findOneAndUpdate') to update the icon attribute of the Meeting document. The update is based on the pre-existing value of the yearlymeeting attribute (see below).
Because pre and post save() hooks are not executed on update(), I seem to be unable to access the original document at all. Yet this is critical for the operation I'm trying to perform. Is there any way around this?
For example, I am able to accomplish my purpose on pre('save'), like so:
meetingSchema.pre('save', function(next) {
  const yearlymeetingSlug = this.yearlymeeting[0].toLowerCase().replace(/[^A-z0-9]/g, '');
  this.icon = `${yearlymeetingSlug}.png`;
  next();
});
What I would like to be able to do is something like this:
meetingSchema.pre('findOneAndUpdate', function(next) {
  const yearlymeetingSlug = originalDocument.yearlymeeting[0].toLowerCase().replace(/[^A-z0-9]/g, '');
  this.icon = `${yearlymeetingSlug}.png`;
  next();
});
I understand that this in pre(findOneAndUpdate) refers to the query, rather than the stored document itself. Is there any way to access the document, so that I can update icon based on the stored value of yearlymeeting?
tl;dr
Not possible via middleware. Query for the doc first, and then separately update a specific version of the doc to prevent race conditions.
You can't do it the way you're trying, according to this issue on the Mongoose GitHub (from the main dev):
By design - the document being updated might not even be in the server's memory. In order to do that, mongoose would have to do a findOne() to load the document before doing the update(), which is not acceptable.
The design is to enable you to manipulate the query object by adding or removing filters, update params, options, etc. For instance, automatically calling .populate() with find() and findOne(), setting the multi: true option by default on certain models, access control, and other possibilities.
findOneAndUpdate() is a bit of a misnomer, it uses the underlying mongodb findAndModify command, it's not the same as findOne() + update(). As a separate operation, it should have its own middleware.
Following this, there are no other suggestions in the issue thread to access the original document inside of the middleware itself.
What I've seen done (and what I've had to do many times myself) is to simply query for the document before updating it. This could, of course, lead to a race condition depending on who is updating the doc and when, but you can fix that by also querying for a specific version of the document -- a sort of "optimistic locking":
let meeting = yield Meeting.findOne({}).exec()
let update = {}
// ... some conditional logic to figure out which icon to set
update.icon = // whatever
yield Meeting.update({ _id: meeting._id, version: meeting.version }, update)
This is of course assuming you have a "version" field in your schema. This sort of locking will prevent you from updating an old version of the doc. If you're gonna use this kind of versioning, you'll also probably want to add some middleware that updates the version of a doc any time the doc is updated/saved.
You can also use a more naïve implementation, where you don't use locking, which may be fine in your specific business case, as long as you're aware of the possibility of a race condition, and the risks.
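If you go the versioned route, a minimal sketch of the version-bumping middleware mentioned above might look like this (assuming the schema declares something like version: { type: Number, default: 0 }; the field name is my choice):

meetingSchema.pre('save', function (next) {
  // bump the version on every save so a stale copy can no longer match
  // the { _id: ..., version: ... } filter in the conditional update above
  this.version += 1;
  next();
});

For query-based updates like the Meeting.update() call above, you'd either add similar query middleware or include $inc: { version: 1 } in the update object itself.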
This may not be the best solution, but I did find a way to make it work. I used the controller rather than schema pre hooks. Here's what my update controller looks like now:
exports.updateMeeting = async (req, res) => {
  const _id = req.params.id;
  let meeting = await Meeting.findOneAndUpdate({ _id }, req.body, {
    new: true,
    runValidators: true
  });
  /* New Code: */
  const yearlymeetingSlug = meeting.yearlymeeting[0].toLowerCase().replace(/[^A-z0-9]/g, '');
  meeting.icon = `${yearlymeetingSlug}.png`;
  await meeting.save();
  req.flash('success', 'meeting successfully updated!');
  res.redirect(`/meetings/${meeting.slug}`);
};
I welcome your feedback on any problems you see with this solution.
I want to create a backend service which monitors a mongodb collection for new entries. As those are being created, I wish to run processing and update them.
I thought doing so with a Meteor service/app would be a wise idea because Meteor uses 'oplog tailing' which seems ideal for this purpose (I'd rather avoid polling if possible).
As such, I figured creating a minimal server-side-only app should solve it.
So basically, I need something along these lines:
if (Meteor.isServer) {
  MyCollection = new Mongo.Collection('myCollection');

  Meteor.publish('myCollectionPub', function () {
    return MyCollection.find({ some: criteria... });
  });

  // is there such a thing?
  Meteor.serverSideSubscribe('MyCollectionPub',
    function (newDocs) {
      // process/update newDocs
    });
}
According to the Meteor docs, I cannot use Meteor.subscribe() on the server (and indeed it crashes if I try).
Question is:
Are there ways of 'subscribing' to collection updates on the server?
The PeerLibrary server-autorun package (along with its dependency, reactive-mongo) will provide you with easy server-side observation of collections.
An alternative to tarmes's suggestion is the collection-hooks package; however, as pointed out by David Weldon, it will only trigger in the instance it is run in:
https://github.com/matb33/meteor-collection-hooks
MyCollection.after.insert(function (userId, doc) {
  // ...
});
If you need it to run even when another instance makes a change in the mongo database, you can observe a cursor that is returned from your collection:
MyCollection.find({ created_at: { $gt: some_current_time } }).observe({
  added: function (item) {
    // Alert code
  }
});
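To tie this back to the original goal of processing new entries and updating them, a minimal sketch might look like the following (the processed flag and processDoc function are assumptions, not part of the question):

MyCollection.find({ processed: { $exists: false } }).observe({
  added: function (doc) {
    // run whatever processing you need, then mark the document as done
    var result = processDoc(doc); // hypothetical processing function
    MyCollection.update(doc._id, { $set: { processed: true, result: result } });
  }
});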
I'm a couple hours new to Meteor and Mongo, coming from a Rails background and trying to understand how migrations work - or don't maybe?
I have a server/bootstrap.js file that I use to seed some data:
// if the database is empty on server start, create some sample data.
Meteor.startup(function () {
  if (Users.find().count() === 0) {
    var userData = [
      { name: 'Cool guy' },
      { name: 'Other dude' }
    ];
    for (var i = 0; i < userData.length; i++) {
      var userId = Users.insert({
        name: userData[i].name
      });
    }
  }
});
It seems like every time I want to change the database, say to add a new field, I have to run meteor reset to get it to pick up the changes.
But what happens if I create records or other data through the UI that I want to keep? In Rails, working with MySQL or PostgreSQL, I'd create a migration to create new fields without blowing away the entire database.
How does this work with Meteor and Mongo? Also thinking of the case of rolling out new changes from development to production. Thanks!
-- Update: 2013/09/24 --
Apparently, the schema-less nature of Mongo reduces or eliminates the need for migrations. In my case, modifying userData to add new fields won't work after it runs initially because of the Users count check - which is why I kept running meteor reset. I'll need to rethink my approach here and study up.
That said, there are projects out there that use migrations, like Telescope: https://github.com/SachaG/Telescope/blob/master/server/migrations.js
I also found the tutorial at http://try.mongodb.org/ useful.
First of all, your code is perfectly valid. And you know that.
mrt reset gives you a 'fresh', empty database (as mentioned already).
If you want to reset a particular collection, you can do it like so:
MyCollection.remove({});
But you have to understand the nature of NoSQL: there are no constraints on the data. It could be called NoREL (as in, not a relational database; source: Wikipedia).
MongoDB is also schema-less.
This means that you can use any field you want in your data. It is up to you (the programmer) to enforce specific constraints if you want any. In other words, there is no logic on the mongo side: it accepts any data you throw at it, just like Hubert OG demonstrated. Your code snippet could be:
// if the database is empty on server start, create some sample data.
Meteor.startup(function () {
  if (Users.find().count() === 0) {
    var userData = [
      { name: 'Cool guy' },
      { name: 'Other dude' },
      { nickname: 'Yet another dude' } // this line shows that mongo accepts whatever you throw at it
    ];
    for (var i = 0; i < userData.length; i++) {
      var userId = Users.insert({
        name: userData[i].name
      });
    }
  }
});
Source: http://www.mongodb.com/nosql
There is no need for a migration there. You only have to add the logic in your application code.
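For example, if you later add a field and want existing documents to pick it up without wiping the database, one approach is to backfill it in application code on startup. This is just a sketch, and the nickname field is only an illustration:

Meteor.startup(function () {
  // give every existing user a default nickname if the field is missing
  Users.update(
    { nickname: { $exists: false } },
    { $set: { nickname: '' } },
    { multi: true }
  );
});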
Note: To import/export a database, you can have a look at the mongo import/export docs, and maybe at the db.copyDatabase(origin, destination, hostname) function.
There are no migrations in Mongo: there is no schema! If you want to add a new field that was not there before, just do it and it will work. You can even have completely different documents in the same collection!
Items.insert({name: "keyboard", type: "input", interface: "usb"});
Items.insert({cherries: true, count: 5, unit: "buckets", taste: "awesome"});
This will just work. One of the main reasons to use NoSQL (and an advantage of Meteor over Rails) is that you don't have migrations to worry about.
Using mrt reset to change the db model is a terrible idea. What it actually does is a complete reset of the db: it removes all of your data! While it's sometimes useful in development, I bet it's not what you want in this case.
I'm doing the following using Mongoose:
that.model.update({_id: dao._id}, dao, { upsert: true }, cb);
Where dao is a mongoose representation containing (among other things) a couple of embedded documents. As a test I've deleted a couple of the embedded docs from the array before calling the update-method above.
The result is that the change to the array of embedded docs IS NOT persisted.
Anything I'm overlooking?
Hard to be certain w/o seeing more code, but if dao is a Mongoose model instance, you should be calling dao.save(cb); instead.
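To illustrate the difference, a sketch (assuming dao really was loaded through Mongoose and you mutated its embedded array in memory):

// instead of a replacement-style update...
// that.model.update({ _id: dao._id }, dao, { upsert: true }, cb);

// ...let the document itself persist its tracked changes,
// including removals from the embedded-document array
dao.save(cb);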
I solved the problem by doing something similar to what is proposed in the following issue: https://github.com/LearnBoost/mongoose/issues/571
For completeness, some background which led to the problem.
I'm using DDD repositories which are populated on app start. Under the hood this fetches Mongoose-objects (which are treated as DAOs in my situation) and translates them to domainobjects, which are cached in the repository. I need this separation between domainobjects and mongoose-objects, don't ask.
This means that getById, getAll and all other public interfaces of the repo work with domainobjects and not with mongoose-objects.
When doing things like add or update on the repo, this internally only updates the in-mem cache (which, again, only uses domainobjects instead of mongoose-objects).
Only when doing commit on the repo does the possibly changed collection of domainobjects get persisted. This is done by creating new Mongoose-objects instead of fetching existing mongoose-objects and updating those.
This is why I can't use dao.save(): when I save a freshly created mongoose-object while a mongoose-object with the same id may already exist in Mongo, it throws a duplicate id error.
A relevant snippet from my code illustrating the solution:
var dao = that.createDAO(domainobject);
//https://github.com/LearnBoost/mongoose/issues/571
// Convert the Model instance to a simple object using Model's 'toObject' function
// to prevent weirdness like infinite looping...
var upsertData = dao.toObject();
// Delete the _id property, otherwise Mongo will return a "Mod on _id not allowed" error
delete upsertData._id;
that.model.update({_id: dao._id}, upsertData, { upsert: true }, cb);