Create an object from multiple database collections (SailsJS, MongoDB, WaterlineJS) - mongodb

I'm very new to Sails and NoSQL databases and I'm having trouble gathering information together from different collections. Basically, I need to build an object of items from one collection and then use a foreign key stored in that collection to add data from a separate collection, so the whole thing can be sent as one object.
Currently I find all the items in a collection called Artwork, then use a for loop to iterate through the artworks. I need to use an id stored in each artwork to query a collection called Contacts, but having successfully found the contact, I am unable to pass it back out of the callback to add it to the artwork object.
find: function ( req, res, next ) {
  Artwork.find().done( function ( err, artwork ) {
    // Error handling
    if (err) {
      return console.log(err);
    } else {
      for ( x in artwork ) {
        var y = artwork[x]['artistID'];
        // Get the artist's name
        Contact.find(y).done( function( err, contact ) {
          // Error handling
          if ( err ) {
            return console.log(err);
          // The Artist was found successfully!
          } else {
            var artist = contact[0]['fullName'];
          }
        });
        artwork[x]['artistsName'] = artist;
      }
      res.send(artwork);
    }
  });
}
The result of the above code is an error telling me 'artist' is undefined. Is the variable not being passed outside the callback?
Any advice gratefully received.

Sails is about to release an update that will include associations. In the meantime, here's an answer for how you can accomplish it using async. https://stackoverflow.com/a/20050821/1262998
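A minimal sketch of that approach, assuming the async library (npm install async, required at the top of the controller) and the callback-style Waterline API shown in the question; artistID, fullName and artistsName are the attribute names from the question:
find: function (req, res) {
  Artwork.find().done(function (err, artworks) {
    if (err) { return res.send(500, err); }

    // Look up each artwork's contact, then respond once every lookup is done
    async.each(artworks, function (artwork, next) {
      Contact.findOne(artwork.artistID).done(function (err, contact) {
        if (err) { return next(err); }
        artwork.artistsName = contact ? contact.fullName : null;
        next();
      });
    }, function (err) {
      if (err) { return res.send(500, err); }
      res.send(artworks); // every artwork now carries artistsName
    });
  });
}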

Related

DynamoDB - How to upsert nested objects with updateItem

Hi, I'm a newbie to DynamoDB. Below is the schema of the DynamoDB table:
{
  "user_id": 1,          // partition key
  "dob": "1991-09-12",   // sort key
  "movies_watched": {
    "1": {
      "movie_name": "twilight",
      "movie_released_year": "1990",
      "movie_genre": "action"
    },
    "2": {
      "movie_name": "harry potter",
      "movie_released_year": "1996",
      "movie_genre": "action"
    },
    "3": {
      "movie_name": "lalaland",
      "movie_released_year": "1998",
      "movie_genre": "action"
    },
    "4": {
      "movie_name": "serendipity",
      "movie_released_year": "1999",
      "movie_genre": "action"
    }
  }
  ..... 6 more attributes
}
I want to insert a new item if the item (that user_id with dob) does not exist; otherwise, add the movies to the existing movies_watched map, but only if the movie is not already present in the map.
Currently, I am trying to use the update(params) method.
Below is my approach:
function getInsertQuery (item) {
  const exp = {
    UpdateExpression: 'set',
    ExpressionAttributeNames: {},
    ExpressionAttributeValues: {}
  }
  Object.entries(item).forEach(([key, value]) => {
    if (key !== 'user_id' && key !== 'dob' && key !== 'movies_watched') {
      exp.UpdateExpression += ` #${key} = :${key},`
      exp.ExpressionAttributeNames[`#${key}`] = key
      exp.ExpressionAttributeValues[`:${key}`] = value
    }
  })
  let i = 0
  Object.entries(item.movies_watched).forEach(([key, value]) => {
    exp.UpdateExpression += ` movies_watched.#uniqueID${i} = :uniqueID${i},`
    exp.ExpressionAttributeNames[`#uniqueID${i}`] = key
    exp.ExpressionAttributeValues[`:uniqueID${i}`] = value
    i++
  })
  exp.UpdateExpression = exp.UpdateExpression.slice(0, -1)
  return exp
}
The above method builds an update expression, with expression attribute names and values, for all top-level attributes as well as the nested ones (using a document path).
It works well if the item already exists, updating the movies_watched map, but it throws an exception when the item does not exist yet and has to be inserted. Below is the exception:
The document path provided in the update expression is invalid for update
However, I am still not sure how to check for duplicate movies in the movies_watched map.
Could someone guide me in the right direction? Any help is highly appreciated!
Thanks in advance.
There is no way to do this, given your model, without reading the item from DynamoDB before the update (at which point the process is trivial). If you don't want to impose this additional read capacity on your table for updates, then you would need to redesign your data model:
You can change movies_watched to be a Set holding references to movies. The caveat is that a Set can contain only Numbers or Strings, so you would store the movie id or name, or keep the full data as JSON Strings in the Set and parse them back into JSON on read. With a Set you can perform the ADD operation on the movies_watched attribute (see the sketch after this answer). https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.UpdateExpressions.html#Expressions.UpdateExpressions.ADD
You can go with a single-table design approach and store the watched movies as separate items (PK: userId, SK: movie_id). To get a user you would perform a Query specifying only PK = userId; you will get an item collection where one item is your user record and the others are the watched movies. If you are new to DynamoDB and are learning the ropes, I would suggest going with this approach. https://www.alexdebrie.com/posts/dynamodb-single-table/
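For the Set approach (the first option above), a minimal sketch using the AWS SDK for JavaScript v2 DocumentClient might look like this; the table name, key values and movie names are taken from the question and are purely illustrative:
var AWS = require('aws-sdk');
var docClient = new AWS.DynamoDB.DocumentClient();

var params = {
  TableName: 'users',                        // assumed table name
  Key: { user_id: 1, dob: '1991-09-12' },
  // ADD on a Set creates the attribute if it is missing and silently ignores
  // values that are already present, which also covers the duplicate check
  UpdateExpression: 'ADD movies_watched :movies',
  ExpressionAttributeValues: {
    ':movies': docClient.createSet(['twilight', 'harry potter'])
  }
};

docClient.update(params, function (err, data) {
  if (err) console.error(err);
  else console.log('updated', data);
});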

How do I skip duplicate documents in a bulk insert and ignore duplicates on a specific field in C#

I need to insert many documents and ignore the duplicates.
Doc format:
_id:5b84e2588aceda018a974450
Name:"Jeff M"
Email:"jeff.m#xtrastaff.com"
Type:"Client"
UserId:Binary('Rw+KMGpSAECQ3gwCtfoKUg==')
UserImage:null
I want to check for duplicates using the Email field when inserting: insert only if a document with that email does not already exist.
To prevent duplicates from being inserted you need a unique index, which can be created in C# code:
public void CreateIndex()
{
    var options = new CreateIndexOptions() { Unique = true };
    var field = new StringFieldDefinition<Model>(nameof(Model.Email));
    var indexDefinition = new IndexKeysDefinitionBuilder<Model>().Ascending(field);
    Collection.Indexes.CreateOne(indexDefinition, options);
}
Then you can insert multiple documents using a BulkWrite operation. The problem is that, by default, processing stops when the first insert operation fails (which happens when you try to insert a duplicate) and you get an exception in C#. You can change that by setting the ordered parameter to false, which means all inserts are processed "in parallel" and you get a single exception that aggregates all failed inserts. That exception is of type MongoBulkWriteException and you can catch it. So you can try the following method:
public void InsertData(List<Model> data)
{
    var writeOps = data.Select(x => new InsertOneModel<Model>(x));
    try
    {
        Collection.BulkWrite(writeOps, new BulkWriteOptions() { IsOrdered = false });
    }
    catch (MongoBulkWriteException ex)
    {
        // will be thrown when there were any duplicates
    }
}

Update already tracked entities

I want to update the itemsToUpdate collection.
This collection has already been used in a query, so the resulting entities are already tracked in the context's Local property.
What is the most efficient way of overriding properties of the entities in context.items.Local with the values from the itemsToUpdate collection?
private async Task<IEnumerable<item>> GetitemsAsync(IEnumerable<item> itemIds)
{
    return await context.items.Where(t => itemIds.Select(x => x.Id).Contains(t.Id)).ToListAsync();
}

public async Task Update(...)
{
    // Update
    var queryUpdateitems = await GetitemsAsync(itemsToUpdate);
    bool canUpdate = queryUpdateitems.All(t => t.UserId == userId);
    if (!canUpdate)
    {
        throw new NotAuthorizedException();
    }
    else
    {
        // update here the itemsToUpdate collection
    }
    await context.SaveChangesAsync();
}
In your case, you know that you have to update all these items; you just want to make sure that the current user is allowed to update them all (by comparing Item.UserId). Instead of fetching all the existing items from the database to make the check, you can ask the database for the result of the check and then send the update only if the check passes.
var itemIds = itemsToUpdate.Select(x => x.Id).ToList();
var canUpdate = await db.items.Where(b => itemIds.Contains(b.Id)).AllAsync(t => t.UserId == userId);
if (canUpdate)
{
    db.UpdateRange(itemsToUpdate);
}
else
{
    throw new NotAuthorizedException();
}
await db.SaveChangesAsync();
Here, you have to build the list of itemIds first, because EF cannot inline a list of complex items in a query and would otherwise evaluate it on the client, which means EF fetches the whole table. The same is true for your GetitemsAsync method: it also queries the whole table. Consider building itemIds locally in that method too.
Once you pass a List<int> into the method, EF will happily inline it in the query; for the canUpdate check it sends a single query to the database and fetches just true/false. Then you can use UpdateRange directly, since there are no more tracked records. Because it does not fetch all items from the database, it will be faster too.

How to get the auto Id after an upsert on a persisted model in LoopBack?

I have some models generated from a PostgreSQL db using loopback-connector-postgresql. The Id column of these models is an auto-incremented integer column in the PostgreSQL db.
1) I have a remote method added on one of the persisted models, where I perform a simple update or insert (upsert).
Car.CreateOrUpdateCar = function (carobj, req) {
  Car.upsert(carobj, function (err, car) {
    if (err)
      console.log(err);
    else {
      req(err, car);
    }
  });
};
2) I have added a remote hook to execute after this remote method.
Car.afterRemote('CreateOrUpdateCar', function (context, remoteMethodOutput, next) {
  // Remaining code goes here
  next();
});
3) I want to use the Id of the newly inserted row from step (1) in the remote hook mentioned in step (2).
I don't have much experience with PostgreSQL, but try it like this:
var carObj;
Car.CreateOrUpdateCar = function (carobj, req) {
  Car.upsert(carobj, function (err, car) {
    if (err)
      console.log(err);
    else {
      req(err, car); // car contains the final result after upserting, including its Id
      carObj = car;
    }
  });
};
Now you can get the id via carObj.id and use it wherever you want. I hope this helps.
You can access the generated id in the remote hook like this:
Car.afterRemote('CreateOrUpdateCar', function (context, remoteMethodOutput, next) {
  var genId = remoteMethodOutput.id || context.result.id;
  next();
});

How to return and update a table in bookshelf knex

I am using PostgreSQL, knex, and Bookshelf to query and update my users table. I would like to find all users who didn't sign in during a specific time and then update their numAbsences and numTardies fields.
However, it appears that when running a raw SQL query using bookshelf.knex, the result I get for users is an array of plain objects rather than an array of Bookshelf model objects, because when I try to use .save() on them I get the exception user.save is not a function.
Does anyone know how I can update the values in the database for these users? I've seen the update function, but I also need to return the users from absentUsers, which is why I currently select them.
// field indicates whether the student was late or absent
var absentUsers = function(field){
  // returns all users who did not sign in during a specific time
  if (ongoingClasses){
    return bookshelf.knex('users')
      .join('signed_in', 'signed_in.studentId', '=', 'users.id')
      .where('signed_in.signedIn', false)
      .select()
      .then(function(users){
        markAbsent(users, field);
        return users;
      });
  }
}

var markAbsent = function(users, field){
  users.forEach(function(user){
    user[field]++;
    user.save();
  })
}
I've solved my problem by using another SQL query in knex. It seems there is no way to run a raw SQL query and then use the standard Bookshelf methods, since the objects returned are not Bookshelf wrapper objects.
var absentUsers = function(field){
  // returns all users who did not sign in during a specific time
  if (ongoingClasses){
    return bookshelf.knex('users')
      .join('signed_in', 'signed_in.studentId', '=', 'users.id')
      .where('signed_in.signedIn', false)
      .select()
      .then(function(users){
        markAbsent(users, field);
      });
  }
}

var markAbsent = function(users, field){
  users.forEach(function(user){
    var updatedUser = {};
    updatedUser[field] = user[field] + 1;
    bookshelf.knex('users')
      .where('users.id', user.id)
      .update(updatedUser)
      .then(function(){
      });
  });
}
With your code, bookshelf.knex('users') leaves the "Bookshelf world" and enters the "raw knex world"; knex alone doesn't know about your Bookshelf wrapper objects.
You may use Bookshelf's query method to get the best of both worlds.
Assuming your model class is User, your example would look approximately like this:
User.query(function(qb) {
  qb.join('signed_in', 'signed_in.studentId', 'users.id')
    .where('signed_in.signedIn', false);
})
.fetchAll()
.then(function(bookshelfUserObjects) {
  /* mark absent */
  return bookshelfUserObjects.invokeThen('save'); // <1>
});
<1> invokeThen: calls the given model method on each instance in the collection.
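For completeness, the /* mark absent */ step could look roughly like the sketch below; field is the counter name from the question (numAbsences or numTardies), get/set are standard Bookshelf model methods, and models is the collection's underlying array of model instances:
User.query(function(qb) {
  qb.join('signed_in', 'signed_in.studentId', 'users.id')
    .where('signed_in.signedIn', false);
})
.fetchAll()
.then(function(bookshelfUserObjects) {
  // increment the counter on each tracked Bookshelf model...
  bookshelfUserObjects.models.forEach(function(user) {
    user.set(field, (user.get(field) || 0) + 1);
  });
  // ...then persist every model in the collection
  return bookshelfUserObjects.invokeThen('save');
});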