I have a tree-like document model like the one in the image below. Is it possible to create a unique index that spans the different layers? For example, in the model below I have an index field at level 1, and further index fields in the objects of the l2 and l3 arrays. I am trying to create an index where the index values of all layers together are unique: if an index value of 1 is used anywhere, it cannot appear again in any child document or in any other document. I searched for a solution but couldn't find one. Please help me with this issue. Thanks in advance.
I'm assuming you are using Node.js and Mongoose, since you did not specify. You can get an ObjectId for every level by using a separate schema for each nested object, as in the example below.
const mongoose = require('mongoose');
const { Schema } = mongoose;

const level2Schema = new Schema({
  unit: {
    type: String,
    required: true
  },
  price: {
    type: Number,
    required: true
  }
});

const level1Schema = new Schema({
  feildx: {
    type: String,
    required: true
  },
  anyNameArray: {
    type: [level2Schema],
    required: true
  }
});

const MainSchema = new Schema(
  {
    field1: String,
    field2: String,
    anyNameArray: {
      type: [level1Schema],
      default: [],
      required: true
    },
  },
  { timestamps: true }
);
This will create a unique ObjectId for every nested document.
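As a quick illustration (a sketch, assuming the root schema is registered under a hypothetical model name Main), every level of nesting ends up with its own ObjectId:

const Main = mongoose.model('Main', MainSchema);

const doc = new Main({
  field1: 'a',
  field2: 'b',
  anyNameArray: [
    { feildx: 'x', anyNameArray: [{ unit: 'kg', price: 10 }] }
  ]
});

console.log(doc._id);                                 // ObjectId of the root document
console.log(doc.anyNameArray[0]._id);                 // ObjectId of the level-1 subdocument
console.log(doc.anyNameArray[0].anyNameArray[0]._id); // ObjectId of the level-2 subdocument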
I came across a problem while implementing user deletion functionality.
Suppose I have a model:
const UserSchema = new mongoose.Schema(
  {
    email: { type: String, required: true, unique: true },
    name: { type: String, required: true },
    password: { type: String, required: true },
    deleted: { type: Date, default: null }
  },
  { timestamps: true }
);
This clearly states that the email field has to be unique. However, I would like it to be unique only among the documents where deleted is null.
In other words, I would like to filter out the deleted users' records before checking whether an email is unique.
Are there any best practices for this?
Or should I just create a field called del-email and null out the email field, to avoid over-complication and preserve the data?
You can try a partial index with a unique constraint.
With a partial unique index you specify a filter expression, and the unique constraint is enforced only on the documents that match it:
UserSchema.index(
{ email: 1 },
{ unique: true, partialFilterExpression: { deleted: { $eq: null } } }
);
Note:
As noted in the query coverage documentation for partial indexes:
To use the partial index, a query must contain the filter expression (or a modified filter expression that specifies a subset of the filter expression) as part of its query condition.
User.find({ email: "something@mail.com", deleted: null });
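For illustration, here is a sketch of the behaviour this gives you, assuming a User model compiled from the schema above, and assuming the plain unique: true on the email path has been removed so that only the partial index applies:

async function demo() {
  // Ensure the partial unique index defined on the schema exists in MongoDB
  await User.syncIndexes();

  // A soft-deleted user's email is outside the partial index...
  await User.create({ email: 'a@mail.com', name: 'Old', password: 'x', deleted: new Date() });

  // ...so it can be reused by a new, active user
  await User.create({ email: 'a@mail.com', name: 'New', password: 'y' });

  // But a second active user with the same email violates the unique constraint
  await User.create({ email: 'a@mail.com', name: 'Dup', password: 'z' })
    .catch(err => console.log(err.code)); // 11000: duplicate key error
}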
I have a one-to-many relationship where a place can have multiple reviews. Here are the two schemas:
export const PlaceSchema = new mongoose.Schema({
name: { type: String, required: true, unique: true },
center: { type: [Number], required: true },
borders: { type: [], required: true },
reviews: [{ type: mongoose.Schema.Types.ObjectId, ref: 'Review' }]
});
export const ReviewSchema = new mongoose.Schema({
user: { type: String, required: true },
city: { type: mongoose.Schema.Types.ObjectId, ref: 'Place', required: true },
creation_date: { type: Date, required: true },
...
});
I have reviews with the correct place ID. But when I run a simple this.placeModel.find().populate('reviews').exec(), the reviews always come back as an empty array, even though the IDs seem to be fine, as visible here (place on the left, review on the right).
It's my first side project where I play with Mongo, so I don't really see what I'm missing.
Your query this.placeModel.find().populate('reviews').exec() works like this:
1. Find all place documents in the places collection.
2. For each place document, go through the reviews array, look up the document with the matching id in the reviews collection, and replace the array element with that review document.
3. Return the list of place documents with the reviews field populated with the review documents.
Hence, for this query you need to make sure that your place documents contain the correct ids of their review documents in the reviews field; having the correct place id in the review documents is not enough.
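One common way to keep that array in sync is to push the review's _id onto the place when the review is created. A sketch, assuming a reviewModel is injected alongside your placeModel (the model and place names here are hypothetical):

// Inside a service method:
async addReview() {
  const place = await this.placeModel.findOne({ name: 'Paris' }); // hypothetical place

  const review = await this.reviewModel.create({
    user: 'someUser',
    city: place._id,
    creation_date: new Date()
  });

  // Store the review's _id on the parent so populate('reviews') can resolve it
  await this.placeModel.updateOne(
    { _id: place._id },
    { $push: { reviews: review._id } }
  );

  // Now the reviews come back populated instead of as an empty array
  return this.placeModel.find().populate('reviews').exec();
}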
I created simple dynamic fields in React-Redux with a plus button to add as many fields as I want (hobbies) to an already existing form. I'm using MongoDB as the database, and I get an error telling me that my fields/data don't have IDs.
So how can I generate IDs for my data?
Below is my model with FeathersJS. As you can see, this is how I added the hobbies array to the existing model called myService. I can see that my hobbies are created in Mongo (using Robo 3T), which is great, but I'm having difficulty reusing them (the hobbies) in another component in Redux. I'm not sure if I should give IDs to these fields or create a new service just for them. I have never coded anything on the backend, so I'm confused. What's the rule for this kind of situation?
Any other suggestions would be helpful.
Warning in Redux: Each child in a list should have a unique "key" prop.
Error in the API: Cast to ObjectId failed for value at path "_id" for model "
const { Schema } = mongooseClient;

const myService = new Schema({
  type: { type: String, enum: VALID_TYPES, required: true },
  user: {
    type: mongooseClient.Schema.Types.ObjectId,
    ref: 'user',
    required: true
  },
  comment: String,
  hobbies: [{
    type: mongooseClient.Schema.Types.ObjectId,
    ref: 'hobbies',
    default: [],
    required: false
  }],
  date: {
    begin: { type: Date, default: Date.now },
    current: { type: Date, default: Date.now },
    end: { type: Date, required: true },
  },
}, {
  timestamps: true
});

return mongooseClient.model('myService', myService);
};
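As a side note (a sketch of one option, not taken from the question): if the hobbies only ever live inside myService documents, they can be embedded as subdocuments instead of ObjectId references. As in the first answer above, Mongoose then gives each hobby its own _id, which the React list can reuse as its "key" prop:

const hobbySchema = new Schema({
  name: { type: String, required: true } // hypothetical field; store whatever a hobby needs
});

// In myService, the hobbies field would then be declared as:
//   hobbies: { type: [hobbySchema], default: [] },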
I have searched for how to join two collections in MongoDB and found populate, but it does not work for my scenario. I am using Mongoose in Node.js. My schemas are below.
const CoordinateSchema = new mongoose.Schema({
  activationId: {
    type: mongoose.Schema.ObjectId,
    ref: 'Activation'
  },
  mac: {
    type: mongoose.SchemaTypes.String,
    required: true,
    set: toLower
  },
  t: { type: Date },
  userId: {
    type: mongoose.Schema.ObjectId,
    ref: 'User'
  }
});

const UserSchema = new mongoose.Schema({
  email: {
    type: mongoose.SchemaTypes.String,
    required: true,
    //unique: true,
    set: toLower
  },
  mac: {
    type: mongoose.SchemaTypes.String,
    required: true,
    unique: true,
    set: toLower,
    index: true
  },
  dob: {
    type: mongoose.SchemaTypes.Date,
  },
  gender: { type: mongoose.SchemaTypes.String, set: toLower },
  activations: [{
    activationId: {
      type: mongoose.Schema.ObjectId,
      ref: 'Activation'
    },
    userType: { type: mongoose.SchemaTypes.String, set: toLower },
    _id: false
  }]
});
I have thousands of records for a single activation in the coordinates collection.
My query needs to find the distinct mac values from the coordinates collection that match a given userType in the user collection.
If I use populate and then apply the filter on the result, the number of fetched records is not restricted, so the query takes a very long time because it still returns thousands of records.
I want to fetch only the coordinates that match a userType in the user collection.
So far I haven't found any efficient way to join the two collections and apply a where condition on them.
I want to know an efficient method to join two collections in MongoDB and apply a where condition on both collections.
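For reference, this kind of server-side join and filter is usually expressed with the aggregation pipeline's $lookup stage. A minimal sketch, assuming a Coordinate model compiled from CoordinateSchema, the default 'users' collection name for the User model, and a userType passed in as a parameter:

const Coordinate = mongoose.model('Coordinate', CoordinateSchema); // model name assumed

async function distinctMacsForUserType(userType) {
  return Coordinate.aggregate([
    // Join each coordinate with its user document
    {
      $lookup: {
        from: 'users',
        localField: 'userId',
        foreignField: '_id',
        as: 'user'
      }
    },
    { $unwind: '$user' },
    // Where-condition on the joined collection: keep coordinates whose user
    // has an activation with the given userType (tighten to a specific
    // activationId if needed)
    { $match: { 'user.activations.userType': userType } },
    // Distinct mac values among the remaining coordinates
    { $group: { _id: '$mac' } }
  ]);
}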
My collection has approximately 500 documents, and that will double in a few weeks.
How can I make fetching all the documents faster? I'm currently using db.registrations.find() so that all the documents are available for searching, sorting, and filtering. Using skip/limit makes the query return quickly, but then you can't search all the registrations for players, and that's necessary.
My schema:
var mongoose = require('mongoose');
var Schema = mongoose.Schema;

var playerSchema = new mongoose.Schema({
  first: {
    type: String,
    required: true
  },
  last: {
    type: String,
    required: true
  },
  email: {
    type: String,
    required: true
  },
  phone: {
    type: String,
    required: true
  },
  address: {
    address: String,
    city: String,
    state: String,
    zip: String,
    country: {
      type: String,
      "default": "USA"
    },
    adult: Boolean
  }
});

var registrationsSchema = new mongoose.Schema({
  event: {
    type: String,
    required: true
  },
  day: {
    type: String,
    required: true
  },
  group: {
    type: String,
    required: true
  },
  field: {
    type: String,
  },
  price: {
    type: Number,
    required: true
  },
  players: [playerSchema],
  net: Number,
  team: {
    type: Number,
    min: 0,
    max: 7,
    "default": null
  },
  notes: String,
  paymentID: {
    type: String,
    required: true,
    "default": "VOLUNTEER"
  },
  paymentStatus: {
    type: String,
    "default": "PAID"
  },
  paymentNote: String,
  // users : [userSchema],
  active: {
    type: Boolean,
    default: true
  },
  users: [{
    type: Schema.Types.ObjectId,
    ref: 'User'
  }],
  createdOn: {
    type: Date,
    "default": Date.now
  },
  updatedLast: {
    type: Date
  }
});
mongoose.model('Registration', registrationsSchema);
Loading 1,000 records from MongoDB with Mongoose is not a big deal. I have done it in the past (2-3k records) and it worked well, as long as I respected these two rules:
1. Don't load all the Mongoose machinery: use lean queries.
With lean(), the query skips the Mongoose methods and attributes and returns just the plain data of your objects. You can't use .save() or other document methods on the results, but they are much faster to load.
2. Use streams to load your data.
Streams are a good way to load a large dataset with Node.js/Mongoose. The data is read block by block from MongoDB and sent to your application as it arrives. This avoids the typical pattern:
- I wait 2 seconds for my data while my server is idle;
- then my server runs at 100% CPU for 2 seconds processing the data while the database is idle.
With streams, in this example, the total time is about 2s instead of 2 + 2 = 4s.
To load data from a stream with Mongoose, use the .cursor() function to turn your query into a Node.js stream.
Here is an example that loads all your players quickly:
const allPlayers = [];
const cursor = Player.find({}).lean().cursor();
cursor.on('data', function(player) { allPlayers.push(player); });
cursor.on('end', function() { console.log('All players are loaded here'); });
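For reference, on recent Mongoose versions the same cursor can also be consumed with for await...of. A sketch, assuming the Registration model from the question (which embeds the players):

const mongoose = require('mongoose');
const Registration = mongoose.model('Registration'); // registered by the schema above

async function loadRegistrations() {
  const registrations = [];
  // lean() skips Mongoose document hydration; cursor() streams one document at a time
  for await (const registration of Registration.find({}).lean().cursor()) {
    registrations.push(registration);
  }
  console.log(`Loaded ${registrations.length} registrations`);
  return registrations;
}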
You can achieve your objective in the following ways.
By default, when you query for Mongoose documents, they are loaded with all of their attributes and the other metadata they require (i.e. with lean = false). If you use the lean() function, the query returns only plain JavaScript objects, without the getter and setter methods, so you can fetch all the documents very fast and get much better performance. That is the magic lean() works in the background.
Another suggestion, as a rule of thumb: maintain proper indexes on each collection, according to your query patterns, to get good performance while querying.
Hope this helps!
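To make that concrete, indexes are declared on the schema before the model is compiled. The fields below are illustrative assumptions about the query patterns, not something stated in the question:

// Declared before mongoose.model('Registration', registrationsSchema) is called
registrationsSchema.index({ event: 1, day: 1 });    // hypothetical: filter registrations by event and day
registrationsSchema.index({ 'players.email': 1 });  // hypothetical: look up a registration by a player's email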