I'm using aldeed:simple-schema and here's the code:
Cities = new Mongo.Collection('cities');
Cities.insert({
name: 'Oslo'
});
Cities.insert({
name: 'Helsinki'
});
Contact = new SimpleSchema({
city: {
type: String,
allowedValues: Cities.find().map((e) => e.name) // ES6 arrow shown for readability; the actual code uses an ES5 anonymous function
}
});
What this does is explicitly bind the cities that currently exist in the Cities collection to the allowed values of a certain field in the Contact schema, so it's then impossible to store any value other than "Oslo" or "Helsinki".
But when a quickForm is rendered, the field (a select, actually) has no options.
If I rewrite the mapping function to
(e) => {
console.log(e);
return e.name;
}
then I get
I20150911-18:07:23.334(4)? { _id: 'GLAbPa6N4W4c9GZZh', name: 'Oslo' }
I20150911-18:07:23.333(4)? { _id: 'vb64X5mKpMbDNzCkw', name: 'Helsinki' }
in server logs, which makes me think the mapping function is correct.
At the same time, doing the same thing in the Mongo console returns the desired result:
production-d:PRIMARY> db.cities.find().map(function (e) { return e.name; });
[ "Oslo", "Helsinki" ]
What am I doing wrong? Is it impossible to fill simple-schema's allowedValues array at runtime?
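A minimal sketch of one commonly suggested workaround, assuming the select is rendered by AutoForm: instead of freezing allowedValues when the schema is defined, supply the options through an autoform options function, which is evaluated when the form renders (on the client this still requires the cities to be published and subscribed to):
Contact = new SimpleSchema({
  city: {
    type: String,
    autoform: {
      options: function () {
        // Evaluated at render time, so it picks up whatever is in Cities now
        return Cities.find().map(function (c) {
          return { label: c.name, value: c.name };
        });
      }
    }
  }
});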
Every tutorial I have found thus far has achieved pagination in GraphQL via Apollo, Relay, or some other magic framework. I was hoping to find answers in similar questions asked here, but they don't exist. I understand how to set up the queries, but I'm unclear as to how I would implement the resolvers.
Could someone point me in the right direction? I am using mongoose/MongoDB and ES5, if that helps.
EDIT: It's worth noting that the official site for learning GraphQL doesn't have an entry on pagination if you choose to use graphql.js.
EDIT 2: I love that there are some people who vote to close questions before doing their research whereas others use their knowledge to help others. You can't stop progress, no matter how hard you try. (:
Pagination in vanilla GraphQL
// Types used below come from the graphql package (graphql.js)
const {
  GraphQLInputObjectType, GraphQLObjectType, GraphQLList, GraphQLNonNull,
  GraphQLID, GraphQLInt, GraphQLString, GraphQLSchema
} = require('graphql');

// Pagination argument type to represent offset and limit arguments
const PaginationArgType = new GraphQLInputObjectType({
name: 'PaginationArg',
fields: {
offset: {
type: GraphQLInt,
description: "Skip n rows."
},
first: {
type: GraphQLInt,
description: "First n rows after the offset."
},
}
})
// Generates a paginated list type for a given GraphQLObjectType, used to represent a paginated response.
const PaginatedListType = (ItemType) => new GraphQLObjectType({
name: 'Paginated' + ItemType, // So that a new type name is generated for each item type, when we want paginated types for different types (eg. for Person, Book, etc.). Otherwise, GraphQL would complain saying that duplicate type is created when there are multiple paginated types.
fields: {
count: { type: GraphQLInt },
items: { type: new GraphQLList(ItemType) }
}
})
// Type for representing a single item. eg. Person
const PersonType = new GraphQLObjectType({
name: 'Person',
fields: {
id: { type: new GraphQLNonNull(GraphQLID) },
name: { type: GraphQLString },
}
})
// Query type which accepts pagination arguments with resolve function
const PersonQueryTypes = {
people: {
type: PaginatedListType(PersonType),
args: {
pagination: {
type: PaginationArgType,
defaultValue: { offset: 0, first: 10 }
},
},
resolve: (_, args) => {
const { offset, first } = args.pagination
// Call MongoDB/Mongoose functions to fetch data and count from database here.
return {
items: People.find().skip(offset).limit(first).exec(),
count: People.count()
}
},
}
}
// Root query type
const QueryType = new GraphQLObjectType({
name: 'QueryType',
fields: {
...PersonQueryTypes,
},
});
// GraphQL Schema
const Schema = new GraphQLSchema({
query: QueryType
});
and when querying:
{
people(pagination: {offset: 0, first: 10}) {
items {
id
name
}
count
}
}
Have created a launchpad here.
There are a number of ways you could implement pagination, but here are two simple example resolvers that use Mongoose to get you started:
Simple pagination using limit and skip:
(obj, { pageSize = 10, page = 0 }) => {
return Foo.find()
.skip(page*pageSize)
.limit(pageSize)
.exec()
}
Using _id as a cursor:
(obj, { pageSize = 10, cursor }) => {
const params = cursor ? {'_id': {'$gt': cursor}} : undefined
return Foo.find(params).limit(pageSize).exec()
}
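To request the next page, the client passes the _id of the last item from the previous page as the cursor (a rough illustration below; this relies on the default _id ordering, so if you sort by another field the cursor has to encode that field as well):
// Hypothetical client-side bookkeeping between pages
const lastItem = previousPage[previousPage.length - 1];
// Pass lastItem._id as the cursor argument of the next query,
// e.g. foos(pageSize: 10, cursor: "<lastItem._id>") { _id name }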
In my Stacks schema I have a dimensions property defined as such:
dimensions: {
type: [String],
autoform: {
options: function() {
return Dimensions.find().map(function(d) {
return { label: d.name, value: d._id };
});
}
}
}
This works really well, and using Mongol I'm able to see that an attempt to insert data through the form worked well (in this case I chose two dimensions to insert).
However, what I really want is data that stores the actual dimension object rather than its key. Something like this:
[
To try to achieve this I changed type: [String] to type: [DimensionSchema] and value: d._id to value: d. The thinking here is that I'm telling the form that I am expecting an object and am now returning the object itself.
However when I run this I get the following error in my console.
Meteor does not currently support objects other than ObjectID as ids
Poking around a little bit and changing type: [DimensionSchema] to type: DimensionSchema, I see some new errors in the console (presumably they get buried when the type is an array).
So it appears that autoform is taking the value I want stored in the database and trying to use it as an id. Any thoughts on the best way to do this?
For reference here is my DimensionSchema
export const DimensionSchema = new SimpleSchema({
name: {
type: String,
label: "Name"
},
value: {
type: Number,
decimal: true,
label: "Value",
min: 0
},
tol: {
type: Number,
decimal: true,
label: "Tolerance"
},
author: {
type: String,
label: "Author",
autoValue: function() {
return this.userId
},
autoform: {
type: "hidden"
}
},
createdAt: {
type: Date,
label: "Created At",
autoValue: function() {
return new Date()
},
autoform: {
type: "hidden"
}
}
})
In my experience, and according to aldeed himself in this issue, autoform is not very friendly to fields that are arrays of objects.
I would generally advise against embedding this data in such a way. It makes the data more difficult to maintain in case a dimension document is modified in the future.
alternatives
You can use a package like publish-composite to create a reactive join in a publication, while only embedding the _ids in the stack documents (a sketch follows this list).
You can use something like the PeerDB package to do the de-normalization for you, which will also update nested documents for you. Take into account that it comes with a learning curve.
Manually code the specific forms that cannot be easily created with AutoForm. This gives you maximum control and sometimes it is easier than all of the tinkering.
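A rough sketch of the publish-composite approach, assuming the reywood:publish-composite package and that stack documents keep dimension _ids in a dimensions array (names are illustrative):
Meteor.publishComposite('stacksWithDimensions', function () {
  return {
    find() {
      // Top-level cursor: the stacks themselves, with only the _ids embedded
      return Stacks.find();
    },
    children: [
      {
        find(stack) {
          // Publish only the dimension documents referenced by this stack
          return Dimensions.find({ _id: { $in: stack.dimensions || [] } });
        }
      }
    ]
  };
});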
if you insist on using AutoForm
While it may be possible to create a custom input type (via AutoForm.addInputType()), I would not recommend it. It would require you to create a template and modify the data in its valueOut method and it would not be very easy to generate edit forms.
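For completeness, roughly what that would involve (not recommended, as noted above; "afDimensionSelect" is a hypothetical template you would still have to write):
AutoForm.addInputType('dimensionSelect', {
  template: 'afDimensionSelect', // hypothetical template rendering a select of dimensions
  valueOut: function () {
    // `this` is the jQuery-wrapped element; swap the selected _id for the full object
    return Dimensions.findOne(this.val());
  }
});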
Since this is a specific use case, I believe that the best approach is to use a slightly modified schema and handle the data in a Meteor method.
Define a schema with an array of strings:
export const StacksSchemaSubset = new SimpleSchema({
desc: {
type: String
},
...
dimensions: {
type: [String],
autoform: {
options: function() {
return Dimensions.find().map(function(d) {
return { label: d.name, value: d._id };
});
}
}
}
});
Then, render a quickForm, specifying a schema and a method:
<template name="StacksForm">
{{> quickForm
schema=reducedSchema
id="createStack"
type="method"
meteormethod="createStack"
omitFields="createdAt"
}}
</template>
And define the appropriate helper to deliver the schema:
Template.StacksForm.helpers({
reducedSchema() {
return StacksSchemaSubset;
}
});
And on the server, define the method and mutate the data before inserting.
Meteor.methods({
createStack(data) {
// validate data
const dims = Dimensions.find({_id: {$in: data.dimensions}}).fetch(); // specify fields if needed
data.dimensions = dims;
Stacks.insert(data);
}
});
The only thing I can advise at this point (if the values don't support an object type) is to convert the object into a string (i.e. a serialized string), set that as the value for the "dimensions" key (instead of the object), and save that into the DB.
And when getting it back from the DB, just deserialize that value (string) into an object again.
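For example, a minimal sketch using Meteor's EJSON (plain JSON.stringify/JSON.parse would also work for simple objects; selectedIds and stackId are placeholders):
// Storing: serialize each selected dimension object to a string
var serialized = Dimensions.find({ _id: { $in: selectedIds } })
  .map(function (d) { return EJSON.stringify(d); });
Stacks.insert({ desc: 'example stack', dimensions: serialized });

// Reading: deserialize the strings back into objects
var stack = Stacks.findOne(stackId);
var dimensionObjects = stack.dimensions.map(function (s) { return EJSON.parse(s); });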
I am trying to update my collection, which has an array field (initially blank), and for this I am trying this code:
Industry.update({_id: industryId},
  {$push: {categories: {id: categoryId,
                        label: newCategory,
                        value: newCategory}}});
No error is shown, but in my collection just empty documents({}) are created.
Note: I have both categoryId and newCategory, so no issues with that.
Thanks in advance.
This is the schema:
Industry = new Meteor.Collection("industry");
Industry.attachSchema(new SimpleSchema({
label:{
type:String
},
value:{
type:String
},
categories:{
type: [Object]
}
}));
I am not sure, but maybe the error is occurring because you are not validating 'categories' in your schema. Try adding 'blackbox: true' to your 'categories' so that it accepts any type of object.
Industry.attachSchema(new SimpleSchema({
label: {
type: String
},
value: {
type: String
},
categories: {
type: [Object],
blackbox:true // allows all objects
}
}));
Once you've done that, try adding values to it like this:
var newObject = {
id: categoryId,
label: newCategory,
value: newCategory
}
Industry.update({
_id: industryId
}, {
$push: {
categories: newObject //newObject can be anything
}
});
This would allow you to add any kind of object into the categories field.
But you mentioned in a comment that categories is also another collection.
If you already have a SimpleSchema for categories, then you could validate the categories field to only accept objects that match the SimpleSchema for categories, like this:
Industry.attachSchema(new SimpleSchema({
label: {
type: String
},
value: {
type: String
},
categories: {
type: [categoriesSchema] // replace categoriesSchema by name of SimpleSchema for categories
}
}));
In this case only objects that match categoriesSchema will be allowed into the categories field. Any other type would be filtered out. Also, you wouldn't get any error on the console for trying to insert other types (which is what I think is happening when you try to insert now, as no validation is specified).
EDIT: EXPLANATION OF ANSWER
In a SimpleSchema, when you define an array of objects you have to validate it, i.e., you have to tell it what objects it can accept and what it can't.
For example when you define it like
...
categories: {
type: [categoriesSchema] // Correct
}
it means that only objects that are similar in structure to those described by another SimpleSchema named categoriesSchema can be inserted into it. According to your example, any object you try to insert should be of this format:
{
id: categoryId,
label: newCategory,
value: newCategory
}
Any object that isn't of this format will be rejected on insert. That's why all the objects you tried to insert were rejected when you tried initially, with your schema structured like this:
...
categories: {
type: [Object] // Not correct as there is no SimpleSchema named 'Object' to match with
}
Blackbox:true
Now, let's say you don't want your objects to be filtered and want all objects to be inserted without validation.
That's where setting "blackbox: true" comes in. If you define a field like this:
...
categories: {
type: [Object], // Correct
blackbox:true
}
it means that categories can be any object and need not be validated with respect to some other SimpleSchema. So whatever you try to insert gets accepted.
If you run this query in the mongo shell, it will produce a result like matched: 1, updated: 0. Please check what you get. If matched is 0, it means your query selector doesn't match any documents.
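For example, in the mongo shell (the _id value is a placeholder, and the exact output format depends on the shell version):
db.industry.update(
  { _id: "yourIndustryId" },
  { $push: { categories: { id: "catId", label: "Food", value: "Food" } } }
)
// WriteResult({ "nMatched" : 1, "nUpserted" : 0, "nModified" : 1 })
// nMatched: 0 would mean the selector matched no document at all.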
I'm using MongoDB as a log keeper for my app to then sync mobile clients. I have these models set up in NodeJS:
var UserArticle = new Schema({
date: { type: Number, default: Math.round((new Date()).getTime() / 1000) }, //Timestamp!
user: [{type: Schema.ObjectId, ref: "User"}],
article: [{type: Schema.ObjectId, ref: "Article"}],
place: Number,
read: Number,
starred: Number,
source: String
});
mongoose.model("UserArticle",UserArticle);
var Log = new Schema({
user: [{type: Schema.ObjectId, ref: "User"}],
action: Number, // O => Insert, 1 => Update, 2 => Delete
uarticle: [{type: Schema.ObjectId, ref: "UserArticle"}],
timestamp: { type: Number, default: Math.round((new Date()).getTime() / 1000) }
});
mongoose.model("Log",Log);
When I want to retrieve the log I use the following code:
var log = mongoose.model('Log');
log
.where("user", req.session.user)
.desc("timestamp")
.populate("uarticle")
.populate("uarticle.article")
.run(function (err, articles) {
if (err) {
console.log(err);
res.send(500);
return;
}
res.json(articles);
});
As you can see, I want mongoose to populate the "uarticle" field from the Log collection and, then, I want to populate the "article" field of the UserArticle ("uarticle").
But, using this code, Mongoose only populates "uarticle" using the UserArticle Model, but not the article field inside of uarticle.
Is it possible to accomplish this using Mongoose and populate(), or should I do something else?
Thank you,
From what I've checked in the documentation and from what I hear from you, this cannot be achieved, but you can populate the "uarticle.article" documents yourself in the callback function.
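A rough sketch of that manual approach (the 'Article' model name is assumed from the schema's ref; .run() is the old equivalent of .exec()):
var Article = mongoose.model('Article');

log
  .where('user', req.session.user)
  .desc('timestamp')
  .populate('uarticle')
  .run(function (err, entries) {
    if (err) return res.send(500);
    // Collect the article ids referenced by the populated uarticle docs
    var articleIds = [];
    entries.forEach(function (entry) {
      entry.uarticle.forEach(function (ua) {
        articleIds = articleIds.concat(ua.article);
      });
    });
    // Second query, then attach the article docs onto plain copies of the logs
    Article.find({ _id: { $in: articleIds } }, function (err, articles) {
      if (err) return res.send(500);
      var byId = {};
      articles.forEach(function (a) { byId[String(a._id)] = a; });
      var result = entries.map(function (entry) {
        var obj = entry.toObject();
        obj.uarticle.forEach(function (ua) {
          ua.article = ua.article.map(function (id) { return byId[String(id)]; });
        });
        return obj;
      });
      res.json(result);
    });
  });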
However I want to point out another aspect which I consider more important. You have documents in collection A which reference collection B, and in collection B's documents you have another reference to documents in collection C.
You are either doing this wrong (I'm referring to the database structure), or you should be using a relational database such as MySQL here. MongoDB's power lies in the fact that you can embed more information in documents, thus making fewer queries (having your data in a single collection). While referencing something is OK, having a reference and then another reference doesn't seem like you're taking full advantage of MongoDB here.
Perhaps you would like to share your situation and the database structure so we could help you out more.
You can use the mongoose-deep-populate plugin to do this. Usage:
User.find({}, function (err, users) {
User.deepPopulate(users, 'uarticle.article', function (err, users) {
// now each user document includes uarticle and each uarticle includes article
})
})
Disclaimer: I'm the author of the plugin.
I faced the same problem, but after hours of effort I found a solution. It can be done without using any external plugin. :)
applicantListToExport: function (query, callback) {
this
.find(query).select({'advtId': 0})
.populate({
path: 'influId',
model: 'influencer',
select: { '_id': 1,'user':1},
populate: {
path: 'userid',
model: 'User'
}
})
.populate('campaignId',{'campaignTitle':1})
.exec(callback);
}
Mongoose v5.5.5 seems to allow populate on a populated document.
You can even provide an array of multiple fields to populate on the populated document:
var batch = await mstsBatchModel.findOne({_id: req.body.batchId})
.populate({path: 'loggedInUser', select: 'fname lname', model: 'userModel'})
.populate({path: 'invoiceIdArray', model: 'invoiceModel',
populate: [
{path: 'updatedBy', select: 'fname lname', model: 'userModel'},
{path: 'createdBy', select: 'fname lname', model: 'userModel'},
{path: 'aircraftId', select: 'tailNum', model: 'aircraftModel'}
]});
how about something like:
populate_deep = function(type, instance, complete, seen)
{
if (!seen)
seen = {};
if (seen[instance._id])
{
complete();
return;
}
seen[instance._id] = true;
// use meta util to get all "references" from the schema
var refs = meta.get_references(meta.schema(type));
if (!refs)
{
complete();
return;
}
var opts = [];
for (var i=0; i<refs.length; i++)
opts.push({path: refs[i].name, model: refs[i].ref});
mongoose.model(type).populate(instance, opts, function(err,o){
utils.forEach(refs, function (ref, next) {
if (ref.is_array)
utils.forEach(o[ref.name], function (v, lnext) {
populate_deep(ref.ref_type, v, lnext, seen);
}, next);
else
populate_deep(ref.ref_type, o[ref.name], next, seen);
}, complete);
});
}
meta utils is rough... want the src?
Or you can simply pass an object to populate, like this:
const myFilterObj = {};
const populateObj = {
path: "parentFileds",
populate: {
path: "childFileds",
select: "childFiledsToSelect"
},
select: "parentFiledsToSelect"
};
Model.find(myFilterObj)
.populate(populateObj).exec((err, data) => console.log(data) );
I'd like to run a query on a Model, but only return embedded documents where the query matches. Consider the following...
var EventSchema = new mongoose.Schema({
typ : { type: String },
meta : { type: String }
});
var DaySchema = new mongoose.Schema({
uid: mongoose.Schema.ObjectId,
events: [EventSchema],
dateR: { type: Date, 'default': Date.now }
});
function getem() {
DayModel.find({'events.typ': 'magic'}, function(err, days) {
// magic. ideally this would return a list of events rather then days
});
}
That find operation will return a list of DayModel documents. But what I'd really like is a list of EventSchemas alone. Is this possible?
It's not possible to fetch the Event objects directly, but you can restrict which fields your query returns like this:
DayModel.find({'events.typ': 'magic'}, ['events'], function(err, days) {
...
});
You will still need to loop through the results to extract the actual embedded fields from the documents returned by the query, however.
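For example, a rough sketch of that extraction step:
DayModel.find({ 'events.typ': 'magic' }, ['events'], function (err, days) {
  if (err) return console.error(err);
  var events = [];
  days.forEach(function (day) {
    day.events.forEach(function (ev) {
      // The query matches whole days that contain at least one 'magic' event,
      // so filter again to keep only the matching events themselves
      if (ev.typ === 'magic') events.push(ev);
    });
  });
  // events is now a flat array of the embedded documents
});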