Using one autoForm, I need to insert data into two collections - mongodb

I am currently working on an inventory system that takes a Part collection and a Purchase collection as the backbone of the application. Each part must have a corresponding purchase, i.e. a part must have a partId, serial number, and cost number associated with it. I am using Meteor.js with CoffeeScript, Jade, and Grapher. I can insert into each collection individually, but they do not seem connected. I have set up the linkers between the two collections, but I am a little lost as to where to go next.
Here is a snippet of the collections:
Purchase Collection
PurchaseInventory.schema = new SimpleSchema
  partId:
    type: String
    optional: true
  serialNum:
    type: Number
    optional: true
  costNum:
    type: Number
    optional: true
Parts Collection/schema
Inventory.schema = new SimpleSchema
  name:
    type: String
    optional: true
  manufacturer:
    type: String
    optional: true
  description:
    type: String
    optional: true
parts query
export getInventory = Inventory.createQuery('getInventory',
  $filter: ({ filters, options, params }) ->
    if params.filters then Object.assign(filters, params.filters)
    if params.options then Object.assign(options, params.options)
    return { filters, options, params }
  name: 1
  manufacturer: 1
  description: 1
  pic: 1
  purchase:
    partId: 1
)
purchase query
export getPurchase = PurchaseInventory.createQuery('getPurchase',
  $filter: ({ filters, options, params }) ->
    if params.filters then Object.assign(filters, params.filters)
    if params.options then Object.assign(options, params.options)
    return { filters, options, params }
  serial: 1
  cost: 1
  date: 1
  warrentyDate: 1
  userId: 1
)
Linkers
# Parts
Inventory.addLinks
  purchase:
    collection: PurchaseInventory
    inversedBy: 'part'

# Purchases
PurchaseInventory.addLinks
  part:
    type: 'one'
    collection: Inventory
    field: 'partId'
    index: true
And finally the Jade/Pug auto form
+autoForm(class="inventoryForm" schema=schema id="inventoryInsertForm" validation="blur" type="method" meteormethod="inventory.insert")
  .formGroup
    +afQuickField(name="name" label="Name")
    +afQuickField(name="manufacturer" label="Manufacturer")
    +afQuickField(name="description" label="Description")
  button#invenSub(type="submit") Submit
To reiterate, my goal is for each item in Parts to have a link to its corresponding purchase data.

The most straightforward way is to use the AutoForm form type normal and create a custom event handler for the submit event (alternatively, you can use the AutoForm onSubmit hook). From there you can use the AutoForm.getFormValues API function to get the current document.
Since I am not into CoffeeScript, I'll provide the following as Blaze/JS code, but I think it should give you the idea:
{{#autoForm type="normal" class="inventoryForm" schema=schema id="insertForm" validation="blur"}}
  <!-- your fields -->
{{/autoForm}}
/**
 * validates a form against a given schema and returns the
 * related document including all form data.
 * See: https://github.com/aldeed/meteor-autoform#sticky-validation-errors
 **/
export const formIsValid = function formIsValid (formId, schema) {
  const { insertDoc } = AutoForm.getFormValues(formId)
  // create validation context
  const context = schema.newContext()
  context.validate(insertDoc)
  // get possible validation errors
  // and attach them directly to the form
  const errors = context.validationErrors()
  if (errors && errors.length > 0) {
    errors.forEach(err => AutoForm.addStickyValidationError(formId, err.key, err.type, err.value))
    return null
  } else {
    return insertDoc
  }
}
Template.yourFormTemplate.events({
  'submit #insertForm' (event) {
    event.preventDefault() // important to prevent the page from reloading!
    // validate against both schemas to raise validation
    // errors for both instead of only one of them
    const insertDoc = formIsValid('insertForm', PurchaseInventory.schema) && formIsValid('insertForm', Inventory.schema)
    // call the insert methods only if both validations passed
    if (insertDoc) {
      Meteor.call('inventory.insert', insertDoc, (err, res) => { ... })
      Meteor.call('purchaseInventory.insert', insertDoc, (err, res) => { ... })
    }
  }
})
Note that if you need both inserts to succeed on the server side, you should write a third Meteor method that explicitly inserts the single doc into both collections in one method call. If you are on MongoDB version >= 4 you can combine this with transactions.
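For illustration, such a combined method could look roughly like the sketch below. The collection names come from the question; the method name inventory.insertWithPurchase and the way the form document is split into part and purchase fields are assumptions, not something from your code:

import { Meteor } from 'meteor/meteor'
import { check } from 'meteor/check'

Meteor.methods({
  'inventory.insertWithPurchase' (doc) {
    check(doc, Object)
    // split the combined form document (assumed field names)
    const partDoc = { name: doc.name, manufacturer: doc.manufacturer, description: doc.description }
    const purchaseDoc = { serialNum: doc.serialNum, costNum: doc.costNum }
    // insert the part first, then point the purchase at it via the Grapher link field
    const partId = Inventory.insert(partDoc)
    purchaseDoc.partId = partId
    const purchaseId = PurchaseInventory.insert(purchaseDoc)
    return { partId, purchaseId }
  }
})

On the client you would then call only this one method with the full insertDoc instead of the two separate calls above.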

Related

Trying to store SuiteScript search result as a lookup parameter

I have the following code which returns the internal ID of a sales order by looking it up from a support case record.
So the order of events is:
A support case is received via email
The free text message body field contains a reference to a sales order transaction number. This is identified by the use of the number convention of 'SO1547878'
A workflow is triggered on case creation from the email case creation feature. The sales order number is extracted and stored in a custom field.
The internal ID of the record is looked up and written to the console (log debug) using the workflow action script below:
/**
 *@NApiVersion 2.x
 *@NScriptType WorkflowActionScript
 *@param {Object} context
 */
define(["N/search", "N/record"], function (search, record) {
  function onAction(context) {
    var recordObj = context.newRecord;
    var oc_number = recordObj.getValue({ fieldId: "custevent_case_creation" });
    var s = search
      .create({
        type: "salesorder",
        filters: [
          search.createFilter({
            name: "tranid",
            operator: search.Operator.IS,
            values: [oc_number],
          }),
        ],
        columns: ["internalid"],
      })
      .run()
      .getRange({
        start: 0,
        end: 1,
      });
    log.debug("result set", s);
    return s[0];
  }
  return {
    onAction: onAction,
  };
});
I am trying to return the resulting internal ID as a parameter so I can create a link to the record on the case record.
I'm getting stuck trying to work out how I would do this.
Is there a way to store the looked-up internal ID (i.e. the one currently in the debug logs) on the case record?
I am very new to JS and SuiteScript, so I am not sure at what point in this process this value would need to be stored in the support case record.
At the moment, the workflow action script (which is the part of the workflow the above script relates to) is set to trigger after submit.
Thanks
Edit: Thanks to Bknights, I have a solution that works.
The new revised script is as follows:
/**
 *@NApiVersion 2.x
 *@NScriptType WorkflowActionScript
 *@param {Object} context
 */
define(["N/search", "N/record"], function (search, record) {
  function onAction(context) {
    var recordObj = context.newRecord;
    var oc_number = recordObj.getValue({ fieldId: "custevent_case_creation" });
    var s = search
      .create({
        type: "salesorder",
        filters: [
          search.createFilter({
            name: "tranid",
            operator: search.Operator.IS,
            values: [oc_number],
          }),
        ],
        columns: ["internalid"],
      })
      .run()
      .getRange({
        start: 0,
        end: 1,
      });
    log.debug("result set", s[0].id);
    return s[0].id;
  }
  return {
    onAction: onAction,
  };
});
On the script record for the workflow action script, set the type of return you expect. In this case, it would be a sales order record.
This would allow you to use a list/record field to store the value from the 'search message' workflow action created by the script.
Edit 2: A variation of this
/**
 *@NApiVersion 2.x
 *@NScriptType WorkflowActionScript
 *@param {Object} context
 */
define(["N/search", "N/record"], function (search, record) {
  function onAction(context) {
    try {
      var recordObj = context.newRecord;
      var oc_number = recordObj.getValue({
        fieldId: "custevent_case_creation",
      });
      var s = search
        .create({
          type: "salesorder",
          filters: [
            search.createFilter({
              name: "tranid",
              operator: search.Operator.IS,
              values: [oc_number],
            }),
          ],
          columns: ["internalid", "department"],
        })
        .run()
        .getRange({
          start: 0,
          end: 1,
        });
      log.debug("result set", s[0]);
      recordObj.setValue({ fieldId: "custevent_case_sales_order", value: s[0].id });
      // return s[0]
    } catch (error) {
      log.debug(
        error.name,
        "recordObjId: " +
          recordObj.id +
          ", oc_number:" +
          oc_number +
          ", message: " +
          error.message
      );
    }
  }
  return {
    onAction: onAction,
  };
});
Depending on what you want to do with the order link you can do a couple of things.
If you want to reference the Sales Order record from the Support Case record you'd want to add a custom List/Record field to support cases that references transactions. (ex custevent_case_order)
Then move this script to a beforeSubmit UserEvent script and, instead of returning the value, extend it like:
recordObj.setValue({fieldId:'custevent_case_order', value:s[0].id});
For performance you'll probably want to test whether you are in a create/update event and that the custom order field is not yet filled in.
If this is part of a larger workflow you may still want to look up the Sales Order in the user event script and then start your workflow when that field has been populated.
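To make the beforeSubmit suggestion above more concrete, a rough user event sketch could look like the following. The field IDs custevent_case_creation and custevent_case_order are the ones mentioned in this thread; everything else is an untested assumption:

/**
 *@NApiVersion 2.x
 *@NScriptType UserEventScript
 */
define(["N/search"], function (search) {
  function beforeSubmit(context) {
    // only run on create/edit, and only if the order field is still empty
    if (context.type !== context.UserEventType.CREATE &&
        context.type !== context.UserEventType.EDIT) return;
    var rec = context.newRecord;
    if (rec.getValue({ fieldId: "custevent_case_order" })) return;
    var ocNumber = rec.getValue({ fieldId: "custevent_case_creation" });
    if (!ocNumber) return;
    // same lookup as in the workflow action script, written as a filter expression
    var results = search.create({
      type: "salesorder",
      filters: [["tranid", "is", ocNumber]],
      columns: ["internalid"]
    }).run().getRange({ start: 0, end: 1 });
    if (results && results.length) {
      rec.setValue({ fieldId: "custevent_case_order", value: results[0].id });
    }
  }
  return { beforeSubmit: beforeSubmit };
});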
If you want to keep the workflow intact your current code could return s[0].id to a workflow or workflow action custom field and then apply it to the case with a Set Field Value action.

Mongo `pre` hook not firing as expected on `save()` operation

I am using pre and post hooks in my MongoDB/Node backend in order to compare a pre-save and post-save version of a document so I can generate notes via model triggers based on what's changed. In one of my models/collections this is working, but in another, it's not working as expected, and I'm not sure why.
In the problem case, some research has determined that even though I am calling a pre hook trigger on an operation that uses save(), when I console.log the doc state passed to that pre hook, it has already had the change applied. In other words, the hook is not firing before the save() operation, but after, from what I can tell.
Here is my relevant model code:
let Schema = mongoose
  .Schema(CustomerSchema, {
    timestamps: true
  })
  .pre("save", function(next) {
    const doc = this;
    console.log("doc in .pre: ", doc); // this should be the pre-save version of the doc, but it is the post-save version
    console.log("doc.history.length in model doc: ", doc.history.length);
    trigger.preSave(doc);
    next();
  })
  .post("save", function(doc) {
    trigger.postSave(doc);
  })
  .post("update", function(doc) {
    trigger.postSave(doc);
  });

module.exports = mongoose.model("Customer", Schema);
The relevant part of the save() operation that I'm doing looks like this (all I'm doing is pushing a new element to an array on the doc called "history"):
exports.updateHistory = async function(req, res) {
  let request = new CentralReqController(
    req,
    res,
    {
      // Allowed Parameters
      id: {
        type: String
      },
      stageId: {
        type: String
      },
      startedBy: {
        type: String
      }
    },
    [
      // Required Parameters
      "id",
      "stageId",
      "startedBy"
    ]
  );
  let newHistoryObj = {
    stageId: request.parameters.stageId,
    startDate: new Date(),
    startedBy: request.parameters.startedBy,
    completed: false
  };
  let customerToUpdate = await Customer.findOne({
    _id: request.parameters.id
  }).exec();
  let historyArray = await customerToUpdate.history;
  console.log("historyArray.length before push in update func: ", historyArray.length);
  historyArray.push(newHistoryObj);
  await customerToUpdate.save((err, doc) => {
    if (doc) console.log("history update saved...");
    if (err) return request.sendError("Customer history update failed.", err);
  });
};
So, my question is, if a pre hook on a save() operation is supposed to fire BEFORE the save() happens, why does the document I look at via my console.log show a document that's already had the save() operation done on it?
You are a bit mistaken about what the pre/post 'save' hooks are doing. In pre/post hook terms, save is the actual save operation to the database. That being said, the this you have in the pre('save') hook is the object you called .save() on, not the updated object from the database. For example:
let myCustomer = req.body.customer; // some customer object
// Update the customer object
myCustomer.name = 'Updated Name';
// Save the customer
myCustomer.save();
We just updated the customer's name. When .save() is called, it triggers the hooks, like you stated above. The only difference is, the this in the pre('save') hook is the same object as myCustomer, not the updated object from the database. On the contrary, the doc object in the post('save') hook IS the updated object from the database.
Schema.pre('save', function(next) {
  console.log(this); // Modified object (myCustomer), not from DB
  next();
});
Schema.post('save', function(doc) {
  console.log(doc); // Modified object DIRECTLY from DB
});
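As a side note that is not part of the answer above: if you actually need the previously stored version for comparison, a common workaround is to load it from the database inside the pre('save') hook before the write happens. A sketch, assuming Mongoose 5+ (async hooks) and the trigger object from the question, where the two-argument trigger.preSave(previous, doc) call is hypothetical:

Schema.pre("save", async function () {
  const doc = this;
  if (!doc.isNew) {
    // `this.constructor` is the compiled model, so this reads the old version from MongoDB
    const previous = await doc.constructor.findById(doc._id).lean();
    console.log("history length in DB:", previous.history.length);
    console.log("history length in memory:", doc.history.length);
    trigger.preSave(previous, doc); // hypothetical variant taking both versions
  }
});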

Subscribing to Meteor.Users Collection

// in server.js
Meteor.publish("directory", function () {
return Meteor.users.find({}, {fields: {emails: 1, profile: 1}});
});
// in client.js
Meteor.subscribe("directory");
I now want to query the directory listings from the client, e.g. directory.findOne() from the browser's console (for testing purposes).
Doing directory = Meteor.subscribe('directory') or directory = Meteor.Collection('directory') and then calling directory.findOne() doesn't work. When I do directory = new Meteor.Collection('directory') it works but returns undefined, and I bet it CREATES a mongo collection on the server, which I don't like, because the users collection already exists and this points to a new collection rather than the users collection.
NOTE: I don't want to mess with how the Meteor.users collection handles its functions... I just want to retrieve some specific data from it using a different handle that will only return the specified fields and not override its default behavior.
Ex:
Meteor.users.findOne() // will return the currentLoggedIn users data
directory.findOne() // will return different fields taken from Meteor.users collection.
If you want this setup to work, you need to do the following:
Meteor.publish('thisNameDoesNotMatter', function () {
  var self = this;
  var handle = Meteor.users.find({}, {
    fields: {emails: 1, profile: 1}
  }).observeChanges({
    added: function (id, fields) {
      self.added('thisNameMatters', id, fields);
    },
    changed: function (id, fields) {
      self.changed('thisNameMatters', id, fields);
    },
    removed: function (id) {
      self.removed('thisNameMatters', id);
    }
  });
  self.ready();
  self.onStop(function () {
    handle.stop();
  });
});
Now on the client side you need to define a client-side-only collection:
directories = new Meteor.Collection('thisNameMatters');
and subscribe to the corresponding data set:
Meteor.subscribe('thisNameDoesNotMatter');
This should work now. Let me know if you think this explanation is not clear enough.
EDIT
Here, the self.added/changed/removed methods act more or less as an event dispatcher. Briefly speaking they give instructions to every client who called
Meteor.subscribe('thisNameDoesNotMatter');
about the updates that should be applied on the client's collection named thisNameMatters assuming that this collection exists. The name - passed as the first parameter - can be chosen almost arbitrarily, but if there's no corresponding collection on the client side all the updates will be ignored. Note that this collection can be client-side-only, so it does not necessarily have to correspond to a "real" collection in your database.
Returning a cursor from your publish method is only a shortcut for the above code, with the only difference that the name of an actual collection is used instead of our thisNameMatters. This mechanism actually allows you to create as many "mirrors" of your datasets as you wish. In some situations this might be quite useful. The only problem is that these "collections" will be read-only (which totally makes sense BTW), because if they're not defined on the server the corresponding insert/update/remove methods do not exist.
The collection is called Meteor.users and there is no need to declare a new one on neither the server nor the client.
Your publish/subscribe code is correct:
// in server.js
Meteor.publish("directory", function () {
return Meteor.users.find({}, {fields: {emails: 1, profile: 1}});
});
// in client.js
Meteor.subscribe("directory");
To access documents in the users collection that have been published by the server you need to do something like this:
var usersArray = Meteor.users.find().fetch();
or
var oneUser = Meteor.users.findOne();
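For example, wiring that into a Blaze template could look roughly like this (the template name directory is made up for illustration; only the published emails and profile fields will be present on the client):

// client.js
Meteor.subscribe("directory");

Template.directory.helpers({
  users() {
    // queries the client-side mirror of Meteor.users filled by the "directory" publication
    return Meteor.users.find({}, { fields: { emails: 1, profile: 1 } });
  }
});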

Meteor Publish Distinct Values of Field in Collection

I'm stuck on a pretty simple scenario in Meteor:
I have a huge collection of things with many fields, some of them containing quite a bit of text.
I want to create a page for searching that collection.
One of the fields that each item in the collection has is "category".
I'd like to give the user the ability to filter by that category.
For that, I need to publish just the distinct values of the category field in the collection.
I can't figure out a way to do that without publishing the whole collection which takes way too long. How can I publish just the distinct categories and use them to fill a dropdown?
Bonus question and somewhat related: How do I publish a count of all items in the collection without publishing the whole collection?
A good starting point to make this easier would be to normalize your categories into a separate database collection.
However assuming that is not possible or practical, the best (though imperfect) solution will be to publish two separate versions of your collection, one which returns only the categories field of the entire collection and another which returns all fields of the collection for the selected category only. That would look like the following:
// SERVER
Meteor.startup(function(){
  Meteor.publish('allThings', function() {
    // return only id and categories field for all your things
    return Things.find({}, {fields: {categories: 1}});
  });
  Meteor.publish('thingsByCategory', function(category) {
    // return all fields for things having the selected category
    // you can then subscribe via something like a client-side Session variable
    // e.g., Meteor.subscribe("thingsByCategory", Session.get("category"));
    return Things.find({category: category});
  });
});
Note that you will still need to assemble your array of categories client side from the Things cursor (for example, by using underscore's _.pluck and _.uniq methods to grab the categories and remove any dups). But the data set will be much smaller as you are only working with single-field documents now.
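For instance, the client-side assembly could be as small as this sketch (assuming categories holds a single value or an array per document, hence the _.flatten):

// client: derive the distinct categories from the single-field 'allThings' subscription
Meteor.subscribe('allThings');

// run this inside a template helper or Tracker.autorun so it re-computes reactively
const categories = _.uniq(_.flatten(_.pluck(Things.find().fetch(), 'categories')));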
(Note that ideally, you would want to use Mongo's distinct() method in your publish function to publish only the distinct categories, but that is not possible directly as it returns an array which cannot be published).
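As an aside that is not part of the answer above: if the category list does not need to be reactive, you can still reach Mongo's distinct() through a Meteor method by way of rawCollection(); a rough sketch, assuming the collection is Things and the field is category:

// server: expose distinct() through a method instead of a publication
Meteor.methods({
  'things.distinctCategories'() {
    // rawCollection() returns the underlying Node MongoDB driver collection
    return Things.rawCollection().distinct('category');
  }
});

// client: fetch the list once and fill the dropdown
Meteor.call('things.distinctCategories', (err, categories) => {
  if (!err) Session.set('categories', categories);
});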
You could use the internal this._documents.collectionName to only send new categories down to the client. Tracking which categories to remove becomes a bit ugly so you probably will still end up maintaining a separate 'categories' collection eventually.
Example:
Meteor.publish( 'categories', function(){
  var self = this;
  largeCollection.find({}, {fields: {category: 1}}).observeChanges({
    added: function( id, doc ){
      if( ! self._documents.categories[ doc.category ] )
        self.added( 'categories', doc.category, {category: doc.category});
    },
    removed: function(){
      _.keys( self._documents.categories ).forEach( function( category ){
        if ( largeCollection.find({category: category}, {limit: 1}).count() === 0 )
          self.removed( 'categories', category );
      });
    }
  });
  self.ready();
});
Re: the bonus question, publishing counts: take a look at the meteorite package publish-counts. I think that does what you want.
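If I remember that package's API correctly, it reduces the counting problem to something like the sketch below (Posts is just a stand-in collection name here):

// server: publish a reactive count without publishing the documents themselves
Meteor.publish('postsCount', function () {
  Counts.publish(this, 'posts', Posts.find());
});

// client
Meteor.subscribe('postsCount');
const total = Counts.get('posts'); // reactive, use inside a helper or autorun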
These patterns might be helpful to you. Here is a publication that publishes counts:
/*****************************************************************************/
/* Counts Publish Function
/*****************************************************************************/
// server: publish the current size of a collection
Meteor.publish("countsByProject", function (arguments) {
var self = this;
if (this.userId) {
var roles = Meteor.users.findOne({_id : this.userId}).roles;
if ( _.contains(roles, arguments.projectId) ) {
//check(arguments.video_id, Integer);
// observeChanges only returns after the initial `added` callbacks
// have run. Until then, we don't want to send a lot of
// `self.changed()` messages - hence tracking the
// `initializing` state.
Videos.find({'projectId': arguments.projectId}).forEach(function (video) {
var count = 0;
var initializing = true;
var video_id = video.video_id;
var handle = Observations.find({video_id: video_id}).observeChanges({
added: function (id) {
//console.log(video._id);
count++;
if (!initializing)
self.changed("counts", video_id, {'video_id': video_id, 'observations': count});
},
removed: function (id) {
count--;
self.changed("counts", video_id, {'video_id': video_id, 'observations': count});
}
// don't care about changed
});
// Instead, we'll send one `self.added()` message right after
// observeChanges has returned, and mark the subscription as
// ready.
initializing = false;
self.added("counts", video_id, {'video_id': video_id, 'observations': count});
self.ready();
// Stop observing the cursor when client unsubs.
// Stopping a subscription automatically takes
// care of sending the client any removed messages.
self.onStop(function () {
handle.stop();
});
}); // Videos forEach
} //if _.contains
} // if userId
return this.ready();
});
And here is one that creates a new collection from a specific field:
/*****************************************************************************/
/* Tags Publish Functions
/*****************************************************************************/
// server: publish the current size of a collection
Meteor.publish("tags", function (arguments) {
var self = this;
if (this.userId) {
var roles = Meteor.users.findOne({_id : this.userId}).roles;
if ( _.contains(roles, arguments.projectId) ) {
var observations, tags, initializing, projectId;
initializing = true;
projectId = arguments.projectId;
observations = Observations.find({'projectId' : projectId}, {fields: {tags: 1}}).fetch();
tags = _.pluck(observations, 'tags');
tags = _.flatten(tags);
tags = _.uniq(tags);
var handle = Observations.find({'projectId': projectId}, {fields : {'tags' : 1}}).observeChanges({
added: function (id, fields) {
if (!initializing) {
tags = _.union(tags, fields.tags);
self.changed("tags", projectId, {'projectId': projectId, 'tags': tags});
}
},
removed: function (id) {
self.changed("tags", projectId, {'projectId': projectId, 'tags': tags});
}
});
initializing = false;
self.added("tags", projectId, {'projectId': projectId, 'tags': tags});
self.ready();
self.onStop(function () {
handle.stop();
});
} //if _.contains
} // if userId
return self.ready();
});
I have not tested it on Meteor, and based on the replies I'm getting skeptical that it will work, but using MongoDB's distinct() would normally do the trick.
http://docs.mongodb.org/manual/reference/method/db.collection.distinct/

Sailsjs and Associations

Getting into sails.js - enjoying the cleanliness of models, routes, and the recent addition of associations. My dilemma:
I have Users, and Groups. There is a many-many relationship between the two.
var User = {
  attributes: {
    username: 'string',
    groups: {
      collection: 'group',
      via: 'users'
    }
  }
};

module.exports = User;
...
var Group = {
  attributes: {
    name: 'string',
    users: {
      collection: 'user',
      via: 'groups',
      dominant: true
    }
  }
};

module.exports = Group;
I'm having difficulty understanding how I would save a user and it's associated groups.
Can I access the 'join table' directly?
From an ajax call, how should I be sending in the list of group ids to my controller?
If via REST URL, is this already accounted for in blueprint functions via update?
If so - what does the URL look like? /user/update/1?groups=1,2,3 ?
Is all of this just not supported yet? Any insight is helpful, thanks.
Documentation for these blueprints is forthcoming, but to link two records that have a many-to-many association, you can use the following REST url:
POST /user/[userId]/groups
where the body of the post is:
{id: [groupId]}
assuming that id is the primary key of the Group model. Starting with v0.10-rc5, you can also simultaneously create and add a new group to a user by sending data about the new group in the POST body, without an id:
{name: 'myGroup'}
You can currently only add one linked entity at a time.
To add an entity programmatically, use the add method:
User.findOne(123).exec(function(err, user) {
  if (err) {return res.serverError(err);}
  // Add group with ID 1 to user with ID 123
  user.groups.add(1);
  // Add brand new group to user with ID 123
  user.groups.add({name: 'myGroup'});
  // Save the user, committing the additions
  user.save(function(err, user) {
    if (err) {return res.serverError(err);}
    return res.json(user);
  });
});
Just to answer your question about accessing the join tables directly,
Yes, you can do that if you are using the Model.query function. You need to check the names of the join tables in the DB itself. Not sure if it is recommended or not, but I have found myself in such situations sometimes when it was unavoidable.
There have been times when the logic I was trying to implement involved a lot of queries and was required to be executed as an atomic transaction.
In those cases, I encapsulated all the DB logic in a stored function and executed that using Model.query:
var myQuery = "select some_db_function(" + <param> + ")";
Model.query(myQuery, function(err, result){
  if(err) return res.json(err);
  else{
    result = result.rows[0].some_db_function;
    return res.json(result);
  }
});
Postgres has been a great help here due to its json datatype, which allowed me to pass params as JSON and also return values as JSON.