I have the following code which returns the internal ID of a sales order by looking it up from a support case record.
So the order of events is:
A support case is received via email
The free-text message body field contains a reference to a sales order transaction number, identified by the transaction numbering convention, e.g. 'SO1547878'.
A workflow is triggered on case creation from the email case creation feature. The sales order number is extracted and stored in a custom field.
The internal ID of the record is looked up and written to the console (log debug) using the workflow action script below:
/**
 * @NApiVersion 2.x
 * @NScriptType WorkflowActionScript
 * @param {Object} context
 */
define(["N/search", "N/record"], function (search, record) {
function onAction(context) {
var recordObj = context.newRecord;
var oc_number = recordObj.getValue({ fieldId: "custevent_case_creation" });
var s = search
.create({
type: "salesorder",
filters: [
search.createFilter({
name: "tranid",
operator: search.Operator.IS,
values: [oc_number],
}),
],
columns: ["internalid"],
})
.run()
.getRange({
start: 0,
end: 1,
});
log.debug("result set", s);
return s[0];
}
return {
onAction: onAction,
};
});
I am trying to return the resulting internal ID as a parameter so I can create a link to the record on the case record.
I'm getting stuck trying to work out how I would do this.
Is there a way to store the looked-up internal ID (i.e. the one currently in the debug logs) on the case record?
I am very new to JS and SuiteScript, so I'm not sure at what point in this process this value would need to be stored on the support case record.
At the moment, the workflow action script (the part of the workflow the above script relates to) is set to trigger after submit.
Thanks
Edit: Thanks to Bknights, I have a solution that works.
The new revised script is as follows:
/**
 * @NApiVersion 2.x
 * @NScriptType WorkflowActionScript
 * @param {Object} context
 */
define(["N/search", "N/record"], function (search, record) {
function onAction(context) {
var recordObj = context.newRecord;
var oc_number = recordObj.getValue({ fieldId: "custevent_case_creation" });
var s = search
.create({
type: "salesorder",
filters: [
search.createFilter({
name: "tranid",
operator: search.Operator.IS,
values: [oc_number],
}),
],
columns: ["internalid"],
})
.run()
.getRange({
start: 0,
end: 1,
});
log.debug("result set", s[0].id);
return s[0].id;
}
return {
onAction: onAction,
};
});
On the script record for the workflow action script, set the type of return you expect. In this case, it would be a sales order record.
This allows you to use a list/record field to store the value from the 'search message' workflow action created by the script.
Edit 2: A variation of this
/**
 * @NApiVersion 2.x
 * @NScriptType WorkflowActionScript
 * @param {Object} context
*/
define(["N/search", "N/record"], function (search, record) {
function onAction(context) {
try {
var recordObj = context.newRecord;
var oc_number = recordObj.getValue({
fieldId: "custevent_case_creation",
});
var s = search
.create({
type: "salesorder",
filters: [
search.createFilter({
name: "tranid",
operator: search.Operator.IS,
values: [oc_number],
}),
],
columns: ["internalid","department"],
})
.run()
.getRange({
start: 0,
end: 1,
});
log.debug("result set", s[0]);
recordObj.setValue({fieldId:'custevent_case_sales_order', value:s[0].id});
// return s[0]
} catch (error) {
log.debug(
error.name,
"recordObjId: " +
recordObj.id +
", oc_number:" +
oc_number +
", message: " +
error.message
);
}
}
return {
onAction: onAction,
};
});
Depending on what you want to do with the order link you can do a couple of things.
If you want to reference the Sales Order record from the Support Case record, you'd want to add a custom List/Record field to support cases that references transactions (e.g. custevent_case_order).
Then move this script to a beforeSubmit UserEvent script and, instead of returning the ID, extend it like:
recordObj.setValue({fieldId:'custevent_case_order', value:s[0].id});
For performance you'll probably want to test whether you are in a create/update event and that the custom order field is not yet filled in.
If this is part of a larger workflow you may still want to look up the Sales Order in the user event script and then start your workflow when that field has been populated.
If you want to keep the workflow intact your current code could return s[0].id to a workflow or workflow action custom field and then apply it to the case with a Set Field Value action.
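For illustration, here is a minimal sketch of the beforeSubmit UserEvent approach described above. It assumes the field IDs used in this thread (custevent_case_creation holding the extracted order number, custevent_case_order as the new List/Record field), so adjust them to your account:
/**
 * @NApiVersion 2.x
 * @NScriptType UserEventScript
 */
define(["N/search"], function (search) {
  function beforeSubmit(context) {
    // Only run on create/edit, and only if the order link is still empty
    if (context.type !== context.UserEventType.CREATE && context.type !== context.UserEventType.EDIT) return;
    var recordObj = context.newRecord;
    if (recordObj.getValue({ fieldId: "custevent_case_order" })) return;
    var oc_number = recordObj.getValue({ fieldId: "custevent_case_creation" });
    if (!oc_number) return;
    var results = search
      .create({
        type: "salesorder",
        filters: [["tranid", "is", oc_number]],
        columns: ["internalid"],
      })
      .run()
      .getRange({ start: 0, end: 1 });
    if (results && results.length > 0) {
      // Store the sales order's internal id on the case's List/Record field
      recordObj.setValue({ fieldId: "custevent_case_order", value: results[0].id });
    }
  }
  return { beforeSubmit: beforeSubmit };
});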
Related
I am new to MongoDB/Mongoose, and I am having an issue with a PUT request that updates the phone number of a name that already exists.
Objective: my front end takes in a person's name and phone number. Whenever it takes in a name that already exists in our records, it asks the user if they want to replace the number. Furthermore, the name input should be case-insensitive, meaning as long as the spelling is right, it should update the phone number.
e.g. if {name: Test, number: 123} exists in our records, inputting {name: TEST, number: 456} will show a pop-up confirming whether the user wants to replace the number. If they select OK, the record changes to {name: Test, number: 456}, and the change is reflected both on the front end and in the backend DB.
Currently, my schema is defined as follows (in a different file).
const personSchema = new mongoose.Schema({
name: String,
number: String,
})
The current code for the update functionality is the following, and it works for the most part:
app.put('/api/persons/:id', (req, res, next) => {
const body = req.body
const person = {
name: body.name,
number: body.number,
}
Person.findByIdAndUpdate(req.params.id, person, { new: true })
.then(updatedPerson => res.json(updatedPerson))
.catch(err => next(err))
// catch is for error handling
})
Main issue: the above code does not work when I input the same name with different casing. For instance, "Test" and "TEST" are treated as different records. The primary source of the error, based on what I printed to the console, is that the ID is different and thus my code can't find the existing entry.
i.e. the ID of the stored record with name "Test" is different from the ID of the new entry {name: TEST, number: 123}, and hence the ID of my input entry doesn't exist in my database.
This is a bit odd, since whenever I input the name with the same casing, it does work.
Based on some searching, I found a Stack Overflow suggestion that uses a regex and findOneAndUpdate, so I tried the following:
app.put('/api/persons/:id', (req, res, next) => {
const body = req.body
const person = {
name: body.name,
number: body.number,
}
// for testing purpose,
// this prints out the correct name
Person.find({ name: new RegExp(`^${body.name}$`, `i`) })
.then(result => {
console.log(result[0])
})
Person.findOneAndUpdate({ name: new RegExp(`^${body.name}$`, `i`) }, person, { new: true })
.then(updatePerson => {
res.json(updatePerson)
})
.catch(err => next(err))
})
There were a few issues with this:
The casing of the person's name changes (so if the new input has the name "TEST", the stored name is changed to all caps, when it is supposed to preserve the casing of the initial entry).
The above worked via the REST Client extension in VS Code, which is similar to Postman. However, actually testing from the front end hit the same issue of not finding the ID.
I was wondering what the correct way is to update the entry with either findByIdAndUpdate (preferably) or findOneAndUpdate, while accepting case-insensitive input and preserving the stored name.
You can add a field query_name: String that holds the lowercase version of the name.
I would use Mongoose hooks.
const personSchema = new mongoose.Schema({
name: String,
number: String,
query_name: String
})
personSchema.pre('save', function(next) {
this.query_name = this.name.toLowerCase();
next();
});
You can read more about it and maybe find another solution here
https://mongoosejs.com/docs/middleware.html
Note: don't use an arrow function for the hook, because arrow functions don't have their own 'this' binding.
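For completeness, a minimal sketch of how the update route could then use query_name, assuming the schema and hook above. It matches case-insensitively via the lowercased copy and only updates the number, so the stored casing of name is preserved (findOneAndUpdate does not run the save hook, which is fine here because the name itself is not changed):
app.put('/api/persons/:id', (req, res, next) => {
  const body = req.body
  // req.params.id is not needed here because we match on the normalized name
  Person.findOneAndUpdate(
    { query_name: body.name.toLowerCase() },
    { number: body.number },
    { new: true }
  )
    .then(updatedPerson => res.json(updatedPerson))
    .catch(err => next(err))
})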
I am trying to call CloudWatchLogs filterLogEvents with the parameters below.
Sometimes it works and returns the expected logs for that time range, but for a different Lambda function it returns empty events with a nextToken.
parameters = {
'logGroupName' : metricFilter.logGroupName,
'filterPattern' : metricFilter.filterPattern ? metricFilter.filterPattern : "",
'startTime' : timestamp - offset,
'endTime' : timestamp
};
I am retrying by sending the parameters with the nextToken, but it still returns empty events. Does anyone have any idea about this?
{
"events": [],
"searchedLogStreams": [],
"nextToken": "long text"
}
You might have been previously testing with a query from CloudWatch Logs Insights via the StartQuery API, and noted that you cannot currently paginate over the results of GetQueryResults.
Looking to overcome the limit of 10,000 records, you might have then tried FilterLogEvents, which supports pagination. If that is the case, please note that the startTime and endTime parameters of FilterLogEvents are specified in milliseconds, while the startTime and endTime parameters of StartQuery must be specified in seconds.
So in order to reuse the code that calculates startTime and endTime for StartQuery, you need to multiply the timestamp values by 1000.
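For example, a time window computed for FilterLogEvents could be converted like this (a rough sketch; the 15-minute window is only an illustration):
// FilterLogEvents expects epoch milliseconds
const endTime = Date.now();
const startTime = endTime - 15 * 60 * 1000; // last 15 minutes

// The same window for StartQuery has to be expressed in epoch seconds
const startQueryParams = {
  startTime: Math.floor(startTime / 1000),
  endTime: Math.floor(endTime / 1000),
};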
I had the same problem when I tried to fetch the AWS CloudWatch logs using the AWS SDK.
I finally solved this issue when I used logStreamNamePrefix, which in my case is the UUID of the device.
So I created the function below; the first time I run it without the nextToken, and on subsequent runs I take the token from the response and call it again with that token to continue the search.
loadCloudWatchLogs(nextToken?: string) {
// Set the region
AWS.config.update({
region: 'sa-east-1',
credentials: {
accessKeyId: environment.awsAccessKeyId,
secretAccessKey: environment.awsSecretAccessKey,
},
});
// Create the CloudWatchLogs service object
const cloudwatchlogs = new AWS.CloudWatchLogs({ apiVersion: '2014-03-28' });
// Defines the params attributes pattern.
let params: AWSCloudWatchParams;
// Check if a token was provided to perform a follow-up query to CloudWatchLogs.
if (nextToken) {
params = {
logGroupName: 'group-name' /* required */,
startTime: this.startTime,
endTime: this.endTime,
logStreamNamePrefix: this.device,
filterPattern: `{ ($.device="${this.device}") }`,
nextToken,
};
} else {
params = {
logGroupName: 'group-name' /* required */,
startTime: this.startTime,
endTime: this.endTime,
logStreamNamePrefix: this.device,
filterPattern: `{ ($.device="${this.device}") }`,
};
}
// Execute a filter for logs.
cloudwatchlogs.filterLogEvents(params, (err, data) => {
if (err) {
console.log(err, err.stack);
} else {
if (data.searchedLogStreams.length > 0) {
// Check if more logs exist for the query.
const loadMoreLogs = !data.searchedLogStreams[
data.searchedLogStreams.length - 1
].searchedCompletely;
// Update the token for the next query.
this.nextToken = data.nextToken;
data.events.forEach(log => {
// log processing...
});
}
}
});
}
I wrote this function in an Angular project, but it will work in vanilla JavaScript too with some adjustments.
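A possible way to page through the results with it, assuming this.nextToken is stored by the callback as shown above:
// First page: no token
this.loadCloudWatchLogs();
// Later (e.g. on a "load more" action): continue from where the last call stopped
if (this.nextToken) {
  this.loadCloudWatchLogs(this.nextToken);
}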
I am currently working on an inventory system that uses a Parts collection and a Purchase collection as the backbone of the application. Each part must have a corresponding purchase, i.e. a part must have a partId, serial number, and cost number associated with it. I am using Meteor.js with CoffeeScript, Jade, and Grapher. I can insert into each collection individually, but they do not seem connected. I have set up the links between the two collections, but I am a little lost as to where to go next.
Here is a snippet of the collections.
Purchase Collection
PurchaseInventory.schema = new SimpleSchema
partId:
type:String
optional:true
serialNum:
type:Number
optional:true
costNum:
type:Number
optional:true
Parts Collection/schema
Inventory.schema = new SimpleSchema
name:
type:String
optional:true
manufacturer:
type:String
optional:true
description:
type:String
optional:true
parts query
export getInventory = Inventory.createQuery('getInventory',
$filter: ({ filters, options, params }) ->
if params.filters then Object.assign(filters, params.filters)
if params.options then Object.assign(options, params.options)
return { filters, options , params }
name:1
manufacturer:1
description:1
pic:1
purchase:
partId:1
)
purchase query
export getPurchase = PurchaseInventory.createQuery('getPurchase',
$filter: ({ filters, options, params }) ->
if params.filters then Object.assign(filters, params.filters)
if params.options then Object.assign(options, params.options)
return { filters, options , params }
serial:1
cost:1
date:1
warrentyDate:1
userId:1
)
Linkers
# Parts
Inventory.addLinks
purchase:
collection:PurchaseInventory
inversedBy:"part"
# purchases
PurchaseInventory.addLinks
part:
type:'one'
collection:Inventory
field:'partId'
index: true
And finally the Jade/Pug auto form
+autoForm(class="inventoryForm" schema=schema id="inventoryInsertForm" validation="blur" type="method" meteormethod="inventory.insert")
.formGroup
+afQuickField(name="name" label="Name")
+afQuickField(name="manufacturer" label="Manufacturer")
+afQuickField(name="description" label="Description")
button#invenSub(type="submit") Submit
To reiterate my goal is to have each item in parts to have a link to its corresponding purchase data.
The most straightforward way is to use the AutoForm form type normal and create a custom event handler for the submit event (alternatively, you can use the AutoForm onSubmit hook). From there you can use the AutoForm.getFormValues API function to get the current document.
Since I am not into CoffeeScript, I'll provide the following as Blaze/JS code, but I think it should give you the idea:
{{#autoForm type="normal" class="inventoryForm" schema=schema id="insertForm" validation="blur"}}
<!-- your fields -->
{{/autoForm}}
/**
* validates a form against a given schema and returns the
* related document including all form data.
* See: https://github.com/aldeed/meteor-autoform#sticky-validation-errors
**/
export const formIsValid = function formIsValid (formId, schema) {
const { insertDoc } = AutoForm.getFormValues(formId)
// create validation context
const context = schema.newContext()
context.validate(insertDoc)
// get possible validation errors
// and attach them directly to the form
const errors = context.validationErrors()
if (errors && errors.length > 0) {
errors.forEach(err => AutoForm.addStickyValidationError(formId, err.key, err.type, err.value))
return null
} else {
return insertDoc
}
}
Template.yourFormTemplate.events({
'submit #insertForm' (event) {
event.preventDefault() // important to prevent from reloading the page!
// validate against both schemas to raise validation
// errors for both instead of only one of them
const insertDoc = formIsValid('insertForm', PurchaseInventory.schema) && formIsValid('insertForm', Inventory.schema)
// call insert method if both validations passed
Meteor.call('inventory.insert', insertDoc, (err, res) => { ... })
Meteor.call('purchaseInventory.insert', insertDoc, (err, res) => { ... })
}
})
Note that if you need both inserts to succeed on the server side, you should write a third Meteor method that explicitly inserts into both collections in one method call. If you have MongoDB version >= 4, you can combine this with transactions.
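A rough sketch of such a combined method in plain JS; the method name 'inventory.insertWithPurchase' and the way the submitted document is split across the two schemas are assumptions based on the fields shown above:
Meteor.methods({
  'inventory.insertWithPurchase'(doc) {
    // Insert the part first, then the purchase that points back at it
    const partId = Inventory.insert({
      name: doc.name,
      manufacturer: doc.manufacturer,
      description: doc.description,
    });
    PurchaseInventory.insert({
      partId: partId, // the link field declared in PurchaseInventory.addLinks
      serialNum: doc.serialNum,
      costNum: doc.costNum,
    });
    return partId;
  },
});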
I'm new to Meteor. I have two collections named Employee and Visitors, and two templates, Home and Print. I can get the id of the record from the button like this:
printVisitor:function(){
return Visitor.findOne({_id:Session.get('UpdateVisitorId')});
}
Now when I click the button it redirects to another page, and I need to print those values (say, name and phone number) using the particular record id that I get from the above code.
My route code looks like this:
Router.route('/print/:_id', {
name: 'print',
controller: 'PrintController',
action: 'action',
where: 'client'
});
And my Print HTML is this:
<template name="Print">
This is: {{VisitorName}}
Visiting: {{EmployeeName}}
Phone: {{PhoneNumber}}
</template>
How can I publish and subscribe, and then print those particular values for that id?
Publish and subscribe can take arguments. You also want to use this.params to get the URL parameters. This is the general pattern
Router.route('/foo/:_id', {
name: 'foo',
waitOn: function(){
// this grabs the ID from the URL
var someId = this.params._id;
// this subscribes to myPublication while sending in the ID as a parameter to the publication function
return [
Meteor.subscribe('myPublication', someId)
]
}
});
/server/publications.js
Meteor.publish('myPublication', function(id){
  check(id, String);
  // A publication must return a cursor (or array of cursors), not a single document
  return MyCollection.find({_id: id});
});
You'll now have access to the data you subscribed to on this route.
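On the client, the Print template can then read the subscribed document with a helper. A minimal sketch, assuming iron:router as above and the Visitor collection handle from your printVisitor helper ('name' is an assumed field name):
Template.Print.helpers({
  VisitorName: function () {
    // _id comes from the /print/:_id URL segment
    var visitor = Visitor.findOne({ _id: Router.current().params._id });
    return visitor && visitor.name;
  }
});
The EmployeeName and PhoneNumber helpers would follow the same pattern, reading whatever fields your documents actually contain.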
I am making an analytics system; the API call provides a unique user ID, but it's not sequential and is too sparse.
I need to give each unique user ID an auto-increment id to mark an analytics datapoint in a bitarray/bitset. So the first user encountered would correspond to the first bit of the bitarray, the second user to the second bit, etc.
So is there a solid and fast way to generate incremental unique user IDs in MongoDB?
As the selected answer says, you can use findAndModify to generate sequential IDs.
But I strongly disagree with the opinion that you should not do that. It all depends on your business needs. A 12-byte ID may be very resource-consuming and cause significant scalability issues in the future.
I have a detailed answer here.
You can, but you should not
https://web.archive.org/web/20151009224806/http://docs.mongodb.org/manual/tutorial/create-an-auto-incrementing-field/
Each object in Mongo already has an id, and they are sortable in insertion order. What is wrong with getting a collection of user objects, iterating over it, and using the index as the incremented ID? Or go for some kind of map-reduce job entirely.
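For reference, the counter pattern described in the linked tutorial looks roughly like this (mongo shell style; the counters collection and seq field follow the tutorial's conventions):
// One counter document per sequence
db.counters.insert({ _id: "userid", seq: 0 });

function getNextSequence(name) {
  // Atomically increment the counter and return the new value
  var ret = db.counters.findAndModify({
    query: { _id: name },
    update: { $inc: { seq: 1 } },
    new: true
  });
  return ret.seq;
}

db.users.insert({ _id: getNextSequence("userid"), name: "Sarah C." });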
I know this is an old question, but I shall post my answer for posterity...
It depends on the system that you are building and the particular business rules in place.
I am building a moderate to large scale CRM in MongoDb, C# (Backend API), and Angular (Frontend web app) and found ObjectId utterly terrible for use in Angular Routing for selecting particular entities. Same with API Controller routing.
The suggestion above worked perfectly for my project.
db.contacts.insert({
"id":db.contacts.find().Count()+1,
"name":"John Doe",
"emails":[
"john#doe.com",
"john.doe#business.com"
],
"phone":"555111322",
"status":"Active"
});
The reason it is perfect for my case, but not all cases, is that, as the above comment states, if you delete 3 records from the collection, you will get collisions.
My business rules state that, due to our in-house SLAs, we are not allowed to delete correspondence data or client records within the potential lifespan of the application I'm writing, and therefore I simply mark records with an enum "Status" which is either "Active" or "Deleted". You can delete something from the UI, and it will say "Contact has been deleted", but all the application has done is change the status of the contact to "Deleted"; when the app calls the repository for a list of contacts, I filter out deleted records before pushing the data to the client app.
Therefore, db.collection.find().count() + 1 is a perfect solution for me...
It won't work for everyone, but if you will not be deleting data, it works fine.
Edit
latest versions of pymongo:
db.contacts.count() + 1
The first record should be added with "_id" = 1 in your DB:
$database = "demo";
$collections = "democollection";
echo getnextid($database,$collections);
function getnextid($database,$collections){
    $m = new MongoClient();
    $db = $m->selectDB($database);
    // select the collection before querying it
    $collection = $db->selectCollection($collections);
    $cursor = $collection->find()->sort(array("_id" => -1))->limit(1);
    $array = iterator_to_array($cursor);
    foreach($array as $value){
        return $value["_id"] + 1;
    }
}
I had a similar issue: I was interested in generating unique numbers which can be used as identifiers, but don't have to be. I came up with the following solution. First, initialize the collection:
fun create(mongo: MongoTemplate) {
mongo.db.getCollection("sequence")
.insertOne(Document(mapOf("_id" to "globalCounter", "sequenceValue" to 0L)))
}
And then a service that returns unique (and ascending) numbers:
@Service
class IdCounter(val mongoTemplate: MongoTemplate) {
companion object {
const val collection = "sequence"
}
private val idField = "_id"
private val idValue = "globalCounter"
private val sequence = "sequenceValue"
fun nextValue(): Long {
val filter = Document(mapOf(idField to idValue))
val update = Document("\$inc", Document(mapOf(sequence to 1)))
val updated: Document = mongoTemplate.db.getCollection(collection).findOneAndUpdate(filter, update)!!
return updated[sequence] as Long
}
}
I believe this approach doesn't have the weaknesses related to concurrent environments that some of the other solutions may suffer from.
// await collection.insertOne({ autoIncrementId: 1 });
const { value: { autoIncrementId } } = await collection.findOneAndUpdate(
{ autoIncrementId: { $exists: true } },
{
$inc: { autoIncrementId: 1 },
},
);
return collection.insertOne({ id: autoIncrementId, ...data });
I used something like nested queries in MySQL to simulate auto-increment, which worked for me. To get the latest id and add one to it, you can use:
lastContact = db.contacts.find().sort({$natural:-1}).limit(1)[0];
db.contacts.insert({
"id":lastContact ?lastContact ["id"] + 1 : 1,
"name":"John Doe",
"emails": ["john#doe.com", "john.doe#business.com"],
"phone":"555111322",
"status":"Active"
})
It solves the removal issue of Alex's answer. So no duplicate id will appear if any record is removed.
More explanation: I just get the id of the latest inserted document, add one to it, and then set it as the id of the new record. The ternary is for the case where we don't have any records yet, or all of the records have been removed.
This could be another approach:
const mongoose = require("mongoose");
const contractSchema = mongoose.Schema(
{
account: {
type: mongoose.Schema.Types.ObjectId,
required: true,
},
idContract: {
type: Number,
default: 0,
},
},
{ timestamps: true }
);
contractSchema.pre("save", function (next) {
var docs = this;
mongoose
.model("contract", contractSchema)
.countDocuments({ account: docs.account }, function (error, counter) {
if (error) return next(error);
docs.idContract = counter + 1;
next();
});
});
module.exports = mongoose.model("contract", contractSchema);
// First check the table length
const data = await table.find()
if(data.length === 0){
const id = 1
// then post your query along with your id
}
else{
// find last item and then its id
const length = data.length
const lastItem = data[length-1]
const lastItemId = lastItem.id // or { id } = lastItem
const id = lastItemId + 1
// now apply new id to your new item
// even if you delete an item from the middle, this still works
}