Could somebody please confirm the following?
I am using Mirth Connect 3.5.08232.
My Source Connector is a Database Reader.
Say I am using a query that returns multiple rows, and I return the result (via JavaScript) as the documentation suggests, so that Mirth treats each row as a separate message. I also use a couple of Mapper steps as source transformers and save the mapped fields in my channel map (which ends up containing only those fields that I define in the transformers).
In the destination (specifically, in the destination response transformer, or the destination body if it is a JavaScript Writer), how do I access the source fields?
The only way I found, by trial and error, is:
var rawMsg = connectorMessage.getRawData();
var xmlMsg = new XML(rawMsg);
logger.info(xmlMsg.some_field); // ignore the root element of rawMsg
Is this the right way to do it? I thought that maybe the fields that were nicely detected automatically would be put into some kind of map, like the sourceMap, but that doesn't seem to be the case, right?
Thank you
If you are using Mapper steps in your transformer to extract the data and put it into a variable map (like the channel map), then you can use any of the following methods to retrieve it from a subsequent JavaScript context (including a JavaScript Writer and your response transformer):
var value = channelMap.get('key');
var value = $c('key'); // shorthand for the channel map
var value = $('key'); // searches several maps, including the channel map
Look at the Variable Maps section of the User Guide for more information.
So to recap, say you're selecting a column "mycolumn" with a Database Reader. The XML sent to the channel will be something like this:
<result>
<mycolumn>value</mycolumn>
</result>
Then you can choose to extract pieces of that message into specific variables for later use. The transformer allows you to easily drag and drop pieces of the sample inbound message.
Finally, in your JavaScript Writer (or in any subsequent filter, transformer, or response transformer), just drag the value into the field you want, and the corresponding JavaScript code will be inserted automatically.
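For example, if a Mapper step stored the column under the variable name mycolumn, the inserted code is essentially the map lookup described above. A quick sketch (the logging line is just for illustration):
// Look up the mapped value by its variable name
var value = $('mycolumn');
logger.info('mycolumn = ' + value);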
One last note: if you are selecting a lot of columns and don't want to create a Mapper step for each one individually, you can use a JavaScript Step to iterate through the message and extract each column into a separate map variable:
for each (child in msg.children()) {
    // store each column value under its own name in the channel map
    channelMap.put(child.localName(), child.toString());
}
Or, you can just reference the columns directly from within the JavaScript Writer:
var msg = new XML(connectorMessage.getEncodedData());
var column1 = msg.column1.toString();
var column2 = msg.column2.toString();
...
How do I take a list of values, iterate through it to create the needed objects, and then pass that "list" of objects to the API to create multiple rows?
I have been successful in adding a new row with a value using the API example. In that example, two objects are created.
row_a = ss_client.models.Row()
row_b = ss_client.models.Row()
These two objects are passed to the add_rows function. (Forgive me if I use the wrong terms; I'm still new to this.)
response = ss_client.Sheets.add_rows(
    2331373580117892,  # sheet_id
    [row_a, row_b])
I have not been successful in passing an unknown number of objects with something like this:
newRowsToCreate = []

for row in new_rows:
    rowObject = ss.models.Row()
    rowObject.cells.append({
        'column_id': PM_columns['Row ID Master'],
        'value': row
    })
    newRowsToCreate.append(rowObject)

# Add rows to sheet
response = ss.Sheets.add_rows(
    OH_MkrSheetId,  # sheet_id
    newRowsToCreate)
This returns the following error:
{"code": 1062, "errorCode": 1062, "message": "Invalid row location: You must use at least 1 location specifier.",
Thank you for any help.
From the error message, it looks like you're missing the location specification for the new rows.
Each row object that you create needs to have a location value set. For example, if you want your new rows to be added to the bottom of your sheet, then you would add this attribute to your rowObject.
rowObject.toBottom=True
You can read about this location-specific attribute and how it relates to the Python SDK here.
To be 100% precise here I had to set the attribute differently to make it work:
rowObject.to_bottom = True
I've found the name of the property below:
https://smartsheet-platform.github.io/smartsheet-python-sdk/smartsheet.models.html#module-smartsheet.models.row
Yep, the documentation isn't super clear about this other than in the examples. The API uses camelCase in JavaScript, but the same terms are always snake_case in the Python SDK (which is, after all, the Pythonic way to do it!).
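As an illustration of the naming difference, here is roughly what the same add-rows call looks like from the JavaScript (Node) SDK; the columnId value is a placeholder:
// JavaScript SDK: camelCase property names, matching the REST API
var row = {
    toBottom: true,
    cells: [{ columnId: 7036894123487108, value: 'new value' }]
};
smartsheet.sheets.addRows({ sheetId: 2331373580117892, body: [row] });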
I am using a row id to obtain the cells for a single row. However, the response returns the column id but not the title of the column. To make the code readable for others, it would be helpful to also obtain the column title. I was thinking of doing this by using the column id that is obtained in the getRow function, but I am not entirely sure how to catch it. Below is the basic getRow function for reference. I appreciate any assistance; thank you in advance.
smartsheet.sheets.getRow(options)
    .then(function(row) {
        console.log(row);
    })
    .catch(function(error) {
        console.log(error);
    });
My preferred way of addressing this is to dynamically create a column map on my first GET /sheets/{sheetId} request.
Let's say we have a sheet with three columns: Japan, Cat, and Cafe. Here is one way to make a column map.
const columnMap = makeColumnMap(<your sheet data>);

function makeColumnMap(sheetData) {
    // map each column title to its corresponding column id
    const colMap = {};
    sheetData.columns.forEach(column => colMap[column.title] = column.id);
    return colMap;
}
Now you can reference your specific columns like this: columnMap["Japan"], columnMap["Cat"], and columnMap["Cafe"], or you can use dot notation if you prefer.
Basically, what we're doing is creating a dictionary that maps the column titles to the corresponding column ids.
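To tie this back to the original question, you can build the reverse map (id to title) from the same GET sheet response and use it to label each cell returned by getRow. A sketch, assuming mySheetId and the options object from the question are already defined:
// Build an id -> title map from the sheet data, then label each cell in the row
const titleMap = {};
smartsheet.sheets.getSheet({ id: mySheetId })
    .then(function(sheetData) {
        sheetData.columns.forEach(column => titleMap[column.id] = column.title);
        return smartsheet.sheets.getRow(options);
    })
    .then(function(row) {
        row.cells.forEach(cell => {
            console.log(titleMap[cell.columnId] + ': ' + cell.value);
        });
    })
    .catch(function(error) {
        console.log(error);
    });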
Posting this as a separate answer based on your response (and for easier formatting).
I have a couple of specific recommendations that will help you.
Try to consolidate your API calls.
"I then want to use that columnID to call getColumns(columnId) to obtain the title."
This is 'work' that you don't need to do. A single GET /sheets/{sheetId} will include all the data you need in one call. It's just a matter of parsing the JSON response.
Use this as an opportunity to improve your ability to work with JSON.
"I do not know how to catch the columnId once getRow() is called."
The response is a single object with nested arrays and objects. Learning to navigate the JSON in a way that makes sense to you will come in really handy.
I would recommend saving the response from a GET sheet call as its own JSON file. From there, you can bring it into your script and play with your logic to reference the values you want.
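For example, a minimal sketch (the file name sheet_response.json is an assumption; save the GET sheet response there once):
// Load a saved GET /sheets/{sheetId} response and practice navigating it
const fs = require('fs');
const sheetData = JSON.parse(fs.readFileSync('sheet_response.json', 'utf8'));

// e.g. list all column titles, then inspect the first cell of the first row
console.log(sheetData.columns.map(column => column.title));
console.log(sheetData.rows[0].cells[0]);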
How do I read a list of values from a Mirth channel XML's <mapping> element? I can use msg to read one value, but what if there is a list of values? Example:
<patient>
    <name>names</name>
</patient>
If there is one value for names defined, then simply performing <mapping>msg['patient']['name']</mapping> will return the value. But how do I get the values if there is more than one name? How do I iterate over them and display them in the same XML? I am using Mirth for the first time, and any help is appreciated.
I understand your question this way: you mean that if you receive the XML in this fashion,
<patient>
    <name>names</name>
    <name>name1</name>
</patient>
then how do you iterate over and fetch only the 'name' tags' values? If my understanding is correct, place the code below in your source transformer:
var nameLen = msg['name'].length();
for (var i = 0; i < nameLen; i++) {
    // your mapping logic goes here
    logger.debug(msg['name'][i].toString());
}
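If you also need to write the collected names back out as XML (as the question asks), here is a minimal sketch in the same transformer; the <names> wrapper element and the allNames map key are assumptions:
// Collect every <name> value into a new XML element and store it for later use
var namesXml = new XML('<names/>');
for (var i = 0; i < msg['name'].length(); i++) {
    namesXml.appendChild(new XML('<name>' + msg['name'][i].toString() + '</name>'));
}
channelMap.put('allNames', namesXml.toString());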
I have a Spark Streaming application that needs to take these steps:
Take a string and apply some map transformations to it
Map again: if this string (now an array) has a specific value in it, immediately send an email (or do something OUTSIDE the Spark environment)
collect() and save in a specific directory
Apply some other transformation/enrichment
collect() and save in another directory.
As you can see, this implies lazily activated calculations that would perform the OUTSIDE action twice. I am trying to avoid caching, because at some hundreds of lines per second this would kill my server.
I am also trying to maintain the order of operations, though this is not as important. Is there a solution I do not know of?
EDIT: my program as of now:
kafkaStream;
lines = take the value, discard the topic;
lines.foreachRDD {
    splittedRDD = arg.map { split the string };
    assRDD = splittedRDD.map { associate to a table };
    flaggedRDD = assRDD.map { add a boolean parameter under an if condition + send mail };
    externalClass.saveStaticMethod( flaggedRDD.collect() and save in file );
    enrichRDD = flaggedRDD.map { enrich with external data };
    externalClass.saveStaticMethod( enrichRDD.collect() and save in file );
}
I put the saving part after the email so that if something goes wrong with it, at least the mail has been sent.
In the end, these are the approaches I found:
In the DStream transformation before the side-effecting one, make a copy of the DStream: one copy goes on with the transformation, the other gets the .foreachRDD{ outside action }. There is no major downside to this, as it is just one more RDD on a worker node.
Extract the {outside action} from the transformation and keep track of the already-sent mails: filter out an element if its mail has already been sent. This is almost a superfluous operation, as it will filter out all of the RDD elements.
Cache before going on (although I was trying to avoid it, there was not much else to do).
If you are trying not to cache, solution 1 is the way to go.
I would like to make a call into the ServiceNow SOAP web service to start an instance of a specific workflow.
I can find the WSDL for functions like incident.do, but I seem to be missing the step needed to find the proper table/endpoint for starting workflows.
If you want to start a workflow via SOAP, I think the only way to do this is to create a Scripted Web Service or a custom processor.
In there, you will have to define a script which starts your workflow:
var w = new Workflow();
var context = w.startFlow(id, current, current.operation(), getVars());
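As a rough sketch of how that script could sit inside a Scripted Web Service (the table name, the workflow name, and the u_target_sys_id / u_context_id parameters are all assumptions):
// Scripted Web Service script: look up the record, then start the named workflow.
// 'request' and 'response' hold the service's input and output parameters.
var gr = new GlideRecord('incident');
if (gr.get(request.u_target_sys_id)) {
    var w = new Workflow();
    var wfId = w.getWorkflowFromName('My Workflow'); // resolve the workflow's sys_id by name
    var context = w.startFlow(wfId, gr, gr.operation(), {});
    response.u_context_id = context.sys_id;
}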
In this wiki article you can find API Methods for Workflows.
The tricky bit is getting the variables into the Workflow.
While this sounds easy, in fact it isn't.
If your workflow runs on the table sc_req_item (which is likely if you are dealing with Request Fulfillment), you first need to set the property (sys_properties) glide.workflow.enable_input_variables to true; otherwise, you will not be able to add normal input variables to your workflow.
Then, add the input variables to the workflow. Note that you have some nifty datatypes available there, for example the "Data Structure" type.
All input variables are treated like custom columns (in fact, they are columns of a workflow-specific table). That is why their names start with u_.
Let's say you define an input variable called u_dynamic_vars (datatype "Data Structure").
Here is how to call the workflow:
var wf_name = "Name of your workflow";

// Instantiate the JSON machinery
var parser = new JSON();

// Declare an instance of workflow.js
var wf = new Workflow();

// Get the workflow id
var wfId = wf.getWorkflowFromName(wf_name);

// Object containing name/value pairs mapping to the inputs expected by the workflow
var vars = {};

// Prepare the JSON data structure
var obj = {
    "name": "George",
    "lastname": "Washington"
};

// Encode the data
vars.u_dynamic_vars = parser.encode(obj);
vars.u_new_email = "inject#new.com";

// Get a specific RITM
var gr = new GlideRecord("sc_req_item");
gr.get("18d8e9740f4013002f504c6be1050e48");
gs.print(gr.number);

// Start the workflow with a "current" record
wf.startFlow(wfId, gr, "update", vars);

// Alternatively, you may pass null; then current is null:
// wf.startFlow(wfId, null, "update", vars);
In the workflow, you then unpack the data like so:
// Let's unpack it. For some reason, instantiating the parser won't work here...
var payload = JSON.parse(workflow.variables.u_dynamic_vars);
gs.print("payload.name: " + payload.name);
Also note that a workflow does not necessarily need to run on a table.
To achieve this, choose "global" as the table name when defining the workflow.