How can I inject agents into a Source block from an uploaded database table? - AnyLogic

I am trying to inject agents from a database into a specific Source block. The database consists of two columns, "OrderType" and "OrderAmount". I want to inject "OrderAmount" agents of the corresponding "OrderType" into this Source, while retaining the differentiation (i.e. by storing an ID parameter for each entry/agent in the agent type used by the Source block).
I have saved the entries from the database in collections and constructed a table like this (both collections hold values of type double):
double[][] ArrayCustomerOrders = new double[coll_CustomerOrderType.size()][2];
for (int i = 0; i < coll_CustomerOrderType.size(); i++) {
    ArrayCustomerOrders[i][0] = coll_CustomerOrderType.get(i);   // order type
    ArrayCustomerOrders[i][1] = coll_CustomerOrderAmount.get(i); // order amount
}
I tried playing around with calls to the Source block's inject() function in the same event in which I construct the order table, but was unable to pass it suitable arguments.
Does anyone have a suggestion on how to go about this?

In your case, you can simply replace the Source block with an Enter block (named "myEnterBlock"):
Create an empty agent population pop_Orders with an agent type that has two parameters, p_Type and p_Amount.
In your for-loop code, when you have the current order type and amount, create such an agent and push it into the Enter block directly:
myEnterBlock.take(add_pop_Orders(currentType, currentAmount));
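A minimal sketch of that loop, assuming the collection names from the question and that add_pop_Orders(...) is the add-function AnyLogic generates for the pop_Orders population (parameter order follows the agent type's declaration order):
// inside the event that builds the order table
for (int i = 0; i < coll_CustomerOrderType.size(); i++) {
    double currentType = coll_CustomerOrderType.get(i);
    double currentAmount = coll_CustomerOrderAmount.get(i);
    // creates an agent with p_Type/p_Amount set and hands it to the Enter block
    myEnterBlock.take(add_pop_Orders(currentType, currentAmount));
}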

Related

AnyLogic select block on 'true' returns same agent

I am simulating last-mile delivery and I want to iteratively evaluate the CO2 emissions between stops. The numbers of the first route evaluation add up, i.e. the model correctly counts/removes each stop embedded within the route. The problem is that when the first route is finished, the model should consider the next route, embedded in a new agent of type 'Order', in 'toConsumer', but it does not. From what I can see, the Order agent is not updated after the condition in the select block has been met. I am not sure, however, why it does this. When the condition is satisfied, the agent does continue to the sink. Does anybody know how I can ensure this updating of the agent?
The AnyLogic model looks as follows: (screenshot omitted)
In the Source block I create agents following the optimization results via:
int r = 0;
agent.routeVeh = (int) parVehicle.get(r);
agent.route = (List) parRoute.get(r);
agent.routeDep = (int) parDepot.get(r);
r++;
And in the select block I have: (screenshot omitted)
The 'toConsumer' block: (screenshot omitted)
If you are doing this operation in the Main agent, you need to define routeCount and consumerCount inside your Order agent (or, if you have a Vehicle agent type defined, inside that), because they are tracked per order. Then, in routeFinished, you need to update the variable as agent.consumerCount++;
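A minimal sketch, assuming the names used above (exactly where the increment lives, e.g. routeFinished, depends on your model):
// inside the Order agent type: one counter pair per order
int routeCount = 0;
int consumerCount = 0;
// in routeFinished (or the select block's action), update the counter
// on the agent passing through, not on Main:
agent.consumerCount++;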

Mirth Connect Database Reader automatic column mapping

Please could somebody confirm the following:
I am using Mirth Connect 3.5.08232.
My Source Connector is a Database Reader.
Say I am using a query that returns multiple rows, and I return the result (via JavaScript) as the documentation suggests, so that Mirth treats each row as a separate message. I also use a couple of Mapper steps as source transformers and save the mapped fields in my channel map (which ends up containing only those fields that I define in the transformers).
In the destination, and specifically, in destination response transformer (or destination body, if it is a JavaScript writer), how do I access the source fields?
The only way I found, by trial and error, is:
var rawMsg = connectorMessage.getRawData();
var xmlMsg = new XML(rawMsg);
logger.info(xmlMsg.some_field); // ignore the root element of rawMsg
Is this the right way to do this? I thought that maybe the fields that were so nicely auto-detected would be put into some kind of map, like sourceMap, but that doesn't seem to be the case, right?
Thank you
If you are using Mapper steps in your transformer to extract the data and put it into a variable map (like the channel map), then you can use any of the following methods to retrieve it from a subsequent JavaScript context (including a JavaScript Writer, and your response transformer):
var value = channelMap.get('key');
var value = $c('key');
var value = $('key');
Look at the Variable Maps section of the User Guide for more information.
So to recap, say you're selecting a column "mycolumn" with a Database Reader. The XML sent to the channel will be something like this:
<result>
    <mycolumn>value</mycolumn>
</result>
Then you can choose to extract pieces of that message into specific variables for later use. The transformer allows you to simply drag and drop pieces of the sample inbound message.
Finally, in your JavaScript Writer (or in any subsequent filter, transformer, or response transformer), just drag the value into the field you want, and the corresponding JavaScript code will be inserted automatically.
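The drag-and-drop screenshots are not reproduced here, but the inserted code is just one of the retrieval calls shown above; for the "mycolumn" example it would look something like:
var mycolumnValue = $('mycolumn');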
One last note: if you are selecting a lot of variables and don't want to make Mapper steps for each one individually, you can use a JavaScript step to iterate through the message and extract each column into a separate map variable:
for each (var child in msg.children()) {
    channelMap.put(child.localName(), child.toString());
}
Or, you can just reference the columns directly from within the JavaScript Writer:
var msg = new XML(connectorMessage.getEncodedData());
var column1 = msg.column1.toString();
var column2 = msg.column2.toString();
...

Can I start a ServiceNow workflow via an external SOAP call?

I would like to make a call into the ServiceNow SOAP web service to start an instance of a specific workflow.
I can find the WSDL for functions like incident.do but seem to be missing the step needed to find the proper table/endpoint for workflows to start.
If you want to start a workflow via SOAP, I think the only way to do this is to create a Scripted Web Service or a Custom Processor.
In there you will have to define a script which starts your Workflow.
var w = new Workflow();
var context = w.startFlow(id, current, current.operation(), getVars());
The ServiceNow wiki has an article listing the API methods for workflows.
The tricky bit is getting the variables into the Workflow.
While this sounds easy, in fact it isn't.
If your workflow runs on the table sc_req_item (which is likely if you are dealing with Request Fulfillment), you first need to set the property (in sys_properties) glide.workflow.enable_input_variables to true, because otherwise you will not be able to add normal input variables to your workflow.
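If you prefer to create that property from a script instead of through the UI (a one-time setup), a minimal sketch using a plain GlideRecord insert:
var p = new GlideRecord('sys_properties');
p.initialize();
p.name = 'glide.workflow.enable_input_variables';
p.value = 'true';
p.insert();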
Then, add the input variables to the workflow. Note that you have some nifty datatypes available there, for example the "Data Structure" type.
All input variables are treated like custom columns (in fact, they are columns of a workflow-specific table). That is why their names start with u_.
Let's say you define an input variable called u_dynamic_vars (datatype "Data Structure").
Here is how to call the workflow:
var wf_name = "Name of your workflow";
// Instantiate the JSON machinery
var parser = new JSON();
// Declare an instance of workflow.js
var wf = new Workflow();
// Get the workflow id
var wfId = wf.getWorkflowFromName(wf_name);
// Object containing name/value pairs mapping to the inputs expected by the workflow
var vars = {};
// Prepare the JSON data structure
var obj = {"name": "George", "lastname": "Washington"};
// Encode the data
vars.u_dynamic_vars = parser.encode(obj);
vars.u_new_email = "inject@new.com";
// Get a specific RITM
var gr = new GlideRecord("sc_req_item");
gr.get("18d8e9740f4013002f504c6be1050e48");
gs.print(gr.number);
// Start the workflow with a "current" record
wf.startFlow(wfId, gr, "update", vars);
// You may also pass null instead of a record; "current" is then null inside the workflow:
wf.startFlow(wfId, null, "update", vars);
In the workflow, you then unpack the data like so:
// Let's unpack it. For some reason, instantiating the parser won't work here...
var payload = JSON.parse(workflow.variables.u_dynamic_vars);
gs.print("payload.name: " + payload.name);
Also note that a workflow does not necessarily need to run on a table.
To achieve this, choose "global" as the table name when defining the workflow.

Using joblets in Talend with tMemorizeRows and tJavaFlex

I am trying to create some joblets in Talend that will speed up some processes.
I have an input from an MSSQLInput; the results are then sorted and filtered a little. Then I have a tMemorizeRows and a tJavaFlex; the purpose of this is to memorize the rows in a column to perform a count. The count is based on a customer ID: once the ID changes, the count resets to 1 and the process begins again, continuing to the end. I have refactored this as a joblet, but it does not work; the error is:
ID_tMemorizeRows_1 cannot be resolved to a variable
I have a tJavaFlex which starts with
int counte = 1;
The main code is:
if (ID_tMemorizeRows_1[0].equals(ID_tMemorizeRows_1[1])) {
    counte = counte + 1;
} else {
    counte = 1;
}
context.Enqnum = counte;
The Enqnum context variable is created correctly and fed into a tMap component.
Does anyone know why this is happening? One person told me that when you move something into a joblet it gets a new/different name, so it has to be referenced specifically in the Java. If this is the case, how do I find out the name?
Thank you
Rich
I do have a resolution. (I tried to add images; however, my reputation is not high enough.)
When using joblets we know that Talend essentially recycles the code used in the joblet by inserting it into the code for the main job.
This is the joblet I have created. I know it works because I refactored it into a joblet instead of building it from scratch. What it does is simply memorise row 0 and row 1 of an ordered data set; the Java performs a count and the tMap appends the result to the job (as mentioned above).
When the job is run on its own, it runs fine. But problems occur when I want to reuse the same joblet in another part of the job. What Talend does is assign names within the source code to each component, depending on the name of the joblet.
For example, if the Joblet was called ThisJob, then tMemorizeRows_1 would be called ThisJob_1_tMemorizeRows_1.
The row within the component (in this example ReferenceID) would be renamed as:
ReferenceID_ThisJob_1_tMemorizeRows_1.
But when you add a second instance of the joblet to your job, it gets a new name, e.g. ThisJob_2. The number in the name depends on how much you have altered your job before adding the second joblet: if you add it immediately, it will be called ThisJob_2, but if you have added five other components first, it is likely to be called ThisJob_6, etc. (I'm not 100% sure how Talend renames components.)
When you add a joblet, you can see its instance name on the joblet component; this then reverts back to the original joblet name when you create any links/joins to other components.
It's also important to note that, within the generated code, each component is assigned to a variable called currentComponent.
Resolution
What I did was use Java code to split that name, as shown below. This way I can get the current name of the joblet instance and use it in my Java.
String string = currentComponent;    // e.g. "ThisJob_2_tJavaFlex_1"
String[] parts = string.split("_");
String part1 = parts[0];             // joblet name, e.g. "ThisJob"
String part2 = parts[1];             // instance number, e.g. "2"
String joblet = part1 + "_" + part2; // "ThisJob_2"
String newrow = "ReferenceID_" + joblet + "_tMemorizeRows_1";
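One caveat: split("_") assumes the joblet name itself contains no underscores. If it might, you could instead strip the trailing component name and index from currentComponent (a sketch, under the same naming assumptions as above):
// currentComponent looks like "<jobletName>_<instance>_<componentName>_<index>"
String s = currentComponent;
int cut = s.lastIndexOf('_');            // drop the trailing index, e.g. "_1"
cut = s.lastIndexOf('_', cut - 1);       // drop the component name, e.g. "_tJavaFlex"
String jobletName = s.substring(0, cut); // e.g. "This_Job_2"
String newrow2 = "ReferenceID_" + jobletName + "_tMemorizeRows_1";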
I hope this makes sense.
Thanks

MATLAB - using a function multiple times in the same workspace, to add values and fields to a structure

I have a structure such as:
specimen.trial1 = 1
I now want to add another trial to the specimen, so that
specimen.trial1 = 1
specimen.trial2 = 2
I can do this without a problem within the workspace and command window. But if I'm using a function to calculate the numbers for each trial (with dynamic field names), the new field and value erase the previous one. E.g.:
function [specimen] = dummy(trial,value)
specimen.(trial) = value
end
run the function:
[specimen] = dummy('trial1',1)
then run the function again with different inputs, but keeping the structure intact in the workspace
[specimen] = dummy('trial2',2)
Instead of getting a structure with two fields, I get just one, with trial2 being the only field. Does that make any sense? What I would like is to use the outputs of a function to progressively add to a structure.
Thank you,
Chris
Yes, it makes sense, because you're creating a new struct specimen within your function.
Solution: pass the previous specimen to the function as well:
function [specimen] = dummy(specimen,trial,value)
specimen.(trial) = value
end
and call:
[specimen] = dummy(specimen,'trial1',1)
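With that version, each call extends the struct instead of replacing it; a short usage sketch, starting from an empty struct:
specimen = struct();                     % start empty
specimen = dummy(specimen, 'trial1', 1); % adds field trial1
specimen = dummy(specimen, 'trial2', 2); % adds field trial2; trial1 is kept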
Or, alternatively, leave out the dynamic assignment altogether and use the following:
function [output] = dummy(value)
output = value
end
and call:
[specimen.trial1] = dummy(1)
which really depends on what you actually want to do. But passing a name to a function which uses this name to define a struct is a little pointless unless you "use" that name otherwise. Also, if you want input-dependent dynamic field names, you'd go with the first alternative.