Read function is not called in the ExampleNamespace code - OPC UA

I ran the ExampleNamespace sample. I can browse the nodes, and all of them are returned correctly.
I can run the client read example fine.
But when I run the client to read the value of HelloWorld.Dynamic.Double, it times out, and the overridden read function in ExampleNamespace is never called.
// synchronous read request via VariableNode
NodeId nodeId = new NodeId(2, "HelloWorld.Dynamic.Double");
VariableNode node = client.getAddressSpace().createVariableNode(nodeId);
CompletableFuture<DataValue> datavalue = client.readValue(1.0, TimestampsToReturn.Source, nodeId);
DataValue value = datavalue.get();
Did I forget to do anything?

On the second line of your code, you create an instance of VariableNode for the node variable through the client object, which looks correct.
However, you never use that node variable afterwards; the next line still calls readValue through the same client object. I recommend replacing the client with the node variable, as follows:
CompletableFuture<DataValue> datavalue = node.readValue(1.0, TimestampsToReturn.Source, nodeId);
A simpler way of performing the same read might be as follows:
// synchronous read request via VariableNode object (node)
VariableNode node = client.getAddressSpace().createVariableNode(nodeId);
DataValue datavalue = node.readValue().get();
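If you prefer not to block on get(), the same read can stay asynchronous. Here is a minimal sketch, assuming the CompletableFuture-based VariableNode API used above:
// asynchronous read via the VariableNode, handled in a callback
VariableNode node = client.getAddressSpace().createVariableNode(nodeId);
node.readValue().thenAccept(dataValue ->
    System.out.println("value=" + dataValue.getValue()));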

Related

Connect to a node with a String identifier

I'm trying to write a generic OPC UA connector with Eclipse Milo.
Reading data from nodes already works fine when I'm using numeric node IDs, such as ns=0;i=2258. In Milo I can simply construct the NodeId like this, for example:
NodeId nodeIdentifier = new NodeId(Unsigned.ushort(nameSpaceID), Unsigned.uint(nodeID));
and it works fine.
But when I try to connect to a production node that only has a string identifier,
the process fails with a StatusCode{name=Bad_NodeIdUnknown, value=0x80340000, quality=bad} exception.
I create the NodeId like this: NodeId nodeIdentifier = NodeId.parse(nodeIDString);
and the parsed value looks like this:
ns=1;s=t|023_Messwert
First things first: you can't just decide to use a string-based NodeId because you feel like it. If the server exposes a Node with an integer-based NodeId, then that is what you have to use, as is the case with the CurrentTime Node being identified by ns=0;i=2258.
Parsing a string-based NodeId via NodeId.parse will work fine as long as it’s in the right format. What value are you trying to parse?
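For reference, the NodeId from the question can be built either from its parts or by parsing the full string form; both should yield the same NodeId, assuming the server really exposes it in namespace 1 (a sketch against Milo's NodeId API):
// build from namespace index and string identifier
NodeId byParts = new NodeId(1, "t|023_Messwert");
// or parse the canonical string form
NodeId parsed = NodeId.parse("ns=1;s=t|023_Messwert");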

Mirth Connect Database Reader automatic column mapping

Could somebody please confirm the following?
I am using Mirth Connect 3.5.08232.
My Source Connector is a Database Reader.
Say I am using a query that returns multiple rows, and I return the result via JavaScript, as the documentation suggests, so that Mirth treats each row as a separate message. I also use a couple of Mapper steps as source transformers and save the mapped fields in my channel map (which ends up containing only those fields that I define in the transformers).
In the destination, specifically in the destination response transformer (or in the destination body, if it is a JavaScript Writer), how do I access the source fields?
The only way I found, by trial and error, is:
var rawMsg = connectorMessage.getRawData();
var xmlMsg = new XML(rawMsg);
logger.info(xmlMsg.some_field); // ignore the root element of rawMsg
Is this the right way to do it? I thought that the fields that were so nicely auto-detected would be put into some kind of map, like the sourceMap, but that doesn't seem to be the case, right?
Thank you
If you are using Mapper steps in your transformer to extract the data and put it into a variable map (like the channel map), then you can use any of the following methods to retrieve it from a subsequent JavaScript context (including a JavaScript Writer and your response transformer):
var value = channelMap.get('key');
var value = $c('key');
var value = $('key');
Look at the Variable Maps section of the User Guide for more information.
So to recap, say you're selecting a column "mycolumn" with a Database Reader. The XML sent to the channel will be something like this:
<result>
    <mycolumn>value</mycolumn>
</result>
Then you can choose to extract pieces of that message into specific variables for later use; the transformer lets you simply drag and drop pieces of the sample inbound message.
Finally, in your JavaScript Writer (or in any subsequent filter, transformer, or response transformer), drag the variable into the field you want, and the corresponding JavaScript code is inserted automatically.
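For a channel map variable named mycolumn, the inserted reference would look something like this (the exact shorthand depends on which map the variable lives in):
var value = $('mycolumn');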
One last note: if you are selecting a lot of columns and don't want to create a Mapper step for each one individually, you can use a JavaScript step to iterate through the message and extract each column into a separate map variable:
// E4X iteration (supported by Mirth's Rhino engine) over every column element
for each (var child in msg.children()) {
    channelMap.put(child.localName(), child.toString());
}
Or, you can just reference the columns directly from within the JavaScript Writer:
// re-parse the encoded message and read the columns directly
var msg = new XML(connectorMessage.getEncodedData());
var column1 = msg.column1.toString();
var column2 = msg.column2.toString();
...

Can I start a Service Now workflow via an external SOAP call?

I would like to make a call into the ServiceNow SOAP web service to start an instance of a specific workflow.
I can find the WSDL for functions like incident.do, but I seem to be missing the step needed to find the proper table/endpoint for starting workflows.
If you want to start a workflow via SOAP, I think the only way to do this is to create a Scripted Web Service or a custom processor.
In there you will have to define a script which starts your workflow:
// start the workflow against the current record, forwarding its variables
var w = new Workflow();
var context = w.startFlow(id, current, current.operation(), getVars());
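A minimal scripted web service script might look like the sketch below, assuming the request/response objects that scripted web service scripts expose for their input and output parameters. The parameter names (workflow_name, record_sys_id, status) are assumptions for illustration; define matching parameters on the scripted web service record.
// Scripted Web Service script body: start a workflow named in the SOAP request
var wf = new Workflow();
var wfId = wf.getWorkflowFromName(request.workflow_name); // assumed input parameter

// optionally run the workflow against a record passed in by sys_id
var current = null;
if (request.record_sys_id) {
    var gr = new GlideRecord("sc_req_item");
    if (gr.get(request.record_sys_id))
        current = gr;
}

wf.startFlow(wfId, current, "update", {});
response.status = "started"; // assumed output parameter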
In this wiki article you can find API Methods for Workflows.
The tricky bit is getting the variables into the workflow. While this sounds easy, in fact it isn't.
If your workflow runs on the table sc_req_item (which is likely if you are dealing with request fulfillment), you first need to set the property (sys_properties) glide.workflow.enable_input_variables to true; otherwise you will not be able to add normal input variables to your workflow.
Then add the input variables to the workflow. Note that you have some nifty datatypes available there, for example the "Data Structure" type.
All input variables are treated like custom columns (in fact, they are columns of a workflow-specific table); that is why their names start with u_.
Let's say you define an input variable called u_dynamic_vars (datatype "Data Structure").
Here is how to call the workflow:
var wf_name = "Name of your workflow";

// Instantiate the JSON machinery
var parser = new JSON();

// Declare an instance of workflow.js
var wf = new Workflow();

// Get the workflow id
var wfId = wf.getWorkflowFromName(wf_name);

// Prepare an object containing name/value pairs that map to the
// input variables expected by the workflow
var vars = {};

// Prepare the JSON data structure
var obj = {"name": "George", "lastname": "Washington"};

// Encode the data
vars.u_dynamic_vars = parser.encode(obj);
vars.u_new_email = "inject@new.com";

// Get a specific RITM
var gr = new GlideRecord("sc_req_item");
gr.get("18d8e9740f4013002f504c6be1050e48");
gs.print(gr.number);

// Start the workflow with a "current" record
wf.startFlow(wfId, gr, "update", vars);

// You may also pass null; then current is null:
// wf.startFlow(wfId, null, "update", vars);
In the workflow, you then unpack the data like so:
// Unpack the data. (For some reason, instantiating a JSON parser does not
// work here, so use the static JSON.parse instead.)
var payload = JSON.parse(workflow.variables.u_dynamic_vars);
gs.print("payload.name: " + payload.name);
Also note that a workflow does not necessarily need to run on a table; to achieve this, choose "global" as the table name when defining the workflow.

Enterprise Autoscale Application Block (WASABi) <scale> up by a variable amount

I was looking at the WASABi documentation and I am confused about a particular aspect of this library.
I need to create a custom reactive rule. Say this rule runs every minute, and its "scale" action should scale up by "x" amount. It seems as though I can set the "scale" action to a particular number (say 1 or 2), but not pass in a value computed by, say, my custom operand.
I understand that I can create a custom operand to check my condition, but I want the custom operand to compute how much the "scale" action should scale the target worker role by, and then pass this value to the "scale" action.
Is there some way to define these rules outside the XML to achieve this?
Any help would be greatly appreciated!
Actions can increment or decrement the count by a number or by a proportion, so if you want a dynamic increment or decrement, I think you will need to create a custom action. You should be able to pull the info you need out of the IRuleEvaluationContext.
To change the instance count you will need to change the deployment configuration. See https://social.msdn.microsoft.com/forums/azure/en-US/dbbf14d1-fd40-4aa3-8c65-a2424702816b/few-question-regarding-changing-instance-count-programmatically?forum=windowsazuredevelopment&prof=required for some discussion.
You should be able to do that using the Azure Management Libraries for .NET and the ComputeManagementClient. Something like:
using (ComputeManagementClient client = new ComputeManagementClient(credentials))
{
    var response = await client.Deployments.GetBySlotAsync(serviceName, slot);
    XDocument config = XDocument.Parse(response.Configuration);

    // Change the config

    StringBuilder builder = new StringBuilder();
    using (TextWriter writer = new StringWriter(builder))
    {
        config.Save(writer);
    }
    string newConfig = builder.ToString();

    await client.Deployments.BeginChangingConfigurationBySlotAsync(
        serviceName, slot, new DeploymentChangeConfigurationParameters(newConfig));
}
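As a sketch of what the "Change the config" step might look like: the service configuration XML keeps the instance count in the Instances element of each Role, so it can be adjusted with LINQ to XML (requires System.Linq and System.Xml.Linq). The role name "MyWorkerRole" and the scaleBy variable are assumptions for illustration:
// bump the instance count for one role inside the parsed configuration
XNamespace ns = "http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration";
XElement instances = config.Root
    .Elements(ns + "Role")
    .Where(r => (string)r.Attribute("name") == "MyWorkerRole") // assumed role name
    .Select(r => r.Element(ns + "Instances"))
    .Single();
int currentCount = (int)instances.Attribute("count");
instances.SetAttributeValue("count", currentCount + scaleBy); // scaleBy computed elsewhere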

Squeryl: Run query explicitly

When I create a query in Squeryl, it returns a Query[T] object. The query has not yet been executed at that point; it runs when I iterate over the Query object (Query[T] extends Iterable[T]).
The execution of a query has to be wrapped in either a transaction{} or an inTransaction{} block.
I'm only talking about SELECT queries here, for which transactions shouldn't be necessary, but the Squeryl framework requires them.
I'd like to create a query in the model of my application and pass it directly to the view where a view helper in the template iterates over it and presents the data.
This is only possible when the transaction{} block is placed in the controller (the controller includes the call to the template, so the template which does the iteration is also inside it). It's not possible to put the transaction{} block in the model, because the model doesn't actually execute the query.
But in my understanding the transaction has nothing to do with the controller. It's the model's decision which database framework to use, how to use it, and where to use transactions. So I want the transaction{} block to be in the model.
I know that instead of returning the Query[T] instance I can call Iterable[T].toList on it and return the resulting list. Then the whole query is executed in the model and everything is fine. But I don't like this approach, because all the data requested from the database has to be cached in that list. I'd prefer a way where the data is passed directly to the view; I like the MySQL feature of streaming the result set when it's large.
Is there any possibility? Maybe something like a function Query[T].executeNow() which sends the request to the database and is able to close the transaction, but still uses the MySQL streaming feature and receives the rest of the (selected and therefore fixed) result set when it's accessed? Since the result set is fixed at the moment of the query, closing the transaction shouldn't be a problem.
The general problem that I see here is that you try to combine the following two ideas:
lazy computation of data; here: database results
hiding the need for a post-processing action that must be triggered when the computation is done; here: hiding from your controller or view that the database session must be closed
Since your computation is lazy and since you are not obliged to perform it to the very end (here: to iterate over the whole result set), there is no obvious hook that could trigger the post-processing step.
Your suggestion of invoking Query[T].toList does not exhibit this problem, since the computation is performed to the very end, and requesting the last element of the result set can be used as a trigger for closing the session.
That said, the best I could come up with is the following, which is an adaptation of the code inside org.squeryl.dsl.QueryDsl._using:
class IterableQuery[T](val q: Query[T]) extends Iterable[T] {
  private var lifeCycleState: Int = 0
  private var session: Session = null
  private var prevSession: Option[Session] = None

  def start() {
    assert(lifeCycleState == 0, "Queries may not be restarted.")
    lifeCycleState = 1
    /* Create a new session for this query. */
    session = SessionFactory.newSession
    /* Store and unbind a possibly existing session. Assign to the field
     * (not a local val), so that stop() can re-bind it later. */
    prevSession = Session.currentSessionOption
    if (prevSession != None) prevSession.get.unbindFromCurrentThread
    /* Bind the newly created session. */
    session.bindToCurrentThread
  }

  def iterator = {
    assert(lifeCycleState == 1, "Query is not active.")
    q.toStream.iterator
  }

  def stop() {
    assert(lifeCycleState == 1, "Query is not active.")
    lifeCycleState = 2
    /* Unbind the session and close it. */
    session.unbindFromCurrentThread
    session.close
    /* Re-bind the previous session, if it existed. */
    if (prevSession != None) prevSession.get.bindToCurrentThread
  }
}
Clients can use the query wrapper as follows:
var manualIt = new IterableQuery(booksQuery)
manualIt.start()
manualIt.foreach(println)
manualIt.stop()
// manualIt.foreach(println) /* Fails, as expected */
manualIt = new IterableQuery(booksQuery) /* Queries can be reused */
manualIt.start()
manualIt.foreach(b => println("Book: " + b))
manualIt.stop()
The invocation of manualIt.start() could already be done when the object is created, i.e., inside the constructor of IterableQuery, or before the object is passed to the controller.
However, working with resources (files, database connections, etc.) in such a way is very fragile, because the post-processing is not triggered in case of exceptions. If you look at the implementation of org.squeryl.dsl.QueryDsl._using you will see a couple of try ... finally blocks that are missing from IterableQuery.
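If the consumer can run inside a block, a loan-pattern wrapper restores those guarantees. Here is a minimal sketch using the same Session/SessionFactory calls as above; the helper name withQuerySession is an assumption:
import org.squeryl.{Query, Session, SessionFactory}

def withQuerySession[T, R](q: Query[T])(f: Iterator[T] => R): R = {
  /* Same session bookkeeping as IterableQuery, but protected by try/finally. */
  val session = SessionFactory.newSession
  val prev = Session.currentSessionOption
  prev.foreach(_.unbindFromCurrentThread)
  session.bindToCurrentThread
  try {
    f(q.toStream.iterator)
  } finally {
    session.unbindFromCurrentThread
    session.close
    prev.foreach(_.bindToCurrentThread)
  }
}

// Usage: cleanup happens even if the body throws.
// withQuerySession(booksQuery) { it => it.foreach(println) }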