node-opcua / QtOpcUa - Method Calls

I have a Node OPC UA server which I connect to with a Qt application using the QtOpcUa client library.
On my server I define a method that is basically a crude historic-access request, since HDA support is not yet available: it takes a start_date and an end_date, queries a database for the relevant values, and returns them in an array.
It looks a bit like this:
const deviceTrends = namespace.addObject({
    organizedBy: deviceObject,
    browseName: strings.TREND_NODE
});

const method = namespace.addMethod(deviceTrends, {
    nodeId: strings.NSI + part.name + "-Trend",
    browseName: part.name + "-Trend",
    inputArguments: [
        {
            name: "start_date",
            description: { text: "Trend Start Date" },
            dataType: opcua.DataType.DateTime
        }, {
            name: "end_date",
            description: { text: "Trend End Date" },
            dataType: opcua.DataType.DateTime
        }
    ],
    outputArguments: [{
        name: "Trend",
        description: { text: "Trend Data from start_date to end_date" },
        dataType: opcua.DataType.String,
        valueRank: 1
    }]
});
method.bindMethod(function (inputArguments, context, callback) {
    console.log("called");
    const start = inputArguments[0].value;
    const end = inputArguments[1].value;
    console.log("Start: ", start);
    console.log("End: ", end);
    let sql = `SELECT Date date,
                      Name name,
                      Value value
               FROM Trends
               WHERE DateTime >= ? AND DateTime <= ?`;
    var result = [];
    db.each(sql, [start, end], (err, row) => {
        result.push(`${row.date}: ${row.name} - ${row.value}`);
    });
    console.log(result);
    const callMethodResult = {
        statusCode: opcua.StatusCodes.Good,
        outputArguments: [{
            dataType: opcua.DataType.String,
            arrayType: opcua.VariantArrayType.Array,
            value: result
        }]
    };
    callback(null, callMethodResult);
});
I can see this in a client such as Prosys and call the method, which works okay.
However, I can't seem to call this method from Qt. I've cut out the packaging of the arguments and the result handler (it just lists the received parameters):
QOpcUaNode* n = devices[deviceName].client->node("ns=1;s=Speed-Trend");

connect(n, &QOpcUaNode::methodCallFinished, [this, deviceName](QString methodNodeId, QVariant result, QOpcUa::UaStatusCode status)
{
    qDebug() << " Response received ";
    this->handleNodeTrendResponse(deviceName, methodNodeId, result, status);
});

n->callMethod(n->nodeId(), args);
Trace:
Requesting Trend: From QDateTime(2018-10-07 13:13:56.766 BST Qt::TimeSpec(LocalTime)) TO QDateTime(2018-10-07 13:14:05.390 BST Qt::TimeSpec(LocalTime))
qt.opcua.plugins.open62541: Could not call method: BadNodeIdInvalid
Response received [Output from method result handler]
Device Name: "speed-device"
Method Node Id: "ns=1;s=Speed-Trend"
Result: QVariant(Invalid)
Result to List: << ()
Status: QOpcUa::UaStatusCode(BadNodeIdInvalid)
I also can't seem to find the method from other clients either; this is from an OPC UA client application on my phone, which shows nothing under the Trends object.
Everything else seems accessible: I can read variables and set up monitoring just fine.
Is there something I'm just missing here or is it an issue with QtOpcUa and other clients?
I can work around this by creating variables to capture the input and output arguments, plus a boolean to represent a method call, but it's a lot neater to tie everything up in a single method.
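For what it's worth, a minimal sketch of what that variable-based workaround might look like on the node-opcua side (the browse names, the trendResult buffer and the runTrendQuery helper are made up for illustration; only the addVariable getter/setter pattern is node-opcua's own API):

let trendStart = new Date();
let trendEnd = new Date();
let trendResult = [];

// Writable input variable for the start date (an end-date variable would be identical)
namespace.addVariable({
    componentOf: deviceTrends,
    browseName: "TrendStart",
    dataType: "DateTime",
    value: {
        get: () => new opcua.Variant({ dataType: opcua.DataType.DateTime, value: trendStart }),
        set: (variant) => { trendStart = variant.value; return opcua.StatusCodes.Good; }
    }
});

// Boolean "trigger": writing true runs the query and fills trendResult,
// which a read-only output variable would then expose
namespace.addVariable({
    componentOf: deviceTrends,
    browseName: "RunTrend",
    dataType: "Boolean",
    value: {
        get: () => new opcua.Variant({ dataType: opcua.DataType.Boolean, value: false }),
        set: (variant) => {
            if (variant.value) {
                runTrendQuery(trendStart, trendEnd, (rows) => { trendResult = rows; }); // hypothetical helper around db.each
            }
            return opcua.StatusCodes.Good;
        }
    }
});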
Thanks

Related

How to use botkit with facebook and wit.ai

I am a novice in chatbot development and I would like some help.
While it seems quite simple to connect Botkit with Facebook Messenger and wit.ai in order to use NLP, I haven't managed to do so. My initial goal is to have a simple hello-hello conversation, but using wit.ai as middleware.
Below I attach the code. What it should do is receive a "hello" message, pass it to wit.ai, and then respond with "I heard hello!" (without using wit at this stage). Instead I just receive
debug: RECEIVED MESSAGE
debug: CUSTOM FIND CONVO XXXXXXXXXXXXXX XXXXXXXXXXXXXX
debug: No handler for message_received
after every message I send to the Facebook Messenger bot. In wit.ai it seems like I am getting the messages, since I receive messages in my inbox to update the intents.
If there is any code much simpler than the one below, I would be very happy to have it so that I can start with something simpler :).
Thanks
if (!process.env.page_token) {
    console.log('Error: Specify page_token in environment');
    process.exit(1);
}
if (!process.env.verify_token) {
    console.log('Error: Specify verify_token in environment');
    process.exit(1);
}
if (!process.env.app_secret) {
    console.log('Error: Specify app_secret in environment');
    process.exit(1);
}
var Botkit = require('./lib/Botkit.js');
var wit = require('./node_modules/botkit-middleware-witai')({
    token: process.env.wit,
    minConfidence: 0.6,
    logLevel: 'debug'
});
var os = require('os');
var commandLineArgs = require('command-line-args');
var localtunnel = require('localtunnel');

const ops = commandLineArgs([
    {name: 'lt', alias: 'l', args: 1, description: 'Use localtunnel.me to make your bot available on the web.',
        type: Boolean, defaultValue: false},
    {name: 'ltsubdomain', alias: 's', args: 1,
        description: 'Custom subdomain for the localtunnel.me URL. This option can only be used together with --lt.',
        type: String, defaultValue: null},
]);

if (ops.lt === false && ops.ltsubdomain !== null) {
    console.log("error: --ltsubdomain can only be used together with --lt.");
    process.exit();
}
var controller = Botkit.facebookbot({
    debug: true,
    log: true,
    access_token: process.env.page_token,
    verify_token: process.env.verify_token,
    app_secret: process.env.app_secret,
    validate_requests: true, // Refuse any requests that don't come from FB on your receive webhook, must provide FB_APP_SECRET in environment variables
});

var bot = controller.spawn({
});

controller.setupWebserver(process.env.port || 3000, function(err, webserver) {
    controller.createWebhookEndpoints(webserver, bot, function() {
        console.log('ONLINE!');
        if (ops.lt) {
            var tunnel = localtunnel(process.env.port || 3000, {subdomain: ops.ltsubdomain}, function(err, tunnel) {
                if (err) {
                    console.log(err);
                    process.exit();
                }
                console.log("Your bot is available on the web at the following URL: " + tunnel.url + '/facebook/receive');
            });

            tunnel.on('close', function() {
                console.log("Your bot is no longer available on the web at the localtunnnel.me URL.");
                process.exit();
            });
        }
    });
});

controller.middleware.receive.use(wit.receive);

controller.hears(['hello'], 'direct_message', wit.hears, function(bot, message) {
    bot.reply(message, 'I heard hello!');
});
function formatUptime(uptime) {
    var unit = 'second';
    if (uptime > 60) {
        uptime = uptime / 60;
        unit = 'minute';
    }
    if (uptime > 60) {
        uptime = uptime / 60;
        unit = 'hour';
    }
    if (uptime != 1) {
        unit = unit + 's';
    }
    uptime = uptime + ' ' + unit;
    return uptime;
}
Make sure you have trained a few utterances in wit.ai beforehand, for example "hello there", and highlight the "hello" in that statement as an intent such as greetings.
Now I'm not sure what your intents are called in wit.ai, but in your statement controller.hears(['hello']) you're actually listening for the wit.ai intents. So in the example I mentioned above, we'd be using hears(['greetings']), since that's the intent in wit.ai.
Also, instead of using direct_message, use message_received. This is what it should look like:
controller.hears(['hello'], 'message_received', wit.hears, function(bot, message) {
    bot.reply(message, 'I heard hello!');
});
If you're struggling to track down the problem, you can stick a console statement in your controller, something like console.log("Wit.ai detected entities", message.entities);, and see what you get back from that.
Let me know if you're still having any issues :)
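Putting the answer's suggestions together, a minimal sketch of the relevant wiring (this assumes Botkit and botkit-middleware-witai are installed as npm dependencies rather than vendored as in the question, and that greetings is the name of your wit.ai intent):

var Botkit = require('botkit');
var wit = require('botkit-middleware-witai')({
    token: process.env.wit,
    minConfidence: 0.6
});

var controller = Botkit.facebookbot({
    access_token: process.env.page_token,
    verify_token: process.env.verify_token,
    app_secret: process.env.app_secret
});
var bot = controller.spawn({});

controller.setupWebserver(process.env.port || 3000, function(err, webserver) {
    controller.createWebhookEndpoints(webserver, bot);
});

// Run every incoming message through wit.ai before the hears() matching happens
controller.middleware.receive.use(wit.receive);

// 'greetings' is the wit.ai intent name, not the literal text of the message
controller.hears(['greetings'], 'message_received', wit.hears, function(bot, message) {
    console.log("Wit.ai detected entities", message.entities);
    bot.reply(message, 'I heard hello!');
});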

OrientJS: How to get standard JSON (serialized) from query

I don't understand how to get standard JSON back from an OrientJS query. I see people talking about "serializing" the result, but I don't understand why or how to do that. There is a toJSON() method, but I only see it being used with fetch plans etc.
I am trying to pipe a stream to a csv file and it isn't working properly because of the incorrect JSON format.
I would love an explanation of how and when to serialize. :-)
My Query:
return db.query(
    `SELECT
        id,
        name,
        out('posted_to').name as page,
        out('posted_to').id as page_id,
        out('posted_to').out('is_language').name as language,
        out('posted_to').out('is_network').name as network
    FROM post
    WHERE posted_at
        BETWEEN '${since}' AND '${until}'
    UNWIND
        page,
        page_id,
        language,
        network
    `
);
My Result:
[ { '#type': 'd',
id: '207109605968597_1053732754639607',
name: '10 maneiras pelas quais você está ferindo seus relacionamentos',
page: 'Eu Amo o Meu Irmão',
page_id: '207109605968597',
language: 'portuguese',
network: 'facebook',
'#rid': { [String: '#-2:1'] cluster: -2, position: 1 },
'#version': 0 },
{ '#type': 'd',
id: '268487636604575_822548567865143',
name: '10 maneiras pelas quais você está ferindo seus relacionamentos',
page: 'Amo meus Filhos',
page_id: '268487636604575',
language: 'portuguese',
network: 'facebook',
'#rid': { [String: '#-2:3'] cluster: -2, position: 3 },
'#version': 0 }]
This is my dataset:
Query:
db.select('id','code').from('tablename').where({deleted: true}).all()
    .then(function (vertex) {
        console.log('Vertexes found: ');
        console.log(vertex);
    });
Output:
Vertexes found:
[ { '#type': 'd',
id: '6256650b-f5f2-4b55-ab79-489e8069b474',
code: '4b7d99fa-16ed-4fdb-9baf-b33771c37cf4',
'#rid': { [String: '#-2:0'] cluster: -2, position: 0 },
'#version': 0 },
{ '#type': 'd',
id: '2751c2a0-6b95-44c8-966a-4af7e240752b',
code: '50356d95-7fe7-41b6-b7d9-53abb8ad3e6d',
'#rid': { [String: '#-2:1'] cluster: -2, position: 1 },
'#version': 0 } ]
If I add the instruction JSON.stringify():
Query:
db.select('id','code').from('tablename').where({deleted: true}).all()
    .then(function (vertex) {
        console.log('Vertexes found: ');
        console.log(JSON.stringify(vertex));
    });
Output:
Vertexes found:
[{"#type":"d","id":"6256650b-f5f2-4b55-ab79-489e8069b474","code":"4b7d99fa-16ed-
4fdb-9baf-b33771c37cf4","#rid":"#-2:0","#version":0},{"#type":"d","id":"2751c2a0
-6b95-44c8-966a-4af7e240752b","code":"50356d95-7fe7-41b6-b7d9-53abb8ad3e6d","#ri
d":"#-2:1","#version":0}]
Hope it helps
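If the end goal is a CSV file, it may also help to drop the OrientDB metadata fields before serializing. A minimal sketch, assuming the record shape shown above (the '#'-prefixed keys are metadata; '@'-prefixed keys can appear as well):

function stripMetadata(record) {
    var clean = {};
    Object.keys(record).forEach(function (key) {
        // Keep only ordinary properties, not '#type', '#rid', '#version', '@class', ...
        if (key.charAt(0) !== '#' && key.charAt(0) !== '@') {
            clean[key] = record[key];
        }
    });
    return clean;
}

// e.g. inside the .then() above:
console.log(JSON.stringify(vertex.map(stripMetadata)));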
I found a way that worked for me. Instead of using db.query(), I used an HTTP request in Node to query the database. The OrientDB documentation also says the REST interface only returns JSON, so if you query the database this way you will always get valid JSON back.
For making the HTTP request I used the request module.
This is a sample that worked for me:
var request = require("request");

var auth = "Basic " + new Buffer("root" + ":" + "root").toString("base64");

request(
    {
        url: encodeURI('http://localhost:2480/query/tech_graph/sql/' + queryInput + '/20'),
        headers: {
            "Authorization": auth
        }
    },
    function (error, response, body) {
        console.log(body);
        return body;
    }
);
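The body arrives as a JSON string; with OrientDB's REST query endpoint the rows should be wrapped in a result property, so parsing it might look like this (same URL and credentials as above, queryInput as in the sample):

var request = require("request");

var auth = "Basic " + new Buffer("root" + ":" + "root").toString("base64");

request(
    {
        url: encodeURI('http://localhost:2480/query/tech_graph/sql/' + queryInput + '/20'),
        headers: { "Authorization": auth }
    },
    function (error, response, body) {
        if (error) { return console.error(error); }
        var parsed = JSON.parse(body);      // body is plain JSON text
        var records = parsed.result || [];  // OrientDB's REST query wraps the rows in "result"
        console.log(JSON.stringify(records));
    }
);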

How to change http status codes in Strongloop Loopback

I am trying to modify the HTTP status code of create.
POST /api/users
{
    "lastname": "wqe",
    "firstname": "qwe"
}
Returns 200 instead of 201.
I can do something like that for errors:
var err = new Error();
err.statusCode = 406;
return callback(err, info);
But I can't find how to change status code for create.
I found the create method:
MySQL.prototype.create = function (model, data, callback) {
    var fields = this.toFields(model, data);
    var sql = 'INSERT INTO ' + this.tableEscaped(model);
    if (fields) {
        sql += ' SET ' + fields;
    } else {
        sql += ' VALUES ()';
    }
    this.query(sql, function (err, info) {
        callback(err, info && info.insertId);
    });
};
In your call to remoteMethod you can add a function to the response directly. This is accomplished with the rest.after option:
function responseStatus(status) {
    return function(context, callback) {
        var result = context.result;
        if (testResult(result)) { // testResult is some method for checking that you have the correct return data
            context.res.statusCode = status;
        }
        return callback();
    }
}

MyModel.remoteMethod('create', {
    description: 'Create a new object and persist it into the data source',
    accepts: {arg: 'data', type: 'object', description: 'Model instance data', http: {source: 'body'}},
    returns: {arg: 'data', type: mname, root: true},
    http: {verb: 'post', path: '/'},
    rest: {after: responseStatus(201) }
});
Note: It appears that strongloop will force a 204 "No Content" if the context.result value is falsey. To get around this I simply pass back an empty object {} with my desired status code.
You can specify a default success response code for a remote method in the http parameter.
MyModel.remoteMethod(
    'create',
    {
        http: {path: '/', verb: 'post', status: 201},
        ...
    }
);
For LoopBack versions 2 and 3+: you can also use the afterRemote hook to modify the response:
module.exports = function(MyModel) {
    MyModel.afterRemote('create', function(context, remoteMethodOutput, next) {
        context.res.statusCode = 201;
        next();
    });
};
This way, you don't have to modify or touch the original method or its signature. You can also customize the output along with the status code from this hook.
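For example, a hedged sketch of adjusting both the status code and the response body in the same hook (the envelope shape here is only an illustration):

module.exports = function(MyModel) {
    MyModel.afterRemote('create', function(context, remoteMethodOutput, next) {
        context.res.statusCode = 201;
        // Replace the default body with a wrapper around the created instance
        context.result = {
            created: true,
            data: remoteMethodOutput
        };
        next();
    });
};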

Problems with security policy for Filepicker Convert?

I use Filepicker to "read" and then "store" an image from a client's computer. Now I want to resize the image using Filepicker, but I always get a 403 error:
POST https://www.filepicker.io/api/file/w11b6aScR1WRXKFbcXON/convert?_cacheBust=1380818787693 403 (FORBIDDEN)
I am using the same security policy and signature for the "read", "store", and "convert" calls. Is this wrong? When "read" and "store" are called there is no file handle yet (e.g. the last string part in InkBlob.url), but it seems the "convert" policy/signature must be generated using the file handle returned with the "store" InkBlob. If that is the case, what's a more convenient way to do this in JavaScript? In "convert" I have no access to the Python function that generates security policies unless I write an API call for that.
My code snippet as below (initialFpSecurityObj was pre-generated in Python using an empty handle):
filepicker.store(thumbFile, {
    policy: initialFpSecurityObj.policy,
    signature: initialFpSecurityObj.signature,
    location: "S3",
    path: 'thumbs/' + initialFpSecurityObj.uniqueName + '/',
}, function(InkBlob) {
    console.log("Store successful:", JSON.stringify(InkBlob));
    processThumb(InkBlob);
}, function(FPError) {
    console.error(FPError.toString());
});
var processThumb = function(InkBlob) {
    filepicker.convert(InkBlob, {
        width: 800,
        height: 600,
        format: "jpg",
        policy: initialFpSecurityObj.policy,
        signature: initialFpSecurityObj.signature,
    }, function(InkBlob) {
        console.log("thumbnail converted and stored at:", InkBlob);
    }, function(FPError) {
        console.error(FPError);
    });
};
Thanks a lot for the help.
--- EDIT ---
Below is the snippet for the Python code that generates initialFpSecurityObj
def generateFpSecurityOptions(handle, userId, policyLife=DEFAULT_POLICY_LIFE):
    expiry = int(time() + policyLife)
    json_policy = json.dumps({'handle': handle, 'expiry': expiry})
    policy = base64.urlsafe_b64encode(json_policy)
    secret = 'XXXXXXXXXXXXXX'
    signature = hmac.new(secret, policy, hashlib.sha256).hexdigest()
    uniqueName = hashlib.md5()
    uniqueName.update(signature + repr(time()))
    uniqueName = uniqueName.hexdigest() + str(userId)
    return {'policy': policy, 'signature': signature, 'expiry': expiry, 'uniqueName': uniqueName}

fp_security_options = generateFpSecurityOptions(None, request.user.id)
Then in the Django template fp_security_options is retrieved:
var initialFpSecurityObj = {{fp_security_options|as_json|safe}};
The way fp_security_options is generated looks suspicious to me (it's a former colleague's code) because the handle is None.
My recommendation would be to create two policies: one that is handle-bound and allows storing of the file, and another that is not handle-bound for the convert. In this case, you can set a shorter expiry time to increase the level of security, given that you are not specifying a handle.
Your problem is probably that your policy does not contain any "call" specifications. I suggest:
json_policy = json.dumps({'handle': handle, 'expiry': expiry, 'call':['pick','store','read','convert']})
but as our (very busy ;) brettcvz suggests, for conversion only, this is already enough:
json_policy = json.dumps({'handle': handle, 'expiry': expiry, 'call':'convert'})
You can find this in the security docs https://developers.inkfilepicker.com/docs/security/
If you still have issues, use a REST call instead; it's free. The following method is JavaScript and returns a URL to the REST endpoint of Filepicker, which can be used to retrieve the converted image. The _options object looks like this:
var myOptions = {
    w: 150,
    h: 150,
    fit: "crop",
    align: "faces",
    format: "jpg",
    quality: 86
};
and will work with all parameters specified in Filepicker's REST API (check out https://developers.inkfilepicker.com/docs/web/#inkblob-images).
function getConvertedURL(_handle, _options, _policy, _signature) {
    // basic url piece
    var url = "https://www.filepicker.io/api/file/" + _handle + "/convert?";
    // appending options
    for (var option in _options) {
        if (_options.hasOwnProperty(option)) {
            url += option + "=" + _options[option] + "&";
        }
    }
    // appending signed policy
    url += "signature=" + _signature + "&policy=" + _policy;
    return url;
}
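A possible usage sketch, reusing the myOptions object above (the handle is the last path segment of the InkBlob url; the one shown is taken from the error message in the question, and the policy/signature are whatever your backend issued):

var handle = "w11b6aScR1WRXKFbcXON"; // e.g. InkBlob.url.split('/').pop()
var convertUrl = getConvertedURL(handle, myOptions, initialFpSecurityObj.policy, initialFpSecurityObj.signature);
console.log(convertUrl);
// -> https://www.filepicker.io/api/file/w11b6aScR1WRXKFbcXON/convert?w=150&h=150&...&signature=...&policy=...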
So I finally figured it out myself, although I saw brettcvz's suggestion afterwards. The key is that for 'convert' to work, I have to specify the exact handle of the uploaded file (i.e. the last part of the string in the InkBlob's url property returned from the 'store' or 'pickAndStore' call).
The first thing I did was to edit the Python function generating the security policy and signature:
def generateFpSecurityOptions(handle, userId, policyLife=DEFAULT_POLICY_LIFE):
    expiry = int(time() + policyLife)
    json_policy = json.dumps({'handle': handle, 'expiry': expiry})
    policy = base64.urlsafe_b64encode(json_policy)
    secret = 'XXXXXXXXXXXXXX'
    signature = hmac.new(secret, policy, hashlib.sha256).hexdigest()
    if not handle == None:
        uniqueName = handle
    else:
        uniqueName = hashlib.md5()
        uniqueName.update(signature + repr(time()))
        uniqueName = uniqueName.hexdigest() + str(userId)
    return {'policy': policy, 'signature': signature, 'expiry': expiry, 'uniqueName': uniqueName}

fp_security_options = generateFpSecurityOptions(None, request.user.id)
Then I had to establish the API call in our Django framework to get this security-policy object dynamically via AJAX. I am fortunate that my colleague had previously written it, so I just call the API function in JavaScript to retrieve the file-specific security policy object:
var initialFpSecurityObj = {{fp_security_options|as_json|safe}};
filepicker.store(thumbFile, {
    policy: initialFpSecurityObj.policy,
    signature: initialFpSecurityObj.signature,
    access: "public"
}, function(InkBlob) {
    processThumb(InkBlob);
}, function(FPError) {
    console.error(FPError.toString());
}, function(progress) {
    console.log("Loading: " + progress + "%");
});

var processThumb = function(InkBlob) {
    var fpHandle = InkBlob.url.split('/').pop();
    $.ajax({
        url: API_BASE + 'file_picker_policy',
        type: 'GET',
        data: {
            'filename': fpHandle
        },
        dataType: 'json',
        success: function(data) {
            var newFpSecurityObj = data.data;
            filepicker.convert(InkBlob, {
                width: 800,
                height: 600,
                format: "jpg",
                policy: newFpSecurityObj.policy,
                signature: newFpSecurityObj.signature,
            }, {
                location: "S3",
                path: THUMB_FOLDER + '/' + newFpSecurityObj.uniqueName + '/',
            }, function(fp) { // onSuccess
                console.log("successfully converted and stored!");
                // do what you want with the converted file
            }, function(FPError) { // onError
                console.error(FPError);
            });
        },
        failure: function() {
            alert("There was an error converting the thumbnail! Please try again.");
        }
    });
};

How one could use server side sorting and paging with Azure Mobile Services

I am using jqGrid (inlineNav) with data from an Azure service and am interested in learning how one could use server-side sorting and paging with Azure Mobile Services.
Please share thoughts around this.
Windows Azure Mobile Services provides a REST API which can be used to get/insert/edit/delete data of the tables which you configured for the corresponding access (see the documentation). The Query Records operation uses the HTTP GET verb. It supports the Open Data Protocol (OData) URI options $orderby, $skip, $top and $inlinecount, which can be used to fill jqGrid.
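For example, a request for the second page of 10 rows sorted by Area might look roughly like this (the table URL is the one from the code below; the parameter values are the ones serializeGridData produces):

GET https://mohit.azure-mobile.net/tables/Schedules?$top=10&$skip=10&$orderby=Area%20asc&$inlinecount=allpages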
$("#list4").jqGrid({
url : 'https://mohit.azure-mobile.net/tables/Schedules',
datatype: "json",
height: "auto",
colModel: [
{ name: "RouteId", width: 50 },
{ name: "Area", width: 130 }
],
cmTemplate: {editable: true, editrules: { required: true}},
rowList: [10, 20, 30],
rowNum: 10,
prmNames: { search: null, nd: null },
ajaxGridOptions: {
contentType: "application/json",
headers: {
"X-ZUMO-APPLICATION": "myKey"
}
},
serializeGridData: function (postData) {
if (postData.sidx) {
return {
$top: postData.rows,
$skip: (parseInt(postData.page, 10) - 1) * postData.rows,
$orderby: postData.sidx + " " + postData.sord,
$inlinecount: "allpages"
};
} else {
return {
$top: postData.rows,
$skip: (parseInt(postData.page, 10) - 1) * postData.rows,
$inlinecount: "allpages"
};
}
},
beforeProcessing: function (data, textStatus, jqXHR) {
var rows = parseInt($(this).jqGrid("getGridParam", "rowNum"), 10);
data.total = Math.ceil(data.count/rows);
},
jsonReader: {
repeatitems: false,
root: "results",
records: "count"
},
loadError: function (jqXHR, textStatus, errorThrown) {
alert('HTTP status code: ' + jqXHR.status + '\n' +
'textStatus: ' + textStatus + '\n' +
'errorThrown: ' + errorThrown);
alert('HTTP message body (jqXHR.responseText): ' + '\n' + jqXHR.responseText);
},
pager: "#pager1",
sortname: "Area",
viewrecords: true,
caption: "Schedule Data",
gridview: true
});
Some comments on the above code:
I removed sortable: false to allow sorting of the grid by clicking on a column header.
With the prmNames option one can stop sending unneeded parameters to the server or rename them. I used prmNames: { search: null, nd: null } to stop sending the _search and nd options. One could use sort: "$orderby", rows: "$top" to rename two other parameters, but because we need to calculate $skip and append sord after sidx we need serializeGridData anyway, so renaming the other parameters is not needed in this case.
Using serializeGridData we construct the list of options which will be sent to the server.
ajaxGridOptions is used to set additional parameters of the jQuery.ajax request which jqGrid makes internally to access the server. The options used in the example set Content-Type: application/json and X-ZUMO-APPLICATION: myKey in the HTTP headers.
The response from the server doesn't contain total (the total number of pages), so we use the beforeProcessing callback to fill that property based on other information before the response is processed.
Because we use the $inlinecount=allpages option in the URL, the response from the server contains information about the total number of records, and the page of data is wrapped in the results part of the answer. So we use jsonReader: {repeatitems: false, root: "results", records: "count"} to read the response (see the sample response after this list).
We have to remove the loadonce: true option because the server returns only the requested page of data instead of the whole set of data.
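For reference, a one-page response with $inlinecount=allpages might look roughly like this (illustrative values; results and count are the property names the jsonReader above expects):

{
    "results": [
        { "id": 1, "RouteId": 7, "Area": "North" },
        { "id": 2, "RouteId": 9, "Area": "South" }
    ],
    "count": 25
}

With rowNum: 10 and count: 25, beforeProcessing computes data.total = Math.ceil(25 / 10) = 3 pages.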