Create an operation with CEP using a custom fragment

I'm trying to write a custom CEP rule that creates an operation for an agent to gather the measurement that caused the rule to trigger. The CEP rule looks like this:
insert into CreateOperation
select
OperationStatus.PENDING as status,
"5345" as deviceId,
{
"c8y_GetData", {
"name": "get measurement data",
"measurementID": measurementEvent.measurement.id.value,
"measurementType": measurementEvent.measurement.type
}
} as fragments
from MeasurementCreated measurementEvent
where measurementEvent.measurement.type = "c8y_TemperatureMeasurement";
When I use simple strings for the measurementEvent.measurement... fields (e.g. "testString"), the rule works, but those lines cause errors when written as in this example. Changing the ":" to "," as in the documentation examples makes the rule deploy, but the result is
"name",
"get measurement data",
"measurementID",
"176438",
"measurementType",
"c8y_TemperatureMeasurement"
which is not the key/value pairing that "name": "get measurement data" would have produced.
Trying to encapsulate fragments within fragments doesn't seem to work either.

The fragments parameter in CEL is not an object. It is a list of key/value pairs in which each key is a JSON path, with everything separated by commas (I know it looks odd). The curly braces in Esper actually denote an array, not an object.
Your statement should look like this:
insert into CreateOperation
select
"5345" as deviceId,
{
"c8y_GetData.name", "get measurement data",
"c8y_GetData.measurementID", measurementEvent.measurement.id.value,
"c8y_GetData.measurementType", measurementEvent.measurement.type
} as fragments
from MeasurementCreated measurementEvent
where measurementEvent.measurement.type = "c8y_TemperatureMeasurement";
Also note that I dropped the line with OperationStatus. On POST of an operation you cannot set the status; it is automatically created in status PENDING. Only on PUT can you change it. Keeping the line will result in an API error when you try to POST the operation.

Related

What is the correct way to express "select all when nothing is specified in parameter"?

Let's say we have an HTTP endpoint to get all elements, optionally filtered by name:
GET /elements?name={name}
{name} can be a comma-separated list of values or be absent entirely
valid:
GET /elements?name=Bill,Mary,Ann
GET /elements?name=Mike
GET /elements
invalid:
GET /elements?name=
In the controller we somehow find out that name was not passed. We know the contract implies returning all elements. Possible approaches (I've seen them in different projects) are:
using NULL or a "dummy" substitution such as a secret character sequence "!#$#%#$" and juggling it in the database while building the query
using if (present) { executeQueryA } else { executeQueryB } logic
I am not sure I like either of these approaches, because once there is more than one optional filter these designs become unmaintainable. Something makes me believe there is a better way to handle the situation.
What would be a proper design on back-end and in database query to handle the case "select all" when nothing is given? Just a general idea and some pseudo-code will be much appreciated.
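For illustration only, here is a minimal sketch of one common single-query pattern (assuming a PostgreSQL-style database, a hypothetical elements table, and an array-typed bind parameter :names that the controller sets to NULL when the query string parameter is absent):
-- Optional-filter pattern: the filter disables itself when no names are supplied.
SELECT *
FROM elements
WHERE :names IS NULL            -- no filter supplied: select all rows
   OR name = ANY(:names);       -- filter supplied: match any of the listed names
With this shape, each additional optional filter adds one more (param IS NULL OR column = param) condition instead of multiplying query variants in the controller.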

IBM Chatbot Assistant: Handling Multiple Entities

I have an entity called #spare_part and this entity has 4 values with the following example synonyms each:
both with synonyms filter, oil level indicator
not_defined with a synonym spare part
only_gear with synonyms valve, seal
whole_gear_box with a synonym complete set of gearbox
I want to be able to handle multiple entities given in the same input and address them later on if needed. For this purpose I have written the following in the JSON editor:
{
  "context": {
    "sparepartrequest": "#spare_part.values"
  },
  "output": {
    "generic": [
      {
        "values": [
          {
            "text": "You want an offer for the following parts: <? $sparepartrequest.join(', ') ?>."
          }
        ],
        "response_type": "text",
        "selection_policy": "sequential"
      }
    ]
  }
}
I have created a context variable called sparepartrequest as can be seen from the code lines above. For instance when the user says "I want an offer for a filter and a seal", the output of the bot is the following sentence:
You want an offer for the following parts: both, only_gear.
I don't want the bot to echo back the names of the values of the entity #spare_part; I would rather have it store the exact input of the user, which in our case would be filter and seal. So if the bot worked as I want it to, the output would look like the following:
You want an offer for the following parts: filter, seal.
Again, I believe that this can be handled in the JSON editor. Thank you!
Use two context variables: sparepartrequest as already defined, and sparepartrequest_literals as follows:
"sparepartrequest_literals": "<? entities['spare_part'].![literal].join(', ') ?>"
Then, in your text response, use $sparepartrequest_literals to print the parts exactly as the user typed them, or use $sparepartrequest to refer to the detected entity values.

Sensenet length filter not working

I want to query fields that are empty and fields that are not empty using the Sensenet OData REST API. The documentation mentions a filter function called 'length'. I have tried to query a field with the length operation, but it fails with the error below.
This is the filter I have used:
$filter=length(Name) eq 2
Sense/Net 6.5.4.9496
Exception
"code": "NotSpecified",
"exceptiontype": "SnNotSupportedException",
"message": {
"lang": "en-us",
"value": "Unknown method: length"
},
Wiki link: http://wiki.sensenet.com/OData_REST_API
The length operation was included in the list of supported methods incorrectly; we apologise for that. SenseNet compiles these filters to Lucene queries, and it is not possible to compose a Lucene query that performs an operation on a field.
(The remaining methods, like substringof or startswith, can be compiled to a wildcard expression easily, so those should work.)
Unfortunately 'empty' expressions are also not supported by Lucene because of its document/term structure, so the following expression does not work either:
Description eq ''
Edit: as a workaround, developers may create a custom field index handler.
For every field you want to check for emptiness (e.g. Description), you may create a technical hidden bool field (IsDescriptionEmpty) in the content type definition. The only thing you have to create and define is a custom field index handler class. In your case it would inherit from the built-in bool field index handler and you could return a boolean index value based on whether the target field (in this case Description) is empty or not.
After this you would be able to define search expressions like the following:
+Type:File +IsDescriptionEmpty:true
Please check the wiki article below and the source code for index handler examples.
How to create a field indexhandler

How to escape some characters in postgresql

I have this data in one column in PostgreSQL:
{
  "geometry": {
    "status": "Point",
    "coordinates": [
      -122.421583,
      37.795027
    ]
  },
and I am using this query
select * from students where data_json LIKE '%status%' ;
The above query returns results, but this one does not:
select * from students where data_json LIKE '%status:%' ;
How can I fix that?
Of course the second one doesn't find a match; there's no status: text in the value. I think you wanted:
select * from students where data_json LIKE '%"status":%'
... however, like most cases where you attempt text pattern matching on structured data, this is in general a terrible idea that will bite you. Just a couple of problem examples:
{
"somekey": "the value is \"status\": true"
}
... where "status": appears as part of the text value and will match even though it shouldn't, and:
{
status : "blah"
}
where status has no quotes and there is a space before the colon. As far as JavaScript is concerned this is the same as "status":, but it won't match your pattern.
If you're trying to find fields within json or extract fields from json, do it with a json parser. PL/V8 may be of interest, or the json libraries available for tools like pl/perl, pl/pythonu, etc. Future PostgreSQL versions will have functions to get a json key by path, test if a json value exists, etc, but 9.2 does not.
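For readers on a later release, a hedged sketch of what that looks like (assuming PostgreSQL 9.3 or newer, where the json operators exist, and assuming data_json is stored as text with the geometry/status structure shown in the question):
-- 9.3+ only: extract the nested status key as text and compare it properly.
select *
from students
where data_json::json -> 'geometry' ->> 'status' = 'Point';
On 9.4 and later, a jsonb column queried with the containment operator @> would be more idiomatic still, but none of this is available on 9.2.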
At this point you might be thinking "why don't I use regular expressions?". Don't go there; you do not want to try to write a full JSON parser in regex. This blog entry is somewhat relevant.

Text input through SSRS parameter including a Field name

I have an SSRS "statement" type report that has a general layout of text boxes and tables. For the main text box I want to let the user supply the value as a parameter so the text can be customized, i.e.
Parameters!MainText.Value = "Dear Mr.Doe, Here is your statement."
then I can set the text box value to be the value of the parameter:
=Parameters!MainText.Value
However, I need to be able to allow the incoming parameter value to include a dataset field, like so:
Parameters!MainText.Value = "Dear Mr.Doe, Here is your [Fields!RunDate.Value] statement"
so that my report output would look like:
"Dear Mr.Doe, Here is your November statement."
I know that you can define it to do this in the text box by supplying the static text and the field request, but I need SSRS to recognize that inside the parameter string there is a field request that needs to be escaped and bound.
Does anyone have any ideas for this? I am using SSRS 2008 R2.
Have you tried concatenating?
Parameters!MainText.Value = "Dear Mr.Doe, Here is your " & Fields!RunDate.Value & " statement"
There are a few dramatically different approaches. To know which is best for you will require more information:
1. Embedded code in the report. Probably the quickest to implement would be embedded code that returns the parameter but calls String.Replace() appropriately to substitute in dynamic values. You'll need to establish a convention with the user for which strings will be replaced. Embedded code gives you access to many objects in the report. For example:
Public Function TestGlobals(ByVal s As String) As String
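' Ignores the supplied string and simply returns the report execution time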
Return Report.Globals.ExecutionTime.ToString
End Function
will return the execution time. Other methods of accessing parameters for the report are shown here.
1.5 If this function is getting very large, look at using a custom assembly. Then you can have a better authoring experience with Visual Studio.
2. Modify the XML. Depending on where you use this, you could directly modify the .rdl/.rdlc XML.
3. Consider other tools, such as Report Builder. If you need to give the user more flexibility over report authoring, there are many tools built specifically for this purpose, such as SSRS's Report Builder.
Here's another approach: Display the parameter string with the dataset value already filled in.
To do so, create a parameter named RunDate, for example, set its default value to "Get values from a query", and select the first dataset and the value field (RunDate). Now the parameter holds the RunDate field and you can use it elsewhere. Make this parameter hidden or internal and set the correct data type, e.g. Date/Time, so you can format its value later.
Now create the second parameter which will hold the default text you want:
Parameters!MainText.Value = "Dear Mr.Doe, Here is your [Parameters!RunDate.Value] statement"
Not sure if that syntax works, but you get the idea. You can also do formatting here, e.g. show only the month of a DateTime:
="Dear Mr.Doe, Here is your " & Format(Parameters!RunDate.Value, "MMMM") & " statement"
This approach uses only built-in methods and avoids the need for a parser so the user doesn't have to learn the syntax for it.
There is of course one drawback: the user has complete control over the parameter contents and can supply a value that doesn't match the report content, but that is also the case with the String.Replace method.
And just for the sake of completeness, there's also the simplistic option: concatenate multiple parameters. Create two parameters named MainTextBeforeRunDate and MainTextAfterRunDate.
The Textbox value expression becomes:
=Parameters!MainTextBeforeRunDate.Value & Fields!RunDate.Value & Parameters!MainTextAfterRunDate.Value
This should explain itself. The simplest solution is often the best, but in this case I have my doubts. At least this makes sure your RunDate ends up in the final report text.