We are trying to integrate Orion, Cygnus and CKAN.
I have followed these steps to make this happen:
Install and configure Cygnus with the FIWARE CKAN info (Cygnus is up and running)
Log in to CKAN, get the API key and configure it in the Cygnus settings
Orion steps:
updateContext = APPEND data
{
"contextElements": [{
"type": "Room",
"isPattern": "false",
"id": "26JanRoom",
"attributes": [{
"name": "temperature",
"type": "float",
"value": "888"
}]
}],
"updateAction": "APPEND"
}
subscribeContext = subscribe with the entity id created above (our Cygnus host is given as the "reference": "CYGNUS HOST")
{
"entities": [{
"type": "Room",
"isPattern": "false",
"id": "26JanRoom"
}],
"attributes": ["temperature"],
"reference": "CYGNUS HOST",
"duration": "P1M",
"notifyConditions": [{
"type": "ONCHANGE",
"condValues": ["temperature"]
}],
"throttling": "PT5S"
}
updateContext = UPDATE data
{
"contextElements": [{
"type": "Room",
"isPattern": "false",
"id": "26JanRoom",
"attributes": [{
"name": "temperature",
"type": "float",
"value": "111"
}]
}],
"updateAction": "UPDATE"
}
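For reference, here is a minimal sketch of how these three payloads can be POSTed to Orion's NGSI v1 endpoints (the file names are illustrative, and the X-Auth-Token value is an assumption, since the FIWARE Lab instance requires an OAuth2 token):

import json
import requests

ORION = "http://orion.lab.fi-ware.org:1026"
HEADERS = {
    "Content-Type": "application/json",
    "Accept": "application/json",
    "X-Auth-Token": "MY_TOKEN",  # assumed: FIWARE Lab requires an OAuth2 token
}

# The three JSON documents above, saved to files (names are illustrative)
REQUESTS = [
    ("/v1/updateContext", "append.json"),
    ("/v1/subscribeContext", "subscribe.json"),
    ("/v1/updateContext", "update.json"),
]

for path, fname in REQUESTS:
    with open(fname) as f:
        payload = json.load(f)
    # Print status and body: a failed subscription shows up here, not in Cygnus
    r = requests.post(ORION + path, headers=HEADERS, json=payload)
    print(path, r.status_code, r.text)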
What we expect is to receive some notifications on the Cygnus side, but nothing is sent from Orion (orion.lab.fi-ware.org:1026/).
Could you please help us with this topic?
Thanks, kind regards,
Omer Ozdemir
You are using
"condValues": ["pressure"]
which means that every time the attribute named pressure changes, Orion will trigger a notification. However, your update is modifying temperature.
Please have a look at the subscribeContext operation section in the Orion documentation.
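That is, for the update above to trigger a notification, the subscription's notifyConditions must reference the attribute actually being updated:

"notifyConditions": [{
"type": "ONCHANGE",
"condValues": ["temperature"]
}]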
I configured a Kafka JDBC source connector in order to push onto a Kafka topic the records changed (inserted or updated) in a PostgreSQL database.
I use "timestamp+incrementing" mode. It seems to work fine.
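For context, the connector configuration looks roughly like this (a sketch; the connection details, table name and column names are assumptions based on the message shown below):

{
"name": "postgres-source",
"config": {
"connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
"connection.url": "jdbc:postgresql://localhost:5432/mydb",
"connection.user": "postgres",
"connection.password": "********",
"table.whitelist": "author",
"mode": "timestamp+incrementing",
"timestamp.column.name": "entity_modify_date",
"incrementing.column.name": "id",
"topic.prefix": "postgres-"
}
}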
I didn't configure the JDBC sink connector because I'm using a Kafka consumer that listens on the topic.
The message on the topic is a JSON. This is an example:
{
"schema": {
"type": "struct",
"fields": [
{
"type": "int64",
"optional": false,
"field": "id"
},
{
"type": "int64",
"optional": true,
"name": "org.apache.kafka.connect.data.Timestamp",
"version": 1,
"field": "entity_create_date"
},
{
"type": "int64",
"optional": true,
"name": "org.apache.kafka.connect.data.Timestamp",
"version": 1,
"field": "entity_modify_date"
},
{
"type": "int32",
"optional": true,
"field": "entity_version"
},
{
"type": "string",
"optional": true,
"field": "firstname"
},
{
"type": "string",
"optional": true,
"field": "lastname"
}
],
"optional": false,
"name": "author"
},
"payload": {
"id": 1,
"entity_create_date": 1600287236682,
"entity_modify_date": 1600287236682,
"entity_version": 1,
"firstname": "George",
"lastname": "Orwell"
}
}
As you can see, there is no information about whether this change was captured by the source connector because of an insert or because of an update.
I need this information. How can I solve this?
You can't get that information using the JDBC Source connector, unless you do something bespoke in the source schema and triggers.
This is one of the reasons why log-based CDC is generally a better way to get events from the source database. Other reasons include:
capturing deletes
capturing the type of operation
capturing all events, not just what's there at the time when the connector polls.
For more details on the nuances of this, see this blog or a talk based on the same material.
Using a CDC-based approach, as suggested by @Robin Moffatt, may be the proper way to handle your requirement. Check out https://debezium.io/
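For illustration, a log-based CDC tool such as Debezium emits change events that carry the operation type explicitly ("c" for insert, "u" for update, "d" for delete). A simplified sketch of such an event, based on the row above:

{
"op": "u",
"before": { "id": 1, "firstname": "George", "lastname": "Orwel" },
"after": { "id": 1, "firstname": "George", "lastname": "Orwell" },
"ts_ms": 1600287236682
}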
However, looking at your table data, you could use "entity_create_date" and "entity_modify_date" in your consumer to determine whether the message is an insert or an update: if "entity_create_date" equals "entity_modify_date", it's an insert; otherwise it's an update.
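A minimal consumer-side sketch of that check (the topic name and the kafka-python client are assumptions):

from kafka import KafkaConsumer
import json

consumer = KafkaConsumer(
    "postgres-author",  # assumed topic name (topic.prefix + table)
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for message in consumer:
    payload = message.value["payload"]
    # Equal create/modify timestamps: the row was inserted; otherwise it was updated
    if payload["entity_create_date"] == payload["entity_modify_date"]:
        print("INSERT:", payload["id"])
    else:
        print("UPDATE:", payload["id"])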
I initially created the following linked service of type AzureDataExplorer in ADFv2, for accessing my ADX database called CustomerDB:
{
"name": "ls_AzureDataExplorer",
"properties": {
"type": "AzureDataExplorer",
"annotations": [],
"typeProperties": {
"endpoint": "https://mycluster.xxxxmaskingregionxxxx.kusto.windows.net",
"tenant": "xxxxmaskingtenantidxxxx",
"servicePrincipalId": "xxxxmaskingspxxxx",
"servicePrincipalKey": {
"type": "AzureKeyVaultSecret",
"store": {
"referenceName": "ls_AzureKeyVault_MyKeyVault",
"type": "LinkedServiceReference"
},
"secretName": "MySecret"
},
"database": "CustomerDB"
}
},
"type": "Microsoft.DataFactory/factories/linkedservices"
}
This worked smoothly. I had to mask some values for obvious reasons, but there is no issue with this connection. Now, inspired by this Microsoft documentation, I am trying to create a generic version of this linked service. This makes sense because otherwise, if I have 10 databases in the cluster, I will have to create 10 different linked services.
So I tried to create the parameterized version in the following manner:
{
"name": "ls_AzureDataExplorer_Generic",
"properties": {
"type": "AzureDataExplorer",
"annotations": [],
"typeProperties": {
"endpoint": "https://mycluster.xxxxmaskingregionxxxx.kusto.windows.net",
"tenant": "xxxxmaskingtenantidxxxx",
"servicePrincipalId": "xxxxmaskingspxxxx",
"servicePrincipalKey": {
"type": "AzureKeyVaultSecret",
"store": {
"referenceName": "ls_AzureKeyVault_MyKeyVault",
"type": "LinkedServiceReference"
},
"secretName": "MySecret"
},
"database": "#{linkedService().DBName}"
}
},
"type": "Microsoft.DataFactory/factories/linkedservices"
}
But while publishing the changes I keep getting an error.
Is there any solution to this?
The article clearly says:
For all other data stores, you can parameterize the linked service by selecting the Code icon on the Connections tab and using the JSON editor
So, as per that, my changes should have been published successfully, but I keep getting the error.
It appears I need to declare the parameter elsewhere in the same JSON. The following worked:
{
"name": "ls_AzureDataExplorer_Generic",
"properties": {
"parameters": {
"DBName": {
"type": "string"
}
},
"type": "AzureDataExplorer",
"annotations": [],
"typeProperties": {
"endpoint": "https://mycluster.xxxxmaskingregionxxxx.kusto.windows.net",
"tenant": "xxxxmaskingtenantidxxxx",
"servicePrincipalId": "xxxxmaskingspxxxx",
"servicePrincipalKey": {
"type": "AzureKeyVaultSecret",
"store": {
"referenceName": "ls_AzureKeyVault_MyKeyVault",
"type": "LinkedServiceReference"
},
"secretName": "MySecret"
},
"database": "#{linkedService().DBName}"
}
},
"type": "Microsoft.DataFactory/factories/linkedservices"
}
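A dataset can then supply the DBName value when it references the linked service; a sketch (the dataset name and table are illustrative):

{
"name": "ds_AzureDataExplorer_Generic",
"properties": {
"linkedServiceName": {
"referenceName": "ls_AzureDataExplorer_Generic",
"type": "LinkedServiceReference",
"parameters": {
"DBName": "CustomerDB"
}
},
"annotations": [],
"type": "AzureDataExplorerTable",
"typeProperties": {
"table": "MyTable"
}
},
"type": "Microsoft.DataFactory/factories/datasets"
}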
According to this Map Provider Configuration Changes document, I use this configuration to add HERE maps to GeoMap:
var oMapConfig = {
"MapProvider": [{
"name": "HEREMAPS",
"type": "HERETerrainMap",
"description": "",
"tileX": "256",
"tileY": "256",
"maxLOD": "20",
"copyright": "Tiles Courtesy of HERE Maps",
"Source": [{
"id": "s1",
"url": "https://1.base.maps.cit.api.here.com/maptile/2.1/maptile/newest/reduced.day/{LOD}/{X}/{Y}/256/png8?app_id=MY_ID&app_code=MY_CODE"
}, {
"id": "s2",
"url": "https://2.base.maps.cit.api.here.com/maptile/2.1/maptile/newest/reduced.day/{LOD}/{X}/{Y}/256/png8?app_id=MY_ID&app_code=MY_CODE"
}
]
}],
"MapLayerStacks": [{
"name": "DEFAULT",
"MapLayer": {
"name": "layer1",
"refMapProvider": "HEREMAPS",
"opacity": "1.0",
"colBkgnd": "RGB(255,255,255)"
}
}]
};
this.oMap.setMapConfiguration(oMapConfig);
this.oMap.setRefMapLayerStack("DEFAULT");
But my map is in a black-and-white style.
What I want is the standard map.
In Configuring HERE (formerly Nokia, NAVTEQ) maps, a new server URL is provided; I've tried this, but it's not working:
{
"id": "s1",
"url": http://1.maps.nlp.nokia.com/maptile/2.1/maptile/newest/normal.day/{LOD}/{X}/{Y}/256/png?app_id=YOUR_APP_ID&app_code=YOUR_APP_CODE"
}, {
"id": "s2",
"url": "http://2.maps.nlp.nokia.com/maptile/2.1/maptile/newest/normal.day/{LOD}/{X}/{Y}/256/png?app_id=MY_APP_ID&app_code=MY_APP_CODE"
}
And I failed to find MapProvider configuration documentation in setMapConfiguration of GeoMap.
Just change reduced.day to normal.day in your map URL, and you'll get a colored map :)
edit:
Please refer to https://developer.here.com/documentation/map-tile/topics/examples.html for detailed APIs
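Applied to the question's configuration, the Source entries become:

"Source": [{
"id": "s1",
"url": "https://1.base.maps.cit.api.here.com/maptile/2.1/maptile/newest/normal.day/{LOD}/{X}/{Y}/256/png8?app_id=MY_ID&app_code=MY_CODE"
}, {
"id": "s2",
"url": "https://2.base.maps.cit.api.here.com/maptile/2.1/maptile/newest/normal.day/{LOD}/{X}/{Y}/256/png8?app_id=MY_ID&app_code=MY_CODE"
}]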
I have a JSON formatted like this one:
{
"contextElements": [
{
"type": "environment",
"isPattern": "false",
"id": "labMax",
"attributes": [
{
"name": "users",
"type": "vector",
"value": [{"userId":"0001", "status":"0"},{"userId":"0002", "status":"0"}]
},
{
"name": "rooms",
"type": "vector",
"value": [{"room1": [ {"id":"room1"}, {"owner":"1"}]},{"id":"room2"}, {"owner":"2"}]
},
{
"name": "sensors",
"type": "vector",
"value": [
{"sensor1": [ {"id":"1"}, {"location":"room1"},{"value":"11"},{"status":"ok"}]},
{"sensor2": [ {"id":"2"}, {"location":"room1"},{"value":"22"},{"status":"update"}]}
]
}
]
}
],
"updateAction": "APPEND"
}
I also have an ONCHANGE subscription on the attribute "sensors", and when I update it without changing any value inside the vector, a notification is triggered. This is probably wrong behaviour, because a subscriber should only be notified when a value changes. On the other hand, if I use strings or integers as attribute values, it works correctly.
Up to Orion 0.16.0 at least, this is a known behaviour. An issue has been opened about it in the Orion github.com repository.
I am using Swagger to document my Jersey-based REST API. (The Swagger UI I am using was downloaded in June 2014; I don't know if this issue has been fixed in later versions, but since I made a lot of customizations to its code, I don't have the option to download the latest without investing a lot of time customizing it again.)
Until now, all my transfer objects have had one-level-deep properties (no embedded POJOs). But now that I have added some REST paths that return more complex objects (two levels deep), I found that Swagger UI does not expand the JSON model schema when there are embedded objects.
Here is the important part of the swagger doc:
...
{
"path": "/user/combo",
"operations": [{
"method": "POST",
"summary": "Inserts a combo (user, address)",
"notes": "Will insert a new user and a address definition in a single step",
"type": "UserAndAddressWithIdSwaggerDto",
"nickname": "insertCombo",
"consumes": ["application/json"],
"parameters": [{
"name": "body",
"description": "New user and address combo",
"required": true,
"type": "UserAndAddressWithIdSwaggerDto",
"paramType": "body",
"allowMultiple": false
}],
"responseMessages": [{
"code": 200,
"message": "OK",
"responseModel": "UserAndAddressWithIdSwaggerDto"
}]
}]
}
...
"models": {
"UserAndAddressWithIdSwaggerDto": {
"id": "UserAndAddressWithIdSwaggerDto",
"description": "",
"required": ["user",
"address"],
"properties": {
"user": {
"$ref": "UserDto",
"description": "User"
},
"address": {
"$ref": "AddressDto",
"description": "Address"
}
}
},
"UserDto": {
"id": "UserDto",
"properties": {
"userId": {
"type": "integer",
"format": "int64"
},
"name": {
"type": "string"
},...
},
"AddressDto": {
"id": "AddressDto",
"properties": {
"addressId": {
"type": "integer",
"format": "int64"
},
"street": {
"type": "string"
},...
}
}
...
The embedded objects are User and Address; their models are created correctly, as shown in the JSON response.
But when opening Swagger UI I can only see:
{
"user": "UserDto",
"address": "AddressDto"
}
But I should see something like:
{
"user": {
"userId": "integer",
"name": "string",...
},
"address": {
"addressId": "integer",
"street": "string",...
}
}
Something may be wrong in the code that expands the internal properties; the JavaScript console doesn't show any errors, so I assume this is a bug.
I found the solution: there is a line of code that needs to be modified to make it work properly.
In the swagger.js file there is a getSampleValue function with a conditional checking for undefined:
SwaggerModelProperty.prototype.getSampleValue = function(modelsToIgnore) {
var result;
// Bug: this compares against the string 'undefined' rather than the undefined value,
// so the check never matches and the referenced model is never expanded
if ((this.refModel != null) && (modelsToIgnore[this.refModel.name] === 'undefined'))
...
I updated the equality check to (removing quotes):
modelsToIgnore[this.refModel.name] === undefined
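With that change, the conditional in getSampleValue reads:

if ((this.refModel != null) && (modelsToIgnore[this.refModel.name] === undefined))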
After that, SwaggerUI is able to show the embedded models.