Using NGSI Adapter to post updates to Orion Context Broker - fiware-monitoring

I’m trying to use the Monitoring GE / Sextant tool called NGSI Adapter, running the command-line example suggested at https://github.com/telefonicaid/fiware-monitoring/blob/develop/README.rst#api-overview
I always get this response:
time=2016-03-16T11:06:18.794Z | lvl=INFO | trans=ciluqsqi1000077m2u9zwb8x5 | op=POST | msg=Request on resource /check_load with params id=host_1&type=host
time=2016-03-16T11:06:18.800Z | lvl=INFO | trans=ciluqsqi1000077m2u9zwb8x5 | op=POST | msg=Response status 200 OK
time=2016-03-16T11:05:07.004Z | lvl=INFO | trans=ciluqr73e0000umm2nir549ts | op=UpdateContext | msg=Request to ContextBroker at http://orion:1026...
time=2016-03-16T11:05:07.013Z | lvl=INFO | trans=ciluqr73e0000umm2nir549ts | op=UpdateContext | msg=Response status 415 Unsupported Media Type
From looking at the release notes for Orion 3.4.1, "Unsupported Media Type" indicates that the Content-Type in the request header is unacceptable. From browsing the code in lib/parsers/common/base.js, it seems that NGSI Adapter currently only supports XML, while I think that Orion now only supports JSON.
Am I correct that this incompatibility between NGSI Adapter and Orion exists?
When is a fix expected?

You're right.
Taking a look at the release notes, you’ll find that Orion 0.28.0 is the last version that includes XML support (deprecated since 0.23.0).
For that reason, a new NGSI Adapter v1.4.0 (included as part of FIWARE Monitoring 5.2.3) has been released. Please follow these instructions to install it, and check the documentation at ReadTheDocs.
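For context, the 415 response comes from the request body format: recent Orion versions only accept JSON for NGSIv1 operations such as updateContext. A minimal sketch of such a payload (the attribute name and value here are illustrative, not taken from the logs):

```python
import json

# Sketch of the JSON body an NGSIv1 updateContext request carries.
# Attribute name/value are illustrative assumptions.
payload = {
    "contextElements": [{
        "type": "host",
        "isPattern": "false",
        "id": "host_1",
        "attributes": [
            {"name": "loadavg_1min", "type": "number", "value": "0.28"}
        ],
    }],
    "updateAction": "APPEND",
}
headers = {"Content-Type": "application/json", "Accept": "application/json"}

body = json.dumps(payload)
# body is what gets POSTed to http://orion:1026/v1/updateContext;
# the application/json Content-Type is what avoids the 415 response.
```

An adapter sending this body with an XML Content-Type (or an XML body) would get exactly the 415 seen in the logs above.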
Thank you for using FIWARE Monitoring.

Related

Fuse Karaf rest-dsl-simple missing

Trying the Fuse Karaf quickstart "rest-dsl-simple": the build and install in Fuse appear successful, but the tests do not pass. They both just say the site cannot be reached.
Looking in Fuse.log I see...
2021-03-28 08:34:34,680 | INFO | FelixStartLevel | o.a.k.s.i.a.o.CommandExtension | 152 - org.apache.karaf.shell.core - 4.2.9.fuse-780023-redhat-00001 | Command registration delayed for bundle org.apache.karaf.http.core/4.2.9.fuse-780023-redhat-00001. Missing service: [org.apache.karaf.http.core.ServletService, org.apache.karaf.http.core.ProxyService]
Then later I see my bundle set to failure...
2021-03-28 08:34:43,066 | ERROR | Event Dispatcher: 1 | o.a.c.b.BlueprintCamelContext | 62 - org.apache.camel.camel-blueprint - 2.23.2.fuse-780036-redhat-00001 | Error occurred during starting CamelContext: fusequickstart-restdsl-simple-camel
org.apache.camel.FailedToCreateRouteException: Failed to create route route3: Route(route3)[[From[rest://get:/simplerest:/get?componentNam... because of No bean could be found in the registry for: restlet of type: org.apache.camel.spi.RestConsumerFactory
at
What feature(s) do I need to install to rectify this error?
I fixed it by going to http://localhost:8181/hawtio/osgi/features, filtering on REST features, and installing them. Yes, I know this is a shotgun approach, but it ensures that whatever technology I choose down the road will likely be covered.
Now the rest-dsl-simple Fuse Karaf quickstart is working, and I can move on to my own code, knowing the fuse-karaf engine is running normally.
:-)
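For what it's worth, a more targeted fix than installing every REST feature may be possible: the "No bean could be found in the registry for: restlet of type: org.apache.camel.spi.RestConsumerFactory" message suggests the Camel restlet component is what's missing, and that is typically provided by the camel-restlet feature (my assumption based on the error text; feature availability can vary between Fuse versions):

```
# In the Fuse/Karaf console (assumption: camel-restlet supplies the
# RestConsumerFactory for the "restlet" component used by the quickstart)
feature:install camel-restlet
```

After installing, restarting the quickstart bundle should let route3 start.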

FIWARE Orion Runtime Error

I am using FIWARE Orion (in a Docker image) and I am facing the possibility of losing some records. I looked in the log and found a number of errors like the following:
time=Sunday 17 Dec 21:03:13 2017.743Z | lvl=ERROR | corr=N/A | trans=N/A | from=N/A | srv=N/A | subsrv=N/A | comp=Orion | op=safeMongo.cpp[287]:setStringVector | msg=Runtime Error (element 0 in array was supposed to be an string but type=3 from caller mongoSubCacheItemInsert:225)
According to http://fiware-orion.readthedocs.io/en/0.26.1/admin/logs/ this kind of error (Runtime) "may cause the Context Broker to fail" and "should be reported to the Orion development team using the appropriate channel", and that is exactly what I am doing.
Any help will be highly appreciated.
Thank you very much in advance.
EDIT: Orion version is 1.5.0-next
EDIT: It has been upgraded to 1.10.0
EDIT: After executing ps ax | grep contextBroker I receive the following results:
23470 ? Ssl 4:24 /usr/bin/contextBroker -fg -multiservice -dbhost mongodb
EDIT: The problem occurs periodically. Actually, it takes place exactly every minute:
time=Wednesday 20 Dec 20:50:27 2017.235Z
time=Wednesday 20 Dec 20:51:27 2017.237Z
etc.
Orion 1.5.0-next means some version between 1.5.0 (released in October 2016) and 1.6.0 (released in December 2016). In the best case, your version is one year old, which is quite a long time.
Thus, I recommend upgrading to the newest available Orion version (at the moment of writing, that is 1.10.0, released in December 2017). We have solved some "overlogging" problems in the delta of changes between 1.6.0 and 1.10.0, and the one you mention could be one of them.
If the problem persists after upgrading, mention it in a comment to this answer and we'll keep debugging.
Diagnosis
The 60-second periodicity is exactly the subscription cache refresh interval under the default configuration (your CLI confirms you are not using a different setting for the subscription cache).
Looking in detail at the line referred to by the log trace in the Orion 1.10.0 source code:
setStringVectorF(sub, CSUB_CONDITIONS, &(cSubP->notifyConditionV));
The log error means that Orion expects an array of strings for the CSUB_CONDITIONS field in a document of the subscriptions collection in the database, but some (or all) of the elements in the array aren't strings but objects (type 3 means object, as the BSON specification details).
The CSUB_CONDITIONS constant corresponds to the conditions field in the DB. Note that this field changed in Orion 1.3.0. Before 1.3.0 (for instance, in 1.2.0) it was an array of objects:
"conditions" : [
  {
    "type" : "ONCHANGE",
    "value" : [ "temperature" ]
  }
]
From 1.3.0 on, it was simplified to an array of strings:
"conditions" : [ "temperature" ]
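The difference between the two layouts can be captured with a small check like this (a sketch in Python; it only mirrors the two document shapes shown above):

```python
def has_pre_130_conditions(sub_doc):
    """Return True if any element of 'conditions' is not a plain string,
    i.e. the document still uses the pre-1.3.0 array-of-objects layout
    that triggers the setStringVector runtime error."""
    return any(not isinstance(c, str) for c in sub_doc.get("conditions", []))

old_doc = {"conditions": [{"type": "ONCHANGE", "value": ["temperature"]}]}
new_doc = {"conditions": ["temperature"]}

print(has_pre_130_conditions(old_doc))  # True: pre-1.3.0 layout
print(has_pre_130_conditions(new_doc))  # False: post-1.3.0 layout
```

The same predicate, expressed as a MongoDB query over the csubs collection, would look for conditions elements of BSON type 3 (object).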
So my hypothesis is that at some point in the past that Orion instance was upgraded across the 1.3.0 boundary without applying the procedure to migrate data (or the procedure was applied but failed in some way).
Solution
Given that you are in a situation in which your data in the Orion database is probably inconsistent, the cleanest solution would be to remove your database, or at least the csubs collection.
However, this is possible only if you can regenerate the deleted data in an easy way. If that is not feasible, you can try the procedure to migrate data. In particular, the csub_merge_condvalues.py script should fix the problem, although I'd recommend applying the full procedure in order to fix other potential inconsistencies.
Take into account that the migration procedure was designed to be applied before starting to use the new Orion version. It seems you have been using a post-1.3.0 Orion with pre-1.3.0 data for some time, so your data may have evolved in some unexpected way that the procedure cannot fix. Anyway, even in that case the procedure is better than nothing :)
Note that if you are using multiple services (it seems so, given the -multiservice CLI parameter), you have to apply the clean/migration procedure to every per-service database.

Creating PostgreSQL DataSource via pax-jdbc config file on karaf 4

On my karaf 4.0.8 I've installed the feature pax-jdbc-postgresql. The DataFactory for PostgreSQL is installed:
[org.osgi.service.jdbc.DataSourceFactory]
osgi.jdbc.driver.class org.postgresql.Driver
osgi.jdbc.driver.name PostgreSQL JDBC Driver
osgi.jdbc.driver.version PostgreSQL 9.4 JDBC4.1 (build 1203)
service.bundleid 204
service.scope singleton
Using Bundles com.eclipsesource.jaxrs.publisher (184)
I've created the file etc/org.ops4j.datasource-psql-sandbox.cfg:
osgi.jdbc.driver.class=org.postgresql.Driver
osgi.jdbc.driver.name=PostgreSQL
url=jdbc:postgresql://localhost:5432/sandbox
dataSourceName=psql-sandbox
user=sandbox
password=sandbox
After that, I see the confirmation in karaf.log that the file was processed:
2017-02-10 14:54:17,468 | INFO | 41-88b277ae0921) | DataSourceRegistration | 154 - org.ops4j.pax.jdbc.config - 0.9.0 | Detected config for DataSource psql-sandbox. Tracking DSF with filter (&(objectClass=org.osgi.service.jdbc.DataSourceFactory)(osgi.jdbc.driver.class=org.postgresql.Driver)(osgi.jdbc.driver.name=PostgreSQL))
However, I see no new DataSource in the services list in the console. What went wrong? I see no exceptions in the log.
The log message tells you that the config was processed and that it is now searching for a suitable DataSourceFactory OSGi service.
The problem in your case is that it does not find such a service. So, to debug this, you should list all DataSourceFactory services and check their properties:
service:list DataSourceFactory
In my case it shows this:
[org.osgi.service.jdbc.DataSourceFactory]
-----------------------------------------
osgi.jdbc.driver.class = org.postgresql.Driver
osgi.jdbc.driver.name = PostgreSQL JDBC Driver
...
As you can see, it does not match the filter you see in the log: your config asks for osgi.jdbc.driver.name=PostgreSQL, while the service is registered with osgi.jdbc.driver.name=PostgreSQL JDBC Driver. Generally you should provide either osgi.jdbc.driver.class or osgi.jdbc.driver.name, not both. If you remove the osgi.jdbc.driver.name line, the config will work.
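With the osgi.jdbc.driver.name line removed, etc/org.ops4j.datasource-psql-sandbox.cfg would look like this (same values as in the question, only the conflicting property dropped):

```
osgi.jdbc.driver.class=org.postgresql.Driver
url=jdbc:postgresql://localhost:5432/sandbox
dataSourceName=psql-sandbox
user=sandbox
password=sandbox
```

The resulting service filter then only constrains osgi.jdbc.driver.class, which the PostgreSQL DataSourceFactory matches.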
There is no error message because the system cannot know whether the error is transient or not. Basically, as soon as you install a matching OSGi service, the DataSource will be created.

502 Bad Gateway error thrown when tunneling to a Mongo database on Cloud Foundry

I used the Grails Cloud Foundry plugin to tunnel to a remote MongoDB service. The connection is fine at first, since I can run a search, but after a couple of seconds the terminal starts printing 502 Bad Gateway errors and I am no longer able to execute any MongoDB commands.
| Run cf-tunnel-disconnect to close the current tunnel
|
Error Exception in thread "ThreadPoolTaskExecutor-3"
| Error org.cloudfoundry.caldecott.TunnelException: Error while reading from tunnel
| Error at org.cloudfoundry.caldecott.client.TunnelHandler$Reader.run(TunnelHandler.java:172)
| Error at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
| Error at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
| Error at java.lang.Thread.run(Thread.java:680)
| Error Caused by: org.springframework.web.client.HttpServerErrorException: 502 Bad Gateway
| Error at org.springframework.web.client.DefaultResponseErrorHandler.handleError(DefaultResponseErrorHandler.java:92)
| Error at org.springframework.web.client.RestTemplate.handleResponseError(RestTemplate.java:494)
| Error at org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:451)
| Error at org.springframework.web.client.RestTemplate.execute(RestTemplate.java:409)
| Error at org.cloudfoundry.caldecott.client.HttpTunnel.receiveDataBuffered(HttpTunnel.java:150)
| Error at org.cloudfoundry.caldecott.client.HttpTunnel.receiveBytes(HttpTunnel.java:140)
| Error at org.cloudfoundry.caldecott.client.HttpTunnel.read(HttpTunnel.java:83)
| Error at org.cloudfoundry.caldecott.client.TunnelHandler$Reader.run(TunnelHandler.java:148)
| Error ... 3 more
That looks like an error-handling problem that was fixed in later releases of cloudfoundry-caldecott-lib. The latest is 0.1.3 and is available from the SpringSource milestone repo (http://repo.springsource.org/libs-milestone/org/cloudfoundry/cloudfoundry-caldecott-lib/).
I'm not sure which version the Grails plugin uses, but if it's an older one, that would explain why you are seeing this.
Thanks to @trisberg's explanation and @scott's finding, I am now able to use VMC to tunnel to my remote DB. Problem solved.

How to scrape Facebook advertising data?

Facebook provides demographic data via its advertising platform. How can it be scraped (using Python)?
1.) go to http://www.facebook.com/ads/create/
2.) fill in the forms
3.) now, there is data
See sample image: http://www.webdistortion.com/wp-content/uploads/2010/10/fb4.jpg
(I am a new user, so I can't post an image.)
Problem: how to scrape it?
My ideas:
1.) use mechanize - maybe it is possible to fill in the forms, but the estimated number (112,960 in the example) is not visible in the page source, so you cannot parse it => some other trick is needed, but which?
2.) use selenium (or windmill) - my recording was: open facebook.com --> click Advertising --> click Create ad --> ...
Unfortunately, this already failed. Log:
[info] Executing: |open | / | |
[info] Executing: |clickAndWait | link=Advertising | |
[error] isNewPageLoaded found an old pageLoadError: Error: Permission denied for >> to get property Location.href
[error] Permission denied for to get property Location.href
[info] Executing: |clickAndWait | css=span.uiButtonText | |
[error] Unexpected Exception: fileName -> chrome://selenium-ide/content/selenium-core/scripts/selenium-browserbot.js, lineNumber -> 840
There is evidence that it is possible to scrape this data: http://www.checkfacebook.com/
Solving the problem is more interesting than the data itself (of course, the data is certainly interesting too). I know that there are solutions, but I cannot come up with any. It is killing me, please help.
I'm not quite sure what you mean by scraping the data. Do you mean using the public Ads API (https://developers.facebook.com/docs/reference/ads-api/) and calling the reach estimate function (https://developers.facebook.com/docs/reference/ads-api/reachestimate/)?
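If the reach estimate function is indeed what's wanted, a call can be sketched as below. The endpoint path, parameter names, and the account/token values are assumptions for illustration based on the linked reachestimate docs; verify them there before use:

```python
import json
from urllib.parse import urlencode

# Hypothetical ad account id and access token (placeholders, not real values)
ACCOUNT_ID = "123456789"
ACCESS_TOKEN = "YOUR_TOKEN"

# Example targeting: US users aged 18-35 (field names assumed from the docs)
targeting = {"geo_locations": {"countries": ["US"]}, "age_min": 18, "age_max": 35}

params = urlencode({
    "targeting_spec": json.dumps(targeting),
    "access_token": ACCESS_TOKEN,
})
url = f"https://graph.facebook.com/act_{ACCOUNT_ID}/reachestimate?{params}"
print(url)
# An HTTP GET on this URL would return a JSON reach estimate
# (e.g. via urllib.request or the requests library).
```

Using the API this way avoids the brittleness of driving the ad-creation form with mechanize or selenium, since the estimate is returned as structured JSON rather than rendered into the page by JavaScript.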