Cygnus is updating the wrong table - fiware-cygnus

I have a Cygnus instance running with the MySQL sink, and I wanted to save every report in the same table, one row per sensor report. So I created the MySQL table and configured Cygnus with:
cygnusagent.sources.http-source.handler.default_service = $DATABASE
cygnusagent.sources.http-source.handler.default_service_path = $TABLE
cygnusagent.sinks.mysql-sink.attr_persistence = column
cygnusagent.sinks.mysql-sink.table_type = table-by-service-path
Every $-prefixed name is a placeholder for the real value.
Whenever there is an Orion notification I get this log:
WARN sinks.OrionSink: Bad context data (Table '$DATABASE.$TABLE_$ENTITYID_$ENTITYTYPE' doesn't exist)
From this message I gathered that Cygnus does not, as I expected, save the data in a table named after default_service_path. What do I need to change to make it behave that way?


What could cause Firebird to silently turn calculated fields into "normal" fields?

I'm using Firebird 2.5.8 to store information for software I designed.
A customer contacted me today to report multiple errors that I couldn't understand, so I used the "IBExpert" tool to inspect their database.
To my surprise, all the calculated fields had been transformed into "standard" fields. This is clearly visible in the "DDL" tab of the database tool, which displays table definitions as SQL code.
For instance, the following table definition:
CREATE TABLE TVERSIONS (
...
PARENTPATH COMPUTED BY (((SELECT TFILES.FILEPATH FROM TFILES WHERE ID = TVERSIONS.FILEID))),
....
ISCOMPLETE COMPUTED BY ((((SELECT TBACKUPVERSIONS.ISCOMPLETE FROM TBACKUPVERSIONS WHERE ID = TVERSIONS.CVERSION)))),
CDATE COMPUTED BY (((SELECT TBACKUPVERSIONS.SERVERSTARTDATE FROM TBACKUPVERSIONS WHERE ID = TVERSIONS.CVERSION))),
DDATE COMPUTED BY (((SELECT TBACKUPVERSIONS.SERVERSTARTDATE FROM TBACKUPVERSIONS WHERE ID = TVERSIONS.DVERSION))),
...
);
has been "changed" in the client database into this:
CREATE TABLE TVERSIONS (
...
PARENTPATH VARCHAR(512) CHARACTER SET UTF8 COLLATE UNICODE,
...
ISCOMPLETE SMALLINT,
CDATE TIMESTAMP,
DDATE TIMESTAMP,
...
);
How can such a thing be possible?
I've been using Firebird for more than 10 years, and I've never seen this behavior before. Could it be corruption of the RDB$FIELDS.RDB$COMPUTED_SOURCE fields?
What would you advise?
To summarize the discussion on firebird-support (and comments above):
The likely cause of this happening is that the database was backed up and restored using gbak, and the restore did not complete successfully. If this happens, gbak will have ended in an error, and the database is in single shutdown state (which means only SYSDBA or the database owner is allowed to create one connection). If the database is not currently in single shutdown mode, someone used gfix to bring the database online again in normal state.
When a database is restored using gbak, calculated fields are initially created as normal fields (though their values are not part of the backup). After data is restored successfully, those fields are altered to be calculated fields. If there are any errors before or during redefinition of the calculated fields, the restore will fail, and the database will be in single shutdown state, and the calculated fields will still be "normal" fields.
I recommend doing a structural comparison of the database to check if calculated fields are the only problem, or if other things (e.g. constraints) are missing. A simple way to do this is to export the DDL of the database and a "known-good" database, for example using ISQL (command line option -extract), and comparing them with a diff tool.
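As a sketch, such a comparison could look like this (the database file names, user, and password are examples; -extract is the ISQL option mentioned above):

```shell
# Dump the DDL of the suspect database and of a known-good one
isql -user SYSDBA -password masterkey -extract customer.fdb -output customer.sql
isql -user SYSDBA -password masterkey -extract known_good.fdb -output known_good.sql

# Compare the two definitions; look for missing COMPUTED BY fields and constraints
diff customer.sql known_good.sql
```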
Then either fix the existing database by executing the necessary DDL to restore calculated fields (and other things), or create a new empty database, and move the data from the old to the new (using a datapump tool).
Also check if any data is missing. By default, gbak restores the data in a single transaction, so in that case either all data is present or all data is missing. However, gbak also has a "transaction-per-table" mode (-ONE_AT_A_TIME or -O), which could mean some tables have data, and others have no data.

Loopback (DB2) - Cannot create an instance of PersistedModel that uses a schema other than the userid

I am trying to define a model that is based on the PersistedModel to access a table in DB2, call it MY_SCHEMA.MY_TABLE.
I created the model MY_TABLE, based on PersistedModel, with a data source (datasources.json) whose definition includes the attribute "schema": "MY_SCHEMA". The data source also contains the userid my_userid, used for the connection.
Current Behavior
When I try to call the API for this model, it tries to access the table my_userid.MY_TABLE.
Expected Behavior
It should access MY_SCHEMA.MY_TABLE.
The DB2 instance happens to be on a System Z. I have created a table called my_userid.MY_TABLE, and that works; however, the solution we are building requires multiple schemas.
Note that this only appears to be an issue with Db2 on System Z. I can change schemas on Db2 LUW.
What LoopBack connector are you using? What version? Can you also check what version of loopback-ibmdb is installed in your node_modules folder?
AFAICT, LoopBack's DB2-related connectors support a schema field; see https://github.com/strongloop/loopback-ibmdb/blob/master/lib/ibmdb.js#L96-L100:
self.schema = this.username;
if (settings.schema) {
  self.schema = settings.schema.toUpperCase();
}
self.connStr += ';CurrentSchema=' + self.schema;
Have you considered configuring the database connection using DSN instead of individual fields like hostname and username?
In your datasource config JSON:
"dsn": "DATABASE={dbname};HOSTNAME={hostname};UID={username};PWD={password};CurrentSchema=MY_SCHEMA"

Oracle GoldenGate adapter for Kafka - JSON message contents

I am using Oracle GoldenGate for Big Data with the Kafka handler. When I update a record, I get only the updated column and the primary key column in the "after" part of the JSON message:
{"table":"MYSCHEMATOPIC.PASSPORTS","op_type":"U","op_ts":"2018-03-17 13:57:50.000000","current_ts":"2018-03-17T13:57:53.901000","pos":"00000000030000010627","before":{"PASSPORT_ID":71541893,"PPS_ID":71541892,"PASSPORT_NO":"1234567","PASSPORT_NO_NUMERIC":241742,"PASSPORT_TYPE_ID":7,"ISSUE_DATE":null,"EXPIRY_DATE":"0060-12-21 00:00:00","ISSUE_PLACE_EN":"UN-DEFINED","ISSUE_PLACE_AR":"?????? ????????","ISSUE_COUNTRY_ID":203,"ISSUE_GOV_COUNTRY_ID":203,"IS_ACTIVE":1,"PREV_PASSPORT_ID":null,"CREATED_DATE":"2003-06-08 00:00:00","CREATED_BY":-9,"MODIFIED_DATE":null,"MODIFIED_BY":null,"IS_SETTLED":0,"MAIN_PASSPORT_PERSON_INFO_ID":34834317,"NATIONALITY_ID":590},
"after":{"PASSPORT_ID":71541893,"NATIONALITY_ID":589}}
I want the "after" part of the JSON output to show all columns. How can I get all columns in the "after" part?
gg.handlerlist = kafkahandler
gg.handler.kafkahandler.type=kafka
gg.handler.kafkahandler.KafkaProducerConfigFile=custom_kafka_producer.properties
#The following resolves the topic name using the short table name
gg.handler.kafkahandler.topicMappingTemplate=passports
gg.handler.kafkahandler.format=json
gg.handler.kafkahandler.BlockingSend=false
gg.handler.kafkahandler.includeTokens=false
gg.handler.kafkahandler.mode=op
#gg.handler.kafkahandler.format.insertOpKey=I
#gg.handler.kafkahandler.format.updateOpKey=U
#gg.handler.kafkahandler.format.deleteOpKey=D
#gg.handler.kafkahandler.format.truncateOpKey=T
#gg.handler.kafkahandler.format.includeColumnNames=TRUE
goldengate.userexit.timestamp=utc
goldengate.userexit.writers=javawriter
javawriter.stats.display=TRUE
javawriter.stats.full=TRUE
gg.log=log4j
gg.log.level=info
gg.report.time=30sec
Try using the Kafka Connect handler instead - this includes the full payload. This article goes through the setup process.
This issue was fixed by adding the following change on the GoldenGate side:
ADD TRANDATA table_name ALLCOLS
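For reference, that change is issued from the GGSCI utility on the source side; a sketch (the schema, table name, and credentials here are placeholders):

```
GGSCI> DBLOGIN USERID ogguser, PASSWORD oggpasswd
GGSCI> ADD TRANDATA MYSCHEMA.PASSPORTS ALLCOLS
```

ALLCOLS enables supplemental logging of all columns, so update records carry the full before/after images instead of only the changed columns and keys.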

Trying to send data from ThingWorx Composer to a Postgres database

I created a Thing that accesses my PostgreSQL database table, named sensordata. Now I have to send data to this table. How can I do this?
I have already done the connection part between ThingWorx Composer and the PostgreSQL DB on a local setup.
I am trying to send sensor data from ThingWorx to the PostgreSQL DB, but I am not able to send it.
You need to do two things:
1. Create a service to insert a row on the postgresql_conn Thing:
Select 'SQL (Command)' as the script type.
Put something like this into the script area:
INSERT INTO sensordata
(Temperature, Humidity, Vibration)
VALUES ([[TemperatureField]], [[HumidityField]], [[VibrationField]]);
TemperatureField, HumidityField, and VibrationField are input fields of the service.
2. Create a subscription on the sensordata Thing:
Set AnyDataChange as the event.
Put something like this in the script area:
var params = {
    TemperatureField: me.Temperature,
    HumidityField: me.Humidity,
    VibrationField: me.Vibration
};
var result = Things["postgresql_conn"].InsertRecord(params);
Now, whenever the data of the sensordata Thing changes, one row is added to the Postgres table.

Spring: store data permanently with JdbcTemplate (H2 DB)

I am starting to learn Spring and ran into some issues with spring-jdbc.
First, I ran the example from this guide: https://spring.io/guides/gs/relational-data-access/ and it worked. Then I commented out the lines that drop and create the tables (http://pastebin.com/zcJHsL1P), so as not to overwrite the data but just read it from the DB and show it. However, Spring gave me this error:
Table "CUSTOMERS" not found; SQL statement: ...
So my question is: what should I do to store my database permanently? I don't want to recreate the database every time; I want to create it once and keep updating it.
P.S. I used the H2 database. Maybe the problem lies in this DB?
That piece of code looks like you are "prototyping" something, so it's easier to automatically create a new database (schema, tables, data) on the fly, execute and/or test whatever you want, and finish the run.
If you want to persist your data and only modify/update it, either use H2 with its "file layout" or use MySQL, PostgreSQL, etcetera.
By the way, the reason you are getting Table "CUSTOMERS" not found; SQL statement: ... is that you are using H2 as an in-memory database, so every time you start your application you need to re-create the tables and populate them with data.
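A minimal sketch of the file-layout option (the property names assume a Spring Boot setup like the linked guide; the path is an example):

```properties
# application.properties: point H2 at a file on disk instead of an in-memory DB
spring.datasource.url=jdbc:h2:file:./data/customers;DB_CLOSE_ON_EXIT=FALSE
spring.datasource.username=sa
spring.datasource.password=
```

With a file URL the data survives restarts; you can then keep the schema-creation statement but change it to CREATE TABLE IF NOT EXISTS so re-running the application does not wipe existing rows.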