Convert 'connections.json' file to 'tnsnames.ora' - oracle-sqldeveloper

Is there a way to convert a 'connections.json' file to 'tnsnames.ora'?
I have approximately 50 database connections, and I'd rather not retype them one by one in a text editor. Is there a simpler way?
I'm using SQL Developer version 20.2.0.175.1842.
Below is an excerpt (roughly a quarter) of my "connections.json" file; the full file is very large.
{
  "connections": [{
    "info": {
      "role": "",
      "SavePassword": "true",
      "OracleConnectionType": "BASIC",
      "RaptorConnectionType": "Oracle",
      "serviceName": "ABC",
      "Connection-Color-For-Editors": "-12417793",
      "customUrl": "jdbc:oracle:thin:#//a.b.c.d:1526/BDNPDBP",
      "oraDriverType": "thin",
      "NoPasswordConnection": "TRUE",
      "password": "aaa/bbb/ccc=",
      "hostname": "a.b.c.d",
      "driver": "oracle.jdbc.OracleDriver",
      "port": "1526",
      "subtype": "oraJDBC",
      "OS_AUTHENTICATION": "false",
      "ConnName": "BDAPDPBP-SGP-PRD_FAB",
      "KERBEROS_AUTHENTICATION": "false",
      "user": "ABC"
    },
    "name": "BDAPDPBP-SGP-PRD_FAB",
    "type": "jdbc"
  }, ...
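As far as I know, SQL Developer does not export directly to tnsnames.ora, but since connections.json is plain JSON, a small script can generate the entries. Below is a minimal sketch in Python; it assumes BASIC-style connections with the hostname, port, and serviceName fields shown above, and the file paths are placeholders, so connections that use a custom URL, SID, or LDAP would need separate handling.

import json

# Minimal sketch: read SQL Developer's connections.json and write tnsnames.ora
# entries. Assumes BASIC connections (hostname/port/serviceName) as in the
# excerpt above; paths are placeholders.
with open("connections.json") as f:
    connections = json.load(f)["connections"]

entries = []
for conn in connections:
    info = conn["info"]
    alias = conn["name"].replace(" ", "_")  # TNS aliases must not contain spaces
    entries.append(
        f"{alias} =\n"
        "  (DESCRIPTION =\n"
        f"    (ADDRESS = (PROTOCOL = TCP)(HOST = {info['hostname']})(PORT = {info['port']}))\n"
        f"    (CONNECT_DATA = (SERVICE_NAME = {info['serviceName']}))\n"
        "  )\n"
    )

with open("tnsnames.ora", "w") as f:
    f.write("\n".join(entries))

You can then point your Oracle client's TNS_ADMIN, or SQL Developer itself (Tools > Preferences > Database > Advanced > Tnsnames Directory), at the directory containing the generated file.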

Related

Kafka jdbc sink connector to Postgres failing with "cross-database references are not implemented"

My setup is based on Docker containers: one Oracle DB, Kafka, Kafka Connect, and Postgres. I first use the Oracle CDC connector to feed Kafka, which works fine. Then I try to read that topic and feed it into Postgres.
When I start the connector, I get:
"trace": "org.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask due to unrecoverable exception.\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:614)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:329)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:232)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:201)\n\tat org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:185)\n\tat org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:234)\n\tat java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)\n\tat java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.base/java.lang.Thread.run(Thread.java:834)\nCaused by: org.apache.kafka.connect.errors.ConnectException: java.sql.SQLException: Exception chain:\norg.postgresql.util.PSQLException: ERROR: cross-database references are not implemented: "ORCLCDB.C__MYUSER.EMP"\n Position: 14\n\n\tat io.confluent.connect.jdbc.sink.JdbcSinkTask.put(JdbcSinkTask.java:122)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:586)\n\t... 10 more\nCaused by: java.sql.SQLException: Exception chain:\norg.postgresql.util.PSQLException: ERROR: cross-database references are not implemented: "ORCLCDB.C__MYUSER.EMP"\n Position: 14\n\n\tat io.confluent.connect.jdbc.sink.JdbcSinkTask.getAllMessagesException(JdbcSinkTask.java:150)\n\tat io.confluent.connect.jdbc.sink.JdbcSinkTask.put(JdbcSinkTask.java:102)\n\t... 11 more\n"
My connector config JSON looks like this:
{
  "name": "SimplePostgresSink",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "name": "SimplePostgresSink",
    "tasks.max": 1,
    "topics": "ORCLCDB.C__MYUSER.EMP",
    "key.converter": "io.confluent.connect.avro.AvroConverter",
    "key.converter.schema.registry.url": "http://schema-registry:8081",
    "value.converter": "io.confluent.connect.avro.AvroConverter",
    "value.converter.schema.registry.url": "http://schema-registry:8081",
    "confluent.topic.bootstrap.servers": "kafka:29092",
    "connection.url": "jdbc:postgresql://postgres:5432/postgres",
    "connection.user": "postgres",
    "connection.password": "postgres",
    "insert.mode": "upsert",
    "pk.mode": "record_value",
    "pk.fields": "I",
    "auto.create": "true",
    "auto.evolve": "true"
  }
}
And the topic schema is:
{
  "type": "record",
  "name": "ConnectDefault",
  "namespace": "io.confluent.connect.avro",
  "fields": [
    {
      "name": "I",
      "type": {
        "type": "bytes",
        "scale": 0,
        "precision": 64,
        "connect.version": 1,
        "connect.parameters": { "scale": "0" },
        "connect.name": "org.apache.kafka.connect.data.Decimal",
        "logicalType": "decimal"
      }
    },
    { "name": "NAME", "type": ["null", "string"], "default": null },
    { "name": "table", "type": ["null", "string"], "default": null },
    { "name": "scn", "type": ["null", "string"], "default": null },
    { "name": "op_type", "type": ["null", "string"], "default": null },
    { "name": "op_ts", "type": ["null", "string"], "default": null },
    { "name": "current_ts", "type": ["null", "string"], "default": null },
    { "name": "row_id", "type": ["null", "string"], "default": null },
    { "name": "username", "type": ["null", "string"], "default": null }
  ]
}
I am only interested in the columns I and NAME.
This can also be caused by the db name in your connection.url not matching the db name in your table.name.format. I recently experienced this and thought it might help others.
Looking at the log, I saw that the connector had a problem creating a table with this name:
[2022-01-07 23:56:11,737] INFO JdbcDbWriter Connected (io.confluent.connect.jdbc.sink.JdbcDbWriter)
[2022-01-07 23:56:11,759] INFO Checking PostgreSql dialect for existence of TABLE "ORCLCDB"."C__MYUSER"."EMP" (io.confluent.connect.jdbc.dialect.GenericDatabaseDialect)
[2022-01-07 23:56:11,764] INFO Using PostgreSql dialect TABLE "ORCLCDB"."C__MYUSER"."EMP" absent (io.confluent.connect.jdbc.dialect.GenericDatabaseDialect)
[2022-01-07 23:56:11,764] INFO Creating table with sql: CREATE TABLE "ORCLCDB"."C__MYUSER"."EMP" (
"I" DECIMAL NOT NULL,
"NAME" TEXT NOT NULL,
PRIMARY KEY("I","NAME")) (io.confluent.connect.jdbc.sink.DbStructure)
[2022-01-07 23:56:11,765] WARN Create failed, will attempt amend if table already exists (io.confluent.connect.jdbc.sink.DbStructure)
org.postgresql.util.PSQLException: ERROR: cross-database references are not implemented: "ORCLCDB.C__MYUSER.EMP"
Extending on Steven's comment:
This can also occur if your topic name contains full stops.
If you don't set table.name.format, it defaults to ${topicName}, so the dotted topic name is interpreted as database.schema.table.
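As a sketch of that fix (not the only way to do it), you can set table.name.format explicitly so the sink stops treating the dotted topic name as database.schema.table. The Connect host/port and the target table name below are assumptions based on the setup in the question.

import requests  # assumes the Kafka Connect REST API is reachable from here

# Same sink settings as in the question, plus an explicit table name; the
# converter and schema-registry settings from the question would go here too.
sink_config = {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "tasks.max": "1",
    "topics": "ORCLCDB.C__MYUSER.EMP",
    "table.name.format": "emp",  # override the ${topicName} default
    "connection.url": "jdbc:postgresql://postgres:5432/postgres",
    "connection.user": "postgres",
    "connection.password": "postgres",
    "insert.mode": "upsert",
    "pk.mode": "record_value",
    "pk.fields": "I",
    "auto.create": "true",
    "auto.evolve": "true",
}

# PUT creates or updates the connector with the new configuration.
resp = requests.put(
    "http://localhost:8083/connectors/SimplePostgresSink/config",
    json=sink_config,
)
resp.raise_for_status()

An alternative is to leave table.name.format alone and rename the topic in flight with a RegexRouter transform, which has the same effect of giving the sink a dot-free table name.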

Linked Service parameterization not working for Linked Service of type Azure Data Explorer (Kusto)

I initially created the following linked service of type AzureDataExplorer in ADFv2 for accessing my database in ADX called CustomerDB, and it published successfully:-
{
  "name": "ls_AzureDataExplorer",
  "properties": {
    "type": "AzureDataExplorer",
    "annotations": [],
    "typeProperties": {
      "endpoint": "https://mycluster.xxxxmaskingregionxxxx.kusto.windows.net",
      "tenant": "xxxxmaskingtenantidxxxx",
      "servicePrincipalId": "xxxxmaskingspxxxx",
      "servicePrincipalKey": {
        "type": "AzureKeyVaultSecret",
        "store": {
          "referenceName": "ls_AzureKeyVault_MyKeyVault",
          "type": "LinkedServiceReference"
        },
        "secretName": "MySecret"
      },
      "database": "CustomerDB"
    }
  },
  "type": "Microsoft.DataFactory/factories/linkedservices"
}
This worked smoothly. I had to mask some values for obvious reasons, but I just want to say that there is no issue with this connection. Now, inspired by this Microsoft documentation, I am trying to create a generic version of this linked service, which makes sense because otherwise, if I have 10 databases in the cluster, I will have to create 10 different linked services.
So I tried to create the parameterized version in the following manner:-
{
  "name": "ls_AzureDataExplorer_Generic",
  "properties": {
    "type": "AzureDataExplorer",
    "annotations": [],
    "typeProperties": {
      "endpoint": "https://mycluster.xxxxmaskingregionxxxx.kusto.windows.net",
      "tenant": "xxxxmaskingtenantidxxxx",
      "servicePrincipalId": "xxxxmaskingspxxxx",
      "servicePrincipalKey": {
        "type": "AzureKeyVaultSecret",
        "store": {
          "referenceName": "ls_AzureKeyVault_MyKeyVault",
          "type": "LinkedServiceReference"
        },
        "secretName": "MySecret"
      },
      "database": "#{linkedService().DBName}"
    }
  },
  "type": "Microsoft.DataFactory/factories/linkedservices"
}
But while publishing the changes, I keep getting an error.
Is there any solution to this?
The article clearly says that:-
For all other data stores, you can parameterize the linked service by selecting the Code icon on the Connections tab and using the JSON editor
So, as per that, my changes should have been published successfully, but I keep getting the error.
It appears I need to declare the parameter elsewhere in the same JSON. The following worked:-
{
  "name": "ls_AzureDataExplorer_Generic",
  "properties": {
    "parameters": {
      "DBName": {
        "type": "string"
      }
    },
    "type": "AzureDataExplorer",
    "annotations": [],
    "typeProperties": {
      "endpoint": "https://mycluster.xxxxmaskingregionxxxx.kusto.windows.net",
      "tenant": "xxxxmaskingtenantidxxxx",
      "servicePrincipalId": "xxxxmaskingspxxxx",
      "servicePrincipalKey": {
        "type": "AzureKeyVaultSecret",
        "store": {
          "referenceName": "ls_AzureKeyVault_MyKeyVault",
          "type": "LinkedServiceReference"
        },
        "secretName": "MySecret"
      },
      "database": "#{linkedService().DBName}"
    }
  },
  "type": "Microsoft.DataFactory/factories/linkedservices"
}
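For completeness, whatever references the generic linked service then has to supply DBName. The sketch below shows a hypothetical dataset definition doing that; the dataset and table names are made up, and only the linkedServiceName block is the relevant part.

import json

# Hypothetical dataset that passes DBName to the parameterized linked service.
dataset = {
    "name": "ds_CustomerDB_SomeTable",
    "properties": {
        "linkedServiceName": {
            "referenceName": "ls_AzureDataExplorer_Generic",
            "type": "LinkedServiceReference",
            "parameters": {"DBName": "CustomerDB"},
        },
        "type": "AzureDataExplorerTable",
        "typeProperties": {"table": "SomeTable"},
    },
}

# Print the JSON you would paste into the dataset's Code editor in ADF.
print(json.dumps(dataset, indent=2))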

loopback w/ table schemas, not identifying schema in options

Nature of the issue
My DB2 database makes wide use of table schemas for organization, so the table in question is LIVE.TBLADDRESS.
My model uses "options" to specify the table schema:
"options": {
  "idInjection": false,
  "db2": {
    "schema": "LIVE",
    "table": "TBLADDRESS"
  }
}
The model is registered in model-config.json using:
"Tbladdress": {
  "dataSource": "x3",
  "public": true
}
I get an error when I try to use the explorer to do a simple 'get' or any other API call.
"statusCode": 500,
"name": "Error",
"message": "[IBM][CLI Driver][DB2/LINUXX8664] SQL0204N "DB2X.TBLADDRESS" is an undefined name. SQLSTATE=42704\r\n",
Expected behavior
Once I specified the schema, I'd expect the API to resolve correctly.
Actual behavior
The default schema for the DB user is used at all times, regardless of the schema specified in options.
Suggested resolution
Maybe I set it in the wrong place; I will continue to look for the information. It is possible I am missing something.
This is what I "see" using DB Viewer...so you have an idea what I'm referring to.
DEV - host:50000/DEV
-schemas
|-AAA
|-BBB
|-DB2X (this is the schema that the error is referring to...but NOT the one specified in the model)
|-DDD
|-LIVE (this is the correct schema)
|--Tables
|--|-TBLA
|--|-TBLADDRESS
|-ZZZ
If it helps, this happens with manually created models as well as models generated by discovery scripts.
These are my config files, and model
/common/models/Tbladdress.json
{
  "name": "Tbladdress",
  "options": {
    "idInjection": false,
    "db2": {
      "schema": "LIVE",
      "table": "TBLADDRESS"
    }
  },
  "properties": {
    ...
  }
}
/datasources.json
{
  "db": {
    "name": "db",
    "connector": "memory"
  },
  "x3": {
    "name": "x3",
    "connector": "db2",
    "username": "...",
    "password": "...",
    "database": "...",
    "hostname": "...",
    "port": 50000
  }
}
/model-config.json
{
  "_meta": {
    ...
  },
  "User": {
    "dataSource": "db"
  },
  "AccessToken": {
    "dataSource": "db",
    "public": false
  },
  "ACL": {
    "dataSource": "db",
    "public": false
  },
  "RoleMapping": {
    "dataSource": "db",
    "public": false,
    "options": {
      "strictObjectIDCoercion": true
    }
  },
  "Role": {
    "dataSource": "db",
    "public": false
  },
  "Tbladdress": {
    "dataSource": "x3",
    "public": true
  }
}
http://localhost:3000/explorer/#!/Tbladdress/Tbladdress_findById
{
  "error": {
    "statusCode": 500,
    "name": "Error",
    "message": "[IBM][CLI Driver][DB2/LINUXX8664] SQL0204N \"DB2X.TBLADDRESS\" is an undefined name. SQLSTATE=42704\r\n",
    "errors": [],
    "error": "[node-ibm_db] SQL_ERROR",
    "state": "42S02",
    "stack": "Error: [IBM][CLI Driver][DB2/LINUXX8664] SQL0204N \"DB2X.TBLADDRESS\" is an undefined name. SQLSTATE=42704\r\n"
  }
}
...Headers...
{
  "date": "Sun, 18 Feb 2018 05:20:36 GMT",
  "x-content-type-options": "nosniff",
  "x-download-options": "noopen",
  "x-frame-options": "DENY",
  "content-type": "application/json; charset=utf-8",
  "transfer-encoding": "chunked",
  "connection": "keep-alive",
  "access-control-allow-credentials": "true",
  "vary": "Origin, Accept-Encoding",
  "x-xss-protection": "1; mode=block"
}
USING:
loopback-cli v3 to generate express app
loopback-connector-db2 to connect to DB2 v10
Node v8.9.2
The package.json dependencies look like this (as mentioned, it's a default install with one model added, to see if I could get it to work):
"dependencies": {
  "compression": "^1.0.3",
  "cors": "^2.5.2",
  "helmet": "^1.3.0",
  "loopback": "^3.0.0",
  "loopback-boot": "^2.6.5",
  "loopback-component-explorer": "^5.0.0",
  "loopback-connector-db2": "^2.1.1",
  "serve-favicon": "^2.0.1",
  "strong-error-handler": "^2.0.0"
},
Yes, the DB2 connector worked fine when I specified the "LIVE" schema during data discovery, but it does NOT seem to work when I use the API. I don't know if it's the connector or the LoopBack app.
For loopback-connector-db2, you must define SCHEMA in the datasources.json config file.
{
  "x3": {
    "name": "x3",
    "connector": "db2",
    "username": "...",
    "password": "...",
    "database": "...",
    "hostname": "...",
    "port": 50000
  },
  "x3Live": {
    "name": "x3Live",
    "connector": "db2",
    "schema": "LIVE",
    "username": "...",
    "password": "...",
    "database": "...",
    "hostname": "...",
    "port": 50000
  }
}
Unfortunately, you will need to create a new datasource (e.g. x3Live). Use the old x3 datasource for the models using the DB2X schema, and the new x3Live datasource for the models using the LIVE schema.

How to use script arguments in AWS DataPipeline SQLActivity?

I am trying to execute an unload command on Redshift via Data Pipeline. The script looks something like:
unload ($$ SELECT *, count(*) FROM (SELECT APP_ID, CAST(record_date AS DATE) WHERE len(APP_ID)>0 AND CAST(record_date as DATE)=$1) GROUP BY APP_ID $$) to 's3://test/unload/' iam_role 'arn:aws:iam::xxxxxxxxxxx:role/Test' delimiter ',' addquotes;
The pipeline looks something like this:
{
  "objects": [
    {
      "role": "DataPipelineDefaultRole",
      "subject": "SuccessNotification",
      "name": "SNS",
      "id": "ActionId_xxxxx",
      "message": "SUCCESS: #{format(minusDays(node.#scheduledStartTime,1),'MM-dd-YYYY')}",
      "type": "SnsAlarm",
      "topicArn": "arn:aws:sns:us-west-2:xxxxxxxxxx:notification"
    },
    {
      "connectionString": "connection-url",
      "password": "password",
      "name": "Test",
      "id": "DatabaseId_xxxxx",
      "type": "RedshiftDatabase",
      "username": "username"
    },
    {
      "subnetId": "subnet-xxxxxx",
      "resourceRole": "DataPipelineDefaultResourceRole",
      "role": "DataPipelineDefaultRole",
      "name": "EC2",
      "id": "ResourceId_xxxxx",
      "type": "Ec2Resource"
    },
    {
      "failureAndRerunMode": "CASCADE",
      "resourceRole": "DataPipelineDefaultResourceRole",
      "role": "DataPipelineDefaultRole",
      "pipelineLogUri": "s3://test/logs/",
      "scheduleType": "ONDEMAND",
      "name": "Default",
      "id": "Default"
    },
    {
      "database": {
        "ref": "DatabaseId_xxxxxx"
      },
      "scriptUri": "s3://test/script.sql",
      "name": "SqlActivity",
      "scriptArgument": "#{format(minusDays(node.#scheduledStartTime,1),'MM-dd-YYYY')}",
      "id": "SqlActivityId_xxxxx",
      "runsOn": {
        "ref": "ResourceId_xxxx"
      },
      "type": "SqlActivity",
      "onSuccess": {
        "ref": "ActionId_xxxxx"
      }
    }
  ],
  "parameters": []
}
However, I keep getting the error: The column index is out of range: 1, number of columns: 0.
I just can't get it to work. I have tried using ? and $1, and I even tried putting the expression #{format(minusDays(node.#scheduledStartTime,1),'MM-dd-YYYY')} directly in the script. None of them work.
I have looked at the answers to Amazon Data Pipline: How to use a script argument in a SqlActivity? but none of them are helpful.
Does anyone have an idea how to use script arguments in a SQL script in AWS Data Pipeline?

Orion - Cygnus integration

We are trying to integrate Orion, Cygnus, and CKAN together.
I have followed these steps in order to make this happen:
Install and configure Cygnus with the FIWARE CKAN info (Cygnus is up and running)
Log in to CKAN, get the API key, and configure it in the Cygnus settings
Orion steps:
queryUpdate = APPEND data
{
  "contextElements": [{
    "type": "Room",
    "isPattern": "false",
    "id": "26JanRoom",
    "attributes": [{
      "name": "temperature",
      "type": "float",
      "value": "888"
    }]
  }],
  "updateAction": "APPEND"
}
subscribeContext = subscribe with the entity id created above (our Cygnus host is given as the "reference")
{
  "entities": [{
    "type": "Room",
    "isPattern": "false",
    "id": "26JanRoom"
  }],
  "attributes": ["temperature"],
  "reference": "CYGNUS HOST",
  "duration": "P1M",
  "notifyConditions": [{
    "type": "ONCHANGE",
    "condValues": ["temperature"]
  }],
  "throttling": "PT5S"
}
queryUpdate = UPDATE data
{
  "contextElements": [{
    "type": "Room",
    "isPattern": "false",
    "id": "26JanRoom",
    "attributes": [{
      "name": "temperature",
      "type": "float",
      "value": "111"
    }]
  }],
  "updateAction": "UPDATE"
}
What we expect is to receive some notifications on the Cygnus side, but nothing is sent from Orion (orion.lab.fi-ware.org:1026/).
Could you please help us with this?
Thanks kr
Omer Ozdemir
You are using
"condValues": ["pressure"]
which means that every time the attribute named pressure changes, Orion will trigger a notification. However, your update is modifying temperature.
Please have a look at the subscribeContext operation section in the Orion documentation.
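For reference, here is a minimal sketch of re-creating the subscription so that condValues names the attribute that actually changes. The Orion endpoint is the one from the question; the Cygnus notification URL and the auth token are placeholders (the FIWARE Lab instance normally requires an X-Auth-Token header).

import requests

# Sketch only: subscribe with condValues matching the attribute being updated
# ("temperature"), so Orion has something to notify Cygnus about.
payload = {
    "entities": [{"type": "Room", "isPattern": "false", "id": "26JanRoom"}],
    "attributes": ["temperature"],
    "reference": "http://CYGNUS_HOST:5050/notify",  # placeholder Cygnus endpoint
    "duration": "P1M",
    "notifyConditions": [{"type": "ONCHANGE", "condValues": ["temperature"]}],
    "throttling": "PT5S",
}

resp = requests.post(
    "http://orion.lab.fi-ware.org:1026/v1/subscribeContext",
    json=payload,
    headers={"Accept": "application/json", "X-Auth-Token": "<your token>"},
)
print(resp.status_code, resp.json())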