With Apache Drill 1.2, we can query RDBMS data: https://drill.apache.org/blog/2015/10/16/drill-1.2-released/
I downloaded the JDBC PostgreSQL driver here:
https://jdbc.postgresql.org/download.html
I took the JDBC4 one, since I didn't know which to pick.
I put the jar file in the folder 'apache-drill-1.2.0\jars\3rdparty'.
And now I'm trying to add a plugin for Postgres. I am doing it through the web console (http://127.0.0.1:8047). I created a plugin named pgplugin and added the following configuration:
{
  "type": "jdbc",
  "driver": "org.postgresql.Driver",
  "url": "jdbc:postgresql://IP:port/myschema",
  "username": "root",
  "password": "root",
  "enabled": true
}
It shows this error:
error(Unable to create / update storage)
Even with just the following it's not working (same error):
{
"type": "jdbc"
}
I know that I should add the jar (the PostgreSQL JDBC driver) to the Apache Drill classpath somewhere in the configuration files, but I can't figure out where...
I tried to add this: drill.exec.sys.store.provider.local.path = "/mypath"
to drill-override.conf, so that it now reads:
drill.exec: {
  cluster-id: "drillbits1",
  zk.connect: "localhost:2181",
  drill.exec.sys.store.provider.local.path = "/mypath"
}
But it's not working... any ideas?
Thanks a lot!
I experienced this same issue. The cause of my problem was using the "JDBC42 Postgresql Driver, Version 9.4-1205". I fixed the issue by using the "JDBC4 Postgresql Driver, Version 9.4-1205" instead.
For reference, the Postgres JDBC downloads page: https://jdbc.postgresql.org/download.html
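In case it helps anyone debugging the same thing: a quick way to see which org.postgresql.Driver build a JVM actually picks up (for example the jar dropped into apache-drill-1.2.0\jars\3rdparty) is to load the class by hand and print where it came from. This is only a standalone sanity check run outside Drill, with the suspect driver jar on the classpath; the class name is the same one used in the plugin config above:

import java.sql.Driver;

public class CheckPgDriver {
    public static void main(String[] args) throws Exception {
        // Load whichever org.postgresql.Driver is first on the classpath
        // and report which jar it came from and which version it reports.
        Class<?> cls = Class.forName("org.postgresql.Driver");
        Driver driver = (Driver) cls.getDeclaredConstructor().newInstance();
        System.out.println("Loaded from: "
                + cls.getProtectionDomain().getCodeSource().getLocation());
        System.out.println("Driver version: "
                + driver.getMajorVersion() + "." + driver.getMinorVersion());
    }
}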
Context:
I've installed a Kafka cluster with the Confluent Helm chart on AWS Kubernetes.
And I've configured an Oracle server so I can connect to it with Kafka Connect.
My Kafka Connect configuration:
{
  "name": "oracle-debez",
  "config": {
    "connector.class": "io.debezium.connector.oracle.OracleConnector",
    "tasks.max": "1",
    "database.server.name": "servername",
    "database.hostname": "myserver",
    "database.port": "1521",
    "database.user": "myuser",
    "database.password": "mypass",
    "database.dbname": "KAFKAPOC",
    "database.out.server.name": "dbzxout",
    "database.history.kafka.bootstrap.servers": "mybrokersvc:9092",
    "database.history.kafka.topic": "my-conf-topic",
    "table.include.list": "MYSCHEMA.MYTABLE",
    "database.oracle.version": 11,
    "errors.log.enable": "true"
  }
}
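For reference, a connector definition in this shape ({"name": ..., "config": {...}}) is what gets POSTed to the Kafka Connect REST API. A minimal sketch of that step, assuming Java 11+, that the JSON above is saved locally as oracle-debez.json, and that Connect's REST endpoint is reachable on its default port 8083 (with the Confluent Helm chart the host and port may differ):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Files;
import java.nio.file.Path;

public class RegisterConnector {
    public static void main(String[] args) throws Exception {
        // Read the connector JSON shown above from a local file.
        String json = Files.readString(Path.of("oracle-debez.json"));

        // POST it to Kafka Connect's REST API to create the connector.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8083/connectors"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(json))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}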
I've configured it this way and some topics are created:
my-conf-topic: Comes with the table DDL
servername
servername.MYSCHEMA.MYTABLE
The 'kafka-poc-dev.MYSCHEMA.MYTABLE' topic contains all the information from the table.
When I start the connector, all the existing information is saved successfully! But the problem is that new inserts or updates do not appear on the topic.
One more thing: my Oracle is not version 11. My version is Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production, but if I do not set the property "database.oracle.version": 11, I get this error:
"org.apache.kafka.connect.errors.ConnectException: An exception
occurred in the change event producer. This connector will be
stopped.\n\tat
io.debezium.pipeline.ErrorHandler.setProducerThrowable(ErrorHandler.java:42)\n\tat
io.debezium.connector.oracle.xstream.XstreamStreamingChangeEventSource.execute(XstreamStreamingChangeEventSource.java:82)\n\tat
io.debezium.pipeline.ChangeEventSourceCoordinator.streamEvents(ChangeEventSourceCoordinator.java:140)\n\tat
io.debezium.pipeline.ChangeEventSourceCoordinator.lambda$start$0(ChangeEventSourceCoordinator.java:113)\n\tat
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)\n\tat
java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\tat
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat
d.java:834)\nCaused by: oracle.streams.StreamsExa:343)\n\tat
io.debezium.connector.oracle.xstream.XstreamStreamingChangeEventSource.execute(XstreamStreamingChangeEventSource.java:70)\n\t...
7 more\n"
Can somebody help me understand what I'm doing wrong here?
Also, when I create the connector the table gets locked.. and the data is not arriving at the topics...
Table being locked
Thanks!
I'm facing a similar problem, but currently using the LogMiner adapter.
The initial snapshot and streaming work just fine, but I can't get any more update/insert events once I add more connectors to Kafka Connect to monitor different tables and schemas.
Everything just stops working, even though I can see that the LogMiner sessions are still active.
Did you enable GoldenGate replication and archive log mode?
About the database.oracle.version problem you're facing, you should just use the default value as mentioned here:
https://debezium.io/documentation/reference/connectors/oracle.html#oracle-property-database-oracle-version
"database.oracle.version" : "12+"
Posting as an answer because I can't comment yet.
Hope it helps you somehow.
You are using the container (CDB) plus PDB flavor of Oracle, so you need to pass a database.pdb.name value in your properties. You must also have a user with LogMiner or XStream access.
I'm trying to import a database with only one table into OrientDB using their ETL import functionality. I wrote this JSON:
{
  "config": {
    "log": "debug"
  },
  "extractor": {
    "jdbc": {
      "driver": "com.mysql.jdbc.Driver",
      "url": "jdbc:mysql://localhost:8889/footballEvents",
      "userName": "root",
      "userPassword": "root",
      "query": "select * from 10eventslight_2"
    }
  },
  "transformers": [
    { "vertex": { "class": "events" } }
  ],
  "loader": {
    "orientdb": {
      "dbURL": "remote:localhost/footballEvents",
      "dbUser": "root",
      "dbPassword": "root",
      "serverUser": "root",
      "serverPassword": "root",
      "dbAutoCreate": true
    }
  }
}
Then I run the command sudo ./oetl.sh importScript.json and I don't get any error; the script runs normally. I attached the output of the command here.
Seeing the [orientdb] INFO committing message at the end, I tried to connect to my database and run the commit command, but the system answered that no transaction is running. I'm quite sure the dbURL and the db/server credentials in my JSON are good, because I can use that address to connect to my database via the OrientDB console. As for the MySQL part, it is certainly working, because it extracts data from the database and I know my credentials are OK.
So it looks like it's working; no error comes up, but nothing happens and I don't understand why.
If it has any importance, I'm on Mac OS 10.13.1 with orientdb 2.2.29.
Thanks in advance.
OrientDB Teleporter is a tool that synchronizes an RDBMS to an OrientDB database. Teleporter is fully compatible with several RDBMSs that have a JDBC driver: it has been successfully tested with Oracle, SQLServer, MySQL, PostgreSQL and HyperSQL. Teleporter manages all the necessary type conversions between the different DBMSs and imports all your data as a graph into OrientDB. This feature is available for both the OrientDB Enterprise Edition and the OrientDB Community Edition. But beware: with the Community Edition you can migrate your source relational database, but you cannot use the synchronization feature, which is only available in the Enterprise Edition.
For more information: https://orientdb.com/docs/last/Teleporter-Home.html
Hope it helps
Regards
I'm using the Cloud 9 IDE to develop an application using MongoDB. I created a database called "appdata" at MongoLab and the following user:
{
  "_id": "appdata.admin",
  "user": "admin",
  "db": "appdata",
  "credentials": {
    "SCRAM-SHA-1": {
      "iterationCount": 10000,
      "salt": "K/WUzUDbi3Ip4Vy59gNV7g==",
      "storedKey": "9ow35+PtcOOhfuhY7Dtk7KnfYsM=",
      "serverKey": "YfsOlFx1uvmP+VaBundvmVGW+3k="
    }
  },
  "roles": [
    {
      "role": "dbOwner",
      "db": "appdata"
    }
  ]
}
Whenever I try connecting to the database through Cloud 9 Shell using the following command (given by MongoLab with my newly created user):
mongo ds057244.mongolab.com:57244/appdata -u admin -p admin
I get the following error message:
MongoDB shell version: 2.6.11
connecting to: ds057244.mongolab.com:57244/appdata
2015-11-22T05:23:49.015+0000 Error: 18 { ok: 0.0, errmsg: "auth failed",
code: 18 } at src/mongo/shell/db.js:1292
exception: login failed
Also, in my JavaScript file running on Cloud 9, while following this tutorial (which uses Mongoose to access the DB), I got stuck on the POST route for bears. Whenever I send a POST request through Postman with the specified fields set, the route doesn't return anything, neither a created bear nor an error message, which makes me think the problem is also a failure to log in to the database. The previous GET request works just fine and my code is exactly the same as the tutorial's.
Does anyone know what the problem is in either case and what I need to do to solve it?
The shell problem was fixed by updating the shell to match the database version (3.0.3); the 2.6 shell predates SCRAM-SHA-1 support, which the user document above uses.
For the JavaScript files, I restarted the tutorial and made sure I installed all the necessary dependencies at their most recent stable versions (not the ones shown in the tutorial); after that the problem was solved.
Adapter XML file, the connection policy:
<connectionPolicy xsi:type="sql:SQLConnectionPolicy">
  <dataSourceJNDIName>${custom-db.1.jndi-name}</dataSourceJNDIName>
</connectionPolicy>
wl.property file
custom-db.1.jndi-name=${custom-db.1.relative-jndi-name}
custom-db.1.relative-jndi-name=jdbc/datasrc
custom-db.1.driver=oracle.jdbc.driver.OracleDriver
custom-db.1.url=jdbc:oracle:thin:@localhost:1521:XE
custom-db.1.username=hr
custom-db.1.password=tiger
I have imported the jar file ojdbc14.jar.
The adapter is deployed but gives this error at runtime:
{
  "errors": [
    "Runtime: Datasource jdbc/datasrc not found in jndi"
  ],
  "info": [],
  "isSuccessful": false,
  "warnings": []
}
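The error literally means the server has no DataSource bound under the name jdbc/datasrc, which is what the adapter looks up at runtime through ${custom-db.1.jndi-name}. That lookup amounts to roughly the sketch below, shown only for illustration; it would have to run inside the same application server, and on some servers the resource is only visible under java:comp/env/jdbc/datasrc:

import java.sql.Connection;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class JndiCheck {
    // Hypothetical helper, meant to run inside the server hosting the adapter.
    public static void checkDatasource() throws Exception {
        InitialContext ctx = new InitialContext();
        // Same relative JNDI name as custom-db.1.relative-jndi-name above.
        DataSource ds = (DataSource) ctx.lookup("jdbc/datasrc");
        try (Connection c = ds.getConnection()) {
            System.out.println("jdbc/datasrc is bound and a connection could be opened");
        }
    }
}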
Please let me know how to resolve the issue.
Thanks in advance.
It seems you are using Oracle 10g (judging from ojdbc14.jar). I think Worklight does not support Oracle 10g; it only supports Oracle 11g. You can also refer to this link: Worklight Database Integration.
I want to connect to Vertica with JDBC, but I get errors.
Here is my code:
....
Class.forName("com.vertica.jdbc.Driver");
....
connection = DriverManager.getConnection(
    "jdbc:vertica://192.168.2.116:5433/schema", "dbadmin", "pass123");
But I get this error (I get the same error in the NetBeans database section, although I can connect to Vertica with a client like DBeaver):
ex = (java.sql.SQLException) java.sql.SQLException: [Vertica]No enum const class com.vertica.dsi.dataengine.utilities.MetadataSourceColumnTag.COLUMN_SİZE
How can I fix this?
So if you need a JDBC client for Vertica in NetBeans or IntelliJ, use this Vertica JDBC driver; it's the one that worked for me (taken from DbVisualizer).
I think it is because of your locale, in this case Turkish, I guess.
COLUMN_SİZE has the dotted uppercase İ instead of I.
It is Vertica's fault for applying toUpperCase() indiscriminately with the default locale.
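To illustrate the locale theory, here is a tiny self-contained demonstration (not taken from the Vertica driver source; it only shows what toUpperCase() does under a Turkish default locale, which would turn COLUMN_SIZE into COLUMN_SİZE):

import java.util.Locale;

public class TurkishUppercaseDemo {
    public static void main(String[] args) {
        // Under a Turkish locale the plain ASCII 'i' upper-cases to the dotted 'İ'.
        System.out.println("column_size".toUpperCase(new Locale("tr"))); // COLUMN_SİZE
        // With a neutral locale the expected ASCII form comes back.
        System.out.println("column_size".toUpperCase(Locale.ROOT));      // COLUMN_SIZE
    }
}

A common workaround (untested against this particular driver) is to start the JVM with an English locale, e.g. -Duser.language=en -Duser.country=US.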
Vertica's connect string uses the database name, not the schema name, after host:port. See the doc for details:
https://my.vertica.com/docs/CE/6.0.1/HTML/index.htm#1395.htm
Connection conn = DriverManager.getConnection(
"jdbc:vertica://VerticaHost:portNumber/databaseName",
"username", "password");
By default, users have a search path of "$user, public, v_catalog, v_monitor and v_internal"; therefore, you can create and use a matching username to connect directly to the desired schema.
I think it's a 32-bit vs 64-bit issue, because it works on 32-bit Windows; I can't understand it.
Make sure the connector (vertica-jdbc-xxxx.jar) is in the JDK\jre\lib\ext folder.