Can't import table to H2O using PostgreSQL JDBC in Ubuntu - postgresql

I am having trouble importing a SQL table into H2O.ai using the PostgreSQL JDBC driver on Ubuntu. I'm getting the following error:
ERROR MESSAGE:
SQLException: ERROR: relation "XXX" does not exist
Position: 22
Failed to connect and read from SQL database with connection_url: jdbc:postgresql://localhost:5432/...
I am launching H2O with the following command:
java -cp h2o.jar:/usr/share/java/postgresql-9.4.1212.jar water.H2OApp
The JDBC driver is installed, and I have already tried constructing the connection URL in several ways.
I'm using this one right now:
jdbc:postgresql://localhost:5432/XXX?useSSL=false
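In case it helps, this is how I am calling the import from H2O's Python API (table name and credentials are placeholders). Since PostgreSQL folds unquoted identifiers to lowercase, I am also trying to make sure the table name I pass matches the case PostgreSQL actually stores:
import h2o

# attach to the cluster started with the java -cp command above
h2o.connect(ip="localhost", port=54321)

# placeholder table name and credentials; the table name must match the
# (case-folded) name as PostgreSQL stores it
frame = h2o.import_sql_table(
    connection_url="jdbc:postgresql://localhost:5432/XXX?useSSL=false",
    table="xxx",
    username="user",
    password="password",
)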

Related

Hyperledger Iroha - schema error while connecting to Postgres database

I am trying to deploy Hyperledger Iroha locally on macOS (Big Sur), and while running the following command
./build/bin/irohad --config example/config.postgres.sample --genesis_block example/genesis.block --keypair_name example/node0
I get the error
Storage initialization failed: Cannot execute query. Fatal error. ERROR: relation "schema_version" does not exist LINE 1: ... test, iroha_major, iroha_minor, iroha_patch from schema_ver... ^ while executing "select 1 test, iroha_major, iroha_minor, iroha_patch from schema_version;".
I have installed PostgreSQL locally and created the iroha_data database.
Is there a schema that I must load additionally, or does it get created automatically?
I was able to overcome this issue by adding the -drop_state flag when starting the Iroha daemon, as shown below.
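For reference, that is the same command from the question with the flag appended; as I understand it, -drop_state drops the existing state so the schema gets recreated on startup:
./build/bin/irohad --config example/config.postgres.sample --genesis_block example/genesis.block --keypair_name example/node0 -drop_state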

IBM DB2 SQL Connection Errors

I can connect to IBM DB2 inside the IBM Cloud Pak for Data, but when I try to run the exact same %sql connection it errors out. What am I missing?
%sql ibm_db_sa://un:pw@host:port/db?security=SSL
(ibm_db_dbi.Error) ibm_db_dbi::Error: [IBM][CLI Driver] SQL5005C The operation failed because the database manager failed to access either the database manager configuration file or the database configuration file.\r SQLCODE=-5005
(Background on this error at: http://sqlalche.me/e/dbapi)
Connection info needed in SQLAlchemy format, example:
postgresql://username:password@hostname/dbname
or an existing connection: dict_keys([])
Try loading the ibm_db package first.
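For example, in a notebook the driver has to be importable before the %sql magic can use the ibm_db_sa dialect; a minimal sketch, with the same placeholder connection values as in the question:
import ibm_db      # underlying CLI driver bindings
import ibm_db_sa   # SQLAlchemy dialect that the %sql magic resolves

%load_ext sql
%sql ibm_db_sa://un:pw@host:port/db?security=SSL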

How to connect to Db2 LUW with SSL using Python?

I'm running SQL queries (client side) against Db2 databases with pandas, using ibm_db and ibm_db_dbi. However, one of our data sources recently implemented new security. I'm running Python 3.7 and Db2 11.0.
Below is my current connection string:
dsn = (
    "DRIVER={{IBM DB2 ODBC DRIVER}};"
    "DATABASE={0};"
    "HOSTNAME={1};"
    "PORT={2};"
    "PROTOCOL=TCPIP;"
    "UID={3};"
    "PWD={4};"
    "Security={5};"
    "SSLClientKeystoredb={6};"
    "SSLClientKeystoreDBPassword={7};"
).format(dsn_database, dsn_hostname, dsn_port, dsn_uid, dsn_pwd,
         dsn_security, dsn_keystore, dsn_keypwd)
And I get this error message:
Exception Traceback (most recent call last)
in ()
----> 1 con = ibm_db.connect(dsn, "", "")
Exception: [IBM][CLI Driver] SQL1109N The specified DLL "GSKit Error: 202" could not be loaded. SQLSTATE=42724 SQLCODE=-1109
I also looked for GSKit, installed it on my machine, and added it to the PATH environment variable, but the error still persists.
Hope you can help me with this problem.
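For reference, here is a runnable sketch of the whole attempt with placeholder values; prepending the GSKit bin directory to PATH inside the script is just my guess at forcing the CLI driver to find the GSKit DLLs, not a confirmed fix:
import os

# assumption: default GSKit install path on Windows; adjust to wherever gsk8 actually lives
os.environ["PATH"] = r"C:\Program Files\ibm\gsk8\bin" + os.pathsep + os.environ.get("PATH", "")

import ibm_db

# placeholder connection values
dsn = (
    "DRIVER={IBM DB2 ODBC DRIVER};"
    "DATABASE=MYDB;"
    "HOSTNAME=myhost.example.com;"
    "PORT=50001;"
    "PROTOCOL=TCPIP;"
    "UID=myuser;"
    "PWD=mypassword;"
    "Security=SSL;"
    "SSLClientKeystoredb=C:/keys/client.kdb;"
    "SSLClientKeystoreDBPassword=kdbpassword;"
)
con = ibm_db.connect(dsn, "", "")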

Sqoop import failing with exception interface org.apache.hadoop.mapreduce.lib.db.DBWritable not org.apache.sqoop.mapreduce.DBWritable

I have to migrate code from Teradata to Hive. While importing data from Teradata using Sqoop, it fails with the error below:
ERROR tool.ImportTool: Encountered IOException running import job:
java.io.IOException: java.lang.RuntimeException: interface
org.apache.hadoop.mapreduce.lib.db.DBWritable not
org.apache.sqoop.mapreduce.DBWritable
at com.cloudera.sqoop.teradata.imports.TeradataImportJob.configureInputFormat(TeradataImportJob.java:111)
at org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:231)
at com.cloudera.sqoop.teradata.TeradataManager.importTable(TeradataManager.java:86)
at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:413)
at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:502)
at org.apache.sqoop.Sqoop.run(Sqoop.java:145)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:181)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:220)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:229)
at org.apache.sqoop.Sqoop.main(Sqoop.java:238)
Has anyone faced an issue like this?
Check the version of the Teradata connector you are using, and try a different version of the connector jar. I faced a similar issue when importing from a MySQL table, and switching to an earlier version of the MySQL connector fixed it. A quick way to check is sketched below.
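As a quick check of which connector jar Sqoop is picking up (the lib path is an assumption; adjust to your install):
ls $SQOOP_HOME/lib | grep -i teradata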

iReport Designer 4.5.1/4.6.0 cannot interact with Hive

I have followed the instructions from here and installed the updated plugin. The error has become:
Query error
Message: net.sf.jasperreports.engine.JRException:
Error executing SQL statement for : null Level: SEVERE Stack Trace:
Error executing SQL statement for : null com.jaspersoft.hadoop.hive.HiveFieldsProvider.getFields(HiveFieldsProvider.java:113)
com.jaspersoft.ireport.hadoop.hive.designer.HiveFieldsProvider.getFields(HiveFieldsProvider.java:32)
com.jaspersoft.ireport.hadoop.hive.connection.HiveConnection.readFields(HiveConnection.java:154)
com.jaspersoft.ireport.designer.wizards.ConnectionSelectionWizardPanel.validate(ConnectionSelectionWizardPanel.java:146)
org.openide.WizardDescriptor$7.run(WizardDescriptor.java:1357)
org.openide.util.RequestProcessor$Task.run(RequestProcessor.java:572)
org.openide.util.RequestProcessor$Processor.run(RequestProcessor.java:997)
After downgrading to 4.5.0, the error became the following (the connection is verified, and I am able to query the table from Hive):
Query error
Message: net.sf.jasperreports.engine.JRException: Query returned non-zero code: 10, cause:
FAILED: Error in semantic analysis: Line 1:14 Table not found 'panstats' Level:
SEVERE Stack Trace: Query returned non-zero code: 10, cause:
FAILED: Error in semantic analysis: Line 1:14 Table not found 'panstats'
com.jaspersoft.hadoop.hive.HiveFieldsProvider.getFields(HiveFieldsProvider.java:260)
com.jaspersoft.ireport.hadoop.hive.designer.HiveFieldsProvider.getFields(HiveFieldsProvider.java:32)
com.jaspersoft.ireport.hadoop.hive.connection.HiveConnection.readFields(HiveConnection.java:146)
com.jaspersoft.ireport.designer.wizards.ConnectionSelectionWizardPanel.validate(ConnectionSelectionWizardPanel.java:146)
org.openide.WizardDescriptor$7.run(WizardDescriptor.java:1357)
org.openide.util.RequestProcessor$Task.run(RequestProcessor.java:572)
org.openide.util.RequestProcessor$Processor.run(RequestProcessor.java:997)
I am using Hive 0.8.1 on OS X Lion 10.7.4.
Is your query as simple as select * from panstats? I suspect that the query is not the problem, but you'll want to confirm that first.
You could try querying that table from a tool like SQuirreL SQL. If that tool also cannot get the data, then it's probably a Hive issue. If it can... then it's probably an issue with iReport or the Hive plugin.
It sounds like Hive is not configured to share metadata. It uses the annoying default configuration with Derby, so outside connections don't get access to your panstats table. I came across this article about configuring Hive earlier this year. It documents using MySQL instead of Derby. If that's indeed the problem, then it's just a Hive configuration issue. Following that article would solve things both for SQuirreL and for iReport; the relevant hive-site.xml properties are sketched below.
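For reference, a minimal hive-site.xml sketch of a MySQL-backed metastore along those lines (host, database name, and credentials are placeholders):
<!-- hive-site.xml: point the metastore at MySQL instead of embedded Derby -->
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://localhost/metastore</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hiveuser</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hivepass</value>
</property>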