I'm running SQL queries (client side) against DB2 databases using ibm_db and ibm_db_dbi with pandas. However, one of our data sources implemented new security. I'm running Python 3.7 and DB2 11.0.
Below is my current connection string:
dsn = (
    "DRIVER={{IBM DB2 ODBC DRIVER}};"
    "DATABASE={0};"
    "HOSTNAME={1};"
    "PORT={2};"
    "PROTOCOL=TCPIP;"
    "UID={3};"
    "PWD={4};"
    "Security={5};"
    "SSLClientKeystoredb={6};"
    "SSLClientKeystoreDBPassword={7};"
).format(dsn_database, dsn_hostname, dsn_port, dsn_uid, dsn_pwd,
         dsn_security, dsn_keystore, dsn_keypwd)
And I get this error message:
Exception Traceback (most recent call last)
in ()
----> 1 con= ibm_db.connect(dsn, "", "")
Exception: [IBM][CLI Driver] SQL1109N The specified DLL "GSKit Error: 202" could not be loaded. SQLSTATE=42724 SQLCODE=-1109
I also looked for GSKit, installed it on my machine, and added it to the PATH environment variable, but the error still persists.
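For reference, this is roughly how I tried pointing the process at GSKit from the script itself; the install paths are just what I used on my machine and may differ:

import os

# Assumed GSKit install location on my machine; adjust as needed
gskit_bin = r"C:\Program Files\ibm\gsk8\bin"
gskit_lib = r"C:\Program Files\ibm\gsk8\lib64"
os.environ["PATH"] = os.pathsep.join([gskit_bin, gskit_lib, os.environ.get("PATH", "")])

import ibm_db  # imported only after PATH is updated so the driver can find the GSKit DLLs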
Hope you can help me with this problem.
I can connect to IBM DB2 inside IBM Cloud Pak for Data, but when I try to run the exact same connection through the %sql magic it errors out. What am I missing?
%sql ibm_db_sa://un:pw@host:port/db?security=SSL
(ibm_db_dbi.Error) ibm_db_dbi::Error: [IBM][CLI Driver] SQL5005C The operation failed because the database manager failed to access either the database manager configuration file or the database configuration file.\r SQLCODE=-5005
(Background on this error at: http://sqlalche.me/e/dbapi)
Connection info needed in SQLAlchemy format, example:
postgresql://username:password@hostname/dbname
or an existing connection: dict_keys([])
Try loading the ibm_db package before running the %sql connection.
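A minimal notebook sketch of what that looks like (ibm_db, ibm_db_sa and ipython-sql need to be installed; the host, port and credentials here are placeholders):

import ibm_db
import ibm_db_sa   # registers the ibm_db_sa:// dialect with SQLAlchemy

%load_ext sql
%sql ibm_db_sa://username:password@hostname:50000/MYDB?security=SSL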
I am trying to connect my web application to an old MS Access mdb file.
I found this page:
https://learn.microsoft.com/en-us/dotnet/api/system.data.odbc.odbcconnection?view=dotnet-plat-ext-6.0
But I can't figure out how to work with connection strings; I keep getting this error:
'ERROR [08001] [Microsoft][ODBC Driver 18 for SQL Server]MAX_PROVS: Error Locating Server/Instance Specified [xFFFFFFFF].
ERROR [HYT00] [Microsoft][ODBC Driver 18 for SQL Server]Login timeout expired
ERROR [01S00] [Microsoft][ODBC Driver 18 for SQL Server]Invalid connection string attribute
ERROR [08001] [Microsoft][ODBC Driver 18 for SQL Server]A network-related or instance-specific error has occurred while establishing a connection to SQL Server. Server is not found or not accessible. Check if instance name is correct and if SQL Server is configured to allow remote connections. For more information see SQL Server Books Online.'
https://www.connectionstrings.com/microsoft-odbc-driver-17-for-sql-server/
I found this page; although I have the version 18 driver installed, that should only mean changing 17 -> 18 in the connection string.
My current connection string looks like this:
Driver={ODBC Driver 18 for SQL Server}; Server=.\SQLExpress; AttachDbFilename = D:\temp\datacollector\microfas.mdb; Trusted_Connection=yes;
But I have no idea if I am approaching this the right way. The application will eventually run locally at a customer site with a known location of the .mdb file.
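For what it's worth, this is the same string spelled out in a small standalone test script; pyodbc here is only a hypothetical stand-in for the .NET OdbcConnection, so the web app itself is out of the picture:

import pyodbc  # assumes the pyodbc package and the same ODBC driver are installed

conn_str = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=.\\SQLExpress;"
    "AttachDbFilename=D:\\temp\\datacollector\\microfas.mdb;"
    "Trusted_Connection=yes;"
)
conn = pyodbc.connect(conn_str, timeout=5)  # any of the errors above would surface here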
I am trying to do a differential data load from Db2 to a PostgreSQL table through InfoSphere Federation Server.
I followed the steps below and got this exception:
SQL1822N Unexpected error code "55000" received from data source "FEDSER".
Associated text and tokens are "This ResultSet is closed.".
These are the steps I followed:
create wrapper jdbc
DB20000I The SQL command completed successfully.
CREATE SERVER FEDSER TYPE JDBC VERSION '12' WRAPPER JDBC OPTIONS( ADD DRIVER_PACKAGE 'E:\Sandhya\postgresql-8.1-415.jdbc3.jar', URL 'jdbc:postgresql://localhost:5432/SCOPEDB', DRIVER_CLASS 'org.postgresql.Driver', DB2_IUD_ENABLE 'Y', db2_char_blankpadded_comparison 'Y', db2_varchar_blankpadded_comparison 'Y', VARCHAR_NO_TRAILING_BLANKS 'Y', JDBC_LOG 'Y')
DB20000I The SQL command completed successfully.
CREATE USER MAPPING FOR SANAGARW SERVER FEDSER OPTIONS (REMOTE_AUTHID 'postgres',REMOTE_PASSWORD '*****')
DB20000I The SQL command completed successfully
SELECT COUNT(*) FROM "SCOPE".EMPLOYEE
SQL1822N Unexpected error code "55000" received from data source "FEDSER".
Associated text and tokens are "This ResultSet is closed.".
I am using Postgres version 12, Java version "1.8.0_241"
Please help me resolve this issue, since I can only create the nickname once the server connection works.
Consider using Db2 11.5 instead of InfoSphere Federation Server, which went out of support on 2017-09-30: https://www.ibm.com/support/lifecycle/#/search?q=InfoSphere%20Federation%20Server
Db2 11.5 includes built-in support for PostgreSQL federation in all Db2 editions, including Db2 Community Edition:
https://www.ibm.com/support/pages/data-source-support-matrix-federation-bundled-db2-luw-v115
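Whichever federation route you take, it can also help to confirm that the PostgreSQL side answers the same query on its own. A minimal check from Python (psycopg2 is an assumption here and completely separate from the federation setup):

import psycopg2  # assumes psycopg2 is installed; this bypasses federation entirely

conn = psycopg2.connect(host="localhost", port=5432, dbname="SCOPEDB",
                        user="postgres", password="*****")
cur = conn.cursor()
cur.execute("SELECT COUNT(*) FROM scope.employee")  # adjust schema/table casing to match the PostgreSQL side
print(cur.fetchone()[0])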
I am having trouble importing a SQL table into H2O.ai using the PostgreSQL JDBC driver on Ubuntu. I'm getting the following error:
ERROR MESSAGE:
SQLException: ERROR: relation "XXX" does not exist
Position: 22
Failed to connect and read from SQL database with connection_url: jdbc:postgresql://localhost:5432/...
I am executing H2O with the following command:
java -cp h2o.jar:/usr/share/java/postgresql-9.4.1212.jar water.H2OApp
The JDBC driver is installed, and I have already tried constructing the connection URL in several ways.
I'm using this one right now:
jdbc:postgresql://localhost:5432/XXX?&useSSL=false
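In case it matters, this is roughly how I call the import from the Python client (the table name and credentials below are placeholders):

import h2o

h2o.init()  # attaches to the H2O instance started with the command above
frame = h2o.import_sql_table(
    connection_url="jdbc:postgresql://localhost:5432/XXX?&useSSL=false",
    table="xxx",            # placeholder; an unqualified name is resolved via the search_path
    username="postgres",    # placeholder credentials
    password="*****",
)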
I have followed the instructions from here and installed the updated plugin. The error has become:
Query error
Message: net.sf.jasperreports.engine.JRException: Error executing SQL statement for : null
Level: SEVERE
Stack Trace:
Error executing SQL statement for : null
com.jaspersoft.hadoop.hive.HiveFieldsProvider.getFields(HiveFieldsProvider.java:113)
com.jaspersoft.ireport.hadoop.hive.designer.HiveFieldsProvider.getFields(HiveFieldsProvider.java:32)
com.jaspersoft.ireport.hadoop.hive.connection.HiveConnection.readFields(HiveConnection.java:154)
com.jaspersoft.ireport.designer.wizards.ConnectionSelectionWizardPanel.validate(ConnectionSelectionWizardPanel.java:146)
org.openide.WizardDescriptor$7.run(WizardDescriptor.java:1357)
org.openide.util.RequestProcessor$Task.run(RequestProcessor.java:572)
org.openide.util.RequestProcessor$Processor.run(RequestProcessor.java:997)
After downgrading to 4.5.0, the error has become (the connection is verified and I am able to query the table from Hive):
Query error
Message: net.sf.jasperreports.engine.JRException: Query returned non-zero code: 10, cause:
FAILED: Error in semantic analysis: Line 1:14 Table not found 'panstats'
Level: SEVERE
Stack Trace:
Query returned non-zero code: 10, cause:
FAILED: Error in semantic analysis: Line 1:14 Table not found 'panstats'
com.jaspersoft.hadoop.hive.HiveFieldsProvider.getFields(HiveFieldsProvider.java:260)
com.jaspersoft.ireport.hadoop.hive.designer.HiveFieldsProvider.getFields(HiveFieldsProvider.java:32)
com.jaspersoft.ireport.hadoop.hive.connection.HiveConnection.readFields(HiveConnection.java:146)
com.jaspersoft.ireport.designer.wizards.ConnectionSelectionWizardPanel.validate(ConnectionSelectionWizardPanel.java:146)
org.openide.WizardDescriptor$7.run(WizardDescriptor.java:1357)
org.openide.util.RequestProcessor$Task.run(RequestProcessor.java:572)
org.openide.util.RequestProcessor$Processor.run(RequestProcessor.java:997)
I am using Hive 0.8.1 on OS X Lion 10.7.4.
Is your query as simple as select * from panstats? I suspect that the query is not the problem, but you'll want to confirm that first.
You could try querying that table from a tool like SQuirreL SQL. If that tool also cannot get the data, then it's probably a Hive issue. If it can... then it's probably an issue with iReport or the Hive plugin.
It sounds like Hive is not configured to share metadata. It uses the annoying default configuration with Derby, so outside connections don't get access to your panstats table. I came across this article about configuring Hive earlier this year; it documents using MySQL instead of Derby. If that's indeed the problem, then it's just a Hive configuration issue, and following that article would solve things both for SQuirreL and for iReport.