Tomcat 8 Db2 Error: com.ibm.db2.jcc.b.eo: DB2 SQL Error: SQLCODE=-551, SQLSTATE=42501, SQLERRMC=M25044

I have been trying to resolve this error. Everything worked fine for a long time, and then out of nowhere we started facing this issue.
My application is a plain Java web application (JSP/servlets plus a couple of utility and control classes) running on Tomcat 8.
In one piece of functionality, the user keys in an ID, which is used as the key for a query against the database.
In doing so I get the error below, which is more or less a symptom of the user not having the privilege to execute the query on the Db2 table.
When I try the same query from any Db2 client tool or SQL prompt, I don't get this error at all:
" com.ibm.db2.jcc.b.eo: DB2 SQL Error: SQLCODE=-551, SQLSTATE=42501, SQLERRMC=M25044"

SQLCODE -551 means that the user executing the query does not have the required privilege. So find out which user is running the query and grant the privilege to that user. Perhaps you connect as a different user from your other SQL clients.
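A quick way to confirm this is to run the same probe through the application's data source and compare it with what your client tools report. A minimal sketch; MYSCHEMA.MYTABLE and APPUSER are placeholders for your table and the user configured in Tomcat's connection pool:

-- Run through the application's connection to see which
-- authorization ID Db2 actually sees for the Tomcat user:
SELECT CURRENT USER FROM SYSIBM.SYSDUMMY1;

-- Then, as a user with sufficient authority, grant the missing privilege
-- (placeholders: MYSCHEMA.MYTABLE, APPUSER):
GRANT SELECT ON TABLE MYSCHEMA.MYTABLE TO APPUSER;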

Related

Error on loading data to remote DB2 server

I'm new to Db2. I'm trying to send data from remote Db2 server A to remote Db2 server B using a Java-based application. I was able to fetch the data from server A and store it in the control/data files, but when I try to send the data to server B, I get the following exception.
com.ibm.db2.jcc.am.SqlSyntaxErrorException: DB2 SQL Error: SQLCODE=-104, SQLSTATE=42601, SQLERRMC=EXTERNAL;T_DATA SELECT * FROM;<table_expr>, DRIVER=4.26.14
The control file has the command:
INSERT INTO <TABLE_NAME> SELECT * FROM EXTERNAL '<PATH_TO_DATAFILE>'
USING (DELIMITER '\t' FORMAT TEXT SOCKETBUFSIZE 100 REMOTESOURCE 'JDBC')
The data file contains records whose values are tab-separated.
Both servers A and B are running Db2 v9.5.
The failure was caused by the target server B running an out-of-support version of Db2 (v9.5) that has no notion of external tables. Hence it reported (correctly) SQLCODE -104 on the token EXTERNAL, which it did not understand.
So the design is incorrect for the Db2 versions available at your site. You can only use external tables in recent Db2-LUW versions (v11.5 and later).
Depending on the tools available, you can use commands (external tools, not SQL) to export data from the source and load it into the target. Additionally, if there is direct network connectivity between server A and server B, an administrator can arrange federation between them, allowing direct inserts.
Db2 v9.5 also supported load from cursor, and load from remote cursor (although there were problems, long since fixed in newer versions), as sketched below.
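For illustration, a rough CLP sketch of a load from a remote cursor. The database alias, credentials, and table names are placeholders, and the exact options should be checked against the documentation for your Db2 version:

-- Declare a cursor that fetches directly from the source database (server A),
-- then load its rows into the target table on server B:
DECLARE remotecurs CURSOR DATABASE SRCDB USER loaduser USING loadpwd
  FOR SELECT * FROM MYSCHEMA.T_DATA;
LOAD FROM remotecurs OF CURSOR INSERT INTO MYSCHEMA.T_DATA;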

Connecting to Amazon Redshift in Azure Data Studio via the Postgresql Connector

I've recently joined a company with a mixed set of databases that includes a Redshift cluster and some SQL databases. I'd like to use a single IDE to access both for analytical reporting, so I don't have to switch between tools. I'm currently using Workbench, which works, but it's not clicking with me.
I do like Azure Data Studio, but it's SQL Server and Postgres only. Given the similarities between Redshift and Postgres, I thought I'd see if I could connect using the Postgres driver.
I've installed the Postgres extension and can "connect" to the database. However, when I try to explore the database using the tree view, I get the error message 'Cannot Expand Node'. When I run a simple query that works in Workbench, e.g.
Select * from [server].[database].[table]
I get the following error message:
Started executing query at Line 1
cursors can only be used within the transaction that created them.
Total execution time: 00:00:00.019
I know I'm trying to do something that shouldn't be done. And if I can't, I can't. But has anyone here managed to get a Redshift connection going in Azure Data Studio?
FWIW, I've come across a GitHub repository that may be a Redshift driver for Data Studio, but it looks like a clone of the Postgres driver, with no activity since March (not even renaming the 'Postgres' titles to Redshift)... and therefore I'm dubious.

Informatica DB2 DSN not working in Designer

I am trying to connect to a DB2 database to import a source structure. I tried using the ODBC DB2 Wire Protocol Driver Setup. I provided the IP address, TCP port, and location (DB2 for z/OS and iSeries), but when I click on Test Connection I get the below error:
[Informatica][ODBC DB2 Wire Protocol driver][DB2]NULLID.DDOS510A DOES NOT HAVE PRIVILEGE TO PERFORM OPERATION PACKAGE ON THIS OBJECT.
I tried the same method in a lower DB2 environment and the connection works, but in the higher environment I get this error. (I verified the login directly in the database, and my user ID has login access.)
This is not a programming question; it is about configuration.
The reason it works on one database but fails on another is that only one of the databases has the correct permissions.
Ask the DBA to grant the relevant privileges to the user ID at the database.
You will find more details in the IBM technote on this error.
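Since the error names the package directly, the grants usually look something like the following sketch. INFAUSER is a placeholder for the connecting user ID, and the exact set of privileges the wire protocol driver needs should be confirmed with the DBA:

-- Allow the user to execute the driver package named in the error message:
GRANT EXECUTE ON PACKAGE NULLID.DDOS510A TO INFAUSER;
-- If the driver must (re)bind its packages on first connect, it may also need:
GRANT BINDADD TO INFAUSER;
GRANT CREATE IN COLLECTION NULLID TO INFAUSER;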

dBeaver (CE): DB2 LUW Connection with SQL ERROR 42704. Table Schema won't open but able to write SQL queries

After about a year and a half, I am finally able to connect to our DB2 database through DBeaver. The connection succeeds when configured as LUW (our Db2 is on z/OS). I was able to get the required drivers after installing IBM Data Studio.
Once I am connected, I go down the schema tree, get to Tables, and on clicking that, I get the below error.
SQL Error [42704]: SYSCAT.SCHEMATA IS AN UNDEFINED NAME. SQLCODE=-204, SQLSTATE=42704, DRIVER=3.69.56
SYSCAT.SCHEMATA IS AN UNDEFINED NAME. SQLCODE=-204, SQLSTATE=42704, DRIVER=3.69.56
THE DESCRIBE STATEMENT DOES NOT SPECIFY A PREPARED STATEMENT. SQLCODE=-516, SQLSTATE=26501, DRIVER=3.69.56
THE CURSOR SQL_CURLH200C1 IS NOT IN A PREPARED STATE. SQLCODE=-514, SQLSTATE=26501, DRIVER=3.69.56
SQL Error [42704]: SYSCAT.SCHEMATA IS AN UNDEFINED NAME. SQLCODE=-204, SQLSTATE=42704, DRIVER=3.69.56
However, if I ignore the error, open a new SQL query, and write a simple
Select * from schema.table
it works fine and I get the results I want.
Considering the time I have spent to get this far, this is sufficient for me, but to deploy it as a solution in my department, I need to be able to look at a table list (schema).
Any help would be awesome.
EDIT1: The issue here is that there is no schema named SYSCAT and no table named SCHEMATA.
The Db2 for z/OS catalog uses different names than Db2 on distributed platforms (Linux/UNIX/Windows, aka LUW). Here is a list of catalog objects on Db2 for z/OS that you can review.
It looks like you are using DBeaver to navigate the objects on Db2 for z/OS through a UI. You will need to ensure you have a Db2 JCC driver configuration meant for Db2 for z/OS; it looks like you may be using one for LUW, since SYSCAT.SCHEMATA is an LUW object, not a z/OS object.
Your other query works because you are specifying a known table name, and other queries should be fine. The issue is that the DBeaver interface is looking up Db2 system objects for LUW rather than z/OS, and this will continue until you resolve the driver issue.
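To see the mismatch concretely, compare how the two platforms expose their catalogs; the z/OS query below is one assumed equivalent, not the only one:

-- Db2 for LUW exposes the catalog through SYSCAT views:
SELECT SCHEMANAME FROM SYSCAT.SCHEMATA;
-- Db2 for z/OS keeps the equivalent information in SYSIBM tables:
SELECT DISTINCT CREATOR FROM SYSIBM.SYSTABLES;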
The IBM Data Server drivers also require server-side setup. Please see this information: https://www.ibm.com/support/knowledgecenter/SSEPEK_12.0.0/java/src/tpc/imjcc_jccenablespsandtables.html
In DBeaver, when you create a connection to Db2 for z/OS, choose the "Db2 z/OS driver" option under the Db2 drop-down.
BTW, DBeaver can shell-share with Data Studio, so you can (if you wish) use both products in one install. No guarantees that they share happily in all cases, but it appears to work reasonably well.

Where do Postgres DBLink queries run?

All,
I'm running a query on the target server that retrieves data from a source server. My query uses the digest function; digest exists on both servers. It is embedded in a UDF that is also present on both servers. BTW, the "select" portion of the query runs perfectly on the source server.
I would think that when you submit a remote query, it executes on the remote box. Yet I am receiving a "function digest(text, unknown) does not exist ..." error. Also, since all the functions are in the public schema on both servers, I don't see how Postgres is failing to find the function.
Any help appreciated.
TIA,
Mike
Queries are executed on the server specified in the connection string; if no host is given, then on localhost. They are executed using the role from the connection string, which also means that role's search_path applies.
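You can verify both points from the calling server. A minimal sketch; the connection string is a placeholder:

-- Ask the remote session who it is and which search_path it sees.
-- dblink returns values as text, so the record columns are declared text:
SELECT *
FROM dblink('host=source.example.com dbname=src user=app',
            'SELECT current_user, current_setting(''search_path'')')
     AS t(remote_user text, remote_search_path text);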
Unrelated to your question, but I would try two things while debugging the issue you described:
Connect to the remote server using the same role as in the dblink connection string and execute the query there.
Schema-qualify the function in the dblink query: public.digest(..), as shown in the sketch below.
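For the second suggestion, a hedged sketch of the schema-qualified call; the connection string, table, and columns are placeholders:

-- Qualifying digest() with public. makes the remote lookup independent of
-- the remote role's search_path (digest comes from the pgcrypto extension):
SELECT *
FROM dblink('host=source.example.com dbname=src user=app',
            'SELECT id, public.digest(payload, ''sha256'') FROM public.events')
     AS t(id int, payload_hash bytea);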