db2 resource limitation error while using ibm-db

I have a generic question (not an issue). I am trying to run a big query (with a lot of join conditions connecting 15-20 tables). Are there any limitations in ibm_db when running big queries? The query has been running in our production environment for more than 15 years, and I am able to run it in an in-house .NET tool. However, when running it with ibm_db in PyCharm I keep getting an SQLCODE -905 resource limitation error. Is there anything I am missing with ibm_db usage?
Any insight would be helpful. Thank you for the help.

Most likely this -905 SQLCODE has nothing to do with Python or ibm_db.
Instead, it is more likely due to how workload/resource management is configured at the Db2 server. Your question gives no facts about the target Db2 environment, or about the difference(s) between the execution that works (.NET) and the execution that triggers the limit.
One specific detail to eliminate is that the account (auth-ID) used for the .NET application might be different from the account you use when connecting from Python. The Db2 server may be configured to apply limits based on user ID (auth-ID) or some other client-side factor (depending on the Db2 server platform and version).
You can prove that the -905 symptom has nothing to do with Python or ibm_db by temporarily eliminating both, for example by submitting the same query from the db2cli tool (or the db2 CLP if your client workstation has it), or by submitting the query via JDBC (as long as you use the same account name for connecting as you do with Python).
Contact your DBA team for details of the configuration of WLM, RLF, or whatever resource management tooling is deployed at the target Db2 subsystem. In addition, use Python ibm_db to print out the full details of the exception (including the resource name, limit-amount1/limit-amount2, and limit source), as they can also yield more information.
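As a starting point, here is a minimal sketch of dumping the full diagnostics from ibm_db after a failure (the connection values and the query below are placeholders; substitute your own):

```python
import ibm_db

# All connection values here are placeholders; use the same auth-ID that
# the working .NET application connects with, so the comparison is fair.
dsn = ("DATABASE=PRODDB;HOSTNAME=db2host.example.com;PORT=50000;"
       "PROTOCOL=TCPIP;UID=appuser;PWD=secret;")
big_query = "SELECT ..."  # your 15-20 table join goes here

conn = ibm_db.connect(dsn, "", "")
try:
    stmt = ibm_db.exec_immediate(conn, big_query)
except Exception:
    # For a -905 the message text carries the resource name, the limit
    # amounts, and the limit source that your DBA team will ask about.
    print("SQLSTATE:", ibm_db.stmt_error())
    print("Message :", ibm_db.stmt_errormsg())
    raise
finally:
    ibm_db.close(conn)
```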

Related

SQL Server query Linked Server returns an Interface error code 7390 when executed remotely

I have a problem when querying Active Directory or a MySQL database as linked servers.
The problem occurs when running the query through SSMS on a server other than the database server where AD is mounted.
If I run these queries on the actual DB server through SSMS, I get results from the linked server.
If I run these queries on a 'Management' machine on a separate VLAN, they return error 7390:
The requested operation could not be performed because OLE DB provider "ADsDSOObject" for linked server "ACTIVEDIR" does not support the required transaction interface.
This only affects the linked servers; I can query any table on the DB server from the management machine, so it's not ports or networking (that I can see).
I have tried changing the settings for RPC, RPC Out, and Promotion of Distributed Transactions in the properties sheet of the linked servers, in various combinations, but I still get no results, just the error.
For good measure I have also tried setting the TRANSACTION ISOLATION LEVEL to READ UNCOMMITTED in the SQL batches executed.
It used to work before I migrated from SQL Server 2008 R2 to 2016.
I would appreciate any guidance and wisdom.

I got an error on a MongoDB query after enabling security on the server side

With security disabled my script was working fine, but after enabling security I started getting an error.
No one will be able to help unless you provide at least the error details and the relevant messages from the jmeter.log file.
However, one thing is obvious: you're using deprecated test elements with an outdated MongoDB client driver. MongoDB 3.0 introduced a new async API which is not compatible with the current JMeter thread model, so you might want to reconsider your approach to MongoDB load testing and use the JSR223 Sampler instead.
Check out the MongoDB Performance Testing with JMeter article for more details.
Also consider using separate machines for JMeter and MongoDB, because both are resource-intensive under high load, and you will get inaccurate results if JMeter is installed on the same machine as MongoDB.

IBM DB2 ODBC Driver Issue [Error 69899] Error occurred in the database host server code. SQLSTATE= S1000

After upgrading our IBM System i (aka i5/OS or AS/400) from V5R4 to V7R1, one of our applications that connects to DB2 using ODBC fails with the following error:
Error Code: 69899
SQLSTATE: S1000
[IBM] [System i Access ODBC Driver] [DB2 for i5/OS] PWS0005
Error occurred in the database host server code.
The symptoms are:
In a While/Wend loop a CURSOR is declared, then opened, fetched from, and closed.
If in any iteration the cursor does not retrieve any rows, then in the following iteration the error occurs after declaring the cursor (with a different SQL query), when you try to open it.
First we updated the ODBC driver to the latest version available, but the problem persists.
Because we needed an urgent solution, I worked around the problem by making a pre-select to determine whether the cursor will return rows, and skipping that iteration otherwise. This solves the problem for now, but it does not seem a very elegant solution.
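For illustration only, a sketch of that pre-select workaround in Python with pyodbc (the DSN, table, and key values are hypothetical; the original application is not Python):

```python
import pyodbc

# Hypothetical DSN and credentials.
conn = pyodbc.connect("DSN=MYIBMI;UID=MYUSER;PWD=secret")
cur = conn.cursor()

keys_to_process = ["A", "B", "C"]  # hypothetical iteration values

for key in keys_to_process:
    # Pre-select: skip the iteration when the query would return no rows,
    # so a cursor is never opened over an empty result set.
    cur.execute("SELECT COUNT(*) FROM MYLIB.MYTABLE WHERE K = ?", key)
    if cur.fetchone()[0] == 0:
        continue
    cur.execute("SELECT C1, C2 FROM MYLIB.MYTABLE WHERE K = ?", key)
    for row in cur.fetchall():
        pass  # process the row here
```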
Any idea how to get more information about the error that occurs on the host?
Thank you very much in advance.
Generally speaking, if an error occurs in the server side code, you should call IBM support and report it. They'll ask if you're on the latest cume and probably the latest database group PTFs.
The server runs the ODBC connection in a job called QZDASOINIT. Since there are probably many connections to the system, there are probably many QZDASOINIT jobs. To find yours, go to a terminal session and run WRKOBJLCK MYPROFILE *USRPRF. You'll be presented with a list of jobs running under your user profile. At least one of them will be the QZDASOINIT job you're looking for. Use option 5 to look at the job, then option 10 to see the job log. Press F10 to see the detailed messages and F18 to go to the bottom (most recent) entries.
If the error was so severe that the server job terminated abnormally, there won't be a lock on your user profile. Instead, go to the spooled job log by using WRKSPLF.
IBM has been logging some SQL internal errors since V5R4. Run select * from qrecovery.qsq901s; to see any SQLCODE -901 errors.
Make sure that you have installed the latest fix pack for the latest version of System i Access.
I've had this error before and it was caused by a syntax error in the connection string. It was a setting that was insignificant in older versions of the OS and more significant in newer versions, but did not cause the connection itself to fail so it was hard to track down.
For example: Port Number:8471 had a spelling mistake and read Porte Number:8471. Hard to spot, but once found, it fixed the problem for me. Basically, everything past this part of the connection string got ignored.
Wanted to add another solution to this problem. The SQL packages that exist on your system can become corrupted during or after an upgrade. You MUST delete these packages after an upgrade. This gets rid of the old packages and allows the system to recreate them at the new OS version level. When deleting SQL packages, some connections/jobs may have locks on those packages, so you might have to shut host services down. Use the DLTSQLPKG command to do the delete. In V7R2 and higher there are some additional steps to do, as IBM changed some things when it comes to packages; you can find the info here: http://www-01.ibm.com/support/docview.wss?uid=nas8N1015556
Or tell your ODBC/JDBC/.NET data adapter/provider not to use packages. This is probably less desirable, as there are performance benefits to packages.

Using Entity Framework with Informix

I've been trying for quite some time to use Entity Framework with our IBM Informix databases. Hours of searching have pointed me towards installing the IBM .NET Data Server Provider, which I have installed; however, when I attempt to add a new Entity Model to my project, only the Microsoft SQL Server data providers are listed. Am I missing a step? Is this even possible?
I am not an expert on Windows or .NET; treat any comments I make with due caution.
Installing the .NET Data Server Provider is an important first step. You now have to make sure that you can use it to connect to the Informix databases you want to manipulate. There are several things you'll need to check here:
Is the server (meaning the Informix instance) configured to allow DRDA connections?
By default, it probably isn't.
If you're the DBSA (database system administrator), you'll need to check that you've enabled 'drsoctcp' connections on the system, and configured a server alias to use that connection.
If you're not the DBSA, you'll need to chat with your DBSA to get the relevant information.
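For orientation, a sketch of what that server-side configuration typically involves (the alias name, host, and port are hypothetical; your DBSA would pick the real values):

```
# onconfig: define an extra server alias for DRDA connections
DBSERVERALIASES  ids_drda

# sqlhosts: bind that alias to the 'drsoctcp' protocol
ids_drda    drsoctcp    dbhost.example.com    9089
```

Changes like these generally require the instance to be restarted before the new listener is available.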
Assuming that you have DRDA connectivity enabled at the server side, you then need to ensure you have an appropriately configured ... DSN? Your client code needs to be able to connect to the server.
There is no reason I'm aware of why it cannot be done. However, I don't know exactly how to guide you step-by-step through any of the above.
You might need to seek assistance from IBM Technical Support.
You would help everyone if you clarified which version of Informix (the DBMS) you have, along with the version information for the platform where it is running (whether Windows or Unix, and the o/s version information) - and which version of the Data Server Provider you are using (and which variant of Windows you are using it on).

Postgres: "ERROR: cached plan must not change result type"

This exception is being thrown by the PostgreSQL 8.3.7 server to my application.
Does anyone know what this error means and what I can do about it?
ERROR: cached plan must not change result type
STATEMENT: select code,is_deprecated from country where code=$1
I figured out what was causing this error.
My application opened a database connection and prepared a SELECT statement for execution.
Meanwhile, another script was modifying the database table, changing the data type of one of the columns being returned in the above SELECT statement.
I resolved this by restarting the application after the database table was modified. This reset the database connection, allowing the prepared statement to execute without errors.
I'm adding this answer for anyone landing here by googling ERROR: cached plan must not change result type when trying to solve the problem in the context of a Java / JDBC application.
I was able to reliably reproduce the error by running schema upgrades (i.e. DDL statements) while my back-end app that used the DB was running. If the app ran queries on a changed table both before and after the schema upgrade, the Postgres driver would return this error, because it apparently caches some schema details.
You can avoid the problem by configuring your pgjdbc driver with autosave=conservative. With this option, the driver is able to flush whatever details it is caching, and you shouldn't have to bounce your server, flush your connection pool, or use whatever workaround you may have come up with.
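For example, the option can go straight into the JDBC URL (host, port, and database name here are placeholders):

```
jdbc:postgresql://dbhost.example.com:5432/mydb?autosave=conservative
```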
I reproduced this on Postgres 9.6 (AWS RDS), and my initial testing seems to indicate the problem is completely resolved with this option.
Documentation: https://jdbc.postgresql.org/documentation/head/connect.html#connection-parameters
You can look at pgjdbc GitHub issue 451 for more details and the history of the issue.
JRuby ActiveRecords users see this: https://github.com/jruby/activerecord-jdbc-adapter/blob/master/lib/arjdbc/postgresql/connection_methods.rb#L60
Note on performance:
As per the performance issues reported in the link above, you should do some performance/load/soak testing of your application before switching this on blindly.
In performance testing of my own app running on an AWS RDS Postgres 10 instance, enabling the conservative setting did result in extra CPU usage on the database server. It wasn't much, though; I could only see the autosave functionality using a measurable amount of CPU after I had tuned every single query my load test was using and started pushing the load test hard.
We were facing a similar issue. Our application works across multiple schemas, and whenever we made schema changes this issue started occurring.
Setting the prepareThreshold=0 parameter in the JDBC connection parameters disables server-side prepared-statement caching. This solved it for us.
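For example, as a JDBC URL parameter (host, port, and database name are placeholders):

```
jdbc:postgresql://dbhost.example.com:5432/mydb?prepareThreshold=0
```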
I got this error, manually ran the failing SELECT query, and that fixed the error.