Trying to create a new table column in DashDB but getting a timeout error - ibm-cloud

My problem is pretty much the same as the one described here:
DB2 deadlock timeout Sqlstate: 40001, reason code 68 due to update statements called from servlet using SQL
The problem is that I am using dashDB, which runs as a service in IBM Cloud (formerly known as Bluemix), so I don't have access to the same administrative tools a DB2 DBA would normally have (AFAIK).
So I have a simple table, but when I try to add a column, I get this error:
[IBM][CLI Driver][DB2/LINUXX8664] SQL0911N The current transaction has been rolled back because of a deadlock or timeout. Reason code "68". SQLSTATE=40001
I've stopped all other DB activity, such as other select statements, and I've tried using an Eclipse JDBC-based database IDE instead of the web-based dashDB console provided by IBM Cloud (mainly because the console's authentication session expires too quickly), but without success.

Try this and post if it works.
http://www-01.ibm.com/support/docview.wss?uid=swg21440972
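Reason code 68 indicates a lock timeout rather than a true deadlock, so something else was still holding a lock on the table while the ALTER TABLE waited. If your dashDB plan exposes the usual DB2 monitoring views, a query like the one below (run from your JDBC IDE) can show which connection holds the blocking lock. This is only a sketch: the SYSIBMADM.MON_LOCKWAITS view and the columns used here are standard DB2 LUW, but verify them against your dashDB version, and the URL and credentials are placeholders.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ShowLockWaits {
    public static void main(String[] args) throws Exception {
        // Placeholder dashDB (DB2) connection details.
        String url = "jdbc:db2://<host>:50000/BLUDB";
        try (Connection con = DriverManager.getConnection(url, "<user>", "<password>");
             Statement st = con.createStatement();
             // Lists agents currently waiting on locks together with the
             // application that holds the lock they are waiting for.
             ResultSet rs = st.executeQuery(
                     "SELECT TABNAME, LOCK_MODE, HLD_APPLICATION_HANDLE, REQ_APPLICATION_HANDLE "
                     + "FROM SYSIBMADM.MON_LOCKWAITS")) {
            while (rs.next()) {
                System.out.println(rs.getString("TABNAME") + " lock held by application handle "
                        + rs.getString("HLD_APPLICATION_HANDLE"));
            }
        }
    }
}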

Related

CloudRun Suddenly got `Improper path /cloudsql/{SQL_CONNECTION_NAME} to connect to Postgres Cloud SQL instance "{SQL_CONNECTION_NAME}"`

We have been running a service using NestJS and TypeORM on fully managed Cloud Run without issues for several months. Yesterday afternoon we started getting Improper path /cloudsql/{SQL_CONNECTION_NAME} to connect to Postgres Cloud SQL instance "{SQL_CONNECTION_NAME}" errors in our logs.
We didn't make any server/SQL changes around this timestamp. Currently there is no impact to the service so we are not sure if this is a serious issue.
This error is not from our code, and our third-party modules shouldn't know whether we use Cloud SQL, so I have no idea where these errors come from.
My assumption is that the Cloud SQL proxy, or whatever SQL client Cloud Run uses, is producing this error. We use the --add-cloudsql-instances flag when deploying with the "gcloud run deploy" CLI command.
Link to the issue here
This log was recently added in the Cloud Run data path to provide more context for debugging CloudSQL connectivity issues. However, the original logic was overly aggressive, emitting this message even for properly working CloudSQL connections. Your application is working correctly and should not receive this warning.
Thank you for reporting this issue. The fix is ready and should roll out soon. You should not see this message anymore after the fix is out.

Cannot execute DROP EXTENSION in a read-only transaction (drop extension if exists google_insights)

I'm using Google Cloud SQL (Postgres) and created read replica for my DB.
Now I see in logs such an error:
2021-01-16 12:02:46.393 UTC [93149]: [9-1] db=cloudsqladmin,user=cloudsqladmin ERROR: cannot execute DROP EXTENSION in a read-only transaction
2021-01-16 12:02:46.393 UTC [93149]: [10-1] db=cloudsqladmin,user=cloudsqladmin STATEMENT: drop extension if exists google_insights;
These errors repeat constantly - exactly 120 errors every single hour.
As I understand it, Google Cloud tries to drop one of its custom Postgres extensions and can't do that because the replica is read-only.
Does anyone know why it happens and how to fix that?
The error message is caused by an issue with the Query Insights feature (to avoid getting this error message, simply avoid enabling Query Insights when creating the master and the read replica).
I have created an issue on your behalf, which I recommend you star and follow to check all the relevant updates from the Cloud SQL product team.

Unable to connect from BigQuery job to Cloud SQL Postgres

I am not able to use the federated query capability from Google BigQuery to Google Cloud SQL Postgres. Google recently announced this federated query capability for BigQuery in beta.
I use the EXTERNAL_QUERY statement as described in the documentation, but I am not able to connect to my Cloud SQL instance. For example, with the query
SELECT * FROM EXTERNAL_QUERY('my-project.europe-north1.my-connection', 'SELECT * FROM mytable;');
or
SELECT * FROM EXTERNAL_QUERY("my-project.europe-north1.pg1", "SELECT * FROM INFORMATION_SCHEMA.TABLES;");
I receive this error:
Invalid table-valued function EXTERNAL_QUERY Connection to PostgreSQL server failed: server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request.
Sometimes the error is this:
Error encountered during execution. Retrying may solve the problem.
I have followed the instructions on https://cloud.google.com/bigquery/docs/cloud-sql-federated-queries and enabled the BigQuery Connection API. Some documents use different quotation marks for EXTERNAL_QUERY (“ or ‘ or ‘’’), but all the variants end with the same result.
I cannot see any errors in the Stackdriver Postgres logs. How could I correct this connectivity error? Any suggestions on how to debug it further?
Just adding another possibility for people using Private IP only Cloud SQL instances. I've just encountered that and was wondering why it was still not working after making sure everything else looked right. According to the docs (as of 2021-11-13): "BigQuery Cloud SQL federation only supports Cloud SQL instances with public IP connectivity. Please configure public IP connectivity for your Cloud SQL instance."
I just tried and it works, as long as the BigQuery query runs in the EU (as of today, 6 October, it works).
My example:
SELECT * FROM EXTERNAL_QUERY("projects/xxxxx-xxxxxx/locations/europe-west1/connections/xxxxxx", "SELECT * FROM data.datos_ingresos_netos")
Just substitute the first xxxx's with your project ID and the last ones with the name you gave the connection in the BigQuery interface (not the Cloud SQL info; that goes into the query itself).
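If you are running the query from code rather than from the BigQuery console, the same statement can be issued through the BigQuery Java client, which at least surfaces the full error message programmatically. This is only a sketch; the project, connection, and table names are placeholders, not values from the original post:

import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.QueryJobConfiguration;
import com.google.cloud.bigquery.TableResult;

public class FederatedQueryExample {
    public static void main(String[] args) throws Exception {
        // Uses application default credentials for the current project.
        BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

        // Placeholder connection ID and remote SQL; substitute your own values.
        String sql = "SELECT * FROM EXTERNAL_QUERY("
                + "\"projects/my-project/locations/europe-west1/connections/my-connection\", "
                + "\"SELECT * FROM mytable;\")";

        QueryJobConfiguration config = QueryJobConfiguration.newBuilder(sql).build();
        TableResult result = bigquery.query(config); // throws if the federated connection fails
        result.iterateAll().forEach(row -> System.out.println(row));
    }
}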
Unfortunately, BigQuery federated queries to Cloud SQL currently (September 2019) work only in US regions. The documentation (https://cloud.google.com/bigquery/docs/cloud-sql-federated-queries) says it should also work in other regions, but this is not the case.
I tested the setup from the original question multiple times in EU and europe-north1 but was not able to get it working. When I changed the setup to US or us-central1, it worked!
Federated queries to Cloud SQL are in preview, so the feature is evolving. Let's hope Google gets this working in other regions soon.
The BigQuery dataset and the Cloud SQL instance must be in the same region, or in the same location if the dataset is in a multi-region location such as US or EU.
Double-check this against the known issues listed in the documentation.
server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request.
This message usually means that you will find more information in the logs of the remote database server. No useful information could be sent to the "client" server because the connection disappeared, so there was no way to send it. So you have to look in the remote server's logs.

IBM DB2 ODBC Driver Issue [Error 69899] Error occurred in the database host server code. SQLSTATE= S1000

After upgrading our IBM System i (aka i5/OS or AS/400) from V5R4 to V7R1, one of our applications that connects to DB2 using ODBC fails with the following error:
Error Code: 69899
SQLSTATE: S1000
[IBM] [System i Access ODBC Driver] [DB2 for i5/OS] PWS0005
Error occurred in the database host server code.
The symptoms are:
In a While/Wend loop a cursor is declared, then opened, fetched from, and closed.
If at any iteration the cursor does not retrieve any rows, then in the following iteration the error occurs after declaring the cursor (with a different SQL query), when you try to open it.
First we updated the ODBC driver to the latest version available, but the problem persists.
Because we needed an urgent solution, I worked around the problem by running a pre-select to determine whether the cursor will return any rows and skipping that iteration otherwise. This solves the problem for now, but it does not seem like a very elegant solution.
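For illustration only, the pre-select workaround described above looks roughly like this when expressed in JDBC terms (the original application uses ODBC, and the table, columns, and queries here are made up):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class PreSelectWorkaround {
    // Only open the "real" cursor when a cheap COUNT(*) shows that it
    // would return at least one row; otherwise skip the iteration.
    static void processBatch(Connection con, String status) throws Exception {
        try (PreparedStatement check = con.prepareStatement(
                "SELECT COUNT(*) FROM orders WHERE status = ?")) {
            check.setString(1, status);
            try (ResultSet rs = check.executeQuery()) {
                rs.next();
                if (rs.getInt(1) == 0) {
                    return; // skip: opening an empty cursor was what triggered PWS0005
                }
            }
        }
        try (PreparedStatement sel = con.prepareStatement(
                "SELECT order_id, amount FROM orders WHERE status = ?")) {
            sel.setString(1, status);
            try (ResultSet rs = sel.executeQuery()) {
                while (rs.next()) {
                    // process the row
                }
            }
        }
    }
}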
Any idea how to get more information about the error that occurs on the host?
Thank you very much in advance.
Generally speaking, if an error occurs in the server side code, you should call IBM support and report it. They'll ask if you're on the latest cume and probably the latest database group PTFs.
The server runs the ODBC connection in a job called QZDASOINIT. Since there are probably many connections to the system, there are probably many QZDASOINIT jobs. To find yours, go to a terminal session and run WRKOBJLCK MYPROFILE *USRPRF. You'll be presented with a list of jobs running under your user profile. At least one of them will be the QZDASOINIT job you're looking for. Use option 5 to look at the job, then option 10 to see the job log. Press F10 to see the detailed messages and F18 to go to the bottom (most recent) entries.
If the error was so severe that the server job terminated abnormally, there won't be a lock on your user profile. Instead, go to the spooled job log by using WRKSPLF.
IBM has been logging some SQL internal errors since V5R4. Run select * from qrecovery.qsq901s; to see any SQLCODE -901 errors.
Make sure that you have installed the latest fix pack for the latest version of System i Access.
I've had this error before and it was caused by a syntax error in the connection string. It was a setting that was insignificant in older versions of the OS and more significant in newer versions, but it did not cause the connection itself to fail, so it was hard to track down.
For example, Port Number:8471 had a spelling mistake and read Porte Number:8471; hard to spot, but once found, fixing it solved the problem for me. Basically everything past that part of the connection string got ignored.
Wanted to add another solution to this problem. The SQL packages that exist on your system can get corrupted during or after upgrades. You MUST delete these packages after an upgrade: this gets rid of the old packages and allows the system to recreate them at the new OS version level. When deleting SQL packages, some connections/jobs may have locks on those packages, so you might have to shut host services down. Use the DLTSQLPKG command to do the delete. In V7R2 and higher there are some additional steps, as IBM changed some things when it comes to packages; you can find the info here: http://www-01.ibm.com/support/docview.wss?uid=nas8N1015556
Or tell your ODBC/JDBC/.Net Data adapter/provider to not use packages. This is probably less desirable as there are performance benefits to packages.

Postgres: "ERROR: cached plan must not change result type"

This exception is being thrown by the PostgreSQL 8.3.7 server to my application.
Does anyone know what this error means and what I can do about it?
ERROR: cached plan must not change result type
STATEMENT: select code,is_deprecated from country where code=$1
I figured out what was causing this error.
My application opened a database connection and prepared a SELECT statement for execution.
Meanwhile, another script was modifying the database table, changing the data type of one of the columns being returned in the above SELECT statement.
I resolved this by restarting the application after the database table was modified. This reset the database connection, allowing the prepared statement to execute without errors.
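To make the failure mode above concrete, here is a minimal sketch that reproduces it with pgjdbc. The country table, the column type change, and the connection URL are made up for illustration; the repeated executions matter because the driver only switches to a server-side prepared statement after several uses (five by default):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Statement;

public class CachedPlanRepro {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:postgresql://localhost:5432/testdb?user=test&password=test";
        try (Connection app = DriverManager.getConnection(url);
             Connection ddl = DriverManager.getConnection(url)) {

            try (PreparedStatement ps = app.prepareStatement(
                    "SELECT code, is_deprecated FROM country WHERE code = ?")) {
                ps.setString(1, "FI");
                // Execute enough times that pgjdbc promotes the statement to a
                // server-side prepared statement (prepareThreshold is 5 by default).
                for (int i = 0; i < 6; i++) {
                    ps.executeQuery().close();
                }

                // A second session changes the result type of the cached plan.
                try (Statement st = ddl.createStatement()) {
                    st.execute("ALTER TABLE country "
                            + "ALTER COLUMN is_deprecated TYPE text USING is_deprecated::text");
                }

                // This execution now fails with:
                // ERROR: cached plan must not change result type
                ps.executeQuery().close();
            }
        }
    }
}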
I'm adding this answer for anyone landing here by googling ERROR: cached plan must not change result type when trying to solve the problem in the context of a Java / JDBC application.
I was able to reliably reproduce the error by running schema upgrades (i.e. DDL statements) while my back-end app that used the DB was running. If the app was querying a table that had been changed by the schema upgrade (i.e. the app ran queries on a changed table both before and after the upgrade), the Postgres driver would return this error because it apparently caches some schema details.
You can avoid the problem by configuring your pgjdbc driver with autosave=conservative. With this option, the driver will be able to flush whatever details it is caching and you shouldn't have to bounce your server or flush your connection pool or whatever workaround you may have come up with.
Reproduced on Postgres 9.6 (AWS RDS) and my initial testing seems to indicate the problem is completely resolved with this option.
Documentation: https://jdbc.postgresql.org/documentation/head/connect.html#connection-parameters
You can look at the pgjdbc Github issue 451 for more details and history of the issue.
JRuby ActiveRecords users see this: https://github.com/jruby/activerecord-jdbc-adapter/blob/master/lib/arjdbc/postgresql/connection_methods.rb#L60
Note on performance:
As per the performance issues reported in the pgjdbc issue linked above, you should do some performance/load/soak testing of your application before switching this on blindly.
When I did performance testing on my own app, running on an AWS RDS Postgres 10 instance, enabling the conservative setting did result in extra CPU usage on the database server. It wasn't much, though: I could only see the autosave functionality show up as a measurable amount of CPU after I had tuned every single query my load test used and started pushing the load test hard.
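For plain JDBC users, here is a minimal sketch of enabling the option; the host, database, and credentials are placeholders, and the parameter can go either in the connection URL or in the driver Properties:

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class AutosaveConservativeExample {
    public static void main(String[] args) throws Exception {
        // Placeholder host/database; the parameter can also be appended to the URL:
        // jdbc:postgresql://localhost:5432/mydb?autosave=conservative
        String url = "jdbc:postgresql://localhost:5432/mydb";

        Properties props = new Properties();
        props.setProperty("user", "myuser");          // placeholder credentials
        props.setProperty("password", "mypassword");
        // With autosave=conservative, pgjdbc sets a savepoint before each query and
        // retries when it hits errors such as "cached plan must not change result type".
        props.setProperty("autosave", "conservative");

        try (Connection con = DriverManager.getConnection(url, props)) {
            System.out.println("connected, autosave=conservative is active");
        }
    }
}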
For us, we were facing a similar issue. Our application works across multiple schemas, and whenever we made schema changes, this issue started occurring.
Setting the prepareThreshold=0 parameter in the JDBC connection settings stops the driver from using server-side prepared statements, so there is no cached plan to invalidate. This solved it for us.
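As a sketch with placeholder connection details, the parameter is passed like any other pgjdbc option:

import java.sql.Connection;
import java.sql.DriverManager;

public class PrepareThresholdExample {
    public static void main(String[] args) throws Exception {
        // prepareThreshold=0 tells pgjdbc never to switch to server-side prepared
        // statements, so there is no server-side cached plan left to invalidate.
        // Host, database, and credentials are placeholders.
        String url = "jdbc:postgresql://localhost:5432/mydb"
                + "?user=myuser&password=mypassword&prepareThreshold=0";
        try (Connection con = DriverManager.getConnection(url)) {
            System.out.println("connected with prepareThreshold=0");
        }
    }
}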
I got this error; manually running the failing select query once cleared it for me.