Keep getting error "Create Service - Failed to fetch" on IBM Cloud's Db2 page - db2

I've tried, failed, deleted the database, and tried again 7 times now, and I get this error each time. I'm on the Lite plan and taking the IBM Data Science Certification course, and I can't get past this part. Any assistance would be greatly appreciated.
I've deleted the database (you can only have one on the Lite plan, I believe) and retried several times.

I just verified that I am able to create a fully working Lite instance on my end. Is it possible that it's a networking issue on your side? Was that the full error message? It seems to be cut off. In what region and datacenter are you trying to create the service instance?


Security processing failed with reason "19" ("USERID DISABLED or RESTRICTED"). SQLSTATE=08001

I am running into this error when trying to access Db2 through my code, as well as through the cloud console. I am using Db2 hosted on IBM Cloud.
Security processing failed with reason "19" ("USERID DISABLED or RESTRICTED"). SQLSTATE=08001
I am unable to run SQL queries, access any of my table data through the console, or perform any admin actions. I cannot figure out what the issue is, let alone how to solve it. What could be causing this?
While I agree with mao, here is my finding (it worked for me with a free account) for anyone who ends up here with the same problem. As suggested in the discussion forums of the IBM Applied Data Science Capstone course, you need to create new service credentials for your Db2 database. If you do not have any important tables in your current database, it is even safer to delete it and recreate a new Db2 instance in either the London or Dallas region, followed by new service credentials. As of today, if you use the sqlalchemy package in Python, versions 1.4 and above are incompatible, so:
!pip uninstall -y sqlalchemy && pip install sqlalchemy==1.3.24
To find the location of your credentials on IBM Cloud, check the service's Connection credentials page.
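Once you have the new service credentials and the downgraded sqlalchemy, a minimal connection sketch looks like the following. This assumes the ibm_db_sa dialect is installed (pip install ibm_db_sa); the user, password, host, and port are placeholders to be replaced with values from your own credentials, and the exact SSL parameter can vary by driver version:

import sqlalchemy

# Placeholder URL; copy user, password, host, and port from your new service credentials.
engine = sqlalchemy.create_engine(
    "ibm_db_sa://myuser:mypassword@myhost.databases.appdomain.cloud:32733/bludb?Security=SSL"
)

with engine.connect() as conn:
    # Trivial smoke test against Db2's dummy table.
    print(conn.execute("SELECT 1 FROM SYSIBM.SYSDUMMY1").fetchone())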
This is not a question for Stack Overflow because it is not about programming Db2. It is an operational matter for IBM.
Some people have reported this symptom with accounts that were created long ago, or that failed to migrate to new versions of Db2 on Cloud, or that expired before migrations through lack of use or lack of renewals.
If you pay for an IBM managed service, contact IBM Cloud support to resolve such problems.
If you have a free (Lite) account, you currently get no formal support. You can drop the service and create a new one, possibly at a different data centre, using a different email address if necessary.

CloudRun Suddenly got `Improper path /cloudsql/{SQL_CONNECTION_NAME} to connect to Postgres Cloud SQL instance "{SQL_CONNECTION_NAME}"`

We have been running a service using NestJS and TypeORM on fully managed Cloud Run without issues for several months. Yesterday afternoon we started getting Improper path /cloudsql/{SQL_CONNECTION_NAME} to connect to Postgres Cloud SQL instance "{SQL_CONNECTION_NAME}" errors in our logs.
We didn't make any server/SQL changes around that time. Currently there is no impact on the service, so we are not sure whether this is a serious issue.
This error is not from our code, and our third-party modules shouldn't know whether we use Cloud SQL, so I have no idea where these errors come from.
My assumption is that the Cloud SQL proxy, or whatever SQL client Cloud Run uses, is producing this error. We use the --add-cloudsql-instances flag when deploying with the gcloud run deploy CLI command.
Link to the issue here
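For context on the path in the message: the service here is NestJS/TypeORM, but the mechanism is the same in any stack. The --add-cloudsql-instances flag mounts a Unix socket for the instance under /cloudsql/<connection name> inside the container, and clients connect through that socket rather than a hostname. A minimal Python sketch of the same pattern (the environment variable names are hypothetical placeholders):

import os
import psycopg2

# Cloud Run mounts the instance socket at /cloudsql/PROJECT:REGION:INSTANCE
# when the service is deployed with --add-cloudsql-instances.
conn = psycopg2.connect(
    host=f"/cloudsql/{os.environ['SQL_CONNECTION_NAME']}",  # socket directory, not a hostname
    dbname=os.environ.get("DB_NAME", "postgres"),
    user=os.environ["DB_USER"],
    password=os.environ["DB_PASS"],
)
print(conn.get_dsn_parameters())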
This log was recently added in the Cloud Run data path to provide more context for debugging CloudSQL connectivity issues. However, the original logic was overly aggressive, emitting this message even for properly working CloudSQL connections. Your application is working correctly and should not receive this warning.
Thank you for reporting this issue. The fix is ready and should roll out soon. You should not see this message anymore after the fix is out.

Debezium on AWS RDS Postgres with rds.force_ssl not working well

Has anyone managed to get Debezium to work over AWS RDS Postgres with rds.force_ssl turned on in the parameter group?
The connector seems to work for a bit, and then we begin to receive errors like "Database connection failed when writing to copy" and "Exception thrown while calling task.commit()".
I have scoured the web searching for this issue, and I see that many people have encountered it and that many Jira issues have been opened about it.
The response is generally "check your network configuration" or "disable SSL".
I just can't get it to work for some reason, and obviously disabling encryption in transit is not an option in production use cases (at least not in ours).
I would appreciate any kind of help or insight into how to solve this!
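For reference, the setup in question boils down to a Debezium Postgres connector whose client-side SSL mode matches rds.force_ssl on the server. A hedged sketch of registering such a connector through the Kafka Connect REST API (the connector name, host, and credentials are placeholders, and newer Debezium versions may require additional properties such as topic.prefix):

import json
import requests

config = {
    "name": "rds-postgres-connector",  # placeholder name
    "config": {
        "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
        "database.hostname": "mydb.abc123.eu-west-1.rds.amazonaws.com",  # placeholder
        "database.port": "5432",
        "database.user": "debezium",
        "database.password": "********",
        "database.dbname": "mydb",
        "plugin.name": "pgoutput",
        "database.sslmode": "require",  # client-side counterpart of rds.force_ssl=1
    },
}

# Kafka Connect's REST API listens on port 8083 by default.
resp = requests.post(
    "http://localhost:8083/connectors",
    headers={"Content-Type": "application/json"},
    data=json.dumps(config),
)
print(resp.status_code, resp.text)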

Cannot create Services in IBM Bluemix

Every time I try to create a service in IBM Bluemix (via the web console or the CLI), the following error message appears:
Creating service instance my-compose-for-mysql-service in org XXX / space XXX as XXX...
FAILED
Server error, status code: 502, error code: 10001, message: Service broker error: {"description"=>"Deployment creation failed - {\"errors\":\"insufficient_storage\",\"status\":507}"}
How can I free up storage or fix the error?
I have already tried the following steps:
Deleted all other spaces and apps
Deleted all services
Reinstalled the CLI
This error message indicates that the Compose backend has reached full capacity and does not have enough resources to create your service.
The Compose engineers will be aware of this issue and will be working towards adding more capacity to the backend.
Please wait and try again later, or, if it is urgent, raise a support ticket.
Are you using the experimental version of the MySQL service, which has been retired? The experimental instances were disabled on August 7, 2017. There is a newer production version of the Compose for MySQL service, available here: https://console.ng.bluemix.net/catalog/services/compose-for-mysql/
For more information about the experimental service retirement and its replacement, see: https://www.ibm.com/blogs/bluemix/2017/06/bluemix-experimental-services-sunset/
Okay, after reaching out to various support agents:
The problem is not a general bug. I was using a company-related account that accumulates all databases of the company domain in one sandbox, which had simply run out of storage. Compose already seems to be working on it.
My workaround until the official fix: use a different, non-business account to host the database.

DB2 Transaction log is full. How to flush / clear it?

I'm working on an experiment for a course I'm taking about tuning DB2. I'm using EC2 from Amazon (AWS) to conduct the experiment.
My problem is that I have to test no compression against row compression in DB2, and to do that I've created a bash script that runs those experiments. But when I reach the compression part I get the error "Transaction log is full"; no matter how low I set the number of inserts, it complains about my transaction log.
I've scoured Google for a day now trying to find some way to flush or clear the log, or just get rid of it; I don't need it. I've tried to increase the size, but nothing has helped.
I hope someone has an answer to this frustrating problem.
Thanks
- Mestika
There is no need to "clear the log" in DB2. When a transaction is rolled back, DB2 releases the log space used by the transaction.
If you've increased the log size and it has not helped, please post more information about what you're trying to do.
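If the log fills because the script does all of its inserts in a single unit of work, committing in batches releases log space as the job runs. A minimal sketch, assuming the ibm_db driver and a hypothetical table t1(c1 INT):

import ibm_db

# Placeholder connection string; adjust database, host, port, and credentials.
conn = ibm_db.connect(
    "DATABASE=sample;HOSTNAME=localhost;PORT=50000;UID=db2inst1;PWD=********;", "", ""
)
ibm_db.autocommit(conn, ibm_db.SQL_AUTOCOMMIT_OFF)

stmt = ibm_db.prepare(conn, "INSERT INTO t1 (c1) VALUES (?)")
for i in range(100000):
    ibm_db.execute(stmt, (i,))
    if i % 10000 == 9999:
        ibm_db.commit(conn)  # release the log space held by this unit of work
ibm_db.commit(conn)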
There is no need to restart. Just force the applications off using db2 force applications all.
Increase the active log file size (LOGFILSIZ), force the application connections off, and terminate the connections:
db2 force applications all
db2 update db cfg for sample using logfilsiz 5125
db2 force applications all
db2 terminate
db2 connect to sample
Run your job and monitor.
Just restart the instance; that will release the pending logs and you should be fine.