I tried to create a new service and for some reason the status is stuck at "provision in progress"
I cannot delete the service or create a new one, so my project is stuck: the Lite plan allows only one DB service.
I am not sure what I should do next.
Help is really appreciated.
I experienced the same issue, but it was resolved after going back to the Dashboard and selecting the Db2 service again (while the status was still "provision in progress").
We have been running a service using NestJS and TypeORM on fully managed Cloud Run without issues for several months. Yesterday afternoon we started getting "Improper path /cloudsql/{SQL_CONNECTION_NAME} to connect to Postgres Cloud SQL instance {SQL_CONNECTION_NAME}" errors in our logs.
We didn't make any server/SQL changes around this timestamp. Currently there is no impact to the service so we are not sure if this is a serious issue.
This error is not from our code, and our third-party modules shouldn't know whether we use Cloud SQL, so I have no idea where these errors come from.
My assumption is that the Cloud SQL proxy, or whatever SQL client Cloud Run uses internally, is emitting this error. We use the --add-cloudsql-instances flag when deploying with the gcloud run deploy CLI command.
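For reference, a hedged sketch of such a deploy command (the service and image names are placeholders; $SQL_CONNECTION_NAME is the instance connection name in project:region:instance form):

gcloud run deploy my-service \
  --image gcr.io/my-project/my-service \
  --add-cloudsql-instances "$SQL_CONNECTION_NAME" \
  --region us-central1

This mounts the instance's Unix socket under /cloudsql/{SQL_CONNECTION_NAME}, which is the path the log message refers to.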
Link to the issue here
This log was recently added in the Cloud Run data path to provide more context for debugging CloudSQL connectivity issues. However, the original logic was overly aggressive, emitting this message even for properly working CloudSQL connections. Your application is working correctly and should not receive this warning.
Thank you for reporting this issue. The fix is ready and should roll out soon. You should not see this message anymore after the fix is out.
I've tried, failed, deleted the database, and tried again 7 times now, and I get this error each time. I'm on the Lite plan and taking the IBM Data Science Certification course, and I can't get past this part. Any assistance would be greatly appreciated.
I deleted the database (you can only have one on the Lite plan, I believe) and retried several times.
I just verified that I am able to create a fully working Lite instance on my end. Is it possible that it's a networking issue on your end? Was that the full error message? It seems to be cut off. In what region and datacenter are you trying to create the service instance?
I have a Node-RED flow in Bluemix that also uses dash-db nodes. Each time there is dashDB maintenance (or for some other reason), the DB connection gets lost and all writes fail. When I redeploy, everything is fine again. Bluemix shows only the last few hours of logs, so I am finding it very difficult to debug. Meanwhile, I was thinking of doing an automatic redeploy once I detect this issue, to avoid losing writes.
Can this be done using GET /flows followed by POST /flows from within the same Node-RED app itself?
It would be worth raising this as an issue with the dash-db nodes so the author can help address it - https://github.com/smchamberlin/node-red-nodes-cf-sqldb-dashdb
Yes, you can post back the flows. The full admin http api is documented here: http://nodered.org/docs/api/admin/ - have a look at the 'reload' option on /flows.
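As a minimal sketch (assuming the admin API is served from the app's own root and adminAuth is off; otherwise add a bearer token), a POST to /flows with the Node-RED-Deployment-Type header set to reload makes the runtime reload the flows from storage; the posted flow configuration is ignored, so an empty array suffices:

curl -X POST "https://<your-app>.mybluemix.net/flows" \
  -H "Content-Type: application/json" \
  -H "Node-RED-Deployment-Type: reload" \
  -d "[]"

You could trigger the same request from an http request node inside the flow itself once you detect the failed writes.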
Here's the scenario:
I can SSH into my Chef server, but I can't SSH into any of the Chef clients. So this is how I work: I have a workstation to change or create roles. All the chef-clients run as daemons, so when they wake up, they notice state changes and start updating themselves.
Now I need to configure code deployments on these clients. I was thinking I could use the application cookbook for that, and add recipes to the roles from my workstation. But won't that result in a deployment every time the chef-clients wake up and find revision changes? I want an on-demand kind of deployment: I want to deploy only when the code is deployment-ready, not for any other commit up to that point.
How do I achieve this?
A couple of questions:
When is your code deployment-ready? How would you know? If it's a repeatable process, could you not code that into a recipe? If it's not a repeatable process, you need to make it one so that it can be automated.
I.e., run Cucumber tests, and if they all pass then deploy, else do nothing?
We feed from Artifactory and use its web API to check the latest installer available to us. If it's the same as the one previously installed (determined by checking/creating a registry key), we tell the user this build is already installed and skip it; if it's not the same, we install. Now, I know this isn't the exact same scenario, but it feels to me like some custom code is going to be needed here.
Either that, or leverage data bag values to say install=true or false depending on the state of the code. You would update project A's install item in the data bag when you want to deploy; the rest of the time it's set to false. The recipe would only proceed if the value was true, as in the sketch below.
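A sketch of that data bag approach from the workstation side (all names here are hypothetical; the recipe would guard on data_bag_item('deploy', 'projecta')['install'] being true):

# Hypothetical item file deploy/projecta.json:
#   { "id": "projecta", "install": true }
knife data bag create deploy                          # one-time setup
knife data bag from file deploy deploy/projecta.json  # flip install to true
# After the clients converge, set "install" back to false and upload again.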
Why not have a branch where HEAD is always ready to be deployed? Only push to this branch when your code is ready to go out into the world. Then you don't have to worry about intermediate, unstable states of your repository being synced by chef. Of course, you still have to wait for a client to wake up and sync before you see your changes, so if latency is a problem this won't work.
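A minimal sketch of that setup (the branch name deploy is an assumption; point the recipe's deploy resource at this branch):

git checkout deploy
git merge --ff-only master   # refuses unless deploy can fast-forward to a release-ready master
git push origin deploy       # clients pick this up on their next wake-up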
I'm working on an experiment for a course I'm taking about tuning DB2. I'm using EC2 from Amazon (AWS) to conduct the experiment.
My problem is that I have to test no compression against row compression in DB2, and to do that I've created a bash script that runs those experiments. But when I reach the compression part I get the error "Transaction log is full", and no matter how low I set the number of inserts, it keeps complaining about my transaction log.
I've scoured Google for a day now trying to find some way to flush or clear the log, or just get rid of it; I don't need it. I've tried to increase the size, but nothing has helped.
Please, I hope someone has an answer to solve this frustrating problem
Thanks
- Mestika
There is no need to "clear the log" in DB2. When a transaction is rolled back, DB2 releases the log space used by the transaction.
If you've increased the log size and it has not helped, please post more information about what you're trying to do.
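As a starting point, a minimal sketch of inspecting and raising the log configuration (the database name sample and the values are assumptions; also make sure your script commits periodically, since DB2 can only reuse log space behind the oldest open transaction):

db2 get db cfg for sample | grep -i log          # inspect current log settings
db2 update db cfg for sample using LOGFILSIZ 8192 LOGSECOND 12
db2 terminate                                    # the LOGFILSIZ change takes effect once
                                                 # the database deactivates and reactivates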
There is no need to restart. Just force off the applications using db2 force applications all.
Increase the active log file size, force off the application connections, and terminate.
Then try to run the job:
db2 force applications all                          # disconnect all applications
db2 update db cfg for sample using logfilsiz 5125   # raise the log file size (in 4 KB pages)
db2 force applications all                          # drop any connections made in the meantime
db2 terminate                                       # end the CLP back-end process
db2 connect to sample                               # reconnect; the new log size is now in effect
Run your job and monitor.
Just restart the instance; it will release the pending logs and you should be fine.