ERROR: (gcloud.sql.export.sql) HTTPError 409: Operation failed because another operation was already in progress - google-cloud-sql

I'm running SQL exports (backups) via Jenkins, and on a regular basis I receive:
"ERROR: (gcloud.sql.export.sql) HTTPError 409: Operation failed because another operation was already in progress. ERROR: (gcloud.sql.operations.wait) argument OPERATION [OPERATION ...]: Must be specified."
I'm trying to determine where I can see which jobs are causing this to fail.
I've tried extending the gcloud sql operations wait --timeout to 1600, with no luck:
gcloud sql operations wait --timeout=1600

To wait for an operation, you need to specify the ID of the operation, as #PYB said. Here's how you can do that programmatically, like in a Jenkins script:
$ gcloud sql operations list --instance=$DB_INSTANCE_NAME --filter='NOT status:done' --format='value(name)' | xargs -r gcloud sql operations wait
$ gcloud sql ... # whatever you need to do
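To put both steps together in a single Jenkins shell step, here is a minimal sketch built from the commands above (assuming hypothetical $DB_INSTANCE_NAME, $BUCKET, $DUMP_FILE and $DB_NAME variables of your own):

#!/usr/bin/env bash
set -euo pipefail

# Block until any operation still running on the instance has finished;
# xargs -r skips the wait entirely when nothing is pending.
gcloud sql operations list --instance="$DB_INSTANCE_NAME" \
  --filter='NOT status:done' --format='value(name)' \
  | xargs -r gcloud sql operations wait --timeout=1600

# The export should now no longer collide with an in-progress operation.
gcloud sql export sql "$DB_INSTANCE_NAME" "gs://$BUCKET/$DUMP_FILE" --database="$DB_NAME"

Note that there is still a small race window between the wait and the export if another job starts an operation in between, so keeping all admin operations for one instance on a single Jenkins executor is the more robust fix.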

There are two errors here that could be affecting you. The first is that an administrative operation is starting before the previous one has completed. Reading through this “Best Practices” doc for Cloud SQL will help you on that front:
https://cloud.google.com/sql/docs/mysql/best-practices#admin
Specifically, in the Operations tab you can see the operations that are running.
Finally, the [OPERATION] argument is missing from the command “gcloud sql operations wait --timeout=1600”. See the documentation on that command here: https://cloud.google.com/sdk/gcloud/reference/sql/operations/wait
OPERATION is the name of the running operation, and if you wish to list all instance operations to get the right name, you can use this command:
https://cloud.google.com/sdk/gcloud/reference/sql/operations/list.
Operation names are 36-character strings in hexadecimal (UUID) format, so your command should look something like this:
“gcloud sql operations wait aaaaaaaa-0000-0000-0000-000000000000 --timeout=1600”
Cheers

I have the same problem during a long running import:
gcloud sql import sql "mycompany-mysql-1" $DB_BACKUP_PATH --database=$DB_NAME -q
Does it really mean that if the import runs for an hour, I am not able to create databases during that time? Really?
gcloud sql databases create $DB_NAME --instance="mycompany-mysql-1" --async
This is a big issue if you use gcloud inside CI/CD! Anyone with an easy solution?
My idea until now (see the sketch below):
download the backup to the CI/CD runner from the cloud bucket
connect to MySQL over the CLI and import the dump that way
But this means that whenever two CI/CD jobs want to run an operation at the same time, one of them will fail or has to wait. Very sad, if I've got that right.
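A rough sketch of that idea, assuming the Cloud SQL Auth Proxy v1 binary and hypothetical $PROJECT, $REGION, $DB_USER, $DB_PASSWORD and $DB_BACKUP_PATH values; the import then goes through a normal MySQL client session instead of the admin API, so it does not block other gcloud sql operations:

#!/usr/bin/env bash
set -euo pipefail

# Pull the dump from the bucket onto the CI/CD runner.
gsutil cp "$DB_BACKUP_PATH" ./dump.sql

# Open a local tunnel to the instance (Cloud SQL Auth Proxy v1 syntax).
./cloud_sql_proxy -instances="$PROJECT:$REGION:mycompany-mysql-1=tcp:3306" &
PROXY_PID=$!
sleep 5  # crude wait for the proxy to start listening

# Import over a plain MySQL connection.
mysql --host=127.0.0.1 --user="$DB_USER" --password="$DB_PASSWORD" "$DB_NAME" < ./dump.sql

kill "$PROXY_PID"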

Related

GCloud Compute Project-Info Add-Metadata Hangs

The following command doesn't exit on my system:
gcloud compute project-info add-metadata --metadata=itemname=itemvalue
I am using powershell on windows, and I've also tested on a linux container in docker. In both environments, the metadata is updated, but the command never terminates.
If I provide an invalid key, or update to the existing value, I do get the output: No change requested; skipping update for [project]. and the program exits. Performing an actual update produces the hang.
I need this command to terminate so that I can use it in a script. I would like to be able to check the exit code to ensure the update occurred successfully.
You aren't patient enough. In large projects, this operation can take significant time to process. Give the script several minutes to complete.
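If you cannot wait indefinitely in a script, one workaround on the Linux side (a sketch, not a gcloud feature) is to bound the call with coreutils' timeout and verify the result separately:

#!/usr/bin/env bash
# Give the update five minutes; timeout exits with 124 if it had to kill the command.
if ! timeout 300 gcloud compute project-info add-metadata --metadata=itemname=itemvalue; then
  echo "add-metadata did not return in time, verifying the value instead..." >&2
fi

# Check the project metadata independently of whether the command returned.
gcloud compute project-info describe | grep -q itemvalue && echo "metadata update confirmed"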

ERROR: cannot execute SELECT in a read-only transaction when connecting to DB

When trying to connect to my Amazon PostgreSQL DB, I get the above error. With pgAdmin, I get "error saving properties".
I don't see why simply connecting to a server would involve any write actions.
There are several reasons why you can get this error:
The PostgreSQL cluster is in recovery (or is a streaming replication standby). You can find out if that is the case by running
SELECT pg_is_in_recovery();
The parameter default_transaction_read_only is set to on. Diagnose with
SHOW default_transaction_read_only;
The current transaction has been started with
START TRANSACTION READ ONLY;
You can find out if that is the case using the undocumented parameter
SHOW transaction_read_only;
If you understand all that but still wonder why you are getting this error, even though you are not aware of attempting any data modification, it would mean that the application you use to connect is trying to modify something (but pgAdmin shouldn't do that).
In that case, look into the log file to find out what statement causes the error.
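To run all three checks in one go from the shell, here is a small convenience sketch (the connection string is a placeholder for your own RDS endpoint and credentials):

psql "host=mydb.example.rds.amazonaws.com user=myuser dbname=mydb" <<'SQL'
SELECT pg_is_in_recovery();          -- true if the cluster is a standby / in recovery
SHOW default_transaction_read_only;  -- 'on' if the server-wide default is read-only
SHOW transaction_read_only;          -- state of the current transaction
SQL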
This was a bug which is now fixed; the fix will be available in the next release.
https://redmine.postgresql.org/issues/3973
If you want to try then you can use Nightly build and check: https://www.postgresql.org/ftp/pgadmin/pgadmin4/snapshots/2019-02-17/

How can you get super privilege for a google sql cloud instance?

I am working on Google App Engine. In Google Cloud SQL I have created an instance, and whenever I import my SQL file into that instance it shows me an error like the one below:
ERROR 1227 (42000) at line 1088: Access denied; you need (at least one of) the SUPER privilege(s) for this operation
Operation failed with exitcode 1
What do I need to do to get the SUPER privilege for my Cloud SQL instance?
You can't have SUPER privileges in Cloud SQL due to its restrictions [1]. Here [2] are some tips for importing files that might help.
[1] https://cloud.google.com/sql/faq
[2] https://cloud.google.com/sql/docs/import-export#import
The statement
DEFINER=`username`@`%`
is the issue in your backup dump.
The workaround is to remove all such entries from the SQL dump file and then import the data from the GCP console.
Use this command to edit the dump file and generate a new, cleaned one:
sed -e 's/DEFINER=`<username>`@`%`//g' DUMP_FILE_NAME.sql > NEW-CLEANED-DUMP.sql
After the entries have been removed from the dump, you can try reimporting.
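If the dump contains more than one definer, a more general pattern (a sketch, assuming the backquoted DEFINER=`user`@`host` form that mysqldump writes) strips them all in one pass:

# Remove every DEFINER=`user`@`host` clause regardless of user and host.
sed -E 's/DEFINER=`[^`]+`@`[^`]+`//g' DUMP_FILE_NAME.sql > NEW-CLEANED-DUMP.sql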
You can edit the SQL dump file you are importing and remove the DEFINER clauses from it.
I had the same problem a few days ago.
I deleted "DEFINER=your_username@localhost" from the MySQL dump, tried the import afterwards, and it worked.

postgres libpq: synchronous COPY mysteriously cancelled "due to user request"

My application is using libpq to write data to Postgres using the COPY API. After over 900000 successful COPY+commit (each containing a single row, don't ask) actions, one errored out with the following:
ERROR: canceling statement due to user request
CONTEXT: COPY [...]
My code never calls PQcancel or related friends, which I think is precluded anyway by the fact that libpq is being used synchronously and my app is not multi-threaded.
libpq v8.3.0
Postgres v9.2.4
Is there any reasonable explanation for what might have caused the COPY to be cancelled? Will upgrading libpq (as I have done in more recent versions of my application) be expected to improve the situation?
The customer reports that the Postgres server may have been shut down when this error was reported, but I'm not convinced since the error text is pretty specific.
That error will be emitted when you:
send a PQcancel
use pg_cancel_backend
hit Control-C in psql (which invokes PQcancel)
send SIGINT to a backend, e.g. kill -INT or kill -2.
My initial answer was incorrect in claiming that the following also produce the same error. They don't; these:
pg_terminate_backend
pg_ctl shutdown -m fast
will emit a different error FATAL: terminating connection due to administrator command.
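For illustration, the pg_cancel_backend case can be reproduced from another session like this (a sketch; on 9.2 and later the pg_stat_activity columns are pid and query):

# Cancel every backend currently running a COPY, from a separate session.
psql -c "SELECT pg_cancel_backend(pid)
         FROM pg_stat_activity
         WHERE query LIKE 'COPY%' AND pid <> pg_backend_pid();"
# The cancelled session then reports: ERROR: canceling statement due to user request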

App to monitor PostgreSQL queries in real time?

I'd like to monitor the queries getting sent to my database from an application. To that end, I've found pg_stat_activity, but more often than not, the rows which are returned read "<IDLE> in transaction". I'm either doing something wrong, am not fast enough to see the queries come through, am confused, or all of the above!
Can someone recommend the most idiot-proof way to monitor queries running against PostgreSQL? I'd prefer some sort of easy-to-use UI based solution (example: SQL Server's "Profiler"), but I'm not too choosy.
PgAdmin offers a pretty easy-to-use tool called server monitor
(Tools -> Server Status)
With PostgreSQL 8.4 or higher you can use the contrib module pg_stat_statements to gather query execution statistics of the database server.
Run the SQL script of this contrib module, pg_stat_statements.sql (on Ubuntu it can be found in /usr/share/postgresql/<version>/contrib), in your database and add this sample configuration to your postgresql.conf (requires a restart):
custom_variable_classes = 'pg_stat_statements'
pg_stat_statements.max = 1000
pg_stat_statements.track = top # top,all,none
pg_stat_statements.save = off
To see what queries are executed in real time you might want to just configure the server log to show all queries or queries with a minimum execution time. To do so set the logging configuration parameters log_statement and log_min_duration_statement in your postgresql.conf accordingly.
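Once pg_stat_statements is collecting data, a quick way to see the heaviest queries is a query like this (a sketch; on PostgreSQL 13 and later the column is total_exec_time rather than total_time):

psql -d yourdb -c "SELECT calls, total_time, query
                   FROM pg_stat_statements
                   ORDER BY total_time DESC
                   LIMIT 10;"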
pg_activity is what we use.
https://github.com/dalibo/pg_activity
It's a great tool with a top-like interface.
You can install and run it on Ubuntu 21.10 with:
sudo apt install pg-activity
pg_activity
If you are using Docker Compose, you can add this line to your docker-compose.yaml file:
command: ["postgres", "-c", "log_statement=all"]
Now you can see Postgres query logs in docker-compose logs with
docker-compose logs -f
or if you want to see only postgres logs
docker-compose logs -f [postgres-service-name]
https://stackoverflow.com/a/58806511/10053470
I haven't tried it myself unfortunately, but I think that pgFouine can show you some statistics.
Although it seems it does not show you queries in real time but rather generates a report of queries afterwards, perhaps that still satisfies your needs?
You can take a look at
http://pgfouine.projects.postgresql.org/