Google Cloud SQL export failing with error "Could not complete the operation" - google-cloud-storage

I have a Google Cloud Storage bucket to which I want to export my Google Cloud SQL database. I go to the Export tab, select a location in my bucket, give a filename, and choose the database I want to export, but I'm always greeted with "Could not complete the operation".
It has been happening to me for the last 2 days. This flow had worked a couple of weeks ago, and I haven't tweaked around with the settings since then.
Is there a way I can get a more descriptive response so I can identify the error? Also, how do I export my Cloud SQL database until then? Do I connect with the psql client and figure out a way from there?
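In the meantime I'm thinking of falling back to the command line; a rough sketch of what I mean (the instance, bucket, and database names below are placeholders):

# Show recent operations on the instance, hopefully including the failed export and its error
gcloud sql operations list --instance=my-instance --limit=10

# Run the same export from the CLI (writes a gzipped SQL dump to the bucket)
gcloud sql export sql my-instance gs://my-bucket/backup.sql.gz --database=my-database

# Or bypass the Cloud SQL export entirely and dump with the standard Postgres client tools
pg_dump -h INSTANCE_IP -U postgres -d my-database -f backup.sql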

Related

How to log SQL queries on a Google Cloud SQL PostgreSQL 11 instance?

I have to log all DDL and DML queries executed on a Google Cloud SQL PostgreSQL instance.
I checked a lot of websites, but there is no clear information available. I tried using the pgAudit extension, but that is not supported by Cloud SQL.
Can someone please suggest the extension to be used or any other way of logging SQL queries?
Also, if the user logins can be logged, then that will be helpful, too.
Short Answer - Database Flags
The solution provided in the other answer can be used if PostgreSQL is installed locally or if we have access to the server machine. On a Google Cloud SQL instance, however, postgresql.conf cannot be accessed directly.
I found that this can be achieved on a Google Cloud SQL instance by setting the various parameters given in this link - PostgreSQL configuration parameters - as database flags.
Note: Not all of the parameters are supported, so verify them in Google's official documentation given below.
Google Cloud Database Flags for PostgreSQL
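For example, the same logging parameters quoted in the next answer can be applied with gcloud as database flags (the instance name is a placeholder; note that --database-flags replaces any flags already set on the instance, and the change may restart it):

# Set the PostgreSQL logging parameters as Cloud SQL database flags
gcloud sql instances patch my-instance \
  --database-flags=log_statement=mod,log_connections=on,log_disconnections=on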
Add in postgresql.conf:
log_statement=mod
https://www.postgresql.org/docs/12/runtime-config-logging.html says it "logs all ddl statements, plus data-modifying statements such as INSERT, UPDATE, DELETE, TRUNCATE, and COPY FROM. PREPARE, EXECUTE, and EXPLAIN ANALYZE statements are also logged if their contained command is of an appropriate type."
To log connections and disconnections, add in postgresql.conf:
log_connections=on
log_disconnections=on
On October 12, 2020, Google Cloud SQL for PostgreSQL added support for pgAudit. Please check these docs for more information.
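Roughly, enabling it looks like this; the instance, host, and database names are placeholders, and the flag names should be double-checked against those docs:

# Turn on pgAudit and choose what it logs, then create the extension in the database
gcloud sql instances patch my-instance \
  --database-flags=cloudsql.enable_pgaudit=on,pgaudit.log=all
psql "host=INSTANCE_IP dbname=my-database user=postgres" -c "CREATE EXTENSION pgaudit;"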

How to take backup of Tableau Server Repository (PostgreSQL)

We are using version 2018.3 of Tableau Server. Server stats such as user logins and other metrics are logged to a PostgreSQL DB, and that data is cleared regularly after one week.
Is there any API available in Tableau to connect to the DB and back up the data somewhere like HDFS or a location on the Linux server?
Kindly let me know if there are any other ways besides an API as well.
Thanks.
You can enable access to the underlying PostgreSQL repository database with the tsm command. Here is a link to the documentation for your (older) version of Tableau
https://help.tableau.com/v2018.3/server/en-us/cli_data-access.htm#repository-access-enable
It would be good security practice to limit access to only the whitelisted machines that need it, create or use an existing read-only account to access the repository, and ideally disable access when your admin tasks are complete (i.e. enable access, run your query, disable access).
This way you can have any SQL client code you wish query the repository, create a mirror, create reports, run auditing procedures - whatever you like.
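A rough sketch of that flow; the readonly user, the workgroup database, and port 8060 are the documented defaults, so verify them for your version, and the host and password are placeholders:

# Open up remote access to the repository with the built-in read-only account
tsm data-access repository-access enable --repository-username readonly --repository-password <PASSWORD>

# Pull a backup of the repository with standard Postgres tooling, then close access again
pg_dump -h <TABLEAU_SERVER_HOST> -p 8060 -U readonly -d workgroup -f workgroup_backup.sql
tsm data-access repository-access disable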
Personally, before writing significant custom code, I'd first check whether the info you want is already available another way: in one of the built-in admin views, via the REST API, with the public-domain LogShark or TabMon tools, with the Server Management Add-on (for more recent versions of Tableau), or possibly the new Data Catalog.
I know at least one server admin who clones the whole Postgres repository database periodically so he can analyze stats offline; I'm not sure what approach he uses to clone it. So you have several options.

Unable to connect from BigQuery job to Cloud SQL Postgres

I am not able to use the federated query capability from Google BigQuery to Google Cloud SQL Postgres. Google recently announced this federated query capability for BigQuery in beta.
I use the EXTERNAL_QUERY statement as described in the documentation, but I am not able to connect to my Cloud SQL instance. For example, with the query
SELECT * FROM EXTERNAL_QUERY('my-project.europe-north1.my-connection', 'SELECT * FROM mytable;');
or
SELECT * FROM EXTERNAL_QUERY("my-project.europe-north1.pg1", "SELECT * FROM INFORMATION_SCHEMA.TABLES;");
I receive this error:
Invalid table-valued function EXTERNAL_QUERY Connection to PostgreSQL server failed: server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request.
Sometimes the error is this:
Error encountered during execution. Retrying may solve the problem.
I have followed the instructions on the page https://cloud.google.com/bigquery/docs/cloud-sql-federated-queries and enabled the BigQuery Connection API. Some documents use different quotation marks for EXTERNAL_QUERY (“ or ‘ or ‘’’) but all the variants end with the same result.
I cannot see any errors in the Stackdriver Postgres logs. How can I correct this connectivity error? Any suggestions on how to debug it further?
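In case it helps with debugging, the connection itself can also be created and inspected from the command line with the bq tool; all project, region, database, and credential values below are placeholders:

bq mk --connection --display_name='my-connection' --connection_type='CLOUD_SQL' \
  --properties='{"instanceId":"my-project:europe-north1:my-instance","database":"mydb","type":"POSTGRES"}' \
  --connection_credential='{"username":"myuser","password":"mypassword"}' \
  --project_id=my-project --location=europe-north1 my-connection

# Inspect an existing connection and the properties behind it
bq show --connection my-project.europe-north1.my-connection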
Just adding another possibility for people using Private IP only Cloud SQL instances. I've just encountered that and was wondering why it was still not working after making sure everything else looked right. According to the docs (as of 2021-11-13): "BigQuery Cloud SQL federation only supports Cloud SQL instances with public IP connectivity. Please configure public IP connectivity for your Cloud SQL instance."
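A quick way to check, and if needed add, a public IP from the command line (the instance name is a placeholder):

# Check whether the instance has a public IPv4 address enabled
gcloud sql instances describe my-instance --format="value(settings.ipConfiguration.ipv4Enabled)"

# Assign a public IP if it only has a private one
gcloud sql instances patch my-instance --assign-ip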
I just tried it and it works, as long as the BigQuery query runs in the EU (as of today, 6 October, it works).
My example:
SELECT * FROM EXTERNAL_QUERY("projects/xxxxx-xxxxxx/locations/europe-west1/connections/xxxxxx", "SELECT * FROM data.datos_ingresos_netos")
Just substitute the first xxxx's with your project ID and the last ones with the name you gave to the connection in the BigQuery interface (not the Cloud SQL info; that goes into the query).
Unfortunately, BigQuery federated queries to Cloud SQL currently work only in US regions (September 2019). The documentation (https://cloud.google.com/bigquery/docs/cloud-sql-federated-queries) says it should also work in other regions, but this is not the case.
I tested the setup from the original question multiple times in EU and europe-north1 but was not able to get it working. When I changed the setup to US or us-central1, it worked!
Federated queries to Cloud SQL are in preview so the feature is evolving. Let's hope Google gets this working in other regions soon.
The BigQuery dataset and the Cloud SQL instance must be in the same region, or in the same location if the dataset is in a multi-region location such as US or EU.
Double-check this against the known issues listed in the documentation.
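A quick sanity check of the two locations (the dataset, project, and instance names are placeholders):

# Location of the BigQuery dataset
bq show --format=prettyjson my-project:my_dataset | grep '"location"'

# Region of the Cloud SQL instance
gcloud sql instances describe my-instance --format="value(region)"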
server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request.
This message usually means that you will find more information in the logs of the remote database server. No useful information could be sent to the "client" server because the connection disappeared, so there was no way to send it. So you have to look in the remote server's logs.
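For a Cloud SQL instance, the remote server's log is the postgres.log in Cloud Logging, which can be read roughly like this (the project and instance IDs are placeholders):

# Read the Postgres log of the Cloud SQL instance around the time of the failed query
gcloud logging read 'resource.type="cloudsql_database" AND resource.labels.database_id="my-project:my-instance" AND logName="projects/my-project/logs/cloudsql.googleapis.com%2Fpostgres.log"' --limit=50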

Google SQL instance stuck after operation "restore from backup"

After I proceeded to restore the database from an automated backup file generated on Mar 13, 2019, the SQL instance has been stuck in this state forever:
"Restoring from backup. This may take a few minutes. While this operation is running, you may continue to view information about the instance."
The database size is very small, less than 1MB.
For future users who experience problems like this, here is how you can handle it:
If you have a Google Cloud support package, file a support ticket directly with support for the quickest response.
Otherwise please file a private GCP issue describing the problem, remembering to include the project id and instance name.
However, Cloud SQL instances are monitored for stuck states like this, so the issue will often resolve itself within a few hours.
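If you do file an issue, including the stuck operation's ID and status usually helps; both can be pulled with gcloud (the instance name is a placeholder):

# Find the pending restore operation, its ID, and any error details
gcloud sql operations list --instance=my-instance --limit=5 --format=json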

How to find out why import fails on Google Cloud SQL

I generate a .sql file, on my laptop, that contains around 11 million insert statements into several tables.
Locally I run a MySQL database, into which I import this file. It takes a while, but it succeeds without any problems. The local MySQL version is:
mysql Ver 14.14 Distrib 5.6.16, for osx10.7 (x86_64) using EditLine wrapper
I want to import this file into a Google Cloud SQL instance. To do so, I first gzip the .sql file and upload it to a bucket in Google Cloud Storage.
Then I create a D0 pay-per-use instance (the least powerful / cheapest). I click 'Import' and enter the name of the file on cloud storage.
The import starts, but after a while (around a day) the import fails, stating: An unknown error occurred.
I tried this using a MySQL 5.5 and an experimental 5.6 instance; both fail at different inserts (I can see what the latest successful insert was).
My problem is, I cannot find out what MySQL thinks is the problem.
How can I ask the Google developer console to show me a log? I tried the Google APIs page, which has a 'Logs' tab, but it gives me "An error has occurred. Please retry later."
Maybe Google Cloud SQL has some limits on the insert statements that my local MySQL does not have?
One of the fields is a MEDIUMTEXT, whose values I believe can be larger than 65,536 bytes.
Any advice is appreciated.
---------- UPDATE -----------
I mailed the Cloud SQL team and they confirmed the problem was that the import timed out.
So indeed, 24 hours is the maximum time an import may take on Cloud SQL.
Solutions are: use a more powerful instance for the import (and use asynchronous replication), or split up the .sql in multiple parts.
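For the splitting option, something as simple as this works when each INSERT sits on its own line (file and bucket names are just examples):

# Split the dump into chunks of roughly one million statements each (assumes one statement per line)
split -l 1000000 dump.sql dump_part_

# Compress the chunks and upload them to the bucket for separate imports
gzip dump_part_*
gsutil cp dump_part_*.gz gs://my-bucket/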
Another approach is to use several value tuples per INSERT statement; just make sure the line does not exceed 4 MB, which is the value of max_allowed_packet on Cloud SQL. It speeds up the insert greatly.
In fact, this makes it possible to have the D0 instance import the file in a few hours, so I don't need to bump it to a more powerful one.
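To illustrate the multi-value form, each statement in the generated .sql file carries many rows, for example (the table and column names here are made up):

INSERT INTO mytable (id, body) VALUES
  (1, 'first row'),
  (2, 'second row'),
  (3, 'third row');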