gcloud beta run deploy fails after successfully uploading image, fails to enable API - google-cloud-sql

gcloud beta run deploy used to work but now I'm getting an error:
$ gcloud beta run deploy $PROJECT --image $IMAGE_NAME --platform=managed --region us-central1 --project $PROJECT --add-cloudsql-instances $PROJECT-db
...
DONE
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
ID CREATE_TIME DURATION SOURCE IMAGES STATUS
abcdefj-higj-lmnopquer-uvw-xyz 2019-06-29T13:59:07+00:00 1M4S gs://$PROJECT_cloudbuild/source/XYZ123.96-aae829d50a2e43a29dce44d1f93bafbc.tgz gcr.io/$PROJECT/$PROJECT (+1 more) SUCCESS
API [sql-component.googleapis.com] not enabled on project
[$PROJECT]. Would you like to enable and retry (this will take a
few minutes)? (y/N)? y
Enabling service [sql-component.googleapis.com] on project [$PROJECT]...
ERROR: (gcloud.beta.run.deploy) INVALID_ARGUMENT: Invalid operation name operations/noop.DONE_OPERATION, refers to an already DONE operation
I've checked the APIs from the console; both the Cloud SQL Admin and Cloud SQL APIs are enabled. I've also tried disabling them and running the deploy command again, but to no avail.
More info:
The SQL server instance is part of the same project. Changing the --add-cloudsql-instances parameter to the connection name ($PROJECT:$REGION:$SQLNAME) has no effect.
Manually enabling the service has no effect: gcloud services enable sql-component.googleapis.com --project XXX
If I remove the --add-cloudsql-instances parameter, the service deploys successfully.
This works: gcloud sql connect $PROJECTDB --user=root --quiet
# NOTE: ($PROJECTDB) is the same parameter as --add-cloudsql-instances above

There seems to be a bug in gcloud v253.0.0 when deploying Cloud Run services with Cloud SQL instances (the bug report requires a Gmail log-in to view).
Once I downgraded to gcloud v251.0.0, I got rid of the "API [sql-component.googleapis.com] not enabled" error message and was able to deploy Cloud Run services with Cloud SQL instances again.
$ gcloud components update --version 251.0.0
UPDATE, July 17, 2019: The issue is fixed in Cloud SDK 254.0.0. If you upgrade to the latest version now, deploying Cloud Run services with Cloud SQL instances should work:
$ gcloud components update

For this problem there were two issues:
Enabling API services. I recommend enabling the required services before running the Cloud Run deploy, as enabling an API can take longer than the deploy command will wait. Run this command first: gcloud services enable sql-component.googleapis.com
The Cloud SQL connection name was incorrect. Specifying the correct name helps.
The format of the Cloud SQL connection name is: $PROJECT:$REGION:$GCP_SQL_NAME.
Example: development-123456:us-central1:mysqldb
This command will return information about the Cloud SQL instance including the connection name:
gcloud sql instances describe <instance_name>
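If you only need the connection name itself, the --format flag can filter the describe output to just that field:
gcloud sql instances describe <instance_name> --format="value(connectionName)"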
Note: Cloud Run has several options for specifying the Cloud SQL instances to attach.
--add-cloudsql-instances - This option appends the specified connection name.
--set-cloudsql-instances - This option replaces the current Cloud SQL connection name.
Once set, the value persists across new revisions, so you do not need to repeat --add-cloudsql-instances on every deploy. I prefer to use the --set-cloudsql-instances option to clearly specify the Cloud SQL instances.
Cloud Run supports multiple Cloud SQL instances; you can add more than one connection name.
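For illustration, here is a hedged sketch of a deploy that sets two connections at once (the service, image, and instance names are placeholders):
# Replaces whatever connections the service currently has with exactly these two:
gcloud beta run deploy my-service \
  --image gcr.io/$PROJECT/my-service \
  --platform managed --region us-central1 \
  --set-cloudsql-instances $PROJECT:us-central1:mysqldb,$PROJECT:us-central1:reportsdb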

Related

GCloud authentication race conditions

I'm trying to avoid race conditions with gcloud / gsutil authentication between different CI/CD jobs running on the same system (my GitLab Runner on a Mac Mini).
I have tried setting the auth manually with
RUN gcloud auth activate-service-account --key-file="gitlab-runner.json"
RUN gcloud config set project $GCP_PROJECT_ID
in the Dockerfile (in which I'm performing a download operation from a Google Cloud Storage bucket).
I'm using a named configuration in the bash script that runs the docker command, and in the same script I authenticate with
gcloud config configurations activate $TARGET
where I've previously run the two commands above so that they are saved to that configuration.
The configurations work fine if I start the CI/CD jobs one after another, each after the previous one has finished. But I want to trigger them for all clients at the same time, which causes race conditions with gcloud authentication and leads to one of the jobs trying to download from the wrong project's bucket.
How can I avoid the race condition? I'm already authenticating before each gsutil command, but it still causes the race condition. Do I need something like Cloud Build to separate the runtime environments?
You can use Cloud Build to get separate execution environments, but this might be overkill for your use case: a Cloud Build worker uses an entire VM, which might be too heavy, and Linux containers / Docker can provide the necessary isolation as well.
You should make sure that each container you run has a unique config file placed in the path expected by gcloud. The issue may come from improper volume mounting (all the containers sharing the same location from the host OS). You could mount a directory containing the right configuration file (unique for each bucket) when running an image, or run gcloud config configurations activate in a Dockerfile step (thus creating image variants for different buckets, if that's feasible).
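One way to implement the unique-config idea (a rough sketch; the image name, paths, and the $TARGET variable are assumptions based on the question) is to give each job its own gcloud configuration directory via the CLOUDSDK_CONFIG environment variable, so parallel containers never share credentials or project settings:
# gcloud (and the bundled gsutil) read all config and credentials from CLOUDSDK_CONFIG,
# so each job gets its own pre-authenticated directory and cannot race with the others.
docker run --rm \
  -e CLOUDSDK_CONFIG=/gcloud \
  -v "$PWD/gcloud-configs/$TARGET:/gcloud" \
  my-build-image \
  gsutil cp "gs://$TARGET-bucket/artifact.tar.gz" /tmp/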
Alternatively, and I think this solution might be easier, you can switch from the Cloud SDK distribution to the standalone gsutil distribution. That way you can provide the path to a boto configuration file through an environment variable.
Such variables can be specified when running a Docker image.
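For that standalone gsutil route, a similar sketch (the variable standalone gsutil reads is BOTO_CONFIG; the image name and paths are placeholders):
# Each job points gsutil at its own boto file, which holds its credentials and project.
docker run --rm \
  -e BOTO_CONFIG=/boto/$TARGET.boto \
  -v "$PWD/boto-configs:/boto:ro" \
  my-gsutil-image \
  gsutil cp "gs://$TARGET-bucket/artifact.tar.gz" /tmp/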

Trying to create a gcloud Cloud Run service, it says "Cloud Run is not available in the regions allowed by your organization."

I am trying to create a simple test Cloud Run service. When creating the service via the UI, once I click 'Create service' it says:
Cloud Run (fully managed): "Cloud Run is not available in the regions allowed by your organization."
Cloud Run for Anthos
Why am I not able to create a cloud run fully managed service?
Probably, you have to enable Cloud Run.
On the Marketplace page, search for "Cloud Run".
This is due to an issue with Cloud Run.
Until it is resolved, you need to manually enable the Cloud Run API at https://console.cloud.google.com/marketplace/details/google-cloud-platform/cloud-run
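The API can also be enabled from the CLI (the project ID below is a placeholder):
gcloud services enable run.googleapis.com --project my-project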
Here is a list of all the regions and resources available for Cloud Run for Anthos.
And for Cloud Run (Fully Managed) the available regions are (as of 2020-01-08):
asia-northeast1 (Tokyo)
europe-west1 (Belgium)
us-central1 (Iowa)
us-east1 (South Carolina)
You will have to contact the responsible person in the organization under which your project exists to remove the restriction on at least one of the supported regions before you can use Cloud Run.
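If you have the necessary permissions, one way to inspect the restriction is to look at the resource-locations organization policy, which is typically what produces this message (the organization ID below is a placeholder):
gcloud resource-manager org-policies describe gcp.resourceLocations --organization=123456789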
The error message has nothing to do with the underlying problem. I discovered that in newly created projects you need to activate Compute Engine by accessing it in the console and then waiting a few minutes.

GCloud Function not firing after Firestore Create

I've deployed a function to gcloud using the following command line script:
gcloud functions deploy my_new_function --runtime python37 \
--trigger-event providers/cloud.firestore/eventTypes/document.create \
--trigger-resource projects/my_project_name/databases/default/documents/experiences/{eventId}
This worked successfully, and my function was deployed. Here is what I expected to happen as a result:
Any time a new document was created within the experiences firestore collection, the function my_new_function would be invoked.
What is actually happening:
my_new_function is never being invoked as a result of a new document being created within experiences
The --source parameter is for deploying from source control, which you are not trying to do. You will want to deploy from your local machine instead: run gcloud from the directory you want to deploy.
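Putting that together, a sketch of deploying from the local source directory (the directory name is an assumption; also note that the default Firestore database is written as "(default)", with parentheses, in trigger resources):
# Run the deploy from the directory containing main.py and requirements.txt
cd my_new_function/
gcloud functions deploy my_new_function --runtime python37 \
  --trigger-event providers/cloud.firestore/eventTypes/document.create \
  --trigger-resource "projects/my_project_name/databases/(default)/documents/experiences/{eventId}"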

MariaDB Backup from the command line

The Backup feature in the developer console is great. However, I would like the possibility to automate this. Is there a way to do so from the cf command-line app?
Thanks
It's not possible from the cf cli, but there's an API endpoint for triggering backups.
API Docs | Custom Extensions | Swisscom Application Cloud: Filter for Cloud Foundry (CF) Cloud Controller (CC) API. Implements Swisscom proprietary extensions.
POST /custom/service_instances/{service-instance-id}/backups
Creates a backup for a given service instance
For more info, see Service Backup and Restore in docs.developer.swisscom.com:
Create Backup: To create a backup, navigate to the service instance in the web console and then to the "Backups" tab. There you can click the "Create" button to trigger a manual backup.
Note: Backups have to be triggered manually from the web console.
Be aware that you can only keep a set number of backups per service instance. The actual number depends on the service type and service plan. In case you already have the maximum number, you cannot create any new backups before deleting one of the existing ones.
It may take several minutes to back up your service (depending on the size of your service instance).
Restore Backup: You can restore any backup at any time. The current state of your service will be overwritten and replaced with the state saved to the backup. You are advised to create a backup of the current state before restoring an old state.
Limitations: You can only perform one backup or restore action per service instance at a time. If an action is still ongoing, you cannot trigger another one. You cannot exceed the maximum number of backups per service instance.
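Putting the API endpoint above together with the cf CLI, a rough sketch of triggering a backup from a script (the API host and the service instance name are assumptions; substitute your own):
# Look up the service instance GUID and an OAuth token via the cf CLI,
# then call the custom backup endpoint.
GUID=$(cf service my-mariadb --guid)
TOKEN=$(cf oauth-token)
curl -X POST "https://api.lyra-836.appcloud.swisscom.com/custom/service_instances/${GUID}/backups" \
  -H "Authorization: ${TOKEN}"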
We did this by developing a small Node.js application which runs on the cloud in the same space and which backs up our MariaDB and MongoDB every night automatically.
EDIT:
You can download the code from here:
https://github.com/theonlyandone/cf-backup-app
Fresh off the press: the Swisscom Application Cloud cf CLI plugin can also automate backup and restore.
The official cf CLI plugin for the Swisscom Application Cloud gives you access to all the additional features of the App Cloud.
cf install-plugin -r CF-Community "Swisscom Application Cloud"
From the 0.1.0 release notes:
Service Instance Backups
Add cf backups command (list all backups of a service instance)
Add cf create-backup command (create a new backup of a service instance)
Add cf restore-backup command (restore an existing backup of a service instance)
Add cf delete-backup command (delete an existing backup of a service instance)
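Once the plugin is installed, usage looks roughly like this (the service instance name is a placeholder):
cf create-backup my-mariadb   # trigger a new backup
cf backups my-mariadb         # list existing backups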
Despite the answer from Matthias Winzeler saying it's not possible, it is in fact entirely possible to automate MariaDB backups through the command line.
I developed a plugin for the CF CLI:
https://github.com/gsmachado/cf-mariadb-backup-plugin
In the future, such a plugin could be extended to back up any kind of service that is supported by the Cloud Foundry provider's API (in this case, the Swisscom AppCloud API).

Using Presto on Cloud Dataproc with Google Cloud SQL?

I use both Hive and MySQL (via Google Cloud SQL) and I want to use Presto to connect to both easily. I have seen there is a Presto initialization action for Cloud Dataproc but it does not work with Cloud SQL out of the box. How can I get that initialization action to work with Cloud SQL so I can use both Hive/Spark and Cloud SQL with Presto?
The easiest way to do this is to edit the initialization action installing Presto on the Cloud Dataproc cluster.
Cloud SQL setup
Before you do this, however, make sure to configure Cloud SQL so it will work with Presto. You will need to:
Create a user for Presto (or have a user ready)
Adjust any necessary firewall rules so your Cloud Dataproc cluster can connect to the Cloud SQL instance
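For example, a rough sketch of that setup with gcloud (the user name, instance name, password, and network range are placeholders):
# Create a user for Presto to connect as
gcloud sql users create presto --instance=<instance_name> --password=<password>
# Allow connections from the Dataproc cluster's network range
gcloud sql instances patch <instance_name> --authorized-networks=<cluster_cidr>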
Changing the initialization action
In the Presto initialization action there is a section which sets up the Hive configuration and looks like this:
cat > presto-server-${PRESTO_VERSION}/etc/catalog/hive.properties <<EOF
connector.name=hive-hadoop2
hive.metastore.uri=thrift://localhost:9083
EOF
You can add a new section below it which sets up the MySQL properties, something like this:
cat > presto-server-${PRESTO_VERSION}/etc/catalog/mysql.properties <<EOF
connector.name=mysql
connection-url=jdbc:mysql://<ip_address>:3306
connection-user=<username>
connection-password=<password>
EOF
You will obviously want to replace <ip_address>, <username>, and <password> with your correct values. Moreover, if you have multiple Cloud SQL instances to connect to, you can add multiple sections and give them different names, so long as the filename ends in .properties.
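Once the cluster is up, the new catalog should be queryable from the Presto CLI on the master node; the catalog name matches the .properties filename. A quick sanity check (you may need to pass --server if your Presto coordinator listens on a non-default port):
presto --catalog mysql --execute "SHOW SCHEMAS;"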