GCloud Function not firing after Firestore Create

I've deployed a function to gcloud using the following command line script:
gcloud functions deploy my_new_function --runtime python37 \
--trigger-event providers/cloud.firestore/eventTypes/document.create \
--trigger-resource projects/my_project_name/databases/default/documents/experiences/{eventId}
This worked successfully, and my function was deployed. Here is what I expected to happen as a result:
Any time a new document was created within the experiences firestore collection, the function my_new_function would be invoked.
What is actually happening:
my_new_function is never being invoked as a result of a new document being created within experiences

The --source parameter is for deploying from source control, which you are not trying to do. You will want to deploy from your local machine instead: run gcloud from the directory that contains the function code you want to deploy.
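As a minimal sketch of that workflow, assuming the function source (main.py and requirements.txt) lives in a directory such as ~/my_function_dir (a hypothetical path) and reusing the trigger flags from the question:
cd ~/my_function_dir   # hypothetical directory containing main.py and requirements.txt
gcloud functions deploy my_new_function \
--runtime python37 \
--trigger-event providers/cloud.firestore/eventTypes/document.create \
--trigger-resource projects/my_project_name/databases/default/documents/experiences/{eventId}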

Related

gcloud CLI application logs to bucket

There are Scala Spark jobs that run daily in GCP. I am trying to set up a notification to be sent when a run is completed. One way I thought of doing that was to get the logs and grep them for a specific completion message (not sure if there's a better way). But I found that the logs are only shown in the console, on the job details page, and are not saved to a file.
Is there a way to route these logs to a file in a bucket so that I can search it? Do I have to specify where to write these logs in the log4j properties file, for example give a bucket location to
log4j.appender.stdout = org.apache.log4j.ConsoleAppender
I tried to submit a job with this but it's giving me this error: grep:**-2022-07-08.log: No such file or directory
...
gcloud dataproc jobs submit spark \
--project $PROJECT --cluster=$CLUSTER --region=$REGION --class=***.spark.offer.Main \
--jars=gs://**.jar\
--properties=driver-memory=10G,spark.ui.filters="",spark.memory.fraction=0.6,spark.sql.files.maxPartitionBytes=5368709120,spark.memory.storageFraction=0.1,spark.driver.extraJavaOptions="-Dcq.config.name=gcp.conf",spark.executor.extraJavaOptions="-Dlog4j.configuration=log4j-executor.properties -Dcq.config.name=gcp.conf" \
--gcp.conf > gs://***-$date.log 2>&1
By default, Dataproc job driver logs are saved in GCS at the Dataproc-generated driverOutputResourceUri of the job; see the Dataproc documentation on job output for more details.
But IMHO, a better way to determine whether a job has finished is gcloud dataproc jobs describe <job-id>, or the jobs.get REST API.
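As a hedged sketch of that approach (the job ID, region, and completion message below are placeholders), you could read the job state and then grep the driver output that Dataproc already writes to GCS:
JOB_ID=my-job-id      # placeholder
REGION=us-central1    # placeholder
# Current state of the job: PENDING, RUNNING, DONE, ERROR, ...
gcloud dataproc jobs describe "$JOB_ID" --region="$REGION" --format='value(status.state)'
# GCS prefix of the driver output, then search it for the completion message
LOG_URI=$(gcloud dataproc jobs describe "$JOB_ID" --region="$REGION" --format='value(driverOutputResourceUri)')
gsutil cat "${LOG_URI}"* | grep "completion message"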

GCloud authentication race conditions

I'm trying to avoid race conditions with gcloud / gsutil authentication on the same system across different CI/CD jobs on my GitLab Runner on a Mac Mini.
I have tried setting the auth manually with
RUN gcloud auth activate-service-account --key-file="gitlab-runner.json"
RUN gcloud config set project $GCP_PROJECT_ID
in the Dockerfile (which performs a download from a Google Cloud Storage bucket).
I'm also using a named gcloud configuration in the bash script that runs the docker command, and in the same script I authenticate with
gcloud config configurations activate $TARGET
where I've previously run the two commands above to save them into that configuration.
The configurations work fine if I start the CI/CD jobs one after another, each after the previous one has finished. But I want to trigger them for all clients at the same time, which causes race conditions with gcloud authentication, and one of the jobs ends up downloading from the wrong project's bucket.
How can I avoid the race condition? I'm already authenticating before each gsutil command, but it still races. Do I need something like Cloud Build to separate the runtime environments?
You can use Cloud Build to get separate execution environments, but that might be overkill for your use case: a Cloud Build worker uses an entire VM, which may be just too heavy. Linux containers / Docker can provide the necessary isolation as well.
You should make sure that each container you run has a unique config file placed in the path expected by gcloud. The issue may come from improper volume mounting (all the containers sharing the same location from the host OS). You could mount a directory containing a configuration file unique to each bucket when running the image, or run gcloud config configurations activate in a Dockerfile step (creating image variants for the different buckets, if that's feasible).
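One hedged way to implement the "unique config per container" idea (image name, key file, and bucket below are placeholders) is to give every job its own configuration directory via CLOUDSDK_CONFIG, the environment variable the Cloud SDK reads to locate its config:
JOB_CONFIG="$(mktemp -d)"   # unique directory per CI job on the runner host
docker run --rm \
  -e CLOUDSDK_CONFIG=/gcloud-config \
  -e GCP_PROJECT_ID="$GCP_PROJECT_ID" \
  -v "$JOB_CONFIG":/gcloud-config \
  -v "$PWD/gitlab-runner.json":/gitlab-runner.json:ro \
  my-gcloud-image \
  sh -c 'gcloud auth activate-service-account --key-file=/gitlab-runner.json \
    && gcloud config set project "$GCP_PROJECT_ID" \
    && gsutil cp gs://my-bucket/my-object /tmp/'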
Alternatively, and I think this might be easier, you can switch from the Cloud SDK distribution to the standalone gsutil distribution. That way you can provide the path to a boto configuration file through an environment variable, which can be set when running a Docker image.
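A hedged sketch of that alternative, assuming an image that ships standalone gsutil and one boto file per client (names below are placeholders); standalone gsutil reads the BOTO_CONFIG environment variable:
docker run --rm \
  -e BOTO_CONFIG=/config/client-a.boto \
  -v "$PWD/boto-configs":/config:ro \
  my-gsutil-image \
  gsutil cp gs://client-a-bucket/artifact.tar.gz /tmp/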

running bash commands in CFN template

I have completed the first few steps as mentioned in this article.
https://aws.amazon.com/blogs/mt/running-bash-commands-in-aws-cloudformation-templates/
But I am getting an error at this:
aws cloudformation deploy \
--stack-name comandrunner-test-iops \
--template-file ./examples/commandrunner-example-iopscalc-template.yaml
The following resource(s) failed to create: [IopsCalculator]. Rollback
requested by user.
How do I know why the stack is not successfully created in this case?
I checked the documentation for this command:
https://docs.aws.amazon.com/cli/latest/reference/cloudformation/deploy/index.html
There is nothing really helpful there, and nothing in the parent command's documentation either.
The best option is to look at the CloudFormation console. Sometimes CloudFormation doesn't give much help with this type of error.
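If you prefer staying in the CLI, a hedged sketch that pulls the per-resource failure reasons for the stack from the question (default region and credentials assumed):
aws cloudformation describe-stack-events \
  --stack-name comandrunner-test-iops \
  --query "StackEvents[?ResourceStatus=='CREATE_FAILED'].[LogicalResourceId,ResourceStatusReason]" \
  --output table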

gcloud beta run deploy fails after successfully uploading image, fails to enable API

gcloud beta run deploy used to work but now I'm getting an error:
$ gcloud beta run deploy $PROJECT --image $IMAGE_NAME --platform=managed --region us-central1 --project $PROJECT --add-cloudsql-instances $PROJECT-db
...
DONE
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
ID CREATE_TIME DURATION SOURCE IMAGES STATUS
abcdefj-higj-lmnopquer-uvw-xyz 2019-06-29T13:59:07+00:00 1M4S gs://$PROJECT_cloudbuild/source/XYZ123.96-aae829d50a2e43a29dce44d1f93bafbc.tgz gcr.io/$PROJECT/$PROJECT (+1 more) SUCCESS
API [sql-component.googleapis.com] not enabled on project
[$PROJECT]. Would you like to enable and retry (this will take a
few minutes)? (y/N)? y
Enabling service [sql-component.googleapis.com] on project [$PROJECT]...
ERROR: (gcloud.beta.run.deploy) INVALID_ARGUMENT: Invalid operation name operations/noop.DONE_OPERATION, refers to an already DONE operation
I've checked the APIs from the console; both the Cloud SQL Admin and Cloud SQL APIs are enabled. I've also tried disabling them and running the deploy command again, but to no avail.
More info:
The SQL server instance is part of the same project. Changing the --add-cloudsql-instances parameter to the connection name ($PROJECT:$REGION:$SQLNAME) has no effect
Manually enabling the server has no effect: gcloud services enable sql-component.googleapis.com --project XXX
Removing the --add-cloudsql-instances parameter and the server deploys successfully.
This works: gcloud sql connect $PROJECTDB --user=root --quiet
# NOTE: ($PROJECTDB) is the same parameter as --add-cloudsql-instances above
There seems to be a bug in gcloud v253.0.0 when deploying Cloud Run services with Cloud SQL instances (tracked in a public issue that requires a Google log-in to view).
Once I downgraded to gcloud v251.0.0, I got rid of the "API [sql-component.googleapis.com] not enabled" error message and was able to deploy Cloud Run services with Cloud SQL instances again.
$ gcloud components update --version 251.0.0
UPDATE, July 17, 2019: The issue is fixed in Cloud SDK 254.0.0. If you upgrade to the latest version now, deploying Cloud Run services with Cloud SQL instances should work:
$ gcloud components update
For this problem there were two issues:
Enabling API services. I recommend enabling the required services before running the Cloud Run deploy, as enabling can take longer than the deploy command allows. Run this command first: gcloud services enable sql-component.googleapis.com
The Cloud SQL connection name was incorrect. Specifying the correct name helps.
The format of the Cloud SQL connection name is: $PROJECT:$REGION:$GCP_SQL_NAME.
Example: development-123456:us-central1:mysqldb
This command will return information about the Cloud SQL instance including the connection name:
gcloud sql instances describe <instance_name>
Note: Cloud Run has several flags for specifying the Cloud SQL instances to attach.
--add-cloudsql-instances - This option appends the specified connection name.
--set-cloudsql-instances - This option replaces the current Cloud SQL connection name.
When deploying a new revision of an existing Cloud Run service, it is not necessary to repeat the --add-cloudsql-instances option, as the value persists. I prefer to use the --set-cloudsql-instances option to specify the Cloud SQL instances explicitly.
Cloud Run supports multiple Cloud SQL instances; you can attach more than one connection name.
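Putting the two fixes together, a hedged sketch (the service, image, and instance names below are placeholders):
# Enable the API up front so the deploy does not have to wait for it
gcloud services enable sql-component.googleapis.com
# Look up the exact connection name ($PROJECT:$REGION:$GCP_SQL_NAME)
CONNECTION_NAME=$(gcloud sql instances describe mysqldb --format='value(connectionName)')
# Deploy with the connection explicitly set
gcloud beta run deploy my-service \
  --image gcr.io/development-123456/my-image \
  --platform managed --region us-central1 \
  --set-cloudsql-instances "$CONNECTION_NAME"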

Pass internalIpOnly to projects.regions.clusters.create through gcloud

The GceClusterConfig object has the property internalIpOnly, but there is no clear documentation on how to set that flag through the gcloud command. Is there a way to pass that property?
That feature was first released in gcloud beta dataproc clusters create, where you can use the --no-address flag to turn it on. The feature recently became Generally Available and should make it into the main gcloud dataproc clusters create command any moment now (if you run gcloud components update, you may already get the flag in the non-beta track, even though the public documentation hasn't been updated to reflect it yet).
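A hedged sketch with the beta track (the cluster, region, and subnet names are placeholders; an internal-IP-only cluster assumes the subnet has Private Google Access enabled):
gcloud beta dataproc clusters create my-cluster \
  --region us-central1 \
  --subnet my-subnet \
  --no-address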