Unexpected value for default_scope - gcloud

I have set up an entire environment using gcloud compute (with great success) and am now trying to script the updating of an image in an instance template. The first step is to switch off auto-delete on the instance I wish to use as the base. I cannot get it to work without hitting the following error:
$ gcloud compute --project testing-141313 instances set-disk-auto-delete mantle-test-robot-dtrc --zone europe-west1-b --device-name /dev/sda --no-auto-delete
ERROR: (gcloud.compute.instances.set-disk-auto-delete) Unexpected value for default_scope ScopeEnum.GLOBAL, expected None or ZONE
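For what it's worth, the same operation can also be written with the --disk flag, which references the disk by its resource name rather than a device path. A minimal sketch, assuming the boot disk shares the instance's name (gcloud's default for boot disks):
gcloud compute --project testing-141313 instances set-disk-auto-delete mantle-test-robot-dtrc --zone europe-west1-b --disk mantle-test-robot-dtrc --no-auto-delete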

Related

sam build now fails when deployed via GitHub

Last week I managed to successfully deploy an AWS Lambda function (verified in the AWS console). This morning, I can no longer update the Lambda function. After deleting the Lambda function and pushing the changes again, the Lambda still could not be created. Instead I get the following traceback:
Build Failed
Error: PythonPipBuilder:ResolveDependencies - The directory '/github/home/.cache/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
The directory '/github/home/.cache/pip' or its parent directory is not owned by the current user and caching wheels has been disabled. check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
Invalid requirement: 'Warning: The lock flag'
In the workflow deploy file:
- name: Build Lambda image
  run: sam build
I don't know exactly what has changed to cause this error now. I tried the --use-container flag, which successfully moves on to the next step of deploying the Lambda image, but there I encounter further error messages. I'd like to understand why I didn't hit this error before adding the --use-container flag. Is the --use-container flag necessary when not building through the sam CLI locally?
Further info
Building via the sam CLI tool locally works; it only fails when pushed through the GitHub Actions workflow.
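For reference, enabling the container build in the workflow would only change the run line of the step shown above (a minimal sketch; nothing else in the step changes):
- name: Build Lambda image
  run: sam build --use-container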

Resource does not exist error on Hasura during GitHub CI/CD

I am trying to set up GitHub CI/CD with Hasura...
I did everything as the documentation said, but since I am applying changes locally on the database, the cloud deployment says the table already exists while applying the migration (which is logically correct).
Now I want to avoid, skip, or sync migrations between cloud and local; for that, Hasura mentions a command in the same doc.
While executing this command I am getting a resource not found error.
command: hasura migrate apply --skip-execution --version 1631602988318 --endpoint "https://customer-support-dev.hasura.app/v1/graphql" --admin-secret 'mySecretKey'
error: time="2021-09-14T20:44:19+05:30" level=fatal msg="{\n \"path\": \"$\",\n \"error\": \"resource does not exist\",\n \"code\": \"not-found\"\n}"
This was a silly mistake: --endpoint must not contain a URL path. So its value should be https://customer-support-dev.hasura.app.
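With that change, the corrected command (same version and secret as above) becomes:
hasura migrate apply --skip-execution --version 1631602988318 --endpoint "https://customer-support-dev.hasura.app" --admin-secret 'mySecretKey'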

gcloud logging with regular expression

I'm trying to use gcloud logging with a regular expression. My query works in the console, but I can't get it going via the CLI.
gcloud logging read "resource.type=gce_instance AND protoPayload.authenticationInfo.principalEmail=~'.*#example.com.au'" --limit=10 --format=json
I get the error:
ERROR: (gcloud.logging.read) INVALID_ARGUMENT: Unparseable filter: unrecognized node at token 'MEMBER'
I've tried with and without various quoting styles: '', "", and escaped \" quotes.
I have the same trouble with timestamp dates as well:
gcloud logging read "resource.type=gce_instance AND timestamp > '2021-06-15T00:00:00.000000Z'"
I get the error:
ERROR: (gcloud.logging.read) INVALID_ARGUMENT: Unparseable filter: syntax error at line 1, column 112, token ':';
Your first gcloud expression should look like this:
gcloud logging read "resource.type=gce_instance AND protoPayload.authenticationInfo.principalEmail:'.*#example.com.au'"
I changed the =~ operator to : (the filter language's contains/substring operator).
And the second one like this:
gcloud logging read 'resource.type=gce_instance AND timestamp > "2021-08-15T00:00:00.000000Z"'
I exchanged the single and double quotes (literally): the filter is now single-quoted for the shell, and the timestamp sits in double quotes inside it.
It's best to have a quick look at the gcloud logging read command documentation (that's how I figured out the proper syntax).
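Putting both fixes together, a combined query over the same example values might look like this (outer single quotes for the shell, so the inner double quotes survive):
gcloud logging read 'resource.type=gce_instance AND protoPayload.authenticationInfo.principalEmail:".*#example.com.au" AND timestamp > "2021-08-15T00:00:00.000000Z"' --limit=10 --format=json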

`--trigger-resource` description error for Cloud Firestore trigger

I am trying an example of Cloud Functions trigger based on Cloud Firestore. While deploying the function using gcloud, I am getting this error:
gcloud functions deploy hello_firestore --runtime python32 --trigger-event providers/cloud.firestore/eventTypes/document.update --trigger-resource projects/patch-us/databases/(default)/documents/books/{booksid}
bash: syntax error near unexpected token `('
Can someone point out what's wrong with the command line?
It was a very stupid mistake.
gcloud functions deploy hello_firestore --runtime python32 --trigger-event providers/cloud.firestore/eventTypes/document.update --trigger-resource "projects/patch-us/databases/(default)/documents/books/{booksid}"
The path needs to be within inverted commas; without them, bash interprets the parentheses in (default) as shell syntax, which is what produced the error.

gcloud Export to Google Storage Bucket from Cloud SQL instance

Running this command:
gcloud sql instances export myinstance gs://my_bucket_name/filename.csv -d "mydatabase" -t "mytable"
Giving me the following error:
ERROR: (gcloud.sql.instances.import) ERROR_RDBMS
I have manually run console uploads to the bucket, which went fine, and I am able to log in to the SQL instance and run queries, which makes me think there are no permission issues. Has anybody seen this type of error and found a way around it?
Note: I have googled for possible causes, and most of them point to either SQL or bucket permission issues.
Never mind. I figured out that I need to make an OAuth connection (using the JSON token generated from the gcloud APIs & Credentials section) to the instance before interacting with it.
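A sketch of what that looks like in practice, assuming a service-account JSON key downloaded from the Credentials page (the file name service-account.json is hypothetical):
gcloud auth activate-service-account --key-file=service-account.json
gcloud sql instances export myinstance gs://my_bucket_name/filename.csv -d "mydatabase" -t "mytable"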