Having a problem adding local Hasura to a Google Cloud Run deployment

Do you have any information or a tutorial on adding local Hasura to Google Cloud Run?
I have already set up Hasura on Google Cloud Run successfully, but it seems I have a problem connecting it to our local database in Hasura.
I got this error:
ERROR: (gcloud.builds.submit) Unable to read file [cloudbuild.yaml]: [Errno 2] No such file or directory: 'cloudbuild.yaml'
Is there something that is not configured yet?
Best
Zaid

Your question is vague.
The error you reference is from Google Cloud Build and suggests that you're running gcloud builds submit ..., which is failing because the command is unable to find a cloudbuild.yaml file. It's entirely probable that you want to do the deployment using Cloud Build, but you'll need to create the cloudbuild.yaml file for this to work.
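As a rough sketch only (the image name, service name, and region below are placeholders, not values from your setup), a cloudbuild.yaml that builds an image and deploys it to Cloud Run might look like this:

steps:
  # Build the container image (assumes a Dockerfile in the repository root)
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/hasura', '.']
  # Push the image to Container Registry
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/hasura']
  # Deploy the pushed image to Cloud Run
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: gcloud
    args: ['run', 'deploy', 'hasura', '--image', 'gcr.io/$PROJECT_ID/hasura',
           '--region', 'us-central1', '--platform', 'managed']
images:
  - 'gcr.io/$PROJECT_ID/hasura'

You'd then run gcloud builds submit --config cloudbuild.yaml . from the directory that contains the file, and the Cloud Build service account needs permission to deploy to Cloud Run.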
For those of us unfamiliar with "hasura": do you mean hasura.io?
This appears to require running a container image that defaults to port :8080 (which is good, as that's the default assumed by Cloud Run) and a connection to a PostgreSQL database.
If you're using Cloud SQL to run PostgreSQL, you can follow the instructions here
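For example (a sketch only; the image, instance connection name, and credentials below are placeholders), Hasura reads its database connection from the HASURA_GRAPHQL_DATABASE_URL environment variable, and Cloud Run exposes the Cloud SQL instance as a unix socket under /cloudsql/ when you deploy with --add-cloudsql-instances, so the deployment ends up looking something like:

gcloud run deploy hasura \
  --image gcr.io/PROJECT_ID/hasura \
  --add-cloudsql-instances PROJECT_ID:REGION:INSTANCE \
  --set-env-vars HASURA_GRAPHQL_DATABASE_URL='postgres://DB_USER:DB_PASSWORD@/DB_NAME?host=/cloudsql/PROJECT_ID:REGION:INSTANCE' \
  --region us-central1 \
  --platform managed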

Related

Wait time for Google Cloud service account role change to propagate

I am using a downloaded JSON file containing service account keys, instead of ADC, with code running on my local developer machine and communicating with live GCP Firestore.
After adding a service account to a role, in my case roles/datastore.user, do I have to do anything before it takes effect?
E.g. wait 15 minutes, redownload the JSON, restart some services, something else?
The question relates to this error in automated tests running on my machine:
Test method MyProject.Data.Repositories.FirestoreRepositoryTests.FirestoreAccountDocRepository_UpdateAsync__updates threw exception:
Grpc.Core.RpcException: Status(StatusCode="PermissionDenied", Detail="Permission denied on resource project my-project-prodlike.", DebugException="Grpc.Core.Internal.CoreErrorDetailException: {"created":"#1642697226.430711000","description":"Error received from peer ipv4:172.217.169.74:443","file":"/Users/einari/Projects/grpc/grpc/src/core/lib/surface/call.cc","file_line":1074,"grpc_message":"Permission denied on resource project my-project-prodlike.","grpc_status":7}")
Note - I'm using Contrib.Grpc.Core.M1 since I'm on a new MacBook.
Note - I'm no longer using the above; I'm now using Google's workaround gRPC lib adapter, just in case. See https://github.com/googleapis/google-cloud-dotnet/issues/7560#issuecomment-975414370.
The permission denied problem was being caused by an incorrect project name, not by permission actually being denied.
At the top of the Google Cloud Console is the name of the current project. However, that's actually just the display name; the real project ID is not shown by default, though it does appear in the URL in the browser.
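If it helps anyone else, gcloud shows both values side by side; it's the PROJECT_ID column, not NAME, that client libraries expect (output below is illustrative, not from a real project):

$ gcloud projects list
PROJECT_ID           NAME         PROJECT_NUMBER
my-project-prodlike  My Prodlike  123456789012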
Of course, the error message implies that it found its target resource and denied access.
I'm so tired.

Cloud Run suddenly got `Improper path /cloudsql/{SQL_CONNECTION_NAME} to connect to Postgres Cloud SQL instance "{SQL_CONNECTION_NAME}"`

We have been running a service using NestJS and TypeORM on fully managed Cloud Run without issues for several months. Yesterday afternoon we started getting Improper path /cloudsql/{SQL_CONNECTION_NAME} to connect to Postgres Cloud SQL instance "{SQL_CONNECTION_NAME}" errors in our logs.
We didn't make any server/SQL changes around this timestamp. Currently there is no impact to the service, so we are not sure whether this is a serious issue.
This error is not from our code, and our third-party modules shouldn't know whether we use Cloud SQL, so I have no idea where these errors come from.
My assumption is that the Cloud SQL proxy or some SQL client used in Cloud Run is producing this error. We use the --add-cloudsql-instances flag when deploying with the "gcloud run deploy" CLI command.
Link to the issue here
This log was recently added in the Cloud Run data path to provide more context for debugging CloudSQL connectivity issues. However, the original logic was overly aggressive, emitting this message even for properly working CloudSQL connections. Your application is working correctly and should not receive this warning.
Thank you for reporting this issue. The fix is ready and should roll out soon. You should not see this message anymore after the fix is out.

MLflow storing artifacts (Google Cloud Storage) but not displaying them in the MLflow UI

I am working in a Docker environment (docker-compose) with a Jupyter notebook Docker image and a Postgres Docker image for running ML models, using Google Cloud Storage to store the model artifacts. Storing the models in cloud storage works fine, but I can't get them to show up in the MLflow UI. I have seen similar problems, but none of the solutions used Google Cloud Storage as the storage location for artifacts. The error message says the following: Unable to list artifacts stored under <gs-location> for the current run. Please contact your tracking server administrator to notify them of this error, which can happen when the tracking server lacks permission to list artifacts under the current run's root artifact directory. What could possibly be causing this problem?
I had exactly the same issue. Keywords: docker-compose, Google Cloud Storage, success in storing to GCS, but failure in listing artifacts in the UI.
In my case, it turned out that in the docker-compose file, if you assign the env vars by reading from a .env file (e.g. GOOGLE_APPLICATION_CREDENTIALS), the server might start before the assignment takes effect. The quick fix is to assign the env var directly under the environment: key instead of using the env_file: key.
For sensitive data that you still need to keep in a .env file, you can add a wait time for the server, and add depends_on: in the docker-compose file to make sure the database container starts before the MLflow server if you are using a database-backed store.
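As an illustration only (the service names, image names, and paths are placeholders for whatever your compose file actually uses), the relevant part of a docker-compose.yml would look something like:

services:
  mlflow:
    image: your-mlflow-image
    environment:
      # assigned directly, so it is set before the server process starts
      - GOOGLE_APPLICATION_CREDENTIALS=/secrets/gcs-key.json
    volumes:
      - ./gcs-key.json:/secrets/gcs-key.json:ro
    depends_on:
      - postgres   # database-backed store comes up first
  postgres:
    image: postgres:13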
I faced the same issue when running MLflow locally. The issue was resolved after adding GOOGLE_APPLICATION_CREDENTIALS to the environment variables.
https://googleapis.dev/python/google-api-core/latest/auth.html
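In case it saves someone a search, that amounts to something like the following before starting the tracking server (the key path, database URI, and bucket name are placeholders):

export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-key.json
mlflow server \
  --backend-store-uri postgresql://mlflow:mlflow@localhost:5432/mlflow \
  --default-artifact-root gs://your-bucket/mlflow-artifacts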

Is there any way other than MLab/ObjectRocket to migrate the app from Parse?

I have created an AWS EC2 instance with the RocksDB engine, and now I am trying to migrate the Parse application to this instance as instructed here:
https://gyminutesapp.wordpress.com/2016/04/28/migrating-parse-mongodb-to-mongorocks-on-aws
Is it compulsory to do it via MLab/ObjectRocket, or is there any other way?
Can anyone help me out with the further steps: how to connect to Parse Server and migrate the data?
You can move to any MongoDB database. You can set up any server, install MongoDB, allow remote access, and push your data from parse.com to your own MongoDB database. This is the first step in the Parse migration process.
Below are the other steps to take care of:
1. Host the open source Parse Server and configure it to connect to your database (see the sketch after this list).
2. Fix your cloud code; minor changes might be required.
3. Replace the cloud modules that you are using with their npm module counterparts.
4. Deploy!
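For step 1, a minimal sketch of hosting the open source server, following the parse-server README pattern (the database URI, keys, and paths are placeholders you'd replace with your own):

// index.js - minimal parse-server mounted in Express
var express = require('express');
var ParseServer = require('parse-server').ParseServer;

var api = new ParseServer({
  databaseURI: 'mongodb://parseuser:secret@your-mongo-host:27017/parse', // your own MongoDB
  cloud: __dirname + '/cloud/main.js',   // your migrated cloud code
  appId: 'YOUR_APP_ID',
  masterKey: 'YOUR_MASTER_KEY',
  serverURL: 'http://localhost:1337/parse'
});

var app = express();
app.use('/parse', api);   // mount the Parse REST API
app.listen(1337, function () {
  console.log('parse-server running on port 1337');
});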

Add endpoint in Cloud Integration service

I failed to add an endpoint in the Cloud Integration service, following the steps below:
Log in to Bluemix.
Create a Cloud Integration service.
Click on the Secure Connections tab.
Download the connector and install it on the controller node.
Provide the public key.
Refresh the connection. It should show as connected.
Try to add the endpoint. It gives the error "fail to connect endpoint".
Each time that you create a new basic connection, a new installation .tar file is created specific to that installation. Each one has a unique /home/nativeapiadmin/mgmt.tunnel file that is configured specifically for that connector. If you want to reuse an existing copy of the installation, you must edit the mgmt.tunnel file with the correct host name or IP address and port numbers, then restart the connector.
If the above does not resolve your problem, use the following procedure to clean up and recreate the endpoint:
Delete your basic connector from Cloud Integration.
Create a new basic connector with a new name.
Download the Linux 64-bit installer and make sure the file size is around 844,128 bytes.
Remove the older connector from the system.
Delete the "nativeapiadmin" user and the user's directory.
Delete the known_hosts file in the /root/.ssh directory.
Reinstall the connector; please read the INSTALL_README file that is included in the zip.
Upload the id_rsa.pub key from the same machine where the connector was installed.
Create the endpoint.