"Resource does not exist" error on Hasura during GitHub CI/CD

I am trying to set up GitHub CI/CD with Hasura.
I did everything as the documentation said, but since I am applying changes locally on the database first, the cloud deployment says the table already exists while applying the migration (which is logically correct).
Now I want to avoid, skip or sync migrations between cloud and local; for that, Hasura mentions a command in the same doc.
While executing this command I am getting a "resource does not exist" error:
command: hasura migrate apply --skip-execution --version 1631602988318 --endpoint "https://customer-support-dev.hasura.app/v1/graphql" --admin-secret 'mySecretKey'
error: time="2021-09-14T20:44:19+05:30" level=fatal msg="{\n \"path\": \"$\",\n \"error\": \"resource does not exist\",\n \"code\": \"not-found\"\n}"

This was a silly mistake: --endpoint must not contain a URL path. So its value will be https://customer-support-dev.hasura.app.
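For reference, the corrected command from the question (same version and admin secret, only the /v1/graphql path dropped from the endpoint) would be:
$ hasura migrate apply --skip-execution --version 1631602988318 --endpoint "https://customer-support-dev.hasura.app" --admin-secret 'mySecretKey'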

Related

sam build now fails when deployed via GitHub

Last week I managed to successfully deploy an AWS Lambda function (verified in the AWS console). This morning, I can no longer update the Lambda function. After deleting the Lambda function and pushing the changes again, the Lambda still could not be created. Instead I get the following traceback:
Build Failed
Error: PythonPipBuilder:ResolveDependencies - The directory '/github/home/.cache/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
The directory '/github/home/.cache/pip' or its parent directory is not owned by the current user and caching wheels has been disabled. check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
Invalid requirement: 'Warning: The lock flag'
In the workflow deploy file:
- name: Build Lambda image
  run: sam build
I don't know exactly what has changed to cause this error now. I tried the flag --use-container, which successfully moves on to the next step of deploying the Lambda image; however, there I now encounter further error messages. I'd like to understand why I didn't encounter this error before adding the --use-container flag. Is the --use-container flag necessary when not using the sam cli?
Further info
Building locally via the sam cli tool works, but not when pushed via the GitHub Actions workflow.
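For context, a workflow step using that flag would simply mirror the snippet above (a minimal sketch; the step name is taken from the question, everything else about the workflow is assumed):
- name: Build Lambda image
  run: sam build --use-container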

'Yarn Prisma Migrate Dev' fails with Error: P1000 stating the credentials provided are not valid. These credentials work on all other machines

Whenever I run the command yarn prisma migrate dev --preview-feature or just yarn prisma migrate dev I receive:
Error: P1000: Authentication failed against database server at `localhost`, the provided database credentials for `dbusername` are not valid.
The exact same setup and credentials on all other machines work so I know it doesn't have to do with the credentials. This is using a PostgreSQL database on Docker, with the prisma seed and schema in VS Code.
It's possible that the credentials are not actually being checked, since I don't see any failed attempts show up in the logs on the Docker side of things. Any ideas on how to resolve this, or do I need to provide more info?
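One sanity check (assuming a typical docker-compose Postgres setup and a DATABASE_URL in .env; the container, user, and database names below are hypothetical) is to connect to the container directly with the same credentials Prisma is using:
$ docker exec -it my_postgres psql -U dbusername -d mydb
$ grep DATABASE_URL .env   # e.g. postgresql://dbusername:password@localhost:5432/mydb
If psql connects but Prisma does not, the mismatch is usually in the connection string rather than in the database itself.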

Move my Hasura Cloud schema, relations, tables etc. into my offline Docker setup using docker-compose

So basically I have my cloud Hasura with an existing schema, relations, tables etc., and I want to run it offline using Docker. I tried using metadata export and import, but that does not seem to work. How can I do it, or are there other ways to do it?
(Screenshots of the local Docker setup I want to run offline and of the cloud instance I want to get the schemas/metadata from.)
Or maybe I should just manually recreate the tables and relations?
When using the steps outlined in the Hasura Quickstart with Docker page, the following steps will get all the table definitions, relationships etc. set up on the local instance just like they are on the Hasura Cloud instance.
Migrate all the database schema and metadata using the steps mentioned in Setting up migrations.
Since you want to migrate from Hasura Cloud, use the URL of the cloud instance in step 2, and perform steps 3-6 as described in the above link.
Bring up the local Docker environment. Ideally, edit the docker-compose.yaml file to set HASURA_GRAPHQL_ENABLE_CONSOLE: "false" before running docker-compose up -d.
Resume the process of applying migrations from step 7, using the endpoint of the local instance. For example:
$ hasura metadata apply --endpoint http://localhost:8080
$ hasura migrate apply --endpoint http://localhost:8080
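For completeness, the cloud-side export described in the steps above typically looks like the following (a rough sketch; the project directory name, admin secret placeholder, and migration name are assumptions, and on Hasura CLI v2 you may also need --database-name):
$ hasura init my-project && cd my-project
$ hasura migrate create init --from-server --endpoint https://<cloud-instance>.hasura.app --admin-secret <admin-secret>
$ hasura metadata export --endpoint https://<cloud-instance>.hasura.app --admin-secret <admin-secret>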

Gcloud dataflow job failed to write to temp location

I am invoking a Dataflow job using the gcloud CLI. My command looks like this:
gcloud dataflow jobs run avrojob4 \
--gcs-location=gs://dataflow-templates/latest/Cloud_Bigtable_to_GCS_Avro \
--region=europe-west1 \
--parameters bigtableProjectId="project-id",bigtableInstanceId="instance-id",bigtableTableId="table-id",outputDirectory="gs://avro-data/avrojob4/",filenamePrefix="avrojob4-"
and the error is:
ERROR: Failed to write a file to temp location 'gs://dataflow-staging-us-central1-473832897378/temp/'. Please make sure that the bucket for this directory exists, and that the project under which the workflow is running has the necessary permissions to write to it.
Can someone help me pass the temp location as a specific value through the above command?
There is no --temp-location flag for this command:
https://cloud.google.com/sdk/gcloud/reference/dataflow/jobs/run
I suspect you're attempting to solve the issue by inventing that flag but, as you've seen, this does not work.
Does the bucket exist?
Does the Dataflow service account have suitable permissions to write to it?
Can you gsutil ls gs://dataflow-staging-us-central1-473832897378?
If yes, then it's likely that the Dataflow service account does not have permission to write to the bucket. Please review the instructions in the following link for adding the correct permissions for the Dataflow service account specifically:
https://cloud.google.com/dataflow/docs/concepts/security-and-permissions#accessing_cloud_storage_buckets_across_google_cloud_platform_projects
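As a rough sketch (the bucket name is taken from the error message; the project number placeholder and the choice of role are assumptions), granting the Dataflow service account write access to the bucket would look something like:
$ gsutil iam ch serviceAccount:service-<PROJECT_NUMBER>@dataflow-service-producer-prod.iam.gserviceaccount.com:roles/storage.objectAdmin gs://dataflow-staging-us-central1-473832897378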

AWS DMS Streaming replication : Logical Decoding Output Plugins(test_decoding) not accessible

I'm trying to migrate a PostgreSQL DB persisted in the cloud (on a DO droplet) to RDS using AWS Database Migration Service (DMS).
I've successfully configured the replication instance and endpoints.
I've created a task with Migrate existing data and replicate ongoing changes. When I start the task it shows the error: ERROR: could not access file "test_decoding": No such file or directory.
I've tried to create a replication slot manually from my DB console; it throws the same error.
I've followed the procedures suggested in the DMS documentation for Postgres.
I'm using PostgreSQL 9.4.6 on my source endpoint.
I presume the problem is that the output plugin test_decoding is not accessible for the replication.
Please assist me to resolve this. Thanks in advance!
You must install the postgresql-contrib additional supplied modules on your source endpoint.
If it is installed, make sure the directory where the test_decoding module is located is the same as the directory where PostgreSQL expects it.
On *nix, you can check the module directory with:
pg_config --pkglibdir
If it is not the same, copy the module, make a symlink, or use whatever other solution you prefer.
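As a rough sketch for a Debian/Ubuntu droplet running PostgreSQL 9.4 (the package name matches that version; the slot name is hypothetical), the check-and-install flow might look like:
$ sudo apt-get install postgresql-contrib-9.4
$ ls "$(pg_config --pkglibdir)" | grep test_decoding
$ psql -U postgres -c "SELECT * FROM pg_create_logical_replication_slot('dms_test_slot', 'test_decoding');"
If the slot is created without the "could not access file" error, DMS should be able to start the ongoing replication task.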