Move my Hasura Cloud schema, relations, tables, etc. into my offline Docker setup using docker-compose - docker-compose

So basically I have my Hasura Cloud instance with its existing schema, relations, tables, etc., and I want to run it offline using Docker. I tried using metadata export and import, but that doesn't seem to work. How can I do it, or is there another way?
This is the Docker setup I want to run offline.
This is my cloud instance I want to get the schemas or metadata from.
Or maybe I should just manually recreate the tables and relations??

If you are using the steps outlined in the Hasura Quickstart with Docker page, then the following steps will get all the table definitions, relationships, etc., set up on the local instance just like they are on the Hasura Cloud instance.
Migrate all the database schema and metadata using the steps mentioned in Setting up migrations.
Since you are migrating from Hasura Cloud, use the URL of the cloud instance in step 2. Perform steps 3-6 as described in the link above.
Bring up the local Docker environment. Ideally, edit the docker-compose.yaml file to set HASURA_GRAPHQL_ENABLE_CONSOLE: "false" before running docker-compose up -d.
Resume applying migrations from step 7, using the endpoint of the local instance (a full end-to-end sketch follows the commands below). For example,
$ hasura metadata apply --endpoint http://localhost:8080
$ hasura migrate apply --endpoint http://localhost:8080
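Putting the steps together, a minimal end-to-end sketch could look like this, assuming Hasura CLI v2; the cloud endpoint, admin secret, database name, and migration version are placeholders for your own values:

hasura init my-hasura-project && cd my-hasura-project

# Pull the current schema and metadata from the cloud instance.
hasura migrate create init --from-server --database-name default \
  --endpoint https://my-project.hasura.app --admin-secret <cloud-admin-secret>
hasura metadata export \
  --endpoint https://my-project.hasura.app --admin-secret <cloud-admin-secret>

# Mark the init migration as already applied on the cloud instance so it is not re-run there.
hasura migrate apply --version <init-version> --skip-execution --database-name default \
  --endpoint https://my-project.hasura.app --admin-secret <cloud-admin-secret>

# After docker-compose up -d, apply everything to the local instance.
hasura metadata apply --endpoint http://localhost:8080
hasura migrate apply --database-name default --endpoint http://localhost:8080
hasura metadata reload --endpoint http://localhost:8080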

Related

Connect Postgres DB hosted in Azure storage using Docker

I am trying to connect to a Postgres database hosted in an Azure storage account from within Flyway; Flyway is running as a Docker image in a Docker container:
docker run --rm flyway/flyway -url=jdbc:postgresql://postgres-azure-db:5432/postgres -user=user -password=password info
But I am getting the error ERROR: Unable to obtain connection from database.
Any idea or doc link would be helpful.
You have a similar error (different context, same general solution) in this Flyway issue.
My missing piece for reaching private Cloud SQL instances from private worker pools in Cloud Build was a missing network route.
The fix is ensuring the Service Networking VPC peer has the "Export custom routes" setting enabled, and that the Cloud Router advertises the route.
In your context (Azure), see "Quickstart: Use Azure Data Studio to connect and query PostgreSQL".
You can also test first with a local Postgres instance and Azure Data Studio.
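As a quick connectivity check, you could also point Flyway directly at the fully qualified Azure server name. This is only a sketch with placeholder server name and credentials; sslmode=require is an assumption about your server's SSL setting:

docker run --rm flyway/flyway \
  -url="jdbc:postgresql://myserver.postgres.database.azure.com:5432/postgres?sslmode=require" \
  -user=myadmin -password=mypassword info

If this works but your original hostname does not, the problem is name resolution or network routing rather than Flyway itself.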
After exploring a few options, I implemented Flyway using an Azure Container Instance: I created an ACI to run the Flyway Docker image and execute the commands inside it, and a file share to keep the config file and SQL scripts.
All these resources (storage account, ACI, file share) were created via Terraform scripts triggered from Jenkins.
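I did this with Terraform, but purely as an illustration of the shape of the setup, a rough Azure CLI sketch could look like this (resource names, the storage key variable, and the connection details are placeholders; the share is mounted where the Flyway image looks for SQL scripts):

# Create a file share for the Flyway SQL scripts.
az storage share create --name flyway-share --account-name mystorageacct

# Run Flyway in an Azure Container Instance with the share mounted at /flyway/sql.
az container create \
  --resource-group my-rg --name flyway-aci \
  --image flyway/flyway \
  --azure-file-volume-account-name mystorageacct \
  --azure-file-volume-account-key "$STORAGE_KEY" \
  --azure-file-volume-share-name flyway-share \
  --azure-file-volume-mount-path /flyway/sql \
  --command-line "flyway -url=jdbc:postgresql://myserver.postgres.database.azure.com:5432/postgres?sslmode=require -user=myadmin -password=$DB_PASSWORD migrate" \
  --restart-policy Never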

Copy data from Postgres DB (GCP Project A) to another Postgres DB (GCP Project B)

I would be happy to get your help/feedback regarding a data load.
Goal:
Load source data from a Postgres database located in GCP Project A into another Postgres database located in GCP Project B.
Challenge:
Get a connection to the Postgres DB in GCP Project A (I have an IAM account with sufficient rights to run a COPY TO / COPY FROM command) and copy the table either to a CSV or to a dump that can then be inserted into the other Postgres DB in GCP Project B.
How do I connect to the database with this IAM email account (e.g., if I create a key, where should I store the JSON key file, and would that approach even be feasible)?
Another way I've researched was to use psycopg2, so I could use cursor.copy_expert (which doesn't need any superuser rights or Postgres user credentials) to copy the data, but I didn't succeed in connecting to the database with psycopg2 due to challenges with the Cloud SQL proxy.
Another idea was to use pg_dump or gcloud sql export csv.
I would be curious whether some of you have faced a similar challenge, how you solved it, and what the best way/practice might be.
You can try out the Database Migration Service. You can set up a continuous migration configuration and use Cloud SQL for PostgreSQL.
Hello, after a lot of searching I've come to these solutions:
If you need a continuous copy, use the Database Migration Service; check this documentation.
If you need a one-shot copy:
you can restore your instance into the other project; see the bottom of this documentation page
you can create a bucket, back up your instance to it, then import it from the other project
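For the bucket approach, a rough sketch with gcloud could look like this (project, instance, bucket, and database names are placeholders; the instances' service accounts need access to the bucket):

# Export from the instance in Project A to a bucket.
gcloud sql export sql source-instance gs://my-transfer-bucket/dump.sql.gz \
  --database=mydb --project=project-a

# Import into the instance in Project B from the same bucket.
gcloud sql import sql target-instance gs://my-transfer-bucket/dump.sql.gz \
  --database=mydb --project=project-b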

GCP Cloud SQL Terraform Postgres extension

Is there an official way of installing an extension on a GCP Postgres Cloud SQL instance via Terraform?
The closest I've found is this unofficial Postgres resource, but it's not immediately clear how to link the two. This issue on their tracker sort of shows how, but it is far from a step-by-step guide.
If it makes any difference, I'm trying to provision a Postgres Cloud SQL instance with PostGIS.
Thanks.
Terraform is a deployment tool for creating your infrastructure. To install an extension on Postgres, you need an installation tool, because you have to connect to the database and run a command.
It's the same as when you want to create a user in the database and grant some privileges to it.
In summary, you can't achieve that with Terraform alone. I recommend using an installation tool, such as Ansible, to perform this action.
An alternative is to create, with Terraform, a micro VM with a startup script that connects to the database, runs the command, and deletes itself at the end.
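For example, a minimal sketch of such a startup script, assuming a private IP connection to the instance and placeholder host, credentials, and zone (in practice you would fetch the password from a secret store rather than hard-coding it):

#!/bin/bash
# Throwaway VM startup script: enable PostGIS on the Cloud SQL instance, then delete this VM.
apt-get update && apt-get install -y postgresql-client

# Connect to the Cloud SQL instance (private IP assumed) and enable the extension.
PGPASSWORD='my-db-password' psql \
  --host=10.0.0.3 \
  --username=postgres \
  --dbname=mydb \
  --command='CREATE EXTENSION IF NOT EXISTS postgis;'

# Remove this VM once the extension is installed.
gcloud compute instances delete "$(hostname)" --zone=europe-west1-b --quiet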

How to create database template on azure postgresql service

While using the database service for PostgreSQL on Azure, it looks like it is not possible to create a custom template database.
What I want to achieve is that a regular account can create new databases with a specific extension enabled.
The creation can be delegated, but enabling the extension fails in my tests for all but the initial database admin account.
I've just tried the following in Azure Database for PostgreSQL and it worked (please bear with me as I walk you through my steps in Azure).
In the Azure CLI (bash), I followed the steps in the Quickstart for Azure PostgreSQL to create a new resource group > create a PG server in it > create a new DB (originaldb) on that PG server. All worked just fine.
Then I enabled the earthdistance and cube (a prerequisite for earthdistance) extensions for originaldb (the DB I created in step 1).
Then I used the CLI to create another DB (dbclone) using originaldb as a template:
CREATE DATABASE dbclone TEMPLATE originaldb;
It worked just fine, and the cube and earthdistance extensions are enabled in the dbclone DB.
Now on to trying it with another PG user: I created a PG user (user1) and granted this user the DB creation privilege.
Then I logged on to my server as user1 and created another DB from the CLI using the same command:
CREATE DATABASE dbclone2 TEMPLATE originaldb;
It worked too, and I see that the cube and earthdistance extensions are enabled in the dbclone2 database.
Is that what you're trying to do? Are you hitting errors following the same steps or you're trying to do something different?
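For reference, a rough consolidation of the commands I ran (the server name, user names, and password are placeholders; psql will prompt for passwords as needed):

# 1. As the admin user, enable the extensions on the original database.
psql "host=mypgserver.postgres.database.azure.com dbname=originaldb user=adminuser sslmode=require" \
  -c "CREATE EXTENSION IF NOT EXISTS cube;" \
  -c "CREATE EXTENSION IF NOT EXISTS earthdistance;"

# 2. Create a regular user and let it create databases.
psql "host=mypgserver.postgres.database.azure.com dbname=postgres user=adminuser sslmode=require" \
  -c "CREATE ROLE user1 LOGIN PASSWORD 'changeme' CREATEDB;"

# 3. As user1, clone the original database; the extensions come along with the template.
psql "host=mypgserver.postgres.database.azure.com dbname=postgres user=user1 sslmode=require" \
  -c "CREATE DATABASE dbclone2 TEMPLATE originaldb;"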

Using Docker and MongoDB

I have been using Docker and Kubernetes for a while now, and have set up a few databases (Postgres and MySQL) and services.
Now I am looking at adding a MongoDB, but it seems different when it comes to user management.
Take for example postgres:
https://hub.docker.com/_/postgres/
With it, I can immediately declare a user with a password on setup and then connect using those credentials. It seems the mongo image does not support this. Is there a way to simply declare users on startup and use them, similar to the Postgres setup? That is, without having to exec into the container, modify auth settings, and restart the mongo service.
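For context, this is roughly what I mean with the Postgres image (the container name and credentials here are just example values; the environment variables are the ones documented for the official postgres image):

# The official postgres image creates this user and database on first start.
docker run -d --name pg \
  -e POSTGRES_USER=appuser \
  -e POSTGRES_PASSWORD=secret \
  -e POSTGRES_DB=appdb \
  postgres:15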