What are the APIs Heroku uses to manage Postgres DBs?

How does the Heroku CLI tool manage databases, and what APIs does it use? The tasks I am trying to do from the app are: create/delete a Postgres DB, create a dump, and import a dump, using Python code rather than the console or CLI.

There is no publicly documented API for the Heroku Data products, unfortunately. That said, in my experience the paths are fairly stable and can mostly be reasoned out. This CLI plugin might give you a head start on working out the routes you'd need to hit to achieve your goals.
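For the create/delete part specifically, you may not even need the undocumented routes: provisioning and deprovisioning a Heroku Postgres database is just add-on management, which the documented Platform API does cover. Here is a minimal Python sketch, assuming the requests library, an API token in a HEROKU_API_KEY environment variable, and an illustrative plan name (check which plans your account can actually use):

```python
# Minimal sketch: create/delete a Heroku Postgres database through the
# documented Platform API (add-on provisioning). HEROKU_API_KEY and the
# plan name below are assumptions; substitute your own token and plan.
import os
import requests

API = "https://api.heroku.com"
HEADERS = {
    "Accept": "application/vnd.heroku+json; version=3",
    "Authorization": f"Bearer {os.environ['HEROKU_API_KEY']}",
    "Content-Type": "application/json",
}

def create_postgres(app_name, plan="heroku-postgresql:essential-0"):
    # Provision the add-on; the response includes the add-on's id,
    # which you need later to destroy it.
    resp = requests.post(f"{API}/apps/{app_name}/addons",
                         headers=HEADERS, json={"plan": plan})
    resp.raise_for_status()
    return resp.json()

def delete_postgres(app_name, addon_id):
    # Deprovisioning destroys the database and all of its data.
    resp = requests.delete(f"{API}/apps/{app_name}/addons/{addon_id}",
                           headers=HEADERS)
    resp.raise_for_status()
    return resp.json()
```

Dumps and restores are the part that genuinely lacks a public API; for those, a pragmatic fallback is to shell out to pg_dump/pg_restore against the app's DATABASE_URL from your Python code.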

Related

What's the correct way to manage tables and columns in a Postgres DB in production with Prisma?

For my first backend project, I decided to create a Content Management System (CMS) with Next.js as my full-stack framework and Prisma as the ORM. As a principle, a customizable CMS must be able to programmatically create, drop, and modify database tables; however, to my (scarce) knowledge of Prisma (and DBs in general), this can only be achieved with prisma migrate dev or prisma db push. According to the documentation, neither of these CLI commands should, under any circumstances, be used in production.
I've tried ignoring the docs' warnings and running prisma migrate dev programmatically with execSync(), but it detects that it is running in a non-interactive environment and shuts itself down. Even if I were successful, it would not feel right.
This leads me to believe there's another way to manage these tables, but I can't seem to find it. The only alternative that comes to mind is raw SQL, which is absolutely possible but doesn't feel right, given that Prisma is such a robust ORM tool.
My question then is: how can I programmatically and safely manage relational database tables using Prisma in production?

Automated Table creation for Postgres in Cloud Foundry

We have a Cloud Foundry app that is bound to a PostgreSQL service. Right now we have to manually connect to the PostgreSQL database with pgAdmin and then manually run the queries to create our tables.
Attempted solution:
Do a Cloud Foundry run-task in which I would:
1) Install psql and connect to the remote database
2) Create the tables
The problem I ran into was that cf run-task has limited permissions to install packages.
What is the best way to automate database table creation for a cloud-foundry application?
Your application will run as a non-root user, so it will not have the ability to install packages, at least in the traditional way. If you want to install a package, you can use the Apt Buildpack to install it. This will install the package, but into a location that does not require root access. It then adjusts your environment variables so that binaries & libraries can be found properly.
Also keep in mind that tasks are associated with an application (they both use the same droplet), so to make this work you'd need to do one of two things:
1.) Use multi-buildpacks to run the Apt Buildpack plus your standard buildpack. This will produce a droplet that has both your required packages and your app bits. Then you can start your app and kick off tasks to set up the DB.
2.) Use two separate apps. One for your actual app and one for your code that seeds the database.
Either one should work; both are valid ways to seed your database. The other option, which is what I have typically done, is to use some sort of tool for this. Some frameworks, like Rails, have this built in. If your framework does not, you could bring your own tool, like Flyway. These tools often also help with the evolution of your DB schema, which can be useful too.
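If you go the run-task route, the seeding code itself can be small. Here is a minimal Python sketch, assuming psycopg2-binary is bundled with the app and that the bound service publishes a uri credential inside VCAP_SERVICES (check cf env for the exact structure, which varies by service broker):

```python
# Minimal seeding sketch for a cf run-task. Assumes psycopg2-binary is in
# your requirements and that the bound service exposes a "uri" credential
# in VCAP_SERVICES; the lookup below just takes the first bound service,
# so adjust it to match what `cf env` actually shows.
import json
import os
import psycopg2

services = json.loads(os.environ["VCAP_SERVICES"])
creds = next(iter(services.values()))[0]["credentials"]

conn = psycopg2.connect(creds["uri"])
with conn, conn.cursor() as cur:
    # CREATE TABLE IF NOT EXISTS keeps the task safe to re-run.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS users (
            id    SERIAL PRIMARY KEY,
            email TEXT UNIQUE NOT NULL
        )
    """)
conn.close()
```

You would then kick it off with something like cf run-task my-app "python seed.py" once the droplet contains both your app and the required packages.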

Heroku Release Phase - PG Backup Before Migrations

With the Heroku Release Phase, is it possible to run pg:backups:capture? Or is there another method for creating a database backup before running migrations?
Technically this is possible, but you must have the Heroku CLI installed on your dyno, and you need to authenticate it somehow. So one solution is to find or write a buildpack that will install the CLI tool, and to add a config variable with authentication credentials.
Another approach is to use a library such as https://github.com/kjohnston/pgbackups-archive. There is a problem though: it uses the old Heroku API, which will be disabled in April 2017. I don't know if there is any similar library that uses the new API.
If you just want to back up your data and don't necessarily need pg:backups:capture, you can just write a simple script that runs pg_dump DATABASE_URL with some additional options and uploads the dump file to S3 or any other location. I think this is the easiest solution. Then just add this script as a release command in your Procfile.
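As a rough illustration of that last approach, here is a minimal Python sketch, assuming pg_dump is available on the dyno, boto3 is among your dependencies, and a hypothetical BACKUP_BUCKET config var names the S3 bucket:

```python
# Minimal release-phase backup sketch. Assumes pg_dump is available on the
# dyno, boto3 is installed, and BACKUP_BUCKET is a hypothetical config var
# holding the S3 bucket name (AWS credentials come from the environment).
import os
import subprocess
import time

import boto3

dump_file = f"/tmp/backup-{int(time.time())}.dump"

# Custom-format dump (-Fc); it compresses well and pg_restore can
# restore it selectively.
subprocess.run(
    ["pg_dump", "-Fc", "-f", dump_file, os.environ["DATABASE_URL"]],
    check=True,
)

boto3.client("s3").upload_file(
    dump_file, os.environ["BACKUP_BUCKET"], os.path.basename(dump_file)
)
```

In the Procfile, the release entry would then look something like release: python backup.py && <your migration command>, so the backup runs before migrations are attempted.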

Execute PostgreSQL statements upon deployment to Elastic Beanstalk

I am working on an application that has source code stored in GitHub, build and test is done by CodeShip, and hosting is done in Amazon Elastic Beanstalk.
I'm at a point where seed data is needed on the development database (PostgreSQL in Amazon RDS) and it is changing regularly in development.
I'd like to execute several SQL statements that are stored in GitHub when a deployment takes place. I haven't found a way to do this with the tools we're using, so I'm wondering if there are some alternatives.
If these are the same SQL statements each time, then you can simply create an .ebextensions config (see documentation) that will execute them after each deploy.
If the SQL statements are dynamic per deploy, then I'd recommend a database migrations management tool. I'm familiar with Rails, which has this by default, but there's also a standalone migrations tool for non-Rails projects. Google can suggest many other options.
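For the first case, the command that the .ebextensions config invokes can be a small Python script checked into the repo alongside the SQL. A minimal sketch, assuming psycopg2-binary is installed, the seed files live in a hypothetical db/seeds/ directory, and the database is attached to the environment so Elastic Beanstalk exposes the usual RDS_* variables:

```python
# Minimal sketch of a deploy-time seed script for Elastic Beanstalk.
# Assumes psycopg2-binary is installed, seed files live in a hypothetical
# db/seeds/ directory in the repo, and the environment has an attached
# RDS instance (which exposes the RDS_* variables used below).
import glob
import os
import psycopg2

conn = psycopg2.connect(
    host=os.environ["RDS_HOSTNAME"],
    port=os.environ.get("RDS_PORT", "5432"),
    dbname=os.environ["RDS_DB_NAME"],
    user=os.environ["RDS_USERNAME"],
    password=os.environ["RDS_PASSWORD"],
)
with conn, conn.cursor() as cur:
    # Apply every seed file in name order; keep the files idempotent
    # (e.g. CREATE TABLE IF NOT EXISTS, upserts) so redeploys are safe.
    for path in sorted(glob.glob("db/seeds/*.sql")):
        with open(path) as f:
            cur.execute(f.read())
conn.close()
```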

How to deploy local Strapi content to Heroku

I have a local version of Strapi set up, and the codebase is pushing fine to Netlify for the frontend and Heroku for the backend.
However, I can't work out how to get the content held in the .tmp/data.db file into the mLab instance of the database on Heroku.
The structure is all in sync with my local version.
I've tried exporting tables from SQLite to JSON files and then importing them as collections using the CLI, which says it has imported the documents into Heroku (and I can see them in the mLab interface), but this was a last-ditch attempt as I couldn't see a way of transferring the entire file.
However, this isn't working as the content types are still empty.
Make sure you have correctly configured your ./config/environments/production/database.json with your mLab settings.
In development, it looks like you are using SQLite. This database is good for local development but can't be used on Heroku (look at the storage system used by Heroku and you will understand why).
Be careful: you are using an SQL database in dev and a NoSQL database in production.
This is unusual - depending on your data structure, you can run into data migration issues. I don't suggest you do that. Use the same type of database in dev and prod.