What's the correct way to manage tables and columns in a postgres db in production with Prisma? - postgresql

For my first backend project, I decided to create a Content Management System (CMS) with NextJS as my full-stack framework and Prisma as the ORM. As a principle, a customizable CMS must be able to programmatically create, drop, and modify tables in the database; however, to my (scarce) knowledge of Prisma (and databases in general), this can only be achieved with prisma migrate dev or prisma db push. According to the documentation, neither of these CLI commands should, under any circumstances, be used in production.
I've tried to ignore the docs' warnings and run prisma migrate dev programmatically with execSync(), but it detects that it is running in a non-interactive environment and shuts itself down. Even if I were successful, it would not feel right.
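Concretely, the attempt looked roughly like this (a minimal sketch; the file name and migration name are just placeholders):

    // migrate.ts - sketch of the programmatic attempt described above.
    // Run with: npx ts-node migrate.ts
    import { execSync } from "node:child_process";

    try {
      // This is the call Prisma rejects: migrate dev detects that it is running
      // without an interactive terminal and exits with an error.
      execSync("npx prisma migrate dev --name cms_schema_change", { stdio: "inherit" });
    } catch (err) {
      console.error("prisma migrate dev refused to run non-interactively:", err);
    }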
This leads me to believe there's another way to manage these tables, but I can't seem to find it. The only alternative that comes to mind is raw SQL, which is absolutely possible but doesn't seem right either, given that Prisma is such a robust ORM tool.
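To make the raw SQL route concrete, this is roughly what I have in mind (the table and column names are made up for illustration):

    // raw-ddl.ts - sketch of managing tables with raw SQL through Prisma Client.
    import { PrismaClient } from "@prisma/client";

    const prisma = new PrismaClient();

    async function addCustomTable(): Promise<void> {
      // DDL cannot be parameterized, so it has to go through $executeRawUnsafe;
      // the identifiers must therefore come from trusted input only.
      await prisma.$executeRawUnsafe(`
        CREATE TABLE IF NOT EXISTS "cms_custom_entries" (
          id SERIAL PRIMARY KEY,
          title TEXT NOT NULL,
          body TEXT
        )
      `);
    }

    addCustomTable()
      .catch((e) => console.error(e))
      .finally(() => prisma.$disconnect());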
My question then is: how can I programmatically and safely manage relational database tables using Prisma in production?

Related

What are the APIs heroku uses to manage postgres DB?

How does the Heroku CLI tool manage databases? What APIs does it use? The tasks I am trying to do from the app are to create/delete a Postgres DB, create a dump, and import a dump using Python code rather than the console or CLI.
There is no publicly defined API for the Heroku Data products, unfortunately. That said, in my experience, the paths are fairly stable and can mostly be reasoned out. This CLI plugin might give you a head start on trying to work out the routes you'd need to hit in order to achieve your goals.
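If it is specifically the create/delete part you need, the general Platform API (which is documented, unlike the Data-specific endpoints) does expose add-on provisioning, so a sketch along these lines should work. It's TypeScript rather than Python, but the HTTP calls translate directly; the app name, token variable, and plan name are placeholders:

    // heroku-addons.ts - sketch of creating/deleting a Heroku Postgres database
    // by provisioning and destroying the add-on via the Platform API.
    const HEROKU_API = "https://api.heroku.com";
    const token = process.env.HEROKU_API_KEY; // a Heroku API token (placeholder env var)
    const app = "my-example-app";             // placeholder app name

    const headers = {
      Accept: "application/vnd.heroku+json; version=3",
      Authorization: `Bearer ${token}`,
      "Content-Type": "application/json",
    };

    // Create a Postgres database by attaching the add-on to the app.
    async function createPostgres() {
      const res = await fetch(`${HEROKU_API}/apps/${app}/addons`, {
        method: "POST",
        headers,
        body: JSON.stringify({ plan: "heroku-postgresql:mini" }), // plan names change over time
      });
      return res.json(); // the response includes the add-on id used below
    }

    // Delete the database by destroying the add-on.
    async function deletePostgres(addonId: string) {
      await fetch(`${HEROKU_API}/apps/${app}/addons/${addonId}`, {
        method: "DELETE",
        headers,
      });
    }

For dumps there is no comparable public endpoint; running pg_dump/pg_restore against the database's connection URL from your code is the usual fallback.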

Execute PostgreSQL statements upon deployment to Elastic Beanstalk

I am working on an application whose source code is stored in GitHub; builds and tests are run by CodeShip, and hosting is done on Amazon Elastic Beanstalk.
I'm at a point where seed data is needed on the development database (PostgreSQL in Amazon RDS) and it is changing regularly in development.
I'd like to execute several SQL statements that are stored in GitHub when a deployment takes place. I haven't found a way to do this with the tools we're using, so I'm wondering if there are some alternatives.
If these are the same SQL statements, then you can simply create an .ebextension (see documentation) that will execute them after each deploy.
If the SQLs are dynamic per deploy, then I'd recommend a database migrations management tool. I'm familiar with Rails, which has that by default, but there's also a standalone migrations tool for non-Rails projects. Google can suggest many other options.
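Regarding the first option, a sketch of what such an .ebextensions config could look like, assuming the seed file ships inside the application bundle, the psql client is available on the instance, and the environment variables are the ones Elastic Beanstalk sets for an attached RDS instance:

    # .ebextensions/seed.config (YAML) - illustrative only; the file path and
    # variable names depend on how your environment is set up.
    container_commands:
      01_seed_database:
        command: "PGPASSWORD=$RDS_PASSWORD psql -h $RDS_HOSTNAME -p $RDS_PORT -U $RDS_USERNAME -d $RDS_DB_NAME -f seed/seed.sql"
        leader_only: true

leader_only makes the command run on a single instance per deployment, which avoids seeding the database once per server.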

SQL Server Data Tools and Edmx

So we're using the new SSDT Microsoft released, pretty cool stuff. We are keeping a database project under version control with all the schemas, plus an offline database for development, and we can later deploy to a SQL Azure database. We're using EF in development, so my question is where the EDMX fits in: should we update the EDMX file from the offline database or directly from the online SQL Azure database? What's the best practice on this?
I would say that in your case "the production database is the truth", so I would update from SQL Azure. There's no right answer, though, really.
Incidentally, in the early betas of SSDT it was possible to have a reference from an EDMX to a SSDT project thus your source code became the truth (which, in my opinion, is preferable) and the EDMX knew it was always working against "the truth". Unfortunately they ditched this feature and there are no signs of it returning.
For EF to work correctly, the EDMX file has to be in sync with the database you are connecting to. It's hard to answer your question without knowing the development process you follow, but I would imagine you use SQL Azure in production and develop against an on-premises database. Therefore one copy of the EDMX file will be used on the production server. In the development environment you have a "living" copy of the EDMX file that is changed as needed when the local database changes. When you get to the point when you are ready to ship, you deploy your app (including the EDMX file) to a production environment that uses SQL Azure.
If, in your development environment, you update the EDMX file from SQL Azure, then things will break or not work correctly if the schema of the database in Azure is different from the schema of your local database.

Is there a way to upgrade from a Heroku shared database to a production grade database like Basic or Crane?

I have been using the Heroku shared database for a while now in an app and I would like to upgrade to their new Basic/Crane/etc. production grade databases. I don't however see a clear path to do that.
The options as I see them are:
1. I could use db:pull/db:push to migrate the data/schema from the current production database to the new database: go into maintenance mode, move the data, then update the config to point to the new database. Not terrible, but I fear that the old schema from the shared database is not v9 compatible? Maybe I'm wrong. This could also take a long time, resulting in some major downtime. Not cool.
2. Use pg:backups to create a backup, and use heroku pg:restore to move the data over. Again I fear the same schema issues, but this would be much faster.
3. Start with a Basic/Crane database and use their Followers concept. This feels like the right way to do it, but I don't know whether it works with the shared databases. If it does, I do not understand how.
All of these options I feel require me to upgrade to postgres v9 at some point since all the new databases are v9. Is there a way to do this in the shared environment, and then maybe migrating will be less painful... maybe.
Any ideas or suggestions?
Their Migrating Between Databases document points out that your option (3) using Followers for a fast changeover doesn't work when you're starting with a shared instance. What you should do here is use pg:backups to make a second database to test your application against and confirm the new database acts correctly there. If that works, turn on maintenance mode and do it again for the real migration.
There are very few schema-level incompatibility issues going between PostgreSQL 8.4 and 9.1. That migration doc warns about one of them, the bytea changes. If you're using bytea, you probably know it; it's not that common. Most of the other things changed between those versions are also features you're unlikely to use, such as the PL/pgSQL modifications made in PostgreSQL 9.0. Again, if you're using PL/pgSQL server-side functions, you'll probably know that too, and most people on Heroku don't.
Don't guess whether your database is compatible; test. You can run multiple copies of PostgreSQL locally; they just need different data directories and ports configured. That way you can test against 9.1 and 8.4 at will.
You usually use the pg_dump from 9.1 to dump the 8.4 database; pg_dump knows about older versions, but not newer ones (obviously).
Unless you're doing something unusual with the database (presumably not, since you're on Heroku), there are unlikely to be any problems just dumping and restoring between versions.

Restoring Ingres Database from one system to another system

We want to restore a database that we received from the client as a backup into our development environment, but we are unable to restore it successfully. Can anyone help us with the steps involved in this restore process? Thanks in advance.
Vijay, if you plan to make a new database out of checkpoints (+journals) made on another (physical) server, then I must disappoint you: it is going to be a painful process. Follow these instructions: http://docs.actian.com/ingres/10.0/migration-guide/1375-upgrading-using-upgradedb. The process is basically the same as upgradedb. However, if the architecture of the development server is different (say the backup was made on a 32-bit system and the development machine is, say, POWER6-based), then it is impossible to make your development copy of the database using this method.
On top of that, this method of restoring backups is not officially supported by Actian.
My recommendation is to use the 'unloaddb' tool on the production server, export the database into a directory, scp that directory to your development server, and then use the generated 'copy.in' file to create the development database. NOTE: this is the way supported by Actian, and you can find more details on this page: http://docs.actian.com/ingres/10.0/migration-guide/1610-how-you-perform-an-upgrade-using-unloadreload. This is the preferred way of migrating databases across various platforms.
It really depends on how the database has been backed up and provided to you.
In Ingres there is a snapshot (called a checkpoint) that can be restored into a congruent environment, but that can be quite involved.
There is also output from copydb and unloaddb commands which can be reloaded into another database. Things to look out for here are a change in machine architecture or paths that may have been embedded into the scripts.
Do you know how the database was backed up?