Cloning an AWS RDS instance from a snapshot - PostgreSQL

I'm trying to create a copy of an existing database in the AWS (RDS) Console using a snapshot of the database, but the button for migrating the snapshot is disabled. What might be the reason for that?

Based on the comments, the solution was to use the Restore option instead of Migrate.

You want to choose the “Copy” option, not Migrate.
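For anyone doing this from the AWS CLI instead of the console, the Restore path maps to a single call; a minimal sketch with placeholder identifiers:

    # Create a new instance from an existing snapshot (names are placeholders)
    aws rds restore-db-instance-from-db-snapshot \
        --db-instance-identifier my-db-clone \
        --db-snapshot-identifier my-db-snapshot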

Related

Updating Azure Database for PostgreSQL flexible server using Alembic

I'm looking to amend some tables I have in a PostgreSQL instance on Azure, but I cannot work out how to perform the upgrades with Alembic.
I have been following the tutorial here, which includes a Heroku deployment around the 12:01:00 mark. In that case, once the changes have been defined, we can run heroku run "alembic upgrade head" to perform the upgrade. However, I cannot find the equivalent process for Azure.
My postgres instance is housed in a VNet and connected to a web app. Until now, I've made code changes to a server which is running in an attached web app. I push to GitHub, which then deploys the changes in Azure. Obviously, if the table already exists in postgres, changes I make to the original schema are not reflected. I considered deleting the table and starting again, but this seems a very risky strategy.
A similar question was asked here, but has remained unanswered. I've also checked the documentation for Alembic and Azure but could not find anything.
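For reference, the Heroku flow I'm trying to replicate is below, along with my (untested) guess at an Azure analogue: since the web app sits inside the VNet, it should be able to reach the flexible server, so opening a shell on it and running the same upgrade there might work. Resource names are placeholders.

    # Heroku flow from the tutorial
    heroku run "alembic upgrade head"

    # Possible Azure analogue: shell into the web app and upgrade from there
    az webapp ssh --resource-group my-rg --name my-webapp
    alembic upgrade head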

Too many Prisma migration files

I'm currently working on a project that uses Prisma 2 with PostgreSQL as the database. From my understanding, whenever I make changes to the schema.prisma file and want to migrate them to the database, I run prisma migrate dev locally. Then I push the auto-created migration files under the migrations folder to GitHub, and our repo runs prisma migrate deploy against the staging or production server.
So, my concern is: each time I run prisma migrate dev, a new migration file is created in the migrations folder, so there will be a lot of migration files as the project develops. Is that what's supposed to happen? Or is there a better way?
Thank you for your help. I'm pretty new to Prisma 2 and still trying to find the correct way to work with it. BTW I think Prisma 1 was easier to use :)
I’d recommend using prisma db push while developing locally and generating the migration with prisma migrate dev only when you think the schema changes are good to go. More at https://www.prisma.io/docs/guides/database/prototyping-schema-db-push
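As a rough sketch of that workflow (the migration name is just an example):

    # Iterate on schema.prisma locally without creating migration files
    npx prisma db push

    # Once the schema feels right, capture it as a single migration
    npx prisma migrate dev --name add-user-table

    # In CI, apply the committed migrations to staging/production
    npx prisma migrate deploy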
So, there will be a lot of migration files under the migrations folder as the project develops. Is that what's supposed to happen?
Yes, this is exactly what's supposed to happen. The migration files contain the history of all changes to your Prisma schema (and the underlying database tables/configuration), so they all need to be retained.
You might be able to work around this somehow, but that certainly wouldn't be the recommended way to use migrate.
You can find a list of best practices regarding migration in the Prisma migrate article in the docs.
It's been a very long time since the question was asked, but I want to warn about something:
As you may have noticed, a table named _prisma_migrations is created by Prisma automatically, and every time you add a new migration its data shows up there. So any manual change made in the migrations folder can cause problems, since it breaks the relationship between the folder names and the _prisma_migrations table.
There are also docs on "squashing migrations": https://www.prisma.io/docs/guides/database/developing-with-prisma-migrate/squashing-migrations
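Roughly, the squashing flow those docs describe looks like the sketch below; the folder name is illustrative, and this should only be done once every environment has already applied the old migrations:

    # Replace the old migration folders with a single squashed one
    rm -rf prisma/migrations
    mkdir -p prisma/migrations/000000000000_squashed

    # Generate one SQL script representing the whole current schema
    npx prisma migrate diff \
        --from-empty \
        --to-schema-datamodel prisma/schema.prisma \
        --script > prisma/migrations/000000000000_squashed/migration.sql

    # Mark it as applied on databases that already have this schema
    npx prisma migrate resolve --applied 000000000000_squashed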

Restore a backup generated using backpack backup manager

Is there a way to restore a backup created by the Backpack backup manager?
I have checked https://github.com/Laravel-Backpack/BackupManager but I cannot see an option to perform this action.
That package is just an interface for creating backups using spatie/laravel-backup. It provides no interface for restoring backups - you'll have to do that manually, I'm afraid.
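A minimal sketch of a manual restore, assuming the package's default layout (the archive name and dump filename below are hypothetical; spatie/laravel-backup stores database dumps under db-dumps/ inside the zip):

    # Extract the backup archive produced by spatie/laravel-backup
    unzip 2024-01-01-00-00-00.zip -d restored

    # Re-import the dump (PostgreSQL shown; use mysql for a MySQL dump)
    psql -U myuser -d mydb < restored/db-dumps/postgresql-mydb.sql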

Snowflake connectivity with Git

Is there any way we can connect Snowflake with Git for version control? That way, we could maintain versions of our merge statements and any other SQL scripts in Git.
DBeaver has Git integration and is the best solution my team has found for version control with Snowflake. It's not perfect, but it allows you to run your scripts against Snowflake and then push your SQL code to a Git repository through the app UI or the command line.
Yes! One way to do this is to store your Snowflake SQL code in files with the .sql extension (e.g. filename.sql). You can add those files to a Git repo and track them there accordingly.
This is an age-old question when dealing with databases and how one should go about versioning them. Unfortunately, no database that I'm aware of integrates directly with any VCS.
My team has settled on using dbt. This essentially turns the database into a series of text files that are easily integrated with Git. The short of it is that you edit your models as local text files and then use dbt run to build those models in Snowflake itself. It's also nice that you can configure separate environments, such as dev and prod.
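A tiny sketch of what that looks like day to day (the model and target names are made up):

    # models/stg_orders.sql is just a SELECT statement kept in Git;
    # dbt builds it as a view/table in Snowflake
    dbt run --target dev      # build the models in the dev environment
    dbt run --target prod     # promote the same code to prod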
Other answers cover using an IDE as a go-between for Git and Snowflake. These projects could also be useful:
https://medium.com/snowflake/snowflake-vs-code-sql-tools-and-github-7eab915e10cb - use VS Code as the IDE, with a useful Snowflake extension
https://github.com/Snowflake-Labs/schemachange - manage schema changes as scripts in Git and deploy them with CI/CD (sketched below)
https://github.com/Snowflake-Labs/sfsnowsightextensions#get-sfworksheets - the missing SnowSight feature: exporting worksheets
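As a rough illustration of the schemachange flow (the account, user, role, warehouse, database, and history-table names below are all placeholders; the password is read from the SNOWFLAKE_PASSWORD environment variable):

    # Versioned scripts live in Git, e.g. migrations/V1.1.0__create_orders.sql
    schemachange -f migrations \
        -a MYACCOUNT -u DEPLOY_USER -r DEPLOY_ROLE \
        -w DEPLOY_WH -d TARGET_DB \
        -c TARGET_DB.SCHEMACHANGE.CHANGE_HISTORY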
There is now a VS Code extension for Snowflake. I'm able to connect VS Code to our repo (Azure DevOps in my case) and to Snowflake. It has some nice features too, like being able to easily cycle through past queries (including query results), and it gives the same level of detail as (or more than) the Snowflake UI.

Restore a deleted AWS CloudFormation stack

Does anyone know if there is a way to restore a deleted stack in AWS CloudFormation? I can see the deleted stacks in the filter, but there is no option to restore them.
If a restore is not possible, can I recreate the same stack?
To give a little background: my application runs on Elastic Beanstalk, and I did not realize it creates a CloudFormation stack for autoscaling. I deleted it and then realized all my deployments fail. So I'm wondering if I can restore it.
Thanks for all the help.
The easiest way: go to your Elastic Beanstalk environment and choose "Rebuild environment". AWS will recreate everything from scratch, including the CloudFormation stack.
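The same rebuild can be triggered from the AWS CLI; a one-liner sketch with a placeholder environment name:

    # Equivalent of the console's "Rebuild environment" action
    aws elasticbeanstalk rebuild-environment --environment-name my-env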