Execute PostgreSQL statements upon deployment to Elastic Beanstalk - postgresql

I am working on an application that has source code stored in GitHub, build and test is done by CodeShip, and hosting is done in Amazon Elastic Beanstalk.
I'm at a point where seed data is needed on the development database (PostgreSQL in Amazon RDS) and it is changing regularly in development.
I'd like to execute several SQL statements that are stored in GitHub when a deployment takes place. I haven't found a way to do this with the tools we're using, so I'm wondering if there are some alternatives.

If these are the same SQL statements each time, then you can simply create an .ebextensions config (see the documentation) that will execute them after each deploy.
If the SQL is dynamic per deploy, then I'd recommend a database migration management tool. I'm familiar with Rails, which has one by default, but there are also standalone migration tools for non-Rails projects. Google can suggest many other options.
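As a rough sketch of the .ebextensions approach (the file name, the `seed.sql` path, and the assumption that the EB-provided `RDS_*` variables are visible to container_commands on your platform are all mine; verify against your environment):

```yaml
# .ebextensions/seed.config -- runs during each deployment, before the new
# version is live. leader_only ensures it runs on a single instance.
packages:
  yum:
    postgresql: []   # psql client, so the command below can run

container_commands:
  01_seed_database:
    command: "PGPASSWORD=$RDS_PASSWORD psql -h $RDS_HOSTNAME -p $RDS_PORT -U $RDS_USERNAME -d $RDS_DB_NAME -f seed.sql"
    leader_only: true
```

Since container_commands run from the extracted application source, a `seed.sql` committed to your GitHub repo is picked up automatically on each deploy.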

Related

What are the APIs heroku uses to manage postgres DB?

How does the Heroku CLI tool manage DBs? What APIs does it use? The tasks I'm trying to do from the app are creating/deleting a Postgres DB, creating a dump, and importing a dump, using Python code and not the console or CLI.
There is no publicly defined API for the Heroku Data products, unfortunately. That said, in my experience, the paths are fairly stable and can mostly be reasoned out. This CLI plugin might give you a head start on trying to work out the routes you'd need to hit in order to achieve your goals.
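Given the lack of a supported API, one pragmatic route from Python is to shell out to the Heroku CLI itself, whose commands are documented and reasonably stable. A minimal sketch (the app name and add-on plan below are placeholders, and this assumes the `heroku` CLI is installed and authenticated):

```python
import subprocess

def run_cli(cmd, *args):
    """Run a CLI command and return its stdout; raises on a non-zero exit."""
    result = subprocess.run(
        [cmd, *args], check=True, capture_output=True, text=True
    )
    return result.stdout

# With the Heroku CLI available, the documented commands cover the tasks in
# the question (plan and app names are placeholders):
#   run_cli("heroku", "addons:create", "heroku-postgresql:essential-0", "-a", "my-app")
#   run_cli("heroku", "addons:destroy", "heroku-postgresql", "-a", "my-app", "--confirm", "my-app")
#   run_cli("heroku", "pg:backups:capture", "-a", "my-app")
#   run_cli("heroku", "pg:backups:download", "-a", "my-app")
```

This trades elegance for stability: the CLI absorbs any changes to the underlying private routes.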

How do I choose a local MySQL version that will be compatible with our future switch to CloudSQL?

For simplicity and cost, we are starting our project using local MySQL running on our GCE instances. We will want to switch to CloudSQL some months down the road.
Any advice on avoiding MySQL version conflicts/challenges would be much appreciated!
The majority of the documentation is for MySQL 5.7, so my advice is to use that version. Also review the migrating to Cloud SQL concept page, a guide that walks you through how to migrate safely, which migration methods exist, and how to prepare your MySQL database.
Another piece of advice: work through the migrating MySQL to Cloud SQL using an automated workflow tutorial. It notes that any MySQL database running version 5.6 or 5.7 can take advantage of the Cloud SQL automated migration workflow, and it is useful for understanding how the workflow operates and how to deploy a source MySQL database on Compute Engine. The SQL page has more tutorials if you want to learn more.
Finally, I suggest checking the SQL pricing page to stay aware of billing. I also suggest creating a workspace; with it you can have more transparency and control over your billing charges by identifying and tuning the services that produce the most log entries.
I hope this information is helpful.
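If you end up doing a dump-based migration instead of the automated workflow, Cloud SQL's import is picky about how the dump is produced. A typical invocation looks something like this (database name and credentials are placeholders; check the current Cloud SQL docs for the exact flags recommended for your version):

```shell
mysqldump --databases mydb -h 127.0.0.1 -u root -p \
  --hex-blob --single-transaction --set-gtid-purged=OFF \
  --default-character-set=utf8mb4 > mydb.sql
```

Keeping your local MySQL on the same version you plan to run in Cloud SQL makes this step, and the eventual cutover, much less error-prone.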

Automated Table creation for Postgres in Cloud Foundry

We have a Cloud Foundry app that is bound to a PostgreSQL service. Right now we have to manually connect to the PostgreSQL database with pgAdmin, and then manually run the queries to create our tables.
Attempted solution:
Do a Cloud Foundry run-task that would:
1) Install psql and connect to the remote database
2) Create the tables
The problem I ran into was that cf run-task runs with limited permissions and cannot install packages.
What is the best way to automate database table creation for a cloud-foundry application?
Your application will run as a non-root user, so it will not have the ability to install packages, at least in the traditional way. If you want to install a package, you can use the Apt Buildpack to install it. This will install the package, but into a location that does not require root access. It then adjusts your environment variables so that binaries & libraries can be found properly.
Also keep in mind that tasks are associated with an application (they both use the same droplet), so to make this work you'd need to do one of two things:
1.) Use multi-buildpacks to run the Apt buildpack plus your standard buildpack. This will produce a droplet that has both your required packages and your app bits. Then you can start your app and kick off tasks to set up the DB.
2.) Use two separate apps. One for your actual app and one for your code that seeds the database.
Either one should work; both are valid ways to seed your database. The other option, which is what I typically do, is to use some sort of tool for this. Some frameworks, like Rails, have this built in. If your framework does not, you could bring your own tool, like Flyway. These tools often also help with the evolution of your DB schema, which can be useful too.
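A sketch of option 1 above (the buildpack order, the `postgresql-client` package name, the second buildpack, and the `DATABASE_URL` variable are all assumptions; the commands use cf CLI v7 syntax):

```shell
# apt.yml at the app root tells the apt-buildpack what to install,
# into a non-root location:
cat > apt.yml <<'EOF'
---
packages:
- postgresql-client
EOF

# Push with the apt-buildpack first, then your normal buildpack:
cf push my-app -b https://github.com/cloudfoundry/apt-buildpack -b python_buildpack

# Run a one-off task against the same droplet to create the tables.
# Single quotes so DATABASE_URL expands inside the task, not locally;
# your service may instead expose credentials via VCAP_SERVICES.
cf run-task my-app --command 'psql $DATABASE_URL -f create_tables.sql' --name create-tables
```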

Which ways do I have to import my tables and data from SQL Server to SQL Azure?

I'm looking for the easiest way to import data from SQL Server to SQL Azure.
I'd like to work locally, would there be a way to synchronize my local database to SQL Azure all the time?
The thing is I wouldn't like to update each time I add a table in my local database to SQL Azure.
I HIGHLY recommend using the SQL Database Migration Wizard: http://sqlazuremw.codeplex.com/ is the best free tool I've used so far. It's simple and works much more easily than the SSMS and VS built-in tools. I think the Red Gate tools now work well with SQL Azure too, but I haven't had a chance to use them.
Have you looked at SQL Data Sync? A new October update just came out today.
http://msdn.microsoft.com/en-us/library/hh456371
Microsoft SQL Server Data Tools (SSDT) was developed by Microsoft to make it easy to deploy your DB to Azure.
Here's a detailed explanation: http://msdn.microsoft.com/en-us/library/windowsazure/jj156163.aspx or http://blogs.msdn.com/b/ssdt/archive/2012/04/19/migrating-a-database-to-sql-azure-using-ssdt.aspx
and here's how to automate the process of publishing: http://www.anujchaudhary.com/2012/08/sqlpackageexe-automating-ssdt-deployment.html
Also worth a look: the SQL Server Data Tools Team Blog.
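For the automation step, SSDT builds a .dacpac that SqlPackage can publish to SQL Azure; a typical invocation looks something like this (server, database, and credential values are placeholders):

```shell
SqlPackage.exe /Action:Publish /SourceFile:MyDatabase.dacpac \
  /TargetServerName:myserver.database.windows.net \
  /TargetDatabaseName:MyDatabase \
  /TargetUser:myadmin /TargetPassword:$SQL_PASSWORD
```

Because Publish diffs the .dacpac against the target schema, rerunning it after you add a local table deploys just that change, which addresses the "don't want to update each time" concern.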
There are a few ways to migrate databases; I would recommend doing it with the Generate Scripts wizard.
Here are the steps to follow
http://msdn.microsoft.com/en-us/library/windowsazure/ee621790.aspx
There are also other tools, like the Microsoft Sync Framework.
Here you'll find more information about it
http://msdn.microsoft.com/en-us/library/windowsazure/ee730904.aspx

Deploy Entity Framework Code First

I guess I should have thought of this before I started my project but I have successfully built and tested a mini application using the code-first approach and I am ready to deploy it to a production web server.
I have moved the folder to my staging server and everything works well. I am just curious if there is a suggested deployment strategy?
If I make a change to the application I don't want to lose all the data if the application is restarted.
Should I just generate the DB scripts from the code-first project and then move it to my server that way?
Any tips and guide links would be useful.
Thanks.
Actually, the database initializer is only for development; deploying such code to production is the best way to get into trouble. Code First currently doesn't have any approach for database evolution, so you must manually build change scripts for your database after each new version. The easiest approach is using the Database tools in VS 2010 Premium and Ultimate: if you have a database with the old schema and a database with the new schema, VS will prepare the change script for you.
Here are the steps I follow.
Comment out any Initialization strategy I'm using.
Generate the database scripts for schema + data for all the tables EXCEPT the EdmMetadata table and run them on the web server. (Of course, if it's a production server, BE CAREFUL about this step. In my case, during development, the data in production and development are identical.)
Commit my solution to subversion which then triggers TeamCity to build, test, and deploy to the web server (of course, you will have your own method for this step, but somehow deploy the website to the web server).
You're all done!
The initializer and the EdmMetadata table are needed for development only.