As a project grows, migrations (both makemigrations and migrate) take longer and longer, noticeably delaying every deployment.
Django lets you squash them so you don't have a long list of migration files, but the migrations still take the same amount of time.
Then I tried the following:
Remove all the migration files in my app
Clean the django_migrations table in the DB
Run makemigrations (to create the 0001_initial for my app)
Run migrate --fake (to populate the django_migrations table)
Now the new migrations are really fast at the expense of losing the migration history.
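For reference, here is a minimal sketch of those four steps as a script, assuming a single app named myapp and using Django's call_command instead of the manage.py CLI (the app name and the script form are purely illustrative):

```python
# Hypothetical reset script; in practice you would more likely run the
# equivalent manage.py commands by hand.
import django
from django.core.management import call_command
from django.db import connection

django.setup()  # assumes DJANGO_SETTINGS_MODULE is already set

# Step 1 (done beforehand): delete the old files in myapp/migrations/.

# Step 2: clean the recorded history for the app in django_migrations.
with connection.cursor() as cursor:
    cursor.execute("DELETE FROM django_migrations WHERE app = %s", ["myapp"])

# Step 3: recreate a single 0001_initial from the current models.
call_command("makemigrations", "myapp")

# Step 4: record it as applied without touching the existing tables.
call_command("migrate", "myapp", fake=True)
```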
So my question is: considering that this could be like a v1.0, and that it is a standalone project that no other project depends on, what are the risks of doing this?
I have the feeling that it is something that should not be done, since I couldn't find any specific Django command to do it. South had a reset, but Django migrations only have squash...
As long as all your installations were up to date when you deleted the migrations, there is no harm in deleting the old ones. The single drawback is that you can't easily take the database back to a historic state once they are gone, but unless you need to restore old backup files, you should have no use for that.
As I pointed out, just make sure everyone using your project is up to date beforehand (or give them pointers, such as the commit just before the deletion, so they can apply the old migrations before upgrading to HEAD, if you are using version control).
If you were developing a third-party app, it would make installation more complicated (dealing with the --fake argument depending on whether people are upgrading or installing from scratch...), but this is not your case.
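For what it's worth, a hedged sketch of how --fake-initial can cover both cases for a reusable app (the app name is again a placeholder):

```python
# --fake-initial marks the recreated initial migration as applied when its
# tables already exist (an upgrade) and runs it for real on a fresh install.
from django.core.management import call_command

call_command("migrate", "myapp", fake_initial=True)
```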
I am using Moodle 3.2. I made some changes to the Moodle database tables; for example, I added a Schoolyear column to the mdl_course table for my own requirements. When Moodle migrates to the next version, will those changes be affected or not?
It is generally a bad idea to mess around with core Moodle database tables. It can cause problems during upgrades (and will not be included in backups, unless you change the core code as well), so it is usually better to store extra data in new tables.
That being said, there are occasions where it is really not practical to do anything else, and, usually, it does not cause too many actual problems. The harder part is merging the core code changes that work with the changed database tables.
I'd like to set up a development environment for our project that reflects the state of a production database, but any data modification on it would not be visible in production. Does PostgreSQL provide features that would let me do that on the same database, for example making it feel as if the user had their own permanently uncommitted transaction?
How would you want this to work?
Imagine you modified something in the development database, and then the production database gets modified in an incompatible way.
Should the change on the production system fail? Should replication error out?
You could probably accomplish such a setup with Slony, but that is complicated, puts extra load on your production system, makes schema changes complicated and won't make you happy for the reason mentioned above.
The idea of having modifications in the development database take place in an uncommitted transaction is interesting, but how could you develop in a reasonable way if you cannot test transaction handling and other transactions can never see the work of your own transaction?
I think the best solution is to have a test database that is regularly updated from the production system, ideally by automatically restoring your nightly production database backup (that would have the additional advantage of testing your backups).
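A rough sketch of such a nightly refresh, assuming PostgreSQL, a custom-format dump at /backups/prod.dump and a test database named app_test (all three names are made up for illustration):

```python
# Hypothetical nightly job: rebuild the test database from the latest
# production backup. Connection settings come from the usual PG* env vars.
import subprocess

TEST_DB = "app_test"            # assumed test database name
BACKUP = "/backups/prod.dump"   # assumed custom-format pg_dump output

subprocess.run(["dropdb", "--if-exists", TEST_DB], check=True)
subprocess.run(["createdb", TEST_DB], check=True)
# --no-owner avoids failures caused by role differences between prod and test.
subprocess.run(["pg_restore", "--no-owner", "-d", TEST_DB, BACKUP], check=True)
```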
We are using code first migrations to keep our database and model in sync. At the moment we have the version number as name for the migration which clearly doesn't work out.
The problem is that multiple migrations with the same name were created by different developers, independently of each other, for their local databases. This led to some weird behavior, as the IMigrationMetadata.Id was different because of the time stamp, but the classes are partial classes with the same name.
What is the right way to name these migrations? The examples are always ridiculously oversimplified: e.g. adding a property Readers results in a migration named AddReaders.
Or should the migrations be broken down into such little changes, instead of accumulating all the changes into one big migration? What if there are dependencies?
Yes, I think the best way is to break changes down into small units with descriptive names. As with git, where you should commit often, with migrations you should migrate often. Not necessarily property by property, but each migration containing a logical unit of work.
If you need to add two tables for some feature, add those two tables in one migration. Avoid making big migrations where you work for days changing models before creating a migration. Timing is essential for avoiding conflicts.
If there are dependencies, one migration should contain the related changes, so that if another developer applies the migration, the application still works.
When a developer creates a migration, it should be committed and synced immediately (shared with the other devs, in case you are not using git).
When you work with small units of change, merging and resolving conflicts becomes much easier.
I have been struggling with the same problem and trying out different solutions. What we have come up with so far is to have all the developers exclude the migrations from the check-in process and then have one designated developer create the "release migration" that includes the changes from all the others working on the project.
This is a question for those of you developing on a team of devs where all of you have separate databases. You're versioning your database using source control and other tools which will automatically bring dev databases up to date to the latest version of the database (schema, data, SP's, functions, etc.).
OK Great! But wait! What if you are developing on version 4.0 of your software, but now you need to switch branches to the 3.2 branch to fix a bug? The schema could be (almost assuredly is) very different by now...
I suppose if you went through the extra effort to write rollback scripts along with your change scripts, this could work. But that seems like a lot of work - is it really worth it?
Much easier would be to create a new 3.2-branch database and work with that while working on the 3.2-branch code. It doesn't seem reasonable to me to require that each developer has exactly one database to work with.
I'm going out on a limb and assuming that you are versioning the database as a binary? If all your database assets were in the form of constructive code (e.g. SQL scripts and/or text data dumps), the solution would be simple, as suggested by Mark: store these assets as part of the development branch. To work on version 3.2, switch the branch, re-run the create scripts and presto, a 3.2 database. Merging would be just as easy as with regular code (or just as painful, depending on your version control system of choice).
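To make that concrete, here is a rough sketch of rebuilding the development database from the scripts on the current branch, assuming PostgreSQL, a schema/ directory of ordered .sql files and a database named myproject_dev (all of which are my assumptions, not part of the question):

```python
# Hypothetical rebuild after e.g. `git checkout 3.2`: drop the dev database
# and recreate it from the SQL scripts committed on that branch.
import subprocess
from pathlib import Path

DB = "myproject_dev"                            # assumed dev database name
SCRIPTS = sorted(Path("schema").glob("*.sql"))  # assumed layout: schema/*.sql

subprocess.run(["dropdb", "--if-exists", DB], check=True)
subprocess.run(["createdb", DB], check=True)
for script in SCRIPTS:
    # ON_ERROR_STOP makes psql abort on the first error instead of ploughing on.
    subprocess.run(
        ["psql", "-v", "ON_ERROR_STOP=1", "-d", DB, "-f", str(script)],
        check=True,
    )
```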
Here are some suggestions to work in this mode:
If creating the database instances from text is too slow, make a cache on a shared disk volume, keyed by the contents of all the schema / data files (or the MD5 sum thereof).
Write a pre-commit hook to ensure that the schema and data dumps in the developer's instance are the same as the ones under version control. This prevents people from making changes to their dev database with an interactive tool and then forgetting to commit them; a rough sketch of such a check (together with the cache key above) follows these suggestions.
You mention change scripts; treat them as a liability. While they may be required by your deployment scenario (e.g. for customers who want to upgrade in-place), they duplicate information from the version history of the database, and per Murphy's law duplication means desynchronization sooner or later. Try to auto-generate the change scripts from the versioned database assets using "diff"; or if this cannot be achieved, dedicate some serious unit tests to database upgrades.
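As an illustration of the cache key and the pre-commit check above, here is a hedged sketch; it assumes PostgreSQL, a schema/ directory of versioned dumps and a dev database called myproject_dev, and a real hook would want to normalize the dump (comments, whitespace) before comparing:

```python
# Hypothetical helpers for two of the suggestions above.
import hashlib
import subprocess
import sys
from pathlib import Path

SCHEMA_DIR = Path("schema")     # assumed: versioned schema/data dumps
DEV_DB = "myproject_dev"        # assumed development database name

def cache_key() -> str:
    """MD5 over all versioned schema/data files, usable as a cache key."""
    digest = hashlib.md5()
    for path in sorted(SCHEMA_DIR.rglob("*.sql")):
        digest.update(path.read_bytes())
    return digest.hexdigest()

def precommit_check() -> int:
    """Refuse the commit if the dev database schema drifted from the dump."""
    dumped = subprocess.run(
        ["pg_dump", "--schema-only", "--no-owner", DEV_DB],
        check=True, capture_output=True, text=True,
    ).stdout
    committed = (SCHEMA_DIR / "schema.sql").read_text()
    if dumped != committed:
        print("Dev database and schema/schema.sql differ; dump and commit first.")
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(precommit_check())
```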
I have a database project that goes through iterations (only one so far) and I need to deploy a testing version to a live server. I'm not sure how to go about this.
I can make all the changes in a copy and then remake those changes in the live version. That doesn't make sense.
Is there a way to change the target server name so the project points at an existing server? What's the best practice for this scenario?
With a Visual Studio Database Project, you should be able to have as many database connections defined as you like. When you go to run your scripts, you can pick a menu option called "Run On..." and then pick which server connection to run those scripts on.
Just make sure the database name is the same for both instances, or make sure that you do not specify USE (database) at the top of all your scripts, if the database names are different from target to target.
In the first place, you should already have scripts written for the changes you made. They should be in source control. No change should ever be made to database structure without a script and versioning.
Since you don't appear to have what you should have in order to deploy, you need a tool to check the differences between the databases. Redgate's SQL Compare is the one to buy.
Be careful of simply using the tool without thinking; there may be changes in dev that you are not yet ready to promote to prod. Read through the scripts before running them.
You may also need SQL Data Compare to run against any lookup tables you have, to see whether new values have been added in dev that need to go to prod. Again, these inserts should have been scripted and kept in source control, and then deploying is simple.
Maybe I'm misunderstanding the question, but I don't see how you could just swap the databases. If you make a development version of a database and update the schema, you must surely run some tests and update the data. You can't just make that the production database now, because it's full of test data.
What you need to do is run a tool that compares the old schema to the new schema and then apply these changes to the production database. There are tools out there on the market to do this. Failing that, you could dump the old and new schemas, run them through an ordinary file compare to get the differences, and then build an update script out of that.
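In the absence of a dedicated tool, a crude version of that dump-and-diff step might look like this; it assumes PostgreSQL and made-up database names purely for illustration, and the diff is raw material for writing the update script, not something to run as-is:

```python
# Hypothetical schema diff between the old (production) and new (development)
# databases, as a starting point for hand-writing the ALTER script.
import difflib
import subprocess

def schema_of(db: str) -> list[str]:
    out = subprocess.run(
        ["pg_dump", "--schema-only", "--no-owner", db],
        check=True, capture_output=True, text=True,
    ).stdout
    return out.splitlines(keepends=True)

old = schema_of("myapp_prod")   # assumed database names
new = schema_of("myapp_dev")
print("".join(difflib.unified_diff(old, new, "prod", "dev")))
```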
On my present project we use what I think is a terrible practice: we keep a hand-maintained script of schema updates for each version, and every time someone makes a change they're supposed to update this script. Every now and then someone makes a mistake and we have to scramble to figure out what went wrong. Like we just had a problem deploying to our user acceptance test because someone updated the CREATE statement for a new table to include a foreign key to another new table ... not realizing that the table being referenced wasn't created until further down in the script. It worked fine in test because there the tables happened to be created in an order that made it work.
My conclusion is that you're much better off just making changes to the schema on the fly and then, when you're done, running an automated compare to generate the ALTER statements.
By the way, on a project I worked on a few years ago, for a desktop application where each customer had their own copy of the database, we put in what I thought was a very nice feature: Every time the program started up, it compared the schema of the database to what it thought it ought to be, and if they didn't match, it automatically updated it. So when they installed a new version, it just automatically updated the database the first time they ran it.
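A hedged sketch of that kind of startup check, assuming a single schema_version table and numbered upgrade scripts shipped with the application (the names and the use of SQLite are illustrative; the project described used its own scheme):

```python
# Hypothetical self-updating schema: on startup, compare the version stored in
# the database with the scripts shipped with the app and apply any missing ones.
import sqlite3  # stands in for whatever database the desktop app ships with
from pathlib import Path

UPGRADES = Path("upgrades")  # assumed layout: upgrades/0001.sql, 0002.sql, ...

def upgrade_if_needed(conn: sqlite3.Connection) -> None:
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
    row = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()
    current = row[0] or 0
    for script in sorted(UPGRADES.glob("*.sql")):
        number = int(script.stem)
        if number > current:
            conn.executescript(script.read_text())
            conn.execute("INSERT INTO schema_version VALUES (?)", (number,))
            conn.commit()

if __name__ == "__main__":
    upgrade_if_needed(sqlite3.connect("app.db"))
```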