We are using code-first migrations to keep our database and model in sync. At the moment we use the version number as the migration name, which clearly doesn't work out.
The problem is that multiple migrations with the same name were created by different developers, independently of each other, for their local databases. This led to some weird behavior: the IMigrationMetadata.Id values differed because of the timestamps, but the classes are partial classes with the same name.
What is the recommended way to name these migrations? The examples are always ridiculously oversimplified: e.g. adding a property Readers results in a migration named AddReaders.
Or should the migrations be broken down into such small changes, instead of accumulating all the changes into one big migration? What if there are dependencies?
Yes, I think the best way is to break down changes to small units, with descriptive names. As with git, where you should commit often, with migrations you should migrate often. Not necessarily property by property, but containing a logical unit of work.
Like if you need to add two tables for some feature, add those two tables in one migration. Avoid making big migrations where you work for days changing models before creating a migration. Timing is essential to avoiding conflicts.
If there are dependencies, one migration should contain related changes, so if another developer applies the migration, the application still works.
When a developer makes a migration, it should be immediately committed and synced (shared with other devs, in case you are not using git).
When you work with small units of change, merging and resolving conflicts becomes much easier.
I have been struggling with the same problem and trying out different solutions. What we have come up with so far is to have all the developers exclude the migrations from the check-in process and then have one designated developer create the "release migration" that includes the changes from everyone else working on the project.
I am using Moodle version 3.2. I made some changes to the Moodle database tables. For example, I added a Schoolyear column to the mdl_course table for my own requirements. When Moodle migrates to the next version, will these changes be affected or not?
It is generally a bad idea to mess around with core Moodle database tables. It can cause problems during upgrades (and will not be included in backups, unless you change the core code as well), so it is usually better to store extra data in new tables.
That being said, there are occasions where it is really not practical to do anything else, and, usually, it does not cause too many actual problems. The harder part is merging the core code changes that work with the changed database tables.
As a project grows, migrations (both makemigrations and migrate) take longer and longer, delaying every deployment quite a lot.
Django lets you squash them so you don't have a long list of migration files, but the migrations still take the same amount of time.
Then I tried the following:
Remove all the migration files in my app
Clean the django_migrations table in the DB
Run makemigrations (to create the 0001_initial for my app)
Run migrate --fake (to populate the django_migrations table)
Now the new migrations are really fast at the expense of losing the migration history.
So my question is: considering that this could be treated like a v1.0, and that it is a standalone project that no other project depends on, what are the risks of doing this?
I have the feeling that it is something that should not be done, since I couldn't find any specific Django command for it. South had a reset, but Django migrations now only have squash...
As long as all your installations were up to date when you deleted the migrations, there is no harm in deleting old migrations. The single drawback is that you can't easily roll the database back to a historic state once you have deleted them, but except in the case of old backup files, you should not have any use for that.
As I pointed out, just make sure everyone using your project is up to date first (or give them pointers, such as the commit just before the deletion, so they can run the migrations before upgrading to HEAD if you are using version control).
If you are developing a third-party app, it will make installation more complicated (dealing with the --fake argument depending on whether you are upgrading or installing...), but this is not your case.
We are a team of 5 people working with MicroStrategy. We share every role, but we have no workflow.
Everybody may build or change attributes and change the schema. This often leads to reports not working. Furthermore, there is no "good" documentation. We tried to establish documentation with SharePoint, but there we also had no workflow.
Originally, we had an old project where, for every report, all the attributes were constructed from scratch. So we did not reuse any existing schema objects.
Hence, we started a new project. We realized that, due to lack of understanding and lack of workflow, we made and still make a lot of mistakes. We feel that we are slowly understanding things better (e.g. parent-child relationships), but the workflow is still horrible.
We have a development project and a live project, but with the way we are working now we have a lot of problems. In particular, the missing version control system is killing us. We make changes and forget what we did. Hence, we have to restore from backups, destroying useful work done on a given day.
So what are best practices to:
* deploy new attributes, facts and reports
* ensure that old reports work after constructing new attributes and facts
* improve documentation
* handle attributes defined on fact tables and parent-child relationships
Any help is appreciated
MicroStrategy development in a team environment, deploying from development to live, can be very challenging. As you rightly point out, the lack of version control, and unknown interdependencies between objects can cause untold problems. There's no one right answer to this question, but I would suggest the following:
Use all the tools provided by MicroStrategy. When you're deploying from one project to another, don't just drag and drop in Object Manager, create a package. When you deploy that package, make sure you choose to create an undo package, so you can rollback changes if you encounter any problems.
On that note, try to catch these problems in advance. Running Integrity Manager before and after a deployment, even if it's just to generate SQL for the reports, will point out whether you've broken anything. Which brings up the next point:
Create a third environment/project. Call this test/release control, whatever you prefer. Here you can test packages created in Object Manager, to ensure that they have the desired effect, and don't break anything. In effect, this is a dry run for your deployment to live. This environment should be regularly refreshed from live (via project duplication), to make sure it doesn't get in an unexpected state (as the result of a broken Object Manager package import for example).
Over and above that, I can only offer organisational advice. It's not uncommon for one person to take responsibility for schema objects (i.e. facts, attributes, transformations) so that developers don't undo each other's changes. If you have a large project, these objects could be split into functional areas, and individuals assigned.
Documentation is always tricky, but I like to put as much as possible into the object descriptions. This has the advantage of being visible in the Web interface (via tooltips), and included in the automated project documentation, should you choose to generate that. There is obviously the change log functionality for each object, but in my experience those logs quickly stop being filled in by developers, as saving happens too frequently. Still, if you can get people to populate them, you'll have a head start on understanding the changes in your project.
To summarise:
Use Object Manager packages to deploy changes
Test changes with Integrity Manager, to catch any issues as early as possible
Use a release control project/environment, so you're not catching issues in your production environment
Assign responsibility for schema objects to a specific person or persons where possible.
Introduction
Drools Guvnor has its own versioning system, which in production allows the users of an application to modify the rules and decision tables in order to adapt to changes in their business. Yet the same assets continue to live in the development version control system, where new features for the app are developed.
This post is looking for insight/ideas/experience on rule development and deployment when working with Drools rules and Guvnor.
Below are some key concepts I've been puzzling about.
Deployment to Guvnor
First of all, what is the best way to deploy the drl files and decision tables to the production environment? Simply put them in a zip package and then unzip it into the WebDAV folder? From what I have seen navigating around Drools, I haven't found a way to import more than one file at a time. The fact model can be added as a JAR archive, though. Guvnor seems to have a REST API of some sort, but using it would require custom deployment scripts.
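If it does come down to custom scripting, I imagine it would look something like the sketch below: a plain HTTP PUT of each .drl file into Guvnor's WebDAV folder. This is only a rough idea, and the base URL, package name and file names are assumptions about a default installation rather than anything I can vouch for.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Base64;

/**
 * Minimal sketch: upload a single .drl file to Guvnor's WebDAV folder via HTTP PUT.
 * The base URL and package name are assumptions; adjust them to your installation.
 */
public class GuvnorWebDavUpload {

    // Assumed WebDAV layout of a default Guvnor install -- verify against your version.
    private static final String WEBDAV_BASE =
            "http://localhost:8080/drools-guvnor/org.drools.guvnor.Guvnor/webdav/packages/";

    public static void upload(String packageName, Path drlFile, String user, String password)
            throws IOException {
        URL url = new URL(WEBDAV_BASE + packageName + "/" + drlFile.getFileName());
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("PUT");
        conn.setDoOutput(true);

        // Basic authentication, since the WebDAV endpoint is normally protected.
        String token = Base64.getEncoder()
                .encodeToString((user + ":" + password).getBytes(StandardCharsets.UTF_8));
        conn.setRequestProperty("Authorization", "Basic " + token);
        conn.setRequestProperty("Content-Type", "text/plain");

        // Stream the local rule file as the PUT body.
        try (OutputStream out = conn.getOutputStream()) {
            Files.copy(drlFile, out);
        }

        int status = conn.getResponseCode();
        if (status / 100 != 2) {
            throw new IOException("Upload failed with HTTP status " + status);
        }
    }

    public static void main(String[] args) throws IOException {
        // Hypothetical package and file names, for illustration only.
        upload("com.example.discounts", Paths.get("rules/discount-rules.drl"), "admin", "admin");
    }
}
```

Looping a call like that over a directory of rule files would at least give a repeatable multi-file import, even though it remains a custom deployment script rather than an official mechanism.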
Change management
Secondly, once the application is in production, the users will likely want to change the values in the decision tables, for example to set the discount percentages higher for premium clients. This is all fine and dandy, until the time comes to start development of version 2.0 of the app.
What we have at this point is:
drl files and decision tables in the version control system
drl files and decision tables in production environment with user modifications, versioned by Guvnor
Now we are at the point of getting the rules and decision tables back from Guvnor. Again, is the WebDAV folder the best way to do this, or what other options are there?
Merge tools today can even handle Excel file diffs, but that sounds like merge hell to me on large-scale projects.
Keeping the fact model backwards compatible
Yet another topic is fact model integrity. For the assumed version 2.0, developers always want to refactor and turn the whole fact model upside down. Still, it must remain backwards compatible with previous versions, as there may be user-modified rules that depend on it. Any tips on this? Just keep the fact model simple and clean? Plan ahead / anticipate what the users might want to change?
Summary
I'm certain I'm not the first, and surely not the last, to consider options on deployment and change management with Drools and Guvnor. So, what I'd like to hear is comment, discussion, tips etc. on some best (and also the worst in order to avoid them) practices to handle these situations.
Thanks.
The best way to do things depends very much on your specific application and the environment you work in. However the following are pointers from my own experience. Note that I'll add just a few points for now. I'll probably come back to this answer when things come to me.
After the initial go-live, keep releases incremental and small
For your first release you have the opportunity to try things out. Take advantage of this opportunity, and do as much refactoring as possible, because...
Your application has gone live and your business users are maintaining rules in decision tables. You have made great gains in what folks in the industry like to call "business agility". Unfortunately, this tends to be at the expense of application development agility. All of your guided editor rules and decision table rules are tied to your fact model, so any changes to existing properties of that fact model will break your decision tables. Unlike in most IDEs these days, you can't just right-click on a fact's getX() method, rename it, and expect all code which relies on that property to be updated.
Decision tables and guided rules are hard to refactor. If a fact has been renamed, then in many (all?) versions of Guvnor, that rule/table will no longer open. You need to get at the underlying XML file via WebDAV and do some text searching and replacing. That can be very difficult, considering that to do it you need to download the file from production to a test environment, make your changes, test them, and deploy them to that test environment. When you're happy with your changes you need to push them back up to the 'production' Guvnor. Unfortunately, while you were doing that, the users have updated a number of decision tables and you need to either start again or re-apply the past couple of days' changes. In an ideal world, your business users will agree to make no rule changes for a period of time. But the only way to make that feasible is to keep the changes small, so that you can make them in a couple of hours or a day, depending on how generous they feel.
To mitigate this:
Keep facts used within Guvnor separate from your application domain classes. Your application developers can then refactor the internal application model to their hearts' content, but such changes will not affect the business model. Maintain mappings between the two and ensure there are tests covering those mappings (a small sketch of this follows this list).
Avoid changes such as renaming facts or their properties. Make sure that facts you create and their properties have names which suit the domain, and agree these with the business. On the other hand, adding a new property is relatively painless. It is well worth prompting the users for a view of their future plans.
Keep facts as simple as possible. Don't go more complex than name-value pairs unless you really need to. For one thing, your business users will find it much easier to maintain rules. Your priority with anything managed within Guvnor should always be about making it easy for business users to maintain.
Keep external dependencies out of your facts. A developer may think it's a good idea to annotate a fact as a JPA @Entity, for easy persistence. Unfortunately, that adds dependencies which need to be added to Guvnor's classpath, requiring a restart.
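To make the "separate facts" and "keep facts simple" points above a bit more concrete, here is a rough sketch of what I mean. All class and property names are made up for illustration: a flat, annotation-free fact class that lives in its own module, plus an explicit mapper from the internal application model, which you cover with tests.

```java
/**
 * Sketch of a Guvnor-facing fact model kept separate from the internal
 * application model. All names are hypothetical; the point is the shape.
 */

// Rule-facing fact: flat, simple properties, no JPA or other framework
// annotations, so it adds no extra dependencies to Guvnor's classpath.
public class CustomerFact {
    private String customerName;
    private String membershipLevel;   // e.g. "STANDARD", "PREMIUM"
    private double orderTotal;

    public String getCustomerName() { return customerName; }
    public void setCustomerName(String v) { this.customerName = v; }
    public String getMembershipLevel() { return membershipLevel; }
    public void setMembershipLevel(String v) { this.membershipLevel = v; }
    public double getOrderTotal() { return orderTotal; }
    public void setOrderTotal(double v) { this.orderTotal = v; }
}

// Stand-in for the internal application model, which developers may refactor freely.
class InternalCustomer {
    String fullName;
    String membership;
    double openOrderTotal;
}

// Explicit mapping between the two models; cover this with tests so internal
// refactoring cannot silently break the rule-facing facts.
class CustomerFactMapper {
    public CustomerFact toFact(InternalCustomer c) {
        CustomerFact fact = new CustomerFact();
        fact.setCustomerName(c.fullName);
        fact.setMembershipLevel(c.membership);
        fact.setOrderTotal(c.openOrderTotal);
        return fact;
    }
}
```

Renaming getMembershipLevel() here would still break any guided rule or decision table column bound to membershipLevel, which is exactly why I suggest settling those names with the business up front and preferring to add new properties rather than rename existing ones.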
Tips & tricks
My personal technique for making cross-environment changes is to connect Eclipse to two Guvnor WebDAV directories, and check out the rules into local directories, where each local directory maps to an environment. I then use the Eclipse diff tooling.
When building a Guvnor-managed knowledge base, I create a separate Maven project containing only the facts, with no dependencies on anything else. It makes it a lot easier to keep them clean this way. Also, when I really do need to add a dependency (e.g. I use Joda-Time where possible), the build can have a step to generate a shaded JAR containing all the dependencies. That way you only ever deploy one JAR to Guvnor, which is guaranteed to contain the correct versions of your dependencies.
I'm sure there will be more that I think of. I'll try to remember to come back to this...
I'm working on a major update of an iOS application. Let's say that we have two branches: develop, which contains what's currently on the App Store, and feature/new_version, the one with the major update.
feature/new_version has a lot of model changes, so there's a new model version there that adds/removes entities, properties, etc. On the other hand, we had a couple of minor improvements and bugfixes in develop, that caused the creation of new model versions as well (these were updates submitted to the App Store too).
Now I'm stuck with two branches with very different data models. The question is: if I add the "missing" properties to the feature/new_version model, will Core Data be intelligent enough to do an automatic lightweight migration when I submit the major update to the App Store? Or should I download the data model used in develop, create a new model version in feature/new_version based on that one, and re-add / remove all the changes since I first created the branch?
Whether automatic lightweight migration works depends on the nature of the changes from the old model to the new one. In your case, that means the differences between the currently released version and the one in your feature/new_version branch.
If the changes are just adding new attributes, no problem, this is the scenario that automatic lightweight migration was designed for. If they're more complex, you're more likely to need some alternate migration scheme. You didn't detail the changes, but since you said that the new version "adds/removes entities" automatic migration doesn't sound very likely. Adding in the "missing" properties won't help if there are structural changes to the model. Core Data doesn't mind simple migrations but won't infer a refactoring of the model structure.
How you create the merged model doesn't really matter as long as it contains everything you need. If adding the new properties is all it takes, there's no reason to start over. What matters is that the resulting model is correct, not the steps you took to get it there.
The easiest way to tell whether automatic lightweight migration will work is often to just try it on a debug build and see what happens. Install the currently released version on a device, create some data, and then use Xcode to install the new version. Make sure that NSMigratePersistentStoresAutomaticallyOption and NSInferMappingModelAutomaticallyOption are both YES when adding the persistent store. If it works, great. If not, Core Data provides alternatives for when the model needs more than trivial changes.