I'm working on a major update of an iOS application. Let's say that we have two branches: develop contains what's currently on the App Store, and feature/new_version contains the major update.
feature/new_version has a lot of model changes, so there's a new model version there that adds/removes entities, properties, etc. On the other hand, we had a couple of minor improvements and bug fixes in develop that caused the creation of new model versions as well (these were updates submitted to the App Store too).
Now I'm stuck with two branches with very different data models. The question is: if I add the "missing" properties to the feature/new_version model, will Core Data be intelligent enough to do an automatic lightweight migration when I submit the major update to the App Store? Or should I take the data model used in develop, create a new model version in feature/new_version based on that one, and re-add/remove all the changes made since I first created the branch?
Whether automatic lightweight migration works depends on the nature of the changes from the old model to the new one. In your case, that means the differences between the currently released model and the one in your feature/new_version branch.
If the changes are just adding new attributes, no problem; this is the scenario that automatic lightweight migration was designed for. If they're more complex, you're more likely to need some alternate migration scheme. You didn't detail the changes, but since you said that the new version "adds/removes entities", automatic migration doesn't sound very likely. Adding in the "missing" properties won't help if there are structural changes to the model. Core Data doesn't mind simple migrations, but it won't infer a refactoring of the model structure.
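If you want to check this ahead of time, you can ask Core Data directly whether it can infer a mapping between the two model versions. A minimal sketch, assuming both compiled versions are loadable from the app bundle (the model and version names here are hypothetical; substitute your own):

```swift
import CoreData

// Hypothetical model/version names. Compiled versions live as .mom files
// inside the .momd directory in the app bundle.
let oldURL = Bundle.main.url(forResource: "MyModel",
                             withExtension: "mom",
                             subdirectory: "MyModel.momd")!
let newURL = Bundle.main.url(forResource: "MyModel 2",
                             withExtension: "mom",
                             subdirectory: "MyModel.momd")!

let oldModel = NSManagedObjectModel(contentsOf: oldURL)!
let newModel = NSManagedObjectModel(contentsOf: newURL)!

do {
    // Throws if no lightweight mapping can be inferred between the two models.
    _ = try NSMappingModel.inferredMappingModel(forSourceModel: oldModel,
                                                destinationModel: newModel)
    print("Lightweight migration should work")
} catch {
    print("Core Data can't infer a mapping: \(error)")
}
```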
How you create the merged model doesn't really matter as long as it contains everything you need. If adding the new properties is all it takes, there's no reason to start over. What matters is that the resulting model is correct, not the steps you took to get it there.
The easiest way to tell whether automatic lightweight migration will work is often to just try it on a debug build and see what happens. Install the currently released version on a device, create some data, and then use Xcode to install the new version. Make sure that NSMigratePersistentStoresAutomaticallyOption and NSInferMappingModelAutomaticallyOption are both YES when adding the persistent store. If it works, great. If not, Core Data provides alternatives for when the model needs more than trivial changes.
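For illustration, a minimal sketch of adding the store with both options enabled. The model and store URL here are placeholders for whatever your own stack uses; note that if you use NSPersistentContainer, both options are already enabled by default:

```swift
import CoreData

// Placeholders: merge the model from the app bundle and point at the app's store.
let model = NSManagedObjectModel.mergedModel(from: nil)!
let storeURL = NSPersistentContainer.defaultDirectoryURL()
    .appendingPathComponent("MyApp.sqlite")

let coordinator = NSPersistentStoreCoordinator(managedObjectModel: model)
let options: [AnyHashable: Any] = [
    NSMigratePersistentStoresAutomaticallyOption: true,
    NSInferMappingModelAutomaticallyOption: true
]

do {
    _ = try coordinator.addPersistentStore(ofType: NSSQLiteStoreType,
                                           configurationName: nil,
                                           at: storeURL,
                                           options: options)
    print("Store opened; migration (if any) succeeded")
} catch {
    // If this fails, the model changes need a custom mapping model
    // or a manual migration.
    print("Migration failed: \(error)")
}
```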
Related
I am using Moodle version 3.2. I made some changes to the Moodle database tables. For example, I added a Schoolyear column to the mdl_course table for my own requirements. When Moodle migrates to the next version, will those changes be affected or not?
It is generally a bad idea to mess around with core Moodle database tables. It can cause problems during upgrades (and will not be included in backups, unless you change the core code as well), so it is usually better to store extra data in new tables.
That being said, there are occasions where it is really not practical to do anything else, and, usually, it does not cause too many actual problems. The harder part is merging the core code changes that work with the changed database tables.
We are a team of 5 people working with MicroStrategy. We share every role, but we have no workflow.
Everybody may build or change attributes and change the schema. This often leads to broken reports. Furthermore, there is no "good" documentation. We tried to establish documentation with SharePoint, but there we also had no workflow.
Originally, we had an old project where, for every report, all the attributes were constructed from scratch. So we did not reuse any existing schema objects.
Hence, we started a new project. We realized that, due to lack of understanding and lack of workflow, we made (and still make) a lot of mistakes. We feel that we are slowly understanding things better (parent-child relationships, for example), but the workflow is still horrible.
We have a development project and a live project, but with the way we are working now, we have a lot of problems. In particular, the missing version control system is killing us. We perform changes and forget what we did. Hence, we have to restore from backups, destroying a day's worth of useful work.
So what are best practices to:
* deploy new attributes, facts and reports
* ensure that old reports work after constructing new attributes and facts
* improve documentation
* handle attributes defined on fact tables and parent-child relationships
Any help is appreciated.
MicroStrategy development in a team environment, deploying from development to live, can be very challenging. As you rightly point out, the lack of version control, and unknown interdependencies between objects can cause untold problems. There's no one right answer to this question, but I would suggest the following:
Use all the tools provided by MicroStrategy. When you're deploying from one project to another, don't just drag and drop in Object Manager; create a package. When you deploy that package, make sure you choose to create an undo package, so you can roll back changes if you encounter any problems.
On that note, try to catch these problems in advance. Running Integrity Manager before and after a deployment, even if it's just to generate SQL for the reports, will show you whether you've broken anything. Which brings me to the next point:
Create a third environment/project. Call this test/release control, whatever you prefer. Here you can test packages created in Object Manager to ensure that they have the desired effect and don't break anything. In effect, this is a dry run for your deployment to live. This environment should be regularly refreshed from live (via project duplication) to make sure it doesn't get into an unexpected state (as the result of a broken Object Manager package import, for example).
Over and above that, I can only offer organisational advice. It's not uncommon for one person to take responsibility for schema objects (i.e. facts, attributes, transformations) so that developers don't undo each other's changes. If you have a large project, these objects could be split into functional areas, and individuals assigned.
Documentation is always tricky, but I like to put as much as possible into the object descriptions. This has the advantage of being visible in the Web interface (via tooltips) and included in the automated project documentation, should you choose to generate that. There is obviously the change log functionality for each object, but in my experience developers soon stop filling in those logs, as saving happens too frequently. Still, if you can get people to populate them, you'll have a head start on understanding the changes in your project.
To summarise:
* Use Object Manager packages to deploy changes
* Test changes with Integrity Manager, to catch any issues as early as possible
* Use a release control project/environment, so you're not catching issues in your production environment
* Assign responsibility for schema objects to a specific person or persons where possible
We are using Code First Migrations to keep our database and model in sync. At the moment we have the version number as the name for the migration, which clearly doesn't work out.
The problem is that multiple migrations with the same name were created by different developers, independently of each other, for their local databases. This led to some weird behavior: the IMigrationMetadata.Id was different because of the timestamp, but the classes were partial classes with the same name.
What is the right way to name these migrations? The examples are always ridiculously oversimplified: e.g., adding a property Readers results in a migration named AddReaders.
Or should the migrations be broken down into these little changes, instead of accumulating all the changes into one big migration? What if there are dependencies?
Yes, I think the best way is to break down changes to small units, with descriptive names. As with git, where you should commit often, with migrations you should migrate often. Not necessarily property by property, but containing a logical unit of work.
For example, if you need to add two tables for some feature, add those two tables in one migration. Avoid making big migrations where you work for days changing models before creating a migration. Timing is essential to avoiding conflicts.
If there are dependencies, one migration should contain related changes, so if another developer applies the migration, the application still works.
When a developer makes a migration, it should be immediately committed and synced (shared with other devs, in case you are not using git).
When you work with small units of change, merging and resolving conflicts becomes much easier.
I have been struggling with the same problem and trying out different solutions. What we have come up with so far is to have all the developers exclude the migrations from the check-in process and then have one designated developer do the "release migration" that includes the changes from all the others working on the project.
During the development process we keep changing the Core Data model file. Suppose we have many model versions, each built on the previous one. At the time of submitting the app to the App Store, can I delete all the versions except the latest one and the one on which it is based?
Basically, how do I manage all these version files?
During pre-1.0 development it's much more typical to not bother with model versioning. Edit the model as needed, don't create new versions, and delete your existing data whenever it doesn't match the new model version. This would be a bad idea after release, but while in development it's usually fine.
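If you go this route, here's a small sketch of wiping the store cleanly during development. The coordinator and store URL are placeholders for whatever your own stack uses:

```swift
import CoreData

// Development-only reset: discard the existing store instead of migrating it.
// `coordinator` and `storeURL` are placeholders for your own stack's objects.
func wipeStore(coordinator: NSPersistentStoreCoordinator, storeURL: URL) throws {
    // Safely removes the store along with SQLite's -wal/-shm sidecar files,
    // which deleting the file by hand can miss.
    try coordinator.destroyPersistentStore(at: storeURL,
                                           ofType: NSSQLiteStoreType,
                                           options: nil)
}
```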
If you do need to maintain different versions during development for some reason, there are really no special steps to take to get rid of the old ones. Make sure that the latest model version is the current version (which will almost certainly be true anyway) and then delete the old model files. Voila, you're done. You don't need the old model files unless people will be using the app who already have data that uses those models, and when you first release the app, that will not be the case.
This is a question for those of you developing on a team of devs where all of you have separate databases. You're versioning your database using source control and other tools that automatically bring dev databases up to date to the latest version of the database (schema, data, stored procedures, functions, etc.).
OK Great! But wait! What if you are developing on version 4.0 of your software, but now you need to switch branches to the 3.2 branch to fix a bug? The schema could be (almost assuredly is) very different by now...
I suppose if you went through the extra effort to write rollback scripts along with your change scripts, this could work. But that seems like a lot of work - is it really worth it?
Much easier would be to create a new 3.2-branch database and work with that while working on the 3.2-branch code. It doesn't seem reasonable to me to require that each developer has exactly one database to work with.
I'm going out on a limb and assuming that you are versioning the database as a binary? If all your database assets were in the form of constructive code (e.g. SQL scripts and/or text data dumps), the solution would be simple, as suggested by Mark: store these assets as part of the development branch. To work on version 3.2, switch the branch, re-run the create scripts and presto, 3.2 database. Merging would be just as easy as with regular code (or just as painful, depending on your version control system of choice).
Here are some suggestions to work in this mode:
If creating the database instances from text is too slow, make a cache on a shared disk volume, keyed by the contents of all the schema / data files (or the MD5 sum thereof).
Write a pre-commit hook to ensure that the schema and data dumps in the developer's instance are the same as the ones under version control. This prevents people from making changes to their dev database with an interactive tool, and then forgetting to commit them.
You mention change scripts; treat them as a liability. While they may be required by your deployment scenario (e.g. for customers who want to upgrade in-place), they duplicate information from the version history of the database, and per Murphy's law duplication means desynchronization sooner or later. Try to auto-generate the change scripts from the versioned database assets using "diff"; or if this cannot be achieved, dedicate some serious unit tests to database upgrades.