Best strategy for DB update when updating an application - iPhone

I have a function that initializes my database, creates tables, etc.
Now I am preparing version two of the application, and at the end of this function I added a check for column existence; if the column does not exist, I alter the table.
My question is:
To avoid running this check all the time, is it a good idea to put a flag in UserDefaults indicating that the current app is version two, and skip this code if it is?
This seems logical to me, but other opinions are always welcome ;)

You could have a version number table/column in your database which stores the schema version number. Every time you change the schema, increment the number in your application file and then run the relevant migration code to get from one schema version to another whilst updating the schema version in the database.
This answer has a handy way of tracking db schema version numbers without creating a separate table in SQLite
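That handy way is SQLite's built-in user_version pragma, which can hold the schema version without a separate table. A minimal sketch in Python for illustration (the same pragma is available through the SQLite C API on iOS; the table and column names here are hypothetical):

```python
import sqlite3

def upgrade_if_needed(path):
    conn = sqlite3.connect(path)
    # user_version is 0 on a fresh database unless you set it yourself
    version = conn.execute("PRAGMA user_version").fetchone()[0]
    if version < 2:
        # hypothetical column added in app version two
        conn.execute("ALTER TABLE notes ADD COLUMN created_at TEXT")
        conn.execute("PRAGMA user_version = 2")
        conn.commit()
    conn.close()
```

Because the version lives inside the database file itself, it stays correct even if the app is deleted and reinstalled, which a UserDefaults flag would not.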

Yes, you can use NSUserDefaults to check this. I don't think there is anything wrong with that.

Related

How to implement schema migration for PostgreSQL database

I need to implement a schema migration mechanism for PostgreSQL.
Just to remove ambiguity: by schema migration I mean that I need to upgrade my database structures to the latest version regardless of their current state on a particular server instance.
For example, in version one I created some tables, then in version two I renamed some columns, and in version three I removed one table and created another one. I have multiple servers, and on some of them I have version one, on some version three, etc.
My idea:
Generate a hash of the output produced by
pg_dump --schema-only
every time before I change my database schema. This will be a reliable way to identify the database version to which a patch should apply in the future.
Maintain a list of patches with the associated hashes to which they apply.
When I need to upgrade my database, I will run an application that searches for the hash that corresponds to the current database structure (by calculating the hash of the local database and comparing it with the set of hashes I have) and applies the associated patch.
Repeat until no matching hash is found (sketched in the snippet below).
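A rough sketch of the idea in Python, assuming pg_dump and psql are on the PATH and that PATCHES maps a schema hash to the SQL file that upgrades that exact schema (all file names and hashes here are hypothetical placeholders):

```python
import hashlib
import subprocess

# hypothetical mapping: hash of the current schema -> patch file that upgrades it
PATCHES = {
    "<sha256 of version-one schema>": "patches/v1_to_v2.sql",
    "<sha256 of version-two schema>": "patches/v2_to_v3.sql",
}

def schema_hash(dbname):
    dump = subprocess.run(
        ["pg_dump", "--schema-only", dbname],
        check=True, capture_output=True, text=True,
    ).stdout
    return hashlib.sha256(dump.encode()).hexdigest()

def upgrade(dbname):
    while True:
        patch = PATCHES.get(schema_hash(dbname))
        if patch is None:  # no matching hash: either up to date or an unknown state
            break
        subprocess.run(["psql", "-d", dbname, "-f", patch], check=True)
```

One thing to test carefully with this approach: the text emitted by pg_dump can differ between pg_dump versions, which would change the hash even though the schema itself did not change.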
Could you please point any weak sides of this approach?
Have you ever heard of https://pgmodeler.io? At the company where I work we decided to go with it, since it can perform a schema diff even between a local and a remote database. We are very satisfied with it.
Otherwise, if you prefer a free solution, you could develop a migration tool that applies migrations you store in a single repo. This tool could rely on a migration table kept in a separate schema, so that your database(s) always know which migrations have been applied and which have not.
The beauty of this approach is that migrations can cover both schema changes and data changes.
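A minimal sketch of such a tracking table in Python with psycopg2, assuming a hypothetical migrations/ directory of numbered SQL files (names like 001_create_users.sql are illustrative):

```python
import os
import psycopg2

def apply_pending(dsn, directory="migrations"):
    conn = psycopg2.connect(dsn)
    cur = conn.cursor()
    # bookkeeping table lives in its own schema, separate from application tables
    cur.execute("""
        CREATE SCHEMA IF NOT EXISTS meta;
        CREATE TABLE IF NOT EXISTS meta.schema_migrations (
            filename   text PRIMARY KEY,
            applied_at timestamptz NOT NULL DEFAULT now()
        );
    """)
    cur.execute("SELECT filename FROM meta.schema_migrations")
    applied = {row[0] for row in cur.fetchall()}
    for name in sorted(os.listdir(directory)):  # lexical order doubles as version order
        if name.endswith(".sql") and name not in applied:
            with open(os.path.join(directory, name)) as f:
                cur.execute(f.read())
            cur.execute(
                "INSERT INTO meta.schema_migrations (filename) VALUES (%s)", (name,)
            )
    conn.commit()
    conn.close()
```

Because the migration files are plain SQL, the same mechanism carries schema changes and data fixes alike.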
I hope this can give you some ideas.

How do you implement schema changes in a NoSQL storage system

How do you manage a major schema change when you are using a NoSQL store like SimpleDB?
I know that I am still thinking in SQL terms, but after working with SimpleDB for a few weeks I need to make a change to a running database. I would like to change one of the object classes to have a unique id rather than a business name, and as it is referenced by another object, I will also need to update the reference value in those objects.
With a SQL database you would run a set of SQL statements as part of the client software deployment process. Obviously this will not work with something like SimpleDB, because:
There is no equivalent of a SQL UPDATE statement.
Due to the distributed nature of SimpleDB, there is no way of knowing when the changes you have made to the database have 'filtered' out to all the nodes running your client software.
Some solutions I have thought of are
Each domain has a version number. The client software knows which version of the domain it should use. Write some code that copies the data from one domain version to another, making any required changes as you go. You can then install new client software that then accesses the new domain version. This approach will not work unless you can 'freeze' all write access during the update process.
Each item has a version attribute that indicates the format used when it was stored. The client uses this attribute when loading the object into memory. Object can then be converted to the latest format when it is written back to SimpleDB. The problem with this is that the new software needs to be deployed to all servers before any writes in the new format occur, or clients running the old software will not know how to read the new format.
It all is rather complex and I am wondering if I am missing something?
Thanks
Richard
I use something similar to your second option, but without the version attribute.
First, try to keep your changes to things that are easy to make backward compatible - changing the primary key is the worst case scenario for this.
Removing a field is easy - just stop writing to that field once all servers are running a version that doesn't require it.
Adding a field requires that you never write that object using code that won't save that field. If you can't deploy the new version everywhere at once, use an intermediate version that supports saving the field before you deploy a version that requires it.
Changing a field is just a combination of these two operations.
With this approach changes are applied as needed - write using the new version, but allow reading of the old version with default or derived values for the new field.
You can use the same code to update all records at once, though this may not be appropriate on a large dataset.
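A sketch of that read/write pattern in Python, with a plain dict standing in for a SimpleDB item and hypothetical field names (display_name is the new field, business_name the old one):

```python
CURRENT_VERSION = 2

def load_item(raw):
    """Read an item stored in either the old or the new format."""
    item = dict(raw)
    # field added in version 2: derive or default it when reading old data
    if "display_name" not in item:
        item["display_name"] = item.get("business_name", "")
    return item

def to_new_format(item):
    """Always produce the newest format before persisting."""
    item = load_item(item)          # normalise first
    item["schema_version"] = CURRENT_VERSION
    # hand the dict to whatever SimpleDB client call you use to persist attributes
    return item
```

Records are upgraded lazily as they are touched, and the same conversion function can be run in a batch job if you ever want to migrate everything at once.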
Changing the primary key can be handled the same way, but could get really complex depending on which nosql system you are using. You are probably stuck with designing custom migration code in this case.
RavenDB, another NoSQL database, uses migrations to achieve this:
http://ayende.com/blog/66563/ravendb-migrations-rolling-updates
http://ayende.com/blog/66562/ravendb-migrations-when-to-execute
Normally these types of changes are handled by your application: it upgrades the schema by loading a version X object, converting it to version Y, and persisting it.

How to get the name of the table that was changed in SQLite?

Does anyone here know how to get the name of the table that was changed, updated, or deleted in SQLite? I found the functions changes() and totalChanges(), but they only return the number of database rows that were changed, inserted, or deleted by the most recently completed SQL statement.
In most RDBMSs you have some kind of journaling that captures all database transactions for data backup and recovery. In Oracle it's called a redo log; that is where you would go to check what has changed.
But I'm not familiar enough with SQLite to know if something similar is available. I did find a thread where a similar question was asked, and it was recommended to implement it yourself. Try reading through this link and see if it satisfies your requirements:
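One do-it-yourself option, for example, is an audit table populated by triggers on the tables you care about. A minimal sketch in Python, where the database file and the notes table are hypothetical (the table must already exist for the triggers to be created):

```python
import sqlite3

conn = sqlite3.connect("app.db")  # hypothetical database
conn.executescript("""
    CREATE TABLE IF NOT EXISTS audit_log (
        table_name TEXT NOT NULL,
        action     TEXT NOT NULL,
        changed_at TEXT NOT NULL DEFAULT (datetime('now'))
    );
    CREATE TRIGGER IF NOT EXISTS notes_update AFTER UPDATE ON notes
    BEGIN
        INSERT INTO audit_log (table_name, action) VALUES ('notes', 'UPDATE');
    END;
    CREATE TRIGGER IF NOT EXISTS notes_delete AFTER DELETE ON notes
    BEGIN
        INSERT INTO audit_log (table_name, action) VALUES ('notes', 'DELETE');
    END;
""")
conn.commit()
```

Querying audit_log then tells you which table was touched and when, at the cost of one trigger per table and operation.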
But aside from all of that, I would also recommend that your app use views; that way you protect the model from changes.

How can I selectively update some SQLite tables to preserve user data for an iPhone app update?

Can you selectively update certain SQLite tables when updating an iPhone app, in order to preserve user-stored data? How? Appreciate the help!
At the simplest level, you'll need to:
Store some version number information in the SQL database in the app's document directory.
When your app launches, you can compare this version data to the copy in your bundle.
If the version is different, you'll then need to activate an "updater" class, the responsibility of which is to:
3.1. Check for the existence of each table.
3.2. If a table exists, load any existing data into a suitable data structure (an NSDictionary most likely), drop and recreate the table in the "current" format, and re-insert the data, providing sensible defaults where no data exists.
As you can imagine, in the above scenario the updater class effectively needs to know how to create each table in turn, which isn't ideal - an alternative approach being to store a list of ALTER TABLE statements for each version and then apply them in turn until the database structure is up to date.
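A sketch of that alternative, keeping an ordered list of statements per schema version and applying only the ones the installed database has not seen yet. The version is tracked in a small table inside the same database; table and column names are hypothetical examples:

```python
import sqlite3

# statements needed to go from version N-1 to version N (hypothetical examples)
MIGRATIONS = {
    2: ["ALTER TABLE scores ADD COLUMN difficulty INTEGER DEFAULT 1"],
    3: ["CREATE TABLE IF NOT EXISTS achievements (id INTEGER PRIMARY KEY, name TEXT)"],
}

def upgrade(path):
    conn = sqlite3.connect(path)
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER NOT NULL)")
    row = conn.execute("SELECT version FROM schema_version").fetchone()
    if row is None:
        # assume a freshly installed database ships at version 1
        conn.execute("INSERT INTO schema_version (version) VALUES (1)")
    current = row[0] if row else 1
    for version in sorted(v for v in MIGRATIONS if v > current):
        for stmt in MIGRATIONS[version]:
            conn.execute(stmt)
        conn.execute("UPDATE schema_version SET version = ?", (version,))
        conn.commit()
    conn.close()
```

The upside over the "drop, recreate and re-insert" updater is that user data never leaves the table; each release just appends its statements to MIGRATIONS.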

Seamlessly updating a Postgres database - schemas, rename, how?

Actually a simple question, but I wasn't able to find any good conclusive answer.
Assuming a production database foo_prd, and a newer version of the same foo_new (on the same server) that is supposed to replace the old one. What is the cleanest way to seamlessly switch from _prd to _new?
RENAME-ing the databases would require disconnecting the current users via their pids. That would take down some requests, and new users might connect during the process. I was thinking of creating the tables of the new database in a different SCHEMA and then changing the search_path, e.g. from "$user",prd to "$user",new,prd.
What could possibly go wrong? Do you have any better suggestions? Am I taking the wrong approach altogether?
Do as you suggest: create the tables of the new database in a different schema and then change the search_path.
But also create a user with the same name as the new schema and test everything before changing the search_path by logging in as this user with each of your apps - the new schema will be first in that user's search_path by default because the name matches.
Finally, take care when you come to drop the old schema - I suggest renaming it first in case anything refers to its objects using a qualified reference (e.g. prd.table or prd.function). After a few days/weeks it can then be dropped with confidence.
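A sketch of the statements involved, run here through psycopg2; the schema, role, and connection names (foo_new, prd, app_user, foo_prd) are hypothetical stand-ins:

```python
import psycopg2

conn = psycopg2.connect("dbname=foo_prd")  # hypothetical connection string
conn.autocommit = True
cur = conn.cursor()

# New schema holds the new version of the tables alongside the old "prd" schema.
cur.execute("CREATE SCHEMA foo_new")

# Test role with the same name as the schema: "$user" resolves to foo_new first.
cur.execute("CREATE ROLE foo_new LOGIN PASSWORD 'changeme'")
cur.execute("GRANT USAGE ON SCHEMA foo_new TO foo_new")

# Once testing is done, switch the application role over
# (takes effect for new sessions of that role).
cur.execute('ALTER ROLE app_user SET search_path = "$user", foo_new, prd')

# Later, rename rather than drop, in case anything still uses qualified prd.* references.
cur.execute("ALTER SCHEMA prd RENAME TO prd_old")
```

The search_path change is per-role configuration, so existing connections keep their old path until they reconnect; plan the switch around your connection pool's recycling.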
I would version my schema, and change my app to point to the new schema when ready.