I know it is Christmas Eve, so it is a perfect time to find hardcore programmers online :).
I have an SQLite db file that contains over 10K records. I generate the db from a MySQL database, and I have built the SQLite db within my iPhone application in the usual way.
The records contain information about products and their prices, shops and the like. This info is of course not static; I use an automatic scheme to populate and keep updating my MySQL db.
Now, how can I update the iPhone app's SQLite database with the new information available in the MySQL db? The db structure is still the same, but the records contain new information.
Thanks.
Ahed
info:
libsqlite3.0,
iphone OS 3.1,
mysql 2005,
Mac OS X 10.6.2
There is a question you need to answer first: how do you determine the set of changed records in your MySQL database?
Or, more specifically, given that the MySQL database is in state A, some transactions occur and now it is in state B, how do you know what changed between A and B?
Bottom line: you need a schema in MySQL that enables this. Once you have answered that question, you can answer the "how do I sync?" problem.
I have a similar application.
I am using Push Notification to let my users know there is new or updated data available.
Each time a record on the server is updated, I store a sequential record-number alongside the record.
Each UDID that's registered has a "last updated" number associated with it that contains the highest record-number it has ever downloaded.
When any given device comes to get its updates, all database records with a record-number greater than the UDID's last-updated record-number stored on the server are sent to the device. If everything goes OK, the last-updated record-number for the UDID is set to the last record-number sent.
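A rough sketch of that server-side fetch (the "records" and "devices" tables and their columns are invented here, and Python's sqlite3 just stands in for whatever database and stack the server actually runs):

    import sqlite3

    # Sketch only: the "records" and "devices" tables and their columns are invented,
    # and Python's sqlite3 stands in for whatever database the server really uses.
    def updates_for_device(conn, udid):
        cur = conn.cursor()
        cur.execute("SELECT last_record_number FROM devices WHERE udid = ?", (udid,))
        last_seen = cur.fetchone()[0]

        cur.execute(
            "SELECT record_number, payload FROM records "
            "WHERE record_number > ? ORDER BY record_number",
            (last_seen,),
        )
        rows = cur.fetchall()

        if rows:
            # In practice, only advance the pointer after the device confirms receipt.
            cur.execute(
                "UPDATE devices SET last_record_number = ? WHERE udid = ?",
                (rows[-1][0], udid),
            )
            conn.commit()
        return rows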
The user has the option to fetch all records and refresh his database if he feels any need to sync his device to the entire database.
Seems to be working well.
-t
You can find many other similar questions by searching for "iphone synchronization":
https://stackoverflow.com/search?q=iphone+synchronization
I'm going to assume that the data is going only from mysql to sqlite, and not the reverse direction.
There are a few ways that I could imagine doing this. The first is to just redownload the entire database during every update. Another way, which I'm describing below, would be to create a "log" table to record the modifications to your master table, and then download just the new log entries when doing the update.
I would create a new "log" table in your SQL database to log changes to the table needing synchronization. The log could contain a "revision" column to track the order in which changes were made, a "type" column to specify whether it was an insert, update, or delete, the row-id of the affected row, and finally the entire set of columns from your master table.
You could automate the creation of log entries by using stored procedures as wrappers to modify your master table.
With only 10k records, I wouldn't expect this log table to grow to be that huge.
On the SQLite side, you would then keep track of the latest revision downloaded from MySQL. To update the table, you would download all log entries after that revision and then apply them to your SQLite table.
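A minimal sketch of the apply step on the device; the "products" and "sync_state" tables and the log-entry fields are invented, and Python's sqlite3 stands in for the libsqlite3 calls you would make in Objective-C:

    import sqlite3

    # Sketch only: "products", "sync_state" and the log-entry field names are invented,
    # and Python's sqlite3 stands in for the libsqlite3 calls made in Objective-C.
    def apply_log_entries(db_path, log_entries):
        """log_entries: rows downloaded from the MySQL log table, ordered by revision."""
        conn = sqlite3.connect(db_path)
        cur = conn.cursor()
        for entry in log_entries:
            if entry["type"] == "delete":
                cur.execute("DELETE FROM products WHERE id = ?", (entry["row_id"],))
            else:  # insert or update
                cur.execute(
                    "INSERT OR REPLACE INTO products (id, name, price, shop) "
                    "VALUES (?, ?, ?, ?)",
                    (entry["row_id"], entry["name"], entry["price"], entry["shop"]),
                )
        if log_entries:
            # Remember the highest revision applied so the next sync can resume from it.
            cur.execute(
                "INSERT OR REPLACE INTO sync_state (key, value) VALUES ('last_revision', ?)",
                (log_entries[-1]["revision"],),
            )
        conn.commit()
        conn.close()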
Related
So, I have a PostgreSQL DB. For some chosen tables in that DB I want to maintain a plain dump of the rows when modified. Note this dump is not a recovery or backup dump. It is just a file which will have the incremental rows. That is, whenever a row is inserted or updated, I want that appended to this file or to a file in a folder. The idea is to load that folder into something like Hive periodically so that I can run queries to check previous states of certain rows and columns. Now, these are very high-transaction tables and the dump does not need to be real time. It can be in batches, every hour. I want to avoid a trigger firing hundreds of times every minute. I am looking for something which is off the shelf - already available in PostgreSQL. I did some research but everything is related to PostgreSQL backup - which is not the exact use case.
I have read some links like https://clarkdave.net/2015/02/historical-records-with-postgresql-and-temporal-tables-and-sql-2011/ and Implementing history of PostgreSQL table etc. - but these are based on insert/update triggers and create the history table in PostgreSQL itself. I want to avoid both. I cannot have the history in PostgreSQL as it will be huge soon. And I do not want to keep writing to files through a trigger firing constantly.
MongoDB contains data ready for client-side apps. The raw data is stored in Google BigQuery (GBQ). Each day a lot of new data is added to GBQ, and once a day pretty much everything in MongoDB needs to be updated according to the most recent data in GBQ. All outdated (not updated) records must be deleted.
What is the right way to handle the MongoDB update with close to zero downtime?
Among the crazy solutions: maybe I should have two instances of MongoDB, one in production and another being updated. Once the second db is updated, I'll run a Google Kubernetes Engine deploy with changed configs, so all clients will be smoothly moved from the previous data to the updated data, without seeing partially updated data and without downtime. Though I have never heard of such a solution, so I'm not sure if this is the right one.
Another solution is to have two versions of each collection under a single instance of MongoDB. Once a collection is updated, the server switches to that collection.
The 2nd solution seems a good option. If you know the trigger for the update, you can have minimal downtime by creating a new collection (named by date or a unique serial, maybe) and updating your code accordingly.
I had some good experience doing this for a fashion website some time back, where we scraped data (using scrapinghub) and imported it into MongoDB (collections stored by date) and used it accordingly. Our scraping ran early in the morning (5-6 AM), and when our editors/curators came into the office, they would start using the current dated collection (via the web interface of course :) )
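A minimal sketch of that collection-per-import pattern with pymongo; the database, collection and field names are all made up:

    from datetime import date
    import pymongo

    # Sketch only: database, collection and field names are invented.
    client = pymongo.MongoClient("mongodb://localhost:27017")
    db = client["catalog"]

    def import_snapshot(rows):
        """Load today's export from BigQuery into a fresh, dated collection."""
        name = "products_" + date.today().strftime("%Y%m%d")
        db[name].insert_many(rows)
        # Flip the pointer that readers use; old collections can be dropped later.
        db["meta"].update_one(
            {"_id": "current_products"},
            {"$set": {"collection": name}},
            upsert=True,
        )
        return name

    def current_products():
        """Resolve the collection that readers should query right now."""
        meta = db["meta"].find_one({"_id": "current_products"})
        return db[meta["collection"]]

Dropping a previous dated collection once every reader has moved over is then a single db.drop_collection call.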
I've got a Postgres 9.1 database that contains weather information. The dataset consists of approximately 3.1 million rows.
It takes about 2 minutes to load the data from a CSV file, and a little less to create a multicolumn index.
Every 6 hours I need to completely refresh the dataset. My current thinking is that I would import the new dataset into a database with a different name, such as "weather_imported", and once the import and index creation are finished, I would drop the original database and rename the imported one.
In theory, clients would continue to query the database during this operation, though if that has ill effects, I could probably arrange to have the clients silently ignore a few errors.
Questions:
Will that strategy work?
If a client happened to be in the process of running a query at the time of the DB drop, my assumption is the database would not complete the drop until the query finished - true?
What if a query happened between the time the DB was dropped and the rename? I assume a "database not found" error.
Is there a better strategy?
Consider the following strategy as an alternative:
Include a "dataset version" field in the primary table.
Store the "current dataset version" in some central location, and write your selects to only search for rows which have the current dataset version.
To update the dataset:
Insert all the data with a new dataset version. (You could just use the start time of the update job as a version.)
Update the "current dataset version" atomically to the value you just inserted.
Delete all data with an older version than the version number you just inserted.
Presto -- no need to shuffle databases around.
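For what it's worth, here is a rough sketch of that flow with psycopg2; the "observations" and "dataset_meta" tables and their columns are invented:

    import time
    import psycopg2

    # Sketch only: the "observations" and "dataset_meta" tables are invented.
    def refresh_weather(rows):
        """rows: iterable of (station, observed_at, temp) tuples from the new CSV."""
        version = int(time.time())  # start time of the update job as the version
        conn = psycopg2.connect("dbname=weather")
        with conn, conn.cursor() as cur:
            # 1. Load the new data under the new dataset version.
            cur.executemany(
                "INSERT INTO observations (dataset_version, station, observed_at, temp) "
                "VALUES (%s, %s, %s, %s)",
                [(version,) + tuple(row) for row in rows],
            )
            # 2. Flip the pointer that all SELECTs filter on.
            cur.execute("UPDATE dataset_meta SET current_version = %s", (version,))
        with conn, conn.cursor() as cur:
            # 3. Remove everything older, now that no reader should see it.
            cur.execute("DELETE FROM observations WHERE dataset_version < %s", (version,))
        conn.close()

For 3.1 million rows you would probably load with COPY rather than row-by-row inserts, but the versioning logic stays the same.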
So I'm planning an app that will involve having a master db on a server, let's say 3,000 CDs, with the columns Title, Artist, and Release Date.
1) When a user adds a CD to their collection, it will add it to the app's local SQLite DB. But let's say I spelled a CD title wrong, so I make an update to it. When the user goes to sync, how should I go about handling an updated row? Should I have a column 'IsUpdated' that is just a numeric value that increases by one every time I update that row? That way, when the app sees that IsUpdated on the server is larger than the local IsUpdated for that particular item, it will know to replace the contents. Does that make sense? Is it even practical? What other option would there be?
2) How would I go about handling the addition of brand new columns, like adding a Barcode or Price? Do I just push an update for the app that adds the new columns locally, then do the same on the server, and let the rest run its course? That would also trickle down to number 1 with the syncing issue.
First you have to give more detail than that. Is the entire 3000 master list also replicated down to the remote db?
Sounds like it.
OK, so if that's the case, this isn't a DB design issue so much as a replication issue.
It's a bad idea to update every row in a table, especially with an update that makes the rows longer. You'll be better off just dropping the table and recreating it. <--- that's how it works in RDBMSs on servers; no idea if that concept changes on a client db. And now we get into iPhone-specific replication questions rather than simple db replication. Would it be better to just republish the app? Is the user data segregated from the server data? Can DDL be done on the local/remote tables after the app is published?
Instead of searching the entire list for changes as you outline in #1, I would keep a dated delta table. The local app would store a last_updated_Datetime; any records in the delta table after that datetime would need to be brought down. Once downloaded, the local system can determine how to apply them. Again, this is inappropriate for mass changes.
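A sketch of what the delta table and the catch-up query might look like; the table layout and names are invented, and sqlite3 here is only a stand-in for the real server database:

    import sqlite3

    # Sketch only: the delta table layout and names are invented.
    DELTA_DDL = """
    CREATE TABLE IF NOT EXISTS cd_delta (
        cd_id        INTEGER NOT NULL,
        title        TEXT,
        artist       TEXT,
        release_date TEXT,
        change_type  TEXT NOT NULL,   -- 'insert', 'update' or 'delete'
        changed_at   TEXT NOT NULL    -- when the master row was touched
    );
    """

    def deltas_since(server_conn, last_updated_datetime):
        """Everything the device has not seen yet, oldest first."""
        cur = server_conn.execute(
            "SELECT cd_id, title, artist, release_date, change_type, changed_at "
            "FROM cd_delta WHERE changed_at > ? ORDER BY changed_at",
            (last_updated_datetime,),
        )
        return cur.fetchall()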
I have an app using SQLite. On first start, I copy the .db file into NSDocumentDirectory (so that I can make updates to it). In later versions, I plan to add new data to this database. How can I make sure that with every application update (but not with every app start) the newest copy of this DB will be copied to NSDocumentDirectory?
Thanks
-Konstantin
I have a constant that I increment with new builds, say kDatabaseVersion.
At startup, I check for the following:
Does the DB exist in the Documents directory? If not, copy it. This probably means a first launch has occurred.
If the DB does exist, check the version from the constant against an NSUserDefaults entry with the same key. If the constant is greater, copy the database over. If not, do not. Update NSUserDefaults accordingly.
Of course, if the database contains data from your users also, you have to work out how to migrate that to a new data store. If you are using Core Data, you might even consider multiple persistent stores to separate user and default data.
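The whole flow is small enough to sketch. This is plain Python with a JSON file standing in for NSUserDefaults and shutil for the copy, purely to show the logic; on the device it would be Objective-C against NSUserDefaults, NSFileManager and the Documents directory:

    import json
    import os
    import shutil

    K_DATABASE_VERSION = 3  # bump with each build that ships a new bundled DB

    # Sketch only: prefs.json stands in for NSUserDefaults, shutil for the
    # NSFileManager copy; all file names are made up.
    def install_database(bundled_db_path, documents_dir):
        dest = os.path.join(documents_dir, "app.db")
        prefs_path = os.path.join(documents_dir, "prefs.json")
        prefs = json.load(open(prefs_path)) if os.path.exists(prefs_path) else {}

        if not os.path.exists(dest):
            # First launch: nothing in Documents yet, so copy the bundled DB.
            shutil.copy(bundled_db_path, dest)
        elif prefs.get("db_version", 0) < K_DATABASE_VERSION:
            # App update shipped a newer DB: overwrite the old copy.
            shutil.copy(bundled_db_path, dest)

        prefs["db_version"] = K_DATABASE_VERSION
        with open(prefs_path, "w") as f:
            json.dump(prefs, f)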
Keep a database version number in your App.
When the app starts, check if the database exists in document directory. If not, copy it to the doc. directory.
If the database already exists, compare the database version number from your app with the number stored in a database table. If the numbers are equal, everything is fine; otherwise you have to "upgrade" the existing database (modify the database schema or whatever). That way you can upgrade the database with every version of your app. Code a simple function "CheckForDatabaseUpdate" that contains all the necessary logic, and make an "UpgradeToDatabaseVersion" function that takes a version number as a parameter. This function will handle the upgrade of the database schema from one version to another.
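A bare-bones sketch of how CheckForDatabaseUpdate and UpgradeToDatabaseVersion could fit together, in Python/sqlite3 for brevity; the db_info table and the example migration are made up:

    import sqlite3

    APP_DB_VERSION = 2  # the schema version this build of the app expects

    # Sketch only: the db_info table and the example migration are invented.
    MIGRATIONS = {
        2: ["ALTER TABLE items ADD COLUMN notes TEXT"],
    }

    def check_for_database_update(conn):
        cur = conn.cursor()
        cur.execute("SELECT version FROM db_info")
        current = cur.fetchone()[0]
        # Walk forward one version at a time until the schema is current.
        for target in range(current + 1, APP_DB_VERSION + 1):
            upgrade_to_database_version(conn, target)

    def upgrade_to_database_version(conn, version):
        cur = conn.cursor()
        for statement in MIGRATIONS.get(version, []):
            cur.execute(statement)
        cur.execute("UPDATE db_info SET version = ?", (version,))
        conn.commit()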