Let's say you have a CloudKit record type called Steps, and each Step record has a value and a date. If you have 500 records, is there any way to have CloudKit pass back the sum of all the values (e.g., 5,490)?
I haven't found a way besides doing a CKQueryOperation for all the steps and then adding them up manually. My concern is that if I end up with a lot of Step records, say 5,000, CKQueryOperation might start to limit my results.
No, this isn't possible with CloudKit. You'd have to use something that allows server functions, like Parse.
But this is a problem that is easily solved with a local device cache. If you sync your Step records to a local database (Realm, Core Data, SQLite, etc.), you can simply query all the steps and add them up. This is the approach I would take.
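For example, if you mirror the Step records into a local SQLite table (the table and column names here are assumptions, not anything CloudKit defines), the sum becomes a one-line query:

    -- hypothetical local mirror of the CloudKit Steps record type
    CREATE TABLE steps (
        record_id TEXT PRIMARY KEY,   -- the CKRecord's ID
        value     INTEGER NOT NULL,   -- step count for that entry
        date      TEXT NOT NULL       -- ISO-8601 date string
    );

    -- the aggregate CloudKit cannot compute server-side
    SELECT SUM(value) AS total_steps FROM steps;

Once the records are synced locally, the sum itself never touches CloudKit's result limits.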
I want to periodically export data from db2 and load it in another database for analysis.
In order to do this, I would need to know which rows have been inserted/updated since the last time I've exported things from a given table.
A simple solution would probably be to add a timestamp to every table and use that as a reference, but I don't have such a timestamp at the moment, and I would like to avoid adding one if possible.
Is there any other solution for finding the rows which have been added/updated after a given time (or something else that would solve my issue)?
There is an easy option for a timestamp in Db2 (for LUW) called ROW CHANGE TIMESTAMP.
This column is managed by Db2 and can be defined as HIDDEN, so existing SELECT * FROM queries will not retrieve the new column, which would otherwise cause extra costs.
Check out the Db2 CREATE TABLE documentation
This functionality was originally added for optimistic locking, but it can be used for situations like this as well.
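A minimal sketch, assuming an existing table called orders (the table and column names are mine, not from the question):

    -- add a Db2-maintained change timestamp that SELECT * will not return
    ALTER TABLE orders
      ADD COLUMN row_changed TIMESTAMP NOT NULL
        GENERATED ALWAYS FOR EACH ROW ON UPDATE AS ROW CHANGE TIMESTAMP
        IMPLICITLY HIDDEN;

    -- export only the rows inserted or updated since the last run;
    -- hidden columns can still be referenced explicitly
    SELECT * FROM orders
    WHERE row_changed > ?;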
There is a similar concept for Db2 for z/OS, but you will have to check that out yourself, as I have not tried it.
Of course, there are other ways to solve this, such as replication.
That is not possible if you do not have a timestamp column. With a timestamp, you can know which rows are new or modified.
You can also use the Time Travel feature (system-period temporal tables) to get the new values, but that also implies timestamp columns.
Another option is to put the tables in append mode and then fetch the rows after a given one. However, this approach is not reliable after a REORG, and it affects performance and space utilisation.
One possible option is to use SQL replication, but that needs extra tables for staging.
Finally, another option is to read the logs with the db2ReadLog API, but that requires custom development. Just applying the archived logs to the new database is also possible; however, the database will remain in roll-forward pending state.
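For illustration, the Time Travel route from the second option above looks roughly like this (all names are made up, and note that it does add the timestamp columns the question hoped to avoid):

    -- base table with a SYSTEM_TIME period maintained by Db2
    CREATE TABLE orders (
        id        INT NOT NULL PRIMARY KEY,
        amount    DECIMAL(10,2),
        sys_start TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW BEGIN,
        sys_end   TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW END,
        trans_id  TIMESTAMP(12) GENERATED ALWAYS AS TRANSACTION START ID,
        PERIOD SYSTEM_TIME (sys_start, sys_end)
    );
    CREATE TABLE orders_history LIKE orders;
    ALTER TABLE orders ADD VERSIONING USE HISTORY TABLE orders_history;

    -- all row versions whose validity overlaps the window since the last export
    SELECT * FROM orders
    FOR SYSTEM_TIME FROM '2024-01-01-00.00.00' TO CURRENT TIMESTAMP;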
I have a database that syncs completely every 2 hours. All data is dropped and populated from the main data source.
I have some queries coming from a client app that have the same response for the current two-hour dataset. So if 100 clients run their apps, I will have to run the same query 100 times, once for each of them, even though the results don't differ.
How do I avoid running this real query against my database every time, but just keep its response somewhere and return it instead?
I think I can run this query after each sync and save to its own table then return from it.
What are other options, probably provided by Postgres itself?
You should use something like Redis to store the result of your query in memory. It comes with many clients. You can invalidate the cached result when it's time to.
There are other in-memory caches, like memcached, that are easy to install and use.
Note that these are not specific to Postgres.
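If you do want something provided by Postgres itself, the "save it to its own table" idea from the question maps directly onto a materialized view; a minimal sketch, with made-up names:

    -- snapshot the expensive query once
    CREATE MATERIALIZED VIEW client_report AS
        SELECT product_id, SUM(quantity) AS total_quantity
        FROM sales
        GROUP BY product_id;

    -- rebuild the snapshot after each 2-hour sync
    REFRESH MATERIALIZED VIEW client_report;

    -- every client reads the precomputed result
    SELECT * FROM client_report;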
So I'm planning an app that will involve having a master DB on a server, let's say 3,000 CDs, with the columns Title, Artist, and Release Date.
1) When a user adds a CD to their collection, it will be added to the app's local SQLite DB. But let's say I spelled a CD title wrong, so I make an update to it. When the user goes to sync, how should I handle the updated row? Should I have a column, IsUpdated, holding a numeric value that increases by one every time I update that row? That way, when the app sees that IsUpdated on the server is larger than the local IsUpdated for that particular item, it will know to replace the contents. Does that make sense? Is it even practical? What other options would there be?
2) How would I go about handling the addition of brand-new columns, like adding a Barcode or Price? Do I just push an app update that adds the new columns locally, then do the same on the server, and let the rest take its course? This also ties back to the syncing issue in question 1.
First, you have to give more detail than that. Is the entire 3,000-CD master list also replicated down to the remote DB?
Sounds like it.
OK, so if that's the case, this isn't a DB design issue so much as a replication issue.
It's a bad idea to update every row in a table, especially with an update that makes the rows longer. You'll be better off just dropping the table and recreating it. <--- that's how it works in a server RDBMS; no idea if that concept changes on a client DB. And now we get into iPhone replication questions more than simple DB replication: Would it be better to just republish the app? Is the user data segregated from the server data? Can DDL be run on the local/remote tables after publishing?
Instead of searching the entire list for changes as you outline in #1, I would keep a dated delta table. The local app would store a last_updated_Datetime, and any records in the delta table after that datetime would need to be brought down. Once downloaded, the local system can determine how to apply them. Again, this is inappropriate for mass changes.
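A rough sketch of that delta table and the pull query (all names are hypothetical, built on the Title/Artist/Release Date columns from the question):

    -- server-side log of changes to the master CD table
    CREATE TABLE cd_delta (
        delta_id    INTEGER PRIMARY KEY,
        cd_id       INTEGER NOT NULL,   -- the affected master row
        change_type TEXT NOT NULL,      -- 'insert' / 'update' / 'delete'
        changed_at  DATETIME NOT NULL
    );

    -- the app sends its stored last_updated_Datetime and pulls anything newer
    SELECT d.change_type, c.Title, c.Artist, c.ReleaseDate
    FROM cd_delta d
    LEFT JOIN cds c ON c.cd_id = d.cd_id
    WHERE d.changed_at > :last_updated_datetime
    ORDER BY d.changed_at;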
I'm working with SQL 2000 and I need to determine which of the databases on the server are actually being used.
Is there a SQL script I can use to tell me the last time a database was updated? Read? Etc.?
I Googled it, but came up empty.
Edit: the following targets the issue of finding, post facto, the last access date. With regards to figuring out who is using which databases, this can be definitively monitored with the right filters in SQL Profiler. Beware, however, that Profiler traces can get quite big (and hence slow/hard to analyze) when the filters are not adequate.
Changes to the database schema, i.e. the addition of tables, columns, triggers, and other such objects, typically leave "dated" tracks in the system tables/views (I can provide more detail about that if need be).
However, unless the data itself includes timestamps of sorts, there are typically very few sure-fire ways of knowing when data was changed, unless the recovery model involves keeping all such changes in the log. In that case you need some tools to "decompile" the log data...
With regards to detecting "read" activity... A tough one. There may be some computer-forensic like tricks, but again, no easy solution I'm afraid (beyond the ability to see in server activity the very last query for all still active connections; obviously a very transient thing ;-) )
I typically run the profiler if I suspect the database is actually used. If there is no activity, then simply set it to read-only or offline.
You can use a transaction log reader to check when data in a database was last modified.
With SQL 2000, I do not know of a way to know when the data was read.
What you can do is put a trigger on logins to the database, track when a login is successful, and record the associated variables to find out who / what application is using the DB.
If your database is fully logged, create a new transaction log backup and check its size. The log backup will have a small, fixed length when no changes were made to the database since the previous transaction log backup, and it will be larger when there were changes.
This is not a very exact method, but it can be easily checked, and might work for you.
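For example (the database name and path are placeholders):

    -- take a log backup, then compare its size to the previous one
    BACKUP LOG MyDatabase TO DISK = N'C:\Backups\MyDatabase_log.trn'

    -- msdb records every backup; two similar sizes in a row suggest no changes
    SELECT TOP 2 backup_finish_date, backup_size
    FROM msdb.dbo.backupset
    WHERE database_name = 'MyDatabase' AND type = 'L'
    ORDER BY backup_finish_date DESC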
I know it is Christmas Eve, so it is a perfect time to find hardcore programmers online :).
I have a SQLite DB file that contains over 10K records. I generate the DB from a MySQL database, and I have built the SQLite DB within my iPhone application the usual way.
The records contain information about products, their prices, shops, and the like. This info is of course not static; I use an automatic scheme to populate and keep updating my MySQL DB.
Now, how can I update the iPhone app's SQLite database with the new information available in the MySQL DB? The DB structure is still the same, but the records contain new information.
Thanks.
Ahed
info:
libsqlite3.0,
iPhone OS 3.1,
mysql 2005,
Mac OS X 10.6.2
There is a question you need to answer first: how do you determine the set of changed records in your MySQL database?
Or, more specifically, given that the MySQL database is in state A, some transactions occur and now it is in state B, how do you know what changed between A and B?
Bottom line: you need a schema in MySQL that enables this. Once you have answered that question, you can answer the "how do I sync?" question.
I have a similar application.
I am using Push Notification to let my users know there is new or updated data available.
Each time a record on the server is updated, I store a sequential record-number alongside the record.
Each UDID that's registered has a "last updated" number associated with it that contains the highest record-number it has ever downloaded.
When any given device comes to get its updates, all database records greater than the UDID's last-updated record-number stored on the server are sent to the device. If everything goes OK, the last-updated record-number for the UDID is set to the last record number sent.
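The server-side fetch then reduces to something like this (table and column names are my own, not from the answer above):

    -- everything past this device's high-water mark
    SELECT *
    FROM records
    WHERE record_number > :device_last_updated
    ORDER BY record_number;

    -- on a successful download, advance the device's marker
    UPDATE devices
    SET last_updated = :highest_record_number_sent
    WHERE udid = :udid;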
The user has the option to fetch all records and refresh his database if he feels any need to sync his device to the entire database.
Seems to be working well.
-t
You can find many other similar questions by searching for "iphone synchronization":
https://stackoverflow.com/search?q=iphone+synchronization
I'm going to assume that the data is going only from mysql to sqlite, and not the reverse direction.
There are a few ways I could imagine doing this. The first is to just redownload the entire database during every update. Another way, which I'm describing below, would be to create a "log" table to record the modifications to your master table, and then download just the new log entries when doing the update.
I would create a new "log" table in your SQL database to record changes to the table needing synchronization. The log could contain a "revision" column to track the order in which changes were made; a "type" column to specify whether it was an insert, update, or delete; the row id of the affected row; and finally the entire set of columns from your master table.
You could automate the creation of log entries by using stored procedures as wrappers to modify your master table.
With only 10k records, I wouldn't expect this log table to grow to be that huge.
You would then keep track in SQLite of the latest revision downloaded from MySQL. To update the table, you would download all log entries after that revision and apply them to your SQLite table.
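A rough MySQL sketch of that log table and the catch-up query (the column names are illustrative):

    -- change-tracking metadata plus copies of the master table's columns
    CREATE TABLE product_log (
        revision    INT AUTO_INCREMENT PRIMARY KEY,
        change_type ENUM('insert', 'update', 'delete') NOT NULL,
        row_id      INT NOT NULL,      -- id of the affected master row
        name        VARCHAR(255),      -- copies of the master columns...
        price       DECIMAL(10,2)
    );

    -- the app remembers the last revision it applied and fetches only newer entries
    SELECT * FROM product_log
    WHERE revision > ?
    ORDER BY revision;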