CloudKit Production Records Disappeared

An entire table of CloudKit records just disappeared from my public production database. The other table of records was intact. I didn't do anything to delete these records. I looked at the logs and I couldn't see anything that looked out of the ordinary. I have no idea how this could have possibly happened. Is CloudKit so unstable that this can just happen randomly? Can anyone give me some insight on this - how can I go about tracking down the source of this issue? Should I assume this is a CloudKit issue? Is there a way to tell what actually happened?

Per one of my dev friends on Twitter, it's a known issue regarding indexes (not data), and Apple's working on it. No ETA I'm aware of, as of 11PM CST.
My app has 8 tables and 3 of them are 'missing' record data.
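If you want to confirm it's the indexes and not the data, one thing you can try is fetching one of the affected records directly by record ID. A direct fetch doesn't go through the queryable indexes, so if it comes back, the records are still there. A rough sketch (the container identifier and record name below are placeholders for your own):

```swift
import CloudKit

// Directly fetch a known record from the public production database.
// A fetch by record ID bypasses query indexes, so success here means the
// data still exists and only the indexes are affected.
let container = CKContainer(identifier: "iCloud.com.example.MyApp") // placeholder
let publicDB = container.publicCloudDatabase

let recordID = CKRecord.ID(recordName: "a-record-name-you-know") // placeholder
publicDB.fetch(withRecordID: recordID) { record, error in
    if let record = record {
        print("Record still exists:", record.recordType, record.recordID.recordName)
    } else {
        print("Fetch failed:", error?.localizedDescription ?? "unknown error")
    }
}
```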

Related

How to use sharing with Core Data/CloudKit syncing

I have an app that works well synchronizing local Core Data records with a private database. I would like to make the CloudKit database a shared database and then selectively share records with users. Is this even possible?
In other words, I'd like to keep the Core Data/CloudKit sync working, but limit the records that get synced to the user's share(s).
Does anyone have examples or a link to a discussion of this?
Apple solved this last year at WWDC21
Here is a link to the video
https://developer.apple.com/videos/play/wwdc2021/10015/
and some useful example code:
https://github.com/delawaremathguy/CoreDataCloudKitShare
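If it helps, the core of the new API is NSPersistentCloudKitContainer's share method, which creates a CKShare for a set of managed objects. A minimal sketch of the idea (the `item` object and container are placeholders, and you'd normally hand the resulting share to a UICloudSharingController afterwards):

```swift
import CoreData
import CloudKit

// Sketch: create a CKShare for an existing Core Data object (iOS 15+).
// `persistentContainer` and `item` stand in for your own stack and objects.
func shareItem(_ item: NSManagedObject,
               in persistentContainer: NSPersistentCloudKitContainer) {
    persistentContainer.share([item], to: nil) { objectIDs, share, container, error in
        if let share = share {
            // Give the share a title before presenting the sharing UI.
            share[CKShare.SystemFieldKey.title] = "Shared item" as CKRecordValue
            print("Created share:", share.recordID)
        } else {
            print("Sharing failed:", error?.localizedDescription ?? "unknown error")
        }
    }
}
```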

G Suite Email Migration Does Not Complete, Stuck on 99%

I'm currently experiencing something rather weird: while migrating emails from a GoDaddy email server to a new G Suite setup for a number of users, I was able to successfully migrate a few of the accounts, as confirmed by Google's 'Complete' tick beside them. I was able to watch those migrations as they went on.
However, for one of the accounts, the number of emails read just keeps increasing, it still hasn't displayed 'Complete', and it remains stuck on '99%'.
See the screenshots I took just now: in the first, the status says 'Successfully migrated 3230 emails' while stuck on 99%.
Then I hit refresh, checked the status of that same account, and it now says '...3250 emails', still stuck on 99%.
This isn't how it's supposed to behave; at least, that isn't the behaviour I saw with the previous four accounts in that list. Ideally it should say 'Migrating X out of fixed_amount emails'. In this case, that fixed_amount was
about 2,000 emails. It has since passed that figure, but instead of showing 'Complete', it shows 'Successfully migrated new_amount', where new_amount keeps increasing.
This has been ongoing for almost 24 hours now. Honestly, I don't know if this is a bug or not. I really just need some helpful info to know whether I should be concerned, or whether someone else has run into this. Anyone?
Stumbled onto Google's documentation: https://support.google.com/a/answer/7032598?hl=en
To quote the 'Why does my migration look like it's stuck at 99%?' section:
You'll see 99% when all email is migrated. After everything is migrated, the data migration service applies any labels to the migrated email, which can take time. When the labels are applied, you should see that the migration is complete (100%).
You might also see this issue if the estimated number of emails to migrate exceeds the actual number of messages. The migration will report 99% until the migration completes. This process might take some time.
You shouldn't be concerned. I was migrating around 29,000 emails from a personal Gmail account to a Google Workspace Gmail account, and the migration took 4 days (migrating only one user), of which the last 1.5 days it was "stuck" at 99%. There's no need to restart the migration; eventually it does finish. I also got several error codes (e.g. 17009 - 'Generating an access token with the supplied credentials was unsuccessful...'), but none of them turned out to matter and I didn't act on them because, like in your case, I could see the number of migrated emails increasing.

Possible big mistake: what exactly does "db.repairDatabase()" do? (MongoDB)

I have a MongoDB database with several million users.
I wanted to free up space, so I created a bot to remove users who have been inactive for more than 6 months.
I have been watching the disk usage for several minutes and I can see it vary, but it won't release any significant space, not even 1 MB. That's weird.
I've read that "remove" does not actually free space on disk; it simply marks records as deleted so the space can be reused or overwritten. Is that true?
That seemed to make a lot of sense to me. So I looked for something that forces the space to really be freed up...
I ran repairDatabase() and I think I've made a mistake.
Everything is now blocked!
I took a chance and restarted the server.
The MongoDB service is there, but its status stays at "Starting" (not Running).
I'm reading on other sites that repairDatabase() requires twice as much free space as the original size of the database, and the server doesn't have that.
I don't know what it is doing, or whether this could take hours or days...
Is the database lost? I think I will stop all services and delete the database.
repairDatabase is similar to fsck. That is, it attempts to clean the database of any corrupt documents which may be preventing MongoDB from starting up. How it works in detail differs depending on your storage engine, but repairDatabase could potentially remove documents from the database.
The details of what the command does are outlined quite clearly (with all the warnings) in the MongoDB documentation page: https://docs.mongodb.com/manual/reference/command/repairDatabase/
I would suggest that next time it's better to read the official documentation first rather than what people say in forums. Second-hand information like that could be outdated, or just plain wrong.
Having said that, you should leave the process running until completion, and perform any troubleshooting if the database cannot be started. It may require 2x the disk space of your data, but it's also possible that the command just needs time to finish.

Cloudant / CouchDB "pull" replication for 600+ documents to iPhone

I'm using Cloudant and I'm struggling to pull/replicate 600 documents from the server to my iPhone. First, it's pretty slow because it has to go one document at a time, and second, Cloudant was giving me "timeouts" after the 100th or so REST request. (I have a ticket with Cloudant for this one, as it's unacceptable!)
I was wondering if anyone has found a way / hack to "bulk" replicate when pulling. I was thinking, perhaps it's possible to "zip up" all of the changes, send them in one file, and fast-forward the iPhone database to the last-change seq.
Any help is great -- thanks!
Can you not hit _all_docs?include_docs=true to get everything in one shot? http://wiki.apache.org/couchdb/HTTP_Document_API#all_docs
I don't know CouchCocoa, but it looks like the API supports this: http://couchbaselabs.github.com/CouchCocoa/docs/interfaceCouchDatabase.html#a49d0904f438587b988860891e8049885
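If CouchCocoa doesn't expose that directly, you can also just hit the endpoint yourself and load the result into your local store in one request. A rough sketch (the account and database names are placeholders, and a real Cloudant instance would normally need credentials as well):

```swift
import Foundation

// Sketch: pull every document in one request with _all_docs?include_docs=true.
// "myaccount" and "mydatabase" are placeholders for a real Cloudant instance.
let url = URL(string: "https://myaccount.cloudant.com/mydatabase/_all_docs?include_docs=true")!

let task = URLSession.shared.dataTask(with: url) { data, response, error in
    guard let data = data else {
        print("Request failed:", error?.localizedDescription ?? "unknown error")
        return
    }
    // Each entry in "rows" carries the full document under its "doc" key.
    if let json = try? JSONSerialization.jsonObject(with: data) as? [String: Any],
       let rows = json["rows"] as? [[String: Any]] {
        print("Fetched \(rows.count) documents")
        // Persist the docs locally, then use _changes?since=<seq> for later
        // incremental pulls instead of re-fetching everything.
    }
}
task.resume()
```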
Actually, why not make a view? Make a view that gives you your list and make sure your ID is in it. With the ID, you can then go to the document and get the rest of the information you need in order to update it.
There really is no reason you would ever need to hit every document individually. They have views and search for that. Keep in mind you are using a cloud-based technology. This stuff is not sitting in your basement; you can't just hit it a million times per device in a few seconds and expect no one to notice and/or get upset (an exaggeration, yes I know).
What I don't understand is why you are trying to replicate it to an iPhone. Are you running Apache and CouchDB in your app? Why not just read the JSON data and put it into a local database, or just write it to a file if it updates that much and keep overwriting it? There are so many options that are a whole lot less messy.

Managing Core Data iCloud Transaction Logs

I am using iCloud with Core Data, based on the SQLite "Library-style" application design as specified by Apple. While the basic functionality works very well, I am concerned about the transaction logs and how they are managed.
While the database for my app is not large, it is very active and the Core Data stack is saved many times while the app is in use. I have noticed that a new transaction log is created for every Core Data save. The end result is that I have a TON of transaction logs and they take up much more space than the actual database.
1) Do the transaction logs ever get automatically pruned / coalesced, or will they continue to grow indefinitely, eventually numbering in the thousands and taking up many megabytes? It seems that the only way to manually purge the transaction logs and recreate a .baseline archive would be to disable and then re-enable iCloud (removing the ubiquity container and starting fresh). But this is obviously not a good solution.
2) My current architecture saves the Core Data stack often, even for minor changes. In general, this makes sense as my database is small and saving often ensures that the database file is always up-to-date. However, given the above issues with transaction logs, I am thinking that I should perhaps minimize saves to the database, maybe doing so on a timed basis and/or on app state transitions (rough sketch after these points).
3) Even if I minimize the number of transaction logs by reducing how often I save the database, there seems to be an issue here, as the logs will continue to grow in number over time. Eventually the benefit of the "transaction log" design will become a burden in terms of the amount of iCloud storage used and the initial iCloud sync as a new device is added.
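For reference, this is roughly what I mean in point 2 by saving on a timed basis; a quick, untested sketch where the context and the 30-second interval are just placeholders:

```swift
import CoreData

// Sketch: coalesce frequent edits into occasional saves so each burst of
// changes produces one transaction log instead of many.
final class CoalescedSaver {
    private let context: NSManagedObjectContext
    private var timer: Timer?

    init(context: NSManagedObjectContext) {
        self.context = context
    }

    // Call this whenever data changes instead of saving immediately.
    func setNeedsSave() {
        timer?.invalidate()
        timer = Timer.scheduledTimer(withTimeInterval: 30, repeats: false) { [weak self] _ in
            self?.saveNow()
        }
    }

    // Also call this on app state transitions (e.g. entering the background).
    func saveNow() {
        timer?.invalidate()
        timer = nil
        guard context.hasChanges else { return }
        do {
            try context.save()
        } catch {
            print("Core Data save failed:", error)
        }
    }
}
```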
As Apple has provided very sparse information on iCloud and almost nothing in the form of "best practices", I would appreciate any insight from the community.
I filed a radar on this issue and received the following reply. They noted that it should be working correctly in iOS 5.1, though I have not yet verified this myself.
A clarification for any who might misunderstand the following: the transaction logs will be cleaned up by the Core Data internals. This is NOT something that should be performed by the application itself.
Engineering has provided the following feedback regarding this issue:
The transaction logs are intended to be deleted after all the active peers have had a chance to read them and they exceed a threshold of space consumed. There was a previous issue that prevented the devices from correctly doing so.