For testing purposes, I created a specific node on my Firebase database. I copy a user over to that node and then can futz with it without worrying about corrupting data or ruining a user's info. It works really well for my purposes.
I've run into a problem, however: if a user has an extremely large set of data, the copy function won't work. It just stalls, with no errors. I read that Firebase has a 1 MB limit on writes, and I'm guessing that's the wall I'm running up against.
Here is my code:
func copyToTestingNode() {
    let start = Date()

    // 1. create copy of user and then modify the copy
    guard var copiedUser = user else { print("copied user error"); return }
    copiedUser.userID = MP.adminID
    copiedUser.householdInfo.subscriptionExpiryDate = 2500000000

    // 2. get a snapshot of the copied user's info
    ref.child(user.userID).observeSingleEvent(of: .value) { (userSnapshot) in
        print("Step 2 TRT:", Date().timeIntervalSince(start))

        // 3. remove any existing data at admin node, and then...
        self.ref.child(MP.adminID).removeValue { (error, dbRef) in
            print("Step 3 TRT:", Date().timeIntervalSince(start))

            // 4. ...copy the new user info to the admin node
            self.ref.child(MP.adminID).setValue(userSnapshot.value, withCompletionBlock: { (error, adminRef) in
                print("Step 4 TRT:", Date().timeIntervalSince(start))

                // 5. then send user alert and stop activity indicator
                self.activityIndicator.stopAnimating()
                self.showSimpleAlert(alertTitle: "Copy Complete", alertMessage: "Your copy of \(copiedUser.householdInfo.userName) is complete and can be found under the new node:\n\n\(copiedUser.householdInfo.userName) Family")
            })
        }
    }
}
Options:
Is there a simple way to check the size of the DataSnapshot to alert me that the dataset is too large to copy over?
Is there a simple way to split up the snapshot into smaller pieces and overcome the 1MB limit that way?
Should I use Cloud Functions instead of trying to trigger this on a device?
Is there a way to somehow "compress" the snapshot so that I can copy it more easily?
I'm open to suggestions.
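On the first option: a snapshot's value is plain Foundation JSON (dictionaries, arrays, scalars), so one rough way to gauge its size is to serialize the value and check the byte count before attempting the write. This is only a sketch with an assumed 1 MB threshold, not a Firebase API:

```swift
import Foundation

/// Rough estimate of how many bytes a snapshot's value would occupy as JSON.
/// `value` is whatever `DataSnapshot.value` returns (dictionary/array/scalars).
func approximateJSONSize(of value: Any) -> Int {
    guard JSONSerialization.isValidJSONObject(value),
          let data = try? JSONSerialization.data(withJSONObject: value) else {
        return 0 // scalars or non-JSON values; treat as negligible here
    }
    return data.count
}

// Hypothetical guard before copying: warn if the payload looks large.
let sampleValue: [String: Any] = ["Job Jar": ["task1": "mow lawn", "task2": "wash dishes"]]
let bytes = approximateJSONSize(of: sampleValue)
let tooLarge = bytes > 1_000_000 // assumed threshold based on the 1 MB figure above
```

If `tooLarge` is true, you could alert yourself before the copy instead of letting it stall silently.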
UPDATE #1
I read about the size limitation HERE. Judging from Frank's reaction, I'm guessing my understanding of that limitation is wrong.
I downloaded the node from the Firebase console and checked its size. It's 799 KB on my hard drive. It's a large JSON tree, and so I thought that its size must be the reason why it won't copy over. The smaller nodes copy over no problem. Just the large ones have trouble.
UPDATE #2
I'm not sure how to show the actual data, other than a screenshot, seeing how large the JSON tree is. So here is a screenshot:
As you can see, the data has multiple nodes, some of which are larger than others. I suppose I can cut down the 'Job Jar' node, but the rest really need to be that size for everything to work properly.
Granted, this is one of the largest datasets I have among all my users, but the structure doesn't change.
As for the speed of execution for each line of code, here are the simulator times for each numbered step:
Step 2 TRT: 0.5278879404067993
Step 3 TRT: 0.6249579191207886
Step 4 TRT: 1.8466829061508179
ALL DONE COPYING!!
This only works for the smaller datasets. For the larger ones, I never get to step 4. It just hangs. I let it run for several minutes, but no change.
Final version that seems to work:
func copyToTestingNode() {
    // 1. create copy of user and then modify the copy
    guard var copiedUser = user else { print("copied user error"); return }
    let adminRef = ref.child(MP.adminID)
    copiedUser.userID = MP.adminID
    copiedUser.householdInfo.subscriptionExpiryDate = 2500000000

    // 2. get a snapshot of the copied user's info
    ref.child(user.userID).observeSingleEvent(of: .value) { (userSnapshot) in

        // 3. remove any existing data at admin node, and then...
        adminRef.removeValue { (error, dbRef) in
            if error != nil { print("Yikes!") }

            // 4. ...copy the new user info to the admin node one child at a time (if the user has a lot of data)
            var totalNodesCopied = 0
            for item in userSnapshot.children {
                guard let snap = item as? DataSnapshot else { print("snap error"); return }
                adminRef.child(snap.key).setValue(snap.value) { (error, _) in
                    totalNodesCopied += 1
                    if totalNodesCopied == userSnapshot.childrenCount {
                        print("ALL DONE COPYING!!")
                    }
                }
            }
        }
    }
}
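The same split-into-smaller-writes idea can be taken one step further for children that are themselves large: break a flat dictionary into batches of at most N keys and write each batch separately. This is an illustrative sketch only; the batch size is an assumption, not a documented Firebase number:

```swift
/// Split a flat dictionary into batches of at most `size` keys, so each
/// write stays well under a payload limit. Purely illustrative.
func batches(of dict: [String: Any], size: Int) -> [[String: Any]] {
    var result: [[String: Any]] = []
    var current: [String: Any] = [:]
    for (key, value) in dict {
        current[key] = value
        if current.count == size {
            result.append(current)
            current = [:]
        }
    }
    if !current.isEmpty {
        result.append(current) // partial final batch
    }
    return result
}
```

Each batch could then be written with its own setValue call, counting completions exactly as the loop above does.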
While retrieving metadata from media files, I've run into a memory issue I cannot figure out.
I want to retrieve metadata for media files stored either in the local app storage or in the iTunes area. For this I use AVAsset. While looping through these files I can see the memory consumption rising constantly, and not just a little: it is significant, and it ends up stalling the app when I enumerate my iTunes library on the phone.
The problem seems to be accessing the metadata property of the AVAsset class. I've narrowed it down to one line of code: 'let meta = ass.metadata'. Having that line of code (without any references to the result) makes the app consume memory. I've included an example of my code structure.
func processFiles(_ files: Array)
{
    var lastalbum: String = ""
    var i: Int = 0
    for file in files
    {
        i += 1
        view.setProgressPosition(CGFloat(i) / CGFloat(files.count))
        lastalbum = updateFile(file.url, lastalbum,
            { (album, title, artist, composer) in
                view.setProgressNote(album, title, artist + " / " + composer)
            })
    }
}

func updateFile(_ url: URL, _ lastalbum: String, iPod: Bool = false,
                _ progress: (String, String, String, String) -> Void) -> String
{
    let ass = AVAsset(url: url)
    let meta = ass.metadata
    for item in meta
    {
        // Examine metadata
    }
    // Use metadata
    // Callback with status
}
It seems that memory allocated in the updateFile method is kept, even after the function has ended. However, once the processFiles function completes and the app returns to a normal state, all the memory is released again.
So in conclusion, this is not a real leak, but it is still a significant problem. Any good ideas as to what goes wrong? Is there any way I can force the memory management to run a cleanup?
As suggested in the comment on the post, the solution is to wrap the relevant code in an autoreleasepool block. I've tested this both with a small set of local media files and with my rather large iTunes media library (70 GB). After implementing autoreleasepool, the memory buildup is eliminated.
func updateFile(_ url: URL, _ lastalbum: String, iPod: Bool = false,
                _ progress: (String, String, String, String) -> Void) -> String
{
    // autoreleasepool returns its closure's result, so the function's
    // String return value passes straight through
    return autoreleasepool
    {
        let ass = AVAsset(url: url)
        let meta = ass.metadata
        for item in meta
        {
            // Examine metadata
        }
        // Use metadata
        // Callback with status
    }
}
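The shape of this fix is independent of AVFoundation: autoreleasepool takes a closure, drains any autoreleased temporaries when the closure ends, and returns the closure's result. A minimal, self-contained sketch of the same per-iteration pattern (no AVAsset involved, names are illustrative):

```swift
import Foundation

/// Process a list of items, draining autoreleased temporaries every iteration
/// instead of letting them accumulate until the enclosing run loop drains.
func processAll(_ items: [String]) -> Int {
    var total = 0
    for item in items {
        // Temporaries created inside this closure are released each pass.
        total += autoreleasepool { () -> Int in
            let temp = NSMutableString(string: item) // stand-in for AVAsset work
            temp.append("!")
            return temp.length
        }
    }
    return total
}
```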
In my new app (Project Control, on the iOS App Store ;)) I want users to take part in development decisions. For this I have added a path in my Firebase database called "claps". I would like to show the number of claps for the different concepts in my TableView. I have tried the following:
self.posts.append(Post(title: post_title, des: post_description, info: "\(post_date) - \(post_user) - \(post_claps) 👏", claps: Int(post_claps)))

for var item in self.posts {
    g.ref.child("concepts").child(item.title).queryOrdered(byChild: "claps").observe(.childAdded) { (snapshotClaps: DataSnapshot!) in
        item.claps = Int(snapshotClaps.childrenCount)
    }
}

DispatchQueue.main.async() {
    self.tableView.reloadData()
}
However, it does not yet show the right count; it is counting one level above the one I want. I don't know how to make the reference more specific so that I really get only what's under claps.
This is my database:
Currently my output is 5, but it should be 4. As you can see, it's observing one "layer" too early. Help will be appreciated. Improvements too :)
UPDATE:
Through testing I could tell that the problem is in the reference. The Int 5 comes from the 5 children of the top-level "Journal" node. My problem is that I can't get any deeper into the structure because I don't have a specific String to pass to .child().
Since you're observing the .childAdded event, your closure gets called for each matching child node. If you want to count the number of matching child nodes, you'll want to observe the .value event, which ensures your closure gets called for all matching nodes at once.
Something like:
g.ref.child("concepts").child(item.title).observe(.value) { (snapshotClaps: DataSnapshot!) in
    item.claps = Int(snapshotClaps.childrenCount)
}
Note that I also removed the orderBy clause, since that has no useful meaning if all you use is the count.
Create an array and let Firebase populate it, or do something like:
g.ref.child("concepts").child(item.title).observe(.value) { (snapshotClaps: DataSnapshot!) in
    item.claps = Int(snapshotClaps.childrenCount)
}
Observing .value makes sure your closure receives all matching nodes at once.
There are a couple of great solutions here, but the issue with reading a node by .value is that it reads in everything in that node.
While that is fine for nodes with a limited amount of data, it can overwhelm the device when the node contains a lot of data.
So another option is to leverage that Firebase executes all .childAdded events before .value events. That way, we can use a .value as a trigger that all nodes have been read.
Here's a function that uses .childAdded to iterate and count all of the users in the users node. Also, there's a .value observer that reads in just the last node, removes the .childAdded observer and passes the count back to the calling function via a completion handler. Remember that even though we are attaching both observers, the .childAdded events will all fire before the .value event.
func countUsers(completion: @escaping (Int) -> Void) {
    var count = 0
    let usersRef = self.ref.child("users")
    usersRef.observe(.childAdded, with: { snapshot in
        count += 1
    })
    let query = usersRef.queryOrderedByKey().queryLimited(toLast: 1)
    query.observeSingleEvent(of: .value, with: { snapshot in
        usersRef.removeAllObservers()
        completion(count)
    })
}
To call the function, here's the code:
func getUserCount() {
    self.countUsers(completion: { userCount in
        print("number of users: \(userCount)")
    })
}
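The count-then-complete shape above can be sketched without Firebase at all: accumulate across many asynchronous callbacks, then fire one completion exactly once after the last callback has run. A minimal stand-in using DispatchGroup (the names are illustrative, not Firebase API):

```swift
import Dispatch

/// Count items delivered asynchronously, then call `completion` exactly once.
func countItems(_ items: [String], completion: @escaping (Int) -> Void) {
    var count = 0
    let group = DispatchGroup()
    let queue = DispatchQueue(label: "counter") // serial queue guards `count`
    for _ in items {
        group.enter()
        queue.async {          // stands in for an async .childAdded callback
            count += 1
            group.leave()
        }
    }
    group.notify(queue: queue) {
        completion(count)      // fires only after every callback has run
    }
}
```

The .childAdded/.value trick in the answer relies on Firebase's event ordering instead of a group, but the completion-handler contract is the same.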
I was hoping I could get some help optimising my code. I'm new to development, so please be kind.
Currently it works, but it takes quite some time (10-15 seconds) to load the first table view I need in my app.
At first I thought that I had not activated "persistence" properly, but I am starting to suspect that the way I am loading data is suboptimal.
The "large" (12k+ items) dataset I use doesn't change that frequently, so the ideal solution would be to load it once and then listen for changes. I thought that was what I was doing, but if so, I don't understand why it is so slow. I now suspect the problem is that I append all the data on every event, instead of reading from somewhere local and then listening for changes from the server.
Any help is appreciated
// read from Firebase, adjusted to whiskies
func startObservingDB() {
    dbRef.queryOrdered(byChild: "brand_name").observe(.value, with: { (snapshot: FIRDataSnapshot) in
        var newWhisky = [WhiskyItem]()
        // for-loop to iterate through the snapshot
        for whiskyItem in snapshot.children {
            let whiskyObject = WhiskyItem(snapshot: whiskyItem as! FIRDataSnapshot)
            newWhisky.append(whiskyObject)
        }
        // update
        self.whiskies = newWhisky
        print("WhiskyItem")
        self.tableView.reloadData()
    }) { (error: Error) in
        print(error.localizedDescription)
    }
}
Firebase structure: /Results/Index/name: xxx, "other thing1": xxxx,..., "other thing32": xxxx
I'm not sure that it is a good idea to store all 12,000 items on the phone.
Maybe this will be a good solution for you:
You can use this lib to, for example:
1) load data for 100 rows
2) scroll to the end
3) load another 100 rows.
Hope it helps
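The paging idea above can be sketched independently of any library: keep an offset, fetch one page at a time, and append it to the table's data source. A minimal in-memory version (the type and page size are illustrative; in a real app the fetch step would be a Firebase query limited to `pageSize` items):

```swift
/// A tiny pager over an already-loaded array, standing in for paged fetches.
struct Pager<Element> {
    let items: [Element]
    let pageSize: Int
    private var offset = 0

    init(items: [Element], pageSize: Int) {
        self.items = items
        self.pageSize = pageSize
    }

    /// Returns the next page, advancing the offset; empty when exhausted.
    mutating func nextPage() -> [Element] {
        let end = min(offset + pageSize, items.count)
        guard offset < end else { return [] }
        defer { offset = end }
        return Array(items[offset..<end])
    }
}
```

In the table view you would call `nextPage()` when the user scrolls near the bottom, appending the results and reloading.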
I am facing a strange issue with Core Data. I am starting an operation to fill initial data in a table, and I kick off the operation in applicationDidBecomeActive.
// Creating child context
let context = NSManagedObjectContext(concurrencyType: .PrivateQueueConcurrencyType)
let delegate = UIApplication.sharedApplication().delegate as! AppDelegate
context.parentContext = delegate.managedObjectContext

// Reading data from the database and printing here always shows zero entities
context.performBlockAndWait({
    // Performing batch delete, to remove duplicacy
})

context.performBlockAndWait({
    // Creating entities from the JSON read from the app bundle
    ...
    ...
    do {
        // Saving local context
        try context.save()
        context.parentContext?.performBlockAndWait({
            do {
                try context.parentContext?.save()
                // Reading data from the database and printing here shows the correct number of entities
            } catch {
                printDebug("Unable to save main context: \(error)")
            }
        })
    } catch {
        printDebug("Unable to save main context: \(error)")
    }
})
// Reading data from the database and printing here also shows the correct number of entities
I am starting this operation from only one place, i.e. applicationDidBecomeActive, and I am also accessing the entity only from this operation.
Any idea what the problem is?
So the problem was the batch delete using NSBatchDeleteRequest. I was running the same code for multiple types of NSManagedObjectContext, all of which are subclasses of a single NSManagedObjectContext, so that might be the issue.
When I perform fetch-all-and-delete-in-a-loop, everything works fine, i.e. the entities get stored. But when I use NSBatchDeleteRequest to delete everything at once, the entities that were inserted before the batch-delete operation of the next type of NSManagedObjectContext get deleted.
So the culprit was NSBatchDeleteRequest, and I don't know why. I searched but didn't find any solution, so I will post another question about this issue.
We have the need to switch between different databases, i.e., realms. There is an active database at a specific folder (e.g., ".../database/some.realm"), and this database may change (e.g., to ".../database/other.realm").
What we are currently doing is this:
(1) for the previous operational database: Commit any open transactions and invalidate.
if realm.inWriteTransaction {
    do {
        try realm.commitWrite()
    } catch {
        ...
    }
} else {
    ...
}
realm.invalidate()
(2) move the previous database to a new place.
(3) move the new database to the operational place.
(4) for the new database: create a new configuration and create a new realm.
public func openDatabaseAtURL(url: NSURL) {
    let config = Realm.Configuration(
        fileURL: url,
        inMemoryIdentifier: nil,
        encryptionKey: nil,
        readOnly: false,
        schemaVersion: self.currentSchemaVersion,
        migrationBlock: nil,
        deleteRealmIfMigrationNeeded: false,
        objectTypes: nil)
    do {
        let realm = try Realm(configuration: config)
        self.realm = realm
    } catch let error as NSError {
        ...
    } catch {
        ...
    }
}
Although there are no errors, and the new database is used properly when the app is next started, we want the database switch to take effect immediately. However, Realm seems not to notice that anything happened: none of Realm's auxiliary files (.lock etc.) are created in the folder, and the app still shows the old data after refreshing.
What are we doing wrong, and what should we do instead? (In other words: how do we properly "close" the old database and "open" the new one?)
Thanks a lot for your help!
Hardy
Realm internally holds references to Realm instances across threads so that a new copy isn't created each time one is requested.
Unfortunately, what this means in practice is that once a Realm instance is touched, it will remain in memory and be re-used until some time later when the system implicitly releases it. Until then, moving the physical file on disk will cause issues.
The general recommendation is to perform file operations on Realm files only before you create any Realm() instances pointing at them. In cases where you can't avoid that, you can explicitly control when the Realm copies are evicted by placing each call you make to Realm() inside an autoreleasepool block.