Performance for retrieving PHAssets with moments - swift

I am currently building a gallery of user photos into an app. So far I simply listed all the user's photos in a UICollectionView. Now I would like to add moment clusters as sections, similar to the iOS Photos app.
What I am doing (a bit simplified):
let momentClusters = PHCollectionList.fetchMomentLists(with: .momentListCluster, options: options)
momentClusters.enumerateObjects { (momentCluster, _, _) in
    let moments = PHAssetCollection.fetchMoments(inMomentList: momentCluster, options: nil)
    var assetFetchResults: [PHFetchResult<PHAsset>] = []
    moments.enumerateObjects { (moment, _, _) in
        let fetchResult = PHAsset.fetchAssets(in: moment, options: options)
        assetFetchResults.append(fetchResult)
    }
    // Save assetFetchResults somewhere and use it in the UICollectionView data source methods
}
It turns out this is A LOT more time-intensive than what I did before - up to a minute, compared to about 2 seconds, on my iPhone X (with a gallery of about 15k pictures). Obviously, this is unacceptable.
Why is the performance of fetching moments so bad, and how can I improve it? Am I using the API wrong?
I tried loading assets on demand, but it's very difficult, since I then have to work with estimated item counts per moment and reload sections while the user is scrolling - I couldn't get this to work in a satisfactory way (smooth scrolling, no noticeable reload).
Any help? How is this API supposed to work? Am I using it wrong?
Update / Partial solution
So after playing around, it turns out that the following was a big part of the problem:
I was fetching assets using options with a sort descriptor:
let options = PHFetchOptions()
options.sortDescriptors = [NSSortDescriptor(key: "creationDate", ascending: true)]
options.predicate = NSPredicate(format: "mediaType = %d", PHAssetMediaType.image.rawValue)
let assets = PHAsset.fetchAssets(in: moment, options: options)
It seems sorting doesn't allow PhotoKit to make use of indices or caches it has internally. Removing the sortDescriptors speeds up the fetch significantly. It's still slower than before and any further tips are appreciated, but this makes loading times way more bearable.
Note that, without the sort descriptor, assets are returned oldest first, but this can easily be fixed manually by retrieving assets in reversed order in cellForItemAt: (so the cell at 0,0 gets the last asset of the first moment), as sketched below.
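A minimal sketch of that reversed lookup in the data source, assuming the assetFetchResults array from the snippet above and a hypothetical "PhotoCell" reuse identifier:

func collectionView(_ collectionView: UICollectionView, cellForItemAt indexPath: IndexPath) -> UICollectionViewCell {
    let cell = collectionView.dequeueReusableCell(withReuseIdentifier: "PhotoCell", for: indexPath)
    let fetchResult = assetFetchResults[indexPath.section]
    // Fetch results come back oldest-first, so mirror the index to show newest first.
    let asset = fetchResult.object(at: fetchResult.count - 1 - indexPath.item)
    // ...request a thumbnail for `asset` (e.g. via PHCachingImageManager) and set it on the cell...
    return cell
}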

Disclaimer: Performance-related answers are necessarily speculative...
You've described two extremes:
Prefetch all assets for all moments before displaying the collection
Fetch all assets lazily, use estimated counts and reload
But there are in-between options. For example, you can fetch only the moments at first, then let fetching assets per moment be driven by the collection view. But instead of waiting until moments are visible before fetching their contents, use the UICollectionViewDataSourcePrefetching protocol to decide what to start fetching before those items are due to be visible on screen.
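A rough sketch of that middle ground (assuming the moments fetch result from the question and a per-section cache; GalleryViewController and ensureAssetsFetched(forSection:) are made-up names):

extension GalleryViewController: UICollectionViewDataSourcePrefetching {
    func collectionView(_ collectionView: UICollectionView, prefetchItemsAt indexPaths: [IndexPath]) {
        // Kick off the per-moment asset fetch for sections that are about to scroll in.
        for section in Set(indexPaths.map { $0.section }) {
            ensureAssetsFetched(forSection: section)
        }
    }

    private func ensureAssetsFetched(forSection section: Int) {
        guard assetFetchResults[section] == nil else { return }
        let moment = moments.object(at: section)
        assetFetchResults[section] = PHAsset.fetchAssets(in: moment, options: nil)
    }
}

Here assetFetchResults would be an array of optionals ([PHFetchResult<PHAsset>?]), filled lazily as sections approach the viewport.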


SwiftUI List memory issue, images not deallocating from RAM causing crash

I am loading images into a SwiftUI List. When too many images are scrolled through, RAM usage skyrockets and crashes the app. Why are images not deallocated as the user scrolls down past them?
I am loading the images like so:
List(allProducts, id: \.self) { product in
    Image(uiImage: UIImage(data: dataFromRealmDB[product]))
}
My intuition tells me that there must be a way to deallocate it manually from memory, so I am trying the following. Please let me know if you know how to fill in the blank.
List(allProducts, id: \.self) { product in
    Image(uiImage: UIImage(data: dataFromRealmDB[product]))
        .onDisappear(perform: {
            "WHAT SHOULD GO HERE TO MAKE THE IMAGE GET PURGED FROM RAM?"
        })
}
If my suggested solution is not possible, please let me know as well.
UPDATE
I have changed the way images are stored. Now they are stored with FileManager instead of being saved to the RealmDB. This is my function to get the image. Memory usage still increases and causes a crash; there is no deallocation from SwiftUI.
func getImage(link: String) -> Image? {
    let lastPath = URL(string: link)?.lastPathComponent
    if let dir = try? FileManager.default.url(for: .documentDirectory, in: .userDomainMask, appropriateFor: nil, create: false) {
        let path = URL(fileURLWithPath: dir.absoluteString).appendingPathComponent(lastPath!).path
        if let image = UIImage(contentsOfFile: path) {
            return Image(uiImage: image)
        }
    }
    return Image("noImg")
}
The code in the question is a bit incomplete, but the answer really isn't a coding answer; it's more of an overall design answer.
Realm objects are lazily loaded and will essentially never overrun memory if used properly; they are only in memory when in use; e.g. a Realm Results object having 10,000 objects is easily handled, memory is allocated and deallocated automatically.
On that note, if you store Realm objects in other, non-Realm objects like an array, that totally changes the memory impact and can overwhelm the device.
But, and more importantly:
Realm is not a good way to store blob data (full-size pictures). A Realm property has a finite storage limit of 16MB, and an image can easily go way beyond that.
There are other options for picture storage from MongoDB and Firebase.
See my answer to this question for more details
Lastly, you should be using pagination to control how many images are loaded at a time from whatever source you use; that will allow you to more easily control the memory allocation and UI.
That last part is important: no matter what the technique is, loading a bunch of images into memory is eventually going to overwhelm the device. So, as you can see, switching to FileManager to load the images from disk instead of Realm is not going to be a long-term solution; pagination is the way to go.
There are some third-party libraries available to help with that, and/or you can craft your own. Do a search here on SO for 'image pagination', as it's discussed a lot.
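As a rough sketch of the pagination idea in SwiftUI (assuming the images now live on disk, and imageURLs(page:pageSize:) is a hypothetical loader that returns the next page of file URLs):

struct PagedImageList: View {
    @State private var urls: [URL] = []
    @State private var page = 0

    var body: some View {
        List(urls, id: \.self) { url in
            // Decode lazily from disk; only the rows SwiftUI keeps alive hold image data.
            Image(uiImage: UIImage(contentsOfFile: url.path) ?? UIImage())
                .onAppear {
                    // When the last loaded row appears, fetch the next page.
                    if url == urls.last { loadNextPage() }
                }
        }
        .onAppear(perform: loadNextPage)
    }

    private func loadNextPage() {
        urls += imageURLs(page: page, pageSize: 20) // hypothetical loader
        page += 1
    }
}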
Oh - one other thing; please use thumbnails for your UI! You only really need to display a full-size image if the user taps or selects it - then you can load it from disk. Firebase Storage can actually do that for you; when you upload a full-size image, it can (on their server) create a thumbnail for your UI.
Likewise, thumbnails are tiny and Realm can easily handle those; the design would be an object like this
class MyImageObject: Object {
    @Persisted var imageURL = ""    // a string or URL pointing to where the full image is stored on disk
    @Persisted var thumbnail: Data? // small enough for Realm to handle comfortably
    @Persisted var image_name = ""  // the image name
}
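For completeness, one way to produce such a thumbnail is ImageIO's downsampling API (a sketch, not from the original answer; the function name is made up):

import ImageIO
import UIKit

// Downsample an image file into a small thumbnail without decoding the full bitmap.
func makeThumbnailData(at url: URL, maxPixelSize: Int = 200) -> Data? {
    let options: [CFString: Any] = [
        kCGImageSourceCreateThumbnailFromImageAlways: true,
        kCGImageSourceThumbnailMaxPixelSize: maxPixelSize,
        kCGImageSourceCreateThumbnailWithTransform: true
    ]
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil),
          let cgImage = CGImageSourceCreateThumbnailAtIndex(source, 0, options as CFDictionary)
    else { return nil }
    return UIImage(cgImage: cgImage).jpegData(compressionQuality: 0.8)
}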
Are you sure it is not related to the fact that you fetch the raw image data from the database? According to this question, a SwiftUI List works like a table view - i.e. it reuses cells.
I think the fact that you are using a database to hold the raw image data is what causes the spike in memory.
This is kind of an opinion-based answer, but I'd recommend either bundling the images in advance in the application (if your business logic supports it), storing names in the DB, and initializing the images by name.
Or hosting them remotely and fetching on demand.
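If you go the remote route, a minimal on-demand sketch using SwiftUI's AsyncImage (iOS 15+; product.imageURL is a made-up property):

List(allProducts, id: \.self) { product in
    // AsyncImage fetches lazily, so off-screen images don't have to stay in memory.
    AsyncImage(url: product.imageURL) { image in
        image.resizable().scaledToFit()
    } placeholder: {
        ProgressView()
    }
}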

Capture if a barcode was scanned inside application

I am developing an application that creates a barcode using CIFilter. The question I have is whether there is a way to capture, at the app level, every time the barcode is scanned. The barcode is a way for the device holder to redeem some sort of discount at different businesses. After it is scanned, I would want to hit an API that records that the barcode was scanned by the device holder, without having to tie into the businesses' systems. What would be the best approach for this, if there is one?
Just a snippet of how I'm creating these barcodes
func generateBarcode(from string: String) -> UIImage? {
    let data = string.data(using: String.Encoding.ascii)
    if let filter = CIFilter(name: "CICode128BarcodeGenerator") {
        filter.setValue(data, forKey: "inputMessage")
        let transform = CGAffineTransform(scaleX: 3, y: 3)
        if let output = filter.outputImage?.transformed(by: transform) {
            return UIImage(ciImage: output)
        }
    }
    return nil
}
Thanks
The short answer is: most likely not.
The screen that is displaying the barcode has (physically) no way of telling if it's being looked at or scanned. It only emits light, it doesn't receive any information.
I can only think of two ways of getting that information:
Use the sensors of the device, like the front camera, to determine if the screen is being scanned. But this is a very hard task, since you have to analyze the video stream for signs of scanning (probably with machine learning of some sort). It would also require that the user give permission to use the camera just for... some kind of feedback?
The scanner needs to somehow communicate with the device through some other means, like a local network or the internet. This would require an API of some sort, however.
Maybe it's enough for your use case to just track when the user opens the barcode inside the app, assuming this will most likely only happen when they let it be scanned; a sketch of that follows.
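A minimal sketch of that last idea (the endpoint URL and the barcodeString property are made up; the point is simply to log the likely redemption when the barcode screen is shown):

override func viewDidAppear(_ animated: Bool) {
    super.viewDidAppear(animated)
    // Assume showing the barcode implies an imminent scan; report it to a hypothetical endpoint.
    var request = URLRequest(url: URL(string: "https://api.example.com/barcode-shown")!)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try? JSONEncoder().encode(["code": barcodeString])
    URLSession.shared.dataTask(with: request).resume()
}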

Don't want to download all objects

I'm having a hard time understanding how I should structure my iOS app with regard to how Firebase works. I've got a few thousand users with my current implementation (not using Firebase currently). Using .childAdded gives me all of the items in my db (of course, to start with), but I'm trying to build something with .childAdded that allows me to, say, download the first 20 items, then, as the user scrolls the table view, download the next 20 items. My users post a lot of photos, and their feed would be blown up with the amount of posts that .childAdded returns.
Thoughts on what to do?
This may help you better understand how to convert Firebase Queries into equivalent SQL Queries.
https://firebase.googleblog.com/2013/10/queries-part-1-common-sql-queries.html#paginate
// fetch page 2 of messages
new Firebase("https://examples-sql-queries.firebaseio.com/messages")
    .startAt(2) // assumes the priority is the page number
    .endAt(2)
    .once('value', function(snap) {
        console.log('messages in range', snap.val());
    });
Here you can see the documentation that explains queries.
You can go to the part that says limitToFirst.
For example, this limits to the last 10:
var rootRef = firebase.database().ref();
var usersRef = rootRef.child("users");
var usersQuery = usersRef.limitToLast(10);

usersQuery.isEqual(usersRef);                              // false
usersQuery.isEqual(usersRef.limitToLast(10));              // true
usersQuery.isEqual(rootRef.limitToLast(10));               // false
usersQuery.isEqual(usersRef.orderByKey().limitToLast(10)); // false
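For an iOS app in Swift, the equivalent pagination pattern with the Firebase Realtime Database SDK would look roughly like this (a sketch, assuming posts live under a "posts" node; fetchNextPage and lastKey are made-up names):

import FirebaseDatabase

// Fetch the next page of posts after the given key (nil for the first page).
func fetchNextPage(after lastKey: String?, completion: @escaping ([DataSnapshot]) -> Void) {
    var query: DatabaseQuery = Database.database().reference(withPath: "posts").queryOrderedByKey()
    if let lastKey = lastKey {
        // queryStarting(atValue:) is inclusive, so request one extra item and drop the overlap below.
        query = query.queryStarting(atValue: lastKey)
    }
    query = query.queryLimited(toFirst: 21)
    query.observeSingleEvent(of: .value) { snapshot in
        let children = snapshot.children.allObjects as? [DataSnapshot] ?? []
        completion(lastKey == nil ? Array(children.prefix(20)) : Array(children.dropFirst().prefix(20)))
    }
}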

Google Map in tableView

I am facing an issue while displaying Google Maps in a table view cell with Swift. I want to display a check-in (latitude and longitude), and based on this I want to display a Google Map in the table view. I get the latitude and longitude from the server; if no location is available, I get null. In one scenario it works correctly, but after reloading the table I also get a map in cells where the latitude and longitude are null. Please guide me.
A map view is an expensive view to instantiate. Even when using dequeueReusableCellWithIdentifier it will still be resource-heavy with a large number of items.
I believe generating a static map image from your latitude/longitude combination is your best bet.
You can take a look here to see how to easily construct an API call to many popular map providers, such as Google, for a static map image. For example:
http://maps.googleapis.com/maps/api/staticmap?center=40.714728,-73.998672&zoom=13&scale=false&size=600x300&maptype=roadmap&format=png&visual_refresh=true
Also worth mentioning that some of those providers might have limitations (for example, Google might require an API key in case of high traffic), so choose wisely after some research.
Looks like you are facing two issues:
1) The one already mentioned: the map appears even when the data is nil.
2) A performance issue (not mentioned here, though).
The first one is due to cell dequeueing. When you reload the table, cells are reused (not created again).
When you return your cell for a row in
tableView(tableView: UITableView, cellForRowAtIndexPath indexPath: NSIndexPath)
a cell that already contains a map view may be returned; that is why you get an unwanted map there. You should reset it to nil to prevent this.
2) When it comes to performance, it's not the right approach to show a map view in every cell. An alternative approach is to make an image out of your map URL.
Use this:
let latitude: Double = 17.3850
let longitude: Double = 78.4867
let imageURL = NSURL(string: "http://maps.googleapis.com/maps/api/staticmap?center=\(latitude),\(longitude)&zoom=12&scale=false&size=600x300&maptype=roadmap&format=png&visual_refresh=true")
let imageData = NSData(contentsOfURL: imageURL!)!
staticMapImgView.image = UIImage(data: imageData)
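Note that NSData(contentsOfURL:) blocks the thread it runs on; in practice you would load the static map asynchronously and only touch the UI on the main queue. A rough modern-Swift sketch (as a method on the cell or controller owning staticMapImgView):

func loadStaticMap(latitude: Double, longitude: Double) {
    let urlString = "https://maps.googleapis.com/maps/api/staticmap?center=\(latitude),\(longitude)&zoom=12&size=600x300&maptype=roadmap"
    guard let url = URL(string: urlString) else { return }
    URLSession.shared.dataTask(with: url) { data, _, _ in
        guard let data = data, let image = UIImage(data: data) else { return }
        DispatchQueue.main.async {
            // Apply UI changes on the main thread.
            self.staticMapImgView.image = image
        }
    }.resume()
}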

How to best load Locations for MapView from Webserver

I have like 2000 locations in a web database which a user should be able to select on a map. I can ask the web database to give me only a certain number of locations, originating from a current location.
To make everything smooth and elegant, I would first instantiate MKMapView, start CLLocationManager, and wait until I get a didUpdateLocations. Then I would try to get my data from the database with a completion handler.
Should I
a) get all the data at once, or
b) get the data in little pieces or chunks?
What is the best way?
func locationManager(manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
    self.gotoCurrentLocation()
    if let userLocation = manager.location {
        GroundHelper.getAllGroundLocations(userLocation) { self.handleWaypoints($0!) }
    }
}

private func handleWaypoints(grounds: [Ground]) {
    mapView.addAnnotations(grounds)
}

// MARK: - Helper Methods
typealias GPXCompletionHandler = ([Ground]?) -> Void

class func getAllGroundLocations(userlocation: CLLocation, completionHandler: GPXCompletionHandler) {
    let priority = DISPATCH_QUEUE_PRIORITY_DEFAULT
    dispatch_async(dispatch_get_global_queue(priority, 0), { ()->() in
        var results = RestApiManager.sharedInstance.getGPS(userlocation, limit: 50)
        // return the first 50 results
        dispatch_async(dispatch_get_main_queue(), {
            var grounds = [Ground]()
            for result in results {
                let (_, jsonGround) = result
                let ground = Ground(json: jsonGround)
                grounds.append(ground)
            }
            completionHandler(grounds)
        })
        // get the rest
        results = RestApiManager.sharedInstance.getGPS(userlocation)
        // return them
        dispatch_async(dispatch_get_main_queue(), {
            var grounds = [Ground]()
            for result in results {
                let (_, jsonGround) = result
                let ground = Ground(json: jsonGround)
                grounds.append(ground)
            }
            completionHandler(grounds)
        })
    })
}
Getting all data at once is not scalable. You might make it work on a timely manner with 2000 entries, but what if the data set grows? Can you handle 3000? 5000? 10000 annotations?
Getting your data in chunks, returning only entries near the location the map is centered in, and displaying those entries as the user moves around the map makes more sense. However, this approach is very slow, as there is usually a long delay between the user dragging the map and the annotations appearing on screen (network requests are slow in nature).
Thus, the recommended approach for a great user experience is to cache the results locally. You can do this if you have a local database (Core Data, for example) or you could do this with NSCache.
With this approach, you will be hitting the server with new requests as the user moves around the map, but the amount of results returned can be limited to 20, 50 or 100 (something that gets you the highest amounts of data while being very responsive).
Next, you would be rendering all annotations from your cached results on the map, so the number of annotations will grow as the user moves around the map.
The guys from http://realm.io have a pretty nice video that explains this approach: https://www.youtube.com/watch?v=hNDNXECD84c While you do not need to use their mobile database (Realm), you can get the idea of the application architecture and design.
First of all, you should consider whether 2000 annotations are going to fit on the MKMapView at the same time. Maybe you should try to retrieve only the locations that fit in the map region you are presenting.
After that: in your code, you are adding annotations every time didUpdateLocations is called. Are you removing the older annotations anywhere else? If the annotations are not changing and are already on the map, you shouldn't be adding them again.
I think a good approach would be:
1. The first time didUpdateLocations is called: ask your web service for the annotations that fit in a distance equal to two times the area your map is showing. Save that location as locationOrigin and that distance as distanceOrigin.
2. The next time didUpdateLocations is called: if the distance moved from the current location to locationOrigin reaches half of distanceOrigin, query the web service again, update your 'origin' variables, and add only the new annotations to the mapView.
3. If regionWillChange is called (MKMapViewDelegate): the user is zooming, moving the map, or rotating the device. The simple approach is to reload the annotations as in step 1; a smarter approach is to add a gesture recognizer to detect whether the user is zooming in (in that case the annotations don't change) or zooming out or panning (the annotations may change). A sketch of the region-driven reload follows this list.
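A hedged sketch of that region-driven reload in modern Swift (GroundsViewController is a made-up name; GroundHelper and Ground come from the question, with their Swift 2 signatures loosely adapted):

import MapKit

extension GroundsViewController: MKMapViewDelegate {
    // Re-query the web service whenever the visible region settles.
    func mapView(_ mapView: MKMapView, regionDidChangeAnimated animated: Bool) {
        let center = CLLocation(latitude: mapView.centerCoordinate.latitude,
                                longitude: mapView.centerCoordinate.longitude)
        GroundHelper.getAllGroundLocations(center) { grounds in
            guard let grounds = grounds else { return }
            // Simple approach (step 1 above): replace the annotations for the new region.
            mapView.removeAnnotations(mapView.annotations)
            mapView.addAnnotations(grounds)
        }
    }
}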
Load them in chunks near the current user location: if the user zooms out, load another chunk; if the user moves the map, load another chunk; and so on. This way your app will scale better if your annotations keep growing, even to something like 1M annotations.
I have actually written an app that can place 7000 annotations on an MKMapView with no performance problems... and that's on an iPad 1. You really don't need to worry about the map.
The only limiting factor I can think of is how long the network call would take and whether that will be a problem. In my case, I was pulling addresses out of the contacts database, geocoding them, and then storing the lat/longs in Core Data. Yes, geocoding 7000 addresses takes forever, but once you have the lat/longs, putting them on the map is easy stuff and the map code can handle it just fine...