I have about 2,000 locations in a web database which a user should be able to select on a map. I can ask the web database to give me only a certain number of locations around a given location.
To make everything smooth and elegant, I would first instantiate MKMapView, start CLLocationManager, and wait until I get a didUpdateLocations callback. Then I would try to get my data from the database with a completion handler.
Should I
a) get all the data at once, or
b) get the data in smaller chunks?
What is the best way?
func locationManager(manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
    self.gotoCurrentLocation()
    if let userLocation = manager.location {
        GroundHelper.getAllGroundLocations(userLocation) { self.handleWaypoints($0!) }
    }
}

private func handleWaypoints(grounds: [Ground]) {
    mapView.addAnnotations(grounds)
}

// MARK: - Helper Methods

typealias GPXCompletionHandler = ([Ground]?) -> Void

class func getAllGroundLocations(userlocation: CLLocation, completionHandler: GPXCompletionHandler) {
    let priority = DISPATCH_QUEUE_PRIORITY_DEFAULT
    dispatch_async(dispatch_get_global_queue(priority, 0), { () -> () in
        // Fetch the first 50 results and return them on the main queue
        var results = RestApiManager.sharedInstance.getGPS(userlocation, limit: 50)
        dispatch_async(dispatch_get_main_queue(), {
            var grounds = [Ground]()
            for result in results {
                let (_, jsonGround) = result
                let ground = Ground(json: jsonGround)
                grounds.append(ground)
            }
            completionHandler(grounds)
        })

        // Fetch the rest and return them the same way
        results = RestApiManager.sharedInstance.getGPS(userlocation)
        dispatch_async(dispatch_get_main_queue(), {
            var grounds = [Ground]()
            for result in results {
                let (_, jsonGround) = result
                let ground = Ground(json: jsonGround)
                grounds.append(ground)
            }
            completionHandler(grounds)
        })
    })
}
Getting all the data at once is not scalable. You might make it work in a timely manner with 2,000 entries, but what if the data set grows? Can you handle 3,000? 5,000? 10,000 annotations?
Getting your data in chunks, returning only entries near the location the map is centered on, and displaying those entries as the user moves around the map makes more sense. However, on its own this approach feels slow, as there is usually a long delay between the user dragging the map and the annotations appearing on screen (network requests are slow by nature).
Thus, the recommended approach for a great user experience is to cache the results locally. You can do this if you have a local database (Core Data, for example) or you could do this with NSCache.
With this approach, you will be hitting the server with new requests as the user moves around the map, but the number of results returned can be limited to 20, 50, or 100 (whatever gets you the most data while staying responsive).
Next, you would be rendering all annotations from your cached results on the map, so the number of annotations will grow as the user moves around the map.
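As a rough illustration of that idea (not taken from the question's code), a small NSCache-backed layer keyed by coarse map tiles might look like this; Ground is the annotation type from the question, and fetchGrounds is a hypothetical stand-in for the actual network call:

import MapKit

// Sketch only: caches server results per coarse map tile so repeated pans
// over the same area do not hit the network again.
final class GroundCache {
    private let cache = NSCache<NSString, NSArray>()

    private func tileKey(for center: CLLocationCoordinate2D) -> NSString {
        // ~0.1-degree tiles; coarse on purpose so small pans stay inside one tile
        return NSString(format: "%.1f:%.1f", center.latitude, center.longitude)
    }

    func cachedGrounds(near center: CLLocationCoordinate2D) -> [Ground]? {
        return cache.object(forKey: tileKey(for: center)) as? [Ground]
    }

    func store(_ grounds: [Ground], near center: CLLocationCoordinate2D) {
        cache.setObject(grounds as NSArray, forKey: tileKey(for: center))
    }
}

// Usage idea:
// if let hit = groundCache.cachedGrounds(near: mapView.centerCoordinate) {
//     mapView.addAnnotations(hit)
// } else {
//     fetchGrounds(near: mapView.centerCoordinate, limit: 50) { grounds in
//         groundCache.store(grounds, near: mapView.centerCoordinate)
//         mapView.addAnnotations(grounds)
//     }
// }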
The guys from http://realm.io have a pretty nice video that explains this approach: https://www.youtube.com/watch?v=hNDNXECD84c While you do not need to use their mobile database (Realm), you can get the idea of the application architecture and design.
First of all, you should consider whether 2,000 annotations will even fit on the MKMapView at the same time. Maybe you should only fetch the locations that fit in the region the map is currently presenting.
Also, in your code you are adding annotations every time didUpdateLocations is called. Are you removing the older annotations anywhere else? If the annotations have not changed and are already on the map, you shouldn't be adding them again.
I think a good approach would be:
1. The first time didUpdateLocations is called: ask your web service for the annotations that fit within a distance equal to twice the area your map is showing. Save that location as locationOrigin and that distance as distanceOrigin.
2. The next time didUpdateLocations is called: if the distance from the current location to locationOrigin is at least half of distanceOrigin, query the web service again, update your "origin" variables, and add only the new annotations to the map view (see the sketch after this list).
3. If regionWillChange is called (MKMapViewDelegate): the user is zooming, moving the map, or rotating the device. The simple approach is to reload the annotations as in step 1. A smarter approach is to add a gesture recognizer to detect whether the user is zooming in (in which case the annotations don't change), or zooming out or panning (in which case the annotations may change).
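A rough sketch of the distance check from steps 1 and 2 (mapView and Ground come from the question; fetchGrounds(near:radius:completion:) is a hypothetical name for the web-service call):

private var locationOrigin: CLLocation?
private var distanceOrigin: CLLocationDistance = 0

func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
    guard let current = locations.last else { return }

    if let origin = locationOrigin {
        // Step 2: only re-query once the user has moved at least half the original radius
        guard current.distance(from: origin) >= distanceOrigin / 2 else { return }
    } else {
        // Step 1: first fix, use roughly twice the visible map span as the radius
        // (1 degree of latitude is about 111 km)
        distanceOrigin = mapView.region.span.latitudeDelta * 111_000 * 2
    }

    locationOrigin = current
    fetchGrounds(near: current, radius: distanceOrigin) { [weak self] grounds in
        guard let self = self else { return }
        // Add only annotations that are not already on the map
        let existing = Set(self.mapView.annotations.map { "\($0.coordinate.latitude),\($0.coordinate.longitude)" })
        let new = grounds.filter { !existing.contains("\($0.coordinate.latitude),\($0.coordinate.longitude)") }
        self.mapView.addAnnotations(new)
    }
}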
Load them in chunks near the current user location; if the user zooms out, load another chunk, if the user moves the map, load another chunk, and so on. This way your app will scale better as your annotations grow, even toward something like a million annotations.
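The same chunked fetch can also be driven from the map delegate instead of the location manager, so panning and zooming trigger a new request (fetchGrounds is again a placeholder for whatever call returns one chunk):

func mapView(_ mapView: MKMapView, regionDidChangeAnimated animated: Bool) {
    let center = CLLocation(latitude: mapView.centerCoordinate.latitude,
                            longitude: mapView.centerCoordinate.longitude)
    // Each pan/zoom requests only the chunk around the newly visible region;
    // de-duplicate against existing annotations as in the sketch above.
    fetchGrounds(near: center, limit: 50) { [weak self] grounds in
        self?.mapView.addAnnotations(grounds)
    }
}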
I have actually written an app that can place 7000 annotations on an MKMapView with no performance problems... and that's on an iPad 1. You really don't need to worry about the map.
The only limiting factor I can think of is how long the network call would take and whether that will be a problem. In my case, I was pulling addresses out of the contacts database, geocoding them and then storing the lat-longs in core data. Yes geocoding 7000 addresses takes forever, but once you have the lat-longs putting them on the map is easy stuff and the map code can handle it just fine...
I'm trying to create a route that follows a GPS trace I provide.
The GPS trace is cleaned up, has no loops in it, and is in the correct order.
I checked it with other services.
It has 1920 points.
You can find the trace here: GPX Files
Sadly, if I create a route based on the provided SDK example (GitHub), I get loops in my path.
I was hoping you could help me solve the following problems:
How do I avoid loops while creating a route using the HERE iOS Swift SDK?
How do I set the route options so that the route follows the provided point array instead of creating a fastest or balanced route?
Since I could not find those functions in the iOS SDK, I additionally used the REST API to filter the route a bit and remove all points that were not matched correctly according to HERE Maps before drawing the route, i.e. everything with low probability, warnings, or a big distance to the road, yet the result is still not good. Here is a cleaned-up file; it was created after the original was mapped (run once) through HERE Maps. In this file, all points that have low confidence, produce warnings, or lie far from the original points are removed. This is the one I use to create a route, and it still has the same issues, like loops and weird turns.
Thank you very much in advance!
BR.
So far I have this code:
private lazy var router = NMACoreRouter()
@objc func do_routing_stuff(gps_trace: [NMAWaypoint]) {
    var stops = [Any]()
    stops = gps_trace

    let routingMode = NMARoutingMode(routingType: .fastest,
                                     transportMode: .car,
                                     routingOptions: .avoidHighway)

    // Trigger the route calculation
    router.calculateRoute(withStops: stops,
                          routingMode: routingMode) { [weak self] routeResult, error in
        guard error == .none else {
            self?.showMessage("Error: route calculation returned error code \(error.rawValue)")
            return
        }

        guard let result = routeResult, let routes = result.routes, routes.count > 0 else {
            self?.showMessage("Error: route result returned is not valid")
            return
        }

        // Add the first result onto the map
        self?.route = routes[0]
        self?.updateMapRoute(with: self?.route)
        // self?.startNavigation()
    }
}
private func updateMapRoute(with route: NMARoute?) {
    // Remove the previously created map route from the map
    if let previousMapRoute = mapRoute {
        mapView.remove(mapObject: previousMapRoute)
    }

    guard let unwrappedRoute = route else {
        return
    }

    mapRoute = NMAMapRoute(unwrappedRoute)
    mapRoute?.traveledColor = .clear
    _ = mapRoute.map { mapView?.add(mapObject: $0) }

    // In order to see the entire route, we orient the map view accordingly
    if let boundingBox = unwrappedRoute.boundingBox {
        geoBoundingBox = boundingBox
        mapView.set(boundingBox: boundingBox, animation: .linear)
    }
}
For comparison, here is the same route presented with Leaflet maps.
I believe the problem you have is that you are feeding the Routing API a large number of waypoints, all of which are in close proximity to each other.
You have almost 2,000 waypoints in your GPX file (and ~1300 in your cleaned one). Each of these waypoints is less than 10 meters from its closest neighbors. This is not the type of data the Routing API is really designed to work with.
I've experimented with your GPX Trace and I have come up with the following solution: simply skip a bunch of coordinates in your trace.
First, clean up your trace using the Route Matching API (which I believe you have been doing).
Second, pick the first trkpt in the GPX file as your first waypoint for the Routing call. Then skip the next 20 points. Pick the following trkpoint as the second waypoint. Repeat this until you are at the end of the file. Then add the last trkpt in the trace as the final waypoint.
Then call the Routing API and you should get a good match between your trace and your route, without any loops or other weird routing artefacts.
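A minimal sketch of that skip-and-keep step, assuming the matched trace has already been parsed into an [NMAWaypoint] array (the parsing itself is not shown):

// Keep the first point, every (skip+1)-th point after it, and always the last point.
func downsample(_ waypoints: [NMAWaypoint], skipping skip: Int = 20) -> [NMAWaypoint] {
    guard waypoints.count > 2 else { return waypoints }

    var result: [NMAWaypoint] = []
    var index = 0
    while index < waypoints.count - 1 {
        result.append(waypoints[index])
        index += skip + 1          // jump over the next `skip` trace points
    }
    result.append(waypoints[waypoints.count - 1])   // always end on the final trkpt

    return result
}

// let stops = downsample(gps_trace)   // roughly 200 m between waypoints for this trace
// router.calculateRoute(withStops: stops, routingMode: routingMode) { ... }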
Some notes:
I have picked 20 as the number of points to skip, because this puts about 200 m between each waypoint. That should be close enough to ensure that the Routing API does not deviate too much from the traced route. For longer traces you may wish to increase that number. For traces in urban environments with lots of alternate routes, you may want to use a smaller number.
It's important to clean the data with the Route Matching API first, to avoid picking outliers as waypoints.
Also, you may not wish to use the "avoidHighways" option. Given your use case, there doesn't seem to be a benefit and I could see it causing additional problems.
By now you have probably worked it out, but your waypoints are likely landing on bridges or tunnels that are along your route but not on the road you want. I.e. the waypoint is intended to be on the road under the bridge, but the routing engine perceives that you want to drive on the bridge.
The routing engine is looping around those roads to drive you on that waypoint on the bridge or in the tunnel.
There is no simple solution to this that I have found.
I am currently building a gallery of user photos into an app. So far I simply listed all the user's photos in a UICollectionView. Now I would like to add moment clusters as sections, similar to the iOS Photos app.
What I am doing (a bit simplified):
let momentClusters = PHCollectionList.fetchMomentLists(with: .momentListCluster, options: options)
momentClusters.enumerateObjects { (momentCluster, _, _) in
    let moments = PHAssetCollection.fetchMoments(inMomentList: momentCluster, options: nil)
    var assetFetchResults: [PHFetchResult<PHAsset>] = []
    moments.enumerateObjects { (moment, _, _) in
        let fetchResult = PHAsset.fetchAssets(in: moment, options: options)
        assetFetchResults.append(fetchResult)
    }
    // Save assetFetchResults somewhere and use it in the UICollectionView data source methods
}
Turns out this is A LOT more time-intensive than what I did before - up to a minute compared to about 2 seconds on my iPhone X (with a gallery of about 15k pictures). Obviously, this is unacceptable.
Why is the performance of fetching moments so bad, and how can I improve it? Am I using the API wrong?
I tried loading assets on demand, but it's very difficult, since I then have to work with estimated item counts per moment and reload sections while the user is scrolling. I couldn't get this to work in a way that is satisfactory (smooth scrolling, no noticeable reload).
Any help? How is this API supposed to work? Am I using it wrong?
Update / Part solution
So after playing around, it turns out that the following was a big part of the problem:
I was fetching assets using options with a sort descriptor:
let options = PHFetchOptions()
options.sortDescriptors = [NSSortDescriptor(key: "creationDate", ascending: true)]
options.predicate = NSPredicate(format: "mediaType = %d", PHAssetMediaType.image.rawValue)
let assets = PHAsset.fetchAssets(in: moment, options: options)
It seems sorting doesn't allow PhotoKit to make use of indices or caches it has internally. Removing the sortDescriptors speeds up the fetch significantly. It's still slower than before and any further tips are appreciated, but this makes loading times way more bearable.
Note that, without the sort descriptor, assets are returned oldest first, but this can easily be fixed manually by retrieving assets in reversed order in cellForItemAt: (so the cell at 0,0 gets the last asset of the first moment); see the sketch below.
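A quick sketch of that reversed lookup, assuming each moment's PHFetchResult<PHAsset> is stored per section in an assetFetchResults array as in the code above (the cell identifier and image loading are placeholders):

func collectionView(_ collectionView: UICollectionView,
                    cellForItemAt indexPath: IndexPath) -> UICollectionViewCell {
    let cell = collectionView.dequeueReusableCell(withReuseIdentifier: "PhotoCell", for: indexPath)

    let fetchResult = assetFetchResults[indexPath.section]
    // Walk the fetch result backwards so item 0 shows the newest asset of the moment
    let reversedIndex = fetchResult.count - 1 - indexPath.item
    let asset = fetchResult.object(at: reversedIndex)

    // ...request a thumbnail for `asset` (e.g. via PHCachingImageManager) and assign it to the cell...
    return cell
}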
Disclaimer: Performance-related answers are necessarily speculative...
You've described two extremes:
Prefetch all assets for all moments before displaying the collection
Fetch all assets lazily, use estimated counts and reload
But there are in-between options. For example, you can fetch only the moments at first, then let fetching assets per moment be driven by the collection view. But instead of waiting until moments are visible before fetching their contents, use the UICollectionViewDataSourcePrefetching protocol to decide what to start fetching before those items are due to be visible on screen.
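A rough sketch of that prefetching hook, assuming a per-section moments array and a hypothetical fetchAssetsIfNeeded(forSection:) that runs the PHAsset fetch lazily and caches the result (neither name comes from the question):

extension GalleryViewController: UICollectionViewDataSourcePrefetching {
    func collectionView(_ collectionView: UICollectionView,
                        prefetchItemsAt indexPaths: [IndexPath]) {
        // Kick off the per-moment asset fetches shortly before those sections scroll on screen
        let upcomingSections = Set(indexPaths.map { $0.section })
        for section in upcomingSections {
            fetchAssetsIfNeeded(forSection: section)
        }
    }
}

// During setup:
// collectionView.prefetchDataSource = self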
I'm working on a project in Swift that requires strict control of image display and removal timings in certain sections. I'm finding that when I set the image property of an NSImageView from inside a block that's fired by a Timer, the actual display of the image on the screen is delayed by up to a second after the assignment is complete. (This is measured by eyeballing it and using a stopwatch to gauge the time between when an NSLog line is written and when the image actually appears on-screen.)
Triggering image display with a click appears to happen instantaneously, whether it's done by setting the image property of an existing NSImageView, or constructing one on the spot and adding it as a subview.
I have attempted to reduce the behavior down to a fairly simple test case, which, after basic setup of the view (loading the images into variables and laying out several target image locations, which are NSImageViews stored to the targets array), sets a Timer, with an index into the targets array stored in its userInfo property, to trigger the following method:
@objc func testATimer(fromTimer: Timer) {
    if let targetLocation = fromTimer.userInfo as? Int {
        NSLog("Placing target \(targetLocation)")
        targets[targetLocation].image = targetImage

        OperationQueue.main.addOperation {
            let nextLocation = targetLocation + 1
            if nextLocation < self.targets.count {
                NSLog("Setting timer for target \(nextLocation)")
                let _ = Timer.scheduledTimer(timeInterval: 2.0,
                                             target: self,
                                             selector: #selector(ViewController.testATimer(fromTimer:)),
                                             userInfo: nextLocation,
                                             repeats: false)
            }
        }
    }
}
The amount of time observed between the appearance of the log entry "Placing target x" and that of the image that is set to be displayed in the very next line seems to average between 0.3 and 0.6 seconds. This is far beyond the delay that this project can tolerate, as the image will only be on screen for a total of 1 second at a time.
My best guess as to why this is happening is that it is something to do with Timers running on a separate thread from the main display thread; however, I do not have enough experience with multi-threaded programming to know exactly how to mitigate that.
Any assistance would be most appreciated, and if I've left out information that would make answering easier (or possible) I'm more than happy to give it.
Well, after poking at it with some helpful people in #macdev on irc.freenode.net, I found that the source of the problem was the program scaling the image down on the fly every time it set it on the NSImageView. Reducing the size of the image ahead of time solved the problem. (As did setting the image once, then hiding and showing the NSImageView instead.)
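For reference, a minimal sketch of pre-scaling an NSImage once, up front, instead of letting the view rescale a large original on every assignment (the target size is just an example):

import AppKit

// Renders `image` into a new bitmap at `targetSize` so the NSImageView
// never has to scale a large original on the fly.
func downscaled(_ image: NSImage, to targetSize: NSSize) -> NSImage {
    let resized = NSImage(size: targetSize)
    resized.lockFocus()
    image.draw(in: NSRect(origin: .zero, size: targetSize),
               from: NSRect(origin: .zero, size: image.size),
               operation: .copy,
               fraction: 1.0)
    resized.unlockFocus()
    return resized
}

// e.g. once during setup, before any timers fire:
// targetImage = downscaled(largeTargetImage, to: NSSize(width: 200, height: 200))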
I am facing an issue displaying Google Maps in a table view cell with Swift. I want to display a check-in (latitude and longitude), and based on that I want to display a Google map in the table view. I get the latitude and longitude from the server; if no location is available, I get null. In one scenario it works correctly, but when the table reloads I also get a map in cells whose latitude and longitude are null. Please guide me.
A map view is an expensive view to instantiate. Even when using dequeueReusableCellWithIdentifier it will still be resource-heavy with a large number of items.
I believe generating a static map image from your latitude/longitude combination is your best bet.
You can take a look here to see how to easily construct an API call to many popular map providers, such as Google, for a static map image, for example:
http://maps.googleapis.com/maps/api/staticmap?center=40.714728,-73.998672&zoom=13&scale=false&size=600x300&maptype=roadmap&format=png&visual_refresh=true
Also worth mentioning that some of those providers might have limitations (Like Google might need an API key in case of high traffic). So choose wisely after some research.
Looks like you are facing two issues:
1) The already mentioned one: "Map appears even when data is nil"
2) A performance issue (not mentioned here, though)
The first one is due to cell dequeueing. When you reload the table, the cell is reused (not created again).
When you return your cell in
tableView(tableView: UITableView, cellForRowAtIndexPath indexPath: NSIndexPath)
a cell that already contains a map view may get returned; that is why you get an unwanted map there. You should reset it to nil to prevent this, as in the sketch below.
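A minimal sketch of that reset, assuming a custom cell class with an image view for the static map (the names here are placeholders, not from the question):

class CheckInCell: UITableViewCell {
    @IBOutlet weak var staticMapImgView: UIImageView!

    override func prepareForReuse() {
        super.prepareForReuse()
        // Clear the previous row's map so a reused cell never shows stale content
        staticMapImgView.image = nil
    }
}

In cellForRowAtIndexPath you would then only assign an image when the row actually has a latitude and longitude, and leave it nil otherwise.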
2) When it comes to performance, it's not the right approach to show a map view in every cell. An alternative approach is to make an image out of your map URL.
Use something like this:
let mapUrl = NSURL(string: "YourMapURL")!
let mapImage = UIImage(data: NSData(contentsOfURL: mapUrl)!)
let latitude:Double = 17.3850
let longitude:Double = 78.4867
let imageURL = NSURL(string: "http://maps.googleapis.com/maps/api/staticmap?center=\(latitude),\(longitude)&zoom=12&scale=false&size=600x300&maptype=roadmap&format=png&visual_refresh=true")
let imagedData = NSData(contentsOfURL: imageURL!)!
staticMapImgView.image = UIImage(data: imagedData)
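One caveat: NSData(contentsOfURL:) blocks the calling thread, so in a table view you would probably fetch the static map asynchronously and assign it when it arrives; a rough sketch in current Swift, reusing the same static map URL pattern:

if let url = URL(string: "http://maps.googleapis.com/maps/api/staticmap?center=\(latitude),\(longitude)&zoom=12&size=600x300&maptype=roadmap") {
    URLSession.shared.dataTask(with: url) { data, _, _ in
        guard let data = data, let image = UIImage(data: data) else { return }
        DispatchQueue.main.async {
            // Only assign if the cell is still showing this row (check the index path on reuse)
            staticMapImgView.image = image
        }
    }.resume()
}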
I need to simulate how my application will look when a user is driving around for a demo. I have a MKMapView, how can I simulate the look of a user driving around which will use the map.userLocation functionality, which obviously will not be available in the demo.
Thanks!
No way to simulate in iPhone simulator. You'll need to load it onto your device and move around.
Well, I got something going. I essentially just did this:
- (void)moveIcon:(MKAnnotationView *)locationView toLocation:(CLLocation *)newLoc
{
    LocationAnnotation *annotation = [[[LocationAnnotation alloc] initWithCoordinate:newLoc.coordinate] autorelease];
    [locationView setAnnotation:annotation];
    [map setCenterCoordinate:newLoc.coordinate animated:YES];
}
Then I call this guy in a loop between all of my vertices with a slight delay. Works quite well.
I'm not an iPhone dev expert, but how does the map view receive the coordinates? If it's through a function that calls the CoreLocation API, could you possibly just write a function that randomly generates longitude and latitude values at a certain time interval and have your map view pull the coordinates from there instead? Just a thought.
You could also check out iSimulate, which claims to be able to simulate several features that are only available on the iPhone, including CoreLocation, in the iPhone Simulator. I have not tried this myself, so your mileage may vary.
In order to simulate driving you'll need to establish 2 basic functionalities:
Reading CLLocations from an archive (which you'd log during the drive test with a device). Ideally you'll do this based on the timestamps on the locations, i.e. reproducing the exact same location updates which were received during the drive test.
Updating your MKAnnotationView's position on the map based on the locations read from log.
For part 1, take a look at CLLocationDispatch, a handy class which provides archiving/unarchiving of CLLocations and dispatches them to one or more listeners (using CLLocationManagerDelegate protocol).
For part 2, take a look at Moving-MKAnnotationView.
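If you don't want to pull in those projects, a minimal sketch of the replay idea is below; it assumes you already have the logged fixes as an array of CLLocation (how you archive them is up to you), and the handler is whatever moves your annotation or recenters the map:

import CoreLocation
import Foundation

final class DriveSimulator {
    private let loggedLocations: [CLLocation]   // recorded during the real drive test
    private var index = 0
    private let handler: (CLLocation) -> Void   // e.g. move your annotation / recenter the map

    init(loggedLocations: [CLLocation], handler: @escaping (CLLocation) -> Void) {
        self.loggedLocations = loggedLocations
        self.handler = handler
    }

    func start() {
        scheduleNext()
    }

    private func scheduleNext() {
        guard index + 1 < loggedLocations.count else { return }
        // Replay with the same spacing as the original timestamps
        let delay = loggedLocations[index + 1].timestamp
            .timeIntervalSince(loggedLocations[index].timestamp)
        DispatchQueue.main.asyncAfter(deadline: .now() + max(delay, 0)) { [weak self] in
            guard let self = self else { return }
            self.index += 1
            self.handler(self.loggedLocations[self.index])
            self.scheduleNext()
        }
    }
}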
I found a better way would be to subclass MKUserLocation:
class SimulatedUserLocation: MKUserLocation {
    private var simulatedCoordinate = CLLocationCoordinate2D(latitude: 39, longitude: -76)

    override dynamic var coordinate: CLLocationCoordinate2D {
        get {
            return simulatedCoordinate
        }
        set {
            simulatedCoordinate = newValue
        }
    }
}
Then add it as an annotation: mapView.addAnnotation(SimulatedUserLocation()). (You might also want to hide the real location first with mapView.showsUserLocation = false.)
iOS would render the annotation exactly like the real user location.
dynamic is used on the property so that changing coordinate triggers KVO and moves it on the map.
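Driving can then be simulated by moving that coordinate on a timer; a small sketch (the coordinates and step size are arbitrary):

let simulatedLocation = SimulatedUserLocation()

mapView.showsUserLocation = false
mapView.addAnnotation(simulatedLocation)

// Nudge the simulated user north-east every second; KVO on `coordinate`
// moves the annotation on the map automatically.
Timer.scheduledTimer(withTimeInterval: 1.0, repeats: true) { _ in
    var next = simulatedLocation.coordinate
    next.latitude += 0.0005
    next.longitude += 0.0005
    simulatedLocation.coordinate = next
}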
The answer is no. Then how about adding an abstraction layer between your code and MapKit? That way you can write xUnit-style tests for your objective.