Optimizing Objective-C code - iPhone

I have a block of code that I am trying to optimize. The method is called very frequently, so even a small improvement would noticeably increase performance.
- (CGRect)calculateRectForItemAtIndex:(NSIndexPath*)path {
    // Get the x position stored in an NSMutableArray
    double x = [self.sectionXPlacementArray[path.section] doubleValue];
    double y = 0;
    double height = 0;
    // If this is the first row it is a header, so treat it differently
    if (path.row == 0) {
        height = self.defaultHeaderHeight;
        y = 0;
    }
    else {
        height = self.defaultHeight;
        // Calculate the Y placement
        y = (path.row - 1) * self.defaultHeight + self.defaultHeaderHeight;
    }
    // Build and return a CGRect
    return CGRectMake(x, y, [self.headerSizes[self.headers[path.section]] doubleValue], height);
}
Here is some more information:
1) headerSizes is an NSMutableDictionary that looks like this:
{
Header1 = 135;
Header2 = 130;
Header3 = 130;
}
2) headers is an NSMutableArray that looks like this:
(
Header1,
Header2,
Header3
)
In the app these values will not actually be Header1, Header2, and so on; they will be dynamic NSStrings like "City" or "State". headerSizes contains the width that should be used for each header.

As another commenter noted, this does NOT look like a method that would be slowing anything down. It's involved in laying something out, right? Or drawing something? That should not be happening very often (i.e. 60 times per second at worst). Do you actually have any evidence that this is the bottleneck? Have you run your code through the Time Profiler template in Instruments, and did this show up as the #1 method in an inverted-call-tree view of the data?
That said, there's not much to pare down here. I did my best...
- (CGRect)calculateRectForItemAtIndex:(NSIndexPath*)path
{
    // Get the x position stored in an NSMutableArray
    const NSUInteger pathSection = path.section;
    const NSUInteger pathRow = path.row;
    const float x = [self.sectionXPlacementArray[pathSection] floatValue];
    float y = 0;
    float height = 0;
    // If this is the first row it is a header, so treat it differently
    if (pathRow == 0) {
        height = self.defaultHeaderHeight;
        y = 0;
    }
    else {
        const float defaultHeight = self.defaultHeight;
        height = defaultHeight;
        // Calculate the Y placement
        y = (pathRow - 1) * defaultHeight + self.defaultHeaderHeight;
    }
    // Build and return a CGRect
    return CGRectMake(x, y, [self.headerSizes[self.headers[pathSection]] floatValue], height);
}
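If profiling really does point at the last line, the keyed dictionary lookup is the only non-trivial work left. One speculative further step (not something the profiler has proven necessary here) would be to cache the per-section widths in a plain array whenever the headers change, so the method does an indexed array read instead of a dictionary lookup. A minimal sketch, assuming a hypothetical sectionWidthCache property that you rebuild alongside headers/headerSizes:
// Hypothetical cache: rebuild this whenever headers or headerSizes change.
- (void)rebuildSectionWidthCache {
    NSMutableArray *cache = [NSMutableArray arrayWithCapacity:self.headers.count];
    for (NSString *header in self.headers) {
        [cache addObject:self.headerSizes[header] ?: @0];
    }
    self.sectionWidthCache = cache; // assumed NSArray<NSNumber *> property
}

// The width lookup in calculateRectForItemAtIndex: would then become:
// double width = [self.sectionWidthCache[pathSection] doubleValue];
Measure before and after; for a method this small, the win may be negligible.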

Related

iOS ARKit AVDepthData Returns Enormous Numbers

Not sure what I'm doing wrong here, but AVDepthData, which is supposed to be in meters for non-disparity data, is returning HUGE numbers. I have this code...
-(bool)getCurrentFrameDepthBufferIntoBuffer:(ARSession*)session buffer:(BytePtr)buffer width:(int)width height:(int)height bytesPerPixel:(int)bytesPerPixel
{
    // do we have a current frame
    if (session.currentFrame != nil)
    {
        // do we have a captured image?
        if (session.currentFrame.capturedDepthData != nil)
        {
            // get depth data parameters
            int ciImageWidth = (int)CVPixelBufferGetWidth(session.currentFrame.capturedDepthData.depthDataMap);
            int ciImageHeight = (int)CVPixelBufferGetHeight(session.currentFrame.capturedDepthData.depthDataMap);
            // how many bytes per pixel
            int bytesPerPixel;
            if (session.currentFrame.capturedDepthData.depthDataType == kCVPixelFormatType_DisparityFloat16 ||
                session.currentFrame.capturedDepthData.depthDataType == kCVPixelFormatType_DepthFloat16)
                bytesPerPixel = 2;
            else
                bytesPerPixel = 4;
            // copy to passed buffer
            CVPixelBufferLockBaseAddress(session.currentFrame.capturedDepthData.depthDataMap, kCVPixelBufferLock_ReadOnly);
            memcpy(buffer, session.currentFrame.capturedDepthData.depthDataMap, ciImageWidth*ciImageHeight*bytesPerPixel);
            float *floatBuffer = (float*)buffer;
            float maxDepth = 0.0f;
            float minDepth = 0.0f;
            for (int i = 0; i < ciImageWidth*ciImageHeight; i++)
            {
                if (floatBuffer[i] > maxDepth)
                    maxDepth = floatBuffer[i];
                if (floatBuffer[i] < minDepth)
                    minDepth = floatBuffer[i];
            }
            NSLog(@"In iOS, max depth is %f min depth is %f", maxDepth, minDepth);
            CVPixelBufferUnlockBaseAddress(session.currentFrame.capturedDepthData.depthDataMap, kCVPixelBufferLock_ReadOnly);
        }
    }
    return true;
}
But it's returning min and max values like...
2019-06-27 12:32:32.167868+0900 AvatarBuilder[13577:2650159] In iOS, max depth is 3531476501829561451725831270301696000.000000 min depth is -109677129931746407817494761329131520.000000
Which looks nothing like meters.
Ugh, it was my memcpy. I had to do...
float *bufferAddress = (float*)CVPixelBufferGetBaseAddress(session.currentFrame.capturedDepthData.depthDataMap);
memcpy(buffer, bufferAddress, ciImageWidth*ciImageHeight*bytesPerPixel);
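Beyond that fix, a slightly more defensive copy (just a sketch, not from the original post, and assuming BytePtr is a plain byte pointer) would respect the buffer's row stride via CVPixelBufferGetBytesPerRow, since the pixel buffer may pad each row. Also note that if depthDataType turns out to be a Float16 format, reading the data as float* will still give garbage.
// Sketch only: copy a depth map row by row, respecting the buffer's row stride.
CVPixelBufferRef depthMap = session.currentFrame.capturedDepthData.depthDataMap;
CVPixelBufferLockBaseAddress(depthMap, kCVPixelBufferLock_ReadOnly);
size_t mapWidth       = CVPixelBufferGetWidth(depthMap);
size_t mapHeight      = CVPixelBufferGetHeight(depthMap);
size_t srcBytesPerRow = CVPixelBufferGetBytesPerRow(depthMap);   // may include padding
size_t dstBytesPerRow = mapWidth * bytesPerPixel;                // tightly packed destination
const uint8_t *src = CVPixelBufferGetBaseAddress(depthMap);
for (size_t row = 0; row < mapHeight; row++) {
    memcpy(buffer + row * dstBytesPerRow, src + row * srcBytesPerRow, dstBytesPerRow);
}
CVPixelBufferUnlockBaseAddress(depthMap, kCVPixelBufferLock_ReadOnly);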

Cut a sprite sheet in cocos2d for animation

I want to cut a sprite into 6 equal parts using just one image, a .png file I found on the web (like the image below, for example), without TexturePacker.
I could take another approach, but I want to know whether I can do it this way. Does anyone have an idea?
I'll revise my answer if this isn't what you were asking about, but I think you are asking how to 'manually run an animation' using a spritesheet without a plist.
Here's one way. It would be better if you encapsulated this into its own class, but I think this can push you in the right direction:
ManualAnimationTest.h
@interface ManualAnimationTest : CCLayer
{
    CCSprite *animatedSprite;
    int x, y;
    float animatedSpriteWidth, animatedSpriteHeight;
    int animatedSpriteColumns, animatedSpriteRows;
}
@end
ManualAnimationTest.m
#import "ManualAnimationTest.h"
@implementation ManualAnimationTest
-(id) init
{
    if( (self=[super init]))
    {
        CGSize s = [CCDirector sharedDirector].winSize;
        x = 0;
        y = 0;
        animatedSpriteColumns = 3;
        animatedSpriteRows = 2;
        animatedSpriteWidth = 95.0f;
        animatedSpriteHeight = 125.0f;
        animatedSprite = [CCSprite spriteWithFile:@"animal_animation.png" rect:CGRectMake(x * animatedSpriteWidth, y * animatedSpriteHeight, animatedSpriteWidth, animatedSpriteHeight)];
        [self addChild:animatedSprite];
        [animatedSprite setPosition:ccp(s.width / 2.0f, s.height / 2.0f)];
        [self schedule:@selector(animateAnimatedSprite) interval:0.5f];
    }
    return self;
}
-(void) animateAnimatedSprite
{
    [animatedSprite setTextureRect:CGRectMake(x * animatedSpriteWidth, y * animatedSpriteHeight, animatedSpriteWidth, animatedSpriteHeight)];
    x += 1;
    if(x > (animatedSpriteColumns - 1))
    {
        x = 0;
        y += 1;
    }
    if(y > (animatedSpriteRows - 1))
    {
        y = 0;
    }
}
@end
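As a possible alternative (not part of the original answer, and assuming the cocos2d 2.x API), you could build the frames up front from the same grid rects and let a CCAnimation drive the loop instead of a scheduled selector:
// Sketch: build CCSpriteFrames from a 3x2 grid of a single texture and animate them.
CCTexture2D *texture = [[CCTextureCache sharedTextureCache] addImage:@"animal_animation.png"];
NSMutableArray *frames = [NSMutableArray array];
for (int row = 0; row < 2; row++) {
    for (int col = 0; col < 3; col++) {
        CGRect rect = CGRectMake(col * 95.0f, row * 125.0f, 95.0f, 125.0f);
        [frames addObject:[CCSpriteFrame frameWithTexture:texture rect:rect]];
    }
}
CCAnimation *animation = [CCAnimation animationWithSpriteFrames:frames delay:0.5f];
CCSprite *sprite = [CCSprite spriteWithSpriteFrame:[frames objectAtIndex:0]];
[sprite runAction:[CCRepeatForever actionWithAction:[CCAnimate actionWithAnimation:animation]]];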

iPhone: How can I shuffle buttons in a view?

I have a number of buttons in my view and I want to shuffle them (change their positions). How can I shuffle the buttons in my view?
Use arc4random() and change the positions of your buttons.
In your .h file : UIButton *buttonsarray[10];
In your .m file :
// Make the array of buttons the following way. I used this method to create the buttons; you can use your own.
- (void)viewDidLoad {
    [super viewDidLoad];
    float y = 5;
    int x = 0;
    int count = 0;
    for (int i = 0; i < 10; i++)
    {
        count++;
        buttonsarray[i] = [UIButton buttonWithType:UIButtonTypeRoundedRect];
        buttonsarray[i].frame = CGRectMake(x, y, 100, 100);
        [buttonsarray[i] setTitle:[NSString stringWithFormat:@"%d", i+1] forState:UIControlStateNormal];
        x = x + 105;
        [self.view addSubview:buttonsarray[i]];
        if(count == 3)
        {
            count = 0;
            x = 0;
            y = y + 105;
        }
    }
}
// This function will shuffle your buttons
- (IBAction)btnClicked:(id)sender
{
    int n = 10;
    int swaper = -1; // previously swapped index (initialized so the first comparison is defined)
    for (int i = 0; i < 10; i++)
    {
        int r = arc4random() % n;
        if (r != swaper) {
            swaper = r;
            CGRect r1 = buttonsarray[i].frame;
            buttonsarray[i].frame = buttonsarray[swaper].frame;
            buttonsarray[swaper].frame = r1;
            n--;
        }
    }
}
Hope this will help you.
You can do this
Make an array with a group of CGPoints; you will need to store the points as strings (NSStringFromCGPoint / CGPointFromString).
In the layoutSubviews method do something like this:
- (void)layoutSubviews {
    [self.subviews enumerateObjectsUsingBlock:^(id object, NSUInteger idx, BOOL *stop) {
        UIView *subview = (UIView *)object;
        CGPoint newPoint = CGPointFromString([positionsArray objectAtIndex:idx]);
        subview.frame = CGRectMake(newPoint.x, newPoint.y, subview.frame.size.width, subview.frame.size.height);
    }];
}
Then you will need to shuffle the positions in the positions array; you can see an example method here: How to Shuffle an Array
Now every time you need to shuffle the positions you can call setNeedsLayout on your view.
There are more options but that was the first I thought of.
Good Luck
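For completeness (not part of the original answers), a Fisher-Yates shuffle of that positions array could look like this:
// Sketch: shuffle an NSMutableArray of NSStringFromCGPoint() strings in place.
- (void)shufflePositionsArray:(NSMutableArray *)positions {
    for (NSUInteger i = positions.count; i > 1; i--) {
        NSUInteger j = arc4random_uniform((uint32_t)i); // random index in [0, i)
        [positions exchangeObjectAtIndex:(i - 1) withObjectAtIndex:j];
    }
}
Call it, then call setNeedsLayout so layoutSubviews reassigns the frames.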

How to get the actual PDF page size on iPad?

How can I get the PDF page width and height on iPad? Any documentation or suggestions on how I can find this information?
I was using the answers here until I realised there is a one-liner for this:
CGRect pageRect = CGPDFPageGetBoxRect(pdf, kCGPDFMediaBox);
// Which you can convert to size as
CGSize size = pageRect.size;
Used in Apple's sample ZoomingPDFViewer app
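Put together, a self-contained version of that one-liner (a sketch; the file name "document.pdf" and page number 1 are just placeholders) might look like:
// Sketch: load a bundled PDF and read the media-box size of its first page.
NSURL *pdfURL = [[NSBundle mainBundle] URLForResource:@"document" withExtension:@"pdf"];
CGPDFDocumentRef document = CGPDFDocumentCreateWithURL((__bridge CFURLRef)pdfURL);
CGPDFPageRef page = CGPDFDocumentGetPage(document, 1); // pages are 1-based
CGRect pageRect = CGPDFPageGetBoxRect(page, kCGPDFMediaBox);
CGSize pageSize = pageRect.size;
NSLog(@"Page size: %@", NSStringFromCGSize(pageSize));
CGPDFDocumentRelease(document);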
Here's the code to do it; also to convert a point on the page from PDF coordinates to iOS coordinates. See also Get PDF hyperlinks on iOS with Quartz
#import <CoreGraphics/CoreGraphics.h>
. . . . . . . . . . .
NSString *pathToPdfDoc = [[NSBundle mainBundle] pathForResource:@"Test2" ofType:@"pdf"];
NSURL *pdfUrl = [NSURL fileURLWithPath:pathToPdfDoc];
CGPDFDocumentRef document = CGPDFDocumentCreateWithURL((CFURLRef)pdfUrl);
CGPDFPageRef page = CGPDFDocumentGetPage(document, 1); // assuming all the pages are the same size!

// code from https://stackoverflow.com/questions/4080373/get-pdf-hyperlinks-on-ios-with-quartz,
// suitably amended
CGPDFDictionaryRef pageDictionary = CGPDFPageGetDictionary(page);

//******* getting the page size
CGPDFArrayRef pageBoxArray;
if(!CGPDFDictionaryGetArray(pageDictionary, "MediaBox", &pageBoxArray)) {
    return; // we've got something wrong here!!!
}
int pageBoxArrayCount = CGPDFArrayGetCount( pageBoxArray );
CGPDFReal pageCoords[4];
for( int k = 0; k < pageBoxArrayCount; ++k )
{
    CGPDFObjectRef pageRectObj;
    if(!CGPDFArrayGetObject(pageBoxArray, k, &pageRectObj))
    {
        return;
    }
    CGPDFReal pageCoord;
    if(!CGPDFObjectGetValue(pageRectObj, kCGPDFObjectTypeReal, &pageCoord)) {
        return;
    }
    pageCoords[k] = pageCoord;
}
NSLog(@"PDF coordinates -- bottom left x %f ", pageCoords[0]); // should be 0
NSLog(@"PDF coordinates -- bottom left y %f ", pageCoords[1]); // should be 0
NSLog(@"PDF coordinates -- top right x %f ", pageCoords[2]);
NSLog(@"PDF coordinates -- top right y %f ", pageCoords[3]);
NSLog(@"-- i.e. PDF page is %f wide and %f high", pageCoords[2], pageCoords[3]);

// **** now to convert a point on the page from PDF coordinates to iOS coordinates.
double PDFHeight, PDFWidth;
PDFWidth = pageCoords[2];
PDFHeight = pageCoords[3];
// the size of your iOS view or image into which you have rendered your PDF page
// in this example full screen iPad in portrait orientation
double iOSWidth = 768.0;
double iOSHeight = 1024.0;
// the PDF co-ordinate values you want to convert
double PDFxval = 89; // or whatever
double PDFyval = 520; // or whatever
// the iOS coordinate values
int iOSxval, iOSyval;
iOSxval = (int)(PDFxval * (iOSWidth / PDFWidth));
iOSyval = (int)((PDFHeight - PDFyval) * (iOSHeight / PDFHeight));
NSLog(@"PDF: %f %f", PDFxval, PDFyval);
NSLog(@"iOS: %i %i", iOSxval, iOSyval);
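The coordinate conversion at the end can be wrapped in a small helper; this is just a restatement of the arithmetic above (assuming, as above, that the media box origin is at 0,0):
// Sketch: map a point from PDF page space (origin bottom-left) to view space (origin top-left).
static CGPoint PointFromPDFToView(CGPoint pdfPoint, CGSize pdfPageSize, CGSize viewSize) {
    CGFloat x = pdfPoint.x * (viewSize.width / pdfPageSize.width);
    CGFloat y = (pdfPageSize.height - pdfPoint.y) * (viewSize.height / pdfPageSize.height);
    return CGPointMake(x, y);
}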
The following method returns the height of the first PDF page, scaled according to a web view:
-(NSInteger) pdfPageSize:(NSData*) pdfData {
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)pdfData);
    CGPDFDocumentRef document = CGPDFDocumentCreateWithProvider(provider);
    CGDataProviderRelease(provider);
    CGPDFPageRef page = CGPDFDocumentGetPage(document, 1);
    CGRect pageRect = CGPDFPageGetBoxRect(page, kCGPDFMediaBox);
    double pdfHeight, pdfWidth;
    pdfWidth = pageRect.size.width;
    pdfHeight = pageRect.size.height;
    double iOSWidth = _webView.frame.size.width;
    double iOSHeight = _webView.frame.size.height;
    double scaleWidth = iOSWidth / pdfWidth;
    double scaleHeight = iOSHeight / pdfHeight;
    double scale = scaleWidth > scaleHeight ? scaleWidth : scaleHeight;
    NSInteger finalHeight = pdfHeight * scale;
    NSLog(@"MediaRect %f, %f, %f, %f", pageRect.origin.x, pageRect.origin.y, pageRect.size.width, pageRect.size.height);
    NSLog(@"Scale: %f", scale);
    NSLog(@"Final Height: %ld", (long)finalHeight);
    CGPDFDocumentRelease(document);
    return finalHeight;
}
It's a lot smaller than Donal O'Danachair's answer and does basically the same thing. You could easily adapt it to your view size and return the width along with the height.
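For example, it could be called like this (a hypothetical call site; "Test2.pdf" is just a placeholder file name):
NSData *pdfData = [NSData dataWithContentsOfURL:[[NSBundle mainBundle] URLForResource:@"Test2" withExtension:@"pdf"]];
NSInteger scaledHeight = [self pdfPageSize:pdfData];
NSLog(@"Scaled height: %ld", (long)scaledHeight);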
Swift 3.x (based on answer provided by Ege Akpinar)
let pageRect = pdfPage.getBoxRect(CGPDFBox.mediaBox) // where pdfPage is a CGPDFPage
let pageSize = pageRect.size
A more full example showing how to extract the dimensions of page in a PDF file:
/**
Returns the dimensions of a PDF page.
- Parameter pdfURL: The URL of the PDF file.
- Parameter pageNumber: The number of the page to obtain dimensions (Note: page numbers start from `1`, not `0`)
*/
func pageDimension(pdfURL: URL, pageNumber: Int) -> CGSize? {
    // convert URL to NSURL, which is toll-free bridged with CFURL
    let url = pdfURL as NSURL
    guard let pdf = CGPDFDocument(url) else { return nil }
    guard let pdfPage = pdf.page(at: pageNumber) else { return nil }
    let pageRect = pdfPage.getBoxRect(CGPDFBox.mediaBox)
    let pageSize = pageRect.size
    return pageSize
}
let url = Bundle.main.url(forResource: "file", withExtension: "pdf")!
let dimensions = pageDimension(pdfURL: url, pageNumber: 1)
print(dimensions ?? "Cannot get dimensions")
I took Donal O'Danachair's answer and made a few modifications so the rect size is also scaled to the PDF's size. This code snippet actually gets all the annotations off a PDF page and creates the CGRect from the PDF rect. Part of the code is from the answer to a question Donal commented on in his answer.
CGPDFDictionaryRef pageDictionary = CGPDFPageGetDictionary(pageRef);
CGFloat boundsWidth = pdfView.bounds.size.width;
CGFloat boundsHeight = pdfView.bounds.size.height;
CGRect cropBoxRect = CGPDFPageGetBoxRect(pageRef, kCGPDFCropBox);
CGRect mediaBoxRect = CGPDFPageGetBoxRect(pageRef, kCGPDFMediaBox);
CGRect effectiveRect = CGRectIntersection(cropBoxRect, mediaBoxRect);
CGFloat effectiveWidth = effectiveRect.size.width;
CGFloat effectiveHeight = effectiveRect.size.height;
CGFloat widthScale = (boundsWidth / effectiveWidth);
CGFloat heightScale = (boundsHeight / effectiveHeight);
CGFloat pdfScale = (widthScale < heightScale) ? widthScale : heightScale;
CGFloat x_offset = ((boundsWidth - (effectiveWidth * pdfScale)) / 2.0f);
CGFloat y_offset = ((boundsHeight - (effectiveHeight * pdfScale)) / 2.0f);
y_offset = (boundsHeight - y_offset); // Co-ordinate system adjust
//CGFloat x_translate = (x_offset - effectiveRect.origin.x);
//CGFloat y_translate = (y_offset + effectiveRect.origin.y);
CGPDFArrayRef outputArray;
if(!CGPDFDictionaryGetArray(pageDictionary, "Annots", &outputArray)) {
return;
}
int arrayCount = CGPDFArrayGetCount( outputArray );
if(!arrayCount) {
//continue;
}
self.annotationRectArray = [[NSMutableArray alloc] initWithCapacity:arrayCount];
for( int j = 0; j < arrayCount; ++j ) {
CGPDFObjectRef aDictObj;
if(!CGPDFArrayGetObject(outputArray, j, &aDictObj)) {
return;
}
CGPDFDictionaryRef annotDict;
if(!CGPDFObjectGetValue(aDictObj, kCGPDFObjectTypeDictionary, &annotDict)) {
return;
}
CGPDFDictionaryRef aDict;
if(!CGPDFDictionaryGetDictionary(annotDict, "A", &aDict)) {
return;
}
CGPDFStringRef uriStringRef;
if(!CGPDFDictionaryGetString(aDict, "URI", &uriStringRef)) {
return;
}
CGPDFArrayRef rectArray;
if(!CGPDFDictionaryGetArray(annotDict, "Rect", &rectArray)) {
return;
}
int arrayCount = CGPDFArrayGetCount( rectArray );
CGPDFReal coords[4];
for( int k = 0; k < arrayCount; ++k ) {
CGPDFObjectRef rectObj;
if(!CGPDFArrayGetObject(rectArray, k, &rectObj)) {
return;
}
CGPDFReal coord;
if(!CGPDFObjectGetValue(rectObj, kCGPDFObjectTypeReal, &coord)) {
return;
}
coords[k] = coord;
}
char *uriString = (char *)CGPDFStringGetBytePtr(uriStringRef);
//******* getting the page size
CGPDFArrayRef pageBoxArray;
if(!CGPDFDictionaryGetArray(pageDictionary, "MediaBox", &pageBoxArray)) {
return; // we've got something wrong here!!!
}
int pageBoxArrayCount = CGPDFArrayGetCount( pageBoxArray );
CGPDFReal pageCoords[4];
for( int k = 0; k < pageBoxArrayCount; ++k )
{
CGPDFObjectRef pageRectObj;
if(!CGPDFArrayGetObject(pageBoxArray, k, &pageRectObj))
{
return;
}
CGPDFReal pageCoord;
if(!CGPDFObjectGetValue(pageRectObj, kCGPDFObjectTypeReal, &pageCoord)) {
return;
}
pageCoords[k] = pageCoord;
}
#if DEBUG
NSLog(@"PDF coordinates -- bottom left x %f ", pageCoords[0]); // should be 0
NSLog(@"PDF coordinates -- bottom left y %f ", pageCoords[1]); // should be 0
NSLog(@"PDF coordinates -- top right x %f ", pageCoords[2]);
NSLog(@"PDF coordinates -- top right y %f ", pageCoords[3]);
NSLog(@"-- i.e. PDF page is %f wide and %f high", pageCoords[2], pageCoords[3]);
#endif
// **** now to convert a point on the page from PDF coordinates to iOS coordinates.
double PDFHeight, PDFWidth;
PDFWidth = pageCoords[2];
PDFHeight = pageCoords[3];
// the size of your iOS view or image into which you have rendered your PDF page
// in this example full screen iPad in portrait orientation
double iOSWidth = 768.0;
double iOSHeight = 1024.0;
// the PDF co-ordinate values you want to convert
double PDFxval = coords[0]; // or whatever
double PDFyval = coords[3]; // or whatever
double PDFhval = (coords[3]-coords[1]);
double PDFwVal = coords[2]-coords[0];
// the iOS coordinate values
CGFloat iOSxval, iOSyval,iOShval,iOSwval;
iOSxval = PDFxval * (iOSWidth/PDFWidth);
iOSyval = (PDFHeight - PDFyval) * (iOSHeight/PDFHeight);
iOShval = PDFhval *(iOSHeight/PDFHeight);// here I scale the width and height
iOSwval = PDFwVal *(iOSWidth/PDFWidth);
#if DEBUG
NSLog(@"PDF: { {%f %f }, { %f %f } }", PDFxval, PDFyval, PDFwVal, PDFhval);
NSLog(@"iOS: { {%f %f }, { %f %f } }", iOSxval, iOSyval, iOSwval, iOShval);
#endif
NSString *uri = [NSString stringWithCString:uriString encoding:NSUTF8StringEncoding];
CGRect rect = CGRectMake(iOSxval,iOSyval,iOSwval,iOShval);// create the rect and use it as you wish

iPhone Map Kit cluster pinpoints

Regarding iPhone Map Kit cluster pinpoints:
I have thousands of marks that I want to show on the map, but that is just too many to handle, so I want to cluster them.
Are there frameworks or proofs of concept available showing that this is possible or has already been done?
You can use REVClusterMap to cluster annotations.
Note: this is a commercial product I'm affiliated with, but it solves this very problem.
I solved this problem in a few of my apps and decided to extract it into a reusable framework. It's called Superpin and it is a commercial iOS framework (a license costs $149) that internally uses quadtrees for annotation storage and performs grid-based clustering. The algorithm is quite fast; the included sample app shows the airports of the world (more than 30k annotations) and it runs quite smoothly on an iPhone 3G.
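The grid-based clustering idea itself is straightforward: bucket annotations by which grid cell their projected map point falls into, then show one cluster annotation per non-empty cell. A rough sketch of the bucketing step (not Superpin's actual code; annotations and the cell size are assumed inputs):
// Sketch: group annotations by grid cell in projected map space.
double gridSize = 100000.0; // assumed cell size in MKMapPoint units
NSMutableDictionary *buckets = [NSMutableDictionary dictionary];
for (id<MKAnnotation> annotation in annotations) {
    MKMapPoint p = MKMapPointForCoordinate(annotation.coordinate);
    NSString *cellKey = [NSString stringWithFormat:@"%ld:%ld",
                         (long)floor(p.x / gridSize), (long)floor(p.y / gridSize)];
    NSMutableArray *bucket = buckets[cellKey];
    if (bucket == nil) {
        bucket = [NSMutableArray array];
        buckets[cellKey] = bucket;
    }
    [bucket addObject:annotation];
}
// Each bucket with more than one annotation becomes a single cluster annotation.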
This might be a bit like using a chainsaw to mow the lawn, but here is an excerpt from Algorithms in a Nutshell
Creating a KD-Tree...
public class KDFactory {
    // Known comparators for partitioning points along dimensional axes.
    private static Comparator<IMultiPoint> comparators[];

    // Recursively construct KDTree using median method on input points.
    public static KDTree generate(IMultiPoint[] points) {
        if (points.length == 0) { return null; }
        // median will be the root.
        int maxD = points[0].dimensionality();
        KDTree tree = new KDTree(maxD);
        // Make dimensional comparators that compare points by ith dimension
        comparators = new Comparator[maxD + 1];
        for (int i = 1; i <= maxD; i++) {
            comparators[i] = new DimensionalComparator(i);
        }
        tree.setRoot(generate(1, maxD, points, 0, points.length - 1));
        return tree;
    }

    // generate the node for the d-th dimension (1 <= d <= maxD)
    // for points[left, right]
    private static DimensionalNode generate(int d, int maxD,
                                            IMultiPoint points[],
                                            int left, int right) {
        // Handle the easy cases first
        if (right < left) { return null; }
        if (right == left) { return new DimensionalNode(d, points[left]); }
        // Order the array[left, right] so the mth element will be the median
        // and the elements prior to it will all be <=, though they won't
        // necessarily be sorted; similarly, the elements after will all be >=
        int m = 1 + (right - left) / 2;
        Selection.select(points, m, left, right, comparators[d]);
        // Median point on this dimension becomes the parent
        DimensionalNode dm = new DimensionalNode(d, points[left + m - 1]);
        // update to the next dimension, or reset back to 1
        if (++d > maxD) { d = 1; }
        // recursively compute left and right sub-trees, which translate
        // into 'below' and 'above' for n-dimensions.
        dm.setBelow(maxD, generate(d, maxD, points, left, left + m - 2));
        dm.setAbove(maxD, generate(d, maxD, points, left + m, right));
        return dm;
    }
}
Finding the nearest neighbor: best case O(log n), worst case O(n)
// method in KDTree
public IMultiPoint nearest(IMultiPoint target) {
    if (root == null) return null;
    // find parent node to which target would have been inserted. This is our
    // best shot at locating closest point; compute best distance guess so far
    DimensionalNode parent = parent(target);
    IMultiPoint result = parent.point;
    double smallest = target.distance(result);
    // now start back at the root, and check all rectangles that potentially
    // overlap this smallest distance. If better one is found, return it.
    double best[] = new double[] { smallest };
    double raw[] = target.raw();
    IMultiPoint betterOne = root.nearest(raw, best);
    if (betterOne != null) { return betterOne; }
    return result;
}

// method in DimensionalNode. min[0] contains best computed shortest distance.
IMultiPoint nearest(double[] rawTarget, double min[]) {
    // Update minimum if we are closer.
    IMultiPoint result = null;
    // If shorter, update minimum
    double d = shorter(rawTarget, min[0]);
    if (d >= 0 && d < min[0]) {
        min[0] = d;
        result = point;
    }
    // determine if we must dive into the subtrees by computing direct
    // perpendicular distance to the axis along which node separates
    // the plane. If d is smaller than the current smallest distance,
    // we could "bleed" over the plane so we must check both.
    double dp = Math.abs(coord - rawTarget[dimension - 1]);
    IMultiPoint newResult = null;
    if (dp < min[0]) {
        // must dive into both. Return closest one.
        if (above != null) {
            newResult = above.nearest(rawTarget, min);
            if (newResult != null) { result = newResult; }
        }
        if (below != null) {
            newResult = below.nearest(rawTarget, min);
            if (newResult != null) { result = newResult; }
        }
    } else {
        // only need to go in one! Determine which one now.
        if (rawTarget[dimension - 1] < coord) {
            if (below != null) {
                newResult = below.nearest(rawTarget, min);
            }
        } else {
            if (above != null) {
                newResult = above.nearest(rawTarget, min);
            }
        }
        // Use smaller result, if found.
        if (newResult != null) { return newResult; }
    }
    return result;
}
More on KD-Trees at Wikipedia
I tried the others suggested here, and I also found OCMapView, which has worked the best.
It's free and allows easy grouping of annotations, which is what I needed. It's a bit newer and more actively updated than Revolver and, to me, it is easier to implement.
A proof of concept is the Offline Maps app "OffMaps" ;)
http://itunes.apple.com/us/app/offmaps/id313854422?mt=8
I recently had to implement annotation clustering with MapKit. The solutions mentioned above are good, depending on your use case. I ended up going with FBAnnotationClustering (Objective-C) because it is free and had lots of stars and few issues on GitHub:
https://github.com/infinum/FBAnnotationClustering
The app I was working on was very map-centric, so it made sense to translate FBAnnotationClustering into Swift. Here's a blog post on the approach, which includes a link to the sample project on GitHub:
http://ribl.co/blog/2015/05/28/map-clustering-with-swift-how-we-implemented-it-into-the-ribl-ios-app/
Inspired by a WWDC 2011 video, this code works very well for me. It's maybe not the fastest of all the approaches proposed here, but it's free and it's definitely the simplest.
It basically uses two map views. One is hidden and holds every single annotation (allAnnotationMapView in my code). The other is visible and shows only the clusters, or the annotations themselves when they are single (mapView in my code).
- (void)didZoom:(UIGestureRecognizer*)gestureRecognizer {
if (gestureRecognizer.state == UIGestureRecognizerStateEnded){
[self updateVisibleAnnotations];
}
}
- (void)updateVisibleAnnotations {
static float marginFactor = 2.0f;
static float bucketSize = 50.0f;
MKMapRect visibleMapRect = [self.mapView visibleMapRect];
MKMapRect adjustedVisibleMapRect = MKMapRectInset(visibleMapRect, -marginFactor * visibleMapRect.size.width, -marginFactor * visibleMapRect.size.height);
CLLocationCoordinate2D leftCoordinate = [self.mapView convertPoint:CGPointZero toCoordinateFromView:self.view];
CLLocationCoordinate2D rightCoordinate = [self.mapView convertPoint:CGPointMake(bucketSize, 0) toCoordinateFromView:self.view];
double gridSize = MKMapPointForCoordinate(rightCoordinate).x - MKMapPointForCoordinate(leftCoordinate).x;
MKMapRect gridMapRect = MKMapRectMake(0, 0, gridSize, gridSize);
double startX = floor(MKMapRectGetMinX(adjustedVisibleMapRect) / gridSize) * gridSize;
double startY = floor(MKMapRectGetMinY(adjustedVisibleMapRect) / gridSize) * gridSize;
double endX = floor(MKMapRectGetMaxX(adjustedVisibleMapRect) / gridSize) * gridSize;
double endY = floor(MKMapRectGetMaxY(adjustedVisibleMapRect) / gridSize) * gridSize;
gridMapRect.origin.y = startY;
while(MKMapRectGetMinY(gridMapRect) <= endY) {
gridMapRect.origin.x = startX;
while (MKMapRectGetMinX(gridMapRect) <= endX) {
NSSet *allAnnotationsInBucket = [self.allAnnotationMapView annotationsInMapRect:gridMapRect];
NSSet *visibleAnnotationsInBucket = [self.mapView annotationsInMapRect:gridMapRect];
NSMutableSet *filteredAnnotationsInBucket = [[allAnnotationsInBucket objectsPassingTest:^BOOL(id obj, BOOL *stop) {
BOOL isPointMapItem = [obj isKindOfClass:[PointMapItem class]];
BOOL shouldBeMerged = NO;
if (isPointMapItem) {
PointMapItem *pointItem = (PointMapItem *)obj;
shouldBeMerged = pointItem.shouldBeMerged;
}
return shouldBeMerged;
}] mutableCopy];
NSSet *notMergedAnnotationsInBucket = [allAnnotationsInBucket objectsPassingTest:^BOOL(id obj, BOOL *stop) {
BOOL isPointMapItem = [obj isKindOfClass:[PointMapItem class]];
BOOL shouldBeMerged = NO;
if (isPointMapItem) {
PointMapItem *pointItem = (PointMapItem *)obj;
shouldBeMerged = pointItem.shouldBeMerged;
}
return isPointMapItem && !shouldBeMerged;
}];
for (PointMapItem *item in notMergedAnnotationsInBucket) {
[self.mapView addAnnotation:item];
}
if(filteredAnnotationsInBucket.count > 0) {
PointMapItem *annotationForGrid = (PointMapItem *)[self annotationInGrid:gridMapRect usingAnnotations:filteredAnnotationsInBucket];
[filteredAnnotationsInBucket removeObject:annotationForGrid];
annotationForGrid.containedAnnotations = [filteredAnnotationsInBucket allObjects];
[self.mapView addAnnotation:annotationForGrid];
//force reload of the image because it's not done if annotationForGrid is already present in the bucket!!
MKAnnotationView* annotationView = [self.mapView viewForAnnotation:annotationForGrid];
NSString *imageName = [AnnotationsViewUtils imageNameForItem:annotationForGrid selected:NO];
UILabel *countLabel = [[UILabel alloc] initWithFrame:CGRectMake(15, 2, 8, 8)];
[countLabel setFont:[UIFont fontWithName:POINT_FONT_NAME size:10]];
[countLabel setTextColor:[UIColor whiteColor]];
[annotationView addSubview:countLabel];
imageName = [AnnotationsViewUtils imageNameForItem:annotationForGrid selected:NO];
annotationView.image = [UIImage imageNamed:imageName];
if (filteredAnnotationsInBucket.count > 0){
[self.mapView deselectAnnotation:annotationForGrid animated:NO];
}
for (PointMapItem *annotation in filteredAnnotationsInBucket) {
[self.mapView deselectAnnotation:annotation animated:NO];
annotation.clusterAnnotation = annotationForGrid;
annotation.containedAnnotations = nil;
if ([visibleAnnotationsInBucket containsObject:annotation]) {
CLLocationCoordinate2D actualCoordinate = annotation.coordinate;
[UIView animateWithDuration:0.3 animations:^{
annotation.coordinate = annotation.clusterAnnotation.coordinate;
} completion:^(BOOL finished) {
annotation.coordinate = actualCoordinate;
[self.mapView removeAnnotation:annotation];
}];
}
}
}
gridMapRect.origin.x += gridSize;
}
gridMapRect.origin.y += gridSize;
}
}
- (id<MKAnnotation>)annotationInGrid:(MKMapRect)gridMapRect usingAnnotations:(NSSet *)annotations {
NSSet *visibleAnnotationsInBucket = [self.mapView annotationsInMapRect:gridMapRect];
NSSet *annotationsForGridSet = [annotations objectsPassingTest:^BOOL(id obj, BOOL *stop) {
BOOL returnValue = ([visibleAnnotationsInBucket containsObject:obj]);
if (returnValue) {
*stop = YES;
}
return returnValue;
}];
if (annotationsForGridSet.count != 0) {
return [annotationsForGridSet anyObject];
}
MKMapPoint centerMapPoint = MKMapPointMake(MKMapRectGetMinX(gridMapRect), MKMapRectGetMidY(gridMapRect));
NSArray *sortedAnnotations = [[annotations allObjects] sortedArrayUsingComparator:^(id obj1, id obj2) {
MKMapPoint mapPoint1 = MKMapPointForCoordinate(((id<MKAnnotation>)obj1).coordinate);
MKMapPoint mapPoint2 = MKMapPointForCoordinate(((id<MKAnnotation>)obj2).coordinate);
CLLocationDistance distance1 = MKMetersBetweenMapPoints(mapPoint1, centerMapPoint);
CLLocationDistance distance2 = MKMetersBetweenMapPoints(mapPoint2, centerMapPoint);
if (distance1 < distance2) {
return NSOrderedAscending;
}
else if (distance1 > distance2) {
return NSOrderedDescending;
}
return NSOrderedSame;
}];
return [sortedAnnotations objectAtIndex:0];
}
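The snippet above assumes some setup that the answer doesn't show: the hidden allAnnotationMapView populated with every annotation, and a gesture recognizer wired to didZoom:. A possible (hypothetical) setup, under those assumptions, could look like:
// Sketch: one-time setup in viewDidLoad, assuming self.mapView is on screen
// and allPointMapItems is the full annotation list.
self.allAnnotationMapView = [[MKMapView alloc] initWithFrame:CGRectZero]; // never added to the view hierarchy; used only as an annotation store
[self.allAnnotationMapView addAnnotations:allPointMapItems];

UIPinchGestureRecognizer *pinch = [[UIPinchGestureRecognizer alloc] initWithTarget:self
                                                                            action:@selector(didZoom:)];
pinch.delegate = self; // return YES from gestureRecognizer:shouldRecognizeSimultaneouslyWithGestureRecognizer:
                       // so the map's own pinch recognizer keeps working
[self.mapView addGestureRecognizer:pinch];
[self updateVisibleAnnotations];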
I think Foto Brisko (iTunes link) does this.
I do not think there is a Cocoa Touch framework for it.