hitTest overlapping CALayers - iPhone

I have a UIView that contains a drawing that I've made using CALayers added as sublayers. It is a red square with a blue triangle centered inside. I am able to determine which shape has been touched using the following code:
CGPoint location = [gesture locationInView:self.view];
CALayer *layerThatWasTapped = [self.view.layer hitTest:location];
NSLog(@"Master Tap Location: %@", NSStringFromCGPoint(location));
NSLog(@"Tapped Layer Name: %@", layerThatWasTapped.name);
NSLog(@"Tapped Layer Parent: %@", layerThatWasTapped.superlayer.name);
int counter = layerThatWasTapped.superlayer.sublayers.count;
NSArray *subs = layerThatWasTapped.superlayer.sublayers;
// Loop through all sublayers of the picture
for (int i = 0; i < counter; i++) {
    CALayer *layer = [subs objectAtIndex:i];
    CAShapeLayer *loopLayer = (CAShapeLayer *)layer.modelLayer;
    CGPathRef loopPath = loopLayer.path;
    CGPoint loopLoc = [gesture locationInView:self.view];
    loopLoc = [self.view.layer convertPoint:loopLoc toLayer:layer];
    NSLog(@"loopLoc Tap Location: %@", NSStringFromCGPoint(loopLoc));
    // Determine whether the hit falls on this layer's path
    if (CGPathContainsPoint(loopPath, NULL, loopLoc, YES)) {
        NSLog(@"Layer %i Name: %@ Hit", i, layer.name);
    } else {
        NSLog(@"Layer %i Name: %@ No Hit", i, layer.name);
    }
}
My problem lies with areas where the bounds of the triangle overlap the square. This results in the triangle registering the hit even when the tap is outside of the triangle's path. This is a simplified example; I may have many overlapping shapes stacked in the view.
Is there a way to loop through all of the sublayers and hit-test each one to see if it lies under the tapped point?
OR
Is there a way to have the bounds of my layers match their paths so the hit occurs only on a visible area?

Since you're using CAShapeLayer, this is pretty easy. Make a subclass of CAShapeLayer and override its containsPoint: method, like this:
@implementation MyShapeLayer

- (BOOL)containsPoint:(CGPoint)p
{
    return CGPathContainsPoint(self.path, NULL, p, false);
}

@end
Make sure that wherever you were allocating a CAShapeLayer, you change it to allocate a MyShapeLayer instead:
CAShapeLayer *triangle = [MyShapeLayer layer]; // this way
CAShapeLayer *triangle = [[MyShapeLayer alloc] init]; // or this way
Finally, keep in mind that when calling -[CALayer hitTest:], you need to pass in a point in the superlayer's coordinate space:
CGPoint location = [gesture locationInView:self.view];
CALayer *myLayer = self.view.layer;
location = [myLayer.superlayer convertPoint:location fromLayer:myLayer];
CALayer* layerThatWasTapped = [myLayer hitTest:location];
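If you need every shape under the tap rather than just the topmost one, you can also skip hitTest: and ask each sublayer directly, since containsPoint: is now path-aware. A minimal sketch, reusing the gesture recognizer from the question and assuming the shapes are MyShapeLayer instances added as sublayers of self.view.layer:
CGPoint location = [gesture locationInView:self.view];
NSMutableArray *layersUnderTap = [NSMutableArray array];
for (CALayer *sublayer in self.view.layer.sublayers) {
    // Convert the tap into each sublayer's own coordinate space.
    CGPoint localPoint = [self.view.layer convertPoint:location toLayer:sublayer];
    // containsPoint: runs the overridden, path-aware test from MyShapeLayer.
    if ([sublayer containsPoint:localPoint]) {
        [layersUnderTap addObject:sublayer];
    }
}
NSLog(@"Layers under tap: %@", layersUnderTap);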

Related

Free hand Draw polyline overlay in ios

I'm trying to trace a route freehand on an MKMapView using overlays (MKOverlay).
Each time the finger moves, I extend the polyline from the last coordinate to the new coordinate. Everything works fine, except that while the polyline overlay is being extended, the whole overlay blinks on the device (only sometimes), so I can't trace the problem.
The code I have tried is given below.
- (void)viewDidLoad
{
    [super viewDidLoad];
    j = 0;
    coords1 = malloc(2 * sizeof(CLLocationCoordinate2D));
    coordinatearray = [[NSMutableArray alloc] init];
    UIPanGestureRecognizer *gestureRecognizer = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(gestureDetected:)];
    [self.myMapView addGestureRecognizer:gestureRecognizer];
}
- (void)gestureDetected:(UIPanGestureRecognizer *)recognizer
{
    if (UIGestureRecognizerStateBegan == recognizer.state)
    {
        CGPoint point = [recognizer locationInView:self.myMapView];
        CLLocationCoordinate2D tapPoint = [self.myMapView convertPoint:point toCoordinateFromView:self.myMapView];
        CLLocation *curLocation = [[CLLocation alloc] initWithLatitude:tapPoint.latitude longitude:tapPoint.longitude];
        [coordinatearray addObject:curLocation];
    }
    coords1[0] = [[coordinatearray objectAtIndex:j] coordinate];
    if (UIGestureRecognizerStateChanged == recognizer.state)
    {
        j++;
        CGPoint point = [recognizer locationInView:self.myMapView];
        CLLocationCoordinate2D tapPoint = [self.myMapView convertPoint:point toCoordinateFromView:self.myMapView];
        CLLocation *curLocation = [[CLLocation alloc] initWithLatitude:tapPoint.latitude longitude:tapPoint.longitude];
        [coordinatearray addObject:curLocation];
        coords1[1] = CLLocationCoordinate2DMake(tapPoint.latitude, tapPoint.longitude);
        polyLine = [MKPolyline polylineWithCoordinates:coords1 count:2];
        [self.myMapView addOverlay:polyLine];
    }
}
In the overlay delegate:
- (MKOverlayView *)mapView:(MKMapView *)mapView viewForOverlay:(id <MKOverlay>)overlay {
    if ([overlay isKindOfClass:[MKPolyline class]]) {
        MKPolylineView *polylineView = [[MKPolylineView alloc] initWithPolyline:overlay];
        polylineView.strokeColor = [UIColor orangeColor];
        polylineView.lineWidth = 20;
        polylineView.fillColor = [[UIColor orangeColor] colorWithAlphaComponent:.1];
        return polylineView;
    }
    return nil;
}
Does anybody know why that flickering or blinking effect occurs and how to remove it?
Thanks in advance.
Rather than adding hundreds of really small views (which is really computationally intensive), I would remove the polyline overlay and add a new one containing all of the points at each change of the pan recognizer (for a smoother effect, you can first add the new one and then remove the old one). Use your coordinatearray to create an MKPolyline overlay that contains all of the points, rather than just the last two.
You could do something like:
[coordinatearray addObject:curLocation];
CLLocationCoordinate2D *coordArray = malloc(sizeof(CLLocationCoordinate2D) * [coordinatearray count]);
memcpy(coordArray, self.coordinates, sizeof(CLLocationCoordinate2D) * ([coordinatearray count] - 1));
coordArray[[coordinatearray count] - 1] = curLocation.coordinate;
free(self.coordinates);
self.coordinates = coordArray;
MKPolyline *polyline = [MKPolyline polylineWithCoordinates:coordArray count:[coordinatearray count]];
MKPolyline *old = [[self.mapView overlays] lastObject];
[self.mapView addOverlay:polyline];
[self.mapView removeOverlay:old];
Here self.coordinates is a property of type CLLocationCoordinate2D *. This way you can memcpy the existing array into the new one (memcpy is really efficient) and only need to append the last point, rather than looping through every point of the NSArray *coordinatearray.
You also have to change your if (UIGestureRecognizerStateBegan == recognizer.state) branch to store the first tapped point in self.coordinates.
Just something like:
self.coordinates = malloc(sizeof(CLLocationCoordinate2D));
self.coordinates[0] = curLocation.coordinate;
EDIT: I think the problem is that map overlays draw themselves at different discrete zoom levels. At certain zoom levels it appears that the overlay first draws itself at the larger zoom level and then at the smaller one (in fact drawing over the previously drawn overlay). When you try to animate the drawing of the user's panning gesture at such a zoom level, that is why the overlay keeps flickering.
A possible solution would be to use a transparent view that you put on top of the map view, where the drawing is performed while the user keeps moving the finger. Only once the panning gesture ends do you create a map overlay that you "pin" to the map, and you clean the drawing view. You also need to be careful when redrawing this view: each time, specify only the rect that needs to be redrawn and draw only inside that rect, as redrawing the whole thing each time would cause a flicker in this view as well. This is definitely much more coding, but it should work; a sketch of such a view follows. Check this question for how to incrementally draw on a view.
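Here is a minimal sketch of that kind of transparent drawing view, under the assumptions above (the class name and details are hypothetical, not from the answer; converting the points and pinning the finished MKPolyline when the gesture ends is left out):
// Hypothetical transparent view laid over the map while the user pans.
// On gesture end you would convert its points with
// -[MKMapView convertPoint:toCoordinateFromView:], build the MKPolyline,
// add it as an overlay, and clear this view.
@interface DrawingOverlayView : UIView
@property (nonatomic, strong) NSMutableArray *points; // NSValue-wrapped CGPoints
@end

@implementation DrawingOverlayView

- (instancetype)initWithFrame:(CGRect)frame {
    if ((self = [super initWithFrame:frame])) {
        self.opaque = NO;
        self.backgroundColor = [UIColor clearColor];
        _points = [NSMutableArray array];
    }
    return self;
}

- (void)addPoint:(CGPoint)point {
    [self.points addObject:[NSValue valueWithCGPoint:point]];
    if (self.points.count < 2)
        return;
    // Invalidate only the rect around the newest segment, instead of the
    // whole view, to avoid the flicker described above.
    CGPoint prev = [self.points[self.points.count - 2] CGPointValue];
    CGRect dirty = CGRectStandardize(CGRectMake(prev.x, prev.y, point.x - prev.x, point.y - prev.y));
    [self setNeedsDisplayInRect:CGRectInset(dirty, -12, -12)];
}

- (void)drawRect:(CGRect)rect {
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSetStrokeColorWithColor(ctx, [UIColor orangeColor].CGColor);
    CGContextSetLineWidth(ctx, 20);
    CGContextSetLineJoin(ctx, kCGLineJoinRound);
    // Stroke the accumulated segments; Core Graphics clips to the dirty rect.
    for (NSUInteger i = 1; i < self.points.count; i++) {
        CGPoint a = [self.points[i - 1] CGPointValue];
        CGPoint b = [self.points[i] CGPointValue];
        CGContextMoveToPoint(ctx, a.x, a.y);
        CGContextAddLineToPoint(ctx, b.x, b.y);
    }
    CGContextStrokePath(ctx);
}

@end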

Drag UIView around Shape Comprised of CGMutablePaths

I have a simple oval shape (comprised of CGMutablePaths) around which I'd like the user to be able to drag an object. Just wondering how complicated it is to do this: do I need to know a ton of math and physics, or is there some simple built-in way that will allow me to do this? I.e., the user drags this object around the oval, and it orbits it.
This is an interesting problem. We want to drag an object, but constrain it to lie on a CGPath. You said you have “a simple oval shape”, but that's boring. Let's do it with a figure 8. It'll look like this when we're done:
So how do we do this? Given an arbitrary point, finding the nearest point on a Bezier spline is rather complicated. Let's do it by brute force. We'll just make an array of points closely spaced along the path. The object starts out on one of those points. As we try to drag the object, we'll look at the neighboring points. If either is nearer, we'll move the object to that neighbor point.
Even getting an array of closely-spaced points along a Bezier curve is not trivial, but there is a way to get Core Graphics to do it for us. We can use CGPathCreateCopyByDashingPath with a short dash pattern. This creates a new path with many short segments. We'll take the endpoints of each segment as our array of points.
That means we need to iterate over the elements of a CGPath. The only way to iterate over the elements of a CGPath is with the CGPathApply function, which takes a callback. It would be much nicer to iterate over path elements with a block, so let's add a category to UIBezierPath. We start by creating a new project using the “Single View Application” template, with ARC enabled. We add a category:
@interface UIBezierPath (forEachElement)
- (void)forEachElement:(void (^)(CGPathElement const *element))block;
@end
The implementation is very simple. We just pass the block as the info argument of the path applier function.
#import "UIBezierPath+forEachElement.h"
typedef void (^UIBezierPath_forEachElement_Block)(CGPathElement const *element);
#implementation UIBezierPath (forEachElement)
static void applyBlockToPathElement(void *info, CGPathElement const *element) {
__unsafe_unretained UIBezierPath_forEachElement_Block block = (__bridge UIBezierPath_forEachElement_Block)info;
block(element);
}
- (void)forEachElement:(void (^)(const CGPathElement *))block {
CGPathApply(self.CGPath, (__bridge void *)block, applyBlockToPathElement);
}
#end
For this toy project, we'll do everything else in the view controller. We'll need some instance variables:
@implementation ViewController {
We need an ivar to hold the path that the object follows.
    UIBezierPath *path_;
It would be nice to see the path, so we'll use a CAShapeLayer to display it. (We need to add the QuartzCore framework to our target for this to work.)
    CAShapeLayer *pathLayer_;
We'll need to store the array of points-along-the-path somewhere. Let's use an NSMutableData:
    NSMutableData *pathPointsData_;
We'll want a pointer to the array of points, typed as a CGPoint pointer:
    CGPoint const *pathPoints_;
And we need to know how many of those points there are:
    NSInteger pathPointsCount_;
For the “object”, we'll have a draggable view on the screen. I'm calling it the “handle”:
    UIView *handleView_;
We need to know which of the path points the handle is currently on:
    NSInteger handlePathPointIndex_;
And while the pan gesture is active, we need to keep track of where the user has tried to drag the handle:
    CGPoint desiredHandleCenter_;
}
Now we have to get to work initializing all those ivars! We can create our views and layers in viewDidLoad:
- (void)viewDidLoad {
    [super viewDidLoad];
    [self initPathLayer];
    [self initHandleView];
    [self initHandlePanGestureRecognizer];
}
We create the path-displaying layer like this:
- (void)initPathLayer {
    pathLayer_ = [CAShapeLayer layer];
    pathLayer_.lineWidth = 1;
    pathLayer_.fillColor = nil;
    pathLayer_.strokeColor = [UIColor blackColor].CGColor;
    pathLayer_.lineCap = kCALineCapButt;
    pathLayer_.lineJoin = kCALineJoinRound;
    [self.view.layer addSublayer:pathLayer_];
}
Note that we haven't set the path layer's path yet! It's too soon to know the path at this time, because my view hasn't been laid out at its final size yet.
We'll draw a red circle for the handle:
- (void)initHandleView {
    handlePathPointIndex_ = 0;
    CGRect rect = CGRectMake(0, 0, 30, 30);
    CAShapeLayer *circleLayer = [CAShapeLayer layer];
    circleLayer.fillColor = nil;
    circleLayer.strokeColor = [UIColor redColor].CGColor;
    circleLayer.lineWidth = 2;
    circleLayer.path = [UIBezierPath bezierPathWithOvalInRect:CGRectInset(rect, circleLayer.lineWidth, circleLayer.lineWidth)].CGPath;
    circleLayer.frame = rect;
    handleView_ = [[UIView alloc] initWithFrame:rect];
    [handleView_.layer addSublayer:circleLayer];
    [self.view addSubview:handleView_];
}
Again, it's too soon to know exactly where we'll need to put the handle on screen. We'll take care of that at view layout time.
We also need to attach a pan gesture recognizer to the handle:
- (void)initHandlePanGestureRecognizer {
    UIPanGestureRecognizer *recognizer = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(handleWasPanned:)];
    [handleView_ addGestureRecognizer:recognizer];
}
At view layout time, we need to create the path based on the size of the view, compute the points along the path, make the path layer show the path, and make sure the handle is on the path:
- (void)viewDidLayoutSubviews {
    [super viewDidLayoutSubviews];
    [self createPath];
    [self createPathPoints];
    [self layoutPathLayer];
    [self layoutHandleView];
}
In your question, you said you're using a “simple oval shape”, but that's boring. Let's draw a nice figure 8. Figuring out what I'm doing is left as an exercise for the reader:
- (void)createPath {
    CGRect bounds = self.view.bounds;
    CGFloat const radius = bounds.size.height / 6;
    CGFloat const offset = 2 * radius * M_SQRT1_2;
    CGPoint const topCenter = CGPointMake(CGRectGetMidX(bounds), CGRectGetMidY(bounds) - offset);
    CGPoint const bottomCenter = { topCenter.x, CGRectGetMidY(bounds) + offset };
    path_ = [UIBezierPath bezierPath];
    [path_ addArcWithCenter:topCenter radius:radius startAngle:M_PI_4 endAngle:-M_PI - M_PI_4 clockwise:NO];
    [path_ addArcWithCenter:bottomCenter radius:radius startAngle:-M_PI_4 endAngle:M_PI + M_PI_4 clockwise:YES];
    [path_ closePath];
}
Next we're going to want to compute the array of points along that path. We'll need a helper routine to pick out the endpoint of each path element:
static CGPoint *lastPointOfPathElement(CGPathElement const *element) {
    NSInteger index;
    switch (element->type) {
        case kCGPathElementMoveToPoint: index = 0; break;
        case kCGPathElementAddCurveToPoint: index = 2; break;
        case kCGPathElementAddLineToPoint: index = 0; break;
        case kCGPathElementAddQuadCurveToPoint: index = 1; break;
        case kCGPathElementCloseSubpath: index = NSNotFound; break;
    }
    return index == NSNotFound ? NULL : &element->points[index];
}
To find the points, we need to ask Core Graphics to “dash” the path:
- (void)createPathPoints {
    CGPathRef cgDashedPath = CGPathCreateCopyByDashingPath(path_.CGPath, NULL, 0, (CGFloat[]){ 1.0f, 1.0f }, 2);
    UIBezierPath *dashedPath = [UIBezierPath bezierPathWithCGPath:cgDashedPath];
    CGPathRelease(cgDashedPath);
It turns out that when Core Graphics dashes the path, it can create segments that slightly overlap. We'll want to eliminate those by filtering out each point that's too close to its predecessor, so we'll define a minimum inter-point distance:
    static CGFloat const kMinimumDistance = 0.1f;
To do the filtering, we'll need to keep track of that predecessor:
    __block CGPoint priorPoint = { HUGE_VALF, HUGE_VALF };
We need to create the NSMutableData that will hold the CGPoints:
    pathPointsData_ = [[NSMutableData alloc] init];
At last we're ready to iterate over the elements of the dashed path:
    [dashedPath forEachElement:^(const CGPathElement *element) {
Each path element can be a “move-to”, a “line-to”, a “quadratic-curve-to”, a “curve-to” (which is a cubic curve), or a “close-path”. All of those except close-path define a segment endpoint, which we pick up with our helper function from earlier:
        CGPoint *p = lastPointOfPathElement(element);
        if (!p)
            return;
If the endpoint is too close to the prior point, we discard it:
        if (hypotf(p->x - priorPoint.x, p->y - priorPoint.y) < kMinimumDistance)
            return;
Otherwise, we append it to the data and save it as the predecessor of the next endpoint:
        [pathPointsData_ appendBytes:p length:sizeof *p];
        priorPoint = *p;
    }];
Now we can initialize our pathPoints_ and pathPointsCount_ ivars:
    pathPoints_ = (CGPoint const *)pathPointsData_.bytes;
    pathPointsCount_ = pathPointsData_.length / sizeof *pathPoints_;
But we have one more point we need to filter. The very first point along the path might be too close to the very last point. If so, we'll just discard the last point by decrementing the count:
    if (pathPointsCount_ > 1 && hypotf(pathPoints_[0].x - priorPoint.x, pathPoints_[0].y - priorPoint.y) < kMinimumDistance) {
        pathPointsCount_ -= 1;
    }
}
Blammo. Point array created. Oh yeah, we also need to update the path layer. Brace yourself:
- (void)layoutPathLayer {
    pathLayer_.path = path_.CGPath;
    pathLayer_.frame = self.view.bounds;
}
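One method called above, layoutHandleView, isn't shown in this excerpt. All it needs to do is snap the handle onto its current path point; presumably something like this (my reconstruction, not the original code):
- (void)layoutHandleView {
    if (pathPointsCount_ == 0)
        return;
    // Normalize the index in case it has wrapped past either end of the array.
    handlePathPointIndex_ = [self handlePathPointIndexWithOffset:0];
    handleView_.center = pathPoints_[handlePathPointIndex_];
}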
Now we can worry about dragging the handle around and making sure it stays on the path. The pan gesture recognizer sends this action:
- (void)handleWasPanned:(UIPanGestureRecognizer *)recognizer {
    switch (recognizer.state) {
If this is the start of the pan (drag), we just want to save the starting location of the handle as its desired location:
        case UIGestureRecognizerStateBegan: {
            desiredHandleCenter_ = handleView_.center;
            break;
        }
Otherwise, we need to update the desired location based on the drag, and then slide the handle along the path toward the new desired location:
        case UIGestureRecognizerStateChanged:
        case UIGestureRecognizerStateEnded:
        case UIGestureRecognizerStateCancelled: {
            CGPoint translation = [recognizer translationInView:self.view];
            desiredHandleCenter_.x += translation.x;
            desiredHandleCenter_.y += translation.y;
            [self moveHandleTowardPoint:desiredHandleCenter_];
            break;
        }
We put in a default clause so clang won't warn us about the other states that we don't care about:
        default:
            break;
    }
Finally we reset the translation of the gesture recognizer:
    [recognizer setTranslation:CGPointZero inView:self.view];
}
So how do we move the handle toward a point? We want to slide it along the path. First, we have to figure out which direction to slide it:
- (void)moveHandleTowardPoint:(CGPoint)point {
    CGFloat earlierDistance = [self distanceToPoint:point ifHandleMovesByOffset:-1];
    CGFloat currentDistance = [self distanceToPoint:point ifHandleMovesByOffset:0];
    CGFloat laterDistance = [self distanceToPoint:point ifHandleMovesByOffset:1];
It's possible that both directions would move the handle further from the desired point, so let's bail out in that case:
    if (currentDistance <= earlierDistance && currentDistance <= laterDistance)
        return;
OK, so at least one of the directions will move the handle closer. Let's figure out which one:
    NSInteger direction;
    CGFloat distance;
    if (earlierDistance < laterDistance) {
        direction = -1;
        distance = earlierDistance;
    } else {
        direction = 1;
        distance = laterDistance;
    }
But we've only checked the nearest neighbors of the handle's starting point. We want to slide as far as we can along the path in that direction, as long as the handle is getting closer to the desired point:
    NSInteger offset = direction;
    while (true) {
        NSInteger nextOffset = offset + direction;
        CGFloat nextDistance = [self distanceToPoint:point ifHandleMovesByOffset:nextOffset];
        if (nextDistance >= distance)
            break;
        distance = nextDistance;
        offset = nextOffset;
    }
Finally, update the handle's position to our newly-discovered point:
    handlePathPointIndex_ += offset;
    [self layoutHandleView];
}
That just leaves the small matter of computing the distance from the handle to a point, should the handle be moved along the path by some offset. Your old buddy hypotf computes the Euclidean distance so you don't have to:
- (CGFloat)distanceToPoint:(CGPoint)point ifHandleMovesByOffset:(NSInteger)offset {
    NSInteger index = [self handlePathPointIndexWithOffset:offset];
    CGPoint proposedHandlePoint = pathPoints_[index];
    return hypotf(point.x - proposedHandlePoint.x, point.y - proposedHandlePoint.y);
}
(You could speed things up by using squared distances to avoid the square roots that hypotf is computing.)
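If you wanted that optimization, a squared-distance variant might look like this (a sketch, not from the original answer; comparing squared distances preserves ordering because squaring is monotonic for non-negative values):
- (CGFloat)squaredDistanceToPoint:(CGPoint)point ifHandleMovesByOffset:(NSInteger)offset {
    // Callers would compare these squared values instead of true distances.
    CGPoint proposed = pathPoints_[[self handlePathPointIndexWithOffset:offset]];
    CGFloat dx = point.x - proposed.x;
    CGFloat dy = point.y - proposed.y;
    return dx * dx + dy * dy;
}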
One more tiny detail: the index into the points array needs to wrap around in both directions. That's what we've been relying on the mysterious handlePathPointIndexWithOffset: method to do:
- (NSInteger)handlePathPointIndexWithOffset:(NSInteger)offset {
    NSInteger index = handlePathPointIndex_ + offset;
    while (index < 0) {
        index += pathPointsCount_;
    }
    while (index >= pathPointsCount_) {
        index -= pathPointsCount_;
    }
    return index;
}
@end
Fin. I've put all of the code in a gist for easy downloading. Enjoy.

iOS Face detection transformation

I have followed a tutorial to detect a face within an image, and it works; it creates a red rectangle around the face by making a UIView *faceView. Now I am trying to obtain the coordinates of the detected face, but the results returned are slightly off along the y-axis. How can I fix this? Where am I going wrong?
This is what I have attempted:
CGRect newBounds = CGRectMake(faceFeature.bounds.origin.x,
                              imageView.bounds.size.height - faceFeature.bounds.origin.y - faceFeature.bounds.size.height,
                              faceFeature.bounds.size.width,
                              faceFeature.bounds.size.height);
This is the source code for the detection :
- (void)markFaces:(UIImageView *)facePicture
{
    // draw a CI image with the previously loaded face detection picture
    CIImage *image = [CIImage imageWithCGImage:facePicture.image.CGImage];
    // create a face detector - since speed is not an issue we'll use a high accuracy detector
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                              context:nil
                                              options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh forKey:CIDetectorAccuracy]];
    // create an array containing all the detected faces from the detector
    NSArray *features = [detector featuresInImage:image];
    // we'll iterate through every detected face. CIFaceFeature provides us
    // with the width for the entire face, and the coordinates of each eye
    // and the mouth if detected. Also provided are BOOLs for the eyes and
    // mouth so we can check if they already exist.
    for (CIFaceFeature *faceFeature in features)
    {
        // get the width of the face
        CGFloat faceWidth = faceFeature.bounds.size.width;
        // create a UIView using the bounds of the face
        UIView *faceView = [[UIView alloc] initWithFrame:faceFeature.bounds];
        // add a border around the newly created UIView
        faceView.layer.borderWidth = 1;
        faceView.layer.borderColor = [[UIColor redColor] CGColor];
        CGRect newBounds = CGRectMake(faceFeature.bounds.origin.x,
                                      imageView.bounds.size.height - faceFeature.bounds.origin.y - faceFeature.bounds.size.height,
                                      faceFeature.bounds.size.width,
                                      faceFeature.bounds.size.height);
        NSLog(@"My view frame: %@", NSStringFromCGRect(newBounds));
        [self.view addSubview:faceView];
        if (faceFeature.hasLeftEyePosition)
        {
        }
        if (faceFeature.hasRightEyePosition)
        {
        }
        if (faceFeature.hasMouthPosition)
        {
        }
    }
}
- (void)faceDetector
{
    // Load the picture for face detection
    UIImageView *image = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"jolie.jpg"]];
    // Draw the face detection image
    [self.view addSubview:image];
    // flip image on y-axis to match coordinate system used by Core Image
    [image setTransform:CGAffineTransformMakeScale(1, -1)];
    // flip the entire window to make everything right side up
    [self.view setTransform:CGAffineTransformMakeScale(1, -1)];
    // Execute the method used to markFaces in background
    [self performSelectorInBackground:@selector(markFaces:) withObject:image];
}
The Core Image coordinate system and the UIKit coordinate system are quite different. CIFaceFeature provides coordinates in the Core Image coordinate system, and for your purposes you need to convert them into the UIKit coordinate system:
// The Core Image coordinate system origin is at the bottom left corner and UIKit's is at the top left corner.
// So we need to translate feature positions before drawing them to screen.
// In order to do so we make an affine transform.
// **Note**
// It's better to convert Core Image coordinates to UIKit coordinates and
// not the other way around, because doing so could affect other drawings.
// I.e., in the original sample project you see the image at the bottom. Isn't that weird?
CGAffineTransform transform = CGAffineTransformMakeScale(1, -1);
transform = CGAffineTransformTranslate(transform, 0, -_pickerImageView.bounds.size.height);
for (CIFaceFeature *faceFeature in features)
{
    // Translate Core Image coordinates to UIKit coordinates
    const CGRect faceRect = CGRectApplyAffineTransform(faceFeature.bounds, transform);
    // create a UIView using the bounds of the face
    UIView *faceView = [[UIView alloc] initWithFrame:faceRect];
    faceView.layer.borderWidth = 1;
    faceView.layer.borderColor = [[UIColor redColor] CGColor];
    // get the width of the face
    CGFloat faceWidth = faceFeature.bounds.size.width;
    // add the new view to create a box around the face
    [_pickerImageView addSubview:faceView];
    if (faceFeature.hasLeftEyePosition)
    {
        // Get the left eye position: translate Core Image coordinates to UIKit coordinates
        const CGPoint leftEyePos = CGPointApplyAffineTransform(faceFeature.leftEyePosition, transform);
        // Note1:
        // If you want to add this to the faceView instead of the imageView, we need to translate its
        // coordinates a bit more: {-x, -y}, in other words {-faceFeature.bounds.origin.x, -faceFeature.bounds.origin.y}.
        // You could do the same for the other eye and the mouth too.
        // Create a UIView to represent the left eye; its size depends on the width of the face.
        UIView *leftEyeView = [[UIView alloc] initWithFrame:CGRectMake(leftEyePos.x - faceWidth*EYE_SIZE_RATE*0.5f /*- faceFeature.bounds.origin.x*/, // See Note1
                                                                       leftEyePos.y - faceWidth*EYE_SIZE_RATE*0.5f /*- faceFeature.bounds.origin.y*/, // See Note1
                                                                       faceWidth*EYE_SIZE_RATE,
                                                                       faceWidth*EYE_SIZE_RATE)];
        leftEyeView.backgroundColor = [[UIColor magentaColor] colorWithAlphaComponent:0.3];
        leftEyeView.layer.cornerRadius = faceWidth*EYE_SIZE_RATE*0.5;
        //[faceView addSubview:leftEyeView]; // See Note1
        [_pickerImageView addSubview:leftEyeView];
    }
}

How to test if a point is in a view

I have a UIImageView and I have a CGPoint on the screen. I want to be able to test that point to see if it is in the UIImageView. What would be the best way to do this?
A CGPoint is no good without a reference point. If your point is in the window's coordinates, then you can convert it using:
CGPoint locationInView = [imageView convertPoint:point fromView:imageView.window];
if (CGRectContainsPoint(imageView.bounds, locationInView)) {
    // Point lies inside the bounds.
}
You may also call the pointInside:withEvent: method:
if ([imageView pointInside:locationInView withEvent:nil]) {
    // Point lies inside the bounds.
}
Tested in Swift 4 (note that frame is expressed in the superview's coordinate space, so point must be too):
view.frame.contains(point)
if(CGRectContainsPoint([myView frame], point))
where point is your CGPoint and myView is your UIImageView
I'll assume you have a full-screen window (pretty reasonable, I think). Then you can transform the point from the window's coordinate space to the UIImageView's using:
CGPoint point = ...
UIWindow *window = ...
UIImageView *imageView = ...
CGPoint transformedPoint = [window convertPoint:point toView:imageView];
Then, you can test whether the point is in the image view's bounds as follows:
if (CGRectContainsPoint(imageView.bounds, transformedPoint))
{
    // do something interesting....
}
In Swift 3
let isPointInFrame = UIScreen.main.bounds.contains(newLocation)

Applying Zoom Effect In cocos2D gaming environment?

I'm working on a game with the cocos2D game engine, and all the sprites are loaded when the level loads. Because some of the sprites (obstacles) are taller than 320 pixels, it is difficult to see them all at once. So for convenience I want to apply ZOOM IN and ZOOM OUT effects that shrink all of the level's sprites at once and, on zooming back in, restore them to their old positions.
Can I achieve this?
If yes, then how?
Please tell me about pinch zoom also.
Zooming is fairly simple: simply set the scale property of your main game layer... but there are a few catches.
When you scale the layer, it will shift the position of the layer. It won't automatically zoom towards the center of what you're currently looking at. If you have any type of scrolling in your game, you'll need to account for this.
To do this, set the anchorPoint of your layer to ccp(0.0f, 0.0f), and then calculate how much your layer has shifted, and reposition it accordingly.
- (void)scale:(CGFloat)newScale scaleCenter:(CGPoint)scaleCenter {
    // scaleCenter is the point to zoom to.
    // If you are doing a pinch zoom, this should be the center of your pinch.
    // Get the original center point.
    CGPoint oldCenterPoint = ccp(scaleCenter.x * yourLayer.scale, scaleCenter.y * yourLayer.scale);
    // Set the scale.
    yourLayer.scale = newScale;
    // Get the new center point.
    CGPoint newCenterPoint = ccp(scaleCenter.x * yourLayer.scale, scaleCenter.y * yourLayer.scale);
    // Then calculate the delta.
    CGPoint centerPointDelta = ccpSub(oldCenterPoint, newCenterPoint);
    // Now adjust your layer by the delta.
    yourLayer.position = ccpAdd(yourLayer.position, centerPointDelta);
}
Pinch zoom is easier... just detect the touchesMoved, and then call your scaling routine.
- (void)ccTouchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    // Examine allTouches instead of just touches. Touches tracks only the touch that is currently moving...
    // But stationary touches still trigger a multi-touch gesture.
    NSArray *allTouches = [[event allTouches] allObjects];
    if ([allTouches count] == 2) {
        // Get two of the touches to handle the zoom
        UITouch *touchOne = [allTouches objectAtIndex:0];
        UITouch *touchTwo = [allTouches objectAtIndex:1];
        // Get the touches and previous touches.
        CGPoint touchLocationOne = [touchOne locationInView:[touchOne view]];
        CGPoint touchLocationTwo = [touchTwo locationInView:[touchTwo view]];
        CGPoint previousLocationOne = [touchOne previousLocationInView:[touchOne view]];
        CGPoint previousLocationTwo = [touchTwo previousLocationInView:[touchTwo view]];
        // Get the distance for the current and previous touches.
        CGFloat currentDistance = sqrt(
            pow(touchLocationOne.x - touchLocationTwo.x, 2.0f) +
            pow(touchLocationOne.y - touchLocationTwo.y, 2.0f));
        CGFloat previousDistance = sqrt(
            pow(previousLocationOne.x - previousLocationTwo.x, 2.0f) +
            pow(previousLocationOne.y - previousLocationTwo.y, 2.0f));
        // Get the delta of the distances.
        CGFloat distanceDelta = currentDistance - previousDistance;
        // Next, position the camera to the middle of the pinch.
        // Get the middle position of the pinch.
        CGPoint pinchCenter = ccpMidpoint(touchLocationOne, touchLocationTwo);
        // Then, convert the screen position to node space... use your game layer to do this.
        pinchCenter = [yourLayer convertToNodeSpace:pinchCenter];
        // Finally, call the scale method to scale by the distanceDelta, passing in the pinch center as well.
        // Also, multiply the delta by PINCH_ZOOM_MULTIPLIER to slow down the scale speed.
        // A PINCH_ZOOM_MULTIPLIER of 0.005f works for me, but experiment to find one that you like.
        [self scale:yourLayer.scale - (distanceDelta * PINCH_ZOOM_MULTIPLIER)
        scaleCenter:pinchCenter];
    }
}
If all the sprites have the same parent, you can just scale their parent and they will be scaled with it, keeping their coordinates relative to the parent.
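For example (a minimal sketch; spriteLayer and obstacleSprite are assumed names, not from the answer):
// Parent node that holds every sprite in the level.
CCLayer *spriteLayer = [CCLayer node];
[self addChild:spriteLayer];
[spriteLayer addChild:obstacleSprite]; // children keep their positions relative to the parent
// Zoom the whole level out to half size...
spriteLayer.scale = 0.5f;
// ...and later back to full size.
spriteLayer.scale = 1.0f;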
This code scales my layer by 2 toward a specific location:
[layer setScale:2];
layer.position = ccp(240/2 + 40, 160 * 1.5);
double dx = (touchLocation.x * 2 - 240);
double dy = (touchLocation.y * 2 - 160);
layer.position = ccp(layer.position.x - dx, layer.position.y - dy);
My code, which works better than the other ones:
- (void)ccTouchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    NSArray *allTouches = [[event allTouches] allObjects];
    CCLayer *gameField = (CCLayer *)[self getChildByTag:TAG_GAMEFIELD];
    if (allTouches.count == 2) {
        UIView *v = [[CCDirector sharedDirector] view];
        UITouch *tOne = [allTouches objectAtIndex:0];
        UITouch *tTwo = [allTouches objectAtIndex:1];
        CGPoint firstTouch = [tOne locationInView:v];
        CGPoint secondTouch = [tTwo locationInView:v];
        CGPoint oldFirstTouch = [tOne previousLocationInView:v];
        CGPoint oldSecondTouch = [tTwo previousLocationInView:v];
        float oldPinchDistance = ccpDistance(oldFirstTouch, oldSecondTouch);
        float newPinchDistance = ccpDistance(firstTouch, secondTouch);
        float distanceDelta = newPinchDistance - oldPinchDistance;
        NSLog(@"%f", distanceDelta);
        CGPoint pinchCenter = ccpMidpoint(firstTouch, secondTouch);
        pinchCenter = [gameField convertToNodeSpace:pinchCenter];
        gameField.scale = gameField.scale - distanceDelta / 100;
        if (gameField.scale < 0) {
            gameField.scale = 0;
        }
    }
}