Faster way to pass touch through transparent views? - iPhone

I have some objects which are just UIImageViews of PNGs with transparent areas that overlap. I stack a few images near each other where the transparent parts overlap. When one of them is touched, I play some audio.
To let touches pass through the transparent areas, and thus let the proper object be recognized, I override pointInside:withEvent: like this:
- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event {
    // Draw the single pixel under the touch into a 1x1 alpha-only bitmap
    // and check whether it is (nearly) transparent.
    unsigned char pixel[1] = {0};
    CGContextRef context = CGBitmapContextCreate(pixel, 1, 1, 8, 1,
                                                 NULL, kCGImageAlphaOnly);
    UIGraphicsPushContext(context);
    [self.image drawAtPoint:CGPointMake(-point.x, -point.y)];
    UIGraphicsPopContext();
    CGContextRelease(context);

    CGFloat alpha = pixel[0] / 255.0;
    BOOL transparent = alpha < 0.01;
    if (transparent) {
        return NO;
    } else {
        return YES;
    }
}
The problem: when around 3 or 4 of them are stacked on the screen, a noticeable delay appears for the touch. I.e., the top layer triggers immediately, but the bottom layer takes some milliseconds longer.
Is there a faster way of perhaps passing the touch through transparent areas of my PNG?
Thanks

Related

Can I get CGPath from image with alpha channel?

I managed to mask views using a similar approach to the one here: How to mask UIViews in iOS
But now I need to detect touches in this masked view. Gesture recognizers and buttons detect touches in the bounds area, basically ignoring the mask. So I think that if I could get a CGPath from my mask image, it would be easy to handle touches. Any suggestions on how to get a CGPath from a mask image?
You could set the UIGestureRecognizer's delegate and implement - (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer shouldReceiveTouch:(UITouch *)touch:
1. Get the location of the touch in the recognizer's view:
- (CGPoint)locationInView:(UIView *)view
2. Find out the alpha value of the image at that location. For that, use this code. (For efficiency, you can precalculate the alpha values ahead of time for the whole image and store them in a buffer, so you don't have to redraw a pixel on every touch. I kept it simple here and did it for a single pixel. If you want to see how to do the whole image at once, look at this post.)
unsigned char pixelData[1] = {0};
CGContextRef context = CGBitmapContextCreate(pixelData, 1, 1, 8, 1,
                                             NULL, kCGImageAlphaOnly);
UIGraphicsPushContext(context);
// im is the image being hit-tested; point is the location from step 1
[im drawAtPoint:CGPointMake(-point.x, -point.y)];
UIGraphicsPopContext();
CGContextRelease(context);

CGFloat alphaChannel = pixelData[0] / 255.0;
BOOL transparent = alphaChannel < 0.01;
3. Based on that value, return YES or NO from the delegate method.
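For reference, here is a minimal sketch of how those steps fit together in the delegate method, assuming the recognizer is attached to a UIImageView whose bounds match its image's size (the cast to UIImageView and the 0.01 threshold are just illustrative):
- (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer
       shouldReceiveTouch:(UITouch *)touch {
    // Step 1: touch location in the recognizer's view
    UIImageView *imageView = (UIImageView *)gestureRecognizer.view;
    CGPoint point = [touch locationInView:imageView];

    // Step 2: sample the alpha of the image at that point
    unsigned char pixelData[1] = {0};
    CGContextRef context = CGBitmapContextCreate(pixelData, 1, 1, 8, 1,
                                                 NULL, kCGImageAlphaOnly);
    UIGraphicsPushContext(context);
    [imageView.image drawAtPoint:CGPointMake(-point.x, -point.y)];
    UIGraphicsPopContext();
    CGContextRelease(context);

    // Step 3: only receive touches on (sufficiently) opaque pixels
    CGFloat alpha = pixelData[0] / 255.0;
    return alpha >= 0.01;
}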

Allowing touch to pass through transparent images

I'm using a piece of code I found to pass touch events through transparent parts of a UIImageView. I'm subclassing UIImageView and adding this:
- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event {
    // Draw the single pixel under the touch into a 1x1 alpha-only bitmap
    // and check whether it is (nearly) transparent.
    unsigned char pixel[1] = {0};
    CGContextRef context = CGBitmapContextCreate(pixel, 1, 1, 8, 1,
                                                 NULL, kCGImageAlphaOnly);
    UIGraphicsPushContext(context);
    [self.image drawAtPoint:CGPointMake(-point.x, -point.y)];
    UIGraphicsPopContext();
    CGContextRelease(context);

    CGFloat alpha = pixel[0] / 255.0;
    BOOL transparent = alpha < 0.01;
    if (transparent) {
        return NO;
    } else {
        return YES;
    }
}
So far it's working great in my iPad version. However, in my iPhone version, I set the image view frames to be half of the image's actual size (self.image.size.width / 2). This code is interfering with my pan gestures in an odd way: if I touch the top left of the image I cannot trigger my pan gesture, but I can trigger it at the far bottom right, past the actual image in a transparent zone.
Removing the code returns the pan gesture to normal. However, I would still like to keep the ability to ignore touches on transparent parts. Does anyone know what part of this code is messing with my touch points, or any other reason it's acting like this?
Thanks
I know this question is very old, but I just had to use this on a project and my problem was that I was setting the imageView frame manually. What helped was changing the code so resizing was only done through transforms.
If the imageView is initially the same size as the image inside it, you can use this code and it will work with any transform applied to the imageView afterwards.
So in your iPhone version, you could set the frame of the imageView to the size of the image inside it and then apply a scaling transform to it so it's half the size.
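As a rough sketch of that idea (assuming self.imageView and self.image are your existing image view and its full-size image; the names are illustrative):
// Size the image view to match its image, then shrink it with a transform
// instead of setting a smaller frame directly.
self.imageView.image = self.image;
self.imageView.frame = CGRectMake(0, 0, self.image.size.width, self.image.size.height);
self.imageView.transform = CGAffineTransformMakeScale(0.5, 0.5); // half size for the iPhone version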

UI design - Best way to deal with many UIButtons

I'm running into problems when dealing with a large amount of UIButtons in my interface. I was wondering if anyone had first hand experience with this and how they did it?
When dealing with 30-80 buttons (most of them simple, a couple of them complex), do you just use UIButton, or do you do something different, like drawing with drawRect:, responding to touch events, and working out the coordinates of the touch yourself?
The best example is a calendar, similar to Apple's Calendar app. Would you just draw most of the days using drawRect: and then, when a day is tapped, replace it with an image, or would you just use UIButtons? It's not so much the memory footprint or creating the buttons; it's that strange things sometimes happen with them (see my previous question about it) and I'm having performance issues animating them.
Thanks for any help.
If "strange things are happening" with your buttons, you need to get to the bottom of why. Switching architectures just to avoid a problem that you don't understand (and might crop up again) doesn't sound like a good idea.
-drawRect: works by drawing to a bitmap-backed context. This happens when -displayIfNeeded is called after -setNeedsDisplay (or doing something else that implicitly sets the needsDisplay flag, like resizing a view with contentMode = UIContentModeRedraw). The bitmap-backed context is then composited to screen.
Buttons work by putting the different components (background image, foreground image, text) in different layers. The text is drawn when it changes and composited to the screen; the images are just composited directly to the screen.
The "best" way to do things is usually a combination of the two. For example, you might draw text and a background image in -drawRect: so the different layers didn't need to be composited at render time (you get an additional speedup if your view is "opaque"). You probably want to avoid full-screen animations via drawRect: (and it won't integrate so well with CoreAnimation), since drawing tends to be more expensive than compositing.
But first, I'd find out what's going wrong with UIButton. There's little point worrying about how you could make things faster until you actually find out what the slow bits are. Write code so that it is easy to maintain. UIButton is not that expensive and -drawRect: is not that bad (presumably it's even better if you use -setNeedsDisplayInRect: for a smallish rect, but then you need to calculate the rect...), but if you want a button, use UIButton.
Instead of using 30-80 UIButtons, I would prefer using images (if possible, a single image, or as few as possible) and comparing the touch location.
And if I must create buttons, then I obviously would not create 30-80 variables for them. I would set and get the view's tag to determine which one was tapped.
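For example, something along these lines (a sketch; the grid layout, button count, and selector name are made up for illustration):
// Sketch: one action for many buttons, distinguished only by tag.
- (void)buildButtons {
    for (int i = 0; i < 40; i++) {
        UIButton *button = [UIButton buttonWithType:UIButtonTypeCustom];
        button.frame = CGRectMake((i % 8) * 40, (i / 8) * 40, 40, 40);
        button.tag = i; // the only per-button state we keep
        [button addTarget:self action:@selector(tileTapped:)
         forControlEvents:UIControlEventTouchUpInside];
        [self.view addSubview:button];
    }
}

- (void)tileTapped:(UIButton *)sender {
    NSLog(@"tapped tile %d", (int)sender.tag); // sender.tag identifies the button
}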
If this is all stuff you are animating, then you could create a bunch of CALayers with their contents set to a CGImage. You would have to compare the touch location to identify the layer. CALayers have a useful style property, an NSDictionary you can store metadata in.
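A rough sketch of that approach (tileImage, the tile count, and the "tileIndex" key are all illustrative, and it assumes the only sublayers of the view's layer are your tiles):
// Layer setup, e.g. in viewDidLoad: one CALayer per tile, sharing a CGImage.
for (int i = 0; i < 10; i++) {
    CALayer *layer = [CALayer layer];
    layer.frame = CGRectMake(i * 40, 100, 40, 40);
    layer.contents = (id)tileImage.CGImage;
    layer.style = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:i]
                                              forKey:@"tileIndex"];
    [self.view.layer addSublayer:layer];
}

// In touchesBegan:withEvent:, compare the touch location against each layer's frame.
CGPoint location = [[touches anyObject] locationInView:self.view];
for (CALayer *layer in self.view.layer.sublayers) {
    if (CGRectContainsPoint(layer.frame, location)) {
        NSLog(@"touched tile %@", [layer.style objectForKey:@"tileIndex"]);
        break;
    }
}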
I just use the UIButtons unless there happens to be a specific performance issue that crops up. If they have similar functionality, however, such as a keyboard, I map them all to one IBAction and differentiate the behavior based on the sender.
What specific performance and animation issues are you running into?
I recently ran across this problem myself when developing a game for the iPhone. I was using UIButtons to hold game tiles, then stylized them with transparent images, background colors and text.
It all worked well for a small number of tiles. Once we got to about 50, however, the performance dropped significantly. After scouring Google I discovered that others had experienced the same problem. It seems the iPhone struggles with lots of transparent buttons onscreen at once. Not sure if it's a bug in the UIButton code or just a limitation of the graphics hardware on the device, but either way, it's beyond your control as a programmer.
My solution was to draw the board by hand using Core Graphics. It seemed daunting at first, but in reality it was pretty easy. I just placed one big UIImageView on my ViewController in Interface Builder, made it an IBOutlet so I could alter it from Objective-C, then constructed the image with Core Graphics.
Since a UIImageView doesn't handle taps, I used the touchesBegan method of my UIViewController, and then mapped the x/y coordinates of the touch to the precise tile on my game board.
The board now renders in less than a tenth of a second. Bingo!
If you need sample code, just let me know.
UPDATE: Here's a simplified version of the code I'm using. Should be enough for you to get the gist.
// CoreGraphicsTestViewController.h
// CoreGraphicsTest
#import <UIKit/UIKit.h>
@interface CoreGraphicsTestViewController : UIViewController {
    UIImageView *testImageView;
}
@property (retain, nonatomic) IBOutlet UIImageView *testImageView;
- (void)drawTile:(CGContextRef)ctx row:(int)rowNum col:(int)colNum isPressed:(BOOL)tilePressed;
@end
... and the .m file ...
// CoreGraphicsTestViewController.m
// CoreGraphicsTest
#import "CoreGraphicsTestViewController.h"
#import <QuartzCore/QuartzCore.h>
#import <CoreGraphics/CoreGraphics.h>
@implementation CoreGraphicsTestViewController
@synthesize testImageView;

int iTileSize;
int iBoardSize;
int iRow; // last-touched row/column, shared between touchesBegan and touchesEnded
int iCol;
- (void)viewDidLoad {
    iTileSize = 75;
    iBoardSize = 3;
    [testImageView setBounds:CGRectMake(0, 0, iBoardSize * iTileSize, iBoardSize * iTileSize)];
    CGRect rect = CGRectMake(0.0f, 0.0f, testImageView.bounds.size.width, testImageView.bounds.size.height);
    UIGraphicsBeginImageContext(rect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Draw every tile in its unpressed state
    for (iRow = 0; iRow < iBoardSize; iRow++) {
        for (iCol = 0; iCol < iBoardSize; iCol++) {
            [self drawTile:context row:iRow col:iCol isPressed:NO];
        }
    }
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    [testImageView setImage:image];
    UIGraphicsEndImageContext();
    [super viewDidLoad];
}
- (void)dealloc {
    [testImageView release];
    [super dealloc];
}
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    CGPoint location = [touch locationInView:testImageView];
    if ((location.x >= 0) && (location.y >= 0) &&
        (location.x <= testImageView.bounds.size.width) &&
        (location.y <= testImageView.bounds.size.height)) {
        UIImage *theIMG = testImageView.image;
        CGRect rect = CGRectMake(0.0f, 0.0f, testImageView.bounds.size.width, testImageView.bounds.size.height);
        UIGraphicsBeginImageContext(rect.size);
        CGContextRef context = UIGraphicsGetCurrentContext();
        [theIMG drawInRect:rect];
        // Work out which tile was touched and redraw just that tile as pressed
        iRow = location.y / iTileSize;
        iCol = location.x / iTileSize;
        [self drawTile:context row:iRow col:iCol isPressed:YES];
        UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
        [testImageView setImage:image];
        UIGraphicsEndImageContext();
    }
}
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    UIImage *theIMG = testImageView.image;
    CGRect rect = CGRectMake(0.0f, 0.0f, testImageView.bounds.size.width, testImageView.bounds.size.height);
    UIGraphicsBeginImageContext(rect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [theIMG drawInRect:rect];
    // Redraw the previously pressed tile in its unpressed state
    [self drawTile:context row:iRow col:iCol isPressed:NO];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    [testImageView setImage:image];
    UIGraphicsEndImageContext();
}
- (void)drawTile:(CGContextRef)ctx row:(int)rowNum col:(int)colNum isPressed:(BOOL)tilePressed {
    CGRect rrect = CGRectMake((colNum * iTileSize), (rowNum * iTileSize), iTileSize, iTileSize);
    CGContextClearRect(ctx, rrect);
    if (tilePressed) {
        CGContextSetFillColorWithColor(ctx, [[UIColor redColor] CGColor]);
    } else {
        CGContextSetFillColorWithColor(ctx, [[UIColor greenColor] CGColor]);
    }
    // Fill the tile background with the pressed/unpressed color, then draw the tile art on top
    CGContextFillRect(ctx, rrect);
    UIImage *theImage = [UIImage imageNamed:@"tile.png"];
    [theImage drawInRect:rrect];
}

Poor performance of CGContextStrokePath while drawing a few lines and circles on iPhone

I need to draw a few hundred lines and circles on my view, and they keep moving; on every tick of a timer I call [myView setNeedsDisplay] to update the view.
I subclass UIView (myView) and implement drawRect: to do the following:
- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGFloat red[4] = {1, 0, 0, 1};
    CGContextSetLineWidth(context, 1);
    CGContextSetShouldAntialias(context, NO);
    CGContextSetLineCap(context, kCGLineCapSquare);
    CGContextSetStrokeColor(context, red);
    // rects is an array of CGRect of size ~200
    for (int i = 0; i < N; i++) {
        CGContextAddEllipseInRect(context, rects[i]);
    }
    // points is an array of CGPoint of size ~100
    CGContextAddLines(context, points, N);
    CGContextStrokePath(context);
}
But this is dog slow. Is there something I am missing here?
It takes almost 1 second to do one complete drawing.
Animating objects by redrawing them constantly is a bad way to go. Quartz drawing is one of the slowest things you can do, UI-wise, because of the way that the display system works.
Instead, what you will want to do is create individual layers or views for each element that will be animated. These layers or views will only be drawn once, and then cached. When the layers move around, they won't be redrawn, just composited. Done this way, even the slowest iOS devices (the original iPhone, iPhone 3G, and first generation iPod touch) can animate up to 100 layers at 60 frames per second.
Think of it like animating a cartoon. Rather than have the animators redraw every part of every frame by hand, they use cels to reuse elements between frames that stay the same or just move without changing form. This significantly reduces the effort of producing a cartoon.
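As a sketch of the layer-based approach (using a CAShapeLayer as an example; the sizes and the newPosition value are illustrative):
// Create each element once as a layer; its contents are cached after the first render.
CAShapeLayer *circle = [CAShapeLayer layer];
circle.frame = CGRectMake(0, 0, 20, 20);
circle.path = [UIBezierPath bezierPathWithOvalInRect:CGRectMake(0, 0, 20, 20)].CGPath;
circle.strokeColor = [UIColor redColor].CGColor;
circle.fillColor = nil;
circle.lineWidth = 1;
[self.view.layer addSublayer:circle];

// Later, on each timer tick: just move the layer. Core Animation composites the
// cached contents instead of calling drawRect: again.
CGPoint newPosition = CGPointMake(100, 200); // e.g. computed by your model
[CATransaction begin];
[CATransaction setDisableActions:YES]; // move immediately, without implicit animation
circle.position = newPosition;
[CATransaction end];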

MKAnnotationView disappearing on swipe and double-tap zoom

I have subclassed MKAnnotationView to create an annotation that basically draws a circle around a point on a map view by overriding drawRect:. The circle draws fine in the following situations (in the simulator):
On initial load of the map view
On swipe, but only when swipe motion is stopped before touch ends (so that map doesn't "coast" after touch ends)
On pinch zoom
The circle will disappear when any of the following actions occur:
Swipe where map "coasts" after touch ends
Double-tap zoom
The circle will reappear if any of the actions in the "working" group are taken after it has disappeared.
What might cause this? I'm not a draw/display/layout expert (frankly, I'm not an Objective-C or iPhone expert either).
Here is some slightly simplified code that seems most relevant from my MKAnnotationView subclass:
- (void)drawRect:(CGRect)rect {
    // Drawing code
    [self drawCircleAtPoint:CGPointMake(0, 0) withRadius:self.radiusInPixels];
}

- (void)drawCircleAtPoint:(CGPoint)p withRadius:(int)r {
    CGContextRef contextRef = UIGraphicsGetCurrentContext();
    float alpha = 0.75;
    CGContextSetRGBFillColor(contextRef, 1.0, 0, 0, alpha);
    CGContextSetRGBStrokeColor(contextRef, 1.0, 0, 0, alpha);
    // Draw a circle (border only)
    CGContextStrokeEllipseInRect(contextRef, CGRectMake(0, 0, 2 * r, 2 * r));
}
Did you add this method?
- (void)setAnnotation:(id <MKAnnotation>)annotation {
    [super setAnnotation:annotation];
    [self setNeedsDisplay];
}
This is taken from Apple's sample code app called WeatherMap, which was removed from the Apple Developer Center but can be found on GitHub:
https://github.com/acekiller/iOS-Samples/blob/master/WeatherMap/Classes/WeatherAnnotationView.m