I'm using a piece of code I found to pass touch events through transparent parts of a UIImageView. I'm subclassing UIImageView and adding this:
- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event
{
    unsigned char pixel[1] = {0};
    CGContextRef context = CGBitmapContextCreate(pixel,
                                                 1, 1, 8, 1, NULL,
                                                 kCGImageAlphaOnly);
    UIGraphicsPushContext(context);
    [self.image drawAtPoint:CGPointMake(-point.x, -point.y)];
    UIGraphicsPopContext();
    CGContextRelease(context);

    CGFloat alpha = pixel[0] / 255.0;
    BOOL transparent = alpha < 0.01;
    if (transparent) {
        return NO;
    } else {
        return YES;
    }
}
So far it's working great in my iPad version. However, in my iPhone version I set the image view frames to half the image's actual size (self.image.frame.size.width/2). This code is interfering with my pan gestures in an odd way: if I touch the top left of the image I cannot trigger my pan gesture, but I can trigger it at the far bottom right, past the actual image into a transparent zone.
Removing the code returns the pan gesture to normal. However, I would still like to keep the ability to ignore touches on transparent parts. Does anyone know what part of this code is messing with my touch points, or any other reason it's acting like this?
Thanks
I know this question is very old, but I just had to use this on a project and my problem was that I was setting the imageView frame manually. What helped was changing the code so resizing was only done through transforms.
If the imageView is initially the same size as the image inside it, you can use this code and it will work with any transform applied to the imageView afterwards.
So in your iPhone version, you could set the frame of the imageView to the size of the image inside it and then apply a scaling transform to it so it's half the size.
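For example, a minimal sketch (the image name and variable names here are placeholders, not code from the question):

UIImage *image = [UIImage imageNamed:@"picture"];
// initWithImage: sizes the view's frame to match the image, so the coordinates
// that pointInside:withEvent: receives line up with the drawn pixels.
UIImageView *imageView = [[UIImageView alloc] initWithImage:image];
// Shrink it to half size with a transform instead of shrinking the frame.
imageView.transform = CGAffineTransformMakeScale(0.5, 0.5);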
Related
I managed to mask views using a similar approach to the one here: How to mask UIViews in iOS
But now I need to detect touches in this masked view. Gesture recognisers and buttons detect touches in the whole bounds area, basically ignoring the mask. So I think that if I could get a CGPath from my mask image, it would be easy to handle touches. Any suggestions on how to get a CGPath from a mask image?
You could set the UIGestureRecognizer's delegate and implement - (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer shouldReceiveTouch:(UITouch *)touch
1. Get the location of the touch in the recognizer's view:
- (CGPoint)locationInView:(UIView *)view
2. Find out the alpha channel of the image at that location. For that, use this code. (For efficiency, you can compute the transparency values ahead of time for the whole image and store them in a buffer, so you don't have to render the alpha every time. I kept it simple here and did it for a single pixel. If you want to see how to do the whole image at once, look at this post.)
unsigned char pixelData[1] = {0};
CGContextRef context = CGBitmapContextCreate(pixelData, 1, 1, 8, 1, NULL, kCGImageAlphaOnly);
UIGraphicsPushContext(context);
[im drawAtPoint:CGPointMake(-point.x, -point.y)]; // im is the UIImage being hit-tested
UIGraphicsPopContext();
CGContextRelease(context);
CGFloat alphaChannel = pixelData[0] / 255.0;
BOOL transparent = alphaChannel < 0.01;
3. Based on that value, return YES or NO from the delegate method.
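Put together, the delegate method might look roughly like this. This is only a sketch; self.imageView stands in for whatever view displays your masked image, and it assumes that view's size matches the image's size:

- (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer shouldReceiveTouch:(UITouch *)touch
{
    // 1. Location of the touch in the view showing the masked image.
    CGPoint point = [touch locationInView:self.imageView];

    // 2. Sample the alpha channel of the image at that point.
    unsigned char pixelData[1] = {0};
    CGContextRef context = CGBitmapContextCreate(pixelData, 1, 1, 8, 1, NULL, kCGImageAlphaOnly);
    UIGraphicsPushContext(context);
    [self.imageView.image drawAtPoint:CGPointMake(-point.x, -point.y)];
    UIGraphicsPopContext();
    CGContextRelease(context);
    CGFloat alpha = pixelData[0] / 255.0;

    // 3. Only let the recognizer see touches on non-transparent pixels.
    return alpha >= 0.01;
}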
I have some objects that are just UIImageViews of PNGs with transparent areas. I stack a few of them near each other so that the transparent parts overlap. When one of them is touched, I play some audio.
In order to let touches pass through the transparent areas, and thus let the proper object be recognized, I pass touches through transparent pixels with this:
- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event
{
    unsigned char pixel[1] = {0};
    CGContextRef context = CGBitmapContextCreate(pixel,
                                                 1, 1, 8, 1, NULL,
                                                 kCGImageAlphaOnly);
    UIGraphicsPushContext(context);
    [self.image drawAtPoint:CGPointMake(-point.x, -point.y)];
    UIGraphicsPopContext();
    CGContextRelease(context);

    CGFloat alpha = pixel[0] / 255.0;
    BOOL transparent = alpha < 0.01;
    if (transparent) {
        return NO;
    } else {
        return YES;
    }
}
The problem: when around 3 or 4 of them are stacked on the screen, there starts to be a noticeable delay for the touch, i.e. the top layer triggers immediately, but the bottom layer takes some milliseconds.
Is there a faster way of perhaps passing the touch through transparent areas of my PNG?
Thanks
So I've got a basic drawing app in progress that allows me to draw lines. I draw to an off-screen bitmap and then present the image in drawRect:. It works, but it's way too slow, updating about half a second after you've drawn with your finger. I took the code and adapted it from this tutorial: http://www.youtube.com/watch?v=UfWeMIL-Nu8&feature=relmfu . As you can see in the comments, people are also saying it's too slow, but the guy hasn't responded.
So how can I speed it up? Or is there a better way to do it? Any pointers will be appreciated.
Here's the code in my DrawView.m.
- (id)initWithCoder:(NSCoder *)aDecoder {
    if ((self = [super initWithCoder:aDecoder])) {
        [self setUpBuffer];
    }
    return self;
}

- (void)setUpBuffer {
    CGContextRelease(offscreenBuffer);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    offscreenBuffer = CGBitmapContextCreate(NULL, self.bounds.size.width, self.bounds.size.height, 8, self.bounds.size.width * 4, colorSpace, kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);

    CGContextTranslateCTM(offscreenBuffer, 0, self.bounds.size.height);
    CGContextScaleCTM(offscreenBuffer, 1.0, -1.0);
}

- (void)drawToBuffer:(CGPoint)coordA :(CGPoint)coordB :(UIColor *)penColor :(int)thickness {
    CGContextBeginPath(offscreenBuffer);
    CGContextMoveToPoint(offscreenBuffer, coordA.x, coordA.y);
    CGContextAddLineToPoint(offscreenBuffer, coordB.x, coordB.y);
    CGContextSetLineWidth(offscreenBuffer, thickness);
    CGContextSetLineCap(offscreenBuffer, kCGLineCapRound);
    CGContextSetStrokeColorWithColor(offscreenBuffer, [penColor CGColor]);
    CGContextStrokePath(offscreenBuffer);
}

- (void)drawRect:(CGRect)rect {
    CGImageRef cgImage = CGBitmapContextCreateImage(offscreenBuffer);
    UIImage *image = [[UIImage alloc] initWithCGImage:cgImage];
    CGImageRelease(cgImage);
    [image drawInRect:self.bounds];
}
It works perfectly on the simulator but not on the device; I imagine that has something to do with processor speed.
I'm using ARC.
I tried to fix your code, but since you only seem to have posted half of it, I couldn't get it working (copy-pasting the code results in lots of errors, let alone getting as far as performance tuning).
However there are some tips you can use to VASTLY improve performance.
The first, and probably most noticeable, is -setNeedsDisplayInRect: rather than -setNeedsDisplay. This means only the little rect that changed gets redrawn. For an iPad 3 that is 1024*768 pixels at 4 bytes each, which is a lot of work to redraw every frame. Reducing that down to about 20*20 or less per frame will massively improve performance.
// Invalidate only the area covered by the new line segment, padded by the stroke width.
CGRect rect;
rect.origin.x = MIN(coordA.x, coordB.x) - (thickness * 0.5);
rect.size.width = (MAX(coordA.x, coordB.x) + (thickness * 0.5)) - rect.origin.x;
rect.origin.y = MIN(coordA.y, coordB.y) - (thickness * 0.5);
rect.size.height = (MAX(coordA.y, coordB.y) + (thickness * 0.5)) - rect.origin.y;
[self setNeedsDisplayInRect:rect];
Another big improvement you could make is to draw only the CGPath for the current touch (which you do). However, you then draw that saved/cached image in drawRect:, so it is redrawn every frame anyway. A better approach is to make the draw view transparent and put a UIImageView behind it. UIImageView is the best way to display images on iOS.
- DrawView (1 finger)
    - drawRect:
- BackgroundView (the image of the old touches)
    - self.image
The draw view itself would then only ever draw the current touch, and only the part that changes each time. When the user lifts their finger you can render that stroke to a UIImage, draw it over the current background/cache UIImageView's image, and set imageView.image to the new combined image.
That final step of combining the images involves drawing two full-screen images into an off-screen CGContext, so it will cause lag if done on the main thread. Instead it should be done on a background thread and the result pushed back to the main thread; a rough sketch of that hand-off follows the outline below.
* touch starts *
    - DrawView: draw the current touch
* touch ends *
    - background thread: combine backgroundView.image and DrawView's drawRect output
* thread finished *
    - send the resulting UIImage to the main queue and set backgroundView.image to it
    - clear DrawView's current path, which is now in the cache
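Here is one possible shape of that hand-off. It is only a sketch; currentPath, backgroundView and the stroke colour are assumptions standing in for however your own view stores the live stroke and the cached image:

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    UIBezierPath *finishedStroke = self.currentPath;
    UIImage *oldImage = self.backgroundView.image;
    CGSize size = self.bounds.size;

    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // Combine the cached image and the finished stroke off the main thread.
        UIGraphicsBeginImageContextWithOptions(size, NO, 0);
        [oldImage drawInRect:CGRectMake(0, 0, size.width, size.height)];
        [[UIColor blackColor] setStroke];   // use whatever pen colour the stroke was drawn with
        [finishedStroke stroke];
        UIImage *combined = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

        dispatch_async(dispatch_get_main_queue(), ^{
            self.backgroundView.image = combined;
            self.currentPath = nil;   // the live stroke is now part of the cached image
            [self setNeedsDisplay];
        });
    });
}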
All of this combined can make for a very smooth 60fps drawing app. However, updates don't come in as quickly as we'd like, so the drawing looks jagged when you move your finger quickly. This can be improved by using UIBezierPaths instead of CGPaths:
CGPoint lastPoint = [touch previousLocationInView:self];
CGPoint mid = midPoint(currentPoint, lastPoint);
[path addQuadCurveToPoint:mid controlPoint:lastPoint];
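In context, the smoothing might look roughly like this. A sketch only: path is assumed to be a UIBezierPath property holding the current stroke, started with moveToPoint: in touchesBegan:

static CGPoint midPoint(CGPoint a, CGPoint b) {
    return CGPointMake((a.x + b.x) * 0.5, (a.y + b.y) * 0.5);
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    CGPoint currentPoint = [touch locationInView:self];
    CGPoint lastPoint = [touch previousLocationInView:self];

    // Curve to the midpoint, using the previous point as the control point,
    // so consecutive segments join smoothly instead of looking like a polyline.
    [self.path addQuadCurveToPoint:midPoint(currentPoint, lastPoint) controlPoint:lastPoint];

    // Invalidate only the area around the new segment.
    CGRect dirty = CGRectUnion(CGRectMake(lastPoint.x, lastPoint.y, 1, 1),
                               CGRectMake(currentPoint.x, currentPoint.y, 1, 1));
    [self setNeedsDisplayInRect:CGRectInset(dirty, -self.path.lineWidth, -self.path.lineWidth)];
}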
The reason it is slow is because every frame you are creating a bitmap and trying to draw that.
You asked for better ways of doing it? Have you looked at Apple's sample code for a drawing app on iOS? If you don't like that, you can always use cocos2d, which provides a CCRenderTexture class (and sample code).
Currently, you are using a method which you already know is not efficient.
With this approach, I suppose you should consider using a background thread for all the hard work of image rendering and the main thread for UI updates only, i.e.
static UIImage *imageBuffer = nil;

- (UIImage *)drawSomeImage
{
    UIGraphicsBeginImageContext(self.bounds.size);
    // draw the image with Core Graphics here
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}

- (void)updateUI
{
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // prepare the image on a background thread
        imageBuffer = [self drawSomeImage];
        dispatch_async(dispatch_get_main_queue(), ^{
            // trigger drawRect: with the prepared image
            [self setNeedsDisplay];
        });
    });
}

- (void)drawRect:(CGRect)rect
{
    // draw the prepared image buffer into the current context
    [imageBuffer drawInRect:self.bounds];
}
I am omitting some details to make the optimization clearer. Even better would be to switch to UIImageView. That way you could get rid of the critically expensive - (void)drawRect: method entirely and just update the image property of the UIImageView when the image is ready.
Well, I think you need to change your logic. You may get some very good ideas with the help of this link:
http://devmag.org.za/2011/04/05/bzier-curves-a-tutorial/
And if you feel you don't have time to work through that, you can go directly to this code: https://github.com/levinunnink/Smooth-Line-View :) I hope this helps you a lot.
Use CGLayer for caching your paths; read the docs, it's well suited to this kind of optimization.
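As a rough illustration of the idea (a sketch only; strokeLayer, segmentStart, segmentEnd and hasPendingSegment are assumed ivars of the drawing view, with the touch handler stashing the latest segment and calling setNeedsDisplayInRect:):

- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Create the cache lazily, matched to the view's backing context.
    if (strokeLayer == NULL) {
        strokeLayer = CGLayerCreateWithContext(context, self.bounds.size, NULL);
    }

    // Append only the newest segment to the cached layer
    // instead of replaying every stroke from scratch.
    if (hasPendingSegment) {
        CGContextRef layerContext = CGLayerGetContext(strokeLayer);
        CGContextSetLineCap(layerContext, kCGLineCapRound);
        CGContextMoveToPoint(layerContext, segmentStart.x, segmentStart.y);
        CGContextAddLineToPoint(layerContext, segmentEnd.x, segmentEnd.y);
        CGContextStrokePath(layerContext);
        hasPendingSegment = NO;
    }

    // Blit the whole cache; Quartz clips this to the dirty rect.
    CGContextDrawLayerInRect(context, self.bounds, strokeLayer);
}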
I did something exactly like this. Check out the Pixelate app on the App Store. In order to draw, I used tiles in my code. After all, when you touch the screen and draw something, you would otherwise need to redraw the entire image, which is a very heavy operation. If you like the way Pixelate behaves, here's how I did it:
1) Split the image into n x m tiles. That way I could change those values and obtain bigger/smaller tiles. In the worst-case scenario (the user taps at the intersection of 4 tiles) you have to redraw those 4 tiles, not the entire image.
2) Make a 3-dimensional matrix storing the pixel information of each tile. So matrix[0][0][0] was the red value (each pixel has an RGB or RGBA value, depending on whether you use PNGs or JPGs) of the first pixel of the first tile.
3) Get the location the user pressed and calculate which tiles need to be modified.
4) Modify the values in the matrix and update only the tiles that need updating.
NOTE: This most certainly isn't the best option; it's just an alternative. I mention it because I think it is close to what you have right now. It worked for me on an iPhone 3GS, so if you are targeting >= iPhone 4 you should be more than OK.
Regards,
George
The method you've suggested is far too inefficient, because creating an image every time you move your finger is too expensive.
If it's just paths that you need to draw, keep a CGMutablePathRef as a member variable, append the new points to it as the finger moves, and stroke it in drawRect: using the CGPath functions.
More importantly, when refreshing the view, call setNeedsDisplayInRect: passing only the area that you need to redraw. A rough sketch of this approach follows; hope it works for you.
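This is only a sketch of that idea, with made-up names: path is a CGMutablePathRef ivar (remember to CGPathRelease it in dealloc, since ARC does not manage CF objects) and the 3-point line width is arbitrary:

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGPoint p = [[touches anyObject] locationInView:self];
    CGPoint prev = [[touches anyObject] previousLocationInView:self];

    if (path == NULL) {
        path = CGPathCreateMutable();
        CGPathMoveToPoint(path, NULL, p.x, p.y);
    } else {
        CGPathAddLineToPoint(path, NULL, p.x, p.y);
    }

    // Invalidate only the small area around the new segment.
    CGRect dirty = CGRectUnion(CGRectMake(prev.x, prev.y, 1, 1),
                               CGRectMake(p.x, p.y, 1, 1));
    [self setNeedsDisplayInRect:CGRectInset(dirty, -4, -4)];
}

- (void)drawRect:(CGRect)rect
{
    if (path == NULL) {
        return; // nothing drawn yet
    }
    // Quartz clips this to the invalidated rect, so stroking the whole path stays cheap.
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextAddPath(context, path);
    CGContextSetLineWidth(context, 3.0);
    CGContextSetLineCap(context, kCGLineCapRound);
    CGContextStrokePath(context);
}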
Background: I have a custom (subclassed) UIScrollView with UIImageViews on it that are draggable. Based on the drags, I need to draw some lines dynamically in a subview of the UIScrollView. (Note: I need them in a subview because at a later point I need to change the opacity of that view.)
So before I spend ages developing the code (I'm a newbie, so it will take me a while), I looked into what I need to do and found some possible ways. I'm just wondering what the right way to do this is:
1. Create a subclass of UIView and use its drawRect: method to draw the line I need (but I'm unsure how to make it dynamically read in the values).
2. On the subview, use CALayers and draw on those.
3. Create a draw-line method using the CGContext functions.
4. Something else?
Cheers for the help
Conceptually all your propositions are similar. All of them would lead to the following steps (some of them done invisibly by UIKit):
1. Set up a bitmap context in memory.
2. Use Core Graphics to draw the line into the bitmap.
3. Copy this bitmap to a GPU buffer (a texture).
4. Compose the layer (view) hierarchy using the GPU.
The expensive part is the first three steps: they lead to repeated memory allocation, memory copying, and CPU/GPU communication. On the other hand, what you really want to do is lightweight: draw a line, probably animating its start/end points, width, color, alpha, ...
There's an easy way to do this that completely avoids the described overhead: use a CALayer for your line, but instead of redrawing the contents on the CPU, just fill it completely with the line's color (by setting its backgroundColor property to the line's color). Then modify the layer's position, bounds, and transform so that the CALayer covers the exact area of your line.
Of course, this approach can only draw straight lines. But it can be extended to produce more complex visual effects by setting the contents property to an image. You could, for example, give the line fuzzy edges or a glow effect with this technique.
Though this technique has its limitations, I have used it quite often in different apps on the iPhone as well as on the Mac. It always performed dramatically better than Core Graphics based drawing.
Edit: Code to calculate layer properties:
void setLayerToLineFromAToB(CALayer *layer, CGPoint a, CGPoint b, CGFloat lineWidth)
{
CGPoint center = { 0.5 * (a.x + b.x), 0.5 * (a.y + b.y) };
CGFloat length = sqrt((a.x - b.x) * (a.x - b.x) + (a.y - b.y) * (a.y - b.y));
CGFloat angle = atan2(a.y - b.y, a.x - b.x);
layer.position = center;
layer.bounds = (CGRect) { {0, 0}, { length + lineWidth, lineWidth } };
layer.transform = CATransform3DMakeRotation(angle, 0, 0, 1);
}
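Example usage, as a sketch: the line is just a plain CALayer filled with the line's color, hosted by whatever view contains the two endpoints (hostView, viewA and viewB are placeholders, and the points must be expressed in hostView's coordinate space):

CALayer *lineLayer = [CALayer layer];
lineLayer.backgroundColor = [UIColor redColor].CGColor;
[hostView.layer addSublayer:lineLayer];

// Call this again whenever either endpoint moves, e.g. from the drag handler.
setLayerToLineFromAToB(lineLayer, viewA.center, viewB.center, 2.0);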
2nd Edit: Here's a simple test project which shows the dramatic difference in performance between Core Graphics and Core Animation based rendering.
3rd Edit: The results are quite impressive: rendering 30 draggable views, each connected to every other (resulting in 435 lines), runs smoothly at 60 Hz on an iPad 2 using Core Animation. With the classic approach, the frame rate drops to 5 Hz and memory warnings eventually appear.
First, for drawing on iOS you need a context, and when drawing to the screen you cannot get that context outside of drawRect: (UIView) or drawLayer:inContext: (CALayer). This means option 3 is out (if you meant to do the drawing outside a drawRect: method).
You could go for a CALayer, but I'd go for a UIView here. As far as I have understood your setup, you have this:
UIScrollView
| | |
ViewA ViewB LineView
So LineView is a sibling of ViewA and ViewB; it would need to be big enough to cover both ViewA and ViewB, it is arranged to be in front of both, and it has setOpaque:NO set.
The implementation of LineView would be pretty straightforward: give it two properties, point1 and point2, of type CGPoint. Optionally, implement the setPoint1:/setPoint2: methods yourself so they always call [self setNeedsDisplay]; that way the view redraws itself whenever a point changes.
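A sketch of what those setters might look like (assuming point1 and point2 are declared properties of LineView, so their ivars and getters stay auto-synthesized):

- (void)setPoint1:(CGPoint)point1
{
    _point1 = point1;
    [self setNeedsDisplay];   // redraw whenever an endpoint changes
}

- (void)setPoint2:(CGPoint)point2
{
    _point2 = point2;
    [self setNeedsDisplay];
}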
In LineView's drawRect:, all you need to do is draw the line, either with Core Graphics or with UIBezierPath. Which one to use is more or less a matter of taste. If you'd like to use Core Graphics, you do it like this:
- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Set up color, line width, etc. first.
    CGContextMoveToPoint(context, point1.x, point1.y);
    CGContextAddLineToPoint(context, point2.x, point2.y);
    CGContextStrokePath(context);
}
Using UIBezierPath, it'd look quite similar:
- (void)drawRect:(CGRect)rect
{
    UIBezierPath *path = [UIBezierPath bezierPath];
    // Set up color, line width, etc. first.
    [path moveToPoint:point1];
    [path addLineToPoint:point2];
    [path stroke];
}
The magic is now getting the correct coordinates for point1 and point2. I assume you have a controller that can see all the views. UIView has two nice utility methods, convertPoint:toView: and convertPoint:fromView: that you'll need here. Here's dummy code for the controller that would cause the LineView to draw a line between the centers of ViewA and ViewB:
- (void)connectTheViews
{
    CGPoint p1, p2;
    CGRect bounds;

    // Center of each view, expressed in that view's own coordinate system.
    bounds = [viewA bounds];
    p1 = CGPointMake(CGRectGetMidX(bounds), CGRectGetMidY(bounds));

    bounds = [viewB bounds];
    p2 = CGPointMake(CGRectGetMidX(bounds), CGRectGetMidY(bounds));

    // Convert them to the coordinate system of the scroll view...
    p1 = [scrollView convertPoint:p1 fromView:viewA];
    p2 = [scrollView convertPoint:p2 fromView:viewB];

    // ...and then into the coordinate system of the target view.
    p1 = [scrollView convertPoint:p1 toView:lineView];
    p2 = [scrollView convertPoint:p2 toView:lineView];

    // Set the points.
    [lineView setPoint1:p1];
    [lineView setPoint2:p2];
    [lineView setNeedsDisplay]; // if the setters don't already do this
}
Since I don't know how you've implemented the dragging, I can't tell you how to trigger calling this method on the controller. If it's done entirely encapsulated in your views and the controller is not involved, I'd go for an NSNotification that you post every time a view is dragged to a new coordinate. The controller would listen for the notification and call the aforementioned method to update the LineView; a sketch of that wiring is below.
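One possible shape of that wiring (the notification name and handler name here are made up):

// In the draggable view, after it has moved:
[[NSNotificationCenter defaultCenter] postNotificationName:@"ViewDidDragNotification" object:self];

// In the controller, e.g. in viewDidLoad:
[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(draggedViewDidMove:)
                                             name:@"ViewDidDragNotification"
                                           object:nil];

// And the handler just refreshes the line:
- (void)draggedViewDidMove:(NSNotification *)note
{
    [self connectTheViews];
}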
One last note: you might want to call setUserInteractionEnabled:NO on your LineView in its initWithFrame: method so that a touch on the line will go through to the view under the line.
Happy coding !
My goal is simple: I want to create a program that displays a UIImage and, when swiped from bottom to top, reveals another UIImage. The images here could be a happy face/sad face. The sad face should be the starting point, the happy face the end point. While swiping your finger, the part below the finger should show the happy face.
So far I tried solving this with the frame and bounds properties of the UIImageview I used for the happy face image.
What this piece of code does is wrong, because the transition starts in the center of the screen and not at the bottom. Notice that the origins of both the frame and the bounds are at 0,0...
I have read numerous pages about frames and bounds, but I don't get it. Any help is appreciated!
loadImages is called only once.
- (void)loadImages {
    sadface = [UIImage imageNamed:@"face-sad.jpg"];
    happyface = [UIImage imageNamed:@"face-happy.jpg"];

    UIImageView *face1view = [[UIImageView alloc] init];
    face1view.image = sadface;
    [self.view addSubview:face1view];

    CGRect frame;
    CGRect contentRect = self.view.frame;
    frame = CGRectMake(0, 0, contentRect.size.width, contentRect.size.height);
    face1view.frame = frame;

    face2view = [[UIImageView alloc] init];
    face2view.layer.masksToBounds = YES;
    face2view.contentMode = UIViewContentModeScaleAspectFill;
    face2view.image = happyface;
    [self.view addSubview:face2view];

    frame = CGRectMake(startpoint.x, 0, contentRect.size.width, contentRect.size.height);
    face2view.frame = frame;
    face2view.clipsToBounds = YES;
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGPoint movepoint = [[touches anyObject] locationInView:self.view];
    NSLog(@"movepoint: %f %f", movepoint.x, movepoint.y);

    face2view.bounds = CGRectMake(0, 0, 320, 480 - movepoint.y);
}
The UIImages and UIImageViews are properly disposed of in the dealloc function.
Indeed, you seem to be confused about frames and bounds. In fact, they are easy. Always remember that any view has its own coordinate system. The frame, center and transform properties are expressed in superview's coordinates, while the bounds is expressed in the view's own coordinate system. If a view doesn't have a superview (not installed into a view hierarchy yet), it still has a frame. In iOS the frame property is calculated from the view's bounds, center and transform. You may ask what the hell frame and center mean when there's no superview. They are used when you add the view to another view, allowing to position the view before it's actually visible.
The most common example when a view's bounds differ from its frame is when it is not in the upper left corner of its superview: its bounds.origin may be CGPointZero, while its frame.origin is not. Another classic example is UIScrollView, which frequently modifies its bounds.origin to make subviews scroll (in fact, modifying the origin of the coordinate system automatically moves every subview without affecting their frames), while its own frame is constant.
Back to your code. First of all, when you already have images to display in image views, it makes sense to init the views with their images:
UIImageView *face1view = [[UIImageView alloc] initWithImage: sadface];
That helps the image view to immediately size itself properly. It is not recommended to init views with -init because that might skip some important code in their designated initializer, -initWithFrame:.
Since you add face1view to self.view, you should really use its bounds rather than its frame:
face1view.frame = self.view.bounds;
Same goes for the happier face. Then in -touchesMoved:… you should either change face2view's frame to move it within self.view, or (if self.view does not contain any other subviews besides the faces) modify self.view's bounds to move both faces inside it together. Instead, you do something weird like vertically stretching the happy face inside face2view. If you want the happy face to slide in from the bottom of self.view, you should initially set its frame like this (so it is not visible at first):
face2view.frame = CGRectOffset(face2view.frame, 0, CGRectGetHeight(self.view.bounds));
If you choose to swap the faces by changing the image views' frames (as opposed to changing self.view's bounds), I guess you'd want to change both views' frame origins, so that the sad face slides up and out while the happy face slides up and in (a sketch of this follows below). Alternatively, if you want the happy face to simply cover the sad one:
face2view.frame = face1view.frame;
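A sketch of the sliding variant in -touchesMoved:, assuming both image views are kept in ivars, were set up full-screen as above, and face2view starts one screen height below the visible area:

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGPoint movePoint = [[touches anyObject] locationInView:self.view];
    CGFloat height = CGRectGetHeight(self.view.bounds);

    CGRect happyFrame = face2view.frame;
    happyFrame.origin.y = movePoint.y;            // happy face's top edge follows the finger
    face2view.frame = happyFrame;

    CGRect sadFrame = face1view.frame;
    sadFrame.origin.y = movePoint.y - height;     // sad face slides up and out
    face1view.frame = sadFrame;
}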
Your problem seems to have something to do with the face2view.bounds in touchesMoved.
You are setting the bounds of this view to the rect, x:0, y:0, width:320, height:480 - y
x = 0 == left on the x axis
y = 0 == top on the y axis
So you are putting this image frame at the upper left corner, and making it fill the whole view. That's not what you want. The image simply becomes centered in this imageView.