NSSet of touchesShouldBegin: of UIScrollView is just one NSString - iPhone

I have a subclass of UIScrollView that is also its own delegate.
When the following protocol method is called:
- (BOOL)touchesShouldBegin:(NSSet *)touches withEvent:(UIEvent *)event inContentView:(UIView *)view
{
    id element;
    NSEnumerator *setEnum = [touches objectEnumerator];
    while ((element = [setEnum nextObject]) != nil)
    {
        NSLog(@"element:%@", element);
    }
    return [super touchesShouldBegin:touches withEvent:event inContentView:view];
}
The only thing that NSLog is showing is:
element:<UITouch: 0x15a9a0> phase: Ended tap count: 3 window: <UIWindow: 0x392390; frame = (0 0; 320 480); layer = <UIWindowLayer: 0x382cd0>> view: <UIButton: 0x3a5e90; frame = (0 0; 106 138); opaque = NO; layer = <CALayer: 0x3a3e90>> location in window: {228, 126} previous location in window: {227, 126} location in view: {89.6667, 77} previous location in view: {88.3333, 77}
The problem is that it shows the content of the NSSet as one big NSString. If I ask the objectEnumerator for allObjects, I get just one object in an NSArray: exactly the same object (the NSString) that NSLog is showing.
Could someone please tell me whether I am doing something wrong, or whether it is normal that the NSSet contains just one NSString.
Thank you!

The NSLog() output is the result of the description method called on the object. NSLog() output is designed to be human-readable for debugging only.
<UITouch: 0x15a9a0> indicates that the object is a UITouch at address 0x15a9a0. The rest of the description just displays some interesting values of the UITouch.
Try:
NSLog(#"tap count:%d", element.count);
etc. and you will se that it is not just a string.
As an aside there is no need for an explicit NSEnumerator, fast enumeration ca be used: you can simplify the code to:
for (UITouch *element in touches)
{
    NSLog(@"element:%@", element);
}

The NSSet touches is a set of UITouch instances, not NSStrings. When you pass it to NSLog, the description method of the UITouch instance is called, and that method does return an NSString.
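A quick way to verify this (a minimal check, assuming the touchesShouldBegin: override above) is to log the set's count and the class of each element:

NSLog(@"touches contains %lu object(s)", (unsigned long)[touches count]);
for (UITouch *t in touches) {
    // Each element is a UITouch; NSLog merely prints the result of its description method.
    NSLog(@"class: %@, tap count: %lu", NSStringFromClass([t class]), (unsigned long)t.tapCount);
}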

Related

SpriteKit Touches - containsPoint

I'm having an issue where, within my touchesBegan method, I'm not getting back what I think I should.
I'm testing for a hit within a specific node. I've tried several methods, and none work. I've created a work-around, but would love to know if this is a bug or if I'm doing something wrong.
Here's the code:
Standard touchesBegan method:
-(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    /* Called when a touch begins */
    SKSpriteNode *infoPanelNode = (SKSpriteNode *)[self childNodeWithName:@"infoPanelNode"];
    UITouch *touch = [touches anyObject];
    if (touch) {
        // e.g. infoPanelNode position:{508, 23} size:{446, 265.5} (also of note - infoPanelNode is a child of self)

        // This solution works
        CGPoint location = [touch locationInNode:self];
        // location == (x=77, y=170)
        bool withinInfoPanelNode = [self myPtInNode:infoPanelNode inPoint:location];
        // withinInfoPanelNode == false (CORRECT)

        // This one doesn't return the same result - returns true when the hit is not in the cell
        CGPoint infoLocation = [touch locationInNode:infoPanelNode];
        // infoLocation == (x=-862, y=294)
        bool withinInfoPanelNodeBuiltInResult = [infoPanelNode containsPoint:infoLocation];
        // withinInfoPanelNodeBuiltInResult == true (WRONG)

        // This one doesn't work either - returns an array with the infoPanelNode in it,
        // even though the hit point and node location are the same as shown above
        // NSArray *nodes = [self nodesAtPoint:location];
        // for (SKNode *node in nodes) {
        //     if (node == infoPanelNode)
        //         withinInfoPanelNode = true;
        // }

        // Code omitted - doing something with withinInfoPanelNode now
    }
}
My custom hit test code:
-(bool)myPtInNode:(SKSpriteNode *)node inPoint:(CGPoint)inPoint {
    if (node.position.x < inPoint.x && (node.position.x + node.size.width) > inPoint.x) {
        if (node.position.y < inPoint.y && (node.position.y + node.size.height) > inPoint.y) {
            return true;
        }
    }
    return false;
}
Anyone see what's going wrong here?
Thanks,
kg
I'm not sure exactly how SKCropNode will work, but in general, in order for containsPoint: to detect touches within it you need to give it a point relative to its parent node. The following code should work. Note the addition of .parent when calling locationInNode:
CGPoint infoLocation = [touch locationInNode:infoPanelNode.parent];
BOOL withinInfoPanelNodeBuiltInResult = [infoPanelNode containsPoint:infoLocation];
Solved this problem and wanted to update everyone.
It turns out that this particular infoPanelNode (an SKSpriteNode) has a child node that is an SKCropNode. That node crops out a much larger node (which is, of course, a child of the crop node) so that only a small portion is viewable, allowing scrolling to portions of that node. Unfortunately, containsPoint: apparently combines the boundaries of all child nodes with the receiving node's boundaries to form the rect used for the boundary test. This would be understandable if it respected the SKCropNode's boundaries for its own children, but apparently it doesn't, so you have to roll your own hit test if you have this type of setup.
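Based on that conclusion, a hand-rolled hit test for this setup might convert the touch location into the panel's parent's coordinate space and test against the panel's own frame, ignoring the accumulated child bounds. This is only a sketch, assuming the visible (cropped) area matches the sprite's own frame; the method name is hypothetical:

// Hypothetical scene helper: tests only the sprite's own frame,
// not the accumulated bounds of its children.
- (BOOL)point:(CGPoint)scenePoint isInsideVisibleArea:(SKSpriteNode *)node {
    CGPoint pointInParent = [self convertPoint:scenePoint toNode:node.parent];
    return CGRectContainsPoint(node.frame, pointInParent);
}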

UITouch returns UIWebBrowserView (Apple internal component) instead of UIWebView

I have a UIWebView and I want to detect any touch on it. (I don't want to use UITapGesture or anything else.)
I am using UIApplication's sendEvent: method for this purpose
and checking whether the touch object's view is the web view.
Surprisingly, it points to a UIWebBrowserView. I have to check its superview to get the web view, but that makes my code very inefficient because sendEvent: is called every time the user taps.
Code Snippet :
- (void)sendEvent:(UIEvent *)event {
    [super sendEvent:event];
    NSSet *touches = [event allTouches];
    if (touches.count != 1)
        return;
    UITouch *touch = touches.anyObject;
    if ([touch.view isKindOfClass:[UIWebView class]]) { // This fails
    }
}
I want to know: is there a way to make UITouch return the UIWebView as the object instead of returning its child views, like UIPdfView or UIWebBrowserView?
Finally I found a solution to this :
We have to use:
if ([touch.view isMemberOfClass:[UIWebView class]]) { // This works
}
instead of:
if ([touch.view isKindOfClass:[UIWebView class]]) { // This fails
}
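If isMemberOfClass: doesn't fit your case, another option (a sketch of the superview check mentioned in the question, paying the walk cost once per event) is to climb the superview chain from the touched internal view:

UIView *candidate = touch.view;
// Walk up from the internal view (e.g. UIWebBrowserView) to the enclosing UIWebView, if any.
while (candidate && ![candidate isKindOfClass:[UIWebView class]]) {
    candidate = candidate.superview;
}
if (candidate) {
    // candidate is the UIWebView that contains the touch
}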

-[CustomWindow hitTest:withEvent:] implementation to forward events

I have a custom window (that should be on top of everything, including the keyboard) to show an overlay, something like the overlay you see when pressing the volume up/down buttons on the device.
So I made a custom window, OverlayWindow. So far everything works fine and the windows behind it receive their events normally. However, hitTest:withEvent: is called several times and sometimes it even returns nil. I wonder if that is normal/correct? If not, how can I fix it?
// A small (WIDTH_MAX:100) window in the center of the screen. If it matters
const CGSize screenSize = [[UIScreen mainScreen] bounds].size;
const CGRect rect = CGRectMake(((int)(screenSize.width - WIDTH_MAX)*0.5),
       ((int)(screenSize.height - WIDTH_MAX)*0.5), WIDTH_MAX, WIDTH_MAX);
overlayWindow = [[CustomWindow alloc] initWithFrame:rect];
overlayWindow.windowLevel = UIWindowLevelStatusBar; //1000.0
overlayWindow.hidden = NO; // I don't need it to be the key (no makeKeyAndVisible)
 
- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event
{
    // Find the front-most window (the one with the highest window level)
    // and call this method on that window, so the event gets forwarded to it.

    // Situation 1: this method is called twice (or even more, depending
    // on the number of windows the app has) per event. Why? Is this the
    // *normal* behaviour?
    NSLog(@" ");
    NSLog(@"Point: %@ Event: %p\n", NSStringFromCGPoint(point), event);
    UIView *view = nil;
    if (CGRectContainsPoint(self.bounds, point)) {
        NSLog(@"inside window\n");
        NSArray *wins = [[UIApplication sharedApplication] windows];
        __block UIWindow *frontMostWin = nil;
        [wins enumerateObjectsUsingBlock:^(id obj, NSUInteger idx, BOOL *stop) {
            NSLog(@"win: %@\n", obj);
            if ([obj windowLevel] >= [frontMostWin windowLevel] && obj != self) {
                frontMostWin = obj;
            }
        }];
        NSLog(@"frontMostWindow:%@\n finding a new view ...\n", frontMostWin);
        CGPoint p = [frontMostWindow convertPoint:point fromWindow:self];
        view = [frontMostWindow hitTest:p withEvent:event];
        // Situation 2: sometimes view is nil here. Is that correct?
    }
    NSLog(@"resultView: %@\n", view);
    return view;
}
EDIT:
I've also noticed that:
if hitTest:withEvent: always returns nil, it works too, but only when I show the window with overlayWindow.hidden = NO;
if I call [overlayWindow makeKeyAndVisible], returning nil in hitTest:withEvent: does not always work. It looks like a key window requires a proper implementation of the hit-testing method?
Am I missing something about event forwarding here?
Does frontMostWindow mean frontMostWin?
It looks like even if we use only one UIWindow, hitTest:withEvent: will be executed on it at least 2 times, so I guess that is normal.
You can receive nil at
view = [frontMostWindow hitTest:p withEvent:event];
due to the following reasons:
frontMostWindow is nil itself (for example, if you have only one window)
p is outside frontMostWindow's bounds (for example, when frontMostWindow is the keyboard and your touch is somewhere else)
frontMostWindow has its userInteractionEnabled property set to NO
hitTest:withEvent: being called several times is normal. This is probably because you only display a UILabel or UIImageView on the overlay window and thus touches are automatically forwarded.
However, I think you don't really need another OverlayWindow; instead you might consider a UIView on top of the key window. This should make your application cleaner...
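As a sketch of that suggestion (the frame and names are illustrative, not from the original post), a passthrough overlay view can be added to the key window directly:

// Hypothetical overlay added to the key window instead of a second UIWindow.
UIView *overlay = [[UIView alloc] initWithFrame:rect];
overlay.userInteractionEnabled = NO; // touches fall through to the views underneath
overlay.backgroundColor = [UIColor clearColor];
[[UIApplication sharedApplication].keyWindow addSubview:overlay];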
I faced the same problem. I tried to solve it based on your post, and another solution was found.
This code is implemented in the UIWindow overlaid on top.
It will pass events through to the window underneath when the touched area has no view.
- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event
{
    NSLog(@" ");
    NSLog(@"Point: %@ Event: %p\n", NSStringFromCGPoint(point), event);
    UIView *resultView = [super hitTest:point withEvent:event];
    if (resultView == self) {
        NSLog(@"touched in transparent window!!");
        return nil;
    }
    NSLog(@"touched in view!!");
    return resultView;
}
After all, thanks. Your post was very helpful.

Why do CALayer's move slower than UIView's?

I have a UIView subclass that moves 'events' around when the user touches them, using the following overridden method:
// In a custom UIView...
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint point = [[touches anyObject] locationInView:self];
    UIView *eventView = [self.ringOfEvents hitTest:point withEvent:event];
    for (EventView *e in self.events) {
        if (e == eventView) {
            e.position = point;
        }
    }
}
Why is it that when I make EventView a CALayer instead of UIView, the movement slows down to a creep? I can post that code too, but it is so similar that I don't think it is necessary.
I would think that abstracting to a lower level would speed up event handling, but I must be missing something.
By the way, whether EventView is a subclass of UIView or CALayer, the position property is as follows:
- (void)setPosition:(CGPoint)pos {
    self.layer.position = pos;
}

- (CGPoint)position {
    return self.layer.position;
}
Not sure why I get a huge decrease in latency when using UIView as opposed to CALayer...
Most CALayer property changes are animated by default, so the difference in latency is probably caused by that.
You may want to disable animations when the layer position is changed. Possible solutions are discussed, for example, here and here.
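For example, one common approach (a minimal sketch, not necessarily what the linked answers do) is to wrap the position change in a CATransaction with actions disabled:

- (void)setPosition:(CGPoint)pos {
    [CATransaction begin];
    [CATransaction setDisableActions:YES]; // suppress the implicit animation
    self.layer.position = pos;
    [CATransaction commit];
}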
This is due to implicit animations.
I've implemented a category method which removes the implicit animation for given keys and can be used like this:
[view.layer setNullAsActionForKeys:@[@"bounds", @"position"]];
Implementation:
@implementation CALayer (Extensions)

- (void)setNullAsActionForKeys:(NSArray *)keys
{
    NSMutableDictionary *dict = [self.actions mutableCopy];
    if (dict == nil)
    {
        dict = [NSMutableDictionary dictionaryWithCapacity:[keys count]];
    }
    for (NSString *key in keys)
    {
        [dict setObject:[NSNull null] forKey:key];
    }
    self.actions = dict;
}

@end

Detect touch down & touch up in a view containing UIControls

I'm trying to detect when a user touches down and up in a view. The view contains some buttons and other UIKit controls. I would like to detect these touch events but not consume them.
I've tried two approaches, but neither has been sufficient:
First, I added a transparent overlay overriding -touchesBegan:withEvent: and -touchesEnded:withEvent:, forwarding the events to the next responder with
[self.nextResponder touchesBegan:touches withEvent:event]
However, it seems that UIKit objects ignore all forwarded events.
Next, I tried overriding -pointInside:withEvent: and -hitTest:withEvent:. This worked great for detecting touch-down events, but pointInside: and hitTest: are not called on touch up (i.e. [[[event allTouches] anyObject] phase] is never equal to UITouchPhaseEnded).
What's the best way to detect both touch down and touch up events without disturbing interaction with the underlying UIControls?
According to the Event Handling Guide for iOS,
you have 3 options:
1) subclassing UIWindow to override sendEvent:
2) using an overlay view
3) designing so that you don't have to do that... so really more like 2 options.
Here is a simplification of Apple's example, using a UIWindow subclass:
1) Change the class of your window in the NIB to your subclass of UIWindow.
2) Put this method in the subclass's .m file.
- (void)sendEvent:(UIEvent *)event
{
    NSSet *allTouches = [event allTouches];
    NSMutableSet *began = nil;
    NSMutableSet *moved = nil;
    NSMutableSet *ended = nil;
    NSMutableSet *cancelled = nil;

    // sort the touches by phase so we can handle them similarly to normal event dispatch
    for (UITouch *touch in allTouches) {
        switch ([touch phase]) {
            case UITouchPhaseBegan:
                if (!began) began = [NSMutableSet set];
                [began addObject:touch];
                break;
            case UITouchPhaseMoved:
                if (!moved) moved = [NSMutableSet set];
                [moved addObject:touch];
                break;
            case UITouchPhaseEnded:
                if (!ended) ended = [NSMutableSet set];
                [ended addObject:touch];
                break;
            case UITouchPhaseCancelled:
                if (!cancelled) cancelled = [NSMutableSet set];
                [cancelled addObject:touch];
                break;
            default:
                break;
        }
    }

    // call our methods to handle the touches (after the loop, once per event)
    if (began)
    {
        NSLog(@"the following touches began: %@", began);
    }
    if (moved)
    {
        NSLog(@"the following touches were moved: %@", moved);
    }
    if (ended)
    {
        NSLog(@"the following touches were ended: %@", ended);
    }
    if (cancelled)
    {
        NSLog(@"the following touches were cancelled: %@", cancelled);
    }

    [super sendEvent:event];
}
It has way too much output, but you will get the idea... and can make your logic fit where you want it to.