I am using a UIPageViewController to load 10 webpages in a web view.
All the webpages load properly one by one, but I am facing a weird problem in
- (UIViewController *)pageViewController:(UIPageViewController *)pageViewController viewControllerAfterViewController:(UIViewController *)viewController
Right now I am on the 1st page, so there is no possibility of going back via
- (UIViewController *)pageViewController:(UIPageViewController *)pageViewController viewControllerBeforeViewController:(UIViewController *)viewController
After loading the 1st page, I was just scrolling up and down in the web view when suddenly the viewControllerAfterViewController method got called. The strange thing is that it does not move to the next view, i.e. the 2nd page is not loaded.
What could be the problem?
I'm not sure, but I think the problem is in the gesture recognizers, or in the location of their CGRects. Basically, the page view controller listens for a touches-moved event, so if the user swipes while scrolling the web view, the page view controller tries to execute a page turn (I'm guessing). I think a solution would be to make sure the page view controller's touch region (CGRect) does not overlap with the web view. Apparently you can set the region in which a touch will cause the UIPageViewController to turn the page.
You can start by looking up the "UIGestureRecognizer Class Reference" in the iOS Reference Library. I hope this helps.
Here's what I found there:
locationInView:
Returns the point computed as the location in a given view of the gesture represented by the receiver.
- (CGPoint)locationInView:(UIView *)view
Parameters
view
A UIView object on which the gesture took place. Specify nil to indicate the window.
Return Value
A point in the local coordinate system of view that identifies the location of the gesture. If nil is specified for view, the method returns the gesture location in the window’s base coordinate system.
Discussion
The returned value is a generic single-point location for the gesture computed by the UIKit framework. It is usually the centroid of the touches involved in the gesture. For objects of the UISwipeGestureRecognizer and UITapGestureRecognizer classes, the location returned by this method has a significance special to the gesture. This significance is documented in the reference for those classes.
Availability
Available in iOS 3.2 and later.
See Also
– locationOfTouch:inView:
Declared In
UIGestureRecognizer.h
locationOfTouch:inView:
Returns the location of one of the gesture’s touches in the local coordinate system of a given view.
- (CGPoint)locationOfTouch:(NSUInteger)touchIndex inView:(UIView *)view
Parameters
touchIndex
The index of a UITouch object in a private array maintained by the receiver. This touch object represents a touch of the current gesture.
view
A UIView object on which the gesture took place. Specify nil to indicate the window.
Return Value
A point in the local coordinate system of view that identifies the location of the touch. If nil is specified for view, the method returns the touch location in the window’s base coordinate system.
Availability
Available in iOS 3.2 and later.
See Also
– locationInView:
Declared In
UIGestureRecognizer.h
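For example, a minimal sketch of that idea, assuming a page-curl style UIPageViewController (whose gestureRecognizers property is populated) and a webView property; both names are placeholders for your own setup:
// Somewhere during setup, e.g. viewDidLoad, become the delegate of the
// page view controller's own gesture recognizers.
for (UIGestureRecognizer *recognizer in self.pageViewController.gestureRecognizers) {
    recognizer.delegate = self;
}

// UIGestureRecognizerDelegate: refuse touches that begin inside the web view,
// so scrolling the web view does not start a page turn.
- (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer shouldReceiveTouch:(UITouch *)touch {
    CGPoint location = [touch locationInView:self.webView];
    return ![self.webView pointInside:location withEvent:nil];
}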
Related
Whilst developing an app I have come up against a problem with having too many pan gesture recognisers.
My first pan gesture recogniser is on the MainViewController which is a parent of the RecipeSearchVC. This gesture recogniser slides the whole view left or right.
My second pan gesture recogniser is in the RecipeSearchParametersVC, which is a parent of a Page View Controller.
The third pan gesture recogniser is added to a UIControl wheel nested inside a view controller displayed by the PageViewController.
I know this sounds insane, and it could be argued that this is poor design. However, I believe that if this worked cohesively it would be fine.
When trying to rotate the wheel it will rotate for a second or two before the gesture is overtaken by either the PageViewController or the MainViewController. More often than not it is the MainViewController that takes over. What techniques could I employ to clearly separate each of these gesture recognisers?
EDIT:
Apologies for the vagueness of my description when it comes to the pan gesture recognisers.
The MainViewController has its own UIPanGestureRecogniser to allow it to move everything left or right.
The RecipeSearchParametersVC only has a UIPanGestureRecogniser because of the UIPageViewController it contains. It does not add the gesture recogniser itself, but simply takes it from the pageViewController.
The UIControl's gesture recognisers allow it to track the rotation it should undergo.
Taking the advice given, I may remove the gestures from the page view controller and substitute them with buttons. I only intended this to work like the images found in iBooks (which can be scrolled to reveal more images), so I thought it would work fine.
UIControl UIPanGestureRecogniser Code
/**
 * Sent to the control when a touch related to the given event enters the control's bounds.
 *
 * @param touch UITouch object that represents a touch on the receiving control during tracking
 * @param event event object encapsulating the information specific to the user event
 */
- (BOOL)beginTrackingWithTouch:(UITouch *)touch
withEvent:(UIEvent *)event
{
[super beginTrackingWithTouch:touch withEvent:event];
CGPoint touchPoint = [touch locationInView:self];
// filter out touches too close to the centre of the wheel
CGFloat magnitudeFromCentre = [self calculateDistanceFromCentre:touchPoint];
if (magnitudeFromCentre < 40) return NO;
// calculate distance from centre
CGFloat deltaX = touchPoint.x - _container.center.x;
CGFloat deltaY = touchPoint.y - _container.center.y;
// calculate the arctangent of the opposite (y axis) over the adjacent (x axis) to get the angle
_deltaAngle = atan2(deltaY, deltaX);
_startTransform = _container.transform;
// selection in limbo, so set all sector images to the minimum value by changing the current one
[self getSectorByValue:_currentSector].alpha = kMinimumAlpha;
return YES;
}
Unfortunately due to the nature of my controller hierarchy I was forced to rethink the design of my app.
The MainViewController with the UIPanGestureRecogniser has stayed as is.
The UIPageViewController with the UIControl has moved to a separate static view controller.
This works far better but is not yet ideal. The UIPageViewController steals any horizontal panning; however, this can probably be fixed by implementing buttons as an alternative to the scrolling.
The UIControl did not have a gesture recogniser; instead I overrode beginTrackingWithTouch:withEvent: and the other tracking methods to track the touches.
I suppose the answer should be: if you are layering too many gestures, you're doing it wrong.
You will need to add a container to the wheel, and then you can do something like this; if I am not missing something, this code should work.
UIPanGestureRecognizer *pan1 = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(pan1:)];
[self.view addGestureRecognizer:pan1];
UIPanGestureRecognizer *pan2 = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(pan2:)];
[view2 addGestureRecognizer:pan2];
UIPanGestureRecognizer *pan3 = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(pan3:)];
[view3 addGestureRecognizer:pan3];

- (void)pan1:(UIPanGestureRecognizer *)sender {
    NSLog(@"%@", sender);
}

- (void)pan2:(UIPanGestureRecognizer *)sender {
    NSLog(@"%@", sender);
}

- (void)pan3:(UIPanGestureRecognizer *)sender {
    NSLog(@"%@", sender);
}
I believe you would have defined 3 different UIPanGestureRecognizer objects and attached them to the appropriate views.
While technically this is correct, it could cause some confusion: for example, if you have several overlapping views (you have 3 here) and a touch needs to reach a view that is not at the top of the view stack, what happens? The gestures could end up conflicting.
Instead, it's possible to attach a single gesture recognizer to the superview of the target views and delegate the gesture to the correct view based on the coordinates of where the user is touching. For that you need to normalize the touch coordinates originating from any of the subviews to the superview where the UIPanGestureRecognizer is defined. That way you can tell whether the pan happened on the wheel or elsewhere.
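As a rough sketch of that idea (handlePan:, wheelView, rotateWheelWithGesture: and slideViewWithGesture: are placeholder names, not code from the question):
- (void)handlePan:(UIPanGestureRecognizer *)sender {
    // Convert the touch location into the wheel's coordinate system and
    // check whether the pan is currently over the wheel.
    CGPoint pointInWheel = [sender locationInView:self.wheelView];
    if ([self.wheelView pointInside:pointInWheel withEvent:nil]) {
        [self rotateWheelWithGesture:sender];   // pan belongs to the wheel
    } else {
        [self slideViewWithGesture:sender];     // otherwise slide the main view
    }
}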
PS: That being said, I strongly feel this is a design gotcha and it will hurt you. I have done things like this, but eventually you will stumble on some corner case where the user interaction is horrid. If this is the main view of your app I suggest you rethink the design.
What techniques could I employ to clearly separate each of these gesture recognisers?
You should look into the UIGestureRecognizerDelegate method:
- (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer shouldRecognizeSimultaneouslyWithGestureRecognizer:(UIGestureRecognizer *)otherGestureRecognizer
Asks the delegate if two gesture recognizers should be allowed to recognize gestures simultaneously.
This method is called when recognition of a gesture by either gestureRecognizer or otherGestureRecognizer would block the other gesture recognizer from recognizing its gesture. Note that returning YES is guaranteed to allow simultaneous recognition; returning NO, on the other hand, is not guaranteed to prevent simultaneous recognition because the other gesture recognizer's delegate may return YES.
I believe this will solve your problem of gestures being 'overtaken' by other gestures.
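As an illustrative sketch only (wheelPanRecogniser is a placeholder for whichever recognizer you want to keep running alongside the others):
- (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer shouldRecognizeSimultaneouslyWithGestureRecognizer:(UIGestureRecognizer *)otherGestureRecognizer {
    // Let the wheel's pan run at the same time as the page/main pans
    // instead of being cancelled by them.
    return gestureRecognizer == self.wheelPanRecogniser || otherGestureRecognizer == self.wheelPanRecogniser;
}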
It appears that all the touch methods of a UIView are only called if the touches began within the bounds of that view. Is there a way to have a view respond to a user who has touched outside the view, but then dragged his fingers into the view?
In case it matters, my specific application is for dragging an MKPinAnnotationView (using the built-in 4.0 dragging). I want something to happen if the user drags a pin onto another view (which happens to be an annotation view as well, but it could be anything). No method for dragging is called until I let go of the pin, and no method on the UIView being dragged onto seems to be called unless I started by touching within that view.
Because the superview is a MKMapView, it is difficult to just use the touchesMoved event of that and check if the user is in the right location or not. Thanks!
So after playing around with it for a while, I found that the answer given here actually gave me what I needed, even though the question being asked was different.
It turns out you can subclass UIGestureRecognizer and have it handle all the touches for the view it has been added to (including an MKMapView). This allows all the normal MKMapView interactions to still behave without any problem, but also alerts me of the touches. In touchesMoved, I just check the location of the touch and see whether it is within the bounds of my other view.
From everything I tried, this seems to be the only way to intercept touchesMoved while the user is dragging an MKAnnotation.
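A rough sketch of that approach might look like the following (the class name, the targetView property and the reaction inside touchesMoved are all placeholders):
#import <UIKit/UIGestureRecognizerSubclass.h>

@interface TouchObservingGestureRecognizer : UIGestureRecognizer
@property (nonatomic, assign) UIView *targetView;   // the view we want to detect drags over
@end

@implementation TouchObservingGestureRecognizer
@synthesize targetView;

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    CGPoint point = [touch locationInView:self.targetView];
    if ([self.targetView pointInside:point withEvent:event]) {
        // the finger is currently over targetView; notify whoever cares
    }
    // Deliberately never leave the Possible state, so the map view's own
    // gesture handling keeps working exactly as before.
}

@end
The recognizer is then added to the map view with addGestureRecognizer:, and because it never claims the gesture, the map's built-in panning and the pin drag continue to work.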
You sure can:
(HitstateView.h)
#import <UIKit/UIKit.h>
@interface HitstateView : UIView {
    id overrideObject;
}
@property (nonatomic, retain) id overrideObject;
@end
(HitstateView.m)
#import "HitstateView.h"
@implementation HitstateView
@synthesize overrideObject;

- (void)dealloc {
    self.overrideObject = nil;
    [super dealloc];
}

- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event {
    UIView *hitView = [super hitTest:point withEvent:event];
    if (hitView == self) {
        return overrideObject;
    }
    return hitView;
}

@end
Make this view the size of your touch area. Set the overrideObject to the view you want the touches to go to. IIRC it ought to be a subview of the HitstateView.
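A hypothetical usage sketch (the frame, containerView and targetView are placeholders):
HitstateView *hitView = [[HitstateView alloc] initWithFrame:CGRectMake(0, 0, 320, 200)];
hitView.backgroundColor = [UIColor clearColor];
hitView.overrideObject = targetView;    // touches landing on hitView are redirected here
[containerView addSubview:hitView];
[hitView release];                      // matches the manual retain/release style above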
Every view inherits from UIResponder, so every view gets touchesBegan/Moved/Ended. I do not think that starting the touch outside the view means the view gets no event when the touch moves over it. If you want to be notified that something has been dragged onto your MKMapView, you should make a subclass that handles the touch but then passes the event to super, allowing the hierarchy to do whatever it needs to do with the touch. You don't need to capture or modify the event, just observe it.
It depends on how your views are set up. Generally leveraging the responder chain is the best way to go. It allows you to play tricks, though it may be too specific to address your particular needs.
You can also play tricks with forwarding events by overriding hit testing:
http://developer.apple.com/library/ios/#documentation/EventHandling/Conceptual/EventHandlingiPhoneOS/MultitouchEvents/MultitouchEvents.html%23//apple_ref/doc/uid/TP40009541-CH3-SW3
Your particular case sounds pretty exotic, so you may have to play tricks like having a parent view whose frame is large enough to contain both views in question.
So when I see ccTouchesBegan (or touchesBegan, for that matter) I usually see something like this:
- (void)ccTouchesBegan:(NSSet*)touches withEvent:(UIEvent*)event
{
UITouch* touch = [touches anyObject];
}
But what I am not getting is how you detect whether just one object has been touched. For example, how would I check whether a specific CCSprite I declared has been touched?
Sorry if this is a newbish question, but I just don't understand; if you need more clarification, just ask.
I'm not familiar with cocos2d, but in the standard API the touches are sent first to the view that was touched and then up the view responder chain to a view that has a controller. If that controller does not handle the touch, it goes up to the next view until it ends up at the window object.
See Responder Objects in the Responder Chain
The best place to trap touches for specific objects is in the objects themselves. In the case of a sprite-like view, the sprite itself most likely needs to respond to the touch, e.g. by moving itself. If you need the touch to be communicated to another object, you should use the delegate pattern so that the sprite can tell its delegate how it's been touched.
That last sentence sounded weird.
I don't have the samples in front of me but there should be an example in the Cocos2D download package which demonstrates a touch event and how it propagates down to sprites.
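For what it's worth, a rough cocos2d sketch of hit-testing a single sprite might look like this (it assumes a _mySprite ivar and the cocos2d 1.x/2.x API, so treat it as a starting point rather than a definitive answer):
- (void)ccTouchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    // Convert from UIKit view coordinates to cocos2d's OpenGL coordinates.
    CGPoint location = [[CCDirector sharedDirector] convertToGL:[touch locationInView:[touch view]]];
    if (CGRectContainsPoint([_mySprite boundingBox], location)) {
        // _mySprite was touched
    }
}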
I have an MKMapView inside a UITableView as a custom cell (don't ask ;) - don't know if it matters really), for which I register a regionDidChangeAnimated delegate method. This method gets called three times when the UITableView is loaded: once with the actual region and then two more times with a region that is way off. In the simulator, I consistently get a region with center (+37.43997405, -97.03125000). On the device, it seems to depend on the location reported by the location manager, which initializes the map view.
Why am I getting three regionDidChangeAnimated calls? And why are the center coordinates for the last two of them off?
This is the code I use to get the center coordinates:
- (void)mapView:(MKMapView *)mapView regionDidChangeAnimated:(BOOL)animated {
    CLLocation *l = [[CLLocation alloc] initWithLatitude:self.mapView.centerCoordinate.latitude
                                               longitude:self.mapView.centerCoordinate.longitude];
    (....)
I have set up a map view inside a custom table view cell and added that cell to a table view (although it definitely should not matter where/how the map view is displayed).
I do not see any unexpected calls to the regionDidChangeAnimated: delegate method.
I see calls to this method only when:
The user changes the position/zoom of the map, OR
Some code changes the center/span of the map
Are you sure that you are seeing unexpected calls? Are you not using code to set up the region (center/span) of the map?
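If the extra calls do turn out to come from setting the region in code (for instance when the location manager first positions the map), one common way to tell them apart is a flag, as in this sketch (ignoreRegionChange is a placeholder ivar):
// Before changing the region programmatically:
ignoreRegionChange = YES;
[self.mapView setRegion:region animated:NO];

- (void)mapView:(MKMapView *)mapView regionDidChangeAnimated:(BOOL)animated {
    if (ignoreRegionChange) {      // callback triggered by our own setRegion:
        ignoreRegionChange = NO;
        return;
    }
    // handle genuine, user-driven region changes here
}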
I'm new to iPhone development, and I wanted to know: exactly what does the UITouch instance method view do?
[touch view];
I read the documentation and understand that it returns the view that the touch was 'in', but what if there is a view hierarchy? I had originally assumed it would return the subview that is furthest up 'front'. Does this assumption always hold true?
What is the recommended way of determining whether the touch was on a certain view or not?
Yes, it would return the view that was actually touched (the topmost view) regardless of where the touch is handled.
The only exception I can think of would be if the top-most view was invisible.
Actually, I believe it gives you the view where the touch 'started', even though the touch may now be in a different view. Something to keep in mind if you are watching touchesMoved and touchesEnded events.
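Regarding the recommended way of determining whether a touch is on a certain view: a simple check that works no matter where the touch started is to convert the touch into the candidate view's coordinate system, roughly like this (someView is a placeholder):
CGPoint point = [touch locationInView:someView];
BOOL touchIsOverView = [someView pointInside:point withEvent:event];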