Fix UIScrollView to pass events UP the chain rather than DOWN - iPhone

UIViews that don't handle their events pass them up the responder chain. By default, this passes them to their parent view and, if not handled, ultimately to their parent UIViewController.
UIScrollView breaks this (there are lots of questions on SO, variations on the theme of "why does my app stop working once I add a UIScrollView?").
UIScrollView (UISV) decides whether the event is for itself, and if not, it passes the event DOWN (into its subviews); if they don't handle the event, UISV just throws it away. That's the bug.
In that case, it's supposed to throw the event back up to its own parent view, and ultimately to the parent UIViewController. AFAICT, this is why so many people get confused: it's not working as documented (NB: as views in general are documented; UIScrollView simply is "undocumented" on this matter - it doesn't declare what it aims to do in this situation).
So ... is there an easy fix for this bug? Is there a category I could write that would fix UISV in general and save me from creating "fake" UIView subclasses that exist purely to capture events and hand them where they're supposed to go? (which makes for bug-prone code)
In particular, from Apple's docs:
If the timer fires without a significant change in position, the scroll view sends tracking events to the touched subview of the content view. If the user then drags their finger far enough before the timer elapses, the scroll view cancels any tracking in the subview and performs the scrolling itself.
...if I could override that "if the timer fires" method, and implement it correctly, I believe I could fix all my UISV instances.
But:
- would Apple consider this "using a private API"? (their description of "private" is nonsensical in normal programming terms, and I can't understand what they do and don't mean by it)
- does anyone know what this method is, or a good way to go about finding it? (debugging the compiled ObjC classes to find the symbol names, perhaps?)

I've found a partial answer that's correct, but not 100% usable :(.
iPhone OS 4.0 lets you remotely add listeners to a given view, via the UIGestureRecognizer class. That's great, and works neatly.
Only problem is ... it won't work on any 3.x iPhones and iPod Touches.
(but if you're targeting 4.0 and above, it's an easy way forward)
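A minimal sketch of that gesture-recognizer approach, assuming a view controller that owns the scroll view (the scrollView property and the handler selector are placeholders, not from the original code):

    // iOS 4.0+: attach a tap recognizer to the scroll view so the view
    // controller hears taps regardless of what the scroll view's subviews do.
    - (void)viewDidLoad {
        [super viewDidLoad];

        UITapGestureRecognizer *tap =
            [[UITapGestureRecognizer alloc] initWithTarget:self
                                                    action:@selector(scrollViewTapped:)];
        tap.cancelsTouchesInView = NO;   // let subviews still receive the touch
        [self.scrollView addGestureRecognizer:tap];
        [tap release];                   // pre-ARC, matching the era of the question
    }

    - (void)scrollViewTapped:(UITapGestureRecognizer *)recognizer {
        CGPoint point = [recognizer locationInView:self.scrollView];
        // Handle the tap here instead of relying on the broken responder chain.
        NSLog(@"Tap at %@", NSStringFromCGPoint(point));
    }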
EDIT:
On OS 3.x, I created a custom UIView subclass that has extra properties:
    NSObject *objectToDelegateToOnTouch;
    id touchSourceIdentifier;
Whenever a touch comes in, the view sends the touch message directly to the objectToDelegateToOnTouch, but with the extra parameter of the touchSourceIdentifier.
This way, whenever you get a touch, you know where it came from (you can use an object, or a string, or anything you want as the "identifier").
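A rough sketch of what that subclass might look like; only the two property names above come from the original description, while the class name and the forwarding selector (touchesBegan:withEvent:fromSource:) are made up for illustration:

    @protocol TouchForwardingTarget
    - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event fromSource:(id)source;
    @end

    @interface TouchForwardingView : UIView
    @property (nonatomic, assign) NSObject<TouchForwardingTarget> *objectToDelegateToOnTouch;
    @property (nonatomic, retain) id touchSourceIdentifier;
    @end

    @implementation TouchForwardingView
    @synthesize objectToDelegateToOnTouch, touchSourceIdentifier;

    - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
        // Hand the touch straight to the target object, tagged with this view's identifier.
        [objectToDelegateToOnTouch touchesBegan:touches withEvent:event
                                     fromSource:touchSourceIdentifier];
    }

    - (void)dealloc {
        [touchSourceIdentifier release];
        [super dealloc];
    }
    @end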

Related

How can I be notified of a banner notification in iOS?

I'm writing an iOS application and I'd like to pause my app's motion content when the operating system decides to show a banner notification.
Is there a system NSNotification that I can observe, or a method that gets called which I can react to? I've tried applicationWillResignActive, but that isn't called in this case.
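For reference, the resign-active attempt looks roughly like this (the handler name is illustrative); as noted, the notification simply isn't posted when a banner slides in:

    - (void)viewDidLoad {
        [super viewDidLoad];
        // Observe the resign-active notification. A banner does not trigger it;
        // it fires only when the app genuinely resigns active (call alert, Home button, etc.).
        [[NSNotificationCenter defaultCenter] addObserver:self
                                                 selector:@selector(pauseMotionContent:)
                                                     name:UIApplicationWillResignActiveNotification
                                                   object:nil];
    }

    - (void)pauseMotionContent:(NSNotification *)note {
        // Pause animations, timers, and other motion content here.
    }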
I took a stab at it this morning, and I'm inclined to say that there's no public API for this.
I tried using the code outlined here, and didn't catch any notifications. Then, I ran a bunch of "tests" to see if I could find anything.
To test, I created a pair of applications, one to schedule notifications (GitHub link), and one to try and "catch" them (GitHub link). In my Sender app, I can send N notifications every N seconds. I picked some arbitrarily high value and sent them.
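The scheduling side of such a test is straightforward with UILocalNotification; a minimal sketch (the count and interval here are arbitrary, the real Sender app is the one linked above):

    // Schedule a burst of local notifications so banners keep appearing
    // while the catcher app is frontmost.
    for (NSUInteger i = 1; i <= 20; i++) {
        UILocalNotification *note = [[UILocalNotification alloc] init];
        note.fireDate  = [NSDate dateWithTimeIntervalSinceNow:10.0 * i];
        note.alertBody = [NSString stringWithFormat:@"Test banner %lu", (unsigned long)i];
        [[UIApplication sharedApplication] scheduleLocalNotification:note];
        [note release];
    }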
In my catcher, I've tried looking at visibleRect values up and down the layer hierarchy. (The keyWindow lives in a layer, but its superview and its superlayer's delegate are both nil.) I haven't checked constraints, but that shouldn't matter. I've looked at the application's window, its nil superview, its layer, its subviews. The application's bounds aren't affected either. The app is sandboxed so well that SpringBoard and Notification Center don't exist in the app's world.
I started going down the path of accessing private frameworks, but decided it wasn't worth the effort.
I've opened Instruments and looked at the transparency levels of the views. (Is it possible to force all views in a hierarchy to be opaque, and then use that to see if the banner is blocking something? Perhaps it's not "blocking" if it's transparent?)
I've also attempted to take a screenshot and check the colors in the top area of the screen, but that wouldn't work because you need to pass a view into the context. Even if it did work, it wouldn't be particularly performant.
Another thought I've had would be to listen for push notifications on the push port, but I doubt that Apple would allow you to catch another app's notifications. As a developer, I wouldn't send private info in an alert, but it's still a concern.
The truth is that notification banners don't really cause your application to become inactive, so I'm not sure that this behavior is wrong. If it's an inconvenience, file a bug.
How about requesting DeviceWillShowNotificationNotification?

Xcode 4 organization, Views and Controllers

Thank you for reading this.
These are my first steps in iPhone/iPad app programming.
In order to learn from scratch (and because I know my app would need dynamic views), I decided not to use Interface Builder.
My question is (given that I don't use IB): how would one use Views and Controllers?
I think I understand the MVC concept as it is repeated over and over again in the tutorials I follow, but after the "MVC explanation" part, nothing is done to make it clear "in the field" and closer to the real world (Earth being Xcode here).
Worse, sometimes it seems that some tutorials mix these two concepts up and use one word to say the other.
I read around here a lot of questions (and answers of course) based on the matter but I still don't get it. Sometimes it's too generic, sometimes it's too specific (for me at least).
From what I think I understood, the UIView is the static view, while the view controller is the logic that links the view to the data, and those three concepts must be separated.
This separation, while a bit clearer with the use of Interface Builder, seems to get quite blurry when you code everything, as it becomes a virtual soup.
Technically, should I create a specific ".h" and ".m" file for each View AND ALSO for each associated Controller?
If I understand the MVC pattern, it seems that I should, but when I follow tutorials (without IB) it is never the case: views and controllers are created and manipulated within the same implementation files.
Any high level (I'm a noob, don't forget) but still applicable explanation of the use and best practices?
Let's say I want to create a simple app with a green view I can swipe to get to a red view.
I know for sure that I would need at least:
xxxappDelegate.h
xxxappDelegate.m
xxxView.h
xxxView.m
What else?
1) Where should I put the second view (along with the first one in "xxxView", or should I create another class with .h and .m files)?
2) What would the controller(s) do, for that kind of application? In which files would they be created and in which files would they be invoked and how would they "control" the related view?
3) Mainly, regarding the MVC pattern and the fact that there would be no IB, how would you organize that app?
I know it's a lot if you go into the details and code but that's not the point here.
Thank you. This - as simple as it seems - would be of great help and is not as easily found in tutorials as you might think.
I understand the tutorials I read but they are so particular. As soon as I try to create something on my own which is not a "Hello World" screen, I realize that something is missing, logic wise.
Thank you very much for your help.
Sorry, but I can't get past your first paragraph. If you don't use Interface Builder, you are not going to be a successful iOS programmer. It's that simple. The best advice I've ever read about this is in this Aaron Hillegass interview:
Experienced Cocoa programmers put a lot of the smarts of their application in the NIB file. As a result, their project has a lot less code. Programmers who have spent a few years working in Visual Studio get freaked out. They ask me stuff like, "Can I write Cocoa apps without using Interface Builder? I like to see the code. Maybe I can just explicitly create my windows and the views that go on it?"
It is difficult to explain how the NIB file (and a few other scary ideas) create leverage. It is that leverage that enables one guy in his basement to compete with a team of engineers at Microsoft or Adobe. It is like I showed a chain saw to an early American colonist, and he said, "Can I cut down the tree without starting the engine? I don't like the noise. Maybe I can just bang it against the tree?"
Yes, it's hard to generalize after reading specific tutorials, but you will learn. I thought the learning curve was insurmountable when I first started, but if I can become a programmer that gets paid to write Cocoa software, you can too. Just keep reading and practicing. Don't fight the tools--use them.
Early:
In order to learn from scratch (and because I know my app would need
dynamic views), I decided not to use Interface Builder.
Later:
As soon as I try to create something on my own which is not a "Hello
World" screen, I realize that something is missing, logic wise.
I think what is missing logic wise is that you have accepted your assumption that Interface Builder was a crutch and that to learn "from scratch" you had to avoid using it. You are trying to learn the MVC design pattern but you are not willing to use the tools that have been designed to support it.
In Apple's own documentation they discuss the fact that sometimes there is value in having combined roles (Model Controllers and View Controllers), and that is worth reading, as it may explain some of the code examples you're reviewing. But my primary advice would be: before assuming you know better than the people who built the tools, try using them the way they recommend. It might be an eye-opener.
Additions later:
OK, so to try and actually answer your questions...
1) Where should I put the second view (along with the first one in
"xxxView", or should I create another class with .h and .m files)?
If I am understanding correctly, and the two views you are thinking of here are the green and the red displays to the user, you wouldn't have a second view. What you would do, whether in IB or in code, is to have an element in your view on which you change a colour property. This would be done programmatically whether you were setting up the parent view in IB or in code.
2) What would the controller(s) do, for that kind of application? In
which files would they be created and in which files would they be
invoked and how would they "control" the related view?
There would be a view controller that would implement the gesture support, and would provide a method for changing the colour of the item in the view between green and red when that swipe gesture was successfully received. I would have a ViewController.h and a ViewController.m. I think if you were implementing the view entirely in code, it would be implemented in the ViewController.m rather than having a separate View.m. (If you were using IB, you would have a ViewController.h, ViewController.m and ViewController.xib, with the latter providing the basic setup of the view elements and layers.)
You would create a ViewController instance in your AppDelegate.
3) Mainly, regarding the MVC pattern and the fact that there would be
no IB, how would you organize that app?
As above.
If you really insist on going without IB (and I agree 100% with SSteve) then in addition to the files you list you will also want to use a UIViewController. Now, it is important to know that you only need to create header and implementation files when you are adding or changing default behavior.
In your case, the view can probably just be a generic UIView, so you wouldn't need the files. What you would do is subclass UIViewController and put the swipe logic there. In the swipe logic code you would probably just change the background color of the view.
You would instantiate the view controller in the delegate (in this case anyway) and create the view in the view controller's loadView method. That is required since you won't be using IB.
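A minimal sketch of that setup, pre-ARC style; the class and method names here are made up for illustration:

    @interface ColorViewController : UIViewController
    @end

    @implementation ColorViewController

    - (void)loadView {
        // Required when not using a nib: build the view hierarchy in code.
        UIView *view = [[UIView alloc] initWithFrame:[[UIScreen mainScreen] bounds]];
        view.backgroundColor = [UIColor greenColor];

        UISwipeGestureRecognizer *swipe =
            [[UISwipeGestureRecognizer alloc] initWithTarget:self
                                                      action:@selector(handleSwipe:)];
        [view addGestureRecognizer:swipe];
        [swipe release];

        self.view = view;
        [view release];
    }

    - (void)handleSwipe:(UISwipeGestureRecognizer *)recognizer {
        // The "red screen" is just the same view with a different background colour.
        self.view.backgroundColor = [UIColor redColor];
    }

    @end

    // In the app delegate (iOS 4+ style), after creating the window:
    self.window.rootViewController = [[[ColorViewController alloc] init] autorelease];
    [self.window makeKeyAndVisible];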
Personally though, I think that IB does a great job of encouraging proper MVC patterns, and if you are just starting then you should go with IB.
In practice you mostly do not make classes for views, unless they need to do custom drawing or display.
For lightweight configuration of views, that is often done in the viewController's viewDidLoad (or I guess in your case loadView) method.
Yes it's a good idea to keep model and view separated, but that's also balanced with the equally good idea to reduce the amount of code that exists. The less code that is written, the fewer bugs you will have.
Since you are just starting out at this point I would absolutely start by using ARC, and using IB - even though I'm sure you're tired of hearing that from everyone, I'll give you an alternate take. Less code means fewer bugs. And the fact that so many experienced developers are telling you to use it should be a giant clue about what a productive path forward is. I mean, are you doing this to build applications or learn every corner of the UIView class?
To speak to your code example, you do not need the UIView custom class. Just use a UIViewController's main view as a container view, and place a UIView inside with the background set to red. On swipe (using a gesture recognizer attached to the container view), call the UIView transition method to swap in a new green-background UIView for the existing red view; you can even define the transition style.
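A sketch of that swap, assuming the swipe recognizer is wired to this handler and redView is a property holding the current red subview (both names are placeholders):

    - (void)handleSwipe:(UISwipeGestureRecognizer *)recognizer {
        UIView *greenView = [[[UIView alloc] initWithFrame:self.redView.frame] autorelease];
        greenView.backgroundColor = [UIColor greenColor];

        // transitionFromView:... removes the old view, adds the new one,
        // and animates with the chosen transition style (iOS 4+).
        [UIView transitionFromView:self.redView
                            toView:greenView
                          duration:0.4
                           options:UIViewAnimationOptionTransitionFlipFromLeft
                        completion:NULL];
    }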
Or create a scroll view in the container view, set up the red and green views inside the scroll view, set the content size, and enable paging on the scroll view.
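That alternative might be set up like this (containerView is a placeholder for the controller's main view):

    // Two full-size pages, red and green, in a paging scroll view.
    CGRect bounds = containerView.bounds;
    UIScrollView *scrollView = [[UIScrollView alloc] initWithFrame:bounds];
    scrollView.pagingEnabled = YES;
    scrollView.contentSize = CGSizeMake(bounds.size.width * 2, bounds.size.height);

    UIView *redPage = [[UIView alloc] initWithFrame:bounds];
    redPage.backgroundColor = [UIColor redColor];

    UIView *greenPage = [[UIView alloc] initWithFrame:CGRectOffset(bounds, bounds.size.width, 0)];
    greenPage.backgroundColor = [UIColor greenColor];

    [scrollView addSubview:redPage];
    [scrollView addSubview:greenPage];
    [containerView addSubview:scrollView];

    [redPage release];
    [greenPage release];
    [scrollView release];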
Or create a custom UIView class as you had, listen for touch events and slowly adjust two subview positions to follow the drag action.
Or use an OpenGL backed view, and based on the gesture recognizer pan the scene you are observing with two triangles for a green rectangle and two triangles for a red rectangle.

What is a good design pattern for skinning an (iphone) app?

I'm far far down the road of having built all my nice buttons and things in a XIB, and I had the sudden realization that people will pay $200 more for the lower quality laptop "Cause it's pink!" but I digress.
I need to somehow centrally control the colour scheme of my app, and change colours on the fly. I have a couple of ideas, like maybe a singleton "Theme" object with a few KVO-compliant properties holding theme colours and fonts, then have the view controller "listen" for changes and re-paint everything when the theme changes.
Problems with this so far: I'd need to have a pointer to every single UI object, including things like table view cells, which seems like a pain. Another possibility would be to subclass all my themed UI objects and maybe get them to register as observers on their own, but that makes me wonder about the overhead needed for this; maybe KVO isn't even the way to go here, I don't know.
So I'm wondering if anyone could share what they've done in the past, what works and what leads to big problems?
Thanks!
Update
I ended up going with a singleton object called SkinDispatcher, which only contains a lot of properties that are meant to be observed by UIView subclasses. Then I made quick and dirty subclasses of UILabel, UIButton, UITextField, anything else I happened to use.
These subclasses each looked up their tag number and used it to register for changes to the applicable fonts and colours, in awakeFromNib.
Next I made a class specifically for loading a style, which essentially opens up a plist file full of keys holding font names and R|G|B|A colours, reads them in and sets them to the applicable property in SkinDispatcher, as determined by key name. The last step (yet to be tried) is to set an observer on the StandardUserDefaults key for skins, props to the answerer for that idea.
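The loading step looks roughly like this; the sharedDispatcher accessor, the plist key convention and the skinName variable are illustrative, while the SkinDispatcher name and the R|G|B|A format come from the description above:

    // Load a skin plist and push each value onto the matching SkinDispatcher
    // property via KVC, which fires the KVO notifications the subclasses observe.
    NSString *path = [[NSBundle mainBundle] pathForResource:skinName ofType:@"plist"];
    NSDictionary *skin = [NSDictionary dictionaryWithContentsOfFile:path];

    for (NSString *key in skin) {
        NSString *value = [skin objectForKey:key];
        if ([key hasSuffix:@"Color"]) {
            NSArray *parts = [value componentsSeparatedByString:@"|"];
            UIColor *color = [UIColor colorWithRed:[[parts objectAtIndex:0] floatValue]
                                             green:[[parts objectAtIndex:1] floatValue]
                                              blue:[[parts objectAtIndex:2] floatValue]
                                             alpha:[[parts objectAtIndex:3] floatValue]];
            [[SkinDispatcher sharedDispatcher] setValue:color forKey:key];
        } else {
            [[SkinDispatcher sharedDispatcher] setValue:value forKey:key];
        }
    }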
You already mentioned a possible approach, namely using KVO. Another possibility is to register for a specific notification, and react accordingly when the notification arrives in your view controller.
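A sketch of the notification-based version (the notification name and handler are made up for illustration):

    // Post one app-wide notification when the theme changes...
    [[NSNotificationCenter defaultCenter] postNotificationName:@"ThemeDidChangeNotification"
                                                        object:newTheme];

    // ...and have each view controller observe it and repaint itself.
    [[NSNotificationCenter defaultCenter] addObserver:self
                                             selector:@selector(themeDidChange:)
                                                 name:@"ThemeDidChangeNotification"
                                               object:nil];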

App development: Always subclass, always load from NIBs - caveats?

This is Cocoa Touch (et al), iPhone, XCode only.
After completing my first commercial iPhone app, I'm struggling a bit to find a way to start and expand an app from scratch which gives the most linear development (i.e., the least scrapping, re-write or re-organization of code, classes and resources) as app specs change and I learn more (mostly about what Cocoa Touch and other classes and components are designed to be capable of and the limitation of their customization).
So. File, New Project. Blank window based app? Create the controllers I need, with .xib if necessary, so I can localize them and do changes requested by the customer in IB? And then always subclass each class except those extremely unlikely to be customized? (I mean framework classes such as UIButton, CLLocation etc here.)
The question is a generic 'approach' type question, so I'll be happy to listen to handy dev practices you've found paid off. Do you have any tips for which 're-usable components' you've found have become very useful in subsequent projects?
Clients often describe programs in terms of 'first, this screen appears, and then you can click this button and on the new screen you can select... (and so on)' terms. Are there any good guides to go from there to vital early-stage app construction choices, i.e. 'functions-features-visuals description to open-ended-app-architecture'?
For example, in my app I went from NavBar, to Toolbar with items, to Toolbar with two custom subviews in order to accommodate the functions-features-visuals description. Maybe you have also done such a thing and have some advice to offer?
I'm also looking for open-ended approaches to sharing large ("loaded data") objects, or even simple booleans, between controllers and invoking methods in another controller, specifically starting processes such as animation and loading (example: trigger a load from a URL in the second tab viewcontroller after making sure an animation has been started in the first tab viewcontroller), as these two features apply to the app architecture building approach you advocate.
Any handy pointers appreciated. Thanks guys.
Closing this up as there's no single correct answer, and it was more suitable for the other forum, had I known it existed when I asked :)
If you want to know the method I ended up with, it's basically this:
Window-based blank app
Navigation Controller controls all, whether I need it or not (hide it when not used)
Tab Bar Controller if necessary
Connect everything <-- unhelpful, I know.
Set up and check autorotation, it might get added to some view later.
Add one viewcontroller with a xib for each view; you never know when they want an extra button somewhere. It's easier to copy code than to make the max ultra superdynamic adjustable tableviewcontroller that does all list navigation, etc.
Re-use a viewcontroller only when just the content differs in it, such as a detail viewcontroller.
Minimize code in each viewcontroller by writing functions and methods and shoving them in a shared .m
Everything that's shared ends up in the App delegate, except subclassed stuff.
Modal viewcontrollers are always dynamically created and never have an xib.

Delay with touch events

We have an app in the App Store, Bust~A~Spook, that we had an issue with. When you tap the screen we use CALayer to find the position of all the views during their animation, and if you hit one we start a die sequence. However, there is a noticeable delay; it appears as if the touches are buffered and we receive the event too late. Is there a way to poll, or any better way to respond to touches, to avoid this lag time?
This is in a UIView not a UIScrollView
Are you using a UIScrollView to host all this? There's a property of that called delaysContentTouches. This defaults to YES, which means the view tries to ascertain whether a touch is a scroll gesture or not before passing it on. You might try setting this to NO and seeing if that helps.
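If a scroll view does turn out to be involved, the tweak is a one-liner (the scroll view name is a placeholder):

    // Deliver touches to subviews immediately instead of waiting to rule out a scroll.
    myScrollView.delaysContentTouches = NO;
    // Related: canCancelContentTouches controls whether the scroll view may later
    // cancel touches it has already delivered to a subview.
    myScrollView.canCancelContentTouches = NO;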
This is a pretty old post about a seasonal app, so the OP probably isn't still working on this problem, but in case others come across this same problem, here are some notes they may find useful.
I agree with Kriem that CPU overload is a common cause of significant delay in touch processing, though there is a lot of optimization one can do before having to pull out OpenGL. CALayer is quite well optimized for the kinds of problems you're describing here.
We should first check the basics:
CALayers added to the main view's layer
touchesBegan:withEvent: implemented in the main view
When the phase is UITouchPhaseBegan, you call hitTest: on the main view's layer to find the appropriate sub-layer
Die sequence starts on the relevant model object, updating the layer.
Then, we can check performance using Instruments. Make sure your CPU isn't overloaded. Does everything run fine in the simulator but have trouble on the device?
The problem you're trying to solve is very common, so you should not expect a complex or tricky solution to be required. It is most likely that the design or implementation has a basic flaw and just needs troubleshooting.
Delayed touches usually indicate a CPU overload. Using an NSTimer for frame-to-frame based action is prone to interfering with the touch handling.
If that's the case for your app, then my advice is very simple: OpenGL.
If you're doing any sort of core-animation animation of the CALayers at the same time as you're hit-testing, you must get the presentationLayer before calling hitTest:, as the positions of the model layers do not reflect what might be on screen, but the positions to which the layers are animating.
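A sketch of what that hit test might look like inside the main view's touchesBegan:withEvent: (the die-sequence method is a placeholder):

    - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
        UITouch *touch = [touches anyObject];
        CGPoint point = [touch locationInView:self];

        // hitTest: expects a point in the receiver's superlayer's coordinate space.
        CGPoint superlayerPoint = [self.layer convertPoint:point toLayer:self.layer.superlayer];

        // The presentation layer holds the in-flight, on-screen values;
        // the model layer already holds the animation's end values.
        CALayer *hit = [[self.layer presentationLayer] hitTest:superlayerPoint];
        if (hit != nil) {
            // Map back to the model layer to decide which target was tapped.
            CALayer *tappedModelLayer = [hit modelLayer];
            [self startDieSequenceForLayer:tappedModelLayer];  // placeholder method
        }
    }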
Hope that helps.