How to implement touch when using Quartz 2D - iPhone

I'm using Quartz 2D for a school project and have implemented a user interface like the following.
My question is: how do I implement touch handling when drawing with Quartz 2D? For example, when I touch some area of the menu, how can I draw another picture?
Thanks

Quartz or no, you're still drawing into a view, and you probably still have a view controller, right? Both the view and the view controller are subclasses of UIResponder, so implement the usual -touchesBegan:withEvent:, -touchesEnded:withEvent:, etc. Since it's probably your view that knows how the various parts of your UI are drawn, it would make sense for the view to handle the touches, figure out which part of the UI was touched, and send higher-level messages to the view controller.
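For concreteness, here is a minimal sketch of that idea. The MenuView class, its delegate protocol, and the four-item layout are assumptions made for the example, not something from the original question; the point is only that the Quartz-drawn view maps the touch location to a region it drew itself, then sends a higher-level message to the controller.

```objc
// Minimal sketch only: MenuView, its delegate protocol, and the four-item
// layout are illustrative assumptions.
#import <UIKit/UIKit.h>

@protocol MenuViewDelegate <NSObject>
- (void)menuViewDidSelectItemAtIndex:(NSUInteger)index;
@end

@interface MenuView : UIView
@property (nonatomic, assign) id<MenuViewDelegate> delegate;  // typically the view controller
@end

@implementation MenuView
@synthesize delegate;

- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetGrayFillColor(context, 0.9f, 1.0f);
    CGContextFillRect(context, self.bounds);
    // ...Quartz drawing of the individual menu items goes here...
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    CGPoint point = [touch locationInView:self];
    // The view knows how it laid the menu out, so it can map the point to an item.
    CGFloat itemWidth = self.bounds.size.width / 4.0f;   // assuming four menu items
    NSUInteger index = (NSUInteger)(point.x / itemWidth);
    [delegate menuViewDidSelectItemAtIndex:index];        // the higher-level message
}
@end
```

The view controller would adopt the (made-up) MenuViewDelegate protocol, set itself as the view's delegate, and swap in or redraw the new picture when the message arrives.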

Related

Which design approach should I take for creating this custom view?

I'm trying to create a generic, reusable view, that looks like a lined notepad. The way I decided to approach the problem (after a couple design iterations) is to create a custom view that is composed of a UITextView and a UIView.
When the user scrolls through lines of text I want the UIView to track the scroll direction. The key here is: Within my custom view, I need to change the position of one subview in response to events in another subview. Something needs to coordinate these changes...
Now, one approach I thought of taking was to use a MVC design pattern. A view controller could handle all events and move the subviews around accordingly. This MVC could then be embedded in other MVCs.
Normally when using an MVC design pattern, a controller would handle user events and manipulate the model and view. However, my custom view doesn't have a model - all I'm trying to do is have the view manage its own subviews when the user does something like scroll. It seems to me that the MVC design pattern isn't a good fit here for two reasons:
There isn't a model or logic that is specific to the program it's being used in.
It seems to me that the view should be responsible for handling user events that change how the view should appear.
... but I could be wrong, which is why I'm asking for help. The question, for those who are more experienced than I and who may have done this many times before, is:
What type of design pattern is appropriate in this situation? MVC or...
You want a view to manage its own subviews? Then do that! So what if that pattern doesn't have a TLA?
A typical approach is to implement layoutSubviews in your container view. Have it check its current state, or the state of the other views in the window (e.g. the contentOffset of a scroll view), and then set up its subviews appropriately. (Resize them, reposition them, etc.)
Just try to keep it fast, since it's likely that layoutSubviews will be called frequently.
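To make that concrete, here is a rough sketch of such a layoutSubviews for the lined-notepad case, assuming the container owns a UITextView and a ruled-lines background view and is set as the text view's delegate. The class and property names are made up for the example.

```objc
// Illustrative sketch only: the class and property names are assumptions.
#import <UIKit/UIKit.h>

@interface NotepadView : UIView <UITextViewDelegate>
@property (nonatomic, retain) UITextView *textView;  // the text the user scrolls/edits
@property (nonatomic, retain) UIView *linesView;     // the ruled-lines background, a sibling subview
@end

@implementation NotepadView
@synthesize textView, linesView;

- (void)layoutSubviews {
    [super layoutSubviews];
    // Keep the ruled-lines view tracking the text view's scroll position.
    CGRect linesFrame = self.linesView.frame;
    linesFrame.origin.y = -self.textView.contentOffset.y;
    self.linesView.frame = linesFrame;
}

// UITextView is a UIScrollView subclass, so if this view is set as the text
// view's delegate it hears about scrolling and can trigger a re-layout.
- (void)scrollViewDidScroll:(UIScrollView *)scrollView {
    [self setNeedsLayout];
}

- (void)dealloc {
    [textView release];
    [linesView release];
    [super dealloc];
}
@end
```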

iPhone - Drawing into a view: philosophy and interactions

I've understood that I need to subclass a UIView to be able to draw inside it.
The thing I don't understand yet is the philosophy of the way it must be done...
Let's say I have a view controller, and depending on context, I may want to draw a line into one of the subviews it manages, or a circle, or a rect, or a processed graphic. Or let's say two points that move inside a defined rect within a view and display a bigger point when they get close to each other.
How should I subclass and define the subview so that it can do all of this only in its drawRect method?
How does the controller interact with the views? It manages more than this one simple UIView (imagine a view controller that manages a view containing many other views, and you want to draw into two of them), and it knows what needs to be drawn into which view (it's a controller, isn't it?). And once the drawing is done, how do the views interact with the controller?
I've read many docs about drawing (Apple, web, forums, tutorials, ...), but I still can't grasp the philosophy of the way this must be done.
It's very simple. Make a new class, OliverView, which is a UIView (i.e., a subclass of UIView). In that view, make it draw stuff in a fancy way, inside drawRect.
Now make a UIViewController, called OliverVC. In the storyboard, put an OliverView inside OliverVC.
In the OliverView, have properties "hours", "minutes", "seconds".
Now, in OliverView - in the drawRect - have a fancy way to display those values. (Pie chart, glowing letters, animation - whatever you want.)
Now, up in OliverVC, do some calculations to determine the time in Zimbabwe, for example.
Once you want a time displayed, simply set those properties in OliverView - - and you are done.
Your colleague could be programming the OliverView. You need know nothing about how she is going to display the time. Conversely, your colleague need know nothing about your calculations in OliverVC.
So, it's simple: one part has the job of displaying the data, and one part has the job of coming up with the data (doing whatever sort of calculation is relevant in the app).
It's the only architecture possible in a "real time" screen device where the views can and do change at any time.
In answer to your question below: you've forgotten that, quite simply, if you have a button, that would be a whole separate element. (Perhaps sitting "on top of" the OliverView.) So, it's easy!
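A rough sketch of that split, reusing the hours/minutes/seconds properties from the description but with purely illustrative drawing code, might look like this:

```objc
// Sketch only: the drawing code is illustrative.
#import <UIKit/UIKit.h>

@interface OliverView : UIView
@property (nonatomic, assign) NSInteger hours;
@property (nonatomic, assign) NSInteger minutes;
@property (nonatomic, assign) NSInteger seconds;
@end

@implementation OliverView
@synthesize hours, minutes, seconds;

// Redraw whenever the controller pushes a new value in
// (do the same in the hours and minutes setters).
- (void)setSeconds:(NSInteger)newSeconds {
    seconds = newSeconds;
    [self setNeedsDisplay];
}

- (void)drawRect:(CGRect)rect {
    // Display the values in whatever fancy way you like; plain text here.
    NSString *time = [NSString stringWithFormat:@"%02d:%02d:%02d",
                      (int)hours, (int)minutes, (int)seconds];
    [[UIColor blackColor] set];
    [time drawAtPoint:CGPointMake(10.0f, 10.0f)
             withFont:[UIFont systemFontOfSize:24.0f]];
}
@end

// Meanwhile, OliverVC only computes values and sets properties, e.g.:
//   self.oliverView.hours   = 13;
//   self.oliverView.minutes = 45;
//   self.oliverView.seconds = 9;
```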
The -drawRect method in your UIView subclass defines the onscreen appearance of the view. All drawing is done in -drawRect. Your UIViewController calls methods on its UIView to tell it to draw something differently or to perform some other action.
The UIViewController manages everything to do with the view that is not inherently associated with the drawing of the content. Data associated with the view is often stored in the controller.

iPhone MVC game design question

I've got a question about the Model View Controller (MVC) design pattern for iphone games.
Let's say I have a simple game that uses a ViewController. So this view controller has an associated window/view and takes player input from buttons, sliders, etc., on this view.
Now I also have a subview of the ViewController's main window/view and I actually do some animation of various polygons in this subview. I also want to take touch events in this subview.
My question is: in the subview, I've got all of the user touch code and animation code, since the player's touch input affects the animation directly, changing rotation, etc. There are a lot of variables in my subview class.
Am I violating the MVC design? Should I delegate this stuff to another class or the view controller?
Many thanks
It depends on what you're trying to accomplish.
Let's assume you want your game to run on an ordinary PC, as well as the iPhone.
Obviously, you'd want to isolate all of the code specific to the iPhone, which includes the touches. I'm assuming you'd want the animation on both versions of your game, so that would be part of the controller, or perhaps the model. Rendering the animation would be part of the view.
The easiest way to determine which functions belong in the view, and which functions belong in the controller, is to imagine porting your application to two different viewers. It doesn't have to be a PC and an iPhone. It can be Android and an iPhone. :-)
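As a sketch of that isolation (the protocol, class, and drag-to-rotate mapping below are illustrative, not from the question): keep the game state behind a model interface that knows nothing about UIKit, and let the iPhone-specific view translate touches into model-level messages.

```objc
// Sketch only: the protocol, class, and drag-to-rotate mapping are illustrative.
#import <UIKit/UIKit.h>

// Portable game logic lives behind this interface and never sees a UITouch.
@protocol GameModel <NSObject>
- (void)rotateSelectedPolygonByAngle:(CGFloat)angle;
@end

@interface GameView : UIView
@property (nonatomic, assign) id<GameModel> model;
@end

@implementation GameView
@synthesize model;

// iPhone-specific: translate touches into model-level messages, then redraw.
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    CGPoint now  = [touch locationInView:self];
    CGPoint then = [touch previousLocationInView:self];
    [self.model rotateSelectedPolygonByAngle:(now.x - then.x) * 0.01f];  // crude mapping
    [self setNeedsDisplay];
}
@end
```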

Cocoa touch view with multiple identical subviews

I'm having my first foray into Cocoa Touch programming (and one of my first into Cocoa in general), writing a simple game for the iPhone, though this question is about Cocoa Touch in general.
The main UI consists of a strip of identically acting buttons (varying only in colour) arranged horizontally across the screen. Although they act like buttons, they need to have a custom-drawn appearance. Each responds to touch events in the same way, triggering other events in the application.
I want to define a custom view, partly to have more control over the behaviour than just having a bunch of standard buttons, and partly to learn more about Cocoa programming.
Should I define a main view with an array of subviews each of which draws itself and forwards touch events? Each button should do standard things like show a pressed state when touched and so on. Are there any pre-existing container views for this kind of scenario?
Or should I just define one main view which draws the whole strip and detects where a touch occurs? I feel this is a badly engineered approach - I shouldn't be programming hit test code.
Edited to clarify the question
The more lightweight approach is to add sublayers to your UIView's layer. Use hitTest: to dispatch touches you receive on your UIView to the CALayer instance that needs to receive them.
If you need more of the UIResponder behavior (touchesBegan etc.), you might want to go with subviews instead of sublayers as that would allow you to handle the events directly in the objects rather than having to dispatch them from a central responder (your main UIView).
Consequently, the essential bit may be just how much of the behavior associated with your individual buttons should be known (handled) by your main UIView. If it makes sense to have everything controlled from a central place, you can put all the logic in the UIView and just use sublayers for lightweight display purposes. If it makes more sense to put the behavior into the buttons themselves, they should be UIResponders and as such subclass UIView and be added as subviews of your main view.
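Here is a sketch of the sublayer route described above. The class name and the "buttonIndex" key are made up for the example; the key would be attached to each sublayer with -setValue:forKey: when it is created.

```objc
// Sketch only: ButtonStripView and the "buttonIndex" key are illustrative.
#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

@interface ButtonStripView : UIView
@end

@implementation ButtonStripView

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    // -hitTest: expects a point in the receiver's superlayer's coordinate space,
    // which for self.layer is the superview's coordinate space.
    UITouch *touch = [touches anyObject];
    CGPoint point = [touch locationInView:self.superview];
    CALayer *hit  = [self.layer hitTest:point];
    if (hit != nil && hit != self.layer) {
        NSNumber *index = [hit valueForKey:@"buttonIndex"];  // custom key stored on the sublayer
        NSLog(@"Button %@ touched", index);
        // ...swap that layer's contents for a pressed image, notify a delegate, etc.
    }
}
@end
```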
You should use an array of subviews - that way each "button" class knows how to draw itself and its superview (your stated "main view") places the buttons where they need to go.
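A sketch of what one of those self-drawing buttons might look like (the class, its delegate protocol, and the pressed-state drawing are all illustrative):

```objc
// Sketch only: the class, delegate protocol, and pressed-state drawing are illustrative.
#import <UIKit/UIKit.h>

@class StripButtonView;

@protocol StripButtonViewDelegate <NSObject>
- (void)stripButtonTapped:(StripButtonView *)button;
@end

@interface StripButtonView : UIView
@property (nonatomic, retain) UIColor *color;
@property (nonatomic, assign) id<StripButtonViewDelegate> delegate;
@property (nonatomic, assign, getter=isPressed) BOOL pressed;
@end

@implementation StripButtonView
@synthesize color, delegate, pressed;

- (void)drawRect:(CGRect)rect {
    // Each button draws itself, darkening while pressed for feedback.
    UIColor *fill = pressed ? [UIColor darkGrayColor] : color;
    [fill setFill];
    UIRectFill(self.bounds);
}

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    self.pressed = YES;
    [self setNeedsDisplay];
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    self.pressed = NO;
    [self setNeedsDisplay];
    [delegate stripButtonTapped:self];  // forward the high-level event
}

- (void)dealloc {
    [color release];
    [super dealloc];
}
@end
```

The main view would then create an array of these, lay their frames out side by side, and act as their delegate.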
If you have a lot of buttons and want to do fancy things with them, I recommend using layers. Your UIView will handle interpreting which layer received the touch (hit testing) and respond appropriately. If all you're doing is managing a whole bunch of buttons with various effects and animations, this might be an easier route.
As for bad engineering, not at all. If you take a look at the associated guides, you'll see Core Animation and layers do require hit testing (though that's relatively easy), but it's far cleaner than the UIView doing all the drawing and more efficient than many subviews. It slips right between those two approaches nicely.
Full disclosure: I'm just wrapping my head around how to best leverage this stuff myself, but for more complicated interactive controls.
You can lay out your view in Interface Builder. Simply drag a bunch of UIButtons into your view controller's view.
To respond to events, you define an IBAction in your view controller and connect the buttons to it.
This is all very basic. I really suggest that you at least walk through the iPhone programming introduction that Apple has online. It will teach you iPhone and Interface Builder basics.
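For example, the action might look like the following sketch; the class and method names are made up, and each button would be connected to the action and given a distinct tag in Interface Builder.

```objc
// Minimal sketch: the class and action names are made up.
#import <UIKit/UIKit.h>

@interface GameViewController : UIViewController
- (IBAction)colorButtonTapped:(UIButton *)sender;
@end

@implementation GameViewController

// Wire every UIButton in the strip to this action in Interface Builder and
// give each one a distinct tag so they can be told apart.
- (IBAction)colorButtonTapped:(UIButton *)sender {
    NSLog(@"Button with tag %d tapped", (int)sender.tag);
}
@end
```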

iPhone: detecting double taps on UIScrollView

Besides subclassing, is there a simple means to detect double taps on a UIImageView within a UIScrollView?
Thanks
I have created a ZoomScrollView class (a drop-in subclass of UIScrollView) that can help you intercept any touches from a scroll view; it also handles double-tap zooming out of the box, if that's what you want to do.
Grab it at github.com/andreyvit/ScrollingMadness/ (the README contains a long description of two UIScrollView tricks and the reasoning behind them).
Of course, if you did not want to zoom, and just wanted to intercept a double-tap on some inner image view, then subclassing is your friend. (Another way would be to attach a view controller to that image view or one of its parent views inside UIScrollView, then the controller will be part of the responder chain and will be able to handle the touches.)
Looking at UIImageView.h (within the UIKit framework) there are no public delegate methods or other methods that let you know if the image view has been double-tapped. You'll probably have to subclass.
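A sketch of such a subclass (the class and delegate names are made up; note that UIImageView has user interaction disabled by default, so it must be switched on):

```objc
// Sketch only: the class and delegate names are made up.
#import <UIKit/UIKit.h>

@class DoubleTapImageView;

@protocol DoubleTapImageViewDelegate <NSObject>
- (void)imageViewWasDoubleTapped:(DoubleTapImageView *)imageView;
@end

@interface DoubleTapImageView : UIImageView
@property (nonatomic, assign) id<DoubleTapImageViewDelegate> delegate;
@end

@implementation DoubleTapImageView
@synthesize delegate;

- (id)initWithImage:(UIImage *)image {
    if ((self = [super initWithImage:image])) {
        // UIImageView ignores touches by default, so turn interaction on.
        // (Do the same if you create the view with a different initializer.)
        self.userInteractionEnabled = YES;
    }
    return self;
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    if (touch.tapCount == 2) {
        [delegate imageViewWasDoubleTapped:self];
    }
    [super touchesEnded:touches withEvent:event];
}
@end
```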
The answer is NO.
http://developer.apple.com/library/ios/#samplecode/ScrollViewSuite/Introduction/Intro.html
Download the sample code (download link on the top).
See how Apple did it.
See you.