SpriteKit/SceneKit: touchesBegan or GestureRecognizer?

Is there a best practice in games for overriding touchesBegan etc. in your scene and node subclasses, as opposed to using gesture recognizers?
I know Apple's templates include the override func touchesXXXX methods, and this allows for a bit more control (unless you use custom recognizers). But a lot of tutorials seem to use the gesture recognizer approach.
Is this primarily tutorials making things easier, or is it more common to go the gesture recognizer route to remove some of the complexity? I know this could come down to developer preference, but I'm looking for a 'best practice' - does Apple actually suggest one way over the other when making games?

I've wondered the same thing too - it gets even more interesting when you mix SceneKit and SpriteKit together. I haven't seen any official documentation suggesting one or the other, as they are really for slightly different things. You'll find the Xcode game templates use one or the other (or both, in the case of a cross-platform SpriteKit game).
I think the reason most tutorials and blogs use the gesture recognizer approach is that it does a lot of the work for you, but when you get into more complicated use cases you may find you need to handle the touches manually and detect the gestures yourself (there are a few examples floating around the internet), as I had to for a particular project.
I have also read on a few blogs that sometimes using the touches approach AND gesture recognizers together can give incorrect results (specifically missing touches), but that could be stale information - it's worth checking if you do decide to use both.
So to answer the question, I don't believe there is an official best practice for this, as both are valid and current methods. I'd say use whichever you think fits better and keeps the code as simple and clean as possible.

If all you want to do is respond to touches on the screen, use the 3 standard 'touches' methods.
If you need to respond to distinct swipes, pans, pinches etc then you'll need gesture recognisers.
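If it helps to see the two routes side by side, here is a minimal sketch of both in a single SKScene, assuming an iOS Swift SpriteKit project; the node name "player" is just a placeholder, not something from the question.

```swift
import SpriteKit
import UIKit

class GameScene: SKScene {

    // Gesture-recognizer route: handy for distinct swipes, pans, pinches.
    override func didMove(to view: SKView) {
        let swipe = UISwipeGestureRecognizer(target: self,
                                             action: #selector(handleSwipe(_:)))
        swipe.direction = .right
        view.addGestureRecognizer(swipe)
    }

    @objc private func handleSwipe(_ gesture: UISwipeGestureRecognizer) {
        // Respond to a completed right swipe (e.g. dash the player forward).
    }

    // Touches route: full control over each individual touch.
    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let touch = touches.first else { return }
        let location = touch.location(in: self)
        // React only if the tap landed on the "player" sprite.
        if let node = atPoint(location) as? SKSpriteNode, node.name == "player" {
            // Handle a direct tap on the player here.
        }
    }
}
```

Both can coexist in one scene, as shown, which is usually where the "missing touches" caveat mentioned above becomes worth testing.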

Related

Clean ways of managing device rotation with iOS 4.x

I always get stuck managing rotation in iOS applications; there must be some efficient way to do it, but apparently I haven't heard of it yet. My interface is too complex to be parametrized in Interface Builder, so I tried all these different things:
1) Build two interfaces, one for portrait and one for landscape - but I found it awfully tiresome to devise methods that let one view controller keep up with the other, so that when the device is rotated the second view controller knows where to pick up the story.
2) Change my views' frames manually inside willRotateToInterfaceOrientation: - but in this scenario my whole interface turns into a bloody mess quite randomly (while sometimes it does the job alright...).
What do you think best practices are? Where might I have gone wrong? What did I miss? Thanks!
I always go for the second option and it has never let me down. If you do it right, you will always get the expected result. What I think is happening to your application (the "bloody mess quite randomly") is that your UIViews probably still have some autoresizing definitions set in Interface Builder. Besides removing all the autoresizing masks, I also uncheck the "Autoresize subviews" checkmark on the parent UIViews.
Just playing devil's advocate to JackyBoy's comment, I think it depends on the complexity of your view. In lots of cases I have found it simpler to just use a separate landscape UI. The benefit for me is the ease of visualization: you know (more or less) what you are going to get without as much trial and error, and it's easy enough to pass whatever data is needed along (I find that easier than moving UI components around programmatically). That said, I don't know if there is really a best practice; it's a do-what-feels-best-to-you kind of thing, I think. Though if I had to guess Apple's definition of the best practice, it might be to use the two views.
Oh, I should also add that you can leverage the 'springs' and struts for components in a nib, which can sometimes be enough to handle the rotation as well.
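To make the manual-frames option (2) concrete, here is a minimal sketch assuming a view controller with two placeholder subviews; willRotate(to:duration:) is the Swift spelling of willRotateToInterfaceOrientation:duration: and has since been deprecated in favour of viewWillTransition(to:with:), so treat this as illustrating the old approach, not current best practice.

```swift
import UIKit

class RotatingViewController: UIViewController {
    // Placeholder subviews standing in for a more complex interface.
    let headerView = UIView()
    let boardView = UIView()

    override func viewDidLoad() {
        super.viewDidLoad()
        // As the answer suggests: switch off autoresizing so the manual
        // frames are the only thing positioning the views.
        view.autoresizesSubviews = false
        [headerView, boardView].forEach {
            $0.autoresizingMask = []
            view.addSubview($0)
        }
    }

    override func willRotate(to orientation: UIInterfaceOrientation,
                             duration: TimeInterval) {
        // Lay the same views out differently per orientation.
        if orientation.isLandscape {
            headerView.frame = CGRect(x: 0, y: 0, width: 480, height: 60)
            boardView.frame  = CGRect(x: 0, y: 60, width: 480, height: 240)
        } else {
            headerView.frame = CGRect(x: 0, y: 0, width: 320, height: 80)
            boardView.frame  = CGRect(x: 0, y: 80, width: 320, height: 400)
        }
    }
}
```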

Is UITextInput missing selection handling mechanics?

If you implement UITextInput on your custom view and - say - use CoreText to render the text, you get to a point where you can draw your own cursor and selection/marking and have that fully working with the hardware keyboard. If you switch to Japanese input you see the marking, but there's something curious: if you long-press into the marking, you get the rectangular system loupe and selection handling without having to deal with the touches yourself.
What I don't get is why we would have to implement our own touch handling for the selection, draw our own loupes, etc. It's working for marking! So what do I have to do to get the standard gesture recognizers added to my custom view as well?
The one sample on the dev site only has a comment saying that user selection would be outside the scope of the sample, which would indicate that you do indeed have to do it yourself.
I don't think it is in Apple's interest that every developer writing their own rich text editor class keeps writing their own selection handling code, let alone custom drawing of the round and rectangular loupes. Granted, you can try to reverse engineer it so that it comes really close, but it might give users a strange feeling if the selection mechanics differ ever so slightly.
I found that developers are split into two groups:
1) those who wrestle UIWebView into an editor with extensive JavaScript code
2) those who painstakingly implement the selection mechanics and loupe drawing themselves
So what is the solution here? Keep submitting Radars until Apple adds this missing piece? Or does this already exist (as claimed by the aforementioned engineer I met) and we are simply unable to find out how to make use of it, instead resorting to doing everything (but marked text) manually?
Even the smart guys at OmniFocus seem to think that the manual approach is the only one that works. This makes me sad: you get such a great protocol, but if you implement it you find it severely crippled. Maybe even intentionally?
Unfortunately the answer to my question is: YES. If you want selection mechanics on a custom view, you have to program it yourself.
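For what the "do it yourself" route looks like at its most basic, here is a minimal sketch of attaching a long-press recognizer to a custom UITextInput view and turning the touch location into a caret position; the SelectionController class is my own illustration, and loupe drawing and selection handles would still have to be built on top of this.

```swift
import UIKit

// Keep a strong reference to this controller somewhere (e.g. in your
// view controller); the gesture recognizer does not retain its target.
final class SelectionController: NSObject {
    private let textView: UIView & UITextInput

    init(textView: UIView & UITextInput) {
        self.textView = textView
        super.init()
        let press = UILongPressGestureRecognizer(target: self,
                                                 action: #selector(handleLongPress(_:)))
        textView.addGestureRecognizer(press)
    }

    @objc private func handleLongPress(_ gesture: UILongPressGestureRecognizer) {
        let point = gesture.location(in: textView)
        // Map the touch to a text position and place the caret there.
        if let position = textView.closestPosition(to: point) {
            textView.selectedTextRange = textView.textRange(from: position, to: position)
        }
        // Drawing a loupe and selection handles is left to the custom view.
    }
}
```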
As of iOS 6 you can subclass UITextView and draw the text yourself. According to an Apple engineer this should provide the system selection for you.

iOS chart framework: TapkuLibrary, Core-Plot or any other. Can a tap on a point call a method?

I'm working on an iOS project where tapping on a particular point in a graph should take the user to another scene. Basically, I want to be able to trigger a method when the user taps on a point, if that makes more sense. Do any of these frameworks make this easy?
Thanks in advance.
I would recommend giving ShinobiControls a try; they have many built-in interactive features.
As full disclosure, I work for the parent company that owns ShinobiControls.
Core Plot can certainly do this. It includes several example programs that demonstrate how to set up a delegate to be notified when a point is touched on the plot. How you respond to that notification is up to you.
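As a rough idea of the delegate route, here is a minimal sketch. The exact Swift signature depends on the Core Plot version you use (the underlying Objective-C delegate method is scatterPlot:plotSymbolWasSelectedAtRecordIndex: from CPTScatterPlotDelegate), so check it against the headers; PointDetailViewController is a hypothetical destination scene.

```swift
import UIKit
import CorePlot

class ChartViewController: UIViewController, CPTScatterPlotDelegate {

    // Core Plot calls this when the user taps a plot symbol.
    // (Objective-C selector: scatterPlot:plotSymbolWasSelectedAtRecordIndex:)
    func scatterPlot(_ plot: CPTScatterPlot, plotSymbolWasSelectedAtRecord index: UInt) {
        // How you respond is up to you; here the tap pushes another scene
        // for the selected data point.
        let detail = PointDetailViewController(pointIndex: Int(index))
        navigationController?.pushViewController(detail, animated: true)
    }
}

// Hypothetical destination scene for the tapped point.
class PointDetailViewController: UIViewController {
    let pointIndex: Int

    init(pointIndex: Int) {
        self.pointIndex = pointIndex
        super.init(nibName: nil, bundle: nil)
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }
}
```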
Yep. TapKu and Core-Plot both have user interaction, if I recall. Core-Plot is definitely powerful, but I'll be damned if it's lightweight or easy to use. TapKu is definitely lightweight and easy to use, but I needed a little more juice for my charts ... like multiple lines, negative numbers, missing data points, not just the single line (with a "goal").
Right now I've got a bit of a hybrid between Josh Buhler's GRChart, Kryali's MultiTouchS7GraphView, BugCloud's Customed-s7graphview and some Frankensteinian menagerie of my own junk to power my chart.
I'm personally pretty fond of Josh's GRChart and recently Honcheng's iOSPlot for their sheer simplicity (lending well to customization) and frankly, their underdogishness. Neither of those has touch gestures, but it wouldn't take much work at all to reuse the code from BugCloud's xAxisWasTapped: method, or other methods from any of the other touch-enabled charts.

Port an iOS (iPhone) app to Mac?

Is there a preferred way to go about this?
The app in question is not too large... a single-player game that I wrote over the course of a couple of months.
EDIT: I should add that I have no experience with Mac development... outside of what comes naturally with being an iOS developer.
EDIT: Classes heavily used in the game: subclasses of NSObject, UIView, and UIViewController. I don't know much about NSView, but I'm pretty sure all the UIView stuff will work in that class. Also some use of UITableViewController. I do also have Game Center, but I can leave that part out for now. There is no multi-touch.
EDIT: My graphics are all done with the QuartzCore and CoreGraphics frameworks. I do have a moderate view hierarchy.
EDIT: If you are doing such a port, you may also be interested in the issue of memory management
There's no easy way. It's that simple. Depressingly, you simply have to become good at programming the Mac.
"I'm pretty sure all the UIView stuff will work in that class" -- unfortunately, no. Everything is different that enough you have to work hard.
It's not a fun gig. Make sure you really, really think it's worth it financially.
Apart from anything else, be aware of the "sibling views don't work on OS X" problem if you stack up a lot of views in your iOS app. Essentially, you will have to change to using layers (instead of plain views) on the Mac if you rely on nested hierarchies of overlapping views here and there on the phone!
See "Is there a proper way to handle overlapping NSView siblings?" for the gory details on that particular problem.
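The usual workaround is to opt the affected views into Core Animation layer backing; here is a minimal sketch, assuming an AppKit target (the GameContainerView name is just a placeholder).

```swift
import Cocoa

final class GameContainerView: NSView {
    override init(frame frameRect: NSRect) {
        super.init(frame: frameRect)
        // Layer-backing this view lets overlapping siblings composite
        // correctly, much like UIViews do on iOS.
        wantsLayer = true
    }

    required init?(coder: NSCoder) {
        super.init(coder: coder)
        wantsLayer = true
    }
}
```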
Chameleon (http://chameleonproject.org/), UIKit for Mac from the Iconfactory, is worth checking out:
"Chameleon is a work in progress. The framework currently implements about 60% of UIKit after nine months of work."
https://github.com/BigZaphod/Chameleon
You may have a lot of work ahead of you. While purely algorithmic classes will port without any change, anything that touches UIKit will likely need to be rewritten or heavily adapted. The UI class design pattern on OS X is one of relationships between views, where your code is responsible for managing controllers, while on iOS it is one of relationships between view controllers, where view management is implied.
Of course, as BoltClock mentioned, you have the issue of interaction. Since touch no longer works, you will probably need to work on your interaction model first, even before you start porting.
There is an open source (BSD) UMEKit library that may help with porting a few UI classes, but you may have to rewrite a fair amount of the UI to better handle the mouse/keyboard/multi-window/menu GUI environment. Basic NSObjects, and some OpenGL and Quartz graphics rendering, may port with only minor touch-ups.
As others say, porting can be a chore, but the general techniques work. You redesign the interface in Interface Builder (where applicable) and check what the different controls are called (Cocoa Touch only has a small subset of typical desktop controls). UI* typically becomes NS*. Table view delegation is similar, so it will probably be easy.
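As a rough illustration of how the table view code maps across, here is a minimal AppKit data source sketch; the scores array and "ScoreCell" identifier are placeholders, not from the original game.

```swift
import Cocoa

final class ScoreListController: NSObject, NSTableViewDataSource, NSTableViewDelegate {
    var scores: [String] = []

    // Roughly equivalent to tableView(_:numberOfRowsInSection:) on iOS.
    func numberOfRows(in tableView: NSTableView) -> Int {
        return scores.count
    }

    // Roughly equivalent to tableView(_:cellForRowAt:) on iOS.
    func tableView(_ tableView: NSTableView, viewFor tableColumn: NSTableColumn?, row: Int) -> NSView? {
        let identifier = NSUserInterfaceItemIdentifier("ScoreCell")
        let cell = tableView.makeView(withIdentifier: identifier, owner: self) as? NSTableCellView
        cell?.textField?.stringValue = scores[row]
        return cell
    }
}
```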
I'll have to recommend the Aaron Hillegass book as usual. It's a great introduction to Mac development, and knowing iOS development gives you an edge.
Since it's a game, you probably need to consider how to do fullscreen mode. The game doesn't necessarily take up the entire screen anymore, and you shouldn't force it. A whole new set of preferences will now be necessary. There is of course some "fun" involved now that there are new ways to handle resolution listing/changing with Snow Leopard (with the previous ways giving you deprecation warnings).
Just accept that there will be a possibly lengthy transition period until everything "clicks" :)

Implementing tracing gestures on iPhone

I'd like to create an iPhone app that supports tracing of arbitrary shapes with your finger (with accuracy detection). I have seen references to an Apple sample app called "GestureMatch" that supposedly implemented exactly that, but it was removed from the SDK at some point and I cannot find the source anywhere via Google. Does anyone know of a current official sample that demonstrates tracing like this? Or any solid suggestions on other resources to look at? I've done some iPhone programming, but not really anything with the graphics APIs or custom handling of touch gestures, so I'm not sure where to start.
If you're on 3.1.3 firmware you can use the touchesBegan, touchesMoved, and touchesEnded methods. If you were doing an iPad app on 3.2, you'd have access to gesture recognizers such as UIPanGestureRecognizer, which provides the same basic functionality but also gives you some extra information.
The problem here is that they will not give you a smooth line without some extra work on your part, but these are the basic ways to handle finger tracking.
Unfortunately I don't have any examples to give you, but check out the stuff I mentioned in the developer documentation. You should be able to at least get started from that.
I'm uncertain if gesture recognizers are available in 4.0. Might be worth checking out.
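To show the basic finger-tracking idea with the standard touches methods, here is a minimal sketch that collects the traced points into a UIBezierPath; smoothing and accuracy detection (comparing the traced path against a target shape) are left out, and the TracingView name is just a placeholder.

```swift
import UIKit

class TracingView: UIView {
    private let path = UIBezierPath()

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let touch = touches.first else { return }
        // Start a new segment where the finger lands.
        path.move(to: touch.location(in: self))
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let touch = touches.first else { return }
        // Extend the traced path as the finger moves and redraw.
        path.addLine(to: touch.location(in: self))
        setNeedsDisplay()
    }

    override func draw(_ rect: CGRect) {
        UIColor.black.setStroke()
        path.lineWidth = 3
        path.stroke()
    }
}
```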