Best way to procedurally add Power-up items to an iPhone game

To better explain this I'll use Doodle Jump as an example. Assuming the platforms are recycled, when the character jumps up and new platforms appear (by scrolling down), there is occasionally a propeller hat on one of them. Is there a recommended way to manage this new object? Should I instantiate a single power-up in the game level's "init" method, and then set a boolean flag controlling whether it appears in my render and update methods? Or should I instantiate it at the moment I want it to appear (i.e. just before the new platform scrolls down from its position just above the screen) and release it when it's either (a) grabbed by the character sprite or (b) moves off the screen untouched?
Thanks!
Scott

I vote for the latter: instantiate it at the time you want it to appear. If you use a flag to decide whether to display it, you'll end up with a bunch of special-case code, which you don't want, especially for something this simple.
Considering that you're developing for a mobile device, if for some reason the construction of this object is affecting performance, then I would look into alternative methods (i.e., instantiate once, use a flag to render/update).
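To illustrate the spawn-on-demand pattern, here is a minimal sketch in Swift/SpriteKit (an anachronistic choice on my part, since the question predates both; the class names, the asset name and the 1-in-10 spawn chance are all hypothetical):

```swift
import SpriteKit

// Hypothetical power-up node; it exists only while it is on screen.
class PowerUp: SKSpriteNode {}

class GameScene: SKScene {

    // Call this whenever a recycled platform is repositioned just above the screen.
    func platformDidRecycle(_ platform: SKSpriteNode) {
        // Occasionally attach a power-up; the 1-in-10 chance is an arbitrary assumption.
        guard Int.random(in: 0..<10) == 0 else { return }
        let powerUp = PowerUp(imageNamed: "propeller_hat")  // hypothetical asset
        powerUp.name = "powerup"
        powerUp.position = CGPoint(x: 0, y: platform.size.height)
        platform.addChild(powerUp)
    }

    override func update(_ currentTime: TimeInterval) {
        // Release any power-up that scrolled off the bottom untouched; removing it
        // from the node tree lets ARC reclaim it. (Removal on pickup would happen
        // in your collision handling, which is omitted here.)
        enumerateChildNodes(withName: "//powerup") { node, _ in
            if self.convert(.zero, from: node).y < 0 {
                node.removeFromParent()
            }
        }
    }
}
```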

Related

How to avoid using instanceOf in this case? (allowing clicking on only some objects in a quadtree)

I have a bunch of Tank objects inserted into a quad tree. Some of these tank objects can be clicked on if they implement a Clickable interface. The problem is that in order to know what is being clicked, I need to query the same quadtree, but the quadtree has both clickable and non-clickable objects in it.
Potential solutions:
I could use instanceOf to see which objects in a specified region are clickable when the user clicks the screen, but I hear that using instanceOf is bad practice.
I could maintain TWO quadtrees: one for just tanks, and one for clickable objects. But then I would have to update tanks that implement the clickable interface TWICE, since they would live in two separate quadtrees. That would be slow considering I have to update every step, while people only click the screen every so often.
I could type the quadtree to hold only clickable objects, and simply give non-clickable objects dummy click methods. This would solve the problem, but it doesn't feel right: if something is non-clickable, it shouldn't be implementing a click method to begin with, even an empty one. Or is this OK?
I'm thinking #3 is probably the best way to do it, but I'm not sure. Any pointers?
EDIT: Now that I think about it, #3 would have problems with overlapping clicks. If a non-clickable tank overlaps a clickable one and I click in the overlap, then unless I call the click methods of everything at that point, I could end up calling the click method of the wrong tank. To avoid doing THAT, I'd have to use instanceOf to find the genuinely clickable object and call only ITS click method.
So maybe #2 is better, because I could easily cycle through all clickable objects at the clicked region, choose the one that is either on top or closest to the center of the region, and call only that tank's click method.
They also wouldn't necessarily have to overlap, just be close together. If I were to make this game touch based, a finger isn't a point but a region, so it might be better to cycle through objects in a region and find the closest clickable one, which would be hard to do with #3 without sifting through a bunch of non-clickable objects.
#3 sounds pretty good. You could enhance it by creating another interface that ALL your objects implement, for example "GameObject", with a method "isClickable". Your clickable objects then simply return true from isClickable.
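A minimal Swift sketch of that suggestion (the original discussion is Java-flavored, where GameObject would be an interface; the type names and the shape of the quadtree query are my assumptions):

```swift
import Foundation
import CoreGraphics

// Every object stored in the quadtree adopts this; isClickable replaces the
// instanceOf check at query time.
protocol GameObject {
    var bounds: CGRect { get }
    var isClickable: Bool { get }
    func onClick()
}

// Default implementation: most objects ignore clicks.
extension GameObject {
    var isClickable: Bool { false }
    func onClick() {}
}

struct ClickableTank: GameObject {
    var bounds: CGRect
    var isClickable: Bool { true }
    func onClick() { print("tank clicked") }
}

// `hits` stands in for whatever your quadtree region query returns. Filter down
// to clickables, then pick the one closest to the touch point, which also
// handles the overlap problem raised in the EDIT above.
func handleTap(at point: CGPoint, hits: [GameObject]) {
    let clickable = hits.filter { $0.isClickable && $0.bounds.contains(point) }
    let closest = clickable.min { a, b in
        hypot(a.bounds.midX - point.x, a.bounds.midY - point.y) <
        hypot(b.bounds.midX - point.x, b.bounds.midY - point.y)
    }
    closest?.onClick()
}
```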

Cocos2D iPhone - Handling Touches Correctly

I am not a newbie to Cocos2D but I am building quite an advanced HUD with several sliding and overlapping CCLayer and CCMenu/CCMenuItemImage objects.
They all respond to touches correctly in turn. However, when things overlap, the buttons underneath seem to take priority over the things on top, no matter what order I add them to the world.
Indeed, even implementing the registerWithTouchDispatcher method and returning YES/NO from ccTouchBegan:withEvent: doesn't have the desired effect. It also appears that ccTouchBegan:withEvent: is then called on all buttons/menus in the world, rather than just those underneath the touch.
I'd really like advice on a reliable way to detect and consume a touch on an object that is top most in the view without anything else hearing about the touch.
Thanks in advance!
How about this commit on the develop branch of cocos2d-iphone?
v1.0.0-rc3 and earlier don't have a mechanism for touch priority; this commit seems to implement it.
Why can't you use tags? I'm not sure off the top of my head how to check z-order, but personally I would probably just use tags.
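If you end up rolling your own dispatch, the core idea is to hit-test in reverse draw order and let the first taker swallow the touch so nothing underneath ever hears about it. A generic Swift sketch of that idea (not the cocos2d API; all names here are hypothetical):

```swift
import CoreGraphics

// Hypothetical HUD node: knows its bounds, its draw order, and whether it
// actually handled a given touch.
protocol TouchableNode {
    var frame: CGRect { get }
    var zOrder: Int { get }
    func handleTouch(at point: CGPoint) -> Bool
}

// Walk nodes from topmost to bottommost; the first one that claims the touch
// swallows it, so overlapped nodes underneath are never notified.
func dispatchTouch(at point: CGPoint, to nodes: [TouchableNode]) {
    for node in nodes.sorted(by: { $0.zOrder > $1.zOrder }) {
        if node.frame.contains(point), node.handleTouch(at: point) {
            return  // swallowed
        }
    }
}
```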

iPhone/iPad architecture suggestions for game look-and-feel app

All you iOS architects out there, please help me choose an architecture/technology for the following iPhone/iPad app.
The app itself is a financial app, but we want more of a game look-and-feel, so we probably don't want to use the built-in look of the Cocoa widgets. The elements on the screen will probably be blob-shaped images.
The app will essentially have five blob-shaped areas spread out evenly across the screen, with one blob centered and larger than the others. Within each blob there will be clickable areas that pop up "details" and menu-action blobs. These blobs are also graphics objects and must not take over the whole screen, and they should animate nicely when popping up. The graphics elements will carry a couple of lines of generated text, so the overlaid text itself cannot be part of a static background image.
The main user interaction will be swiping within the center blob, displaying summaries of the items conceptually contained in the blob's underlying data store. Now and then the user will drag and drop an item onto one of the other blobs. While dragging, the item should be traced by a line, and when it is dropped on the other blob it should be animated to look like it's being "sucked into" the blob.
Now, what kind of technique would you suggest for this app? Is Cocoa suitable in this scenario? Should I use a game framework like Cocos2D? All kinds of suggestions including example code snippets are most welcome.
I realize that this question might not be as straightforward and to the point as questions generally are on SO, but I hope your answers will come to use by more people than me. Thanks!
EDIT (MY SOLUTION):
I eventually ended up doing everything in UIKit, which was a lot easier than I expected.
Briefly described, I used UIButtons with the Custom style and an image background, which gave me full control over the visual appearance of the "items". I also found it very useful to manipulate the underlying CALayer of many of my other UIViews; it is often easier than drawing things from scratch with Core Graphics.
Another thing that was useful was UIGestureRecognizer. I found gesture recognizers useful both for handling "real" gestures like swipes and long presses, and for handling plain taps on UIView classes that aren't subclasses of UIControl, such as UIImageView, UILabel and UIView itself. That way I could handle taps for these simple classes: I could, for example, take a normal UIView, modify its CALayer to change its look completely, and still handle taps. Using this technique, I didn't have to subclass any views at all in my app.
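As a minimal Swift sketch of that pattern (Swift rather than the Objective-C of the era; the class name and styling values are placeholders):

```swift
import UIKit

class BlobViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()

        // A plain UIView styled entirely through its CALayer, no subclassing needed.
        let blob = UIView(frame: CGRect(x: 40, y: 80, width: 160, height: 160))
        blob.backgroundColor = .systemTeal
        blob.layer.cornerRadius = 80
        blob.layer.shadowOpacity = 0.4
        blob.layer.shadowOffset = CGSize(width: 0, height: 3)
        view.addSubview(blob)

        // Tap handling on a non-UIControl view via a gesture recognizer.
        let tap = UITapGestureRecognizer(target: self, action: #selector(blobTapped(_:)))
        blob.addGestureRecognizer(tap)
    }

    @objc private func blobTapped(_ sender: UITapGestureRecognizer) {
        print("blob tapped at \(sender.location(in: view))")
    }
}
```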
The animations were pretty easy too, even though I had to use a non-public method for the "suck" animation, so my app would never pass App Store review. It was just a prototype anyway, so I don't care.
When this app is built for real, I will probably implement it in HTML5/JavaScript wrapped in PhoneGap, mainly to reuse existing mobile web services and to share code across platforms. It will probably also be easier to hook into the existing security solution from a web app.
Cocos2d is great if you need to move elements around really fast, as it is a layer on top of OpenGL ES. From what you have said, though, I think UIKit will be fine: you get nice animation support, and you can do some nice things with UIScrollViews to handle moving elements around.
If you need more detailed graphics support and lots of moving elements, particle effects and so on, then by all means go for Cocos2D. Be aware, though, that a Cocos2d application works on a scheduled update model, i.e. you get notified every 1/60th of a second to move and draw things, whereas the normal UIKit approach is event driven, i.e. you tap a button and show a view.
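To make that contrast concrete, here is a minimal sketch of the scheduled-update style in plain UIKit using CADisplayLink (my choice of illustration, not something from the answer; the class and method names are hypothetical):

```swift
import UIKit

class GameLoopView: UIView {
    private var displayLink: CADisplayLink?
    private var lastTimestamp: CFTimeInterval = 0

    func startLoop() {
        // Fires once per screen refresh (roughly 1/60 s), like cocos2d's update:.
        let link = CADisplayLink(target: self, selector: #selector(step(_:)))
        link.add(to: .main, forMode: .common)
        displayLink = link
    }

    @objc private func step(_ link: CADisplayLink) {
        let dt = lastTimestamp == 0 ? 0 : link.timestamp - lastTimestamp
        lastTimestamp = link.timestamp
        // Move sprites, advance animations, etc., scaled by dt.
        _ = dt
    }

    func stopLoop() {
        displayLink?.invalidate()
        displayLink = nil
    }
}
```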

iPhone - At user event create objects in the view

I am new to iPhone programming, so part of the problem is that I don't know what to google to find my answer. I am looking for a method that lets a user draw a line on the screen. There is no guarantee it will be straight; it can be curved or whatever. I was thinking I could create a small square image and then, as they draw, place copies into an NSSet. But I am not really sure how to communicate each new object up to the view. Up to this point I've just been messing around with objects I put on the view and then assign movement to, so this is my first jump into on-the-fly object creation.
It might be that I just need to jump into a class/object type or even a tutorial, any guidance would be great.
Thanks!
Are you asking how to create a 'paint' type application? There's an Apple example for that:
http://developer.apple.com/iphone/library/samplecode/GLPaint/Introduction/Intro.html
This question is relevant, but might be too complex when you're just starting out:
Improving Finger Painting Performance
If you're a bit more specific about what problem your app is to solve you might get some more specific answers.
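If GLPaint is more than you need, a simpler route than placing lots of small images is to collect the touch points yourself and stroke them as a path. A minimal UIKit sketch of that idea in Swift (my suggestion, not from the answers above; the class name is hypothetical, and it keeps only the current stroke):

```swift
import UIKit

class ScribbleView: UIView {
    private var points: [CGPoint] = []

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        points.removeAll()  // start a fresh stroke
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let touch = touches.first else { return }
        points.append(touch.location(in: self))
        setNeedsDisplay()  // redraw with the new point appended
    }

    override func draw(_ rect: CGRect) {
        guard points.count > 1 else { return }
        let path = UIBezierPath()
        path.lineWidth = 4
        path.lineJoinStyle = .round
        path.move(to: points[0])
        for p in points.dropFirst() { path.addLine(to: p) }
        UIColor.black.setStroke()
        path.stroke()
    }
}
```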

Things to consider when writing for touch screen?

I'm starting a new project which involves developing an interface for a machine that measures wedge and roundness of lenses and stores the information in a database and reports on it. There's a decent chance we're going to be putting a touch screen on this machine so that it doesn't need to have a mouse or keyboard...
I don't have any experience developing for full size touch screens, so I'm looking for advice/tips/info from you guys...
I imagine you'd want to make the elements a little larger than normal, space buttons out a bit more, things like that... does anyone have anything else to add?
A few things to consider:
You need to account for parallax error when touching controls. Basically, the user may touch the screen above or below your actual control and therefore miss it. This is a combination of the size of the control (e.g. you can make the active area larger than the visible control so the user can miss slightly and still activate it; see the sketch after this list), the viewing angle of the user (which you may or may not be able to predict/control) and the type of touch screen you're using. If you know where the user will be placed relative to the screen when using it, you can usually accommodate this with appropriate calibration.
Depending on the type of touch screen, you may need to ensure that your users aren't wearing gloves or using an implement other than their fingers (eg the end of a pen) to touch the screen. Some screens (eg those depending on conductance) don't respond well to anything other than flesh and blood.
Avoid using double clicks because it can be very hard for users to reliably double click a control. This can be partly mitigated if you've got experienced/trained users working in a fairly controlled environment where they're used to the screens.
Linked to the above, if you are using double clicks, you may find the double click activated when the user only wants to single click. This is because it's very easy for the user's finger to bounce slightly on touching the screen and, depending on how sensitive the double click settings are, trigger a double rather than a single click. For this and the previous reason, we always disable double clicks and only use single clicks (or similar single activation controls).
However big you think you need to make the controls to allow for touch activation, they almost certainly need to be bigger still. Make sure you test the interface with real users in the real deployment environment (or as close to it as you can get). For example, we deployed some screens with nice big buttons you couldn't miss only to find that the control room was unheated and that the users were wearing thick gloves in the middle of winter, making their fingers way bigger than we had allowed for.
Don't put any controls near the edges of the screen - it's very hard to get your finger into the edges (particularly if the screen has a deep bezel) and a slight calibration problem can easily shift the control too close to the edge to use. Standard menus and scroll bars are a good example of controls that can be very tricky to use on a touch screen and you should either avoid them (which is preferable - they're not good for touch screens) or replicate them with jumbo equivalents.
Remember that the user's hand will be over the screen, obscuring some of the screen and controls (typically those below where the user is touching, but it depends on the position of the user relative to the screen). Don't put instructions or indicators where the user's hand or arm will obscure them when trying to use the control they relate to (eg typically put them above rather than below the control).
Depending on the environment, make sure your touch screen is suitably proofed against dust, damp, grease etc and make sure it's easy to clean without damaging it. You wouldn't believe the slime that can quickly accumulate on a touch screen in an industrial or public setting.
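Picking up the first point above, here is a minimal Swift/UIKit sketch of giving a control an active area larger than its visible frame (the platform choice and the 20-point slop value are my assumptions; the same idea applies in any toolkit):

```swift
import UIKit

class FatFingerButton: UIButton {
    // Extra touchable margin around the visible control, in points.
    var touchSlop: CGFloat = 20

    // Expand the hit-test rectangle beyond the button's drawn bounds so a
    // near miss (parallax, gloves, stubby fingers) still activates the control.
    override func point(inside point: CGPoint, with event: UIEvent?) -> Bool {
        bounds.insetBy(dx: -touchSlop, dy: -touchSlop).contains(point)
    }
}
```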
The other obvious one is that there's no equivalent of pointer 'hover'. Not that that affects many apps though.
If you decide to put in analog controls (scrollbars, rotation widgets, etc) be sure to put in a digital control also. Some companies think that a touch screen means perfect control over something with your fingers. In real life, this translates to minutes of frustration trying to fix a number that's just a little off.
The most obvious thing is that everything on the GUI needs to be big enough for a fingertip to hit, which is sometimes bigger than you think.
As has been mentioned, there's really no way for a right-click action to happen. Also, double-clicking can be tricky with a fingertip on a touch screen.
The other major thing is that you'll want to create an on-screen keyboard that pops up for text entry, and an on-screen numpad for number-only fields.
I wrote my own set of controls for a POS application designed specifically to be touchscreen friendly.
Remember to allow enough real estate for stubby fingers and talons. In our application the users can have these manicures that necessitate them to use the pad of their finger instead of the tip. This means that you need to allow more space for activation areas than you would normally consider in any other type of application.
I would also recommend that you accommodate yourself as the programmer, both for testing and because things change: there may need to be a keyboard and mouse attached, or the app may run on a non-touch workstation. I cannot tell you how many times I went to touch my flat-panel LCD expecting something to happen, before remembering that I had to use the mouse.
Make sure to read up on basic UI principles like Fitts's law (the time to acquire a target is a function of the distance to and size of the target).
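For reference, the commonly used Shannon formulation of Fitts's law is:

```latex
T = a + b \log_2\!\left(\frac{D}{W} + 1\right)
```

where T is the time to acquire the target, D the distance to it, W its width along the axis of motion, and a and b empirically fitted constants for the device. The practical upshot for touch screens is simply that bigger and closer targets are faster to hit.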
Also consider whether the device is stationary or handheld when in use (e.g. a Palm Pilot or iPhone); research shows that you must accommodate that in your design.
Larger GUI elements are the major thing, but it applies to all elements: scroll bars, tabs and even text fields.
The other major thing I can think of is that it's hard for the user to right-click, so anything that requires a right click should be avoided; context menus are the main example that comes to mind.
The other responses are pretty good, but are you totally sure that a touch screen would actually be easier to use? There are a lot of devices where a touch screen actually makes them much harder to use, not easier. The main problem is that you can't use the device when you're not looking at it. If users are going to be doing a lot of repetitive actions, a keyboard could be a lot more efficient.
Also, a touch screen might be a lot harder to use by someone with a disability, if you think there's even a small chance that could happen.
Even though this is quite old now, I found it to still be useful, as a starting point for design considerations.
http://www.sapdesignguild.org/resources/tsdesigngl/index.htm
If you've not already done so, have a look at some of the documentation available for developers on mobile platforms, eg Windows Mobile, iPhone.