I'm using CCAnimate to animate my CCAnimations. What I want is to reuse the CCAnimate action, so when I want to play another animation I do something like:
[_animateAction setAnimation:animation];
This works more or less; the problem is that the internal elapsed time for the animation is not reset, since setAnimation just sets the new animation. Is there any way to reset an action in Cocos2D? I have been looking through the code and documentation, and there doesn't seem to be any method to accomplish this.
Does anyone know what the "best practice" is in this situation?
There's been a lot of confusion about reusing actions in Cocos2D. Apparently the docs say you should "initialize" the action again, but this may not work for all actions, and it's generally considered bad practice to send an "init…" message to an already existing object. It's similar to how you're not supposed to send dealloc to an object manually, yet you can do it.
Nevertheless, that's the way it is supposed to be for Cocos2D actions, so in your case to re-use the same animation action you would have to send it the appropriate init… message again:
[_animateAction initWithDuration:5 animation:animation restoreOriginalFrame:YES];
Note that the init… message goes to the CCAnimate action (not the CCAnimation), and it already sets the new animation, so a separate setAnimation: call is not needed.
As a side note, there has been a suggestion for mutable actions in the Cocos2D issue tracker for two years now. The submitted code patch won't work with the current Cocos2D version without modifications, but it could be used to create your own mutable actions should the need arise.
Related
I'm just starting with Unity and got pretty excited when I saw that the Event System existed, and I could create custom events. The event I need is 'IInventoryMessage::NewItemInInventory', so I went ahead and created the interface for that, set it up on my Inventory.
Then it came time to trigger the event, and the documentation threw me a little.
ExecuteEvents.Execute<IInventoryMessage>(target, null, (x,y)=>x.NewItemInInventory());
My confusion is that it seems to be passing in the target.
My hope was that Unity would keep track of all the components with the message's interface and call them when the event was executed. But it seems I have to pass in the GameObject myself.
Is it the case that I'm supposed to keep a list of all the GameObjects I want to receive the message, and then loop over them, passing each into Execute? Why do I need the EventSystem at that point, if I'm already looping over the objects I know need to be called?
I use ExecuteEvents only inside my custom input system, where the target is always known and up to date (according to the pointer raycast). Whenever I want to send a message or trigger an action when something happens, I use standard C# events, as BugFinder said.
I'm building a game in SpriteKit. I've seen various posts about making timers for executing run actions like MoveTo, but I wanted to know the best way to build a general-purpose timer for the whole GameScene, something that could be referenced almost globally, or passed by value to functions to tell them when to execute. For example, as of now I spawn a boss every 50 killed enemies, but I want to be able to reference this timer to spawn one, say, every minute. I could have a variable at the top of GameScene that is updated in Update, but I'm not sure this makes sense long term (especially with pausing or general referencing). Any advice would be great.
A timer implemented using SKAction can be useful for almost every aspect of a game:
spawning enemies, updating time-left labels, regenerating health, shields or whatever else can be regenerated, and so on...
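A minimal sketch of the SKAction approach in Objective-C, assuming the scene has a spawnBoss method (the method name, action key and 60-second interval are illustrative, not from the original post):
// Repeat "wait, then spawn" forever; the action pauses together with the scene.
SKAction *wait = [SKAction waitForDuration:60.0];
SKAction *spawn = [SKAction performSelector:@selector(spawnBoss) onTarget:self];
SKAction *cycle = [SKAction sequence:@[wait, spawn]];
[self runAction:[SKAction repeatActionForever:cycle] withKey:@"bossTimer"];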
Alternatively, there is the update: method with its currentTime parameter, where you can do everything you can achieve using SKActions. Which one you use is up to you.
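The update:-based alternative would look roughly like this sketch; _lastSpawnTime (an NSTimeInterval ivar) and spawnBoss are illustrative names:
- (void)update:(NSTimeInterval)currentTime {
    // Track the elapsed time ourselves instead of relying on a timer object.
    if (_lastSpawnTime == 0) {
        _lastSpawnTime = currentTime;
    }
    if (currentTime - _lastSpawnTime >= 60.0) {
        [self spawnBoss];
        _lastSpawnTime = currentTime;
    }
}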
About global timers: I'm not really sure what you are referring to or where you would need them... Timers like those described above are usually defined per scene, because you normally don't need to count how much time has passed while transitioning between scenes. If you do, then we will think of something :) But that is rare, so I will skip that situation.
All of these timers should pause when the game is paused. You don't want to find that you have run out of time after returning from a phone call, right? Nor do you want to come back to a screen full of enemies (both of which would likely happen if you used NSTimer for time-related actions). So yes, NSTimer exists, but I would skip the story about it; there are lots of posts here on SO where people debate whether it should be used in SpriteKit. I would say just skip it, because you don't need it at all. Using SKAction or the update: method lets everything pause when the game is interrupted (actions are paused automatically).
On the other hand, implementing a timer for a specific purpose, say a life-refill feature, is not a task that SKActions (nor an update: implementation) can solve. That is because you have to calculate how much time has passed since a certain moment, but the user can terminate the app in the meantime. In this case, something like NSUserDefaults gives you a way to survive app termination, so you can continue counting from where you left off the next time the user starts the app. But there is a catch... A user can mess with the device clock, so you don't want to rely on client time; this is usually solved using a server and its time. I guess this may be what you called "a global timer".
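A rough sketch of the NSUserDefaults part, with an illustrative key name (if you go the server route, the server's time would replace [NSDate date]):
// When the refill countdown starts, remember the wall-clock time.
[[NSUserDefaults standardUserDefaults] setObject:[NSDate date] forKey:@"lifeRefillStartDate"];
[[NSUserDefaults standardUserDefaults] synchronize];

// On the next launch, work out how much time has passed.
NSDate *start = [[NSUserDefaults standardUserDefaults] objectForKey:@"lifeRefillStartDate"];
NSTimeInterval elapsed = start ? [[NSDate date] timeIntervalSinceDate:start] : 0;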
I've looked at the methods for block-based animation and noticed there is no equivalent parameter or option for [UIView setAnimationRepeatCount:].
What's the simplest way to repeat an animation a fixed number of times? Do you, for instance, chain them using the completion block?
I just asked a similar question and then I read the 2010-11-15 release of the View Programming Guide for iOS. Page 64 caught my attention.
In the animation block, one can still use [UIView setAnimationRepeatCount:]. I thought that I could/should not, so my ability to read Apple's docs needs to improve.
So perhaps this will solve your (and my) need. I'm trying it later today.
As @PommeOuest mentioned, you can still use [UIView setAnimationRepeatCount:] inside the animation block. I just tried it in my project and it works well.
I'm using Xcode 4 and iOS 5.
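For reference, the approach being described looks roughly like this; the duration, repeat count and animated property are just examples, and myView is a placeholder. Note that you also need to pass the repeat option for the count to take effect:
[UIView animateWithDuration:0.5
                      delay:0
                    options:UIViewAnimationOptionRepeat
                 animations:^{
                     [UIView setAnimationRepeatCount:3];   // finite number of repeats
                     myView.alpha = 0.0;
                 }
                 completion:nil];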
Set a completion callback, re-initiate the animation in it, and keep track of the counter yourself.
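A sketch of that approach with illustrative names; the helper calls itself from the completion block until the counter runs out:
- (void)runAnimationOnView:(UIView *)view remainingRepeats:(NSUInteger)count {
    if (count == 0) {
        return;   // done repeating
    }
    [UIView animateWithDuration:0.5
                     animations:^{
                         view.alpha = 1.0 - view.alpha;   // whatever you animate
                     }
                     completion:^(BOOL finished) {
                         [self runAnimationOnView:view remainingRepeats:count - 1];
                     }];
}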
UIViews that don't handle their events pass them up the chain. By default, this passes them to their parent view and, if still not handled, ultimately to their parent UIViewController.
UIScrollView breaks this (there are lots of questions on SO that are variations on the theme of "why does my app stop working once I add a UIScrollView?").
UISV decides whether the event is for itself, and if not, it passes it DOWN (into its subviews); if they don't handle the event, UISV just throws it away. That's the bug.
In that case, it's supposed to throw them back up to its own parent view, and ultimately to the parent UIVC. AFAICT, this is why so many people get confused: it's not working as documented (NB: as views are documented; UISV is simply "undocumented" on this matter, in that it doesn't declare what it aims to do in this situation).
So ... is there an easy fix for this bug? Is there a category I could write that would fix UISV in general and save me from having to create "fake" UIView subclasses that exist purely to capture events and hand them where they're supposed to go? (which makes for bug-prone code)
In particular, from Apple's docs:
If the timer fires without a significant change in position, the scroll view sends tracking events to the touched subview of the content view. If the user then drags their finger far enough before the timer elapses, the scroll view cancels any tracking in the subview and performs the scrolling itself.
...if I could override that "if the timer fires" method, and implement it correctly, I believe I could fix all my UISV instances.
But:
- would Apple consider this "using a private API"? (Their description of "private" is nonsensical in normal programming terms, and I can't understand what they do and don't mean by it.)
- does anyone know what this method is, or a good way to go about finding it? (debugging the compiled ObjC classes to find the symbol names, perhaps?)
I've found a partial answer that's correct, but not 100% usable :(
iPhone OS 4.0 lets you remotely add listeners to a given view, via the UIGestureRecognizer class. That's great, and works neatly.
The only problem is ... it won't work on any 3.x iPhones or iPod touches.
(but if you're targeting 4.0 and above, it's an easy way forward)
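A minimal sketch of the 4.0+ approach; the target and selector names are illustrative:
UITapGestureRecognizer *tap =
    [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(handleTap:)];
[someSubview addGestureRecognizer:tap];   // listen in on a view without subclassing it
[tap release];                            // pre-ARC, matching the era of this post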
EDIT:
On OS 3.x, I created a custom UIView subclass that has extra properties:
NSObject *objectToDelegateToOnTouch;
id touchSourceIdentifier;
Whenever a touch comes in, the view sends the touch message directly to the objectToDelegateToOnTouch, but with the extra parameter of the touchSourceIdentifier.
This way, whenever you get a touch, you know where it came from (you can use an object, or a string, or anything you want as the "identifier").
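Fleshed out a little, the subclass might look like the sketch below. Only the two properties come from the description above; the protocol name and the forwarding selector are assumptions:
// Hypothetical protocol for the object receiving forwarded touches.
@protocol TouchForwardingTarget
- (void)forwardedTouchesBegan:(NSSet *)touches withEvent:(UIEvent *)event sourceIdentifier:(id)identifier;
@end

@interface TouchForwardingView : UIView
@property (nonatomic, assign) NSObject<TouchForwardingTarget> *objectToDelegateToOnTouch;
@property (nonatomic, retain) id touchSourceIdentifier;
@end

@implementation TouchForwardingView
@synthesize objectToDelegateToOnTouch, touchSourceIdentifier;

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    // Pass the touch straight through, tagged with this view's identifier
    // so the receiver knows where it came from.
    [objectToDelegateToOnTouch forwardedTouchesBegan:touches withEvent:event sourceIdentifier:touchSourceIdentifier];
}
@end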
We have an app in the App Store, Bust~A~Spook, that we had an issue with. When you tap the screen we use CALayer to find the position of all the views during their animation, and if you hit one we start a die sequence. However, there is a noticeable delay; it appears as if the touches are buffered and we receive the event too late. Is there a way to poll, or any better way to respond to touches, to avoid this lag time?
This is in a UIView, not a UIScrollView.
Are you using a UIScrollView to host all this? There's a property of that called delaysContentTouches. This defaults to YES, which means the view tries to ascertain whether a touch is a scroll gesture or not, before passing it on. You might try setting this to NO and seeing if that helps.
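If a scroll view does turn out to be involved somewhere in the hierarchy, the property mentioned above is set like this (scrollView is a placeholder for your instance):
scrollView.delaysContentTouches = NO;   // deliver touches to subviews immediately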
This is a pretty old post about a seasonal app, so the OP probably isn't still working on this problem, but others who come across the same problem may find this useful.
I agree with Kriem that CPU overload is a common cause of significant delay in touch processing, though there is a lot of optimization one can do before having to pull out OpenGL. CALayer is quite well optimized for the kinds of problems you're describing here.
We should first check the basics:
- CALayers added to the main view's layer
- touchesBegan:withEvent: implemented in the main view
- When the phase is UITouchPhaseBegan, you call hitTest: on the main view's layer to find the appropriate sub-layer
- Die sequence starts on the relevant model object, updating the layer.
Then, we can check performance using Instruments. Make sure your CPU isn't overloaded. Does everything run fine in simulator but have trouble on the device?
The problem you're trying to solve is very common, so you should not expect a complex or tricky solution to be required. It is most likely that the design or implementation has a basic flaw and just needs troubleshooting.
Delayed touches usually indicate a CPU overload. Using an NSTimer for frame-to-frame action is prone to interfering with the touch handling.
If that's the case for your app, then my advice is very simple: OpenGL.
If you're doing any sort of core-animation animation of the CALayers at the same time as you're hit-testing, you must get the presentationLayer before calling hitTest:, as the positions of the model layers do not reflect what might be on screen, but the positions to which the layers are animating.
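In code, that might look roughly like this sketch (touchesBegan: in the main view, hit-testing the presentation layer); the die-sequence call is a placeholder:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    CGPoint point = [touch locationInView:self];

    // hitTest: expects a point in the receiver's superlayer coordinate space,
    // and we hit-test the presentation layer so animated positions are honored.
    CGPoint superlayerPoint = [self.layer convertPoint:point toLayer:self.layer.superlayer];
    CALayer *hit = [[self.layer presentationLayer] hitTest:superlayerPoint];
    if (hit != nil) {
        CALayer *tappedLayer = [hit modelLayer];   // map back to the model layer
        // ... start the die sequence for the object that owns tappedLayer ...
    }
}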
Hope that helps.