Dual Virtual Joystick for iOS simultaneous use - sprite-kit

I'm using a virtual joystick with SpriteKit, and it works great. I have tried both the JCInput version and the SpriteKit-Joystick version. I used each of them for movement, placed on the left side of the screen.
This is the code I used, although the documentation on GitHub is very good anyway:
For JCInput version:
self.joystick = [[JCJoystick alloc] initWithControlRadius:40 baseRadius:45 baseColor:[SKColor blueColor] joystickRadius:25 joystickColor:[SKColor redColor]];
[self.joystick setPosition:CGPointMake(70,70)];
[self addChild:self.joystick];
and the update function:
-(void)update:(CFTimeInterval)currentTime {
    /* Called before each frame is rendered */
    [self.myLabel1 setPosition:CGPointMake(self.myLabel1.position.x + self.joystick.x, self.myLabel1.position.y + self.joystick.y)];
    [self.myLabel2 setPosition:CGPointMake(self.myLabel2.position.x + self.imageJoystick.x, self.myLabel2.position.y + self.imageJoystick.y)];
}
And for the SpriteKit-Joystick version:
SKSpriteNode *jsThumb = [SKSpriteNode spriteNodeWithImageNamed:@"joystick"];
[jsThumb setScale:0.5f];
SKSpriteNode *jsBackdrop = [SKSpriteNode spriteNodeWithImageNamed:@"dpad"];
[jsBackdrop setScale:0.5f];
self.joystick = [Joystick joystickWithThumb:jsThumb andBackdrop:jsBackdrop];
self.joystick.position = CGPointMake(50,50);
[self addChild:self.joystick];
// I've already declared it in the header file
velocityTick = [CADisplayLink displayLinkWithTarget:self selector:@selector(joystickMovement)];
[velocityTick addToRunLoop:[NSRunLoop currentRunLoop] forMode:NSRunLoopCommonModes];
and the update function:
-(void)joystickMovement
{
    if (self.joystick.velocity.x != 0 || self.joystick.velocity.y != 0)
    {
        [self.myLabel1 setPosition:CGPointMake(self.myLabel1.position.x + 0.1 * self.joystick.velocity.x, self.myLabel1.position.y + 0.1 * self.joystick.velocity.y)];
    }
}
Now everything works perfectly, and I don't have any issue with it. But I need to add a second joystick to rotate my character (let's call the character self.myLabel1). I tried duplicating the object creation (with unique names, parameters and positions to put the second joystick on the right side of the screen, but with the rest of the code exactly the same as above).
They also work, but the problem is that they don't work simultaneously: I can use either the left one or the right one at any given time, but not both together. Do I need to run them on two separate threads? I've tried using two CADisplayLinks with two separate selectors, and nothing. I tried using the same one, and nothing.
Can anyone shed some light on this shadow?
Thanks a lot in advance.

You should override update: in your SKScene instead of using a CADisplayLink. You can call joystickMovement from update: and achieve the desired effect.
You can read more about the different methods called as SKScene processes each frame in the SKScene class reference.
If you haven't already, you'll also need to set multipleTouchEnabled to YES on your SKScene's view. You can do this from your GameViewController with
self.view.multipleTouchEnabled = YES;
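For example, with both sticks stored in properties, the per-frame handling might look like the sketch below. The property names (self.moveJoystick and self.rotateJoystick) and the rotation formula are assumptions; the sticks themselves are built exactly like the ones in the question.
-(void)update:(CFTimeInterval)currentTime {
    // Movement from the left stick.
    if (self.moveJoystick.velocity.x != 0 || self.moveJoystick.velocity.y != 0) {
        [self.myLabel1 setPosition:CGPointMake(self.myLabel1.position.x + 0.1 * self.moveJoystick.velocity.x,
                                               self.myLabel1.position.y + 0.1 * self.moveJoystick.velocity.y)];
    }

    // Rotation from the right stick: face the direction the stick is pushed.
    if (self.rotateJoystick.velocity.x != 0 || self.rotateJoystick.velocity.y != 0) {
        self.myLabel1.zRotation = atan2f(self.rotateJoystick.velocity.y,
                                         self.rotateJoystick.velocity.x);
    }
}
Because SKScene calls update: once per rendered frame, both sticks are read on the same pass and no CADisplayLink or extra thread is needed.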


Detect click/touch on isometric texture

I am having a hard time trying to implement click handling in a simple isometric Sprite Kit game.
I have a Game Scene (SKScene) with a Map (SKNode) containing multiple Tile objects (SKSpriteNode).
Here is a screenshot of the map:
I want to be able to detect the tile the user clicked on, so I implemented mouseDown on the Tile object. Here is my mouseDown in Tile.m:
-(void)mouseDown:(NSEvent *)theEvent
{
    [self setColorBlendFactor:0.5];
}
The code seems to work fine, but there is a glitch: the nodes overlap, and the click event is detected on the transparent part of a node. Example (the rects have been added only to illustrate the problem; they are not used in the logic):
As you can see, if I click on the top left corner of tile 7, tile 8 becomes transparent.
I tried something like getting all the nodes at the click location and checking whether the click is inside a CGPath, without success (I think there was something wrong with the coordinates).
So my question is: how do I detect the click only on the texture and not on the transparent part? Or is my approach to the problem wrong?
Any advice would be appreciated.
Edit: for anyone interested in the solution I finally used, see my answer here
My solution for such a problem right now is:
In your scene, get all the nodes at the position of your click, i.e.
[myScene nodesAtPoint:[theEvent locationInNode:myScene]]
Don't forget to check that you're not clicking the root of your scene, with something like:
if (![[myScene nodeAtPoint:[theEvent locationInNode:myScene]].name isEqual:@"MyScene"])
Then go through the array of candidate nodes and check the alpha of the texture (NOT myNode.alpha).
If the alpha is 0.0f, go on to the next node in the array.
Pick the first node whose alpha is not 0.0f and return that node's name.
This way you find your node (first) and can save it as the node you need, then get rid of the array, which you don't need anymore.
Then do what you want with your node.
By the way, check whether the node you found is nil after searching for its name; if it is, just return from your mouseDown method.
To get the alpha try this.
Mine looks something like that:
-(void)mouseDown:(NSEvent *)theEvent {
    /* Called when a mouse click occurs */
    if (![[self nodeAtPoint:[theEvent locationInNode:self]].name isEqual:self.name]) {
        /* find the node you clicked */
        NSArray *clickedNodes = [self nodesAtPoint:[theEvent locationInNode:self]];
        SKNode *clickedNode = [self childNodeWithName:[clickedNodes getClickedCellNode]];
        clickedNodes = nil;
        /* call the mouseDown method of the node you clicked to do node-specific actions */
        if (clickedNode) {
            [clickedNode mouseDown:theEvent];
        }
        /* kill the pointer to your clicked node */
        clickedNode = nil;
    }
}
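One possible way to read that texture alpha (not necessarily what the linked answer does) is to sample the tile's source image in a one-pixel bitmap context. The sketch below assumes each Tile keeps the CGImageRef it was textured with and uses the default (0.5, 0.5) anchor point; both are assumptions, not part of the original code.
#import <CoreGraphics/CoreGraphics.h>

// Returns the alpha (0..1) of the pixel under `point`, where `point` is in the
// node's own coordinate space, `image` is the CGImage the tile was textured
// with, and `nodeSize` is the node's size in points.
static CGFloat AlphaAtPoint(CGImageRef image, CGPoint point, CGSize nodeSize)
{
    size_t width  = CGImageGetWidth(image);
    size_t height = CGImageGetHeight(image);

    // Node space has its origin at the (0.5, 0.5) anchor point with y pointing
    // up -- the same orientation Core Graphics uses, so no flip is needed.
    CGFloat px = floor((point.x / nodeSize.width  + 0.5) * width);
    CGFloat py = floor((point.y / nodeSize.height + 0.5) * height);

    // Draw the image so that the target pixel lands in a 1x1 bitmap context,
    // then read back its alpha byte.
    unsigned char pixel[4] = {0, 0, 0, 0};
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pixel, 1, 1, 8, 4, colorSpace,
                                                 kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(-px, -py, width, height), image);
    CGContextRelease(context);

    return pixel[3] / 255.0;
}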
For simple geometries like yours, a workaround would be to superimpose invisible SKShapeNodes on top of your diamonds and watch for their touches (not for the SKSpriteNode ones).
If this still does not work, make sure you create the SKPhysicsBody using the "fromPolygon: myNode.path!" option...
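A minimal sketch of that overlay idea is below, assuming a tile size of 128 x 64 points and a hypothetical path property on Tile holding its diamond outline: keep a diamond CGPath per tile and test the clicked point against it, rather than trusting the rectangular frames that nodesAtPoint: tests against.
// Builds a diamond path centered on the node's origin; the caller owns the
// returned path and is responsible for releasing it.
static CGPathRef CreateDiamondPath(CGSize tileSize)
{
    CGMutablePathRef path = CGPathCreateMutable();
    CGPathMoveToPoint(path, NULL, 0, tileSize.height / 2);       // top corner
    CGPathAddLineToPoint(path, NULL, tileSize.width / 2, 0);     // right corner
    CGPathAddLineToPoint(path, NULL, 0, -tileSize.height / 2);   // bottom corner
    CGPathAddLineToPoint(path, NULL, -tileSize.width / 2, 0);    // left corner
    CGPathCloseSubpath(path);
    return path;
}

// In the scene's mouse handling, test each candidate tile against its own path.
- (Tile *)tileAtPoint:(CGPoint)scenePoint
{
    for (SKNode *node in [self nodesAtPoint:scenePoint]) {
        if (![node isKindOfClass:[Tile class]]) continue;
        Tile *tile = (Tile *)node;
        CGPoint local = [tile convertPoint:scenePoint fromNode:self];
        if (CGPathContainsPoint(tile.path, NULL, local, NO)) {
            return tile;
        }
    }
    return nil;
}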
I had some problems with bugs in [SKPhysicsWorld enumerateBodiesAtPoint:], but I found my solution and it will work for you too:
Create the tile path (your path of 4 points).
Catch the touch and convert the point to your tile node's coordinate space.
If the touch point is inside the shape node - win!
CODE:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    selectedBuilding = nil;

    // Convert the touch to the farm world node's coordinate space.
    CGPoint point = [[touches anyObject] locationInNode:farmWorldNode];

    NSArray *nodes = [farmWorldNode nodesAtPoint:point];
    if (!nodes || nodes.count == 0 || nodes.count == 1) {
        return;
    }

    NSMutableArray *buildingsArray = [NSMutableArray array];
    int count = (int)nodes.count;
    for (int i = 0; i < count; i++) {
        SKNode *foundBuilding = [nodes objectAtIndex:i];
        if ([foundBuilding isKindOfClass:[FBuildingBaseNode class]]) {
            FBuildingBaseNode *building = (FBuildingBaseNode *)foundBuilding;
            // Test the point against the building's collision shape, not its frame.
            CGPoint pointInsideBuilding = [building convertPoint:point fromNode:farmWorldNode];
            if ([building.colisionBaseNode containsPoint:pointInsideBuilding]) {
                NSLog(@"\n\n Building %@ \n\n ARRAY: %@ \n\n\n", building, foundBuilding);
                [buildingsArray addObject:building];
            }
        }
    }

    selectedBuilding = (FBuildingBaseNode *)[buildingsArray lastObject];
    buildingsArray = nil;
}

C4: Add panning to an object other than "self"

I watched the C4 tutorial on adding a pan gesture to an object and animating it to return to its original position when the panning is finished. I'm trying to add this to three individual objects. I have it working with one object so far to move it and reset it to a CGPoint, but for it to work, I have to add the pan gesture to "self", not the object. For reference, I'm pretty much using the code from here:
http://www.c4ios.com/tutorials/interactionPanning
If I add the gesture to the object itself, sure, it pans around, but then it just leaves itself at the last touch location. However, I'm assuming that leaving the gesture on "self" will affect more than just the object I want to move, and I want to be able to move the three objects individually.
I'm using roughly the same modification to the "move" method that's used in the example:
-(void)move:(UIPanGestureRecognizer *)recognizer {
    [character move:recognizer];
    if (recognizer.state == UIGestureRecognizerStateEnded) {
        [character setCenter:charStartOrigin];
    }
}
And then a new method to spawn the object:
-(void)createCharacters {
    character = [C4Shape ellipse:charStart];
    [character addGesture:PAN name:@"pan" action:@"move:"];
    [self.canvas addShape:character];
}
The example link you are working from is sneaky. Since I knew there was only going to be one object on the canvas, I could make it look like I was panning the label itself. This won't work for multiple objects, as you have already figured out.
To get different objects to move independently, and recognize when they are done being dragged, you need to subclass the objects and give them their own "abilities".
To do this I:
Subclass C4Shape
Add custom behaviour to the new class
Create subclassed objects on the canvas
The code for each step looks like the following:
Subclassing
You have to create a subclass that gives itself some behaviour. Since you're working with shapes, I have done it this way as well. I call my subclass Character; its files look like this:
Character.h
#import "C4Shape.h"
#interface Character : C4Shape
#property (readwrite, atomic) CGPoint startOrigin;
#end
I have added a property to the shape so that I can set its start origin (i.e. the point to which it will return).
Character.m
#import "Character.h"
#implementation Character
-(void)setup {
[self addGesture:PAN name:#"pan" action:#"move:"];
}
-(void)move:(UIGestureRecognizer *)sender {
if(sender.state == UIGestureRecognizerStateEnded) {
self.center = self.startOrigin;
} else {
[super move:sender];
}
}
#end
In a subclass of a C4 object, setup gets called in the same way as it does for the canvas... So, this is where I add the gesture for this object. Setup gets run after new or alloc/init are called.
The move: method is what I override with custom behaviour. In this method I catch the gesture recognizer, and if its state is UIGestureRecognizerStateEnded then I want to animate back to the start origin. Otherwise, I want it to move: as it normally would, so I simply call [super move:sender], which runs the default move: method.
That's it for the subclass.
Creating Subclassed Objects
My workspace then looks like the following:
#import "C4WorkSpace.h"
//1
#import "Character.h"
#implementation C4WorkSpace {
//2
Character *charA, *charB, *charC;
}
-(void)setup {
//3
CGRect frame = CGRectMake(0, 0, 100, 100);
//4
frame.origin = CGPointMake(self.canvas.width / 4 - 50, self.canvas.center.y - 50);
charA = [self createCharacter:frame];
frame.origin.x += self.canvas.width / 4.0f;
charB = [self createCharacter:frame];
frame.origin.x += self.canvas.width / 4.0f;
charC = [self createCharacter:frame];
//5
[self.canvas addObjects:#[charA,charB,charC]];
}
-(Character *)createCharacter:(CGRect)frame {
Character *c = [Character new];
[c ellipse:frame];
c.startOrigin = c.center;
c.animationDuration = 0.25f;
return c;
}
#end
I have added a method to my workspace that creates a Character object and adds it to the screen. This method creates a Character object by calling its new method (I have to do it this way because it is a subclass of C4Shape), turns it into an ellipse with the frame I gave it, sets its startOrigin, and changes its animationDuration.
What's going on with the rest of the workspace is this (NOTE: the steps are marked in the code above):
I #import the subclass so that I can create objects with it
I create 3 references to Character objects.
I create a frame that I will use to build each of the new objects
For each object, I reposition frame by changing its origin and then use it to create a new object with the createCharacter: method I wrote.
I add all of my new objects to the canvas.
NOTE: Because I created my subclass with a startOrigin property, I am able within that class to always animate back to that point. I am also able to set that point from the canvas whenever I want.

Select multiple images

I have this UITableView which displays images, downloaded from a database, as a matrix: 4 images in each table row.
To be able to select images from the views I'm using a UITapGestureRecognizer. To make each selection unique I've been trying to tag each tap recognizer and each imageView. That's where the problem is...
I've put a log within the for loop that creates and tags the imageViews and recognizers, and I can see in the output that they pass through all the values. However, when I try to get the tag later by pressing an image, I always get "3" (the last number in the table row). This makes me think the tags are simply overwriting each other, even though I'm creating a new object in each loop. Either that, or I'm reading it out the wrong way.
Unrelated parts cut out.
for (NSInteger i = 0; i < 4; i++) {
    asyncImage = [[AsyncImageView alloc] initWithFrame:frame];
    [asyncImage loadImageFromURL:url];
    asyncImage.tag = i;
    NSLog(@"TAG %d", asyncImage.tag);

    tapRecognizer = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(handleTap)];
    tapRecognizer.view.tag = i;
    NSLog(@"TapTAG %d", asyncImage.tag);
    [asyncImage addGestureRecognizer:tapRecognizer];
}
And the method:
- (void)handleTap {
    NSLog(@"TAP %d", self.tapRecognizer.view.tag);
}
If you think I'm doing it all totally wrong, a light push in the right direction is always welcome!
Thanks in advance, Tom
The following line has no effect until the gesture recognizer has been added to a view:
tapRecognizer.view.tag = i;
This is because tapRecognizer's view is initially nil. Make the assignment on the last line of your for loop to correct this problem.
Also, your NSLog always shows the tag of the last recognizer you added:
self.tapRecognizer.view.tag // Instance variable
not the one that fired the event. Change handleTap as follows:
- (void)handleTap:(UITapGestureRecognizer *)tapRecognizer {
    NSLog(@"TAP %d", tapRecognizer.view.tag);
}
You should also replace the tapRecognizer instance variable with a local variable in the method that adds the recognizer to the view, and add a colon : to your selector name:
action:@selector(handleTap:)
// HERE: -----------------^
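Putting both fixes together, a corrected loop might look like the following sketch; it assumes AsyncImageView is a UIImageView subclass and that frame, url and containerView come from your existing cell code.
for (NSInteger i = 0; i < 4; i++) {
    AsyncImageView *asyncImage = [[AsyncImageView alloc] initWithFrame:frame];
    [asyncImage loadImageFromURL:url];
    asyncImage.tag = i;
    asyncImage.userInteractionEnabled = YES; // image views ignore touches by default

    // One local recognizer per image view, with a colon in the selector so the
    // recognizer that fired is passed to handleTap:.
    UITapGestureRecognizer *tap =
        [[UITapGestureRecognizer alloc] initWithTarget:self
                                                action:@selector(handleTap:)];
    [asyncImage addGestureRecognizer:tap];

    [containerView addSubview:asyncImage]; // hypothetical container for the row
}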
I think you're doing it wrong in the loop.
Your loop runs 4 times, and each time through you store an AsyncImageView in the asyncImage variable (local or instance?). So the first time through the loop you create an object and store it in asyncImage, the second time this is overwritten, the third...
You have initialized 4 image views, but you are only referencing the last one, and the last one holds the gesture recognizer you want.
When do you add the ImageView to a view?
If you use the instance variable and overwrite it directly, asyncImage only ever points to one of the ImageViews you added to the screen, and after the loop has run 4 times and swapped out asyncImage each time, that pointer refers to the last image in the loop.
Hope you understand what the problem is here.

CALayer -hitTest: not respecting containsPoint: overload

I'm back again with a different question about the same function that I posted before:
- (AIEnemyUnit *)hitTestForEnemyUnit:(CGPoint)where {
    CALayer *layer = [self hitTest:where];
    while (layer) {
        if ([layer isKindOfClass:[AIEnemyUnit class]]) {
            return (AIEnemyUnit *)layer;
        } else {
            layer = layer.superlayer;
        }
    }
    return nil;
}
I have a bomb that the user drags on top of the enemy so that it is displayed directly above the AIEnemyUnit. For this bomb I overrode CALayer's -containsPoint: to return NO during a drag, to allow -hitTest: to pass through the layer. This type of hit testing was working fine with these "pass-through" layers as long as I only used CGImageRef contents. However, once I started adding sublayers for additional effects on the bomb, -hitTest: immediately broke. It was obvious: the new layers were capturing the -hitTest:. I tried the same technique by overriding -containsPoint: on those layers too, but it was still returning the bomb's generic CALayer subclass instead of passing through.
Is there a better way?
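For reference, the pass-through technique the question describes boils down to something like the sketch below; the class name and the dragInProgress flag are assumptions standing in for the question's own bomb layer and drag state.
#import <QuartzCore/QuartzCore.h>

@interface BombLayer : CALayer
@property (nonatomic) BOOL dragInProgress; // set by the pan handler while dragging
@end

@implementation BombLayer

- (BOOL)containsPoint:(CGPoint)point
{
    // Opting out of containment while dragging is what lets -hitTest: pass
    // through; as the question notes, this was enough while the bomb was a
    // single layer but stops being enough once effect sublayers are added.
    if (self.dragInProgress) {
        return NO;
    }
    return [super containsPoint:point];
}

@end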
Maybe the "where" point is not relative to your "self" layer. You need to convert these points between the layers coordinate systems using:
– convertPoint:fromLayer: or
– convertPoint:toLayer:
See http://developer.apple.com/library/ios/documentation/GraphicsImaging/Reference/CALayer_class/Introduction/Introduction.html#//apple_ref/occ/instm/CALayer/convertPoint:fromLayer:
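For example (a sketch only; sourceLayer stands for whichever layer the point was originally measured in):
// Per the CALayer documentation, -hitTest: expects the point in the coordinate
// space of the receiver's *superlayer*, so convert before calling it.
CGPoint converted = [self.superlayer convertPoint:where fromLayer:sourceLayer];
CALayer *hit = [self hitTest:converted];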
I resolved this by putting everything on a second "root" layer (called "gameLayer") the same size as the original. Then during the UIPanGestureRecognizer, I move the bomb element from "gameLayer" into my UIView.layer. Then, while I am testing for an AIEnemyUnit, I only run the hitTest on the "gameLayer".
UIView.layer --------- gameLayer
     |                     |
dragObj (bomb)        gameElements
                           |
                          bomb
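A sketch of that reparenting approach is below; handlePan:, bomb, gameLayer and the surrounding view controller are stand-ins for the question's own objects, not code from the question.
- (void)handlePan:(UIPanGestureRecognizer *)pan
{
    CGPoint p = [pan locationInView:self.view];

    if (pan.state == UIGestureRecognizerStateBegan) {
        // Lift the bomb out of gameLayer so hit tests against gameLayer can no
        // longer land on it (or on any of its effect sublayers).
        [bomb removeFromSuperlayer];
        [self.view.layer addSublayer:bomb];
    }

    bomb.position = p;

    if (pan.state == UIGestureRecognizerStateEnded) {
        // Only gameLayer is hit-tested, so the dragged bomb never gets in the way.
        CALayer *layer = [gameLayer hitTest:p];
        while (layer && ![layer isKindOfClass:[AIEnemyUnit class]]) {
            layer = layer.superlayer;
        }
        AIEnemyUnit *target = (AIEnemyUnit *)layer;

        // Hand the bomb back to the game layer when the drag finishes.
        [bomb removeFromSuperlayer];
        [gameLayer addSublayer:bomb];

        if (target) {
            // ... apply the bomb to the enemy here.
        }
    }
}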

NSMutableArray and memory dealloc

I'm making an app for the iPhone using cocos2d, and I am trying to figure out the best approach for removing items from an NSMutableArray and from the layer at the same time.
What I mean by this is that the objects within the array inherit from CCNode and contain a CCSprite which I have added as a child of the CCLayer. The code below is in a CCLayer that has the NSMutableArray called bonusIcons.
-(void) AddNewBonusIcon:(int)colour :(int)pos {
    BonusIcon *newbonus;
    CGSize winSize = [[CCDirector sharedDirector] winSize];
    int maxX = winSize.width;
    int maxY = winSize.height;
    int posX, posY;

    newbonus = [[BonusIcon alloc] init];
    [newbonus setBonusColour:colour];

    int bonusOffset = 0;
    posX = anchorX;
    posY = anchorY;
    bonusOffset = [bonusIcons count] * ([newbonus.bonus_sprite boundingBox].size.width / 2 + 12);

    newbonus.bonus_sprite.position = ccp(posX + bonusOffset, posY);
    [newbonus.bonus_sprite setTag:pos];
    [self addChild:newbonus.bonus_sprite];

    [bonusIcons addObject:newbonus];
    [newbonus release];
}
This appears to do what I want for adding the object's sprite to the screen and adding the object to the NSMutableArray. Of course, this is probably not the correct way to do it, so shout at me if not!
Next I try to delete the objects from the array and from the screen. I can delete them from the array with no problem; I just do the following:
for (int i = INITIAL_BONUSES - 1; i >= 0; i--) {
    [bonusIcons removeObjectAtIndex:i];
}
This of course leaves the sprites on screen. So how do I approach what I am trying to do, so that I can remove both the sprites from the screen and the objects in the array that the sprites are associated with? I can remove the sprites from the screen by using the tags and typing
[self removeChildByTag:i cleanup:YES];
but then I get errors when trying to remove items from the array. I assume this is because I have already deleted part of the object, and the dealloc of the CCNode can no longer find the sprite to release?
So any pointers/tips etc. on how I should be doing this would be much appreciated. I have read a bunch of stuff on memory management, which I believe is my current issue, but I just don't seem to be getting it right.
Thanks all.
Edit: since posting this I have removed the sprite dealloc from the CCNode itself and added it to the CCLayer above it. This has stopped the crashing, so I guess I was right about the problem I was having. I don't think the way I solved it is ideal, but it will do until I find a better way.
You don't have it in the code you posted, but your question seems to strongly imply that you are calling dealloc. The only place you should ever call dealloc is [super dealloc] at the end of a class's dealloc method. Calling it on anything but super or in any other place is wrong and will lead to errors about prematurely deallocated objects (because, well, that's what it does).
If this is what you're doing, I strongly suggest you read Apple's memory management guide. It lays out how memory management works in Cocoa very simply yet thoroughly.
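For the removal itself, one way to take the sprites off the screen and empty the array together, without ever calling dealloc yourself, is sketched below; it assumes bonusIcons holds the BonusIcon objects whose bonus_sprite children were added to this layer, as in the question.
-(void) removeAllBonusIcons {
    for (BonusIcon *bonus in bonusIcons) {
        // Detach the sprite from the layer; cleanup:YES also stops its actions.
        [bonus.bonus_sprite removeFromParentAndCleanup:YES];
    }
    // Dropping the objects from the array releases them; their dealloc runs on
    // its own once nothing retains them -- never call dealloc directly.
    [bonusIcons removeAllObjects];
}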