Detect click/touch on isometric texture - sprite-kit

I am having a hard time trying to implement click handling in a simple isometric Sprite Kit game.
I have a Game Scene (SKScene) with a Map (SKNode) containing multiple Tile objects (SKSpriteNode).
Here is a screenshot of the map:
I want to be able to detect the tile the user clicked on, so I implemented mouseDown on the Tile object. Here is my mouseDown in Tile.m:
-(void)mouseDown:(NSEvent *)theEvent
{
    [self setColorBlendFactor:0.5];
}
The code seems to work fine, but there is a glitch: the nodes overlap, and the click event is detected on the transparent part of the node. Example (the rects have been added only to illustrate the problem; they are not used in the logic):
As you can see, if I click on the top left corner of the tile 7, the tile 8 becomes transparent.
I tried something like getting all the nodes at the click location and checking whether the click is inside a CGPath, without success (I think something was wrong with the coordinates).
So my question is: how do I detect the click only on the texture and not on the transparent part? Or is my approach to the problem wrong?
Any advice would be appreciated.
Edit : for anyone interested in the solution I finally used, see my answer here

My solution for such a problem right now is:
In your scene, get all the nodes at the position of your click, i.e.
[myScene nodesAtPoint:[theEvent locationInNode:myScene]]
Don't forget to check that you're not clicking the root of your scene, with something like:
if (![[myScene nodeAtPoint:[theEvent locationInNode:myScene]].name isEqualToString:@"MyScene"])
Then go through the array of candidate nodes and check the alpha of the texture (NOT myNode.alpha).
If the alpha is 0.0f, go on to the next node in the array.
Pick the first node whose alpha is not 0.0f and return that node's name.
This way you find your node first, save it as the node you need, and can release the array, which you don't need anymore.
Then do what you want with your node.
By the way, check whether the node you looked up by name is nil; if it is, just break out of your mouseDown method.
To get the alpha try this.
Mine looks something like this:
-(void)mouseDown:(NSEvent *)theEvent {
    /* Called when a mouse click occurs */
    if (![[self nodeAtPoint:[theEvent locationInNode:self]].name isEqualToString:self.name]) {
        /* find the node you clicked */
        NSArray *clickedNodes = [self nodesAtPoint:[theEvent locationInNode:self]];
        SKNode *clickedNode = [self childNodeWithName:[clickedNodes getClickedCellNode]];
        clickedNodes = nil;
        /* call the mouseDown method of the node you clicked to do node-specific actions */
        if (clickedNode) {
            [clickedNode mouseDown:theEvent];
        }
        /* kill the pointer to your clicked node */
        clickedNode = nil;
    }
}
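The per-pixel alpha lookup itself is plain index arithmetic once you have the texture's RGBA bytes (extracting them from an SKTexture/CGImage is a separate step not shown here). A minimal sketch in C, assuming a tightly packed 8-bit RGBA buffer; the function name is illustrative:

```c
#include <stdint.h>
#include <stddef.h>

/* Alpha of pixel (x, y) in a tightly packed 8-bit RGBA buffer
 * of the given width: 4 bytes per pixel, alpha is the last byte. */
static uint8_t alpha_at(const uint8_t *rgba, size_t width, size_t x, size_t y)
{
    return rgba[(y * width + x) * 4 + 3];
}
```

A click handler would reject the node when `alpha_at(...)` at the local click position is 0 and fall through to the next candidate node.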

For simple geometries like yours, a workaround would be to superimpose invisible SKShapeNodes on top of your diamonds and watch for touches on those (not on the SKSpriteNodes).
If this still does not work, make sure you create the SKPhysicsBody using the "fromPolygon: myNode.path!" option...

I had some problems with bugs in [SKPhysicsWorld enumerateBodiesAtPoint:], but I found a solution, and it will work for you too.
Create the tile path (your path, 4 points)
Catch the touch and convert the point to your tile node's coordinate space
If the touch point is inside the shape node - win!
CODE:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    selectedBuilding = nil;
    CGPoint point = [[touches anyObject] locationInNode:farmWorldNode];
    NSArray *nodes = [farmWorldNode nodesAtPoint:point];
    if (!nodes || nodes.count == 0 || nodes.count == 1) {
        return;
    }
    NSMutableArray *buildingsArray = [NSMutableArray array];
    int count = (int)nodes.count;
    for (int i = 0; i < count; i++) {
        SKNode *findedBuilding = [nodes objectAtIndex:i];
        if ([findedBuilding isKindOfClass:[FBuildingBaseNode class]]) {
            FBuildingBaseNode *building = (FBuildingBaseNode *)findedBuilding;
            CGPoint pointInsideBuilding = [building convertPoint:point fromNode:farmWorldNode];
            if ([building.colisionBaseNode containsPoint:pointInsideBuilding]) {
                NSLog(@"\n\n Building %@ \n\n ARRAY: %@ \n\n\n", building, findedBuilding);
                [buildingsArray addObject:building];
            }
        }
    }
    selectedBuilding = (FBuildingBaseNode *)[buildingsArray lastObject];
    buildingsArray = nil;
}
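For axis-aligned diamond tiles specifically, the 4-point path-containment test above collapses to one line of arithmetic: a point is inside a diamond centred at (cx, cy) with half-width w and half-height h exactly when |dx|/w + |dy|/h ≤ 1. A sketch of that test in C (not code from the answer above; the function name is made up):

```c
#include <math.h>
#include <stdbool.h>

/* True if (px, py) lies inside (or on the edge of) the diamond
 * centred at (cx, cy) with half-width w and half-height h. */
static bool point_in_diamond(double px, double py,
                             double cx, double cy,
                             double w, double h)
{
    return fabs(px - cx) / w + fabs(py - cy) / h <= 1.0;
}
```

This avoids building a CGPath per tile when every tile is the same diamond shape; only the centre changes.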

Related

Dual Virtual Joystick for iOS simultaneous use

I'm using a virtual joystick for SpriteKit, and it works great. I have tried both the JCInput version and the SpriteKit-Joystick version. Both were used for movement, placed on the left.
This is the code I used, although the documentation on GitHub is very good anyway:
For JCInput version:
self.joystick = [[JCJoystick alloc] initWithControlRadius:40 baseRadius:45 baseColor:[SKColor blueColor] joystickRadius:25 joystickColor:[SKColor redColor]];
[self.joystick setPosition:CGPointMake(70,70)];
[self addChild:self.joystick];
and the update function:
-(void)update:(CFTimeInterval)currentTime {
    /* Called before each frame is rendered */
    [self.myLabel1 setPosition:CGPointMake(self.myLabel1.position.x+self.joystick.x, self.myLabel1.position.y+self.joystick.y)];
    [self.myLabel2 setPosition:CGPointMake(self.myLabel2.position.x+self.imageJoystick.x, self.myLabel2.position.y+self.imageJoystick.y)];
}
And for the SpriteKit-Joystick version:
SKSpriteNode *jsThumb = [SKSpriteNode spriteNodeWithImageNamed:@"joystick"];
[jsThumb setScale:0.5f];
SKSpriteNode *jsBackdrop = [SKSpriteNode spriteNodeWithImageNamed:@"dpad"];
[jsBackdrop setScale:0.5f];
self.joystick = [Joystick joystickWithThumb:jsThumb andBackdrop:jsBackdrop];
self.joystick.position = CGPointMake(50,50);
[self addChild:self.joystick];
//I've already declared it in the header file
velocityTick = [CADisplayLink displayLinkWithTarget:self selector:@selector(joystickMovement)];
[velocityTick addToRunLoop:[NSRunLoop currentRunLoop] forMode:NSRunLoopCommonModes];
and the update function:
-(void)joystickMovement
{
    if (self.joystick.velocity.x != 0 || self.joystick.velocity.y != 0)
    {
        [self.myLabel1 setPosition:CGPointMake(self.myLabel1.position.x + .1 * self.joystick.velocity.x, self.myLabel1.position.y + .1 * self.joystick.velocity.y)];
    }
}
Now everything works perfectly, and I don't have any issue with it. But I need to add another one to rotate my character (let's call it self.myLabel1). I tried duplicating the object creation (with unique names, parameters, and positions to put them on the right side of the screen, but with the rest of the code exactly the same as above).
They also work, but the problem is that they don't work simultaneously: I can use either the left one or the right one at any given time, not both together. Do I need to run them on two separate threads? I've tried using two CADisplayLinks with two separate selectors, and nothing. I tried using the same one, and nothing.
Can anyone shed some light on this shadow?
Thanks a lot in advance.
You should be overriding update in your SKScene instead of using CADisplayLink. You can call joystickMovement from update and achieve the desired effect.
You can read more about the different methods called as SKScene processes each frame in the SKScene class reference.
If you haven't already, you'll also need to set multipleTouchEnabled to true in your SKScene's view. You can do this from your GameViewController with
self.view.multipleTouchEnabled = YES;

C4: Add panning to an object other than "self"

I watched the C4 tutorial on adding a pan gesture to an object and animating it back to its original position when the panning finishes. I'm trying to add this to three individual objects. I have it working with one object so far, moving it and resetting it to a CGPoint, but for it to work I have to add the pan gesture to "self", not the object. For reference, I'm pretty much using the code from here:
http://www.c4ios.com/tutorials/interactionPanning
If I add the gesture to the object itself, sure, it pans around, but then it just stays at the last touch location. However, I'm assuming that leaving the gesture on "self" will affect more than just the object I want to move, and I want to be able to move the three objects individually.
I'm using roughly the same modification to the "move" method that's used in the example:
-(void)move:(UIPanGestureRecognizer *)recognizer {
    [character move:recognizer];
    if (recognizer.state == UIGestureRecognizerStateEnded) {
        [character setCenter:charStartOrigin];
    }
}
And then a new method to spawn the object:
-(void)createCharacters {
    character = [C4Shape ellipse:charStart];
    [character addGesture:PAN name:@"pan" action:@"move:"];
    [self.canvas addShape:character];
}
The example link you are working from is sneaky. Since I knew that there was only going to be one object on the canvas I knew I could make it look like I was panning the label. This won't work for multiple objects, as you have already figured out.
To get different objects to move independently, and recognize when they are done being dragged, you need to subclass the objects and give them their own "abilities".
To do this I:
Subclass C4Shape
Add custom behaviour to the new class
Create subclassed objects on the canvas
The code for each step looks like the following:
subclassing
You have to create a subclass that gives itself some behaviour. Since you're working with shapes, I have done it this way as well. I call my subclass Character; its files look like this:
Character.h
#import "C4Shape.h"

@interface Character : C4Shape
@property (readwrite, atomic) CGPoint startOrigin;
@end
I have added a property to the shape so that I can set its start origin (i.e. the point to which it will return).
Character.m
#import "Character.h"

@implementation Character
-(void)setup {
    [self addGesture:PAN name:@"pan" action:@"move:"];
}

-(void)move:(UIGestureRecognizer *)sender {
    if (sender.state == UIGestureRecognizerStateEnded) {
        self.center = self.startOrigin;
    } else {
        [super move:sender];
    }
}
@end
In a subclass of a C4 object, setup gets called in the same way as it does for the canvas... So, this is where I add the gesture for this object. Setup gets run after new or alloc/init are called.
The move: method is the one I want to override with custom behaviour. In it I catch the gesture recognizer, and if its state is UIGestureRecognizerStateEnded I animate back to the start origin. Otherwise, I want it to move: like it should, so I simply call [super move:sender], which runs the default move: method.
That's it for the subclass.
Creating Subclassed Objects
My workspace then looks like the following:
#import "C4WorkSpace.h"
//1
#import "Character.h"

@implementation C4WorkSpace {
    //2
    Character *charA, *charB, *charC;
}

-(void)setup {
    //3
    CGRect frame = CGRectMake(0, 0, 100, 100);
    //4
    frame.origin = CGPointMake(self.canvas.width / 4 - 50, self.canvas.center.y - 50);
    charA = [self createCharacter:frame];
    frame.origin.x += self.canvas.width / 4.0f;
    charB = [self createCharacter:frame];
    frame.origin.x += self.canvas.width / 4.0f;
    charC = [self createCharacter:frame];
    //5
    [self.canvas addObjects:@[charA, charB, charC]];
}

-(Character *)createCharacter:(CGRect)frame {
    Character *c = [Character new];
    [c ellipse:frame];
    c.startOrigin = c.center;
    c.animationDuration = 0.25f;
    return c;
}
@end
I have added a method to my workspace that creates a Character object and adds it to the screen. This method creates a Character object by calling its new method (I have to do it this way because it is a subclass of C4Shape), turns it into an ellipse with the frame I give it, sets its startOrigin, and changes its animationDuration.
What's going on with the rest of the workspace is this (NOTE: the steps are marked in the code above):
I #import the subclass so that I can create objects with it
I create 3 references to Character objects.
I create a frame that I will use to build each of the new objects
For each object, I reposition frame by changing its origin and then use it to create a new object with the createCharacter: method I wrote.
I add all of my new objects to the canvas.
NOTE: Because I created my subclass with a startOrigin property, I am able within that class to always animate back to that point. I am also able to set that point from the canvas whenever I want.

Creating a MKPolygon from user-placed annotations in map

I want the user to be able to create polygons after placing some (unknown number of) MKPointAnnotations on the map. I have added a gesture recognizer that gets activated once the user taps a button, and so annotations are placed. But how do I use these as the corners of an MKPolygon?
Below is the code for saving the corners of the polygon, after some modifications I made to it. Now the app crashes, and the crash reporter says index out of range. The corners are MKPointAnnotations created via a gesture recognizer.
-(IBAction)addCorner:(id)sender
{
    NSMutableArray *addCorners = [[NSMutableArray alloc] init];
    [addCorners addObject:pointAnnotation];
    ptsArray = addCorners;
}

-(IBAction)addPolygonOverlay:(id)sender
{
    int cornersNumber = sizeof(ptsArray);
    MKMapPoint points[cornersNumber];
    for (int i = 0; i < cornersNumber; i++) {
        points[i] = MKMapPointForCoordinate([[ptsArray objectAtIndex:i] coordinate]);
    }
    MKPolygon *polygon = [MKPolygon polygonWithPoints:points count:cornersNumber];
    [mapview addOverlay:polygon];
}
The first problem is the addCorner method. Instead of adding each corner to the ptsArray variable, it creates a new array with just the latest corner and sets ptsArray equal to that, so it only ever holds the one, last corner.
Change the addCorner method like this:
-(IBAction)addCorner:(id)sender
{
    if (ptsArray == nil)
    {
        self.ptsArray = [NSMutableArray array];
    }
    [ptsArray addObject:pointAnnotation];
}
Also make sure ptsArray is declared and synthesized properly:
//in the .h file...
@property (nonatomic, retain) NSMutableArray *ptsArray;

//in the .m file...
@synthesize ptsArray;
(By the way, why not add the corner to ptsArray right where the pointAnnotation is created instead of in a separate user action?)
The second problem is in the addPolygonOverlay method. You have to use the NSArray count property to get the number of items in the array. The sizeof operator returns the number of bytes of memory the passed variable uses; for ptsArray, which is a pointer, it will return the size of the pointer itself (4 on a 32-bit system). If ptsArray has fewer than 4 items, you will get the "index out of range" exception.
So change
int cornersNumber = sizeof(ptsArray);
to
int cornersNumber = (int)ptsArray.count;
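The difference is easy to demonstrate in plain C: sizeof applied to a pointer yields the pointer's own size, never the number of elements it points at (the array and helper below are illustrative):

```c
#include <stddef.h>

/* sizeof applied to a pointer parameter measures the pointer itself
 * (4 or 8 bytes), not the array it points into. */
static size_t bytes_of_pointer(int *p) { return sizeof p; }

/* Element count of a true array, visible at compile time. */
#define COUNT_OF(arr) (sizeof(arr) / sizeof((arr)[0]))
```

This is exactly the trap in the original addPolygonOverlay: `ptsArray` is a pointer, so `sizeof(ptsArray)` gives the pointer size, not the corner count.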
Another important thing to note is that the polygon sides will be drawn in the order the points are in the array. If the user does not add corners in a clockwise or counter-clockwise order, the polygon will look strange. You could re-create the polygon overlay immediately after a user adds/removes an annotation so they get immediate feedback on how it will look.

can't get CALayer containsPoint to work

I've got an array of CALayers containing images which can be moved around by the user, and I'm trying to use containsPoint to detect whether they have been touched. The code is as follows:
int num_objects = [pageImages count];
lastTouch = [touch locationInView:self];
CGRect objRect;
CALayer *objLayer;
for (int i = 0; i < num_objects; i++) {
    objLayer = [pageImages objectAtIndex:i];
    objRect = objLayer.bounds;
    NSLog(@"layerPos:%@, layerBounds:%@", NSStringFromCGPoint(objLayer.position), NSStringFromCGRect(objRect));
    NSLog(@"point:%@", NSStringFromCGPoint(lastTouch));
    if ([objLayer containsPoint:lastTouch] == TRUE) {
        NSLog(@"touched object %d", i);
        return i;
    }
}
The information I'm outputting puts the touch within the bounds of the layer (I've assumed position is the centre of the layer; I haven't altered the anchor point, and the layer hasn't been rotated or anything like that either), but containsPoint: doesn't return true. Can anyone see what I'm doing wrong, or suggest a different/better way to achieve what I want?
So .. found the problem - the point needs to be converted from superlayer coordinates in order to work with the layer containsPoint:
replace
if ([objLayer containsPoint:lastTouch] == TRUE) {
with
if ([objLayer containsPoint:[objLayer convertPoint:lastTouch fromLayer:objLayer.superlayer]] == TRUE) {
You can mess about with the coordinates yourself and use CGRectContainsPoint: (see comments above), but this is a simpler solution, so I get to answer my own question for the first time. Big tick for me, yay!
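For an untransformed layer with bounds origin at (0, 0), the conversion convertPoint:fromLayer: performs is simple arithmetic: subtract the layer's position (which sits at its anchorPoint, (0.5, 0.5) by default) and add back the anchor offset within the bounds. A sketch of that arithmetic in C, with illustrative names (the real method also handles transforms and a non-zero bounds origin):

```c
typedef struct { double x, y; } Point2D;

/* Convert a point from superlayer coordinates into a layer's own
 * coordinate space. position is where the layer's anchor point sits
 * in the superlayer; anchorX/anchorY are the normalized anchor
 * (0.5, 0.5 by default). Assumes no transform, bounds origin (0, 0). */
static Point2D super_to_layer(Point2D p, Point2D position,
                              double width, double height,
                              double anchorX, double anchorY)
{
    Point2D out;
    out.x = p.x - position.x + anchorX * width;
    out.y = p.y - position.y + anchorY * height;
    return out;
}
```

So a touch that lands exactly on the layer's position maps to the centre of its bounds, which is why the raw superlayer point fails the containsPoint: test.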

How do I determine if a coordinate is in the currently visible map region?

I have a list of several hundred locations and only want to display an MKPinAnnotation for those locations currently on the screen. The screen starts with the user's current location within a 2-mile radius. Of course, the user can scroll, and zoom on the screen. Right now, I wait for a map update event, then loop through my location list, and check the coordinates like this:
-(void)mapViewDidFinishLoadingMap:(MKMapView *)mapView {
    CGPoint point;
    CLLocationCoordinate2D coordinate;
    . . .
    /* in location loop */
    coordinate.latitude = [nextLocation getLatitude];
    coordinate.longitude = [nextLocation getLongitude];
    /* Determine if point is in view. Is there a better way than this? */
    point = [mapView convertCoordinate:coordinate toPointToView:nil];
    if ((point.x > 0) && (point.y > 0)) {
        /* Add coordinate to array that is later added to mapView */
    }
}
So I am asking to convert the coordinate to where the point would be on the screen (unless I misunderstand this method, which is very possible). If the coordinate isn't on the screen, then I never add it to the mapView.
So my question is: is this the correct way to determine whether a location's lat/long would appear in the current view and should be added to the mapView? Or should I be doing this a different way?
In your code, you should pass a view for the toPointToView: argument; I gave it my mapView. You also have to check an upper bound for x and y.
Here's some code which worked for me (told me the currently visible annotations on my map, while looping through the annotation):
for (Shop *shop in self.shops) {
    ShopAnnotation *ann = [ShopAnnotation annotationWithShop:shop];
    [self.mapView addAnnotation:ann];
    CGPoint annPoint = [self.mapView convertCoordinate:ann.coordinate
                                         toPointToView:self.mapView];
    if (annPoint.x > 0.0 && annPoint.y > 0.0 &&
        annPoint.x < self.mapView.frame.size.width &&
        annPoint.y < self.mapView.frame.size.height) {
        NSLog(@"%@ Coordinate: %f %f", ann.title, annPoint.x, annPoint.y);
    }
}
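The screen-space check in that loop is just a point-in-rect test against the view's frame. As standalone arithmetic in C (illustrative name; origin assumed at (0, 0) as for a full-screen map view):

```c
#include <stdbool.h>

/* True if (x, y) lies strictly inside a width x height view
 * whose origin is (0, 0). Points on the edge are excluded,
 * matching the strict comparisons in the loop above. */
static bool point_on_screen(double x, double y, double width, double height)
{
    return x > 0.0 && y > 0.0 && x < width && y < height;
}
```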
I know this is an old thread, not sure what was available back then... But you should rather do:
// -- Your previous code and CLLocationCoordinate2D init --
MKMapRect visibleRect = [mapView visibleMapRect];
if (MKMapRectContainsPoint(visibleRect, MKMapPointForCoordinate(coordinate))) {
    // Do your stuff
}
No need to convert back to the screen space.
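The containment test this performs can also be sketched as plain bounding-box arithmetic on an MKCoordinateRegion-style centre/span pair. A hedged C version (hypothetical helper; it ignores regions that cross the antimeridian, which MapKit handles for you):

```c
#include <stdbool.h>

/* True if (lat, lon) lies inside a region described by its centre
 * coordinate and total latitude/longitude spans in degrees,
 * in the style of MKCoordinateRegion. */
static bool region_contains(double centerLat, double centerLon,
                            double latDelta, double lonDelta,
                            double lat, double lon)
{
    return lat >= centerLat - latDelta / 2.0 &&
           lat <= centerLat + latDelta / 2.0 &&
           lon >= centerLon - lonDelta / 2.0 &&
           lon <= centerLon + lonDelta / 2.0;
}
```

Working in map/coordinate space like this avoids the screen-space conversion entirely, which is the point of the answer above.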
Also, I am not sure why you are trying to do this; I think it is strange not to add annotations when they are off-screen... MapKit already optimizes this and only creates (and recycles) annotation views that are visible.
After a little bit of reading, I can't find anything that says this is a bad idea. I've done a bit of testing in my app and I always get correct results. The app loads much more quickly when I only add coordinates that will show up in the currently visible map region instead of all 300+ coordinates at once.
What I was looking for was a method like [mapView isCoordinateInVisibleRegion:myCoordinate], but so far this approach is quick and seems accurate.
I've also changed the title to read "in the visible map region" instead of the previous one, because I think the incorrect title may have confused my meaning.