Cocos2d puzzle game: how to solve piece position? (iPhone)

I need help with my first app with cocos2d.
I'm trying to develop a puzzle game.
The game has image pieces with transparent backgrounds, and I need to match these pieces to the correct place/position.
My problem is that I don't know how to find the exact position of each piece's slot on the image.
Can anyone help me, please?
I'm trying to make a game like this:
Animal Puzzle
with a puzzle image and pieces like this:
Puzzle Image & pieces

If I'm understanding correctly, you want to detect when the user has dragged a puzzle piece into the correct slot.
If that's the case, and correct me if I'm misunderstanding, you'll need at least two pieces of information to make it happen: the position of the puzzle piece's correct location (i.e. the position the piece should be in when the puzzle is put together), and the same puzzle piece's current location as it's being moved by the user. Once you have this information, perhaps as two CGPoints (correctPosition and currentPosition), you can get the distance between them with ccpDistance():
CGPoint currentPosition = selectedPuzzlePiece.position;
float distanceFromCorrectPos = ccpDistance(correctPosition, currentPosition);
In the above example, correctPosition could be a CGPoint stored before the pieces are mixed up, and selectedPuzzlePiece could be the CCSprite the user is currently touching. You could then use the distanceFromCorrectPos value to determine when the user has moved the piece close enough to snap it into place:
float snapIntoPlaceThreshold = 10.0f;
if (distanceFromCorrectPos <= snapIntoPlaceThreshold) {
    selectedPuzzlePiece.position = correctPosition;
}
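Putting those pieces together, here is a rough sketch of how the snap check might sit in a touch handler. It assumes cocos2d's targeted touch delegate, plus selectedPuzzlePiece and correctPosition ivars that you'd maintain yourself; treat it as a sketch, not a drop-in implementation:
-(void)ccTouchEnded:(UITouch *)touch withEvent:(UIEvent *)event
{
    if (selectedPuzzlePiece == nil) return;

    // Distance between the dragged piece and its solved position.
    float distanceFromCorrectPos = ccpDistance(correctPosition,
                                               selectedPuzzlePiece.position);

    float snapIntoPlaceThreshold = 10.0f;
    if (distanceFromCorrectPos <= snapIntoPlaceThreshold) {
        // Close enough: snap the piece into its slot. You could also
        // flag the piece as placed here so it can't be dragged again.
        selectedPuzzlePiece.position = correctPosition;
    }
    selectedPuzzlePiece = nil;
}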

Related

Add a Snapchat-like effect to live video based on tracking the user's face in real time in Swift

I want to add a Snapchat-like effect to live video based on tracking the user's face in real time. I would like to place streams of particles coming from the eyebrows, eyes, or lips. I already have a flexible effects library that can place the desired streams at any chosen points on the screen and update them in real time.
Apple provides a Swift demo project that I downloaded at this link :
https://developer.apple.com/documentation/vision/tracking_the_user_s_face_in_real_time
If you download and run that project without any changes, it will show you an overlay containing face landmarks, such as the left and right eyebrows, eyes, nose, and lips, that tracks a person's face in real time.
There wasn't much documentation on the coordinate system, layer drawing, etc. to enable me to extract CGPoint values that would correspond to face landmarks, such as points on the left eyebrow.
I made some progress in analyzing the drawing code used in the Apple demo but have had only limited success in getting the desired coordinates.
The left eyebrow appears to consist of an array of 6 points on a path connected by lines. I would just like to get a CGPoint that indicates the current location for one of the points on the left eyebrow.
Apple provides a routine called addPoints, which is called for both open and closed landmarks, once for each face landmark. Since the eyebrow is not a closed path, it corresponds to the type openLandmarkRegions. The mouth and eyes correspond to a slightly different type, closedLandmarkRegions, since they are closed paths where the start point and end point are the same.
fileprivate func addPoints(in landmarkRegion: VNFaceLandmarkRegion2D, to path: CGMutablePath, applying affineTransform: CGAffineTransform, closingWhenComplete closePath: Bool)
It really doesn’t matter if the path is open or closed. All I care about is getting a valid CGPoint on any of the landmarks. Eventually I will have some effects for the eyes and mouth as well, as soon as I figure out how to get a valid CGPoint for just one of the face landmarks.
This is what I tried. I declared some global variables and I added some logic inside Apples drawing code to try to help pick out CGPoints on the left eyebrow.
var sampleLeftEyebrowPoint = false
var mostRecentLeftEyebrowPoint = CGPoint()
Since addPoints is called in for loops over all the landmarks, I had to try to pick out the loop that corresponded to the left eyebrow.
In addPoints Apple has this line of code where they use the points on any given landmark:
let points: [CGPoint] = landmarkRegion.normalizedPoints
I added this code snippet just after that line of code:
if sampleLeftEyebrowPoint
{
    mostRecentLeftEyebrowPoint = points[1]
    mostRecentLeftEyebrowPoint = mostRecentLeftEyebrowPoint.applying(affineTransform)
    sampleLeftEyebrowPoint = false
}
Note that points[1] is the 2nd point on the eyebrow, which is one of the middle points.
Note that I apply the same affine transform to the single point that Apple applies in their logic.
I set sampleLeftEyebrowPoint to true in this Apple routine, with some logic that determines if the left eyebrow is currently being looped over:
fileprivate func addIndicators(to faceRectanglePath: CGMutablePath, faceLandmarksPath: CGMutablePath, for faceObservation: VNFaceObservation)
In that routine Apple has a for loop over the open landmarks, as shown below. I added some logic to set sampleLeftEyebrowPoint so that the logic in addPoints will recognize that the left eyebrow is currently being processed and can sample a point from it.
for openLandmarkRegion in openLandmarkRegions where openLandmarkRegion != nil {
    if openLandmarkRegion == landmarks.leftEyebrow
    {
        sampleLeftEyebrowPoint = true
    }
The mostRecentLeftEyebrowPoint that I obtain seems to correlate somewhat with my desired CGPoint, but not fully. The X coordinate seems to track but needs some scaling, while the Y coordinate seems inverted, with maybe something else going on.
Can anyone provide a routine that will get me the desired CGPoint corresponding to mostRecentLeftEyebrowPoint?
Once I have that, I have already figured out how to hide the face landmarks so that only my effect will be visible and will track the left eyebrow in real time. To hide the face detection lines that are shown, just comment out Apple's call to:
// self.updateLayerGeometry()
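Not a definitive answer, but here is a sketch of one routine that should produce the point being asked for. VNFaceLandmarkRegion2D has a pointsInImageOfSize: method that resolves the normalized, bounding-box-relative landmark points into image coordinates; the remaining quirk is the Y inversion observed above, because Vision uses a lower-left origin while UIKit uses an upper-left one. (Shown in Objective-C to match the rest of this page; the same Vision calls exist in Swift. The method name and imageSize parameter are illustrative, not from Apple's demo.)
- (CGPoint)leftEyebrowPointAtIndex:(NSUInteger)index
                       observation:(VNFaceObservation *)face
                         imageSize:(CGSize)imageSize
{
    VNFaceLandmarkRegion2D *eyebrow = face.landmarks.leftEyebrow;
    if (eyebrow == nil || index >= eyebrow.pointCount) {
        return CGPointZero;
    }

    // Vision maps the normalized landmark points into image pixels,
    // including the face bounding-box offset.
    const CGPoint *points = [eyebrow pointsInImageOfSize:imageSize];
    CGPoint p = points[index];

    // Vision's origin is the lower-left corner; flip Y for UIKit.
    return CGPointMake(p.x, imageSize.height - p.y);
}
From there you would still apply whatever scaling the demo uses to go from image pixels to the preview layer, which is likely the source of the X scaling noted above.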

How to avoid FPS drop when drawing lines in SpriteKit?

My current project contains a gravity simulator where sprites move in accordance with the forces they experience in the game scene.
One of my features involve allowing moving sprites to draw a line behind them so you can see what paths they take.
Shown here:
However, as the sprite continues its movement around the screen, the FPS begins to dive. This can be seen in the second image, where some time has passed since the sprite first started moving.
When researching, I found other people had posted with similar problems:
Multiple skshapenode in one draw?
However, the poster of the answer in that question noted that it was meant for a static image, which isn't what I want, because my line changes in real time depending on what influences the sprite's path. That was confirmed when I tried implementing a function to append a new line to the old one, which didn't work. That code is here.
I'm asking if anyone can help me find a way to properly stop the constant FPS drop that comes from all the draw operations. My current drawing code consists of two functions:
-(void)updateDrawPath:(CGPoint)a B:(CGPoint)b
{
    CGPathAddLineToPoint(_lineToDraw, NULL, b.x, b.y);
    _lineNode.path = _lineToDraw;
}

-(void)traceObject:(SKPlanetNode *)p
{
    _lineToDraw = CGPathCreateMutable();
    CGPathMoveToPoint(_lineToDraw, NULL, p.position.x, p.position.y);
    _lineNode = [SKShapeNode node];
    _lineNode.path = _lineToDraw;
    _lineNode.strokeColor = [SKColor whiteColor];
    _lineNode.antialiased = YES;
    _lineNode.lineWidth = 3;
    [self addChild:_lineNode];
}
updateDrawPath: draws a line to the latest position of the sprite.
traceObject: takes an SKPlanetNode (a subclass of SKSpriteNode) and sets it up to have a line drawn behind it.
If anyone can suggest a way to do this and also reduce the terrible overhead I keep accumulating, it would be fantastic!
A couple of suggestions:
Consider that SKShapeNode is more or less a debug-drawing tool. Because it doesn't draw in batches, it's really not suitable to build a game around or to use extensively (whether that's many shapes or a few complex ones).
You could draw the lines using a custom shader instead, which would likely be a faster and more elegant solution, though you may have to learn how to write shader programs first.
Be sure to measure performance only on a device, never the simulator.
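If a shader feels like overkill, another common workaround is to periodically bake the accumulated stroke into a texture, so the live SKShapeNode only ever holds a short path. Below is a rough sketch, not a tested implementation: it reuses the _lineToDraw and _lineNode ivars from the question, adds an assumed int _livePointCount ivar, and picks an arbitrary 64-point budget:
-(void)updateDrawPath:(CGPoint)a B:(CGPoint)b
{
    CGPathAddLineToPoint(_lineToDraw, NULL, b.x, b.y);
    _lineNode.path = _lineToDraw;

    if (++_livePointCount < 64) {
        return; // keep appending to the live shape node
    }

    // Bake the long stroke into a static sprite: one batched texture
    // replaces dozens of path segments that SKShapeNode would otherwise
    // re-render every frame.
    SKTexture *baked = [self.view textureFromNode:_lineNode];
    SKSpriteNode *bakedStroke = [SKSpriteNode spriteNodeWithTexture:baked];
    bakedStroke.position = CGPointMake(CGRectGetMidX(_lineNode.frame),
                                       CGRectGetMidY(_lineNode.frame));
    [self addChild:bakedStroke];
    [_lineNode removeFromParent];

    // Start a fresh, short path from the current point.
    CGPathRelease(_lineToDraw);
    _lineToDraw = CGPathCreateMutable();
    CGPathMoveToPoint(_lineToDraw, NULL, b.x, b.y);
    _lineNode = [SKShapeNode node];
    _lineNode.path = _lineToDraw;
    _lineNode.strokeColor = [SKColor whiteColor];
    _lineNode.antialiased = YES;
    _lineNode.lineWidth = 3;
    [self addChild:_lineNode];
    _livePointCount = 0;
}
Each bake trades a one-off texture capture for a much cheaper per-frame cost, so the FPS should no longer degrade as the path grows.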

Multiple CCTMXTiledMaps for iPhone Game

So I want to divide my game into chunks by using several different CCTMXTiledMaps.
I am able to load the maps into my main 'HelloWorldLayer'. I am also able to detect whether the player sprite collides with a tile with the property of 'collectable'.
My problem occurs when I add several CCTMXTiledMap nodes to the game, as it doesn't do the collectible tile detection on all of them, just the first one.
Here is my working code that does the check, but only for the first added CCTMXTiledMap:
CGPoint point = [self getTileCoordForPosition:position :map];
CCTMXLayer *metaLayer = [map layerNamed:@"Meta"];
CCTMXLayer *foregroundLayer = [map layerNamed:@"Foreground"];
CCSprite *metaTile = [metaLayer tileAt:point];
CCSprite *foregroundTile = [foregroundLayer tileAt:point];
if (foregroundTile)
{
    NSLog(@"HIT!");
    // Remove the meta tile and the foreground tile
    [metaLayer removeTileAt:point];
    [foregroundLayer removeTileAt:point];
}
How can I make this code do the check for every CCTMXTiledMap node that has been added?
The problem was that I was calculating the tile map positions wrong in my tile-coordinates-to-map-position function.
I was multiplying by a content-scale macro (CC_CONTENT_SCALE_FACTOR(), or something like that, going off the top of my head), and it was miscalculating the pixel positioning.
Just thought I'd write in an answer since I found the solution. Hope it helps somebody!
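For anyone hitting the same thing, here is a sketch of what a per-map position-to-tile-coordinate helper could look like; it works for every added map because the point is converted into that map node's own space first. This is my reconstruction, not the poster's actual code: it assumes position is a world-space (touch) location, and depending on your cocos2d version and Retina setup you may still need to divide tileSize by CC_CONTENT_SCALE_FACTOR():
- (CGPoint)getTileCoordForPosition:(CGPoint)position :(CCTMXTiledMap *)map
{
    // Express the point in the map's local coordinate system, so each
    // map chunk gets its own origin.
    CGPoint local = [map convertToNodeSpace:position];

    int x = local.x / map.tileSize.width;
    // TMX tile rows are numbered from the top, so flip the Y axis.
    int y = (map.mapSize.height * map.tileSize.height - local.y)
            / map.tileSize.height;
    return ccp(x, y);
}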

Programming controls for sprite

I am trying to make a Snake-style game on the iPhone. For that I plan to have a simple dot as the sprite, which draws a line. The dot is controlled by two buttons on the left and the right of the screen, enabling it to turn left or right. Now I really do not have any idea how to make the sprite move forward automatically, or how to program the turning. It should not turn abruptly left or right as in Snake; it should follow a smooth curve. I hope you're getting my point, and I'd appreciate all kinds of thoughts. Thanks a lot!
I'm trying to make it somewhat like this:
http://www.bnet.im/images/screen_curve.png
There are a lot of ways of doing this, but one simple way would be to store the angle of the snake's trajectory (I've called it theta) and move the snake a fixed amount in every call to the scheduled update: method.
Assuming that the snake class inherits from CCNode (the speed variable, in points per second, is assumed):
-(void)update:(ccTime)dt {
    self.position = ccp(self.position.x + cosf(theta) * speed * dt,
                        self.position.y + sinf(theta) * speed * dt);
}
You could then update theta from your event handling logic by increasing it or decreasing it when the user taps to turn left or right.
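To make the curve smooth, one way to wire that up (a sketch; kTurnRate, speed, and the flag names are my assumptions) is to keep two flags toggled by the left/right buttons and bend theta a little each frame while a button is held, folding it into the same update: method:
// Assumed ivars: float theta; BOOL turningLeft, turningRight;
static const float kTurnRate = 2.0f; // radians per second

-(void)update:(ccTime)dt {
    // Bending the heading gradually gives a smooth curve instead of
    // Snake's abrupt right-angle turns.
    if (turningLeft)  theta += kTurnRate * dt;
    if (turningRight) theta -= kTurnRate * dt;

    float speed = 60.0f; // points per second
    self.position = ccp(self.position.x + cosf(theta) * speed * dt,
                        self.position.y + sinf(theta) * speed * dt);
}
The button handlers then only set turningLeft/turningRight to YES on touch-down and back to NO on touch-up.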

Collision Detection between two rectangles

A fairly simple question that I'm sure you will laugh at me for.
I have two rectangles, playerRect and wall.
I have an if statement with the condition being:
if (CGRectIntersectsRect(playerRect, wall)) {
    // handle collision here
}
The problem I'm having is working out which side actually hit the wall rectangle. I need to know because I then stop the player from moving, depending on which side hit.
Thanks for any help,
Disco
I would add some direction property to my 'Player' object. This way when you detect a collision, you just check to see which way the player was moving prior to the collision and react accordingly.
Create a CGRect for each side of your object with a width of 1 (or a height of 1, depending on the side) and look for intersections with those sides instead of the entire object. If your object is moving faster than 1 pixel per collision check, check the sides in addition to checking the entire object. A sketch follows below.
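Here is a sketch of that second suggestion, with illustrative names; note that whether minY is the top or the bottom depends on your coordinate system:
// 1-pt-thick rects hugging each edge of the wall. Which edge rect the
// player intersects tells you which side was hit.
CGRect leftEdge  = CGRectMake(CGRectGetMinX(wall), CGRectGetMinY(wall),
                              1, wall.size.height);
CGRect rightEdge = CGRectMake(CGRectGetMaxX(wall) - 1, CGRectGetMinY(wall),
                              1, wall.size.height);
CGRect minYEdge  = CGRectMake(CGRectGetMinX(wall), CGRectGetMinY(wall),
                              wall.size.width, 1);
CGRect maxYEdge  = CGRectMake(CGRectGetMinX(wall), CGRectGetMaxY(wall) - 1,
                              wall.size.width, 1);

if (CGRectIntersectsRect(playerRect, leftEdge)) {
    // hit the wall's left side: block movement to the right
} else if (CGRectIntersectsRect(playerRect, rightEdge)) {
    // hit the wall's right side: block movement to the left
} else if (CGRectIntersectsRect(playerRect, minYEdge)) {
    // hit the minY edge (top or bottom, depending on coordinate system)
} else if (CGRectIntersectsRect(playerRect, maxYEdge)) {
    // hit the maxY edge
}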