I am trying to show bounding rectangles around my sprites to refine collision detection and handling.
I have used the following code to display a circle or polygon:
glColor4ub(255, 255, 0, 255);
glLineWidth(2);
CGPoint vertices2[] = { ccp(30,130), ccp(30,230), ccp(50,200) };
ccDrawPoly( vertices2, 3, YES);
ccDrawCircle(ccp(0,0), 50, 360, 5, NO);
as mentioned in the drawPrimitivesTest.m file in cocos2d.
I also removed all background sprites, but it is not showing me any circle or polygon.
Has anybody faced the same problem? How can I solve it?
Thanks in advance.
You should put that code inside the draw method, not outside it.
TIP:
Every CocosNode has a "draw" method.
In the "draw" method you put all the code that actually draws your node.
And Test1 is a subclass of TestDemo, which is a subclass of Layer, which is a subclass of CocosNode.
As you can see, the drawing primitives aren't CocosNode objects. They are just helper
functions that let you draw basic things like points, lines, polygons, and circles.
Answering
TIP:
Don't draw your stuff outside the "draw" method. Otherwise it won't get transformed.
TIP: If you want to rotate/translate/scale a circle or any other "primitive", you can do it by rotating
the node, e.g.:
self.rotation = 90;
Related
This is a bizarre one for me, and after spending two days trying to fix it and reading what I could find on Apple sites and Stack Overflow, I still have no solution. Hopefully someone can help me.
So I am rotating a CAShapeLayer which is in the coordinate system of the view. After rotation the frame coordinates are updated, but those for the path are not.
On screen, the path and frame both display as rotated! So if I use path.contains to check whether a point belongs to the CAShapeLayer after rotation, I get the wrong answer. Using the rotated frame does not work either, because frames of adjacent paths can overlap and also give wrong answers.
Here is the code that rotates the relevant CAShapeLayer:
let shapeCopy = CAShapeLayer()
let inputShape = tempShapeList[index]
shapeCopy.backgroundColor = UIColor.red.withAlphaComponent(0.75).cgColor
shapeCopy.frame = inputShape.frame
shapeCopy.bounds = inputShape.bounds
shapeCopy.path = inputShape.path
shapeCopy.position = inputShape.position
shapeCopy.anchorPoint = inputShape.anchorPoint
print("bounding rect pre rotation: \(shapeCopy.frame)")
print("path pre rotation: \((shapeCopy.path)!)")
let transform = CATransform3DMakeRotation(CGFloat(75*Double.pi/180.0), 0, 0, 1)
shapeCopy.transform = transform
print("bounding rect post rotation:\(shapeCopy.frame)")
print("path post rotation: \((shapeCopy.path)!)")
if ((shapeCopy.path)!.contains(newPoint)) {
containingView.layer.addSublayer(shapeCopy)
answer = index
print("Prize is:\(String(describing: textLabelList[index].text))")
break
}
The message in the debugger:
bounding rect pre rotation: (139.075809065823, 236.846930318145, 174.164592138914, 163.153069681855)
path pre rotation: Path 0x600000236a60:
moveto (207, 400)
lineto (138.901, 266.349)
curveto (196.803, 236.847) (267.115, 247.983) (313.066, 293.934)
lineto (207, 400)
closepath
bounding rect post rotation:(189.419925763055, 292.163148046286, 202.670877072107, 210.457199272682)
path post rotation: Path 0x600000236a60:
moveto (207, 400)
lineto (138.901, 266.349)
curveto (196.803, 236.847) (267.115, 247.983) (313.066, 293.934)
lineto (207, 400)
closepath
Screenshot of the simulator:
In the screen shot you will see the rotated path and the frame of the path in the dark colored pie and slightly translucent frame.
However, the coordinates of the path haven't changed. So the program believes that the red dot belongs to the shaded slice that got rotated away! If the path updated correctly, the red dot would belong to the yellow slice labelled "e6 ¢".
Also note that the background fortune wheel is a view etc in its own coordinate system. The rotated dark pie is in the coordinate system of the top level view as is the red dot.
Not sure if the post is fully clear - I apologize in advance for the verbose post. If I have missed any detail that can help, please let me know.
Thanks in advance....
Applying a transform to a layer doesn't change the way the layer's content is stored. If the layer contains an image, the image is stored unrotated, and if the layer contains a path, the path is stored unrotated.
Instead, when the window server builds up (“composites”) the screen image, it applies the transform as it is drawing the layer's content into the frame buffer.
The frame property is different. It is actually computed from several other properties: position, bounds.size, anchorPoint, and transform.
You want to test whether a point is inside the on-screen appearance of the layer's path—that is, the path with the transform applied.
One way to do this is to convert the point into the layer's coordinate system. To convert it, you also need to know the original coordinate system of the point. Then you can use -[CALayer convertPoint:fromLayer] or -[CALayer convertPoint:toLayer:]. For example, suppose you have a tap gesture recognizer and you want to know if the tap is inside the path:
@IBAction func tapperDidFire(_ tapper: UITapGestureRecognizer) {
let newPoint = tapper.location(in: view)
let newPointInShapeLayer = shapeLayer.convert(newPoint, from: view.layer)
if shapeLayer.path?.contains(newPointInShapeLayer) ?? false {
print("Hit!")
}
}
I am trying to put together a game using SpriteKit, in Swift.
I have a moving sprite which is a rectangle of width (sprite.frame.size.width) and height 2*(sprite.frame.size.width)
I only want to check collision of the bottom half which is a square of width (sprite.frame.size.width) and height (sprite.frame.size.width)
I set sprite.anchorPoint = CGPointMake(0.5, 0.25) and use method CGRectIntersectsRect to check for collision with another sprite. But this does not work. The collision area remains as the first rectangle.
I do not want to use methods that call for physicsBody because there is no other physics in the game.
What am I missing here?
I actually found the UIEdgeInsetsInsetRect function, which can be used to shrink a frame. If spriteB is the sprite in question, then instead of testing like this:
if CGRectIntersectsRect(spriteA.frame, spriteB.frame) {
// something happens
}
just inset the edges to the bottom half of spriteB, like this:
if CGRectIntersectsRect(spriteA.frame, UIEdgeInsetsInsetRect(spriteB.frame, UIEdgeInsetsMake(spriteB.frame.height/2 , 0 , 0 , 0))) {
// something happens
}
I have got the code to repeat X- and Y- which is:
bg = [CCSprite spriteWithFile:@"ipadbgpattern.png" rect:CGRectMake(0, 0, 3000, 3000)];
bg.position = ccp(500,500);
ccTexParams params = {GL_LINEAR,GL_LINEAR,GL_REPEAT,GL_REPEAT};
[bg.texture setTexParameters:&params];
[self addChild:bg];
However, I do not know how to change the params in order for the background to repeat along the horizontal axis.
There's no parameter for that. Just make sure the CGRect spans the region where you want the texture to repeat, and that the texture itself has power-of-two dimensions (i.e. 1024x1024).
I'm guessing that you're using a 1024x768 texture, in which case you'll see a gap between texture repeats.
This cannot be achieved at the GL level, since GL_REPEAT expects textures with power-of-two dimensions.
Take a look at my TiledSprite class for a rather unoptimized, but functional means of arbitrarily repeating an arbitrarily-sized texture or subtexture:
https://gist.github.com/Nolithius/6694990
Here's a brief look at its results and usage:
http://www.nolithius.com/game-development/cocos2d-iphone-repeating-sprite
I basically have a pie chart where I have lines coming out of each segment. In the case where the line comes out of the circle to the left, when I draw my text it is reversed: "100%" looks like "%001". (Note: the 1 and % sign are actually drawn in reverse too, as if in a mirror, so the little overhang on top of the 1 points to the right rather than the left.)
I tried reading through Apple's docs for the AffineTransform, but it doesn't make complete sense to me. I tried making this transformation matrix to start:
CGAffineTransform transform1 = CGAffineTransformMake(-1, 0, 0, 1, 0, 0);
This does flip the text around its x-axis so the text now looks correct on the left side of the circle. However, the text is now on the line, rather than at the end of the line like it originally was. So I thought I could translate it by moving the text in the x-axis direction by changing the tx value in the matrix. So instead of using the above matrix, I used this:
CGAffineTransform transform1 = CGAffineTransformMake(-1, 0, 0, 1, -strlen(t1AsChar), 0);
However, the text just stays where it's at. What am I doing wrong? Thanks.
strlen() doesn't give you the size of the rendered text box, it just gives you the length of the string itself (how many characters that string has). If you're using a UITextField you can use textField.frame.size.width instead.
How can I make such an interface with cocos2d for iphone? Cortex interface
I already made a subclass of CCSprite and override the draw
method like this:
-(void)draw {
ccDrawCircle(CGPointMake(480/2, 320/2), 70, 0, 50000, NO);
ccDrawCircle(CGPointMake(480/2, 320/2), 25, 0, 50000, NO);
ccDrawLine(CGPointMake(480/2, 320/2+25), CGPointMake(480/2, 320/2+70));
ccDrawLine(CGPointMake(480/2+25, 320/2), CGPointMake(480/2+70, 320/2));
ccDrawLine(CGPointMake(480/2, 320/2-25), CGPointMake(480/2, 320/2-70));
ccDrawLine(CGPointMake(480/2-25, 320/2), CGPointMake(480/2-70, 320/2));
}
The problem is that I don't have any control over the circle (I can't set its position)... and I don't know how to place text/images into these "cells". Another problem is touch detection... maybe just CGRects? But what if I have more than 4 cells and one cell is "rotated"?
Any ideas?
I think you have two options here, but I don't recommend subclassing CCSprite; in fact I'd very rarely recommend doing so, as there's almost no need to.
In my opinion, you could do either of these to get your image.
1. Use OpenGL to draw your image.
2. Use CCSprite to draw your image. (Cleaner)
Once you have drawn it, its simply a matter of creating it when you press down on the screen.
Once you press down on the screen (or any prescribed object) I would then employ a simple trigonometric solution.
This is the algorithm I would use:
Press down on screen, Get the position of touch. (sourcepos) and create your cortex img
On Movement of finger on screen, get the position (currentpos) the angle and magnitude in relation to the original (sourcepos) touch.
Now, using simple angles, we can set different bounds on your CCSprite using if statements. It's also a good idea to use a #define kMinMagnitude X statement to ensure the user moves their finger adequately.
I suppose you can execute the //Load Twitter or //Load Facebook either on the movement or on the cancellation of a touch. That's entirely up to you.
(PSEUDOCODE):
dx = currentpos.x - sourcepos.x
dy = currentpos.y - sourcepos.y
mag = sqrt(dx*dx + dy*dy);
ang = CC_RADIANS_TO_DEGREES(atan2f(dy, dx));
if (ang > 0 && ang < 80 && mag > kMinMagnitude) // Load Twitter
if (ang > 80 && ang < 120 && mag > kMinMagnitude) // Load Facebook
I don't think making a subclass of CCSprite is the right choice here. You will probably want a NSObject that creates the CCSprites for you.
Also CCSprite.position = CGPointMake( X, Y ) should allow you to set the position of the sprite. Don't forget to add it to a layer just like any other CCNode object.