Is there a grid view in cocos2d for a battleship game? - iphone

I've been using cocos2d for a while and I want to make a battleship game.
The thing is, I could probably build a battleship game with UIKit (UIButtons and UIImageViews) more easily and quickly than with cocos2d, but I want to take full advantage of cocos2d because I think it's better suited for games. The problem is that I need a grid for the battleship board, or some other way to separate touches into quadrants. Is there something like a grid view in cocos2d? If not, I suppose I'd have to create my own quadrants programmatically?
What do you think is the best method?
Thanks a lot
Carlos Vargas

There's no base class in cocos2d to do that, but you could easily make a class specifically designed to handle touches and map them to the correct quadrants.
So if you have a 480x320 screen and a quadrant size of 32, that configuration gives you 480/32 = 15 columns and 320/32 = 10 rows, i.e. a grid of 15 * 10 = 150 quadrants.
For example, to get the quadrant for a touch:
// Define the quadrant size for your grid
CGPoint quadrantSize = CGPointMake(32.0f, 32.0f);

// Obtain the 0-based quadrant X, Y indices for a user touch
// (assume touchPoint is a CGPoint; floorf keeps the indices inside
// a 0-indexed array, whereas ceilf would run past its far edges)
int quadrant_x = (int)floorf(touchPoint.x / quadrantSize.x);
int quadrant_y = (int)floorf(touchPoint.y / quadrantSize.y);

// Access a quadrant
quadrantArray[quadrant_x][quadrant_y].touched = YES;
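If you want the grid to own its touch handling, a minimal sketch of such a class could look like this (cocos2d 1.x-style API; GridLayer, kQuadrantSize and the NSLog are made-up illustrations, not anything cocos2d provides):

#import "cocos2d.h"

#define kQuadrantSize 32.0f

@interface GridLayer : CCLayer
@end

@implementation GridLayer

- (id)init {
    if ((self = [super init])) {
        self.isTouchEnabled = YES; // receive standard touch events
    }
    return self;
}

- (void)ccTouchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    CGPoint location = [self convertTouchToNodeSpace:touch];

    // Map the touch to its 0-based quadrant indices
    int qx = (int)floorf(location.x / kQuadrantSize);
    int qy = (int)floorf(location.y / kQuadrantSize);
    NSLog(@"Touched quadrant (%d, %d)", qx, qy);
}

@end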

Related

Unity: Detecting taps on particles in a particle system

I am making a scientific visualization app of the Galaxy. Among other things, it displays where certain deep sky objects (stars, star clusters, nebulae, etc) are located in and around the Galaxy.
There are 6 or 7 classes of object types (stars, star clusters, nebulae, globular clusters, etc.). Every object within a class looks the same (i.e. uses the same image).
I've tried creating a GameObject for each deep sky object, but the system gets bogged down with that many objects (~10,000). So instead I create one particle system per class of deep sky object, setting the specific image to display for each class.
Each particle (i.e. deep sky object) is created at the appropriate location and then I do a SetParticles() to add them to that class's particle system. This works really well and I can have 100,000 objects (particles) with decent performance.
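Roughly, the per-class setup looks like this (a simplified sketch; objects and classParticleSystem are placeholder names):

// Build one particle per deep sky object and hand them all to this class's system
var particles = new ParticleSystem.Particle[objects.Count];
for (int i = 0; i < objects.Count; i++)
{
    particles[i].position = objects[i].position;     // worldspace location
    particles[i].startSize = 4f;                     // one size/image per class
    particles[i].startColor = Color.white;
    particles[i].startLifetime = float.MaxValue;     // particles never expire
    particles[i].remainingLifetime = float.MaxValue;
}
classParticleSystem.SetParticles(particles, particles.Length);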
However, I need to allow the user to click/tap on an object to select it. I have not found any examples of how to do hit testing on individual particles in the particle system. Is this possible in Unity?
Thanks,
Bill
You'll have to do the raycasting yourself.
Just implement a custom raycasting algorithm using a simple line-rectangle intersection: assume a small rectangle at each particle's position. Since you don't rely on Unity's built-in methods, you can do this asynchronously. As a performance optimization, you can also cluster the possible targets when the simulation starts, which lets you eliminate a whole cluster whenever its bounding box is not hit by your ray.
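A minimal sketch of that idea, assuming the live particles were fetched with ParticleSystem.GetParticles into particles (with aliveCount entries) and halfExtent is some small per-particle box half-size (all three names are assumptions):

// Cast a ray from the camera through the tap and test a small box per particle
Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
int hitIndex = -1;
float closest = float.MaxValue;
for (int i = 0; i < aliveCount; i++)
{
    // Axis-aligned box centered on the particle's position
    var bounds = new Bounds(particles[i].position, Vector3.one * halfExtent * 2f);
    float dist;
    if (bounds.IntersectRay(ray, out dist) && dist < closest)
    {
        closest = dist; // keep the nearest hit along the ray
        hitIndex = i;
    }
}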
Note: IMHO you should choose a completely different approach for rendering your data.
Take a look at Unity's Entity Component System. It allows for large amounts of data, but comes with some disadvantages (e.g. when using Unity's physics engine), which I suppose will not be relevant in your case.
I ended up rolling my own solution.
In Update(), upon detecting a click, I iterate through all the particles. For each particle, I calculate its size on the screen based on the particle's size and its distance from the camera.
Then I take the particle's position and translate that into screen coordinates. I use the screen size to generate a bounding rectangle and then test to see if the mouse point is inside it.
As I iterate through the particles I keep track of which is the closest hit. At the end, that is my answer.
if (Input.GetMouseButtonDown(0))
{
    Particle? closestHitParticle = null;
    var closestHitDist = float.MaxValue;

    foreach (var particle in gcParticles)
    {
        var pos = particle.position;
        var size = particle.GetCurrentSize(gcParticleSystem);

        // The on-screen size depends on the particle's distance from the camera
        var distance = Vector3.Distance(pos, camera.transform.position);
        var screenSize = Utility.angularSizeOnScreen(size, distance, camera);

        // Build a screen-space bounding rect centered on the particle
        var screenPos = camera.WorldToScreenPoint(pos);
        var screenRect = new Rect(screenPos.x - screenSize / 2, screenPos.y - screenSize / 2, screenSize, screenSize);

        if (screenRect.Contains(Input.mousePosition) && distance < closestHitDist)
        {
            closestHitParticle = particle;
            closestHitDist = distance;
        }
    }

    if (closestHitDist < float.MaxValue)
    {
        Debug.Log($"Hit particle at {closestHitParticle?.position}");
    }
}
Here is the angularSizeOnScreen method:
public static float angularSizeOnScreen(float diam, float dist, Camera cam)
{
    // Angular diameter in degrees (small-angle approximation)
    var aSize = (diam / dist) * Mathf.Rad2Deg;

    // Convert to pixels: the camera's vertical FOV spans Screen.height pixels
    var pSize = ((aSize * Screen.height) / cam.fieldOfView);
    return pSize;
}

Snapping UI sprite in Unity?

[image]
Is there any way to snap UI sprite vertices? Holding "V" does not work in this case.
Checking Unity's documentation is a good start when you run into an issue. I checked the documentation, which can be found here: Modifying Sprite Vertices via Script.
Reading the documentation, you can grab a sprite's vertices as a Vector2 array.
//Fetch the Sprite and vertices from the SpriteRenderer
Sprite sprite = m_SpriteRenderer.sprite;
Vector2[] spriteVertices = sprite.vertices;
You can draw the sprite's triangles using those vertices and view them in the Scene view:
// Show the sprite triangles
void DrawDebug()
{
    Sprite sprite = m_SpriteRenderer.sprite;
    ushort[] triangles = sprite.triangles;
    Vector2[] vertices = sprite.vertices;
    int a, b, c;

    // Draw the triangles using the grabbed vertices
    for (int i = 0; i < triangles.Length; i = i + 3)
    {
        a = triangles[i];
        b = triangles[i + 1];
        c = triangles[i + 2];

        // To see these you must view the game in the Scene tab while in Play mode
        Debug.DrawLine(vertices[a], vertices[b], Color.red, 100.0f);
        Debug.DrawLine(vertices[b], vertices[c], Color.red, 100.0f);
        Debug.DrawLine(vertices[c], vertices[a], Color.red, 100.0f);
    }
}
Snapping the vertices together via scripting, however, does seem overly complicated depending on what these sprites are for, so it would be useful to know why you want to do this. If the sprites are static and unmoving, or only used for a short period, it may be much easier to align them manually in the Scene view.
Another method could be to use ProGrids, a Unity package that lets you turn on snapping in your scene and is very useful for aligning GameObjects; it also lets you change the snap increment.
Find it by going to Window -> Package Manager. Note that you may need to turn on preview packages to find it.

Detecting touch position on 3D objects in OpenGL

I have created a 3D object in OpenGL for one of my applications. The object is something like a human body and can be rotated on touch. How can I detect the position of a touch on this 3D object? That is, if the user touches the head, I have to detect that it is the head; if the touch is on the hand, that has to be identified too. It should work even when the object has been rotated to some other direction. I think I need the coordinates of the touch on the 3D object.
This is the method where I am getting the position of touch on the view.
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    CGPoint location = [touch locationInView:self];
    m_applicationEngine->OnFingerDown(ivec2(location.x, location.y));
}
Can anyone help? Thanks in advance!
Forget about ray tracing and other top-notch algorithms. We used a simple trick in one of our applications (Iyan 3D) on the App Store, though this technique needs one extra render pass every time you finish rotating the scene to a new angle. Render the different objects (head, hand, leg, etc.) in different colors (not their actual colors, but unique ones). Then read the color in the rendered image at the screen position of the touch; you can identify the object by its color.
With this method you can also change the resolution of the rendered image to balance accuracy against performance.
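A minimal sketch of the read-back step, assuming the scene has just been rendered with each body part in its own flat color and location is the touch point in view coordinates:

// Read the 1x1 pixel under the touch; flip y because OpenGL's origin is
// the bottom-left corner while UIKit's is the top-left
GLint viewport[4];
glGetIntegerv(GL_VIEWPORT, viewport);
GLubyte pixel[4];
glReadPixels((GLint)location.x,
             viewport[3] - (GLint)location.y,
             1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixel);

// Compare pixel[0..2] with the unique color assigned to each part
// (e.g. head = (255,0,0), hand = (0,255,0)) to identify the touch target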
To determine the 3D location of the touch on the object, I would suggest ray tracing.
Assuming the model is in worldspace coordinates, you'll also need to know the worldspace coordinates of the eye location and of the image plane. Using those two points you can calculate a ray, which you will use to intersect with the model (which I assume consists of triangles).
Then you can use a ray-triangle test to determine the 3D location of the touch, by finding the triangle whose intersection is closest to the image plane. If you also want to know which triangle was touched, save that information while you do the intersection tests.
This page gives an example of how to do ray triangle intersection tests: http://www.scratchapixel.com/lessons/3d-basic-lessons/lesson-9-ray-triangle-intersection/ray-triangle-intersection-geometric-solution/
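Building the pick ray itself is just a subtraction and a normalization (a sketch in the style of the code below; eye and imagePlanePoint are the two worldspace points mentioned above, and a normalize function is assumed to exist alongside dot and cross):

Vector3 o = eye;                              // ray origin
Vector3 d = normalize(imagePlanePoint - eye); // ray direction
// Intersect (o, d) with every triangle and keep the closest hit.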
Edit:
Updated with some sample code. It's slightly modified code from a C++ ray-tracing project I did a while ago, so you'll need to adapt it a bit to get it working on iOS. Also note that, in its current form, it only reports whether the ray intersects the triangle, rather than returning the actual intersection point.
// d is the direction the ray is heading in
// o is the origin of the ray
// verts is the 3 vertices of the triangle
// faceNorm is the normal of the triangle surface
bool
Triangle::intersect(Vector3 d, Vector3 o, Vector3* verts, Vector3 faceNorm)
{
    // Check for a line parallel to the plane
    float r_dot_n = dot(d, faceNorm);

    // If r_dot_n == 0, the line and plane are parallel, but we need to
    // do the range check due to floating point precision
    if (r_dot_n > -0.001f && r_dot_n < 0.001f)
        return false;

    // Then we calculate the distance of the ray origin to the triangle plane
    float t = dot(faceNorm, (verts[0] - o)) / r_dot_n;
    if (t < 0.0)
        return false;

    // We can now calculate the barycentric coords of the intersection
    Vector3 ba_ca = cross(verts[1] - verts[0], verts[2] - verts[0]);
    float denom = dot(-d, ba_ca);
    float dist_out = dot(o - verts[0], ba_ca) / denom; // (unused here; could be returned to the caller)
    float b = dot(-d, cross(o - verts[0], verts[2] - verts[0])) / denom;
    float c = dot(-d, cross(verts[1] - verts[0], o - verts[0])) / denom;

    // Check if the point is inside the triangle, or if b & c have NaN values
    if (b < 0 || c < 0 || b + c > 1 || b != b || c != c)
        return false;

    // Use the barycentric coordinates to calculate the intersection point
    Vector3 P = (1.f - b - c) * verts[0] + b * verts[1] + c * verts[2];
    return true;
}
The actual intersection point you would be interested in is P.
Ray tracing is an option, and it is used in many applications for doing just that (picking). The problem with ray tracing is that it is a lot of work to get a pretty simple, basic feature working. Ray tracing can also be slow, but if you have only one ray to trace (the location of your finger, say), it should be okay.
OpenGL's API also provides a technique to pick objects. I suggest you look at, for instance: http://www.lighthouse3d.com/opengl/picking/
Finally, a last option would be to project the vertices of the object into screen space and use simple 2D techniques to find out which faces of the object your finger overlaps.
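A minimal sketch of that projection for a single vertex (mvp is the combined modelview-projection matrix in OpenGL's column-major layout and viewport is {x, y, width, height}; both names are assumptions):

typedef struct { float x, y, z; } Vec3;

// Project a worldspace vertex to screen coordinates by hand
int projectToScreen(const float mvp[16], const int viewport[4],
                    Vec3 v, float *sx, float *sy)
{
    float clip[4];
    for (int i = 0; i < 4; i++)
        clip[i] = mvp[i] * v.x + mvp[4 + i] * v.y + mvp[8 + i] * v.z + mvp[12 + i];

    if (clip[3] == 0.0f)
        return 0;                    // degenerate vertex

    float ndcX = clip[0] / clip[3];  // perspective divide
    float ndcY = clip[1] / clip[3];
    *sx = viewport[0] + (ndcX + 1.0f) * 0.5f * viewport[2];
    *sy = viewport[1] + (ndcY + 1.0f) * 0.5f * viewport[3];
    return 1;
}

Once every vertex is in screen space, a simple 2D point-in-triangle test against the touch point finds the face under the finger.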

How to move a node in cocos2d and cocos3d

I have a node. In this particular case it's a CCLayer, but I'm looking for a general solution. My node is centered at point1 (let's say { 100, 100 }). I'd like it to move to point2 (say { 200, 200 }) over the course of 0.5 seconds.
Really simple stuff, right? But I'm just not finding the docs/tutorials I need to do it.
Hints?
Thanks!
Extra credit: same question with a CC3Node, if the answer is different. :)
You can move anything that inherits from CCNode using runAction: with [CCMoveTo actionWithDuration:0.5 position:ccp(x, y)]
http://www.cocos2d-iphone.org/wiki/doku.php/prog_guide:actions
// assuming you've already got a CCLayer called "myLayer"
[myLayer runAction:[CCMoveTo actionWithDuration:0.5 position:ccp(200,200)]];
EDIT: Changed to CCMoveTo rather than CCMoveBy after re-reading the question.
/*
 Moving an entire layer, including all of its children (sprites, labels, etc.).
 Insert this code into the 'init' method of the layer you'd like to move to the
 new point. x and y are the coordinates of the layer's new position (CCMoveTo
 uses absolute coordinates; use CCMoveBy for an offset from the current position).
 Example: x = 0; y = 100; in this case the layer will move vertically.
*/
x = ?; // X value
y = ?; // Y value
[self runAction:[CCMoveTo actionWithDuration:5.0f position:ccp(x, y)]];
Are you trying to move the layer, or does your layer contain sprites that you want to move? I am not sure it is even possible to move a layer (or stacks of layers) that contains children (CCNodes, CCSprites, etc.).
My advice would be to move the layer's child elements using CCAnimation/CCMoveBy/CCMoveTo, etc.

How to make an interface that looks like this Cortex interface? (circle, cocos2d, iphone)

How can I make such an interface with cocos2d for iPhone? Cortex interface
I already made a subclass of CCSprite and overrode the draw method like this:
-(void)draw {
    ccDrawCircle(CGPointMake(480/2, 320/2), 70, 0, 50000, NO);
    ccDrawCircle(CGPointMake(480/2, 320/2), 25, 0, 50000, NO);
    ccDrawLine(CGPointMake(480/2, 320/2+25), CGPointMake(480/2, 320/2+70));
    ccDrawLine(CGPointMake(480/2+25, 320/2), CGPointMake(480/2+70, 320/2));
    ccDrawLine(CGPointMake(480/2, 320/2-25), CGPointMake(480/2, 320/2-70));
    ccDrawLine(CGPointMake(480/2-25, 320/2), CGPointMake(480/2-70, 320/2));
}
The problem is that I don't have any control over the circle (I can't set its position)... and I don't know how to place text/images into these "cells". Another problem is touch detection... maybe just CGRects? But what if I have more than 4 cells, and one cell is "rotated"?
Any ideas?
I think you have two options here, but I don't recommend subclassing CCSprite; in fact, I would very rarely recommend doing so, as there's almost no need to.
In my opinion, you could do either of these to get your image:
1. Use OpenGL to draw your image.
2. Use CCSprite to draw your image. (Cleaner)
Once you have drawn it, it's simply a matter of creating it when you press down on the screen.
Once you press down on the screen (or on any prescribed object), I would then employ a simple trigonometric solution.
This is the algorithm I would use:
Press down on the screen, get the position of the touch (sourcepos), and create your cortex image.
On movement of the finger on the screen, get the position (currentpos), and the angle and magnitude in relation to the original touch (sourcepos).
Now, using simple angles, we can impose different bounds on your CCSprite using if statements. It's also a good idea to use a #define kMinMagnitude X statement to ensure the user moves their finger adequately.
I suppose you can execute the //Load Twitter or //Load Facebook branches either on the movement or on the cancellation of a touch. That's entirely up to you.
(pseudocode):
dx = currentpos.x - sourcepos.x
dy = currentpos.y - sourcepos.y
mag = sqrt(dx*dx + dy*dy);
ang = CC_RADIANS_TO_DEGREES(atan2f(dy, dx));
if (ang > 0 && ang < 80 && mag > kMinMagnitude) //Load Twitter
if (ang > 80 && ang < 120 && mag > kMinMagnitude) //Load facebook
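Fleshed out as a cocos2d touch handler, that might look something like this (a sketch; sourcePos is assumed to have been stored when the touch began, and kMinMagnitude is whatever threshold suits you):

#define kMinMagnitude 30.0f

- (void)ccTouchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    CGPoint currentPos = [self convertTouchToNodeSpace:touch];

    // Vector from the initial touch to the release point
    float dx = currentPos.x - sourcePos.x;
    float dy = currentPos.y - sourcePos.y;
    float mag = sqrtf(dx * dx + dy * dy);
    float ang = CC_RADIANS_TO_DEGREES(atan2f(dy, dx));

    if (mag < kMinMagnitude)
        return; // finger didn't move far enough

    if (ang > 0 && ang < 80) {
        // Load Twitter
    } else if (ang >= 80 && ang < 120) {
        // Load Facebook
    }
}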
I don't think making a subclass of CCSprite is the right choice here. You will probably want an NSObject that creates the CCSprites for you.
Also, CCSprite.position = CGPointMake(x, y) should allow you to set the position of the sprite. Don't forget to add it to a layer, just like any other CCNode object.