iPhone SDK: Collision detection, does it have to be a rectangle? - iphone

I am making a basic platform game for the iPhone and I have encountered a problem with my collision detection.
if (CGRectIntersectsRect(player.frame, platform.frame))
    pos2 = CGPointMake(0.0, +0.0);
else
    pos2 = CGPointMake(0.0, +10.0);
The collision detection is meant to stop the in-game gravity from applying while the player is standing on a platform. The problem is that the collision detection uses the rectangle around the player. Is there any way to do collision detection against the actual shape of an image (respecting its transparency) rather than the rectangle around it?

You'll have to program this on your own, and be aware that pixel-by-pixel collision is probably too expensive for the iPhone. My recommendation is to write a Collidable protocol (what every other programming language calls an interface), give it a collidesWith:(id<Collidable>)other method, and then implement that for any object you want to participate in collision. That lets you write case-by-case collision logic. Alternatively, you can make a big superclass that holds all the information you'd need for collision (in your case either an x, y, width, and height, or an x, y, and a pixel-data array) and a collidesWith: method. Either way you can write a bunch of different collision methods; if you only do pixel collision for a few things, it won't be much of a performance hit. Typically, though, it's better to do bounding-box collision or some other geometry-based test, as it is significantly faster.
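As a minimal Objective-C sketch of that protocol idea (every name here is illustrative, not an existing API):
#import <UIKit/UIKit.h>

@protocol Collidable <NSObject>
- (CGRect)boundingBox;                      //cheap first-pass test
- (BOOL)collidesWith:(id<Collidable>)other; //case-by-case logic goes here
@end

@interface Player : NSObject <Collidable>
@property (nonatomic, assign) CGRect frame;
@end

@implementation Player
- (CGRect)boundingBox { return self.frame; }
- (BOOL)collidesWith:(id<Collidable>)other
{
    //Start with the cheap test; only fall through to pixel checks
    //(or other expensive work) when the boxes actually overlap.
    return CGRectIntersectsRect([self boundingBox], [other boundingBox]);
}
@end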
The folks over at metanetsoftware made some great tutorials on collision techniques, among them separating-axis collision and grid-based collision, the latter of which sounds like the better fit for your game. If you want to stick with brute-force collision detection (checking every object against every other object), then a bounding box that is simply smaller than the image is typically the proper way to go. This is how many successful platformers did it, including Super Mario Brothers. You might also consider weighted bounding boxes, that is, a bounding box of one size for one type of object and a different size for others. In Mario, for example, you have a larger box for hitting coins than for hitting enemies.
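One cheap way to get that smaller box is CGRectInset, which shrinks a rectangle by a given amount on each axis. A sketch, where the inset amounts are made-up tuning values and enemy stands in for one of your objects:
//Shrink each frame before testing; tune the insets per object type.
CGRect playerBox = CGRectInset(player.frame, 6.0, 4.0);  //tight box for the player
CGRect enemyBox  = CGRectInset(enemy.frame, 10.0, 10.0); //forgiving box for enemies
if (CGRectIntersectsRect(playerBox, enemyBox))
{
    //handle the hit
}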
Now, even though I've warned you against it, I'll oblige you and describe how to do pixel-based collision. You'll want to access the pixel data of your CGImage, then iterate through the pixels to see whether one image shares an opaque location with another. Here's some code for it.
for (int i = 0; i < [objects count]; i++)
{
    MyObject *obj1 = [objects objectAtIndex:i];
    //Compare every object against every other object.
    for (int j = i + 1; j < [objects count]; j++)
    {
        MyObject *obj2 = [objects objectAtIndex:j];
        //Store whether or not we've collided.
        BOOL collided = NO;
        //First, do bounding box collision. We don't want to bother checking
        //pixels unless we are within each other's bounds.
        if (obj1.x + obj1.imageWidth >= obj2.x &&
            obj2.x + obj2.imageWidth >= obj1.x &&
            obj1.y + obj1.imageHeight >= obj2.y &&
            obj2.y + obj2.imageHeight >= obj1.y)
        {
            //We want to iterate only along the object with the smaller image.
            //This way, the collision check takes the least time possible.
            MyObject *check = (obj1.imageWidth * obj1.imageHeight < obj2.imageWidth * obj2.imageHeight) ? obj1 : obj2;
            //Go through the pixel data of the two objects.
            for (int x = check.x; x < check.x + check.imageWidth && !collided; x++)
            {
                for (int y = check.y; y < check.y + check.imageHeight && !collided; y++)
                {
                    if ([obj1 pixelIsOpaqueAtX:x andY:y] && [obj2 pixelIsOpaqueAtX:x andY:y])
                    {
                        collided = YES;
                    }
                }
            }
        }
    }
}
I made it so pixelIsOpaqueAtX:andY: takes a global coordinate rather than a local one, so when you implement it you have to remember to subtract the object's x and y again, or you'll be reading outside the bounds of your image.
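For illustration, pixelIsOpaqueAtX:andY: might look something like this, assuming each object caches its image's pixel data as a 32-bit RGBA buffer (the pixels, x, y, imageWidth, and imageHeight members are assumptions):
- (BOOL)pixelIsOpaqueAtX:(int)x andY:(int)y
{
    //Convert the global coordinate back into image-local space.
    int localX = x - self.x;
    int localY = y - self.y;
    if (localX < 0 || localY < 0 ||
        localX >= self.imageWidth || localY >= self.imageHeight)
    {
        return NO; //outside this image entirely
    }
    UInt32 pixel = self.pixels[localY * self.imageWidth + localX];
    //Test the alpha channel; which byte holds alpha depends on how
    //the image data was created, so adjust the mask to match.
    return (pixel & 0xFF000000) != 0;
}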

Related

Unity .GetParticles() high CPU spike

I'm using .GetParticles() on a particle system to then destroy the particles when they are outside of a certain range. My script is below, and the particle system variable is cached. I attached the deep-profile picture of what is taking the CPU, and it points to GetParticles(). I can't find any documentation as to why this would be a bad approach, as the script is only attached to one single game object.
Any insight appreciated, thanks!
private void Update()
{
    if (Time.time > (delayTimer + 0.02f))
    {
        delayTimer = Time.time;
        transform.position = new Vector3(transform.position.x, Water.transform.position.y - 2f, transform.position.z);
        //Gets the change in the water's height and resets it
        var waterHeightChange = Mathf.Abs(Water.transform.position.y - waterHeight);
        waterHeight = Water.transform.position.y;
        //Gets the distance to the top of the water to pop the bubble if surpassed
        var DistanceToDestroy = Mathf.Abs((Water.transform.position.y + (Water.transform.localScale.y / 2f) - .15f) - transform.position.y);
        //Creates a particle array sized at the emitter's maximum particles.
        //GetParticles fills the array and returns the number of live particles.
        //Only allocated on occasion (once at start, really) to keep allocations minimal.
        if (pSystemParticles == null || pSystemParticles.Length < pSystem.main.maxParticles)
        {
            pSystemParticles = new ParticleSystem.Particle[pSystem.main.maxParticles];
        }
        int numParticlesAlive = pSystem.GetParticles(pSystemParticles);
        for (int i = 0; i < numParticlesAlive; i++)
        {
            //Changes the height accordingly for each particle
            newPos.Set(pSystemParticles[i].position.x, pSystemParticles[i].position.y, pSystemParticles[i].position.z + waterHeightChange);
            //Grab the 'y' positional height relative to the emitter
            var particleHeight = newPos.z;
            if (particleHeight >= DistanceToDestroy)
            {
                //Deletes particle if needed
                pSystemParticles[i].remainingLifetime = 0;
            }
        }
        //Sets the particle system from the modified array
        pSystem.SetParticles(pSystemParticles, numParticlesAlive);
    }
}
I'm not sure why you get this spike, but it seems to be outside of your control. Maybe you could use the Collision Module instead: put an invisible plane above your waterline where the particles should stop, then set up the collision module so that the particles lose their whole lifetime when they collide (Lifetime Loss = 1). That way they should disappear when colliding with the plane. It could be faster because the particles would never have to enter the C# world, and it's a lot less code to write.
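If you would rather set that up from code than in the Inspector, here is a rough C# sketch using ParticleSystem.CollisionModule (the waterSurfacePlane reference and the component layout are assumptions):
using UnityEngine;

public class BubblePopper : MonoBehaviour
{
    public Transform waterSurfacePlane; //your invisible plane above the waterline

    void Start()
    {
        var collision = GetComponent<ParticleSystem>().collision;
        collision.enabled = true;
        collision.type = ParticleSystemCollisionType.Planes;
        collision.SetPlane(0, waterSurfacePlane);
        collision.lifetimeLossMultiplier = 1f; //lose the whole lifetime on contact
    }
}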

SpriteKit : calculate distance between two texture masks

I have two irregular shapes in SpriteKit, and I want to calculate the vertical distance between the base of a space ship and the (irregular) terrain right below it.
Is there a way to do it? Thanks!
Place an SKPhysicsBody that is shaped like a line at the center of your ship, with a width of 1 and the height of your scene. Then, in the didBeginContact method, grab the two contact points. You now know two points; just use the distance formula (in this case it is just y2 - y1) and you have your answer.
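A rough Objective-C sketch of that sensor body, assuming you already have references to your scene and ship nodes and that the category constants are defined elsewhere (the point is a thin body that only reports contacts and never pushes anything):
//A 1-point-wide, scene-height strip attached to the ship.
SKNode *altimeter = [SKNode node];
altimeter.physicsBody = [SKPhysicsBody bodyWithRectangleOfSize:
                            CGSizeMake(1.0, scene.size.height)];
altimeter.physicsBody.categoryBitMask = kAltimeterCategory;
altimeter.physicsBody.contactTestBitMask = kTerrainCategory;
altimeter.physicsBody.collisionBitMask = 0; //report contacts, apply no forces
[ship addChild:altimeter];
Then, in didBeginContact:, contact.contactPoint gives you the point on the terrain, and the ship's base y completes the y2 - y1 subtraction.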
I found a different way to solve my problem, but I think that KnightOfDragon's one is conceptually better (although I did not manage to make it work).
The terrain's texture is essentially a bitmap with opaque and transparent pixels. So I decided to parse these pixels, storing the highest opaque pixel for each column, building a "radar altitude map". So I just have to calculate the difference between the bottom of the ship and the altitude of the column right beneath its center:
CFDataRef imageData = CGDataProviderCopyData(CGImageGetDataProvider(terrain.texture.CGImage));
const UInt32 *pixels = (const UInt32 *)CFDataGetBytePtr(imageData);
NSMutableArray *radar = [NSMutableArray new];
for (long col = 0; col < terrain.size.width; col++)
    [radar addObject:@(0)];
for (long ind = 0; ind < (terrain.size.height * terrain.size.width); ind++)
{
    if (pixels[ind] & 0xff000000) // non-transparent pixel
    {
        long line = ind / terrain.size.width;
        long col = ind - (line * terrain.size.width);
        if ([radar[col] integerValue] < terrain.size.height - line)
            radar[col] = @(terrain.size.height - line);
    }
}
This solution could be optimized, of course. It's just the basic idea.
I've added an image to show the original texture, its representation as opaque/transparent pixels, and a test where I placed little white nodes to check where the "surface" was.
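Reading the radar back out might then look like this sketch (the ship node and the terrain's placement in scene coordinates are assumptions):
//Column of the terrain texture directly under the ship's center.
long col = (long)(ship.position.x - terrain.frame.origin.x);
if (col >= 0 && col < (long)terrain.size.width)
{
    CGFloat groundY = terrain.frame.origin.y + [radar[col] integerValue];
    CGFloat shipBottomY = ship.position.y - ship.size.height / 2.0;
    NSLog(@"clearance: %f", shipBottomY - groundY);
}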

Blob position comparison across several video frames

The goal is to detect whether one or more objects are stationary in a ROI for a period of time (the application is detecting vehicles blocking a zebra crossing). That means observing each blob with respect to time t.
Input = Video file
So, let's say the pedestrian crossing lane is the ROI. Background subtraction happens inside the ROI only, and then each blob (vehicle) is observed separately to see whether it has been motionless there for time t.
What I'm thinking is getting the position of each blob at frame 1 and frame n (the time threshold) and checking whether the position is the same. This must be applied to each blob, assuming there are multiple blobs, so a loop is involved to process each blob one by one: get its position at frame 1 and frame n, compare them (if they are the same, the blob has been motionless for time t and is therefore "blocking"), then move on to the next blob.
My logic written on java code:
//assuming "blobs" is an arraylist containing all the blobs in the image
int initialPosition = 0, finalPosition = 0;
static int violatorCount=0;
for(int i=0; i<blobs.size(); i++){ //iterate to each blob to process them separately
initialPosition = blobs.get(i).getPosition();
for(int j=0; j<=timeThreshold; j++){
if(blobs.get(i) == null){ //if blob is no longer existing on frame j
break;
}
finalPosition = blobs.get(i).getPosition();
}
if(initialPosition == finalPosition){
violatorCount++;
}
//output count on top-right part of window
}
Can you guys share the logic on how to implement this goal/idea in either Matlab or OpenCV?
Optical flow is an option (thanks to PSchn). Any other options I can consider?
Sounds like optical flow. You could use the OpenCV implementation: pass your points to cv::calcOpticalFlowPyrLK along with the next image (see here). Then you can check the distance between the two points and decide what to do.
I don't know if it works, just an idea.
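Since your code is Java, the corresponding binding is Video.calcOpticalFlowPyrLK. A sketch, under the assumption that prevGray/nextGray are consecutive grayscale frames and prevPts holds your blob centroids from frame 1:
import org.opencv.core.Mat;
import org.opencv.core.MatOfByte;
import org.opencv.core.MatOfFloat;
import org.opencv.core.MatOfPoint2f;
import org.opencv.core.Point;
import org.opencv.video.Video;

static boolean[] stationaryFlags(Mat prevGray, Mat nextGray,
                                 MatOfPoint2f prevPts, double tolerance) {
    MatOfPoint2f nextPts = new MatOfPoint2f();
    MatOfByte status = new MatOfByte(); //1 where tracking succeeded
    MatOfFloat err = new MatOfFloat();
    Video.calcOpticalFlowPyrLK(prevGray, nextGray, prevPts, nextPts, status, err);

    Point[] before = prevPts.toArray();
    Point[] after = nextPts.toArray();
    boolean[] stationary = new boolean[before.length];
    for (int i = 0; i < before.length; i++) {
        double dx = after[i].x - before[i].x;
        double dy = after[i].y - before[i].y;
        //A small displacement over the window suggests the blob is motionless.
        stationary[i] = Math.hypot(dx, dy) < tolerance;
    }
    return stationary;
}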

Move object to nearest empty space on a plane

Check the following gif: https://i.gyazo.com/72998b8e2e3174193a6a2956de2ed008.gif
I want the cylinder to instantly change location to the nearest empty space on the plane as soon as I put a cube on the cylinder. The cubes and the cylinder have box colliders attached.
At the moment the cylinder just gets stuck when I put a cube on it, and I have to click in some direction to make it start "swimming" through the cubes.
Is there any easy solution or do I have to create some sort of grid with empty gameobjects that have a tag which tells me if there's an object on them or not?
This is a common problem in RTS-like video games, and I am solving it myself. This requires a breadth-first search algorithm, which means that you're checking the closest neighbors first. You're fortunate to only have to solve this problem in a gridded-environment.
Usually what programmers will do is create a queue and add each node (space) in the entire game to that queue until an empty space is found. It will start with e.g. the above, below, and adjacent spaces to the starting space, and then recursively move out, calling the same function inside of itself and using the queue to keep track of which spaces still need to be checked. It will also need to have a way to know whether a space has already been checked and avoid those spaces.
Another solution I'm conceiving of would be to generate a (conceptual) Archimedean spiral from the starting point and somehow check each space along that spiral. The tricky part would be generating the right spiral and checking it at just the right points in order to hit each space once.
Here's my quick-and-dirty solution for the Archimedean-spiral approach in C++:
#include <algorithm> //for find
#include <cmath>     //for cos, sin, fmod
#include <utility>   //for pair, make_pair
#include <vector>
using namespace std;

float x, z, max = 150.0f;
vector<pair<float, float>> spiral;
//Generate the spiral vector (run this code once and store the spiral).
for (float n = 0.0f; n < max; n += (max + 1.0f - n) * 0.0001f)
{
    x = cos(n) * n * 0.05f;
    z = sin(n) * n * 0.05f;
    //Change 1.0f to 0.5f for half-sized spaces.
    //fmod is float modulus (remainder).
    x = x - fmod(x, 1.0f);
    z = z - fmod(z, 1.0f);
    pair<float, float> currentPoint = make_pair(x, z);
    //Make sure this pair isn't at (0.0f, 0.0f) and that it's not already in the spiral.
    if ((x != 0.0f || z != 0.0f) && find(spiral.begin(), spiral.end(), currentPoint) == spiral.end())
    {
        spiral.push_back(currentPoint);
    }
}
//Loop through the results (run this code per usage of the spiral).
for (unsigned int n = 0U; n < spiral.size(); ++n)
{
    //Draw or test the spiral.
}
It generates a vector of unique points (float pairs) that can be iterated through in order, which will allow you to draw or test every space around the starting space in a nice, outward (breadth-first), gridded spiral. With 1.0f-sized spaces, it generates a circle of 174 test points, and with 0.5f-sized spaces, it generates a circle of 676 test points. You only have to generate this spiral once and then store it for usage numerous times throughout the rest of the program.
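Per usage, the test loop might look like the following sketch, where startX/startZ is the blocked object's position and isOccupied and moveTo are placeholders for your own game logic:
for (const pair<float, float> &offset : spiral)
{
    float testX = startX + offset.first;
    float testZ = startZ + offset.second;
    if (!isOccupied(testX, testZ))
    {
        moveTo(testX, testZ); //first free space found in outward order
        break;
    }
}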
Note:
This spiral samples differently as it grows further and further out from the center (in the for loop: n += (max + 1.0f - n) * 0.0001f).
If you use the wrong numbers, you could very easily break this code or cause an infinite loop! Use at your own risk.
Though more memory intensive, it is probably much more time-efficient than the traditional queue-based solutions due to iterating through each space exactly once.
It is not a 100% accurate solution to the problem, however, because it is a gridded spiral; in some cases it may favor the diagonal over the lateral. This is probably negligible in most cases though.
I used this solution for a game I'm working on. More on that here. Here are some pictures (the orange lines in the first are drawn by me in Paint for illustration, and the second picture is just to demonstrate what the spiral looks like if expanded):

How to move incrementally in a 3D world using glRotatef() and glTranslatef()

I have some 3D models that I render with OpenGL in a 3D space, and I'm experiencing some headaches moving the 'character' (that is, the camera) with rotations and translations inside this world.
I receive the input (i.e. the coordinates to move to / the degrees to turn) from some external event (imagine user input, or data from a GPS + compass device), and each event is either a rotation OR a translation.
I've wrote this method to manage these events:
- (void)moveThePlayerPositionTranslatingLat:(double)translatedLat Long:(double)translatedLong andRotating:(double)degrees
{
    [super startDrawingFrame];
    if (degrees != 0)
    {
        glRotatef(degrees, 0, 0, 1);
    }
    if (translatedLat != 0)
    {
        glTranslatef(translatedLat, -translatedLong, 0);
    }
    [self redrawView];
}
Then in redrawView I'm actually drawing the scene and my models. It is something like:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
NSInteger nModels = [models count];
for (NSInteger i = 0; i < nModels; i++)
{
    MD2Object *mdobj = [models objectAtIndex:i];
    glPushMatrix();
    double *deltas = calloc(2, sizeof(double));
    deltas[0] = currentCoords[0] - mdobj.modelPosition[0];
    deltas[1] = currentCoords[1] - mdobj.modelPosition[1];
    glTranslatef(deltas[0], -deltas[1], 0);
    free(deltas);
    [mdobj setupForRenderGL];
    [mdobj renderGL];
    [mdobj cleanupAfterRenderGL];
    glPopMatrix();
}
[super drawView];
[super drawView];
The problem appears when translation and rotation events are called one after the other: for example, when I rotate incrementally for a few iterations (still around the origin), then translate, and finally rotate again, the last rotation does not happen around the current (translated) position but around the old one (the old origin). I'm well aware that this happens when the order of transformations is inverted, but I believed that after drawing, the new center of the world was given by the translated system.
What am I missing? How can I fix this? (Any reference on OpenGL will be appreciated too.)
I would recommend not doing cumulative transformations in the event handler. Instead, store the current values internally and apply the transformation only once per redraw, though I don't know if this is the behaviour you want.
Pseudocode:
someEvent(lat, long, deg)
{
    currentLat += lat;
    currentLong += long;
    currentDeg += deg;
}

redraw()
{
    glClear()
    glRotatef(currentDeg, 0, 0, 1);
    glTranslatef(currentLat, -currentLong, 0);
    ... // draw stuff
}
It sounds like a couple of things are happening here:
The first is that you need to be aware that rotations occur about the origin. So when you translate and then rotate, you are not rotating about what you think is the origin, but about the new origin, which is T⁻¹(0) (the original origin transformed by the inverse of your translation).
Second, you're making things quite a bit harder than they need to be. What you might want to consider instead is using gluLookAt. You essentially give it a position within your scene, a point in your scene to look at, and an 'up' vector, and it will set up the scene properly. To use it, keep track of where your camera is located; call that vector p. Also keep a vector n (for normal; it indicates the direction you're looking) and a vector u (your up vector). It will make things easier for more advanced features if n and u are orthonormal (i.e. they are orthogonal to each other and have unit length). If you do this, you can compute r = n × u (your 'right' vector), a unit vector orthogonal to the other two. You then 'look at' p + n and provide u as the up vector.
Ideally, your n, u and r have some canonical form, for instance:
n = <0, 0, 1>
u = <0, 1, 0>
r = <1, 0, 0>
You then incrementally accumulate your rotations and apply them to the canonical form of your orientation vectors. You can use either Euler rotations or quaternion rotations to accumulate your rotations (I've come to really appreciate the quaternion approach for a variety of reasons).
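Putting that bookkeeping into code, a minimal C sketch (note that gluLookAt is part of GLU, a desktop-GL library; on the iPhone's OpenGL ES you would build the equivalent look-at matrix yourself):
//Requires GLU (e.g. #include <GL/glu.h> on desktop GL).
//Camera state: position p, look direction n, up u (n and u orthonormal).
GLdouble p[3] = { 0.0, 0.0, 5.0 };
GLdouble n[3] = { 0.0, 0.0, -1.0 };
GLdouble u[3] = { 0.0, 1.0, 0.0 };

//r = n x u, the 'right' vector (handy for strafing).
GLdouble r[3] = {
    n[1] * u[2] - n[2] * u[1],
    n[2] * u[0] - n[0] * u[2],
    n[0] * u[1] - n[1] * u[0]
};

//Look from p toward p + n, with u as the up vector.
gluLookAt(p[0], p[1], p[2],
          p[0] + n[0], p[1] + n[1], p[2] + n[2],
          u[0], u[1], u[2]);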