SpriteKit: calculate distance between two texture masks

I have two irregular shapes in SpriteKit, and I want to calculate the vertical distance between the base of a spaceship and the (irregular) terrain directly below it.
Is there a way to do it?
Thanks!

Place an SKPhysicsBody shaped like a line at the center of your ship, with a width of 1 and the height of your scene. Then, in the didBeginContact method, grab the two contact points. Once you know the two points, just use the distance formula (in this case it is simply y2 - y1) and you have your answer.
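If an extra physics body per ship feels heavy-handed, SpriteKit can also ray-cast through the physics world directly. A minimal sketch of that alternative, assuming ship and terrain are SKSpriteNodes in the scene and terrain has a physics body (e.g. from bodyWithTexture:size:):
// Cast a ray straight down from the ship's base to the scene floor.
CGPoint shipBottom = CGPointMake(ship.position.x, ship.position.y - ship.size.height / 2);
CGPoint sceneFloor = CGPointMake(shipBottom.x, 0);
__block CGFloat verticalDistance = -1; // -1 means "no terrain below"
[self.physicsWorld enumerateBodiesAlongRayStart:shipBottom
                                            end:sceneFloor
                                     usingBlock:^(SKPhysicsBody *body, CGPoint point, CGVector normal, BOOL *stop) {
    if (body.node == terrain) {
        verticalDistance = shipBottom.y - point.y; // again just y2 - y1
        *stop = YES;
    }
}];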

I found a different way to solve my problem, but I think KnightOfDragon's approach is conceptually better (although I did not manage to make it work).
The terrain's texture is essentially a bitmap with opaque and transparent pixels. So I decided to parse these pixels, storing the highest opaque pixel for each column and building a "radar altitude map". Then I just have to calculate the difference between the bottom of the ship and the altitude of the column right beneath its center:
CFDataRef imageData = CGDataProviderCopyData(CGImageGetDataProvider(terrain.texture.CGImage));
const UInt32 *pixels = (const UInt32 *)CFDataGetBytePtr(imageData);
NSMutableArray *radar = [NSMutableArray new];
for (long col = 0; col < terrain.size.width; col++)
    [radar addObject:@(0)];
for (long ind = 0; ind < (terrain.size.height * terrain.size.width); ind++)
{
    if (pixels[ind] & 0xff000000) // non-transparent pixel
    {
        long line = ind / terrain.size.width;
        long col = ind - (line * terrain.size.width);
        // rows are counted from the top of the image, so measure altitude from the bottom
        if ([radar[col] integerValue] < terrain.size.height - line)
            radar[col] = @(terrain.size.height - line);
    }
}
CFRelease(imageData); // the copied pixel data must be released
This solution could be optimized, of course. It's just the basic idea.
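The lookup itself is then cheap. A minimal sketch of it, assuming ship is an SKSpriteNode and the terrain sprite's lower-left corner sits at the scene origin so that columns line up with x coordinates:
// Vertical gap between the ship's base and the terrain column under its center.
// In real code, clamp col to [0, terrain.size.width).
long col = (long)ship.position.x;
CGFloat shipBottom = ship.position.y - ship.size.height / 2;
CGFloat verticalDistance = shipBottom - [radar[col] integerValue];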
I've added an image to show the original texture, its representation as opaque/transparent pixels, and a test by putting little white nodes to check where the "surface" was.

Related

Scaling separate triangles (in geometry shader?)

For a masking object, I am trying to scale each triangle individually. If I scale the object as a whole, the points further from the center get moved too far; I just want the object to have 'more body'. Since I use it as a mask, it doesn't matter if the triangles end up overlapping.
Although looking at this might hurt someone deep inside, this is actually what I'm trying to achieve:
I thought this was best done in a shader, and since I need to know the center of each triangle, the geometry shader seemed the right place. I came up with the code below, but things keep acting... strange.
float3 center = (IN[0].vertex.xyz + IN[1].vertex.xyz + IN[2].vertex.xyz) / 3;
for (int i = 0; i < 3; i++)
{
    float3 distance = IN[i].vertex.xyz - center.xyz;
    float3 normal = normalize(distance);
    distance = abs(distance);
    float scale = 1;
    float3 pos = IN[i].vertex.xyz + (distance * normal.xyz * (scale - 1));
    o.pos.xyz = pos.xyz;
    o.pos.w = IN[i].vertex.w;
    tristream.Append(o);
}
My plan was to calculate the center of the triangle and then calculate the distance between the center and each point. I would then take the normal of this distance to know in which direction I would have to move the vertex, and change the position by adding distance * normal(direction) * scale to the original position of the vertex. Yet the triangles change when you rotate the camera, so I doubt this is right. Does anyone know what could be wrong?
(Just some notes:
the mesh is basically 2D, only changing across the x- and z-axis (if this matters).
I did abs(distance) since I thought it would cancel out the normal if both were negative. I'm not sure if this is necessary.
I did scale - 1 since a scale of 1 should leave the mesh unchanged, while a scale of 2 should make all triangles twice as big.
I have no clue what to do with the w value, but keeping the old value at least doesn't screw things up that much. Perhaps the problem lies here? I thought this value should always be 1 for matrix multiplications.
)
Okay, so besides using a way too 'complex' formula to calculate the new position of each point (a better way is at https://math.stackexchange.com/questions/1563249/how-do-i-scale-a-triangle-given-its-cartesian-cooordinates), I found out that it indeed had to do with the w value. As I always thought this was mainly a helper variable, it would be awesome if someone could explain how that value screwed things over.
Anyway, including that value in the equation, it works fine:
float4 center = (IN[0].vertex.xyzw + IN[1].vertex.xyzw + IN[2].vertex.xyzw) / 3;
for (int i = 0; i < 3; i++)
{
    float scale = 2;
    // for scale = 2 this is center + 2 * (vertex - center)
    float4 pos = (IN[i].vertex.xyzw * scale) - center.xyzw;
    o.pos.xyzw = pos.xyzw;
    tristream.Append(o);
}
This works just fine :)
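For reference, the general form of scaling about the centroid is v' = center + scale * (v - center), which coincides with the 2v - c above exactly when scale = 2. A plain C sketch of that math (a hypothetical CPU-side helper for checking values, not the shader itself):
#include <stddef.h>

typedef struct { float x, y, z; } Vec3;

// Scale a triangle about its centroid: v' = c + s * (v - c).
// s == 1 leaves the triangle unchanged; s == 2 doubles its size.
static void scaleTriangle(Vec3 v[3], float s)
{
    Vec3 c = { (v[0].x + v[1].x + v[2].x) / 3.0f,
               (v[0].y + v[1].y + v[2].y) / 3.0f,
               (v[0].z + v[1].z + v[2].z) / 3.0f };
    for (size_t i = 0; i < 3; i++) {
        v[i].x = c.x + s * (v[i].x - c.x);
        v[i].y = c.y + s * (v[i].y - c.y);
        v[i].z = c.z + s * (v[i].z - c.z);
    }
}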

Matlab and OpenCV calculate different image moment m00 for the same image

For exactly the same image:
OpenCV code:
#include <opencv2/opencv.hpp>
using namespace cv;
using namespace std;

Mat img = imread("testImg.png", 0);
Mat img_bw;
threshold(img, img_bw, 0, 255, CV_THRESH_BINARY | CV_THRESH_OTSU);
Mat tmp;
img_bw.copyTo(tmp); // findContours modifies its input, so work on a copy
vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
findContours(tmp, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);
// Get the moments
vector<Moments> mu(contours.size());
for (int i = 0; i < contours.size(); i++)
{
    mu[i] = moments(contours[i], false);
}
// Display area (m00)
for (int i = 0; i < contours.size(); i++)
{
    cout << mu[i].m00 << endl;
    // I also tried cout << contourArea(contours.at(i)) << endl;
    // but the result is the same
}
Matlab code:
Img = imread('testImg.png');
lvl = graythresh(Img);
bw = im2bw(Img, lvl);
stats = regionprops(bw, 'Area');
for k = 1:length(stats)
    Area = stats(k).Area; % m00
end
Does anyone have any thoughts on this? How can I unify them? I think they use different methods to find contours.
I uploaded the test image at the link below so that anyone interested can reproduce the procedure.
It is a 100 by 100, 8-bit grayscale image with only 0 and 255 pixel intensities. For simplicity, it has only one blob.
For OpenCV, the area of the contour (image moment m00) is 609.5 (a very odd value).
For Matlab, the area of the contour (image moment m00) is 763.
Thanks
Many different definitions exist of how contours should be extracted from a binary image. For example, a contour could be the polygon that forms the perimeter of a white object in a binary image. If OpenCV used this definition, the areas of its contours would match the areas of the connected components found by Matlab. But this is not the case. The contour found by the findContours() function is the polygon that connects the centers of neighboring "edge pixels", where an edge pixel is a white pixel that has a black neighbor in its N4 neighborhood.
Example: suppose you have an image of 100x100 pixels. Every pixel above the diagonal is black; every pixel on or below the diagonal is white (a black triangle and a white triangle). The exact separating polygon would have almost 200 vertices spaced 1 pixel apart: (0,0), (1,0), (1,1), (2,1), (2,2), ..., (100,99), (100,100), (0,100). As you can see, this definition is not very good from a practical point of view. The polygon returned by OpenCV has exactly the 3 vertices needed to define the triangle: (0,0), (99,99), (0,99). Its area is 99 x 99 / 2 pixels. It is not equal to the number of white pixels, and it is not even an integer. But this polygon is more practical than the previous one.
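You can sanity-check this with contourArea() on a hypothetical three-point contour (assuming the usual OpenCV includes and namespaces):
vector<Point> tri;
tri.push_back(Point(0, 0));
tri.push_back(Point(99, 99));
tri.push_back(Point(0, 99));
double area = contourArea(tri); // 4900.5 == 99 * 99 / 2, not an integer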
Those are not the only possible definitions for polygon extraction; many others exist. Some of them (in my opinion) may be better than the one OpenCV uses, but this is the one that was implemented and is used by a lot of people.
Currently there is no effective workaround for your problem. If you want to get exactly the same numbers from Matlab and OpenCV, you will have to draw the contours found by findContours() on a black image and call moments() on that image. I know the upcoming OpenCV 3 has a function that finds connected components, but I haven't tried it myself.
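That workaround might look like the following sketch (same OpenCV 2.x style as the question, reusing img and contours from above):
// Draw the found contours filled onto a black image, then take moments
// of the image itself, so m00 becomes a pixel count like Matlab's Area.
Mat filled = Mat::zeros(img.size(), CV_8UC1);
drawContours(filled, contours, -1, Scalar(255), CV_FILLED); // -1 = draw all contours
Moments m = moments(filled, true); // binaryImage = true: every nonzero pixel counts as 1
cout << m.m00 << endl;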

(Unity3D) Paint with soft brush (logic)

During the last few days I have been coding painting behavior for a game I am working on, and I am currently in a very advanced phase: I have about 90% of the work done and working perfectly. Now what I need is to be able to draw with a "soft brush", because for now it's like I am painting in a "pixel style", which was totally expected because that's what I wrote.
My current goal consists of using this solution:
import a brush texture (this image)
create an array that contains all the alpha values of that texture
when drawing, use the array elements to define the new pixels' alpha
And this is my code to do that (it's not very long; a lot of it is comments):
//The main painting method
//theObject = the object to be painted
//tmpTexture = the object's current texture
//targetTexture = the new texture
void paint (GameObject theObject, Texture2D tmpTexture, Texture2D targetTexture)
{
    //x and y are 2 floats from another class;
    //they store the coordinates of the pixel
    //that got hit by the raycast
    int x = (int)(coordinates.pixelPos.x);
    int y = (int)(coordinates.pixelPos.y);
    //iterate through a block of pixels that starts at x and y
    //and goes #brushHeight pixels up and #brushWidth pixels right
    for (int tmpY = y; tmpY < y + brushHeight; tmpY++) {
        for (int tmpX = x; tmpX < x + brushWidth; tmpX++) {
            //check if the current pixel is different from the target pixel
            if (tmpTexture.GetPixel (tmpX, tmpY) != targetTexture.GetPixel (tmpX, tmpY)) {
                //create a temporary color from the target pixel at the given coordinates
                Color tmpCol = targetTexture.GetPixel (tmpX, tmpY);
                //change the alpha of that pixel based on the brush alpha;
                //myBrushAlpha is a 2-dimensional array that contains
                //the different alpha values of the brush;
                //the subtractions keep the index in range
                if (myBrushAlpha [tmpY - y, tmpX - x].a > 0) {
                    tmpCol.a = myBrushAlpha [tmpY - y, tmpX - x].a;
                }
                //set the new pixel on the current texture
                tmpTexture.SetPixel (tmpX, tmpY, tmpCol);
            }
        }
    }
    //Apply
    tmpTexture.Apply ();
    //change the object's main texture
    theObject.renderer.material.mainTexture = tmpTexture;
}
Now the fun (and bad) part: the code did exactly what I asked for, but there is something I didn't think of, and I couldn't solve it after spending the whole night trying.
The thing is that by drawing with the brush alpha every time, I created a very weird effect: it decreases the alpha value of "old" pixels. So I tried to fix that by adding an if statement that checks whether the pixel's current alpha is less than the corresponding brush alpha pixel. If it is, raise the pixel's alpha to match the brush; and if the pixel's alpha is bigger, keep adding the brush alpha value to it in order to get that "soft brushing" effect. In code it became this:
if (myBrushAlpha [tmpY - y, tmpX - x].a > tmpCol.a) {
    tmpCol.a = myBrushAlpha [tmpY - y, tmpX - x].a;
} else {
    tmpCol.a += myBrushAlpha [tmpY - y, tmpX - x].a;
}
But after I did that, I got the "pixelized brush" effect back. I'm not sure, but maybe it's because I'm doing these checks inside a for loop, so everything is executed before the end of the current frame and I don't see the effect. Could that be it?
I'm really lost here and hope you can put me in the right direction.
Thank you very much and have a great day

Converting set of CGPoints to CATransform3D to deskew image

I have an image that contains a trapezoidal section which I need to transform into a square shape (deskew). I am really struggling to understand 3D transformation matrices and have no idea where to start with this.
At present I have 4 CGPoints that represent the skewed shape, and 4 CGPoints that represent a uniform rectangle. How would I convert the initial 4 CGPoints into a 3D transform matrix that deskews the image?
I'm basically looking for the inverse of this: iPhone image stretching (skew), where the first image would be the input shape and a square image would be the output.
Can anyone point me in the right direction?
Find the 2 highest CGPoints and level them; while doing this, also level the 2 lowest CGPoints, the 2 left-most CGPoints, and the 2 right-most CGPoints.
//assuming skewpoint1 & skewpoint2 are the highest points
if (skewpoint1.y < skewpoint2.y) {
    skewpoint1.y += speed;
    skewpoint2.y -= speed;
} else {
    skewpoint1.y -= speed;
    skewpoint2.y += speed;
}
Do similar for the lowest points, and for the x values of the 2 left-most and 2 right-most CGPoints, as sketched below.
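For example, the x-axis analogue for the two left-most points might look like this (skewpoint3 and skewpoint4 are hypothetical names for those points):
//assuming skewpoint3 & skewpoint4 are the 2 left-most points
if (skewpoint3.x < skewpoint4.x) {
    skewpoint3.x += speed;
    skewpoint4.x -= speed;
} else {
    skewpoint3.x -= speed;
    skewpoint4.x += speed;
}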
You could also add a snap-to for when the distance between the points is less than 2 x speed, snapping both to the point midway between them, i.e.
if (fabs(skewpoint1.y - skewpoint2.y) < (speed * 2)) {
    //depending on which CGPoint is higher
    skewpoint1.y += (fabs(skewpoint1.y - skewpoint2.y) / 2);
    skewpoint2.y -= (fabs(skewpoint1.y - skewpoint2.y) / 2);
}

Car turning circle and moving the sprite

I would like to use Cocos2d on the iPhone to draw a 2D car and make it steer from left to right in a natural way.
Here is what I tried:
Calculate the angle of the wheels and just move the car to the destination point the wheels point to. But this creates a very unnatural feel; the car drifts half the time.
After that I started researching how to derive a car's turning circle, which requires a couple of constants like the wheelbase and the width of the car.
After a lot of research, I created the following code:
float steerAngle = 30; // in degrees
float speed = 20;
float carWidth = 1.8f; // as in 1.8 meters
float wheelBase = 3.5f; // as in 3.5 meters
float x = (wheelBase / fabs(tan(steerAngle)) + carWidth / 2);
float wheelBaseHalf = wheelBase / 2;
float r = (float) sqrt(x * x + wheelBaseHalf * wheelBaseHalf);
float theta = speed * 1 / r;
if (steerAngle < 0.0f)
    theta = theta * -1;
drawCircle(CGPointMake(carPosition.x - r, carPosition.y),
           r, CC_DEGREES_TO_RADIANS(180), 50, NO);
The first couple of lines are my constants; carPosition is of type CGPoint. After that I try to draw a circle showing the turning circle of my car, but the circle it draws is far too small. I could just make my constants bigger to get a bigger circle, but then I would still need to know how to move my sprite along this circle.
I tried following a .NET tutorial I found on the subject, but I can't completely convert it because it uses matrices, which aren't supported by Cocoa.
Can someone give me a couple of pointers on how to start this? I have been looking for example code, but I can't find any.
EDIT: after the comments given below
I corrected my constants: my wheelBase is now 50 (the sprite is 50 px high) and my carWidth is 30 (the sprite is 30 px wide).
But now I have the problem that when my car does its first 'tick', the rotation (and placement) is correct, but after that the calculations seem wrong.
The middle of the turning circle moves instead of staying at its original position. What I need (I think) is to recalculate the center of the turning circle at each angle of the car. I would think this is easy, because I have the radius and the turning angle, but I can't seem to figure out how to keep the car moving in a nice circle.
Any more pointers?
You have the right idea. The constants are the problem in this case. You need to specify wheelBase and carWidth in units that match your view size. For example, if the image of your car on the screen has a wheelbase of 30 pixels, you would use 30 for the wheelBase variable.
This explains why your on-screen circles are too small. Cocoa is trying to draw circles for a tiny little car which is only 1.8 pixels wide!
Now, for the matter of moving your car along the circle:
The theta variable you calculate in the code above is a rotational speed, which is what you would use to move the car around the center point of that circle:
Let's assume that your speed variable is in pixels per second, to make the calculations easier. With that assumption in place, you would simply execute the following code once every second:
// calculate the new position of the car
newCarPosition.x = (carPosition.x - r) + r*cos(theta);
newCarPosition.y = carPosition.y + r*sin(theta);
// rotate the car appropriately (pseudo-code)
[car rotateByAngle:theta];
Note: I'm not sure what the correct method is to rotate your car's image, so I just used rotateByAngle: to get the point across. I hope it helps!
update (after comments):
I hadn't thought about the center of the turning circle moving with the car. The original code doesn't take into account the angle that the car is already rotated to. I would change it as follows:
...
if (steerAngle < 0.0f)
    theta = theta * -1;
// calculate the center of the turning circle,
// taking into account the rotation of the car
circleCenter.x = carPosition.x - r*cos(carAngle);
circleCenter.y = carPosition.y + r*sin(carAngle);
// draw the turning circle
drawCircle(circleCenter, r, CC_DEGREES_TO_RADIANS(180), 50, NO);
// calculate the new position of the car
newCarPosition.x = circleCenter.x + r*cos(theta);
newCarPosition.y = circleCenter.y + r*sin(theta);
// rotate the car appropriately (pseudo-code)
[car rotateByAngle:theta];
carAngle = carAngle + theta;
This should keep the center of the turning circle at the appropriate point, even if the car has been rotated.
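If keeping the turning-circle center straight stays fiddly, a common alternative is the kinematic bicycle model, which integrates the car's heading directly from speed, wheelbase, and steering angle instead of tracking the circle explicitly. A minimal C sketch under assumed units (pixels, radians, seconds; all names hypothetical, not cocos2d API):
#include <math.h>

typedef struct { float x, y, heading; } CarState;

// One integration step of dt seconds.
// Turn rate = v / R, with turning radius R = wheelBase / tan(steerAngle).
static void stepCar(CarState *car, float speed, float steerAngle,
                    float wheelBase, float dt)
{
    car->x += speed * cosf(car->heading) * dt;
    car->y += speed * sinf(car->heading) * dt;
    car->heading += (speed / wheelBase) * tanf(steerAngle) * dt;
}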