How do convertToWorldSpace / convertToNodeSpace work? - iPhone

I have been working on a game for a while now and have used convertToWorldSpace by trial and error. It always worked out, but to be honest I have no idea what I am doing, and I really hate not understanding my own code. Unfortunately the Internet does not give much information on this. So I hope somebody can explain, for example, why I have to call convertToWorldSpace on the node's parent, and when to use convertToNodeSpace. I have absolutely no clue. Can somebody explain in general what they do and maybe give an example? I would really appreciate it! :-)

simply put...
the world space is the coordinate grid of the screen.
nodespace is the local coordinate grid of a node, such as a layer.
cocos2d and most other game frameworks build a scene out of multiple layers, and each layer usually contains a bunch of other little nodes/sprites. when you calculate coordinates for the sprites in your layer you are working in node space; however, touch locations come in world space, so you need to convert them using convertToNodeSpace etc.
hope this helps!
---- edit ----
maybe an example will help...
[somenode convertToWorldSpace:ccp(x, y)];
the above code converts the point (x, y), given in somenode's local coordinates, to coordinates on the screen. for example, (0, 0) is the bottom-left corner of the node, but not necessarily the bottom-left of the screen; this call tells you where the node's (0, 0) actually lands on the screen.
[somenode convertToNodeSpace:ccp(x, y)];
the above code does the opposite: it converts the screen coordinates (x, y) to what they would be in somenode's local space.
so it comes in handy when you have something you want to move (or get the position of, or whatever) that's a child of some other node or layer, since most of the time you want to move it relative to the screen rather than within that layer.
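for instance, here is a minimal sketch of both directions in a layer's touch handler (cocos2d 1.x style; the _player sprite is just an illustrative name):
// inside a CCLayer subclass (with self.isTouchEnabled = YES set in init)
- (void)ccTouchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    // touch locations arrive in screen (world) space
    CGPoint screenPoint = [[CCDirector sharedDirector] convertToGL:[touch locationInView:touch.view]];

    // world/screen space -> this layer's local (node) space
    CGPoint localPoint = [self convertToNodeSpace:screenPoint];
    if (CGRectContainsPoint(_player.boundingBox, localPoint)) {
        // the touch landed on the player sprite
    }

    // and the other direction: where does the player actually sit on screen?
    // note the conversion is done by the *parent*, because _player.position
    // is expressed in its parent's coordinate system.
    CGPoint playerOnScreen = [_player.parent convertToWorldSpace:_player.position];
    NSLog(@"player is at %@ on screen", NSStringFromCGPoint(playerOnScreen));
}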

Related

Unity: some of the objects come in on the wrong axis from a Blender FBX file

The problem I have is simple. When I export my file from Blender as FBX and import it into Unity, the objects (or parts of the objects, for some reason) end up on the wrong axis, while everything looks fine in both Blender and the FBX file. I searched Google for this problem; most results did not help me. I saw that scripting might help, and I found this code:
transform.eulerAngles = new Vector3(0, transform.eulerAngles.y, 0);
But the problem remains, because it changes the rotation of every road, so the right roads become wrong while the wrong roads become right. Attaching this only to the affected planes might work, but there are so many planes, and I will add more planes to the Blender project in the future, so that won't help me much.
I have two options:
Write a script that corrects the orientation whenever a plane/object faces the wrong axis.
Find the right way to import the file.
I don't know how to do either of these, since I am a beginner. I use Unity 2019.4.33f1 and Blender 2.83. Please help me.
https://drive.google.com/drive/folders/13Y-lnccTvNPWPKAT520CCM8u-7MgaXkR?usp=sharing
Thank you
EDIT: I put my Blender file in the drive link as well.
EDIT 2: I keep editing, I know, but I realized I have another problem as well. Unity imports my object wrong: for example, I have a half-green, half-red object and I want both colors on the +Y axis, but it puts the red part on the -Y axis. Here are the pictures: https://drive.google.com/drive/folders/1ob5xdKv0nPHN3TSABHGDVA7inW8vkag6?usp=sharing
How can I fix this :'(
LAST EDIT: I found a way to solve it. I added the same object twice, but one of them has different settings. Assume the rotations are 0 0 0 (x y z respectively) and the scales are 1 1 1 (x y z respectively). The second road gets different settings from the first: its rotation is 0 0 180 and its scale is -1 1 1. Of course, I am open to better suggestions, but this is the solution I found.
SOLUTION: A long time later, I finally found the answer. I'm updating this in case someone needs it later. All I had to do was recalculate the normals: after selecting all of the roads, Edit Mode -> Mesh -> Normals -> Recalculate Outside (or Recalculate Inside).
Blender uses a right-handed coordinate system where the Z axis points up, whereas Unity uses a left-handed coordinate system where the Y axis points up. There might be some extensions or packages to keep them consistent, but you probably just have to rotate the model in Unity or Blender to fit your needs.
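To make the difference concrete, here is a tiny sketch (in plain C, as a language-neutral illustration; you would not normally write this yourself, since the FBX importer does an equivalent remapping for you) of how a point in Blender's right-handed, Z-up convention maps into Unity's left-handed, Y-up convention:
#include <stdio.h>

// Blender: right-handed, Z up. Unity: left-handed, Y up.
// Swapping the Y and Z components converts between the two conventions;
// swapping two axes is also what flips the handedness.
typedef struct { float x, y, z; } Vec3;

static Vec3 blenderToUnity(Vec3 b)
{
    Vec3 u = { b.x, b.z, b.y };   // keep X, use Blender's Z as Unity's Y (up)
    return u;
}

int main(void)
{
    Vec3 blenderUp = { 0.0f, 0.0f, 1.0f };        // "up" in Blender is +Z
    Vec3 unityUp = blenderToUnity(blenderUp);     // becomes +Y, "up" in Unity
    printf("(%.1f, %.1f, %.1f)\n", unityUp.x, unityUp.y, unityUp.z);
    return 0;
}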

Specifying Lat & Long for Leaflet TileLayer

Seems like a simple question, but I have been tearing my hair out for hours now.
I have a series of files ie.
kml_image_L1_0_0.jpg
kml_image_L2_0_0.jpg
kml_image_L2_0_1.jpg
kml_image_L2_1_0.jpg
kml_image_L2_1_1.jpg
etc. However, just plotting them on the Leaflet map surface understandably puts the images at 0,0 on the earth's surface, and the zoom level 0 inferred from the files should really be about 15 or so.
So I want to specify the latitude and longitude where the images should originate, and what zoom level they should start at. I have tried bounds (which doesn't display anything) and I have tried playing with offsetting the zoom level.
I need this because a user needs to click on an offline map to specify where they are and I need the GPS coordinates.
I also have a KML file but it seems to be of more help for plotting vector data on the map.
Any help is much appreciated, cheers.
If I understand correctly, the "kml_image_Lz_x_y.jpg" images that you have are actually tiles, with zoom, horizontal and vertical indices in their file name?
And your issue is that they use (z,x,y) numbers as if they started from the top-most level (zoom 0, single tile for entire world), but in fact they are just a small portion of the pyramid of tiles?
And you cannot use them as is because you still want to get actual geographic coordinates (latitude, longitude), which would be totally wrong if you used the tiles as if they were showing the entire world?
In that case, you have several options as workarounds:
The simplest and most reliable would probably be to write a small script that renames all your tiles to their true (z, x, y) numbers.
Another option would be to modify the (z, x, y) numbers before they are written into the tile src attribute, applying the appropriate offset (a constant offset for z; for x and y the offset scales with the zoom level, doubling at each level). That would normally happen in the L.TileLayer.getTileUrl() method.
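The re-numbering arithmetic is the same whichever option you choose. Here is a rough sketch of it (written in C just to show the math; zOffset, xOrigin and yOrigin stand for the true zoom and tile indices of your pyramid's top tile, which you would have to work out for your data):
#include <stdio.h>

// Illustration of the re-numbering only (the concrete values below are made up).
// If the single top tile of your local pyramid is really tile (xOrigin, yOrigin)
// at true zoom zOffset, then a local tile (localZ, localX, localY) maps to the
// true tile computed below. localZ counts from 0 at that top tile, so with file
// names starting at L1 it would be fileLevel - 1.
static void trueTileIndex(int localZ, int localX, int localY,
                          int zOffset, int xOrigin, int yOrigin,
                          int *trueZ, int *trueX, int *trueY)
{
    int scale = 1 << localZ;              // each local zoom level doubles the tile grid
    *trueZ = zOffset + localZ;            // constant offset for z
    *trueX = xOrigin * scale + localX;    // x/y offsets scale with the zoom level
    *trueY = yOrigin * scale + localY;
}

int main(void)
{
    int z, x, y;
    trueTileIndex(1, 1, 0, /*zOffset=*/15, /*xOrigin=*/16368, /*yOrigin=*/10890, &z, &x, &y);
    printf("true tile: z=%d x=%d y=%d\n", z, x, y);   // true tile: z=16 x=32737 y=21780
    return 0;
}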
Good luck! :-)

openGL ES Rotation about a point

I'm attempting to get an object to rotate about the origin point (0, 0, 0).
I'm following some guidelines from this blog and was able to get the basic rotation about the Z axis, and it makes a very tight circle about the Z axis.
When I change it to the X or Y axis the triangle I made goes behind me and then shows up from the other side.
The basic effect I'm hoping to achieve is to have it spin right in front of the camera.
I understand that I would have to rotate it by the amount I want and then translate it back to the origin, but I'm not quite sure on how to figure out how much to translate it by.
Can someone give me a push in the right direction about this especially the formula I would need to use to translate it properly?
Hard to answer without seeing your code, but it sounds like you want to first translate the center of the triangle to the origin, rotate, then translate back to the triangle's original position. glRotate() rotates around the origin, not an arbitrary point.
So, effectively,
glTranslatef(centerX, centerY, centerZ);
glRotatef(angle, 0, 0, 1);
glTranslatef(-centerX, -centerY, -centerZ);
Remember that OpenGL transformations are applied to the vertices in the reverse of the order in which they are specified in the code, so the above translates by -(centerX, centerY, centerZ), then rotates, then translates back by (centerX, centerY, centerZ).
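Putting it together, here is a minimal GL ES 1.1 sketch (the triangle data and center values are illustrative); wrapping the transform in glPushMatrix/glPopMatrix keeps it from leaking into later draws:
#import <OpenGLES/ES1/gl.h>

static void drawTriangleSpinningAboutItsCenter(GLfloat angle,
                                               GLfloat centerX, GLfloat centerY, GLfloat centerZ)
{
    // A triangle whose vertices are given in world coordinates, centered on
    // (centerX, centerY, centerZ).
    const GLfloat triangle[] = {
        centerX - 0.5f, centerY - 0.3f, centerZ,
        centerX + 0.5f, centerY - 0.3f, centerZ,
        centerX,        centerY + 0.6f, centerZ,
    };

    glPushMatrix();

    // Applied to the vertices in reverse order of specification: first move the
    // pivot to the origin, then rotate about the z axis, then move it back.
    glTranslatef(centerX, centerY, centerZ);
    glRotatef(angle, 0.0f, 0.0f, 1.0f);
    glTranslatef(-centerX, -centerY, -centerZ);

    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, triangle);
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glDisableClientState(GL_VERTEX_ARRAY);

    glPopMatrix();
}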
Check out Chapter 3 of the OpenGL Programming Guide for more information.

Warping an image on the iphone with OpenGL

I am fairly new to programming and I'm doing it, at this point, just to educate myself and have fun.
I'm having a lot of trouble understanding some OpenGL stuff despite having read this great article here. I've also downloaded and played around with an example from the apple developer site that uses a .png image for a sprite. I do eventually want to use an image.
All I want to do is take an image and warp it such that its four corners end up at four different x, y coordinates that I supply. This would be on a timer of sorts (CADisplayLink?) with one or more of these points changing at each moment. I just want to stretch it between these dynamic points.
I'm just having trouble understanding exactly how this works. As I've understood some example code over at the developer center, I can use:
glVertexPointer(2, GL_FLOAT, 0, spriteVertices);
where spriteVertices is something like:
const GLfloat spriteVertices[] = {
-0.90f, -.85f,
0.95f, -0.83f,
-0.85f, 0.85f,
0.80f, 0.80f,
};
The problem is that I don't understand what the numbers actually mean, why some have negatives in front of them, and where they are counted from to get the four corners. How would I need to change the normal x, y coordinates that I get in order to plug them into this? (The numbers I would have for x, y wouldn't look like numbers between 0 and 1, would they?) I would like something akin to per-pixel accuracy.
Any help is greatly appreciated even if it's just a link to more reading. I'm having trouble finding resources for a newb.
It isn't as complicated as it seems at first. Each pair of numbers is an x, y position in OpenGL's normalized coordinate space (assuming the sample hasn't set up a projection matrix): both axes run from -1.0 at one edge to +1.0 at the other, with (0, 0) at the center of the drawable area. So 0.80f, 0.80f means 80% of the way from the center toward the right and top edges, and -0.80f, -0.80f means 80% of the way toward the left and bottom edges; the negatives just switch the sides. A point of note: OpenGL's y axis points up (as if you were looking up a building from the ground), while the iPhone's UIKit y axis points down (as though you were reading a book).
To convert a pixel coordinate into this space, map it from [0, width] (or [0, height]) to [-1, 1]; for a 1024-pixel-wide drawable, pixel x = 922 becomes roughly 2 * 922 / 1024 - 1 ≈ 0.8.
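Here is a small sketch of that conversion (the function name and corner ordering are illustrative, matching the bottom-left, bottom-right, top-left, top-right order of the sprite example above, and assuming no projection matrix is applied):
#import <OpenGLES/ES1/gl.h>
#import <CoreGraphics/CoreGraphics.h>

// Converts four corner points given in pixels into the normalized [-1, 1]
// coordinates that a vertex array like spriteVertices expects.
// If the pixel points come from UIKit (origin at the top-left, y pointing down),
// flip them first: point.y = drawableHeight - point.y.
static void FillSpriteVertices(GLfloat vertices[8],
                               CGPoint bottomLeft, CGPoint bottomRight,
                               CGPoint topLeft, CGPoint topRight,
                               CGFloat drawableWidth, CGFloat drawableHeight)
{
    CGPoint corners[4] = { bottomLeft, bottomRight, topLeft, topRight };
    for (int i = 0; i < 4; i++) {
        // Map x from [0, width] to [-1, 1] and y from [0, height] to [-1, 1].
        vertices[2 * i]     = (GLfloat)(2.0 * corners[i].x / drawableWidth  - 1.0);
        vertices[2 * i + 1] = (GLfloat)(2.0 * corners[i].y / drawableHeight - 1.0);
    }
}
You would then hand the filled array to glVertexPointer(2, GL_FLOAT, 0, vertices); as in the sample, updating it whenever your CADisplayLink fires.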
This tutorial is for textures, but it is amazing and really helps you learn the coordinate systems:
http://iphonedevelopment.blogspot.com/2009/05/opengl-es-from-ground-up-part-6_25.html

How to determine if iPad user taps within an irregular shaped image?

I've hooked up a UITapGestureRecognizer to a UIImageView containing the image I'd like to display on an iPad screen and am able to consume the user taps just fine. However, my image is that of a hand on a table and I'd like to know if the user has tapped on the hand or on the table part of the image. I can get the x,y coordinates of the user tap with CGPoint tapLocation = [recognizer locationInView:self.view]; but I'm at a loss for how to map that CGPoint to, say, the region of the image that contains the hand vs. the region that contains the table. Everything I've read so far deals with determining if a CGPoint is in a particular rectangular area, but what if you need to determine if that CGPoint is located in the boundaries of a more irregular shape? Is that even possible? Any suggestions or just pointing me in the right direction would be a big help. Thanks!
You could use pointInside:withEvent: to define the hit area programmatically.
To elaborate, you just take the point and evaluate whether it falls in the area you're after with a series of if statements. If it does, return YES; if it doesn't, return NO. If this is related to this post, then you could use a circular hit test, comparing the distance from the point to the center of your circle using the Pythagorean theorem.
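For example, a minimal sketch of that circular test in a view subclass (the class name and the center/radius choices are illustrative):
#import <UIKit/UIKit.h>

@interface HandImageView : UIImageView
@end

@implementation HandImageView

- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event {
    // Treat the tappable region as a circle in the middle of the view;
    // adjust center/radius to match where the hand actually is.
    CGPoint center = CGPointMake(CGRectGetMidX(self.bounds), CGRectGetMidY(self.bounds));
    CGFloat radius = MIN(self.bounds.size.width, self.bounds.size.height) / 2.0;

    CGFloat dx = point.x - center.x;
    CGFloat dy = point.y - center.y;
    // Pythagorean theorem: inside if the distance to the center is <= the radius.
    return (dx * dx + dy * dy) <= (radius * radius);
}

@end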
late to the party, but the core tool you want here is a "point in polygon" routine. this is a generic approach, independent of iOS. google has lots of info, but the general approach is:
1) define your closed polygon. (it sounds like this might be a bit of work in your case.)
2) choose any second point not equal to your original point (yes, any point).
3) for each edge in the polygon, determine whether the ray from your original point through that second point intersects the edge. this requires a line-segment-intersects-ray routine, also available on the 'tubes.
4) if the number of intersections is odd, the point is inside the polygon; if the count is even, it's outside.
for general geometry-type issues, i highly recommend Paul Bourke: http://local.wasp.uwa.edu.au/~pbourke/geometry/insidepoly/
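a minimal sketch of this even-odd crossing test (using the common choice of a horizontal ray, so no separate ray-intersection routine is needed):
#include <CoreGraphics/CoreGraphics.h>
#include <stdbool.h>

// Casts a horizontal ray from the test point and counts how many polygon
// edges it crosses. An odd count means the point is inside.
static bool PointInPolygon(CGPoint p, const CGPoint *poly, int count)
{
    bool inside = false;
    for (int i = 0, j = count - 1; i < count; j = i++) {
        // Does the edge poly[j] -> poly[i] straddle the horizontal line through p,
        // and does the crossing point lie to the right of p?
        if (((poly[i].y > p.y) != (poly[j].y > p.y)) &&
            (p.x < (poly[j].x - poly[i].x) * (p.y - poly[i].y) /
                       (poly[j].y - poly[i].y) + poly[i].x)) {
            inside = !inside;
        }
    }
    return inside;
}
you would trace the outline of the hand once, store it as an array of CGPoints, and call this with the tap location converted into the same coordinate space.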
You can use a bounding rectangle that covers most or all of the hand.
If the user is using his finger to tap either the hand or the table, I doubt that you want him or her to be extremely precise with the tap.
An extension of the bounding rectangle answer: you could define several smaller bounding rectangles that together approximate the hand without covering the rest of the screen.
OR
you could keep a list of rectangles, one for each of your objects, and put the hand's rectangle at the end of the list. That way, if a tap lands on button X in the top right of the screen, which is technically also inside the hand rectangle, button X wins because its rectangle is found first.
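A minimal sketch of that "first matching rectangle wins" lookup (the names and rectangle values are purely illustrative):
#import <UIKit/UIKit.h>

typedef NS_ENUM(NSInteger, TapTarget) {
    TapTargetNone,
    TapTargetButtonX,
    TapTargetHand,
};

static TapTarget TargetForTap(CGPoint tap)
{
    // Ordered by priority: specific controls first, the broad hand rect last.
    CGRect rects[]      = { CGRectMake(700, 20, 80, 80),       // button X
                            CGRectMake(200, 100, 500, 600) };  // hand
    TapTarget targets[] = { TapTargetButtonX, TapTargetHand };

    for (size_t i = 0; i < sizeof(rects) / sizeof(rects[0]); i++) {
        if (CGRectContainsPoint(rects[i], tap)) {
            return targets[i];  // first match wins
        }
    }
    return TapTargetNone;
}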
Define the shape by a black-and-white bitmap (1 bit per pixel) and check whether the bit for the tapped pixel is set (a small sketch is at the end of this answer). This would eat a lot of memory if you had a lot of large shapes, but for one bitmap with a hand it should not be a big deal.
Define the shape as a polygon. Then you need to do a point-in-polygon test. Wikipedia has a wonderful article on this, with links to code here: http://en.wikipedia.org/wiki/Point_in_polygon
iPad libraries might have this already implemented. Sorry, I cannot help you there, not an iPad developer.
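For the bitmap option, the lookup is just bit arithmetic. A minimal sketch (assuming the mask is packed row by row, most significant bit first; match the packing to however you generate the mask):
#include <stdbool.h>
#include <stdint.h>
#include <stddef.h>

static bool MaskBitIsSet(const uint8_t *mask, size_t widthInPixels,
                         size_t x, size_t y)
{
    size_t bytesPerRow = (widthInPixels + 7) / 8;     // rows padded to whole bytes
    size_t byteIndex   = y * bytesPerRow + x / 8;     // which byte holds pixel (x, y)
    uint8_t bit        = (uint8_t)(0x80 >> (x % 8));  // MSB-first within the byte
    return (mask[byteIndex] & bit) != 0;
}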