Camera ScreenToWorldPoint returns a biased value when an output texture is set - unity3d

There are two cameras; the only difference is that one has its output set to a RenderTexture.
Camera1 calculates a wrong result with ScreenToWorldPoint; in other words, there is an error in the coordinate transformation.
Camera2 is correct with ScreenToWorldPoint.
This is a problem I encountered on the job. I built a camera that is a complete copy of the original, except that I didn't set the output.
Believe me, after this operation the result is correct, at least judging from the visual effect.
Why does setting the output have an impact on the coordinate transformation? The position of the camera and its other attributes are no different.

When you set a targetTexture, that texture becomes the camera's "screen", so the size of the texture affects the calculation of screen positions.
You can use ViewportToWorldPoint instead, because viewport coordinates always run from (0,0) to (1,1):
var viewPoint1 = camera1.ScreenToViewportPoint(mousePosition);
var worldPoint2 = camera2.ViewportToWorldPoint(viewPoint1);
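For completeness, here is a fuller sketch of the same idea as a MonoBehaviour. The public camera fields, the use of Input.mousePosition, and the choice of z depth are assumptions about your setup:

using UnityEngine;

public class ViewportPick : MonoBehaviour
{
    public Camera camera1; // the camera whose screen coordinates you have (assumption)
    public Camera camera2; // the camera whose world you want to query (assumption)

    void Update()
    {
        Vector3 mousePosition = Input.mousePosition;
        mousePosition.z = camera2.nearClipPlane; // distance from the camera for the conversion

        // Viewport coordinates are normalized to (0,0)..(1,1), so they can be
        // shared between cameras regardless of any targetTexture size.
        var viewPoint1 = camera1.ScreenToViewportPoint(mousePosition);
        var worldPoint2 = camera2.ViewportToWorldPoint(viewPoint1);
        Debug.Log(worldPoint2);
    }
}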

Related

.dae model disappears when approaching

When I move towards my .dae imported model, it disappears. I'm not "inside" the mesh yet, visibly at least, so I don't know what the deal is.
It looks like your object is closer than the scene-view camera's "Near Clip Plane" and is not being rendered as a result. The default editor near clip plane distance is around 0.3 units, so it shouldn't normally interfere with your objects.
Check that your object's scale is correct. If your object is very small, the scene camera's near clip plane will seem much farther away in comparison and will appear to clip objects more aggressively.
You can create a default "Cube" primitive to check the size of your objects. Cubes are 1 unit in all dimensions by default, and most of the time it's a good idea to roughly map one unit to a real-world scale of 1 meter. If your object is considerably smaller than the cube, you may want to try scaling it up and seeing if that helps.
The F key is a shortcut that automatically zooms to and focuses on an object. Select the GameObject and press F; the problem should be gone.
If the problem is still there, select the Camera and change the Clipping Planes Near to 0.3 and Far to 50000. You can adjust these values until the object stops disappearing, although pressing F should solve it.
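If you prefer to set the clipping planes from script rather than in the Inspector, a minimal sketch using the values suggested above:

using UnityEngine;

public class ClipPlaneSetup : MonoBehaviour
{
    void Start()
    {
        // Adjust the clipping planes of the camera this script is attached to.
        Camera cam = GetComponent<Camera>();
        cam.nearClipPlane = 0.3f;
        cam.farClipPlane = 50000f;
    }
}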

How to rotate an image with content on the same spot?

I have an image like below
Then I want to rotate it, but I don't want its position to change.
For example the output should look like below
If I do imrotate, it will change its position. Is there any other way to rotate this without changing its position?
The imrotate function rotates the entire image by the specified angle. What you want is to rotate only a part of the image. For that you'll have to specify which part you want to rotate; formally speaking, this is the rectangle in which the symbol is located.
The coordinates of this rectangle can be found by selecting all rows and columns that contain any black pixel. This can be done by summing the black-pixel mask along the first dimension and finding the first and last non-zero entries (the columns), and doing the same along the second dimension (the rows).
sx = find(sum(im==0,1),1,'first');  % first column containing a black pixel
ex = find(sum(im==0,1),1,'last');   % last column containing a black pixel
sy = find(sum(im==0,2),1,'first');  % first row containing a black pixel
ey = find(sum(im==0,2),1,'last');   % last row containing a black pixel
The relevant part of the image is then
im(sy:ey,sx:ex)
Now you can rotate only this part of the image and save it to the same location within the whole image:
im(sy:ey,sx:ex) = imrotate(im(sy:ey,sx:ex),180);
with the desired result:
Note: this will only work for 180° rotations, such as in the example you provided. If you rotate by any other angle, e.g. 90° or an arbitrary angle such as 23°, the output of imrotate will generally not have the same size as the input, so the assignment im(sy:ey,sx:ex) = ... will throw an error.
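If you do need other angles, one option worth trying is imrotate's 'crop' bounding-box mode, which keeps the output the same size as the input at the cost of clipping whatever falls outside the original rectangle. A sketch; note that imrotate pads uncovered corners with zeros (black, in this image), so you may need to clean those up afterwards:

% Rotate the region by an arbitrary angle while keeping its size,
% so the assignment back into the image stays valid.
im(sy:ey,sx:ex) = imrotate(im(sy:ey,sx:ex), 23, 'nearest', 'crop');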

UIImageView rotation affecting position?

I have a UIImageView that is set to move up and down the screen with the value of the accelerometer, using the following code:
ship.center = CGPointMake(ship.center.x, ship.center.y+shipPosition.y);
Where shipPosition is a CGPoint set in the accelerometerDidAccelerate method using:
shipPosition.y = acceleration.x*60;
Obviously this works fine; it is very simple. I run into trouble when I try to do something equally simple: vary the rotation of the image depending on its acceleration. I do this using:
ship.transform = CGAffineTransformMakeRotation(shipPosition.y);
For some reason this causes a very strange thing to happen: the image snaps back to its origin every time the main method is called. I can see frames where the image moves to where it should be, but then it instantly snaps back.
This problem only happens when the rotation line is in; commented out, it works fine. I have no idea what is going on here. I have done this many times in different apps and I never had such a problem. In fact, I copied this code from a different app of mine where it works fine.
EDIT:
What really confuses me is when I change the angle of the rotation from the acceleration to the position of the ship using:
ship.transform = CGAffineTransformMakeRotation(ship.center.y/10);
When I do this, the ship actually rotates based on the accelerometer but does not move, which is crazy because a changing ship.center.y means the position of the ship is changing, but it's not!!
You should set the transform of your view back to CGAffineTransformIdentity before you set its center coordinates or frame, and after that apply the new transformation.
The frame property returns the transformed coordinates of a view if it is transformed, not the original ones (well, actually the transformed ones are the true coordinates).
Quote from the docs:
Warning: If the transform property is not the identity transform, the value of this property is undefined and therefore should be ignored.
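A minimal sketch of that order of operations, reusing the names from the question:

// Reset to identity before touching center/frame...
ship.transform = CGAffineTransformIdentity;
ship.center = CGPointMake(ship.center.x, ship.center.y + shipPosition.y);
// ...then re-apply the rotation.
ship.transform = CGAffineTransformMakeRotation(shipPosition.y);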
Update/Actual Answer:
Well the actual problem is
shipPosition.y = acceleration.x*60;
since you set the y position in accelerometerDidAccelerate.
The acceleration won't remember its old value. So if you move your device it will produce a peak, and as you slow down it will decelerate again.
Your ship will be at +/-60 at the highest acceleration, but when you stop moving your device shipPosition.y will be back at 0.
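If the intent is for the ship to hold its position when the device stops moving, one possible fix (not from the original answer; the scale factor is illustrative) is to accumulate the input instead of overwriting the position:

// Accumulate instead of overwrite, so the ship stays put
// when acceleration returns to zero.
shipPosition.y += acceleration.x * 2.0;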
CGAffineTransformMakeRotation expects the angle in radians, not degrees.
radians = degrees * M_PI / 180.0

OpenGL ES: Rotating 3d model around itself

I'm playing with OpenGL ES on iPhone and I'm trying to rotate a model by panning with a finger. I discovered the open source app Molecules, which lets you do that, and I'm looking at its code; but when it comes to rotating a model of mine, I'm only able to rotate it around a distant point in space (as if it were a satellite in orbit and I were the fixed planet).
Any suggestion on what can be wrong?
I can post the code later, maybe on demand (many lines).
For the most part, refer to Molecules; you can find it here: MOLECULES
If my memory serves me correctly, you need to translate the model to the origin, rotate it, and then translate it back to its starting position to get the effect you are after.
There is a glTranslate() function. Say the object is at (1,0,0); you should then translate by (-1,0,0) to go to the origin, i.e. translate by the vector going from the center of the object to the origin.
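As a sketch of that translate-rotate-translate pattern in fixed-function OpenGL ES 1.x (objX/objY/objZ, angle, the rotation axis, and drawModel are assumed placeholders for your own values and draw call):

// Calls compose top-down but apply to vertices bottom-up:
glTranslatef(objX, objY, objZ);      // 3. move the object back to its position
glRotatef(angle, 0.0f, 1.0f, 0.0f);  // 2. rotate about the origin
glTranslatef(-objX, -objY, -objZ);   // 1. bring the object's center to the origin
drawModel();                         // hypothetical draw call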
The draw code probably looks roughly like this:
glLoadIdentity();
glTranslate(0, 0, -10);  // applied second: push the molecule into view
glRotate(...);           // applied first: spin about the molecule's local origin
drawMolecule();
Now it's important to realize that these transformations are applied in reverse order. If, in drawMolecule, we specify a vertex, then this vertex will first be rotated about the axis given to glRotate (which by definition passes through the local origin of the molecule), and then be translated 10 units in the −z direction.
This makes sense, because glTranslate essentially means: "translate everything that comes after this". This includes the glRotate call itself, so the result of the rotation also gets translated. Had the calls been reversed, then the result of the translation would have been rotated, which results in a rotation about an axis that does not pass through the origin anymore.
Bottom line: to rotate an object about its local origin, put the glRotate call last.

Screen-to-World coordinate conversion in OpenGLES an easy task?

The Screen-to-world problem on the iPhone
I have a 3D model (CUBE) rendered in an EAGLView and I want to be able to detect when I am touching the center of a given face (From any orientation angle) of the cube. Sounds pretty easy but it is not...
The problem:
How do I accurately relate screen coordinates (the touch point) to world coordinates (a location in OpenGL 3D space)? Sure, converting a given point into a 'percentage' of the screen/world axis might seem the logical fix, but problems arise when I need to zoom or rotate the 3D space. Note: rotating and zooming in and out of the 3D space will change the relationship of the 2D screen coords to the 3D world coords... Also, you'd have to allow for the 'distance' between the viewpoint and objects in 3D space. At first this might seem like an 'easy task', but that changes when you actually examine the requirements. And I've found no examples of people doing this on the iPhone. How is this normally done?
An 'easy' task?
Sure, one might undertake the task of writing an API to act as a go-between between screen and world, but creating such a framework would require some serious design and would likely take time -- NOT something that can be one-manned in 4 hours... And 4 hours happens to be my deadline.
The question:
What are some of the simplest ways to know if I touched specific locations in 3D space in the iPhone OpenGL ES world?
You can now find gluUnProject in http://code.google.com/p/iphone-glu/. I've no association with the iphone-glu project and haven't tried it yet myself, just wanted to share the link.
How would you use such a function? This PDF mentions that:
The Utility Library routine gluUnProject() performs this reversal of the transformations. Given the three-dimensional window coordinates for a location and all the transformations that affected them, gluUnProject() returns the world coordinates from where it originated.
int gluUnProject(GLdouble winx, GLdouble winy, GLdouble winz,
                 const GLdouble modelMatrix[16], const GLdouble projMatrix[16],
                 const GLint viewport[4],
                 GLdouble *objx, GLdouble *objy, GLdouble *objz);
Map the specified window coordinates (winx, winy, winz) into object coordinates, using transformations defined by a modelview matrix (modelMatrix), projection matrix (projMatrix), and viewport (viewport). The resulting object coordinates are returned in objx, objy, and objz. The function returns GL_TRUE, indicating success, or GL_FALSE, indicating failure (such as a noninvertible matrix). This operation does not attempt to clip the coordinates to the viewport or eliminate depth values that fall outside of glDepthRange().
There are inherent difficulties in trying to reverse the transformation process. A two-dimensional screen location could have originated from anywhere on an entire line in three-dimensional space. To disambiguate the result, gluUnProject() requires that a window depth coordinate (winz) be provided and that winz be specified in terms of glDepthRange(). For the default values of glDepthRange(), winz at 0.0 will request the world coordinates of the transformed point at the near clipping plane, while winz at 1.0 will request the point at the far clipping plane.
Example 3-8 (again, see the PDF) demonstrates gluUnProject() by reading the mouse position and determining the three-dimensional points at the near and far clipping planes from which it was transformed. The computed world coordinates are printed to standard output, but the rendered window itself is just black.
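For illustration, a typical usage sketch with desktop GL names (on the iPhone you would go through the port mentioned above; winx and winy stand for your touch coordinates and are assumptions here):

GLdouble model[16], proj[16];
GLint view[4];
glGetDoublev(GL_MODELVIEW_MATRIX, model);
glGetDoublev(GL_PROJECTION_MATRIX, proj);
glGetIntegerv(GL_VIEWPORT, view);

/* Window y grows upward in GL, so flip the touch coordinate. */
GLdouble wy = view[3] - winy;

/* Unproject at the near (winz = 0) and far (winz = 1) planes. */
GLdouble nx, ny, nz, fx, fy, fz;
gluUnProject(winx, wy, 0.0, model, proj, view, &nx, &ny, &nz);
gluUnProject(winx, wy, 1.0, model, proj, view, &fx, &fy, &fz);
/* The segment (nx,ny,nz) -> (fx,fy,fz) is the pick ray in world space. */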
In terms of performance, I found this quickly via Google as an example of what you might not want to do using gluUnProject, with a link to what might lead to a better alternative. I have absolutely no idea how applicable it is to the iPhone, as I'm still a newb with OpenGL ES. Ask me again in a month. ;-)
You need the OpenGL projection and modelview matrices. Multiply them to obtain the modelview-projection matrix. Invert this matrix to get a matrix that transforms clip-space coordinates into world coordinates. Transform your touch point so it corresponds to clip coordinates: the center of the screen should be zero, while the edges should be +1/-1 for X and Y respectively.
Construct two points, one at (0,0,0) and one at (touch_x,touch_y,-1), and transform both by the inverse modelview-projection matrix.
Do the inverse of a perspective divide.
You should get two points describing a line from the center of the camera into "the far distance" (the far plane).
Do picking based on simplified bounding boxes of your models. You should be able to find ray/box intersection algorithms aplenty on the web; a sketch of these steps follows below.
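A rough sketch of those steps in C, using a common variant that unprojects the touch point at the near and far planes. The mat4_* helpers, touch_x/touch_y, and screen_w/screen_h are hypothetical names you would supply yourself; matrices are assumed column-major:

/* Build and invert the modelview-projection matrix. */
float mvp[16], inv[16];
mat4_mul(mvp, projection, modelview);   /* hypothetical helper */
mat4_invert(inv, mvp);                  /* hypothetical helper */

/* Touch point in clip coordinates: center = 0, edges = +/-1. */
float x = 2.0f * touch_x / screen_w - 1.0f;
float y = 1.0f - 2.0f * touch_y / screen_h;  /* screen y is flipped */

/* One point on the near plane, one on the far plane. */
float near_pt[4] = { x, y, -1.0f, 1.0f };
float far_pt[4]  = { x, y,  1.0f, 1.0f };
float a[4], b[4];
mat4_mul_vec4(a, inv, near_pt);         /* hypothetical helper */
mat4_mul_vec4(b, inv, far_pt);

/* Inverse of the perspective divide. */
for (int i = 0; i < 3; ++i) { a[i] /= a[3]; b[i] /= b[3]; }
/* The ray from a to b can now be tested against bounding boxes. */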
Another solution is to paint each of the models in a slightly different color into an offscreen buffer and read back the color at the touch point, telling you which object was touched.
Here's source for a cursor I wrote for a little project using Bullet physics:
// Convert the mouse position to normalized device coordinates.
float x = ((float)mpos.x / screensize.x) *  2.0f - 1.0f;
float y = ((float)mpos.y / screensize.y) * -2.0f + 1.0f;

// Unproject the far point and undo the perspective divide.
p2 = renderer->camera.unProject(vec4(x, y, 1.0f, 1));
p2 /= p2.w;

// Ray start: a point just in front of the camera position.
vec4 pos = activecam.GetView().col_t;
p1 = pos + (((vec3)p2 - (vec3)pos) / 2048.0f * 0.1f);
p1.w = 1.0f;

// Cast the ray through the Bullet world and react to the closest hit.
btCollisionWorld::ClosestRayResultCallback rayCallback(
    btVector3(p1.x, p1.y, p1.z), btVector3(p2.x, p2.y, p2.z));
game.dynamicsWorld->rayTest(btVector3(p1.x, p1.y, p1.z),
                            btVector3(p2.x, p2.y, p2.z), rayCallback);
if (rayCallback.hasHit())
{
    btRigidBody* body = btRigidBody::upcast(rayCallback.m_collisionObject);
    if (body == game.worldBody)
    {
        renderer->setHighlight(0);
    }
    else if (body)
    {
        Entity* ent = (Entity*)body->getUserPointer();
        if (ent)
        {
            renderer->setHighlight(dynamic_cast<ModelEntity*>(ent));
            //cerr << "hit " << ent->getName() << endl;
        }
    }
}
Imagine a line that extends from the viewer's eye through the screen touch point into your 3D model space. If that line intersects any of the cube's faces, then the user has touched the cube.
Two solutions present themselves. Both of them should achieve the end goal, albeit by a different means: rather than answering "what world coordinate is under the mouse?", they answer the question "what object is rendered under the mouse?".
One is to draw a simplified version of your model to an off-screen buffer, rendering the center of each face using a distinct color (and adjusting the lighting so color is preserved identically). You can then detect those colors in the buffer (e.g. pixmap), and map mouse locations to them.
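As a sketch of that first, color-buffer approach in fixed-function OpenGL ES 1.x: drawFace and faceCount are hypothetical placeholders, and encoding the face ID in the red channel is just one possible scheme.

/* Render each face in a unique flat color into an offscreen buffer. */
glDisable(GL_LIGHTING);
glDisable(GL_TEXTURE_2D);
for (int i = 0; i < faceCount; ++i) {
    glColor4f((i + 1) / 255.0f, 0.0f, 0.0f, 1.0f);  /* encode face ID in red */
    drawFace(i);                                    /* hypothetical helper */
}

/* Read back the pixel under the touch point (GL's y axis is flipped). */
GLubyte pixel[4];
glReadPixels(touch_x, viewport_h - touch_y, 1, 1,
             GL_RGBA, GL_UNSIGNED_BYTE, pixel);
int hitFace = pixel[0] - 1;  /* -1 means background */
glEnable(GL_LIGHTING);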
The other is to use OpenGL picking. There's a decent-looking tutorial here. The basic idea is to put OpenGL in select mode, restrict the viewport to a small (perhaps 3x3 or 5x5) window around the point of interest, and then render the scene (or a simplified version of it) using OpenGL "names" (integer identifiers) to identify the components making up each face. At the end of this process, OpenGL can give you a list of the names that were rendered in the selection viewport. Mapping these identifiers back to original objects will let you determine what object is under the mouse cursor.
Google for opengl screen to world (for example, there’s a thread on GameDev.net where somebody wants to do exactly what you are looking for). There is a gluUnProject function that does precisely this, but it’s not available on iPhone, so you have to port it (see this source from the Mesa project). Or maybe there’s already some publicly available source somewhere?