iPhone picking/unproject when in landscape view (projection matrix is rotated)

I am trying to get 'picking' working in a 3D scene, where the view is rotated such that the iPhone is being held in a landscape mode. I'm using OpenGL ES 2.0 (so all shaders, no fixed-function pipeline).
I'm performing the unproject from within the rendering code and immediately drawing the resulting ray using GL_LINES (the ray is only calculated the first time I touch the screen, so afterwards I can move the camera around to observe the resulting line from various angles).
My unproject code/call is fine (there are lots of examples of gluUnProject online). My matrix-inversion code is fine (I even compared it with Excel for a few matrices). However, the resulting ray is off by at least 5-15 degrees from where I actually 'clicked' (in the Simulator it really is a click, so I'm expecting a lot more precision from the unproject).
My view is rotated to landscape (after I create the perspective-projection matrix, I rotate it around the Z axis by -90 degrees; the aspect ratio remains the portrait one). I believe this is where the math goes wrong.
Does anyone have any experience doing picking/unprojection with specifically a landscape view?

Is it possible you simply have the field of view off? Assuming you've stuck to something a lot like the traditional pipeline, if you were inverting your modelview matrix and then using generic unproject code (i.e. code that assumes a 90-degree field of view in both directions to fill eye space), that would explain it.
A quick diagnostic test is to see how far off it is for different touches. Touches nearer the centre of projection should be closer to the correct answer.
On a screen with square pixels like the iPhone, the aspect ratio is just the proportion of the horizontal field compared to the vertical. So if you want to be unscientific about it, find the field of view you're using, say f, and try multiplying your results by 90/f or f/90. If that doesn't work, try also throwing a factor of 480/320 or 320/480 in there.
A better solution is to follow your code through, figure out what your actual horizontal and vertical fields of view are, and multiply your results by those over 90.
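To make that concrete, here's a minimal sketch (plain C, illustrative names, not taken from your code) of building the eye-space pick ray from the actual vertical FOV and aspect ratio; this is exactly the step where a generic 90-degree unproject silently goes wrong. Since you rotated the projection rather than the viewport, map the touch into the projection's own portrait frame first (swap/negate x and y for the -90 degree rotation) before calling something like this:
#include <math.h>

typedef struct { float x, y, z; } Vec3;

/* Build an eye-space ray for a touch point, using the real FOV/aspect instead of
 * the implicit 90-degree assumption of generic unproject snippets. */
Vec3 eyeRayFromTouch(float touchX, float touchY,   /* touch in pixels              */
                     float viewW, float viewH,     /* viewport size in pixels      */
                     float fovYDegrees,            /* vertical FOV of projection   */
                     float aspect)                 /* width/height used to build it */
{
    /* Normalised device coordinates in [-1, 1]; flip y because screen y grows down. */
    float ndcX =  2.0f * touchX / viewW - 1.0f;
    float ndcY = -(2.0f * touchY / viewH - 1.0f);

    /* tan(45 deg) == 1, which is why a 90-degree assumption "works" by accident. */
    float tanHalfFovY = tanf(fovYDegrees * 0.5f * (float)M_PI / 180.0f);

    Vec3 ray;
    ray.x = ndcX * tanHalfFovY * aspect;
    ray.y = ndcY * tanHalfFovY;
    ray.z = -1.0f;   /* eye space looks down -Z; transform by the inverse view matrix for a world ray */
    return ray;
}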

Related

Why are my Unity 2.5D textures changing draw order based on camera x position?

I have a problem where images change their draw order depending on the x position of my camera. The image linked below shows the problem. I want the images' draw order not to change relative to the camera.
https://ibb.co/syHLY8d
Looks like the issue is that they're exactly the same depth from the camera, and when the camera moves along the X axis, floating-point rounding errors make the depth values slightly different and sort in a different order.
To fix this case you need to move one object slightly further away from the camera. Personally, though, I'd say this is generally not an issue worth worrying about; lots of games have small sorting issues like this and most people won't notice.

UnityVR - Having both renders display the exact same image for each eye

When Unity builds a VR project, by default it is set to make the two views stereoscopic. It slightly offsets the camera position of one eye to give the user a sense of depth.
For example a square will appear slightly to the left on the right view compared to the left view.
I want to make the camera truly monoscopic by removing the offset that is created when I build the project. Each camera should render all objects in exactly the same position for both eyes.
One of the things I tried was creating two cameras and setting them to the left and right eye. Then I manually set the position/rotation of one camera until it looked monoscopic.
It worked fine on my Pixel phone, but as soon as I put the project on my test phone I noticed that the difference in resolutions messed up the view I was going for. The blocks were not in the same position when I looked at both renders.
If anyone has any solutions or ideas as to how I can go about this, I would greatly appreciate it.
Thank you!
You can still use 2 cameras, but instead of offsetting them, you can just make each camera's viewport half the screen width.
Make 2 cameras and set their positions to exactly the same.
On the left-eye camera, set the viewport width (W) to 0.5 and the viewport X to 0.
On the right-eye camera, set the viewport width (W) to 0.5 and the viewport X to 0.5.
You should now have 2 cameras rendering the exact same thing, but twice across the screen, with no sense of depth.

Measuring object width

I'd like to develop an iPhone app that does the following:
1. Starts the device camera.
2. Places a layer on the screen containing a stretchable frame for the user to fit to a desired object.
3. Measures the object's width & height.
You may look at this app which does practically what I need and more:
http://itunes.apple.com/us/app/easymeasure-measure-your-camera!/id349530105?mt=8
Note that it doesn't need to be super accurate and can definitely tolerate some error.
Any clue how to do it?
Thanks
The clue: Geometry and Trigonometry.
By knowing the camera's field-of-view angles, entering the height of the camera above the ground, and assuming a planar (i.e. flat) ground, you can work everything out with basic geometry and trigonometry.
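As a rough sketch of that geometry (plain C, illustrative names; assumes flat ground, a known camera height, and a tilt angle you could read from the accelerometer):
#include <math.h>

/* Straight-line distance from the camera to the ground point at the centre of
 * the image, given the camera height above flat ground and the camera's tilt
 * below the horizon (angles in radians). */
double distanceToCentrePoint(double cameraHeight, double tiltBelowHorizon)
{
    return cameraHeight / sin(tiltBelowHorizon);
}

/* Approximate real-world width of an object at that distance, given the
 * horizontal field of view and the fraction of the frame width (0..1) that the
 * user's stretchable frame covers. */
double objectWidth(double distance, double horizontalFov, double fractionOfFrame)
{
    double frameWidthAtDistance = 2.0 * distance * tan(0.5 * horizontalFov);
    return fractionOfFrame * frameWidthAtDistance;
}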

Manipulating Gyroscope / Accelerometer Values obtained from iPhone 4

I'm developing a project for my university to manipulate gyroscope/accelerometer values obtained from the iPhone 4, but I'm stuck on a mathematical issue and I hope you guys can help me out.
I'll give you an example of what it's about:
Your iPhone is face up and you move it UP, along the Y axis.
Your iPhone is face right and you move it UP, along the X axis this time (since you rotated the iPhone 90 degrees).
The second time, the computer interprets that I've moved the iPhone to the RIGHT, but that's wrong. I moved it up; my axes were just rotated since the iPhone was facing right.
What do I need?
I need a way to VIRTUALLY position the iPhone back face up (where the 3 axes are correct) and give each axis its correct movement value.
If the iPhone is turned 90 degrees, then I can easily swap the X and Z axes and it's correct. But I want to make it work for any angle of rotation.
I would be really thankful if anyone could help me with some sort of pseudo-algorithm or mathematical description of what to do.
NOTE: I only need a way to compensate all three axes according to the iPhone's rotation.
Update:
I don't actually need the precise values, since I'm making a graph comparison between all the records I get from the gyroscope. Let me make it clearer.
-> You draw a LETTER just by moving the iPhone in the air, and my application will recognize the letter you just drew. The method I use for recognition is based on the TFT algorithm, comparing against a database of sample values taken from letters I've previously drawn.
My point is: it doesn't really matter what values I get or what they represent. All I need is for the graphs to be equal even if the iPhone is in a different position. It's quite hard to explain, but if you draw the letter 'P' with the iPhone turned UP, the resulting graph will be different than if you draw the 'P' with the iPhone turned RIGHT.
So I need to compensate the axes back to their original orientation; that way I'll always get similar graphs.
This post was before iOS5 was released. FYI to anyone coming here, DeviceMotion relative to world - multiplyByInverseOfAttitude shows how to transform device-relative acceleration values to earth-relative for iOS5 and above.
So, what you want is to convert from the iPhone's coordinate system (or object/tangent space) to the world coordinate system (or vice versa, whichever way you look at it; it doesn't matter). You know the iPhone's coordinate system because you have gyroscope data. So what you want is to create the object->world transformation matrix and multiply each of the acceleration vectors (from the accelerometer) by it.
Take a look here for a good explanation of tangent space and how to create a tangent-space -> world transformation matrix. If you aren't familiar with 3D/linear math it might be a bit tricky, but it's worth the trouble.
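A minimal sketch of that object->world step in C (illustrative types; the 3x3 matrix is whatever describes the device's current attitude: on newer iOS versions CMDeviceMotion's attitude exposes such a rotation matrix directly, otherwise you can build one by composing rotations from the integrated gyroscope angles):
typedef struct { double x, y, z; } Vec3;
typedef struct { double m[3][3]; } Mat3;   /* row-major device->world rotation */

/* Rotate one device-frame sample (e.g. an accelerometer reading) into the
 * world frame by multiplying it with the attitude matrix. */
Vec3 deviceToWorld(Mat3 r, Vec3 v)
{
    Vec3 w;
    w.x = r.m[0][0] * v.x + r.m[0][1] * v.y + r.m[0][2] * v.z;
    w.y = r.m[1][0] * v.x + r.m[1][1] * v.y + r.m[1][2] * v.z;
    w.z = r.m[2][0] * v.x + r.m[2][1] * v.y + r.m[2][2] * v.z;
    return w;
}
Run every sample through this before building your graphs, and a 'P' drawn with the phone upright and one drawn with the phone on its side should produce much more similar traces.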

Zooming in/out and painting in openGL

I've recently had some issues implementing a zooming feature into a painting application. Please let me start off by giving you some background information.
First, I started off by modifying Apple's glPaint demo app. I think it's a great source, since it shows you how to set up the EAGLView, etc...
Now, what I wanted to do next, was to implement zooming functionality. After doing some research, I tried two different approaches.
1) use glOrthof
2) change the frame size of my EAGLView.
While both ways allow me to perfectly zoom in / out, I experience different problems, when it actually comes to painting while zoomed in.
When I use (1), I have to render the view like this:
glMatrixMode(GL_PROJECTION);                     // switch to the projection matrix
glLoadIdentity();
glOrthof(left, right, bottom, top, -1.0f, 1.0f); // those values have been previously calculated
glDisable(GL_BLEND);                             // draw the background image without blending
// I'm using Apple's Texture2D class here to render an image
[_textures[kTexture_MyImage] drawInRect:[self bounds]];
glEnable(GL_BLEND);                              // re-enable blending afterwards
[self swapBuffers];
Now, let's assume I zoom in a little, THEN I paint, and after that I want to zoom out again. In order to get this to work, I need to make sure that "kTexture_MyImage" always contains the latest changes. In order to do that, I need to capture the screen contents after changes have been made and merge them with the original image. The problem here is that when I zoom in, my screen only shows part of the image (enlarged), and I haven't found a proper way to deal with this yet.
I tried to calculate which part of the screen was enlarged, then do the capturing. After that I'd resize this part to its original size and use yet another method to paste it into the original image at the correct position.
Now, I could go into more detail on how I achieved this, but it's really complicated and I figured there has to be an easier way. There are already several apps out there that do perfectly what I'm trying to achieve, so it must be possible.
As far as approach (2) goes, I can avoid most of the above, since I only change the size of my EAGLView window. However, when painting, the strokes are way off their expected position. I probably need to take the zoom level into account when painting and re-calculate the CGPoints in a different way.
However, if you have done similar things in the past or can give me a hint, how I could implement zooming into my painting app, I'd really appreciate it.
Thanks in advance.
Yes, it is definitely possible.
When it comes to paint programs, you should keep a linked list or tree of the objects to draw, for easy insertion/removal. When the user stops painting (i.e. touchesEnded), you add objects to the data structure containing your scene.
When your user zooms you need to modulate the coordinates of the objects you are drawing with respect to the current viewport, projection, and modelview transforms. In your case, you're not changing the viewport or the modelview transforms so you need only account for the projection transform. You could also implement your zoom using a translation and scale on the modelview matrix but I'll ignore that case for simplicity because it involves inverting the transforms.
The good news is that you are using an orthographic projection so world coordinates correspond to window coordinates when no zooming is in effect. The "world" in your case is a simple canvas that probably corresponds to the size of the device in window coordinates.
Before you add an object to your scene data structure, convert all of the coordinates, using the current projection transform (i.e. the parameters to the glOrthof() call) to world coordinates (i.e. full canvas coordinates). You'll only remain sane if you keep all things in your model in the same coordinate space.
To convert the coordinates, assuming you can never zoom out past the full device dimensions in your glOrthof() call, you'll have to scale them down in proportion to the ratio of your zoomed ortho dimensions to your unzoomed ortho dimensions, then bias them by the difference between your zoomed ortho bottom/left values and those of the original unzoomed ortho values.
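A rough sketch of that scale-and-bias in C (struct and names are just illustrative; plug in whatever you track for the current zoomed glOrthof bounds and the full, unzoomed canvas bounds, and remember to flip the touch's y first if your canvas's y axis points up):
typedef struct { float left, right, bottom, top; } OrthoBounds;
typedef struct { float x, y; } CanvasPoint;

/* Convert a touch point, expressed in unzoomed canvas/window coordinates, into
 * full-canvas coordinates while a zoomed orthographic projection is active. */
CanvasPoint windowToCanvas(float touchX, float touchY,
                           OrthoBounds zoomed,     /* current glOrthof parameters  */
                           OrthoBounds unzoomed)   /* full-canvas ortho parameters */
{
    CanvasPoint p;
    /* Scale by the ratio of zoomed to unzoomed extents... */
    p.x = touchX * (zoomed.right - zoomed.left) / (unzoomed.right - unzoomed.left);
    p.y = touchY * (zoomed.top - zoomed.bottom) / (unzoomed.top - unzoomed.bottom);
    /* ...then bias by the offset of the zoomed window inside the full canvas. */
    p.x += zoomed.left - unzoomed.left;
    p.y += zoomed.bottom - unzoomed.bottom;
    return p;
}
Store the converted points with each stroke object and the strokes will land in the right place regardless of the zoom level in effect when they're replayed.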