Is there a ready-made class or formula somewhere that I can use to control my viewpoint with the accelerometer's/compass's XYZ values?
I want to achieve the same view control that acrossair uses.
I have the parts (an OpenGL space, filtered accelerometer and compass values, and a cubic panoramic view mapped to a cube around my origin).
Can somebody at least suggest where to start?
I've since gotten into the problem, so the posts about the steps of the solution can be followed here:
xCode - augmented reality at gotoandplay.freeblog.hu
A brief sketch of the whole process is below:
How to get the transformation matrix from the raw iPhone sensor (accelerometer, magnetometer) data http://gotoandplay.freeblog.hu/files/2010/06/iPhoneDeviceOrientationTransformationMatrix-thumb.jpg
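For reference, here is a minimal sketch of one common way to build such a matrix from the two filtered sensor vectors: construct an east/north/up basis with cross products. This shows the general technique, not the code from the blog post, and the names, signs and axis conventions are assumptions you will likely have to adjust for the iPhone's coordinate frame:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 Cross(const Vec3 &a, const Vec3 &b) {
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}
static Vec3 Normalized(const Vec3 &v) {
    float len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return { v.x/len, v.y/len, v.z/len };
}

// Builds a column-major 4x4 rotation from the filtered accelerometer and
// magnetometer vectors (both in device coordinates). The columns are the
// east / north / up directions expressed in the device frame. Whether you
// feed this matrix or its transpose to OpenGL depends on whether you treat
// it as a model or a view transform.
void RotationFromSensors(Vec3 accel, Vec3 magnetic, float out[16]) {
    Vec3 up    = Normalized(accel);                // reaction to gravity; flip sign if needed
    Vec3 east  = Normalized(Cross(magnetic, up));  // perpendicular to both
    Vec3 north = Cross(up, east);                  // completes the orthonormal basis

    const float m[16] = {
        east.x,  east.y,  east.z,  0.0f,   // column 0
        north.x, north.y, north.z, 0.0f,   // column 1
        up.x,    up.y,    up.z,    0.0f,   // column 2
        0.0f,    0.0f,    0.0f,    1.0f    // column 3
    };
    for (int i = 0; i < 16; ++i) out[i] = m[i];
}
```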
If all you are looking for is a means of rotating the model view matrix for your scene, you could look at the source code to my Molecules application or an even simpler cube example that I wrote for my iPhone development class. Both contain code to incrementally rotate the model view matrix in response to touch input, so you would just need to replace the touch input with accelerometer values.
Additionally, Apple's GLGravity sample application does something very similar to what you want.
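To give a rough idea of what "replace the touch input with accelerometer values" can boil down to in a fixed-function (OpenGL ES 1.1) draw loop, here is a hypothetical sketch, assuming you already have a column-major 4x4 rotation derived from the sensors:

```cpp
#include <OpenGLES/ES1/gl.h>

// Called once per frame. Instead of accumulating touch deltas, the
// orientation computed from the filtered sensor values is loaded straight
// into the model-view matrix.
void DrawSceneWithDeviceOrientation(const GLfloat deviceRotation[16]) {
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glMultMatrixf(deviceRotation);   // orient the scene to the device
    // ... submit the cube / panorama geometry here ...
}
```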
I have read that it's possible to create a depth image from a stereo camera setup (where two cameras of identical focal length/aperture/other camera settings take photographs of an object from an angle).
Would it be possible to take two snapshots almost immediately after each other (on the iPhone, for example) and use the differences between the two pictures to develop a depth image?
Small amounts of hand movement and shaking will obviously rock the camera, creating some angular displacement, and perhaps that displacement can be calculated by looking at the general angle of displacement of features detected in both photographs.
Another way to look at this problem is as structure-from-motion, a nice review of which can be found here.
Generally speaking, resolving spatial correspondence can also be factored as a temporal correspondence problem. If the scene doesn't change, then taking two images simultaneously from different viewpoints - as in stereo - is effectively the same as taking two images using the same camera but moved over time between the viewpoints.
I recently came upon a nice toy example of this in practice - implemented using OpenCV. The article includes some links to other, more robust, implementations.
For a deeper understanding I would recommend you get hold of an actual copy of Hartley and Zisserman's "Multiple View Geometry in Computer Vision" book.
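To make the "same camera moved over time" idea concrete, here is a hedged sketch using OpenCV's block matcher. The file names are placeholders, and it assumes the two frames are already rectified; hand-held "cha-cha" shots would need rectification (or a full structure-from-motion pipeline) first:

```cpp
#include <opencv2/opencv.hpp>

int main() {
    // Two grayscale frames taken a small baseline apart.
    cv::Mat left  = cv::imread("frame_left.png",  cv::IMREAD_GRAYSCALE);
    cv::Mat right = cv::imread("frame_right.png", cv::IMREAD_GRAYSCALE);

    // Block matcher: 64 disparity levels, 21x21 matching window.
    cv::Ptr<cv::StereoBM> bm = cv::StereoBM::create(64, 21);

    cv::Mat disparity16;                 // fixed-point disparities (scaled by 16)
    bm->compute(left, right, disparity16);

    // Normalise to an 8-bit image for inspection; larger values are closer.
    cv::Mat depthVis;
    cv::normalize(disparity16, depthVis, 0, 255, cv::NORM_MINMAX, CV_8U);
    cv::imwrite("crude_depth.png", depthVis);
    return 0;
}
```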
You could probably come up with a very crude depth map from a "cha-cha" stereo image (as it's known in 3D photography circles) but it would be very crude at best.
Matching up the images is EXTREMELY CPU-intensive.
An iPhone is not a great device for doing the number-crunching. Its CPU isn't that fast, and its memory bandwidth isn't great either.
Once Apple lets us use OpenCL on iOS you could write OpenCL code, which would help some.
I have to select any particular object visible in my image on the iPhone.
Basically my project is to segment image objects on the basis of my touch.
The method I am following is to first detect contours of the image and then select a particular sequence based on finger touch.
Is there any other method which would be more robust because I have to run it on video frames?
I am using OpenCV and the iPhone for the project.
Please help if there is any other idea that has been implemented or is feasible to implement.
Have you looked at SIFT or SURF implementations? They both track object features and are resilient (to a certain degree) to rotation, translation and scale.
Also check out FAST, a corner-detection algorithm that might help you; they have an app on the App Store showing how quick it is, too.
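As a minimal starting point, running FAST on a single frame with OpenCV might look like the sketch below (threshold and file names are arbitrary placeholders); tracking your touched object across video frames would then come down to matching these keypoints between consecutive frames:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::Mat frame = cv::imread("frame.png", cv::IMREAD_GRAYSCALE);

    // FAST corner detection with non-maximum suppression, so clusters of
    // adjacent responses collapse into single keypoints.
    std::vector<cv::KeyPoint> corners;
    cv::FAST(frame, corners, 40 /* threshold */, true);

    // Draw the detected corners for visual inspection.
    cv::Mat vis;
    cv::drawKeypoints(frame, corners, vis, cv::Scalar(0, 255, 0));
    cv::imwrite("corners.png", vis);
    return 0;
}
```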
I'm currently trying to write an app that would be able to show the effect of glass, as seen through the iPhone camera.
I'm not talking about simple, uniform glass, but glass like this:
Now I already broke this into two problems:
1) Apply some image filter to the 2D frames presented by the iPhone camera. This has been done and seems possible, e.g. in the app: faceman
2) I need to get the individual lighting properties of a sheet of glass that my client supplies me with. Now, basically, there must be a way to read the information about how the glass distorts and skews the image. I think it might be possible to take a high-res picture of the glass plate laid on a checkerboard image and somehow analyze this (a rough sketch of that analysis is below).
Now, I'm mostly searching for literature and weblinks on how you guys think I could get started on 2). It doesn't need to be exact; in the end I just need something that looks approximately like the sheet of glass I want to show. I don't even know where to search: physics, image filtering, or computational photography books.
EDIT: I'm currently thinking that one easy solution could be bump-mapping the texture on top of the camera feed; I asked another question on this here.
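One hedged way to make the checkerboard idea in 2) concrete: photograph the board with and without the glass from the same camera position, detect the corners in both shots, and treat the per-corner offsets as sparse samples of a displacement map. This assumes OpenCV, that the detector still finds the board through the glass, and that the corners come back in the same order; the board size and file names are placeholders:

```cpp
#include <opencv2/opencv.hpp>
#include <cstdio>
#include <vector>

int main() {
    cv::Mat throughGlass = cv::imread("board_through_glass.png", cv::IMREAD_GRAYSCALE);
    cv::Mat reference    = cv::imread("board_reference.png",     cv::IMREAD_GRAYSCALE);

    const cv::Size pattern(9, 6);   // inner corners of the checkerboard
    std::vector<cv::Point2f> cornersGlass, cornersRef;
    bool okGlass = cv::findChessboardCorners(throughGlass, pattern, cornersGlass);
    bool okRef   = cv::findChessboardCorners(reference,    pattern, cornersRef);
    if (!okGlass || !okRef) return 1;

    // Each offset says how far the glass shifted that point of the image;
    // interpolating the offsets over the whole frame yields a displacement
    // map that the rendering step can use.
    for (size_t i = 0; i < cornersGlass.size(); ++i) {
        cv::Point2f offset = cornersGlass[i] - cornersRef[i];
        std::printf("corner %zu: dx=%.1f dy=%.1f\n", i, offset.x, offset.y);
    }
    return 0;
}
```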
You need to start with OpenGL. You want to effectively have a texture - similar to the one you've got above - displace the texture below it (the live camera view) to give the impression of depth and distortion. This is a 'non-trivial' problem, in that whilst it's a fairly standard problem in its field, if you're coming from a background with no graphics or OpenGL experience you can expect a very steep learning curve.
So in short, the only way you can achieve this realistically on iOS is to use OpenGL, and that should be your starting point. Apple have a few guides on the matter, but you'll be better off looking elsewhere. There are some useful books such as the OpenGL ES 2.0 Programming Guide that can get you off on the right track, but where you start would depend on how comfortable you are with 3D graphics and C.
Just wanted to add that I solved this old question using the refraction example in the Khronos OpenGL ES SDK.
I wrote a blog entry with pictures about it:
simulating windows with refraction
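To sketch the displacement idea itself (not the Khronos sample's actual code): a small GLSL ES 2.0 fragment shader, held in a C++ string ready for glShaderSource, that offsets the live-camera texture lookup by a normal map of the glass. The uniform names and the strength factor are illustrative assumptions:

```cpp
// Compile with glCreateShader(GL_FRAGMENT_SHADER) + glShaderSource as usual.
static const char *kRefractFragmentShader = R"(
precision mediump float;
varying vec2 v_texCoord;
uniform sampler2D u_camera;     // live camera frame
uniform sampler2D u_normalMap;  // per-pixel surface normal of the glass sheet
uniform float u_strength;       // how strongly the glass bends the image

void main() {
    // Normals are stored in [0,1]; remap to [-1,1].
    vec2 n = texture2D(u_normalMap, v_texCoord).xy * 2.0 - 1.0;
    // Shift the camera lookup in the direction the surface tilts.
    gl_FragColor = texture2D(u_camera, v_texCoord + n * u_strength);
}
)";
```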
How do I do morphing of two images in iPhone programming?
Your question is not iPhone-related; the kind of algorithm you are looking for is language-agnostic, since it just works with images.
By the way, it's quite complex to morph two images. Usually you have to:
1) Embed a grid of points over the two images that links the characteristics that should be morphed. For example, if you have two faces, you would use a grid that connects the eyes, the mouth, the ears, the nose, the edge of each face and so on: these two grids tell the morpher how to "translate" a point into another one while blending the two images. This step can be done automatically (with specific software) or by hand; the more points you place, the better your results will be.
2) Then you can do the real morphing sequence: basically you interpolate between the two images, where the parameter you use decides how similar the final result is to the first or the second image.
3) You should also apply some blending effect to actually create a believable result, always using a parametric function according to the morphing position (a minimal sketch of this blending step follows below).
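A minimal sketch of the blending part with OpenCV (the grid-driven warping, which is the hard part, is not shown; both inputs are assumed to be already warped to the intermediate shape):

```cpp
#include <opencv2/opencv.hpp>

// Straight cross-dissolve for morphing position t in [0, 1]:
// t = 0 returns the first image, t = 1 the second.
cv::Mat BlendStep(const cv::Mat &a, const cv::Mat &b, double t) {
    cv::Mat out;
    cv::addWeighted(a, 1.0 - t, b, t, 0.0, out);
    return out;
}
```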
You can use UIView animation to transition from one UIView to another. This should provide some sort of lame morphing.
You can use XMRM, which is written in C++: http://www.cg.tuwien.ac.at/~xmrm/
There is no image morphing API in the iOS SDK.
No, there isn't an API for it. You'll have to do it yourself.
...ask a short question, get a short answer...
I have a few different OBJ files that I am able to parse and display. This code is based on Jeff LaMarche's The Start of a WaveFront OBJ File Loader Class. However, I need some means of detecting what coordinates I have selected within a displayed model. Usually there is one model displayed at a time, but sometimes there will be two or more on the screen, and I want to set up an NSNotificationCenter object to notify other sections of code as to which object is "selected". I have also looked at javacom's "OpenGL ES for iPhone : A Simple Tutorial" and would like to model the behavior of what I'm trying to program after his example.
This is my current line of logic:
Setup a means to detect where a user has touched the screen
Have those coordinates compared with the current coordinates of an OBJ-based model
If they match, indicate said touch as being within the bounds of the object
The touchable set of coordinates must scale with the model. Currently the model is able to scale, so I will most likely need to be able to follow this scaling.
Also note, I don't need to move the model around on the screen. Just detect when it's been touched whether there is one model or several being displayed.
While this is most likely quite simple, I've been stumped by this for months now. I would really appreciate any light others can shed on this topic.
Use gluUnProject on the touch coordinates to get a vector going from the screen into the world, and then intersect it with your models to see if one of them has been touched. gluUnProject isn't available on the iPhone by default, but you can look up implementations of it. http://www.mesa3d.org/ has an open-source implementation.
Read about gluUnProject here: http://web.iiit.ac.in/~vkrishna/data/unproj.html
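A hedged sketch of that approach: unproject the touch at the near and far planes to get a ray, then test the ray against each model's bounding sphere (a simplification for picking; LaMarche's loader internals and exact per-triangle tests are not shown). The sphere centre must be given in the same object space as the modelview matrix you pass in:

```cpp
#include <cmath>

// Prototype matching desktop GLU; on the iPhone you would link an
// open-source implementation such as Mesa's, as suggested above.
extern "C" int gluUnProject(double winX, double winY, double winZ,
                            const double model[16], const double proj[16],
                            const int view[4],
                            double *objX, double *objY, double *objZ);

// Returns true if the touch ray passes within 'radius' of 'center'.
// touchX/touchY are window coordinates; OpenGL's y axis runs bottom-up,
// hence the viewport[3] - touchY flip from UIKit coordinates.
bool TouchHitsSphere(double touchX, double touchY,
                     const double modelview[16], const double projection[16],
                     const int viewport[4],
                     const double center[3], double radius)
{
    double nearP[3], farP[3];
    const double winY = viewport[3] - touchY;

    // Unproject the touch at the near (winZ = 0) and far (winZ = 1) planes.
    gluUnProject(touchX, winY, 0.0, modelview, projection, viewport,
                 &nearP[0], &nearP[1], &nearP[2]);
    gluUnProject(touchX, winY, 1.0, modelview, projection, viewport,
                 &farP[0], &farP[1], &farP[2]);

    // Normalised ray direction from the near point to the far point.
    double d[3] = { farP[0] - nearP[0], farP[1] - nearP[1], farP[2] - nearP[2] };
    const double len = std::sqrt(d[0]*d[0] + d[1]*d[1] + d[2]*d[2]);
    for (int i = 0; i < 3; ++i) d[i] /= len;

    // Closest-point test: distance from the sphere centre to the ray.
    double oc[3] = { center[0] - nearP[0], center[1] - nearP[1], center[2] - nearP[2] };
    const double t = oc[0]*d[0] + oc[1]*d[1] + oc[2]*d[2];
    double dx = oc[0] - t*d[0], dy = oc[1] - t*d[1], dz = oc[2] - t*d[2];
    return dx*dx + dy*dy + dz*dz <= radius*radius;
}
```

Scaling is handled automatically as long as the scale is part of the modelview matrix you pass in, since the unprojected ray then comes back in the model's own (pre-scale) coordinates.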