Conversion of 2D image to 3D [duplicate] - iPhone

I need to convert a 2D image into 3D in my iOS application. How would I do this? What frameworks do I need to use? Will OpenGL ES help with this, or are there other packages in iOS for it? What steps do I need to follow to turn a 2D image into 3D?

I'm not sure how you plan on doing this programmatically, as a 2D image doesn't have any Z-axis information other than 0.
Take a simple 2D square, for example, and suppose you want to make it into a cube. Your square has 4 points of data representing its corners: x1y1, x2y2, x3y3, x4y4. To "convert" that square into a cube you now need to add the z-axis data for those points AND provide the coordinates for the missing 4 sets of data, since your cube has 8 coordinate references (x1y1z1 - x8y8z8), AND specify which points are connected to each other. And that's just assuming a planar shape with no other depth or curvature. You also have no idea what the non-visible sides of the 2D image look like.
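To put rough numbers on that, here is a tiny plain-Python sketch of the data you would have to invent just to extrude a square into a cube; the depth value and the connectivity are pure guesses that the 2D image cannot supply:

```python
# A 2D square: four (x, y) corners; this is all the image gives you
square = [(0, 0), (1, 0), (1, 1), (0, 1)]

# "Converting" to a cube means inventing a z for every existing corner
# plus four entirely new corners, with a depth value from nowhere
depth = 1.0  # arbitrary: nothing in the 2D data determines this
cube = [(x, y, 0.0) for x, y in square] + \
       [(x, y, depth) for x, y in square]

# ...plus the connectivity, which the image doesn't encode either
edges = [(0, 1), (1, 2), (2, 3), (3, 0),  # front face
         (4, 5), (5, 6), (6, 7), (7, 4),  # back face
         (0, 4), (1, 5), (2, 6), (3, 7)]  # sides
```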
The only way I can see this as even slightly feasible is if you allow the user to add their own points on the screen (allowing for manual editing of depth, since your screen can't infer depth) and decide which existing points to connect the new points to. At that point, though, you're no longer doing a conversion...

I think you should clarify a little what you are trying to achieve. There is no way that I know of to automatically recover depth information from a single 2D image.
However, if you are talking about something shot in stereo, then you could look for features present in both images and figure out the depth of each feature relative to the others.
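As a rough illustration of that stereo idea, here is a minimal OpenCV sketch in Python (hypothetical file names) that computes a disparity map from a rectified stereo pair; disparity is inversely proportional to depth:

```python
import cv2

# Rectified stereo pair (hypothetical file names), loaded as grayscale
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block-matching stereo; numDisparities must be a multiple of 16
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right)  # larger disparity = closer feature

# Normalise to 0-255 for viewing
vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX)
cv2.imwrite("disparity.png", vis.astype("uint8"))
```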
And in the case of a video or image sequence there are a few interesting papers; one approach I find particularly interesting is "Depth Extraction from Video Using Non-parametric Sampling" by Kevin Karsch, Ce Liu and Sing Bing Kang.
You can find it here:
http://www.kevinkarsch.com/depthtransfer/eccv12-depthtransfer.pdf

Related

How to detect contours of an object and describe it for comparison on a server with ARKit

I want to detect a shape and then describe it (somehow) to compare it with server data.
So the first question is: is it possible to detect a blob-like shape with ARKit?
To be more specific, let me describe my use case generally.
I want to scan an image with the phone, extract the specific shape, send it to a server, compare the two images on the server (the server image is the real one; the scanned image would be very similar) and then send back some data. I am not asking about the server side; the only server-side question is what I should compare (images using OpenCV, some mathematical description of both images to measure similarity, etc.).
If the question is hard to understand, let me split it into two easier questions:
1) How do I scan a 2D object with an iPhone and save it (trimming the specific shape from its background, given that the object is black and the background white)?
2) How do I describe the scanned object for comparison with an almost identical object?
ARKit has no use here.
You will probably need a lot of CoreImage (for fixing perspective distortion and binarization) and OpenCV logic.
Perhaps Vision can help you a bit with extracting an ROI from the entire frame, especially if the waveform image is located in some kind of rectangle.
Perhaps you can train a custom ML model that will recognize specific waveforms or waveforms in general to use with Vision.
In any case, it is not a trivial task.
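For the two sub-questions (isolating a black shape on white paper, then describing it for comparison), a minimal OpenCV sketch could look like this; it is written in Python for brevity with hypothetical file names, assumes OpenCV 4.x, and uses Hu moments as one common compact shape descriptor, by no means the only choice:

```python
import cv2

img = cv2.imread("scan.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input

# Binarize: black shape on white paper, inverted so the shape is foreground
_, binary = cv2.threshold(img, 0, 255,
                          cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Keep the largest contour, assumed to be the scanned shape
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
shape = max(contours, key=cv2.contourArea)

# 1) Crop the shape from its background and save it
x, y, w, h = cv2.boundingRect(shape)
cv2.imwrite("shape.png", binary[y:y + h, x:x + w])

# 2) Hu moments: a compact, rotation/scale-invariant descriptor that can
# be sent to the server (cv2.matchShapes compares two contours this way)
descriptor = cv2.HuMoments(cv2.moments(shape)).flatten()
print(descriptor)
```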

Is it possible to create a 3D photo from a normal photo?

If I have understood correctly, 3D 360 photos are created from a panorama photo, so I guess it should be possible to create a 3D photo (non-360) from a normal photo. But how? I did not find anything on Google! Any idea what I should search for?
So far, if nothing is available (and I suspect nothing is), I'll try duplicating the same photo for each eye, with one copy shifted a little to the right and the other shifted a little to the left. But I suspect the real distortion algorithm is much more complicated.
Note: I'm also receiving answers here: https://plus.google.com/u/0/115463690952639951338/posts/4KdqFcqUTT9
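To make the question's duplicate-and-shift idea concrete, here is a minimal Python/OpenCV sketch (file name and shift value are hypothetical); as the question suspects, a uniform shift yields no real parallax, so this only vaguely resembles true stereo:

```python
import cv2
import numpy as np

img = cv2.imread("photo.jpg")  # hypothetical input
h, w = img.shape[:2]
shift = 8  # pixels; a real 3D photo needs per-pixel parallax, not one offset

# Translate one copy left and one copy right
left_eye = cv2.warpAffine(img, np.float32([[1, 0, -shift], [0, 1, 0]]), (w, h))
right_eye = cv2.warpAffine(img, np.float32([[1, 0, shift], [0, 1, 0]]), (w, h))

# Side-by-side stereo pair for a simple viewer
cv2.imwrite("sbs.png", np.hstack([left_eye, right_eye]))
```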
I am in no way certain of this, but my intuition on how 3D 360 images are created in GoogleVR is this:
As you take a panorama image, it actually takes a series of images. As you turn the phone around, the perspective changes slightly with each image, not only by angle but also by offset (except in the unlikely event that you spin the phone around its own axis). When it stitches together the final image, it creates one image for each eye, picking suitable images from the series so that they create a 3D effect when viewed together. The same "area" of the image for each eye comes from a different source image.
You can't do anything similar with a single image. It's the multitude of images produced, each with a different perspective coming from the turning of the phone, that enables the algorithm to create a 3D image.
2D lacks a dimension and hence cannot be converted to 3D just like that, but there are clever tricks. For example, the Google Pixel, even though it doesn't have two cameras, can make an image seem 3D by applying a machine-learning algorithm that creates the effect of perspective and depth through selective blurring.
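As an illustration of that selective-blurring idea, here is a small Python/OpenCV sketch. The file names are hypothetical, and the depth map is simply assumed to exist as a same-size grayscale image; in the Pixel's case, producing that depth map is the ML model's job:

```python
import cv2
import numpy as np

img = cv2.imread("photo.jpg")
depth = cv2.imread("depth.png", cv2.IMREAD_GRAYSCALE)  # 0 = near, 255 = far

blurred = cv2.GaussianBlur(img, (21, 21), 0)

# Blend sharp and blurred pixels by depth: farther pixels get more blur
alpha = (depth.astype(np.float32) / 255.0)[..., None]
fake_dof = (img * (1 - alpha) + blurred * alpha).astype(np.uint8)
cv2.imwrite("portrait.png", fake_dof)
```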
3D photos can't be taken with a normal camera, but you can take 360 photos with one. There are many apps that can do this, and there are also algorithms for doing it programmatically.

Algorithms for ortho-rectification and mosaicking aerial images [closed]

I'm working on ways of collecting farm aerial images (images collected from a helicopter with the camera pointing perpendicular to the ground) that I want to stitch together to build a photo of the whole area being covered, and then run analytics on it.
I'm assuming the images will come with [latitude, longitude] coordinates to help me determine where to place them.
To understand the issues with this technology, I tried manually stitching pictures taken with my phone of a sample area in my back yard. I found that the edges don't usually look the same, because the camera sees them from different sides or angles. I guess this is an image distortion that could potentially be fixed by ortho-rectification (not completely sure).
I quickly created the following picture to help explain my problem.
My question to you:
What are the algorithms/techniques used to do ortho-rectification?
What tools would best suit my needs: OpenCV, Processing, MATLAB, or any other tool that could easily help with rectifying images and creating a mosaic photo?
What other issues should be considered in doing aerial-imagery mosaicking and analytics?
Thank you!
Image stitching usually assumes that the camera center is fixed across all photos, and uses homographies to transform the images so that they seem continuous. When the fixed camera center assumption is not strictly valid, artifacts/distortions may appear due to the 3D of the scene. If the camera center moved by a small distance compared to the relief of the scene, "seamless image blending" techniques may be sufficient to blur out the distortions.
In more extreme cases, ortho-rectification is required. Ortho-rectification (Wikipedia entry) is the task of transforming an image observed from a given perspective camera into an orthographic (Wikipedia entry) and usually vertical point of view. The orthographic property is interesting because it makes the stitching of several images much easier. The following picture from Wikipedia is particularly clear (left is an orthographic or directional projection, right is a perspective or central projection):
The task of ortho-rectification usually requires having a 3D model of the scene, in order to map the intensities observed by the perspective camera appropriately to their location with respect to the orthographic camera. In the context of aerial/satellite images, Digital Elevation Models (DEMs) are often used for that purpose, but they generally have the serious drawback of not including man-made structures (only Earth relief). NASA freely provides the DEMs acquired by the SRTM missions.
Another approach: if you have two images acquired from different positions, you could try to do a 3D reconstruction using a stereo-matching technique, and then generate the ortho-rectified image by mapping the two images as seen by a third, orthographic and vertical, camera.
OpenCV has several interesting functions for that purpose (e.g. stereo reconstruction, image mapping functions, etc.) and might be more appropriate for intensive usage. MATLAB probably has interesting functions as well and might be more appropriate for quick tests.
First, rectification is a kind of warping, but not the kind you need. Regular rectification is used in stereo to ensure that matching points lie on the same row, which is not your case. Ortho-rectification warps a perspective projection into an orthographic one: again, not your case. Not only do you lack the 3D model needed to compute such a warp, you also don't need it, since your perspective distortions are negligible and your images are already pretty close to orthographic (when the size of the objects is small compared to the viewing distance, perspective effects are small).
Your problems in aligning two images stem from small camera rotations between shots. To start fixing the problem, ensure that your images actually overlap by, say, 30%. To read about this, see chapter 9 of this book.
What you need is to review regular image-stitching techniques that use a homography to map one image onto another; a sketch follows below. Note that doing so assumes the images are essentially flat. To find the homography, you can manually select 4 points in one image and 4 matching points in the other, then run the OpenCV function findHomography(). Note that overlap is required to find the matches (in your picture there is no overlap). warpPerspective() can warp the images for you once the homography is found.
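A minimal sketch of that homography workflow in Python, with hypothetical point coordinates standing in for the manually picked matches:

```python
import cv2
import numpy as np

img1 = cv2.imread("aerial1.jpg")
img2 = cv2.imread("aerial2.jpg")

# Four manually picked points in image 1 and their matches in image 2
# (hypothetical coordinates; in practice, click them in an image viewer)
pts1 = np.float32([[120, 80], [900, 95], [880, 600], [140, 620]])
pts2 = np.float32([[60, 100], [850, 70], [860, 580], [90, 640]])

# Homography mapping image 2 into image 1's frame; with more than
# 4 correspondences, pass cv2.RANSAC to reject bad matches
H, _ = cv2.findHomography(pts2, pts1)

# Warp image 2 onto a canvas wide enough for both, then paste image 1
h, w = img1.shape[:2]
mosaic = cv2.warpPerspective(img2, H, (w * 2, h))
mosaic[0:h, 0:w] = img1
cv2.imwrite("mosaic.png", mosaic)
```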
"What are the algorithms/techniques used to do ortho-rectification?
If you want a good overview of the techniques, the book "Multiple View Geometry in Computer Vision" by Hartley & Zisserman might be a good place to start: http://www.robots.ox.ac.uk:5000/~vgg/hzbook/
Andrew Zisserman also has some tutorials available at www.robots.ox.ac.uk/~az/tutorials/, which might be more accessible and make it easier to find the particular technique you want to use.
"What tools would best suit my needs: opencv, or processing or matlab or any other tool that could easily help in rectification of images and creating a mosaic photo?"
OpenCV has a fair number of tools available; take a look at its Images stitching module for starters (a minimal example is sketched below). There's also a lot available for correcting distortion. However, it doesn't have to be the tool you use; there are others!
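Assuming OpenCV's high-level stitching API (the Stitcher class, available since OpenCV 3), a quick mosaic attempt might look like this in Python; SCANS mode is meant for roughly flat scenes such as scans and nadir aerial shots, and the file names are placeholders:

```python
import cv2

# Placeholder file names for the overlapping aerial shots
images = [cv2.imread(name) for name in ("a.jpg", "b.jpg", "c.jpg")]

# SCANS mode assumes an affine model, suited to flat, top-down scenes
stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)
status, mosaic = stitcher.stitch(images)

if status == cv2.Stitcher_OK:
    cv2.imwrite("mosaic.png", mosaic)
else:
    print("stitching failed with status", status)
```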

Shape Detection (circle, square, rectangle, triangle, ellipse) for a camera-captured image + iOS 5 + OpenCV

I am new to OpenCV and need to know which OpenCV methods can detect different shapes (circle, square, rectangle, triangle, ellipse) in a camera-captured image on the iPhone.
Could someone point me in the right direction (references/articles/anything) as to which techniques are best for getting this done?
Thanks..
iOmi
First you will probably need to look at an edge detector such as Canny to extract the shapes into a binary image (although this may be expensive on the iPhone).
For circles, have a look at HoughCircles.
For squares and rectangles, look at the findContours method and the sample code squares.cpp in the samples directory of the OpenCV distribution.
With a quick Google search I was able to find an article about detecting shapes in C#, which roughly corresponds to the methods you would use in another language with the OpenCV library.
I have not used OpenCV on iOS, but I hope this helps you get started.
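As a rough sketch of the pipeline described above (written in Python for brevity, assuming OpenCV 4.x; the same calls exist in the C++ API used on iOS, and all thresholds here are placeholder values to tune):

```python
import cv2

img = cv2.imread("frame.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (5, 5), 0)

# Circles via the Hough transform (all parameters are tuning knobs)
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=40,
                           param1=100, param2=50, minRadius=10, maxRadius=200)

# Polygons via contours: count the vertices of a simplified outline
edges = cv2.Canny(gray, 50, 150)
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
    if len(approx) == 3:
        print("triangle")
    elif len(approx) == 4:
        x, y, w, h = cv2.boundingRect(approx)
        print("square" if 0.95 < w / h < 1.05 else "rectangle")
    elif len(approx) > 6 and len(c) >= 5:
        # Many vertices: likely a circle or ellipse; compare fitted axes
        (cx, cy), (axis1, axis2), angle = cv2.fitEllipse(c)
        print("circle" if abs(axis1 - axis2) < 0.1 * max(axis1, axis2)
              else "ellipse")
```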

Match whether a predefined pattern is present in an image captured from the camera, and if so find its coordinates [duplicate]

Possible Duplicate:
Recognize Black patterns appearing on the four corners of the image ios using opencv or some other technique
I want to create an app like the ShotNote app (basically its camera feature).
It is a page-scanning app used to scan a special type of paper.
http://itunes.apple.com/us/app/shot-note/id411332997?mt=8
Refer to the first two screenshots.
You can see the functionality I am looking for in the following video:
http://www.youtube.com/watch?v=8F_1iu4pDkQ
In my app
The paper background will be white and the pattern will be black.
What I want is that when I take an image with the camera, it should be matched against the pattern (image or images), and if the patterns are present in the captured image, it should be cropped so that the patterns end up at the corners of the image.
Basically I need to detect that the patterns are present, and then I want their coordinates so that I can crop the image accordingly.
You can assume the patterns are rectangles or L shapes.
I have been searching for this for the last two weeks and am still struggling with it.
Please provide some sample code or suggestions that I can follow.
Any suggestions would be highly appreciated.
Thanks in advance.
If you know that the pattern is always undistorted (i.e. there is no or very little perspective in the original image), template matching, e.g. with a normalised cross-correlation, will do the trick just fine.
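For illustration, a minimal normalised cross-correlation sketch with OpenCV in Python (the file names and the 0.8 threshold are hypothetical). To find all four corner marks rather than just the best one, threshold the whole response map (e.g. with np.where(res > 0.8)) instead of taking only the maximum:

```python
import cv2

img = cv2.imread("captured.jpg", cv2.IMREAD_GRAYSCALE)
pattern = cv2.imread("corner_mark.png", cv2.IMREAD_GRAYSCALE)

# Normalised cross-correlation: responses near 1.0 are strong matches
res = cv2.matchTemplate(img, pattern, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(res)

if max_val > 0.8:  # hypothetical threshold; tune on real captures
    h, w = pattern.shape
    print("pattern found at", max_loc, "size", (w, h))
```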