iPhone image processing

I am building a night-vision application, but I can't find any useful algorithm I can apply to dark images to make them clearer. Can anyone please suggest a good algorithm?
Thanks in advance

With the size of the iPhone lens and sensor, you are going to have a lot of noise no matter what you do. I would practice manipulating the image in Photoshop first; you'll probably find that it is useful to select a white point from a sample of the brighter pixels in the image and to apply a curve. You'll probably also need to run a denoising filter and a smoother. Edge detection or dilation may allow you to emphasize ("bold") some areas of the image. As for specific algorithms to perform each of these filters, there are a lot of computer-science books and lists on the subject. Here is one list:
http://www.efg2.com/Lab/Library/ImageProcessing/Algorithms.htm
Many OpenGL implementations can be found once you know the standard name of an algorithm you need.
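If you want to prototype that pipeline in code rather than Photoshop, here is a minimal Python/OpenCV sketch; the percentile used for the white point, the gamma value, and the denoising strengths are illustrative assumptions, not tuned values:

import cv2
import numpy as np

def brighten_night_shot(path):
    # Work in floating point so scaling doesn't clip prematurely.
    img = cv2.imread(path).astype(np.float32) / 255.0

    # White point from a sample of the brighter pixels: here the
    # 99th-percentile luminance (channel mean as a cheap luminance proxy).
    luma = img.mean(axis=2)
    white = max(np.percentile(luma, 99), 1e-6)
    img = np.clip(img / white, 0.0, 1.0)

    # A simple curve: gamma < 1 lifts the shadows.
    img = img ** 0.6

    # Anti-noise filter; the strength parameters need per-device tuning.
    out = (img * 255).astype(np.uint8)
    return cv2.fastNlMeansDenoisingColored(out, None, 10, 10, 7, 21)

Taking a high percentile rather than the absolute maximum keeps a few hot, noisy pixels from dominating the scaling.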

Real (useful) night vision typically uses an infrared light and an infrared-tuned camera. I think you're out of luck.
Of course, using the iPhone 4's camera light could be considered "night vision"...

Your real problem is the camera, not the algorithm.
You can apply algorithms to clarify images, but they won't turn a dark image into a well-lit one as if by magic. ^^
If you still want to try some algorithms, take a look at OpenCV (http://opencv.willowgarage.com/wiki/); there are iPhone ports, for example http://ildan.blogspot.com/2008/07/creating-universal-static-opencv.html

I suppose there are two ways to refine a dark image: the first is active, which uses infrared illumination, and the other is passive, which manipulates the pixels of the image.

The images will be noisy, but you can always try scaling up the pixel values (all of the components in RGB, or just the luminance in HSV; either linearly or applying some sort of curve; either globally or locally in just the darker areas) and letting them saturate, and/or using a contrast or edge-enhancement filter.
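As a concrete sketch of that first suggestion, in Python with OpenCV (the gamma and gain values are arbitrary assumptions, and the filename is hypothetical):

import cv2
import numpy as np

img = cv2.imread('dark.jpg')  # hypothetical input
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV).astype(np.float32)
h, s, v = cv2.split(hsv)

# Scale up only the luminance (V): a gamma curve plus a linear gain,
# clipped so the brightest pixels are allowed to saturate.
v = np.clip((v / 255.0) ** 0.5 * 255.0 * 1.5, 0, 255)

hsv = cv2.merge([h, s, v]).astype(np.uint8)
bright = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)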
If the camera and subject matter are sufficiently motionless (tripod, etc.) you could try summing each pixel over several image captures. Or you could do what some HDR apps do, and try aligning images before pixel processing across time.
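A minimal sketch of the summing idea, assuming a perfectly still camera and hypothetical filenames (handheld captures would first need aligning, e.g. with OpenCV's findTransformECC):

import cv2
import numpy as np

# Averaging N independent frames reduces noise variance roughly by 1/N.
frames = [cv2.imread(f'shot_{i}.jpg').astype(np.float32) for i in range(8)]
stacked = np.mean(frames, axis=0)
cv2.imwrite('stacked.png', stacked.astype(np.uint8))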
I haven't seen any documentation on whether the iPhone's camera sensor has a wider wavelength gamut than the human eye.

I suggest conducting a simple test before trying to actually implement this:
1. Save a photo taken in a dark room.
2. Open it in GIMP (or a similar application).
3. Apply the "Stretch HSV" algorithm (or an equivalent).
4. Check whether the resulting image quality is good enough.
This should give you an idea of whether your camera is good enough to try it.
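If the GIMP result looks promising, a rough code equivalent of such an auto-stretch (an approximation of "Stretch HSV", not GIMP's exact algorithm) is to expand the HSV value channel to the full range, sketched here in Python/OpenCV:

import cv2

def stretch_value_channel(path):
    img = cv2.imread(path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    # Linearly stretch V so its minimum/maximum span the full 0-255 range.
    v = cv2.normalize(v, None, 0, 255, cv2.NORM_MINMAX)
    return cv2.cvtColor(cv2.merge([h, s, v]), cv2.COLOR_HSV2BGR)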

Related

Is it possible to reproduce "Texture" Effect of Adobe Lightroom in iOS?

I'm trying to implement the Texture effect in iOS but can't figure out how to do it. Can anyone share some ideas, resources, or steps? See the attached image for clarification.
I know the workings of Adobe Lightroom's "Texture" effect. According to Max Wendt, a Senior Computer Scientist on ACR and the lead engineer of the Texture project:
"Just like you can break an image into color channels (for example, red, green, and blue), an image can also be broken up into different "frequencies." There are high-frequency details, mid-frequency features, and low-frequency areas; together, they all make up the image. If we apply "Texture", the mid-frequency features of an image are enhanced without affecting the other frequencies." (link)
Currently I'm exploring CIFilters and chaining them together to achieve custom filters. Unfortunately, I'm stuck on Texture.
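The quoted description amounts to classic frequency separation: isolate a mid-frequency band with two blurs of different radii and boost only that band. Here is a hedged Python/OpenCV sketch of that structure (the sigmas and the amount are guesses, not Lightroom's values); on iOS the same chain could be built from CIGaussianBlur filters and arithmetic blends:

import cv2
import numpy as np

def texture_effect(img_bgr, amount=0.5):
    img = img_bgr.astype(np.float32)
    fine = cv2.GaussianBlur(img, (0, 0), 2)     # drops the high frequencies
    coarse = cv2.GaussianBlur(img, (0, 0), 10)  # keeps only low frequencies
    mid = fine - coarse                         # the mid-frequency band
    out = img + amount * mid                    # boost mids, leave the rest
    return np.clip(out, 0, 255).astype(np.uint8)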

Algorithm for ortho-rectification; mosaic-ing aerial images [closed]

I'm working on ways of collecting aerial images of farms (images taken from a helicopter, pointing straight down) that I want to stitch together to build a photo of the whole area being covered, and then run analytics on it.
I'm assuming the images will come with [latitude, longitude] coordinates to help me determine where to place them.
To understand the issues with this technology, I tried manually stitching pictures, taken from my phone, of a sample area in my back yard. I found that the edges usually don't look the same, because the camera sees them from different sides or angles. I guess this is an image distortion that could potentially be fixed by ortho-rectification (not completely sure).
I quickly created the following picture to help explain my problem.
My questions to you:
1. What are the algorithms/techniques used to do ortho-rectification?
2. What tools would best suit my needs: OpenCV, Processing, MATLAB, or any other tool that could easily help in rectifying images and creating a mosaic photo?
3. What other issues should be considered in doing aerial-imagery mosaicking and analytics?
Thank you!
Image stitching usually assumes that the camera center is fixed across all photos, and uses homographies to transform the images so that they seem continuous. When the fixed-camera-center assumption is not strictly valid, artifacts/distortions may appear due to the 3D structure of the scene. If the camera center moved only a small distance compared to the relief of the scene, "seamless image blending" techniques may be sufficient to blur out the distortions.
In more extreme cases, ortho-rectification is required. Ortho-rectification (Wikipedia entry) is the task of transforming an image observed from a given perspective camera into an orthographic (Wikipedia entry) and usually vertical point of view. The orthographic property is interesting because it makes the stitching of several images much easier. The following picture from Wikipedia is particularly clear (left is an orthographic or directional projection, right is a perspective or central projection):
The task of ortho-rectification usually requires having a 3D model of the scene, in order to map the intensities observed by the perspective camera to their proper locations with respect to the orthographic camera. In the context of aerial/satellite images, Digital Elevation Models (DEMs) are often used for that purpose, but they generally have the serious drawback of not including man-made structures (only the Earth's relief). NASA freely provides the DEMs acquired by the SRTM missions (DEM link).
Another approach: if you have two images acquired from different positions, you could try to do a 3D reconstruction using a stereo-matching technique, and then generate the ortho-rectified image by mapping the two images as seen by a third, orthographic and vertical, camera.
OpenCV has several interesting functions for that purpose (e.g. stereo reconstruction, image mapping functions, etc.) and might be more appropriate for intensive usage. MATLAB probably has interesting functions as well, and might be more appropriate for quick tests.
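As a pointer to that stereo route in OpenCV, here is a minimal Python sketch of the matching step; it assumes the pair has already been rectified (e.g. with stereoRectify) and the filenames are hypothetical:

import cv2

left = cv2.imread('left.png', cv2.IMREAD_GRAYSCALE)
right = cv2.imread('right.png', cv2.IMREAD_GRAYSCALE)

# Block-matching stereo: larger disparity means the point is closer.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right)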
First, rectification is a kind of warping, but not the one you need. Regular rectification is used in stereo to ensure that matching points lie on the same row - not your case. Ortho-rectification warps a perspective projection into an orthographic one - again, not your case. Not only do you lack the 3D model required to calculate this warping, you also don't need it, since your perspective distortions are negligible and your images are already pretty close to ortho (that is, when the size of the objects is small compared to the viewing distance, perspective effects are small).
Your problems in aligning two images stem from small camera rotations between shots. To start fixing the problem, you need to ensure that your images actually overlap by, say, 30%. To read about this, see chapter 9 of this book.
What you need is to review regular image-stitching techniques that use a homography to map two images. Note that doing so assumes the images are essentially flat. To find the homography, you can manually select 4 points in one image and 4 matching points in the other image, and run the OpenCV function findHomography(). Note that overlap is required to find the matches (in your picture there is no overlap). warpPerspective() can warp the images for you after the homography is found.
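A minimal Python sketch of that workflow; the four point pairs are hypothetical pixel coordinates standing in for the ones you would select manually:

import cv2
import numpy as np

# Four manually matched points per image (hypothetical coordinates).
pts1 = np.float32([[10, 10], [500, 15], [495, 400], [12, 390]])
pts2 = np.float32([[40, 20], [530, 30], [520, 410], [45, 400]])

# Homography mapping image-1 coordinates into image-2 coordinates.
H, mask = cv2.findHomography(pts1, pts2, cv2.RANSAC)

img1 = cv2.imread('photo1.jpg')                     # hypothetical input
warped = cv2.warpPerspective(img1, H, (1200, 800))  # output canvas size

Passing cv2.RANSAC makes findHomography tolerant of a few badly placed matches.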
"What are the algorithms/techniques used to do ortho-rectification?
If you want a good overview of the techniques, then the book "Multiple View Geometry in Computer Vision" by Hartley & Zisserman might be a good place to start: http://www.robots.ox.ac.uk:5000/~vgg/hzbook/
Andrew Zisserman also has some tutorials available at www.robots.ox.ac.uk/~az/tutorials/, which might be more accessible and make it easier for you to find the particular technique you want to use.
"What tools would best suit my needs: opencv, or processing or matlab or any other tool that could easily help in rectification of images and creating a mosaic photo?"
OpenCV has a fair number of tools available - take a look at Images stitching for starters. There's also a lot available for correcting distortion. However, it doesn't have to be the tool you use; there are others!
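For a quick start with that module, OpenCV also wraps the whole pipeline (feature matching, homography estimation, warping, blending) in one high-level class; a sketch in Python with hypothetical filenames:

import cv2

images = [cv2.imread(p) for p in ['a.jpg', 'b.jpg', 'c.jpg']]
# SCANS mode is meant for flat or nadir-view scenes such as aerial mosaics.
stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)
status, pano = stitcher.stitch(images)
if status == cv2.Stitcher_OK:
    cv2.imwrite('mosaic.png', pano)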

How to improve edge detection in iPhone apps?

I'm currently developing an iPhone app that uses edge detection. I took some sample pictures and noticed that they came out pretty dark indoors. Flash is obviously an option, but it usually blinds the camera and misses some edges.
Update: I'm more interested in iPhone tips - whether there is a way to get better pictures.
Have you tried playing with contrast and/or brightness? If you increase contrast before doing the edge detection, you should get better results (although it depends on the edge detection algorithm you're using and whether it auto-magically fixes contrast first).
Histogram equalisation may prove useful here, as it should allow you to maintain approximately equal contrast levels between pictures. I'm sure there's an implementation of it in OpenCV (although I've never used it on iOS, so I can't be sure).
UPDATE: I found this page on performing Histogram Equalization in OpenCV
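For desktop prototyping, the calls look like the Python sketch below (the same functions exist in OpenCV's C++ API, which is what you would use on iOS); CLAHE is included because local equalization often helps edge detection more than the global version, and the Canny thresholds are illustrative:

import cv2

gray = cv2.cvtColor(cv2.imread('frame.jpg'), cv2.COLOR_BGR2GRAY)

# Global histogram equalization.
eq = cv2.equalizeHist(gray)

# CLAHE: adaptive (tile-based) equalization with a contrast limit,
# which avoids blowing out regions that are already bright.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
eq_local = clahe.apply(gray)

edges = cv2.Canny(eq_local, 50, 150)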

Blur Effect (Wet in Wet effect) in Paint Application Using OpenGL-ES

I am developing a paint application using OpenGL ES for iPhone, and I want to implement a Gaussian blur effect (wet-in-wet) for painting. Please have a look at the image describing my requirement for the blur effect:
I tried searching for an OpenGL function but did not find anything. Can anyone guide me in the right direction on this problem? Any kind of help or suggestion will be highly appreciated. Thanks!
You should be able to render the same brush stroke many times, a few pixels apart, to get the effect you want. If you jitter the renders with a Gaussian distribution, you will get a Gaussian blur.
This would be similar to jitter antialiasing with an accumulation buffer, but instead of using subpixel offsets you would use multi-pixel offsets as big as you want the blur effect. You'd probably want to render around 16 passes to make it look smooth. http://www.opengl.org/resources/code/samples/advanced/advanced97/notes/node63.html
This is also similar to (or really the same thing as) jittering to create motion blur. http://glprogramming.com/red/chapter10.html
You wouldn't even NEED to use a separate accumulation buffer here; just render each pass with an alpha that adds up to solid. One thing to remember: you want to always jitter across the same offsets so that successive frames look the same (i.e. if you use random offsets, every frame will have a slightly different blur effect).
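The accumulation logic, independent of the GL specifics, can be sketched in a few lines of Python/NumPy; the function, the stamp/canvas representation, and the parameter values are assumptions for illustration only:

import numpy as np

def jittered_blur_stamp(stamp, canvas, x, y, n_passes=16, sigma=4.0):
    # Fixed seed: the same offsets every frame, so the blur is stable
    # across frames (random offsets would make it shimmer).
    rng = np.random.default_rng(0)
    offsets = rng.normal(0.0, sigma, size=(n_passes, 2)).round().astype(int)
    h, w = stamp.shape[:2]
    for dx, dy in offsets:
        ys, xs = y + dy, x + dx
        # Skip offsets that would fall off the canvas (illustration only).
        if 0 <= ys and 0 <= xs and ys + h <= canvas.shape[0] and xs + w <= canvas.shape[1]:
            canvas[ys:ys + h, xs:xs + w] += stamp / n_passes  # alpha sums to solid
    return canvas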
I am assuming you would want to apply this to an image. I have no idea how this could be done in OpenGL ES, but you could try using this awesome image-processing library. It provides other image effects besides Gaussian blur...
Happy Blurring...

Infrared image processing in MATLAB

I would like to process infrared images in MATLAB - any kind of processing or techniques.
Are there any built-in functions in MATLAB?
And can anyone suggest any books or articles, as well as resources for sample far-infrared images?
Thanks!
You may want to have a look at the Image Processing Toolbox. There you will find plenty of built-in functionality for denoising and segmenting any kind of image.
For more detailed answers, I suggest that you let us know in more detail what kind of processing you want to do.
EDIT
Infrared images are normally grayscale images. Thus, it is very straightforward to false-color them by mapping the gray levels to colors (i.e. by applying a different colormap).
%# load a grayscale image
img = imread('coins.png');
%# display the image
figure
imshow(img,[]);
%# false-color
colormap('hot')
For more information about general techniques, you may want to Google 'infrared image processing' and start looking at the hits related to your specific application.
In general, processing of infrared images is not different from processing other grayscale images. What specific algorithms you apply depends very much on the image and the purpose of the processing.
LWIR imagery can be used for a large number of different applications. In general, each application domain has its own history, terminology and mathematical conventions.
As an example, we can use LWIR imagery for:
Detecting faulty components or components that are likely to fail.
Medical imaging for diagnosis of skin disorders.
Finding humans in Search & Rescue or border-control applications.
Detecting & classifying aircraft, missiles, vehicles etc... for various defense applications.
Geographical or Oceanographic research (using LWIR satellite imagery).
Each of these applications will rely upon very different techniques. The image processing toolbox may well be useful for some of these application areas, but, in general, you need to look at resources (software, textbooks, journals etc...) that are specific to the application domain or the specific sensor system that you will be using.
I don't think that processing infrared images is, in general, the same as processing visible-color images. As far as I know, when processing infrared images we have to use the raw data image, which contains temperature information, rather than a pseudo-color image, which contains only color intensities from 0 to 255.