Implementing a Simple Adaptive Threshold in GPUImage - iPhone

I'm developing an iPhone app to recognize some well-known symbols in pictures.
I'm basically following these tutorials, http://aishack.in/tutorials/sudoku-grabber-with-opencv-detection/ and http://sudokugrab.blogspot.it/2009/07/how-does-it-all-work.html, using OpenCV for template matching and GPUImage for image processing.
When all images have the same luminance level, I can adjust the threshold of GPUImageLuminanceThresholdFilter and everything works smoothly, but, of course, I can't be sure of the luminance.
So I need a simple adaptive threshold filter, like the one in those tutorials, which calculates the luminance in the area surrounding each pixel.
The GPUImageAdaptiveThresholdFilter doesn't fit my needs, because it detects and sharpens the edges, while I need to enhance the symbols.
How can I implement that kind of filter?

When asked, the awesome Brad Larson added a blur size property to the box blur and modified the adaptive threshold filter, so it now works as expected!
Thanks @BradLarson!
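
For anyone who needs the same thing outside GPUImage: the box-blur-and-compare idea those tutorials describe can be sketched with OpenCV's Python bindings. This is a rough sketch, not GPUImage code; the file name, block size, and offset are illustrative values.

    import cv2

    # Threshold each pixel against the mean luminance of its surrounding
    # box (the "box blur" of the neighborhood), as the tutorials describe.
    img = cv2.imread("symbols.jpg", cv2.IMREAD_GRAYSCALE)

    binary = cv2.adaptiveThreshold(
        img,
        255,                          # value assigned to "bright" pixels
        cv2.ADAPTIVE_THRESH_MEAN_C,   # local mean == box blur
        cv2.THRESH_BINARY,            # use THRESH_BINARY_INV for dark symbols
        15,                           # side of the averaging neighborhood (odd)
        5)                            # offset subtracted from the local mean
    cv2.imwrite("binary.png", binary)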

Related

Is it possible to reproduce the "Texture" effect of Adobe Lightroom in iOS?

I'm trying to implement the Texture effect in iOS but can't figure out how to do it. Can anyone share some ideas, resources, or steps for that? See the attached image for clarification.
I know the workings of Adobe Lightroom's "Texture" effect. According to Max Wendt, a Senior Computer Scientist on ACR and the lead engineer of the Texture project:
Just like you can break an image into color channels (for example, red, green, and blue), an image can also be broken up into different "frequencies." There are high-frequency details, mid-frequency features, and low-frequency areas; together, they all make up the image. If we apply "Texture", the medium-frequency features of an image are enhanced without affecting the other frequencies. (link)
Actually, I'm exploring CIFilters and chaining them together to achieve custom filters. Unfortunately, I'm stuck here for Texture.
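
Based on that description, one way to experiment outside of CIFilter is a frequency-separation sketch: split the image into bands with two Gaussian blurs and boost only the band in between. A rough Python/OpenCV sketch; the sigmas and strength are illustrative guesses, not Lightroom's values.

    import cv2
    import numpy as np

    # Boost the band between a small and a large Gaussian blur
    # (a difference-of-Gaussians take on the "mid frequencies").
    img = cv2.imread("photo.jpg").astype(np.float32) / 255.0

    small_blur = cv2.GaussianBlur(img, (0, 0), 2.0)   # keeps low + mid
    large_blur = cv2.GaussianBlur(img, (0, 0), 10.0)  # keeps only low
    mid_band = small_blur - large_blur                # mid frequencies

    strength = 0.8  # positive boosts texture, negative smooths it
    result = np.clip(img + strength * mid_band, 0.0, 1.0)
    cv2.imwrite("textured.jpg", (result * 255).astype(np.uint8))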

Human Detection using edge detection

I am trying to detect the exact silhouette of a human body in this dataset using background subtraction. After doing some thresholding I was getting split blobs, so I looked at this tutorial by Steve, but now I am getting blobs other than the human body, as shown below.
So here is the original
The background was taken to be the first frame of the video; after subtracting it from the original image I get the following image
Then I did basic thresholding and got the following image, in which the blob is split into separate areas
and using Steve's method I get this
But this contains a lot of area that is not part of the human body. Any suggestions on how, perhaps using edges, I can get a good blob of the human body?
EDIT
As @lennon310 asked me to upload a color image, here it is
and as @NKN asked me to upload the edge information of the same image, here it is
Instead of literally subtracting the background, try using the vision.ForegroundDetector object, which is part of the Computer Vision System Toolbox. It implements mixture-of-Gaussians adaptive background modeling, and it may give you a cleaner segmentation.
Having said that, it is very unlikely that you will get the "exact" silhouette. Some error is inevitable.
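Note that vision.ForegroundDetector is a MATLAB object. If you end up in OpenCV instead, its Python bindings expose a comparable mixture-of-Gaussians subtractor; a rough sketch (file name and parameters are illustrative):

    import cv2

    cap = cv2.VideoCapture("walking.avi")
    subtractor = cv2.createBackgroundSubtractorMOG2(history=200,
                                                    detectShadows=True)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)      # 255 = foreground, 127 = shadow
        mask = cv2.medianBlur(mask, 5)      # knock out speckle noise
        _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)  # drop shadows
        cv2.imshow("foreground", mask)
        if cv2.waitKey(30) == 27:           # Esc to quit
            break
    cap.release()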
In your result image, you have two types of black regions: one is moving and the other is stationary.
So when you want to fill the human body, you have to choose only the moving regions. For this purpose, I suggest segmenting your image with an optical flow technique, to find out where the moving regions are.
This is an interesting tutorial doing what you need to do:
http://docs.opencv.org/trunk/doc/py_tutorials/py_video/py_lucas_kanade/py_lucas_kanade.html
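In the same spirit as that tutorial (which uses Lucas-Kanade), here is a rough dense-flow sketch in Python/OpenCV using the Farneback method; the file name and the motion threshold are illustrative:

    import cv2
    import numpy as np

    cap = cv2.VideoCapture("walking.avi")
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        magnitude = np.linalg.norm(flow, axis=2)
        moving = (magnitude > 1.0).astype(np.uint8) * 255  # moving-pixel mask
        cv2.imshow("moving regions", moving)
        prev_gray = gray
        if cv2.waitKey(30) == 27:
            break
    cap.release()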

Photo Edge Detection using a mask on iPhone

I'm looking for code to detect edges in a photo based on contrast.
Basically, the user will roughly paint a mask with their finger on an iPhone, iPod, or iPad. Then the code would detect the edges and adjust the mask to them.
Thanks for your help!
http://www.image-y.com/before.jpg
http://www.image-y.com/after.jpg
I recommend taking a look at OpenCV, which can also be compiled for iOS (take a look at https://github.com/aptogo/OpenCVForiPhone). A nice addition (with explanations) is provided by this article: http://b2cloud.com.au/tutorial/uiimage-pre-processing-category.
Once you've gained a basic understanding of what you can do with OpenCV, I'd personally try some kind of thresholding and contour detection (take a look at cv::findContours). Afterwards you could filter the found contours using the input given by your user.
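A rough sketch of that threshold/contour pipeline in Python/OpenCV (the C++ calls are analogous); the file names, Canny thresholds, and overlap test are illustrative, and the user's painted mask is assumed to be a grayscale image:

    import cv2
    import numpy as np

    img = cv2.imread("photo.jpg", cv2.IMREAD_GRAYSCALE)
    user_mask = cv2.imread("user_mask.png", cv2.IMREAD_GRAYSCALE)

    edges = cv2.Canny(img, 50, 150)  # contrast-based edges
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST,
                                   cv2.CHAIN_APPROX_SIMPLE)

    # Keep only contours that overlap the user's rough mask.
    refined = np.zeros_like(img)
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)
        if cv2.countNonZero(user_mask[y:y + h, x:x + w]) > 0:
            cv2.drawContours(refined, [contour], -1, 255, -1)  # filled
    cv2.imwrite("refined_mask.png", refined)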

How to improve edge detection on IPhone apps?

I'm currently developing an iPhone app that uses edge detection. I took some sample pictures and noticed that they came out pretty dark indoors. Flash is obviously an option, but it usually blinds the camera and misses some edges.
Update: I'm more interested in iPhone tips, if there is a way to get better pictures.
Have you tried playing with contrast and/or brightness? If you increase contrast before doing the edge detection, you should get better results (although it depends on the edge detection algorithm you're using and whether it auto-magically fixes contrast first).
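A minimal sketch of that idea in Python/OpenCV, run before the edge detector (the alpha/beta and Canny thresholds are illustrative values):

    import cv2

    img = cv2.imread("indoor.jpg", cv2.IMREAD_GRAYSCALE)
    boosted = cv2.convertScaleAbs(img, alpha=1.5, beta=20)  # contrast, brightness
    edges = cv2.Canny(boosted, 50, 150)
    cv2.imwrite("edges.png", edges)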
Histogram equalisation may prove useful here, as it should allow you to maintain approximately equal contrast levels between pictures. I'm sure there's an algorithm implemented in OpenCV to handle it (although I've never used it on iOS, so I can't be sure).
UPDATE: I found this page on performing Histogram Equalization in OpenCV
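For reference, in OpenCV's Python bindings it is a one-liner (the file name is illustrative):

    import cv2

    img = cv2.imread("indoor.jpg", cv2.IMREAD_GRAYSCALE)
    equalised = cv2.equalizeHist(img)  # spread the luminance histogram
    cv2.imwrite("equalised.png", equalised)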

iPhone, image processing

I am building a night vision application, but I can't find any useful algorithm that I can apply to dark images to make them clearer. Can anyone please suggest a good algorithm?
Thanks in advance
With the size of the iPhone lens and sensor, you are going to have a lot of noise no matter what you do. I would practice manipulating the image in Photoshop first; you'll probably find that it is useful to select a white point from a sample of the brighter pixels in the image and to apply a curve. You'll probably also need to run an anti-noise filter and smoother. Edge detection or condensation may allow you to emphasize some areas of the image. As for specific algorithms to perform each of these filters, there are a lot of computer science books and lists on the subject. Here is one list:
http://www.efg2.com/Lab/Library/ImageProcessing/Algorithms.htm
Many OpenGL implementations can be found once you know the standard name for the algorithm you need.
Real (useful) night vision typically uses an infrared light and an infrared-tuned camera. I think you're out of luck.
Of course using the iPhone 4's camera light could be considered "night vision" ...
Your real problem is the camera, not the algorithm.
You can apply algorithms to clarify images, but they won't turn a dark shot into a well-lit one by magic ^^
But if you want to try some algorithms, you should take a look at OpenCV (http://opencv.willowgarage.com/wiki/); there are some ports, like the one here: http://ildan.blogspot.com/2008/07/creating-universal-static-opencv.html
I suppose there are two ways to refine a dark image: the first is active, which uses infrared, and the other is passive, which manipulates the pixels of the image...
The images will be noisy, but you can always try scaling up the pixel values (all of the components in RGB, or just the luminance in HSV; either linearly or applying some sort of curve; either globally or locally in just the darker areas) and saturating them, and/or using a contrast edge enhancement filter algorithm.
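For instance, a luminance curve in HSV can be sketched like this in Python/OpenCV (a sketch only; the gamma and saturation gain are illustrative values):

    import cv2
    import numpy as np

    img = cv2.imread("dark.jpg")
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV).astype(np.float32)

    gamma = 0.5  # < 1 lifts the darker areas more than the bright ones
    hsv[..., 2] = 255.0 * (hsv[..., 2] / 255.0) ** gamma
    hsv[..., 1] = np.clip(hsv[..., 1] * 1.2, 0, 255)  # mild saturation boost

    result = cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)
    cv2.imwrite("brightened.jpg", result)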
If the camera and subject matter are sufficiently motionless (tripod, etc.) you could try summing each pixel over several image captures. Or you could do what some HDR apps do, and try aligning images before pixel processing across time.
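The summing idea is a short sketch on its own (file names are illustrative; the frames are assumed already aligned, e.g. shot on a tripod):

    import cv2
    import numpy as np

    # Average several captures of a static scene to beat down the noise.
    frames = [cv2.imread("capture_%d.jpg" % i).astype(np.float32)
              for i in range(8)]
    averaged = np.mean(frames, axis=0).astype(np.uint8)
    cv2.imwrite("averaged.jpg", averaged)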
I haven't seen any documentation on whether the iPhone's camera sensor has a wider wavelength gamut than the human eye.
I suggest conducting a simple test before trying to actually implement this:
1. Save a photo taken in a dark room.
2. Open it in GIMP (or a similar application).
3. Apply the "Stretch HSV" algorithm (or equivalent).
4. Check if the resulting image quality is good enough.
This should give you an idea as to whether your camera is good enough to try it.
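If the GIMP result looks promising, the same stretch is easy to reproduce in code. A rough Python/OpenCV equivalent of "Stretch HSV" (file name illustrative): stretch the S and V channels to the full range.

    import cv2
    import numpy as np

    img = cv2.imread("dark_room.jpg")
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV).astype(np.float32)

    for channel in (1, 2):  # S and V
        lo, hi = hsv[..., channel].min(), hsv[..., channel].max()
        if hi > lo:
            hsv[..., channel] = (hsv[..., channel] - lo) / (hi - lo) * 255.0

    stretched = cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)
    cv2.imwrite("stretched.jpg", stretched)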